Publishing to crates.io #125

Closed
g2p opened this issue Sep 6, 2022 · 14 comments
Labels
B-rfc Blocked: request for comments. Needs more discussion.

Comments

@g2p
Contributor

g2p commented Sep 6, 2022

Hello,
I'm using h3 and h3-quinn in a reverse proxy I'd like to publish someday.
I'd like to have the h3 crates uploaded to crates.io.
I'm not sure whether a proper release is waiting on #34 / #82 (found via #70), but I'd be happy even if the upload has a -pre in the version number. Someone else has uploaded forked versions (https://lib.rs/crates/httproxide-h3, https://lib.rs/crates/httproxide-h3-quinn) with possible changes (.cargo_vcs_info.json points to private commits), and it would be better to be able to use something tagged from this repository.
Tangentially related, license information is missing in h3-quinn/Cargo.toml.
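For illustration, a hypothetical h3-quinn/Cargo.toml excerpt covering both points above: a pre-release version and an explicit license field. The version number and license value here are placeholders; the real choices are up to the maintainers.

```toml
[package]
name = "h3-quinn"
# A -pre (or 0.0.x) version on crates.io would already unblock downstream users.
version = "0.0.1-pre"
# Cargo takes an SPDX expression here (or `license-file` pointing at a file);
# "MIT" is only a placeholder for whichever license the project actually uses.
license = "MIT"
edition = "2021"
```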

@passcod

passcod commented Sep 7, 2022

I'll note that reqwest also wants this (as some kind of prerelease) so they can get an experimental h3 setup going.

@g2p g2p mentioned this issue Sep 9, 2022
@seanmonstar
Member

@eagr @Ruben2424 @stammw @camshaft I'm curious to hear from any of you about how we should start publishing the crate.

Publishing to crates.io will help other early adopters try it out, such as unblocking reqwest experimental usage.

At the same time, it's my opinion that proper expectations for this crate should be set. The purpose of this crate is to strictly implement what is necessary for HTTP/3, without including all the batteries, so that others can mold it into their system (such as hyper). So the public API is certainly less stable, especially when we notice things like #145 questioning the traits.

That said, I do suspect that publishing some versions and then getting things like reqwest or tower-h3 using it will provide valuable feedback on how the API probably needs to change, and possibly quicker than usual. I also imagine that releasing a 0.1 is likely worthy of a nice blog post and celebration.

All of that to ask: should we just tag 0.1 nowish, announce it, and then likely quickly tag a 0.2? Or should we tag something even more unstable than 0.1, so that 0.1 can be a little more stable? It's likely not something to spend too much time debating, but at the same time, it's likely worth a small discussion instead of just shooting from the hip.

@Ruben2424
Contributor

I would hold off on 0.1 until h3 supports quinn 0.9. A tag more unstable than 0.1 until quinn 0.9 is supported sounds good to me.

@camshaft
Contributor

camshaft commented Dec 1, 2022

I'd be fine with publishing a 0.0.x as an early first step. Even that is much more convenient than a git dependency.
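For context, the difference for a downstream Cargo.toml would look roughly like this (the version number is hypothetical). Notably, a crate that depends on h3 through a plain git source cannot itself be published to crates.io, while even an early 0.0.x release removes that restriction.

```toml
[dependencies]
# Unpublished: downstream crates have to pin a git revision, and crates.io
# won't accept a publish that depends on a pure git source.
# h3 = { git = "https://github.com/hyperium/h3", rev = "..." }

# Published: even an early 0.0.x can be depended on (and published against) normally.
h3 = "0.0.1"
```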

@eagr
Member

eagr commented Dec 1, 2022

I agree with @Ruben2424 that 0.1 could wait until we have resolved the quic trait issues. Maybe just do a 0.0.1 quickly to unblock stuff?

@inflation
Contributor

We may need to use async traits to support quinn 0.9, which would greatly change the interface.
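To make the concern concrete, here is a hypothetical sketch (not the actual h3 quic traits; RecvStreamPoll and RecvStreamAsync are made-up names) of what moving a poll-based trait method to an async one looks like. At the time of this thread, async fn in traits wasn't stable, so this relies on the async-trait crate.

```rust
use std::task::{Context, Poll};

use async_trait::async_trait; // assumed dependency: async-trait = "0.1"

// Poll-based style, roughly how a manually driven QUIC abstraction is shaped.
pub trait RecvStreamPoll {
    type Error;
    fn poll_data(&mut self, cx: &mut Context<'_>) -> Poll<Result<Option<Vec<u8>>, Self::Error>>;
}

// Async style via #[async_trait]: the method now returns a boxed future, so
// every implementor and every caller changes shape, which is why this
// "would greatly change the interface".
#[async_trait]
pub trait RecvStreamAsync {
    type Error;
    async fn recv_data(&mut self) -> Result<Option<Vec<u8>>, Self::Error>;
}
```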

@seanmonstar seanmonstar added the B-rfc Blocked: request for comments. Needs more discussion. label Jan 4, 2023
@seanmonstar seanmonstar changed the title Please upload to crates.io Publishing to crates.io Jan 4, 2023
@Ruben2424
Contributor

I just saw that we accept the unidirectional QPACK encoder and decoder streams in the InnerConnection.poll_control() method. But as far as I can see, these receive streams are only stored and never polled to receive data.

And we also do not create these streams.

The qpack spec says:

HTTP/3 endpoints contain a QPACK encoder and decoder. Each endpoint MUST initiate, at most, one encoder stream and, at most, one decoder stream

But it also allows the option to avoid creating them?

An endpoint MAY avoid creating an encoder stream if it will not be used (for example, if its encoder does not wish to use the dynamic table or if the maximum size of the dynamic table permitted by the peer is zero).
An endpoint MAY avoid creating a decoder stream if its decoder sets the maximum capacity of the dynamic table to zero.

So I think the dynamic table encoding and decoding does not work right now, right?
So do we need to fix this before publishing?
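For illustration only, and not the actual h3 internals (the trait and struct names below are made up): the missing piece described above is that, after the peer's encoder/decoder streams are accepted and stored, something in the connection's poll loop would also have to keep reading from them and feed the instructions into the dynamic-table state, roughly like this.

```rust
use std::task::{Context, Poll};

// Stand-in for a QUIC receive stream; h3 abstracts this behind its own traits.
pub trait RecvStream {
    fn poll_data(&mut self, cx: &mut Context<'_>) -> Poll<Option<Vec<u8>>>;
}

pub struct ConnectionInner<S: RecvStream> {
    // The peer's unidirectional QPACK streams (RFC 9204, Section 4.2),
    // accepted in the control-polling path and stored here.
    peer_encoder_stream: Option<S>,
    peer_decoder_stream: Option<S>,
}

impl<S: RecvStream> ConnectionInner<S> {
    // Storing the streams is not enough: they must also be polled so that
    // encoder/decoder instructions actually update the QPACK state.
    pub fn poll_qpack_streams(&mut self, cx: &mut Context<'_>) {
        let streams = [&mut self.peer_encoder_stream, &mut self.peer_decoder_stream];
        for stream in streams.into_iter().flatten() {
            while let Poll::Ready(Some(instructions)) = stream.poll_data(cx) {
                // Apply `instructions` to the dynamic table / acknowledgement
                // state here; without this step the dynamic table never changes.
                let _ = instructions;
            }
        }
    }
}
```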

@seanmonstar
Member

So I think the dynamic table encoding and decoding does not work right now, right?

Correct, we don't currently use the dynamic table, but as you've noticed in the spec, it's not required. It's strictly an optimization, allowing headers that aren't in the static table to be compressed. So I don't think it would block us.

@ifd3f

This comment was marked as off-topic.

@seanmonstar seanmonstar pinned this issue Feb 14, 2023
@seanmonstar
Member

So, I'm thinking that within the next couple of days, we just publish the crate as-is (on master) as v0.0.1. It doesn't come with a big blog post or anything; it's simply to allow a few experimenters to move forward (such as the reqwest PR). That we might change the API drastically, need to upgrade to quinn v0.9, etc. is fine; that can be 0.0.2.

Basically, the implementation is ready-ish, and the API is close.

(If any of the collaborators would like to be part of the cargo publish process, I'm happy for that to happen too.)

@Ruben2424
Contributor

If any of the collaborators would like to be part of the cargo publish process, I'm happy for that to happen too.

What exactly do you mean by "be part of the cargo publish process"?

@seanmonstar
Member

seanmonstar commented Mar 3, 2023

Basically, do the actual release/publish process (together in chat or something), to spread the knowledge/responsibility. (Probably should document it anyways...)

@Ruben2424
Contributor

Basically, do the actual release/publish process (together in chat or something), to spread the knowledge/responsibility. (Probably should document it anyways...)

I'm open to opportunities to get involved

@Ruben2424
Contributor

It is now published 🚀

@Ruben2424 Ruben2424 unpinned this issue Mar 14, 2023