Negotiation of pooling support #34
I'd like to propose a different model than the one in #25: the server sends the maximum WebTransport Session ID, or, since 0 is a valid value, really the minimum invalid WebTransport Session ID. To indicate that WT is not supported, the server sends 0. This mirrors H3 server push, and requires at least one new frame (MAX_WT_SESSION_ID). Perhaps it would also require a CANCEL_WT_SESSION frame? A client need only declare that it supports WebTransport, and even that could be optional: the server will know when it receives a CONNECT with :protocol=webtransport.
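A rough sketch of what such a frame could look like on the wire, assuming QUIC variable-length integer encoding (RFC 9000) for all fields; the frame type used here is a placeholder, since no codepoint is registered for MAX_WT_SESSION_ID:

```python
def encode_varint(value: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000, Section 16)."""
    if value < 1 << 6:
        return value.to_bytes(1, "big")
    if value < 1 << 14:
        return (value | (0x1 << 14)).to_bytes(2, "big")
    if value < 1 << 30:
        return (value | (0x2 << 30)).to_bytes(4, "big")
    if value < 1 << 62:
        return (value | (0x3 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

# Placeholder frame type for illustration only; not an IANA-registered value.
MAX_WT_SESSION_ID_FRAME_TYPE = 0x21

def encode_max_wt_session_id_frame(max_session_id: int) -> bytes:
    """H3-style frame: type varint, length varint, then the payload."""
    payload = encode_varint(max_session_id)
    return (encode_varint(MAX_WT_SESSION_ID_FRAME_TYPE)
            + encode_varint(len(payload))
            + payload)
```

Under this model, `encode_max_wt_session_id_frame(0)` would be the "WT not supported" signal described above.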
I wanted to have a negotiation mechanism in #25, but my conclusion after talking with @DavidSchinazi about it was that we should not do negotiation: servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect, meaning it's on the clients to know whether they can pool or not.
Though we may want a more specific 4xx error code so the client knows it should retry on a non-pooled connection.
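The client-side behavior being described could be sketched as follows; the function and callback names are illustrative, not from any real WebTransport API:

```python
# Hypothetical fallback logic: attempt a pooled WebTransport CONNECT on the
# shared h3 connection first; if the server rejects it with a 4xx, retry once
# on a fresh, dedicated connection.

def connect_webtransport(send_pooled_connect, send_dedicated_connect):
    """Each callback issues a CONNECT request and returns the HTTP status code."""
    status = send_pooled_connect()
    if 200 <= status < 300:
        return ("pooled", status)
    if 400 <= status < 500:
        # Rejection costs at least one extra round trip before the retry.
        status = send_dedicated_connect()
        if 200 <= status < 300:
            return ("dedicated", status)
    return ("failed", status)
```

This is where the extra-RTT cost discussed below comes from: the rejected pooled attempt must complete before the dedicated retry starts.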
That's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends on what the common client pooling strategies are and how many servers don't support pooling.
@afrind since it's looking like the JavaScript API will provide a way to disable pooling, I expect the pooling-attempted-but-rejected scenario to be quite rare, so a round-trip cost should be acceptable.
I'm also considering use cases for WT which are not tied to the JS API.
@afrind that's interesting - and those use cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?
@DavidSchinazi: the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.
@afrind how did the client get the hostname in the first place? I'm suggesting carrying pooling_support next to that.
Actually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know whether the edge server terminating their connection supports pooling, and that configuration could change over time.
(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.
I think my main problem with
Ok, MAX_WT_SESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.
As I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling could potentially have negative privacy consequences). I agree with this observation.

A server needs to support HTTP if we're using HTTP, so pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, one can simply not provide links to that server, so that it should rarely have to respond to GET/OPTIONS/POST/etc.

The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing, and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAX_WT_SESSIONS or similar.
I agree that we should have limits on how many WT sessions can be active at the same time on a connection. The spec should also define what happens when the limit is exceeded: should a client open another QUIC connection to the same server, or should it fail? Probably the former, but that should also have limits defined (six parallel QUIC connections may be too many? HTTP/1.1 has the six-parallel-connection limit).
Chair: discussed at IETF 113; consensus in the room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection.
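A minimal sketch of that outcome, assuming the limit rides in the server's h3 SETTINGS frame (RFC 9114); the setting identifier below is a placeholder, not a registered codepoint:

```python
SETTINGS_FRAME_TYPE = 0x04        # H3 SETTINGS frame type (RFC 9114)
SETTINGS_WT_MAX_SESSIONS = 0x2b1  # placeholder identifier, not registered

def encode_varint(value: int) -> bytes:
    """QUIC variable-length integer (RFC 9000, Section 16)."""
    if value < 1 << 6:
        return value.to_bytes(1, "big")
    if value < 1 << 14:
        return (value | (0x1 << 14)).to_bytes(2, "big")
    if value < 1 << 30:
        return (value | (0x2 << 30)).to_bytes(4, "big")
    if value < 1 << 62:
        return (value | (0x3 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

def encode_settings(settings: dict) -> bytes:
    """Encode an H3 SETTINGS frame from {identifier: value} pairs."""
    payload = b"".join(encode_varint(k) + encode_varint(v)
                       for k, v in settings.items())
    return (encode_varint(SETTINGS_FRAME_TYPE)
            + encode_varint(len(payload))
            + payload)

# A server allowing 16 concurrent WT sessions on this connection:
frame = encode_settings({SETTINGS_WT_MAX_SESSIONS: 16})
```

Since SETTINGS arrives at the start of the connection, the client learns the session budget before opening any WT CONNECT, avoiding the rejected-attempt round trip discussed earlier in the thread.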
The H3 draft by its nature allows for the parallel processing of multiple WT sessions, and also for intermixing normal H3 requests with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to.
There's already a PR open (#25) with a design idea.