Negotiation of pooling support #34

Closed · afrind opened this issue Mar 4, 2021 · 16 comments · Fixed by #86
Labels: capsule-dt · ietf-113 (Issues discussed at IETF 113) · pooling (Issues related to pooling multiple WebTransports together)

Comments

afrind (Collaborator) commented Mar 4, 2021

The H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 requests with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to.

There's already a PR open (#25) with a design idea.

afrind (Collaborator, Author) commented Mar 4, 2021

I'd like to propose a different model than the one in #25:

The server sends the maximum WebTransport Session ID (or, since 0 is a valid session ID, really the minimum invalid WebTransport Session ID).

- To indicate WT is not supported, the server sends 0.
- To allow only 1 session ever, the server sends 4 and never updates it.
- To allow only 1 session at a time, the server sends 4 and updates it by 4 as each session closes/resets.
- To allow N parallel sessions, the server sends 4 * N and updates it as sessions close/reset.

This mirrors H3 PUSH, and requires at least one more frame (MAX_WT_SESSION_ID). Perhaps it would also require CANCEL_WT_SESSION?

A client need only declare that it supports webtransport, and even that could be optional. The server will know if it receives a CONNECT with :protocol=webtransport.
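
A rough sketch of the accounting this proposal implies, as an editor's illustration (the frame name MAX_WT_SESSION_ID comes from the comment above; the class and method names are hypothetical):

```python
# Sketch of the proposed MAX_WT_SESSION_ID credit scheme. WT session IDs
# are the client-initiated bidirectional stream IDs carrying the CONNECT,
# which on the client side are 0, 4, 8, ... (hence the increments of 4).

class WTSessionLimiter:
    STREAM_ID_STEP = 4  # client-initiated bidi stream IDs differ by 4

    def __init__(self, max_concurrent_sessions: int):
        self.max_concurrent = max_concurrent_sessions
        self.closed_sessions = 0
        # The initial limit is the minimum invalid session ID.
        self.limit = max_concurrent_sessions * self.STREAM_ID_STEP

    def on_session_closed(self) -> int:
        """Return credit: raise the limit by 4 for each closed/reset session."""
        self.closed_sessions += 1
        self.limit = (self.max_concurrent + self.closed_sessions) * self.STREAM_ID_STEP
        return self.limit  # value to advertise in a new MAX_WT_SESSION_ID frame

    def accepts(self, session_id: int) -> bool:
        """A CONNECT on stream session_id is allowed only below the limit."""
        return session_id < self.limit
```

WTSessionLimiter(0) reproduces the "WT not supported" case; WTSessionLimiter(1) with no on_session_closed() calls is "only 1 session ever", and with such calls it is "only 1 session at a time".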

vasilvv (Collaborator) commented Mar 5, 2021

I wanted to have a negotiation mechanism in #25, but my conclusion after talking with @DavidSchinazi about it was that we should not do negotiation; servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect. That puts it on the clients to know whether they can pool or not.
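
For contrast, a minimal sketch of this no-negotiation behavior, as an editor's illustration (the function, constant, and return strings are hypothetical, not any real server API):

```python
# A server that does not support pooling just rejects extra WebTransport
# CONNECTs; nothing is advertised in advance.

MAX_WT_SESSIONS = 1  # this server's private limit; never negotiated

def classify_request(headers: dict, active_wt_sessions: int) -> str:
    """Decide how to handle a request arriving on a new stream."""
    is_wt_connect = (headers.get(":method") == "CONNECT"
                     and headers.get(":protocol") == "webtransport")
    if not is_wt_connect:
        return "handle-as-http"
    if active_wt_sessions >= MAX_WT_SESSIONS:
        # The client only learns this after a round trip; a dedicated 4XX
        # code (see the next comment) could tell it to retry on a fresh,
        # non-pooled connection.
        return "reject-with-400"  # or reset the request stream
    return "accept-wt-session"
```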

DavidSchinazi (Collaborator) commented:

Though we may want a more specific 4XX error code so the client knows it should retry on a non-pooled connection.

afrind (Collaborator, Author) commented Mar 5, 2021

That's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends on what the common client pooling strategies are and how many servers don't support pooling.

DavidSchinazi (Collaborator) commented:

@afrind since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round-trip cost should be acceptable.

afrind (Collaborator, Author) commented Mar 5, 2021

I'm also considering use cases for WT which are not tied to the JS API.

DavidSchinazi (Collaborator) commented:

@afrind that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?

afrind (Collaborator, Author) commented Mar 6, 2021

@DavidSchinazi : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.

DavidSchinazi (Collaborator) commented:

@afrind how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.

afrind (Collaborator, Author) commented Mar 6, 2021

Actually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.

DavidSchinazi (Collaborator) commented:

(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.

vasilvv (Collaborator) commented Mar 6, 2021

I think my main problem with MAX_WT_SESSION_ID is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. #25 was written with that assumption in mind.

afrind (Collaborator, Author) commented Mar 8, 2021

Ok, MAX_WT_SESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.

martinthomson (Contributor) commented Mar 8, 2021

As I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server.

Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation.

A server needs to support HTTP if we're using HTTP, so pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, one can simply not provide links to that server, so that it never has to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing, and I believe that will be sufficient for this.

Thus, I'm a supporter of option 3: MAX_WT_SESSIONS or similar.
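
As an editor's illustration of Yutaka's point above, the client can act unilaterally once the server's settings arrive; the function and parameter names below are hypothetical:

```python
# Client-side decision after receiving the server's SETTINGS: no
# client-to-server signal (with its potential privacy cost) is needed.

def plan_connection_use(server_wt_session_limit: int, want_pooling: bool) -> str:
    """Decide how to use an established h3 connection."""
    if server_wt_session_limit == 0:
        return "http-only"   # server accepts no WebTransport sessions
    if not want_pooling:
        return "dedicated"   # one WT session, nothing else, by client choice
    return "pooled"          # mix WT sessions with ordinary HTTP requests
```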

ddragana commented:

I agree that we should have limits on how many WT sessions can be active at the same time on a connection. We should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server, or should it fail? Probably the former, but that should also have defined limits (6 parallel QUIC connections may be too many? HTTP/1.1 has the 6-parallel-connection limit).

vasilvv added the pooling label on Mar 21, 2022
DavidSchinazi (Collaborator) commented:

Chair: discussed at IETF 113; consensus in the room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection.
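
As a sketch of how that consensus might look on the wire (editor's illustration; the identifier constant below is a placeholder, not the registered codepoint), an HTTP/3 SETTINGS entry is just two QUIC varints:

```python
# Encode a SETTINGS entry carrying a WebTransport session count.

SETTINGS_WT_MAX_SESSIONS = 0x2B60_3742  # placeholder identifier for illustration

def encode_varint(value: int) -> bytes:
    """Encode a QUIC variable-length integer (RFC 9000, Section 16)."""
    if value < 0x40:
        return value.to_bytes(1, "big")
    if value < 0x4000:
        return (value | 0x4000).to_bytes(2, "big")
    if value < 0x4000_0000:
        return (value | 0x8000_0000).to_bytes(4, "big")
    return (value | 0xC000_0000_0000_0000).to_bytes(8, "big")

def encode_setting(identifier: int, value: int) -> bytes:
    """An HTTP/3 SETTINGS entry is two varints: identifier, then value."""
    return encode_varint(identifier) + encode_varint(value)

# A server willing to host up to 8 concurrent WT sessions on this connection:
entry = encode_setting(SETTINGS_WT_MAX_SESSIONS, 8)
```

Under this scheme a value of 0 would presumably mean no WebTransport sessions are allowed, folding the on/off signal and the session limit into a single field.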
