net/http: don't block RoundTrip when the Transport hits MaxConcurrentStreams #27044
/cc @bradfitz @tombergan
If we revert to the old behavior, one lowish priority thing we can do is have
There looks to be a bit of overlap with #17776. Just wanted to make sure the two issues get linked together.
FWIW, this is one of the reasons I disable http2 with
If a user has the client configured to allow an unlimited number of connections per host, then I believe the H2 implementation should create a new connection when it has reached the server-advertised maximum. However, there is some overhead in creating a new connection; perhaps a deadline on the RoundTrip block could attempt to stay within the server-configured maxStreams before offloading the request to a new connection?
We're facing a problem with this change as well: we're sending HTTP requests and responses with huge bodies, and performance isn't optimal because all of them are serialized over a single TCP connection. We expect to have around 10 clients against a single server endpoint, so with the current approach that means 10 concurrent TCP connections overall (1 connection per client-server pair). Right now in our test environment, the client is a load test with N goroutines sending HTTP requests concurrently. These requests are all serialized and queued over a single connection, which leads to an actual bandwidth drop when concurrency is increased past some point.
To add some numbers: with the unpatched HTTP/2 library we get around 1.5 Gbit/s (up+down); with the same settings and parts of the patch removed to allow more connections to be open, we get 7 Gbit/s (up+down) over several connections. The number of concurrent streams is limited on the server side to 30. This might not be the optimal number and we'll keep testing, but definitely +1 for this change to be reverted or made configurable.
It seems like the client is assuming that all future connections to a given host will hit the same backend that told the client how many concurrent streams it could send over the first connection. That assumption doesn't hold for load-balanced services, where the same host can be serviced by multiple backends. A limit on max connections per host makes sense, but MaxConcurrentStreams isn't a great stand-in for it.
Nothing prevents a load balancer from replying with a high max concurrent streams. The problem I see is h2 implementations using the recommended minimum. Go's current h2 bundle uses a value of 1000 for client connections by default, but only 250 for server connections. Further, the server stream comment refers to the Google Front End using a default value of 100, and I would think that Google servers can handle more than 100 concurrent requests, regardless of whether they are from one host or, say, a proxy server.
We're seeing significantly reduced throughput after this change when using http://google.golang.org/api/bigquery/v2 to stream data into BigQuery.
Can someone say what the HTTP/2 spec says about this setting? (Brad says it doesn't say.) |
I'm not sure what browsers do is the most relevant for Go. I would assume Go is more often used to build proxies or backend API clients than user-facing HTTP clients. Having the HTTP library choose to block a request, with no good way for the caller to control or avoid it, is a big no-no for any low-latency project IMHO.
Section 9.1:
Echoing @rs, it seems that a lot of the HTTP2 considerations are for browsers, and this max concurrent streams setting unnecessarily limits proxies, especially so when the proxy is talking to a backend that unnecessarily limits the max stream count (e.g. BigQuery replies with a limit of 100).
@rsc I think the Go HTTP/2 client library should be configurable, at least to choose one of two behaviors: block until streams are available, or open new connections. It seems that the behavior prior to the change (no blocking) might be the better default option.
We also need to be able to monitor in-flight requests at the connection-pool level so we can anticipate the need to open new connections. Here is an old proposal on that: HTTP/2 Custom Connection Pool.
I'd like to second the sentiment that using browsers as our only guidance behind this doesn't feel like the best path, simply because of the different use cases between a user browsing a blog and a service that's built to multiplex lots of requests to backend systems. I think the browser functionality should be considered as part of the decision, but we should also take a look at the HTTP/2 implementations in other languages too. With gRPC using HTTP/2, and its usage in the industry growing, polyglot interoperability is becoming more prevalent. I think we should make sure Go is going to play nicely in those ecosystems, as well as be a viable option over other languages. It'd be unfortunate for it to have some sort of red mark like this that would prevent people from adopting Go.
Hey all, I had an application communicating with an AWS ALB that got bitten by this issue this week. It seems like this is something that should be configured by:
// MaxConnsPerHost optionally limits the total number of
// connections per host, including connections in the dialing,
// active, and idle states. On limit violation, dials will block.
//
// Zero means no limit.
//
// For HTTP/2, this currently only controls the number of new
// connections being created at a time, instead of the total
// number. In practice, hosts using HTTP/2 only have about one
// idle connection, though.
MaxConnsPerHost int
Is that the case? Or have I misunderstood the configuration option? At my current understanding it feels like the immediate path forward is to disable HTTP/2. Is there a better alternative?
I'm leaning towards reverting this behavior for Go 1.12 and making it more opt-in somehow.
@bradfitz Thank you for taking attention to this issue. I see that two things are being addressed in the last comment: A. the decision to revert the behavior, and B. making the behavior opt-in. Do we currently know if both A and B are targets (for the Go 1.12 milestone or otherwise)? I want to ensure I deliver the most accurate information to my team regarding this issue.
Sounds like the decision is to revert this behavior for Go 1.12.
Change https://golang.org/cl/151857 mentions this issue:
…EAMS And add the http2.Transport.StrictMaxConcurrentStreams bool knob to opt back in to the behavior being reverted.

In CL 53250 for golang/go#13774 (for Go 1.10) we changed the HTTP/2 Transport's policy such that a server's advertisement of a MAX_CONCURRENT_STREAMS value meant that it was a maximum for the entire process, instead of just a single connection. We thought that was a reasonable interpretation of the spec and provided nice safety against slamming a server from a bunch of goroutines doing concurrent requests, but it's been largely unpopular (see golang/go#27044). It's also different behavior from HTTP/1, and because you're usually not sure which protocol version you're going to get, you need to limit your outbound HTTP requests anyway in case you're hitting an HTTP/1 server. And nowadays we have the Go 1.11 Transport.MaxConnsPerHost knob too (CL 71272 for golang/go#13957). It doesn't yet work for HTTP/2, but it will in either Go 1.12 or Go 1.13 (golang/go#27753).

After this is bundled into net/http, the default HTTP client will have this knob set false, restoring the old Go 1.9 behavior where new TCP connections are created as necessary. Users wanting the strict behavior can import golang.org/x/net/http2 themselves and make a Transport with StrictMaxConcurrentStreams set to true. Or they can set Transport.MaxConnsPerHost, once that works for HTTP/2.

Updates golang/go#27044 (fixes after bundle into std)

Change-Id: I4efdad7698feaf674ee8e01032d2dfa5c2f8a3a8
Reviewed-on: https://go-review.googlesource.com/c/151857
Reviewed-by: Andrew Bonventre <[email protected]>
Change https://golang.org/cl/152080 mentions this issue:
@bradfitz is it really closed, or did gopherbot close it incorrectly? (I'm assuming it does so by detecting the words fixed/fixes in the line that contains the issue number.)
@DmitriyMV The link it provided did not take me directly to the CL. Here it is: https://go-review.googlesource.com/c/net/+/151857/ To my knowledge, in the Go project issues are typically closed by the authors/contributors after the code is in place, rather than by the original issue author after verifying the fix.
@DmitriyMV, this is correctly closed.
The CL that disables connection pooling for HTTP2 creates a significant discontinuity in throughput when the server specifies a small number of maximum concurrent streams.
https://go-review.googlesource.com/c/net/+/53250
HTTP2 support is automatically enabled in Go under conditions not always specified by the developer. For example, configuration files often alternate between http and https endpoints. When using an http endpoint, Go will use HTTP/1, whereas https endpoints use HTTP/2.
The HTTP/1 default transport will create as many connections as needed in the background. The HTTP2 default transport does not (although it used to).
As a result, HTTP1 endpoints get artificially high throughput when compared to HTTP2 endpoints, which block waiting for more streams to become available instead of creating a new connection. For example, the AWS ALB limits the maximum number of streams to 128.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
This HTTP/2 client is blocked once it hits 128 streams and waits for more to become available. The HTTP/1 client does not. The performance of the HTTP/1 client is orders of magnitude faster as a result. This effect is annoying and creates a leaky abstraction in the net/http package. The consequence of this is that importers of the net/http package now have to:
1.) Distinguish between HTTP and HTTPS endpoints
2.) Write a custom connection pool for the transport when HTTP2 is enabled
I think the previous pooling functionality should be restored.