Implement per-ClientConn/Transport maximum receive buffer sizes #2792
Comments
As you have said, connection-level flow control is decoupled from streams. So, as an immediate solution, can you try using stream-level flow control?
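Something along these lines is what I have in mind (a rough sketch only; the window value is illustrative, and grpc-go documents that values below 64KB are ignored for these options):

```go
package main

import (
	"google.golang.org/grpc"
)

// Illustrative value only; grpc-go ignores values smaller than 64KB here.
const streamWindow = 64 * 1024

// Server side: cap how much each stream (and the connection) may buffer.
func newServer() *grpc.Server {
	return grpc.NewServer(
		grpc.InitialWindowSize(streamWindow),     // per-stream receive window
		grpc.InitialConnWindowSize(streamWindow), // per-connection receive window
	)
}

// Client side: the matching dial options for the client's receive windows.
func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(),
		grpc.WithInitialWindowSize(streamWindow),
		grpc.WithInitialConnWindowSize(streamWindow),
	)
}
```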
@asubiotto I'd be interested in looking at your test code. Just curious. Thanks.
@canguler the issue is that I want new streams to experience backpressure if the current streams are blocking the connection; I don't think I can do this with stream-level flow control. @easwars the code is a bit cockroachdb-specific. I can work on a less verbose test, but the idea is there: https://gist.github.com/asubiotto/72552186a4b420b68a8cd325bc184256
@asubiotto So, you want the old behavior (the behavior before connection-level flow control was decoupled from stream-level flow control)? But then we come back to the issue of new streams starving because existing streams are using all of the connection window.
@canguler, yes, I want the old behavior. My argument is that streams starving because existing streams are using all of the connection window is expected and desired (https://http2.github.io/http2-spec/#rfc.section.6.9.1); otherwise a fast sender could overwhelm a slow receiver.
Closing due to lack of activity and priority. |
What version of gRPC are you using?
1.13.0
What version of Go are you using (go version)?
go version go1.11.5 darwin/amd64
What operating system (Linux, Windows, …) and version?
Darwin Kernel Version 18.0.0: Wed Aug 22 20:13:40 PDT 2018; root:xnu-4903.201.2~1/RELEASE_X86_64
What did you do?
Created a bunch of streams asynchronously, sent data on each one, slept the main goroutine for 10s, grabbed a heap profile, and then started draining each stream in turn. I can share test code if needed.
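For reference, a condensed sketch of that pattern (the pb.Streamer service, its Push method, and the Chunk message are hypothetical stand-ins for the cockroachdb-specific code in the gist linked in the comments above):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/streamerpb" // hypothetical generated package
)

func repro(conn *grpc.ClientConn) {
	client := pb.NewStreamerClient(conn)
	payload := make([]byte, 1<<20) // 1MB per message

	const numStreams = 1000
	for i := 0; i < numStreams; i++ {
		stream, err := client.Push(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		// Send continuously on each stream; the receiving application is
		// not reading yet, so flow control should eventually block these.
		go func(s pb.Streamer_PushClient) {
			for {
				if err := s.Send(&pb.Chunk{Data: payload}); err != nil {
					return
				}
			}
		}(stream)
	}

	// Give the senders time to fill whatever buffering exists, then grab a
	// heap profile on the receiving side before draining the streams in turn.
	time.Sleep(10 * time.Second)
}
```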
What did you expect to see?
Flow control to apply backpressure on the sending side and keep memory usage at roughly the connection window size.
What did you see instead?
~9GB of memory usage on the receiving side.
This seems to happen because connection-level flow control is decoupled from stream-level flow control (1e47334 added this to prevent starvation): even if the application server is not reading any data, the gRPC client updates its connection-level flow control as if that data had been read, which allows new streams to send data up to their stream window size. All of this data gets acknowledged by the gRPC server but is buffered until the application server reads it from gRPC. According to my understanding, flow control is supposed to apply backpressure so that these sorts of memory blowups do not occur.
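To make the blowup concrete, a back-of-the-envelope sketch (the stream count and window size below are purely illustrative, not taken from the actual test):

```go
package main

import "fmt"

// With the connection-level window no longer limiting anything, the
// receiving side can buffer roughly one stream window per unread stream,
// so worst-case buffered bytes grow as unreadStreams * streamWindow.
func main() {
	const (
		unreadStreams = 4500
		streamWindow  = 2 << 20 // 2MB per-stream window (illustrative)
	)
	fmt.Printf("worst-case buffered: ~%.1f GB\n",
		float64(unreadStreams*streamWindow)/float64(1<<30))
}
```

Under those illustrative numbers the bound comes out to roughly 9GB, the same order of magnitude as the memory usage observed above.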