Implement per-ClientConn/Transport maximum receive buffer sizes #2792

Closed
asubiotto opened this issue Apr 26, 2019 · 6 comments
Labels
P2, Type: Feature

Comments

@asubiotto

What version of gRPC are you using?

1.13.0

What version of Go are you using (go version)?

go version go1.11.5 darwin/amd64

What operating system (Linux, Windows, …) and version?

Darwin Kernel Version 18.0.0: Wed Aug 22 20:13:40 PDT 2018; root:xnu-4903.201.2~1/RELEASE_X86_64

What did you do?

Created a bunch of streams asynchronously, sent data on each one, slept the main goroutine for 10s, grabbed a heap profile, and then started draining each stream in turn. I can share test code if needed.
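As a rough illustration (not the original test code, which is CockroachDB-specific and linked later in the thread), a minimal sketch of the setup described above, assuming a hypothetical generated bidi-streaming Echo service (`pb.EchoClient`, `pb.Msg` are stand-in names). With many concurrent streams left undrained, total unread data approaches numStreams × per-stream window rather than being capped by the connection window:

```go
// Sketch of the repro: open many streams, push data on each, then stall
// before draining. All pb.* names are hypothetical stand-ins for a generated
// bidi-streaming service; the grpc.Dial options are real gRPC-Go APIs.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	pb "example.com/echo" // hypothetical generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := pb.NewEchoClient(conn)

	payload := make([]byte, 1<<20) // 1 MiB per message
	var streams []pb.Echo_EchoClient

	// Open many streams and send on each without reading the responses yet.
	for i := 0; i < 512; i++ {
		s, err := client.Echo(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		streams = append(streams, s)
		go func(s pb.Echo_EchoClient) {
			for {
				if err := s.Send(&pb.Msg{Data: payload}); err != nil {
					return
				}
			}
		}(s)
	}

	// Let unread data accumulate; grab a heap profile here
	// (e.g. via net/http/pprof on the receiving side).
	time.Sleep(10 * time.Second)

	// Only now start draining each stream in turn.
	for _, s := range streams {
		for {
			if _, err := s.Recv(); err != nil {
				break
			}
		}
	}
}
```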

What did you expect to see?

I expected flow control to apply backpressure on the sending side and keep memory usage at roughly the connection window size.

What did you see instead?

~9GB of memory usage on the receiving side (heap profile screenshot).

It seems this happens because connection-level flow control is decoupled from stream-level flow control (1e47334 added this to prevent starvation). Even if the application server is not reading any data, the gRPC client updates its connection-level flow control as if that data had been read, so new streams can keep sending data up to their stream window size. All of this data is acknowledged by the gRPC server but buffered until the application reads it from gRPC. My understanding is that flow control is supposed to apply backpressure precisely so that these sorts of memory blowups do not occur.
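To make the decoupling concrete, here is a toy model of receive-side window accounting; this is illustrative only and does not reflect gRPC-Go's actual transport code. With coupled accounting, connection-window credit is only returned once the application reads, so unread data is bounded by the connection window; with decoupled accounting, credit is returned as soon as the transport receives the data, so only the per-stream windows throttle senders:

```go
// Toy model of receive-side flow-control accounting (illustrative only).
package flowtoy

type conn struct {
	connWindow int64 // credit advertised to the peer for the whole connection
	buffered   int64 // bytes received but not yet read by the application
	decoupled  bool  // true: mimic the behavior after commit 1e47334
}

// onData is called when a DATA frame arrives for any stream.
func (c *conn) onData(n int64) bool {
	if c.connWindow < n {
		return false // a well-behaved sender would have stalled here
	}
	c.connWindow -= n
	c.buffered += n
	if c.decoupled {
		// Connection-level credit is returned immediately, regardless of
		// whether the application has read anything, so unread data can
		// grow to roughly numStreams × per-stream window.
		c.connWindow += n
	}
	return true
}

// onApplicationRead is called when the application consumes n bytes.
func (c *conn) onApplicationRead(n int64) {
	c.buffered -= n
	if !c.decoupled {
		// Coupled accounting: credit is only returned once data is read,
		// so total unread data stays bounded by the connection window.
		c.connWindow += n
	}
}
```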

@dfawley dfawley added the P2 and Type: Feature labels and removed the Type: Bug label May 9, 2019
@canguler

canguler commented May 9, 2019

As you said, connection-level flow control is decoupled from stream-level flow control. So, as an immediate workaround, can you try using stream-level flow control?
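For reference, the per-stream and per-connection receive windows can be tuned through real gRPC-Go DialOptions (the server has matching grpc.InitialWindowSize / grpc.InitialConnWindowSize ServerOptions); the address and window values below are illustrative. Shrinking the per-stream window limits how much each individual stream can buffer, though with the decoupled accounting described above the total across many streams is still not bounded by the connection window:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithInsecure(),
		grpc.WithInitialWindowSize(64*1024),   // per-stream receive window
		grpc.WithInitialConnWindowSize(1<<20), // connection-level receive window
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```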

@easwars
Contributor

easwars commented May 9, 2019

@asubiotto I'd be interested in looking at your test code. Just curious. Thanks.

@asubiotto
Author

@canguler the issue is that I want new streams to experience backpressure when existing streams are saturating the connection; I don't think I can achieve that with stream-level flow control alone.

@easwars the code is a bit CockroachDB-specific; I can work on a less verbose test, but the idea is there: https://gist.github.com/asubiotto/72552186a4b420b68a8cd325bc184256

@dfawley dfawley changed the title from "Flow control failing to apply backpressure" to "Implement per-ClientConn/Transport maximum receive buffer sizes" May 10, 2019
@canguler

@asubiotto So you want the old behavior (the behavior before connection-level flow control was decoupled from stream-level flow control)? But then we are back to the issue of new streams starving because existing streams use up the entire connection window.

@asubiotto
Author

@canguler, yes, I want the old behavior. My argument is that new streams starving because existing streams are using the entire connection window is expected and desirable (https://http2.github.io/http2-spec/#rfc.section.6.9.1); otherwise a fast sender could overwhelm a slow receiver.

@menghanl
Contributor

menghanl commented May 3, 2021

Closing due to lack of activity and priority.
The team won't be able to work on this in the foreseeable future.

@menghanl menghanl closed this as completed May 3, 2021
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 31, 2021