Tracing keeps entire requests in memory, doesn’t truncate #695
Comments
+1. This is affecting CockroachDB. We occasionally send very large messages, which get held in gRPC for a long time if we have tracing enabled (these are the only requests to make it into the larger time buckets, so they aren't flushed out by the more frequent smaller requests). We're going to turn tracing off for now, but I'd like to have finer-grained control over tracing. We've found the connection-level tracing very useful, but the message/stream-level tracing is less useful to us and much more expensive. I'd like to be able to turn off the logging of message bodies (especially because they're often hidden behind …).
This retains a subset of messages for display on /debug/requests, which is very expensive for snapshots. Until we can be more selective about what is retained in traces, we must disable tracing entirely. See grpc/grpc-go#695
We are working with another team in Google on replacing the existing tracing. Stay tuned.
Any update on this?
More than a year has passed. What's the current status? Flying blind is killing one of my biggest motivations to use gRPC in the first place.
We recommend disabling tracing in gRPC (…).
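For reference, gRPC-Go exposes a package-level switch for this. A minimal sketch, assuming the (cut-off) recommendation above refers to the grpc.EnableTracing flag:

```go
package main

import "google.golang.org/grpc"

func main() {
	// grpc.EnableTracing controls the golang.org/x/net/trace integration.
	// It must be set before any RPCs are sent or received.
	grpc.EnableTracing = false

	srv := grpc.NewServer() // no per-message trace entries will be retained
	_ = srv
}
```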
After more deliberation, we decided that truncating the messages should be easy and shouldn't be too controversial, so we will just do that for now and file a separate issue to track the implementation of tracing through the stats API. |
...and after the first attempt in #1508, we realized it's very difficult to fix the memory usage problem. To truncate the message before storing it, we must first (s)print it -- but in most cases we'll never display the message, which is why the String method is evaluated lazily. The best I believe we can do without changing what we render in the tracing (e.g. omitting the message contents) is to truncate the message when displaying it, to protect the browser when it renders the trace page. In theory we could truncate the binary-serialized message when storing it, but I don't believe the Go protobuf library handles truncated messages, so we wouldn't be able to render it in text form later if we did that. We'll also be disabling tracing by default to prevent this problem from impacting users who don't actually need tracing.
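A minimal sketch of the "truncate at display time" idea: a fmt.Stringer wrapper whose formatting is deferred until the trace page actually renders it, with the rendered string capped. The type name and limit are illustrative, not grpc-go's actual implementation; note that the full message is still referenced in memory, which is why this only protects the browser.

```go
package trace

import "fmt"

const maxRenderedLen = 1024 // illustrative cap on what /debug/requests displays

// lazyPayload satisfies fmt.Stringer, so golang.org/x/net/trace's LazyLog
// only formats it when the trace is actually viewed.
type lazyPayload struct {
	sent bool
	msg  interface{} // the full message stays referenced here
}

func (p lazyPayload) String() string {
	dir := "recv"
	if p.sent {
		dir = "sent"
	}
	s := fmt.Sprintf("%s: %v", dir, p.msg)
	if len(s) > maxRenderedLen {
		return s[:maxRenderedLen] + " (truncated)"
	}
	return s
}
```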
Thanks for looking into this!
This is not what I observe: https://play.golang.org/p/u9xZoYLx6J results in:
i.e., proto.Unmarshal returns an error, but as a side effect it also fills in the message as best it can. I think it's safe to unmarshal a truncated message and display the result textually.
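A rough reconstruction of that experiment (the playground output is not preserved in this copy), using the current google.golang.org/protobuf API and descriptorpb purely as a convenient generated message type; how much of a partially decoded message survives may vary between protobuf library versions:

```go
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/descriptorpb"
)

func main() {
	msg := &descriptorpb.FileDescriptorProto{
		Name:    proto.String("example.proto"),
		Package: proto.String("example"),
		Syntax:  proto.String("proto3"),
	}
	full, err := proto.Marshal(msg)
	if err != nil {
		log.Fatal(err)
	}

	// Cut the wire bytes mid-message.
	truncated := full[:len(full)-5]

	out := &descriptorpb.FileDescriptorProto{}
	// Unmarshal reports an error for the truncated input, but fields that
	// were fully decoded before the cut are typically still populated.
	if err := proto.Unmarshal(truncated, out); err != nil {
		fmt.Println("unmarshal error:", err)
	}
	fmt.Println("partially decoded:", out)
}
```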
That's encouraging. So maybe the tracing implementation that is based on the stats handler interface can do something a little smarter. #1510
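For illustration, a hedged sketch of what a "smarter" stats-handler-based approach could look like: a stats.Handler that records only message sizes and never retains message bodies. The handler name and what it does with the sizes are made up for this example.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
)

type sizeOnlyHandler struct{}

func (sizeOnlyHandler) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
	return ctx
}
func (sizeOnlyHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}
func (sizeOnlyHandler) HandleConn(context.Context, stats.ConnStats) {}

func (sizeOnlyHandler) HandleRPC(_ context.Context, s stats.RPCStats) {
	switch p := s.(type) {
	case *stats.InPayload:
		log.Printf("recv payload: %d bytes", p.Length) // size only; the body is not kept
	case *stats.OutPayload:
		log.Printf("sent payload: %d bytes", p.Length)
	}
}

func main() {
	// Attach to a server; grpc.WithStatsHandler does the same for clients.
	_ = grpc.NewServer(grpc.StatsHandler(sizeOnlyHandler{}))
}
```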
There's a corresponding TODO (if I understood it correctly) at grpc-go/trace.go, line 98 (commit b0b7afa).
Currently, when sending RPCs which contain lots of data (e.g. a scanned document page in https://github.com/stapelberg/scan2drive), grpc accumulates RAM (looking at the process's resident set size after calling (runtime/debug).FreeOSMemory()). Looking at the /debug/requests page explains where the RAM goes: the messages are included in full in the trace contexts. This not only makes my browser unusable, but also wastes a lot of memory. I think we should truncate these messages and retain only the truncated version.
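A small sketch of the setup being described (service registration is omitted and the ports are arbitrary): a gRPC server with tracing switched on, next to an HTTP listener so the /debug/requests page registered by golang.org/x/net/trace can be inspected. While tracing is on, the full request and response messages appear there and are retained with the traces.

```go
package main

import (
	"log"
	"net"
	"net/http"

	_ "golang.org/x/net/trace" // registers /debug/requests on http.DefaultServeMux
	"google.golang.org/grpc"
)

func main() {
	grpc.EnableTracing = true // per-RPC tracing is what retains the messages

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	// pb.RegisterScanServiceServer(srv, &server{}) // hypothetical service registration
	go func() { log.Fatal(srv.Serve(lis)) }()

	// Visit http://localhost:6060/debug/requests (localhost only by default)
	// to see the retained traces, including message contents.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```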