netlink: consider memoizing the largest seen buffer size in Conn.Receive #179
Comments
Before committing to doing this with a public API, I may do something along the lines of
I used netlink for internal services and optimized conn.Receive with sync.Pool, reducing GC mark time and saving about 28% CPU.
I don't believe a sync.Pool optimization would be safe, because the library hands out allocated buffers for caller use, and putting those back in the pool could mean they get clobbered. I think a more appropriate approach to reusing memory would be to add an API which reads into a caller-allocated buffer and leaves managing that buffer up to the caller.
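To make the distinction concrete, here is a rough sketch of that caller-managed approach. ReceiveInto, the Conn and Message stand-ins, and the 64 KiB pool size are all hypothetical, not existing netlink APIs; the point is that the pool lives entirely on the caller's side, so the library never recycles memory it has already handed out.

```go
package example

import "sync"

// Message is a stand-in for netlink.Message.
type Message struct {
	Data []byte
}

// Conn is a stand-in for netlink.Conn. ReceiveInto is a hypothetical method
// that reads pending messages into the caller-provided buffer b and returns
// messages whose Data fields alias b; because the caller owns b, only the
// caller decides when it is safe to reuse it.
type Conn struct{}

func (c *Conn) ReceiveInto(b []byte) ([]Message, error) {
	// A real implementation would recvmsg into b and parse netlink headers.
	return nil, nil
}

// The pool is owned by the caller, not the library.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 64*1024) },
}

func handleOne(c *Conn, process func([]Message)) error {
	b := bufPool.Get().([]byte)
	// Returning b to the pool is safe only because we finish with the
	// messages (and anything aliasing b) before this function returns.
	defer bufPool.Put(b)

	msgs, err := c.ReceiveInto(b)
	if err != nil {
		return err
	}
	process(msgs) // must not retain msgs[i].Data beyond this call
	return nil
}
```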
Closing in favor of #178.
Right now we allocate a page of memory, peek into the socket, and double the size of the buffer if we can't fit the entire message.
For APIs which frequently return large amounts of data, it probably makes sense to memoize the largest buffer size used on a per-Conn basis, so that we know how much to allocate up front rather than looping with peek/allocate.
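As a rough sketch of the idea (not the library's actual code; the conn type, its lastLen field, and the peek/read callbacks are hypothetical stand-ins for the real socket calls):

```go
package example

import "sync"

const pageSize = 4096

// conn stands in for netlink.Conn; lastLen is the memoized maximum message size.
type conn struct {
	mu      sync.Mutex
	lastLen int
}

// receiveBuffer sizes the initial buffer from the largest message seen so far,
// so frequent large reads usually skip the peek/grow loop entirely.
func (c *conn) receiveBuffer() []byte {
	c.mu.Lock()
	defer c.mu.Unlock()
	n := c.lastLen
	if n < pageSize {
		n = pageSize
	}
	return make([]byte, n)
}

// recordSize remembers the largest message length observed on this conn.
func (c *conn) recordSize(n int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if n > c.lastLen {
		c.lastLen = n
	}
}

// receive shows the overall flow. peek stands in for a
// recvmsg(MSG_PEEK|MSG_TRUNC) call that reports the length of the pending
// message without consuming it; read consumes the message into b.
func (c *conn) receive(
	peek func() (int, error),
	read func(b []byte) (int, error),
) ([]byte, error) {
	b := c.receiveBuffer()
	for {
		n, err := peek()
		if err != nil {
			return nil, err
		}
		if n <= len(b) {
			break
		}
		// Only messages larger than anything seen before still grow the buffer.
		b = make([]byte, n)
	}
	n, err := read(b)
	if err != nil {
		return nil, err
	}
	c.recordSize(n)
	return b[:n], nil
}
```

One trade-off of this sketch: a single unusually large message keeps the memoized size high for the lifetime of the Conn, which is presumably acceptable for the high-volume APIs this is aimed at.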