dispatch goroutine leak #45
I have worked out why resources are not being freed, and perhaps this is a bug: server.dispatch() never ends. Am I doing something wrong?
I have created a standalone reproducer here. Basically if you run rpc/example_test, it has the same behavior: goroutines persist after the connection is closed.
Drat. The RPC package probably needs some shuffling of internals: the monitor goroutine approach is prone to these sorts of leaks. I'll try to get to this soon, but realistically it may be weeks or months.
If possible, would you mind explaining how you would approach it? (Or approach a workaround?) I wanted to use the RPC for a project, but I can't really afford to leak a goroutine for every capability. What happens if the client attempts to use a capability after its dispatch() goroutine has ended? How do other implementations of capnproto rpc release capabilities?
You should be able to call …
Oh okay, I think I see the issue. I haven't had a chance to sit down and reproduce this myself yet, but I'd imagine that …
I've found a couple different issues here, stay tuned. I'll try to fix this over the weekend.
Awesome, much appreciated.
Thanks so much for the great repro case, it really helped! I've made a slew of fixes around capability lifetimes and goroutine leaks. There's still more improvements that I'd like to make to the rpc package to improve resource consumption — it hasn't seen too much production usage yet AFAIK. If you find any other issues, please let me know so I can take a look.
Thanks for fixing it so fast! I love it when open source projects are so responsive. I plan to use it in a production system, and we have a lot of infrastructure for testing performance/resource problems, so I'll let you know how it goes.
As a follow-up here, I am experimenting with trying to reduce the number of goroutines and make the locking semantics more clear. It's stable, but it's uncovering a long tail of weird edge cases. I wish the RPC protocol wasn't so geared around a single-threaded implementation sometimes. I'll publish the branch when it's more stable.
FYI, this work has now been merged in. I highly recommend pulling from master: all sorts of subtle issues have been resolved.
I'm following through the getting started for rpc, and I have managed to work out most of my questions, but I have one that I can't quite work out.
Say I have some schema like
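(The schema snippet itself was not preserved here; a minimal hypothetical sketch consistent with the discussion — the interface, method, and field names below are guesses, not the original — might look like:)

```capnp
# Hypothetical schema; names are inferred from the surrounding discussion.
interface Cursor {
  next @0 () -> (values :List(Text), done :Bool);
}

interface Db {
  doLargeQuery @0 (query :Text) -> (cursor :Cursor);
}
```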
And say the client calls doLargeQuery: the server creates some Go struct that, for argument's sake, occupies a lot of memory and implements Cursor. The server then sends the cursor capability back to the client.
Now my question is how do I idiomatically control the lifetime of the cursor struct? Presumably the server is holding on to a reference to it so that when it receives calls on the capability it can execute them, but how does the client "free" the capability so that the underlying struct I passed to Cursor_ServerToClient can be freed?
I did some experiments with printing from finalizers (which is always a bit shady), but the struct I pass to Cursor_ServerToClient never seems to become unreferenced, even after the RPC connection's Wait() returns.
How does it all work?
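The contract the question is reaching for can be sketched without the capnp library at all. Below is a self-contained Go analogy (none of these names come from the library — bigCursor, capability, AddRef, and Release are all made up for illustration): a reference-counted handle whose last Release triggers a shutdown hook, which is roughly the relationship an RPC client reference has with the server object backing it.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// bigCursor stands in for the memory-heavy server-side struct.
type bigCursor struct {
	rows []string
}

func (c *bigCursor) shutdown() {
	// Drop the large buffer so the GC can reclaim it once the
	// last capability reference has been released.
	c.rows = nil
	fmt.Println("cursor shut down")
}

// capability is a toy reference-counted handle, analogous to a client
// reference to a capability: AddRef hands out another reference,
// Release drops one, and the underlying object is shut down when the
// count reaches zero.
type capability struct {
	refs   *int32
	onZero func()
}

func newCapability(onZero func()) capability {
	n := int32(1)
	return capability{refs: &n, onZero: onZero}
}

func (c capability) AddRef() capability {
	atomic.AddInt32(c.refs, 1)
	return c
}

func (c capability) Release() {
	if atomic.AddInt32(c.refs, -1) == 0 {
		c.onZero()
	}
}

func main() {
	cur := &bigCursor{rows: make([]string, 1<<20)}
	cap1 := newCapability(cur.shutdown)
	cap2 := cap1.AddRef() // e.g. a second holder of the capability

	cap1.Release()               // first holder done; cursor still alive
	fmt.Println(cur.rows != nil) // true
	cap2.Release()               // last reference gone; shutdown runs
	fmt.Println(cur.rows == nil) // true
}
```

The point of the analogy: relying on Go finalizers alone cannot work here, because the RPC layer itself must hold a reference to the server struct for as long as the remote side might still send calls — so the release has to be an explicit protocol-level event, not garbage collection.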