DefaultEndpoint future listener recycle loses command context on requeue failures #734
Thanks a lot for reporting the issue. This error happens because of the introduced pooling of future callbacks. What happens here is that we resubmit commands to the endpoint. The endpoint is already closed, which fails the command submission. The exception is caught, but we lose the commands because the future callback was recycled, so command completion runs into an NPE. The fix is quite simple: we need to retain the sent command/commands and use these to perform completion.
Lettuce now retains the actual sent commands during exceptional completion. Failures on requeue can occur if the connection is closed or queue bounds are exceeded. Previously, we lost the command context because the listener was recycled; hence, command completion ran into NullPointerExceptions.
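A minimal, self-contained sketch of the retain-before-recycle pattern described above; the names (`PooledListener`, `requeue`, `recycle`) are illustrative and not the actual lettuce internals:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the fix: a pooled listener must capture the sent command in a
// local reference before recycling nulls out its fields.
public class RetainOnRecycleSketch {

    static class PooledListener {
        CompletableFuture<String> sentCommand; // nulled out when recycled

        void operationComplete(boolean writeSucceeded) {
            // Retain a local reference BEFORE recycling clears the field.
            CompletableFuture<String> command = sentCommand;
            recycle();

            if (!writeSucceeded) {
                try {
                    requeue(command); // may fail: endpoint closed, queue bounds exceeded
                } catch (IllegalStateException cause) {
                    // Reading this.sentCommand here would be the old NPE;
                    // the retained local reference completes the command safely.
                    command.completeExceptionally(cause);
                }
            }
        }

        void recycle() {
            sentCommand = null; // listener returns to the pool
        }

        void requeue(CompletableFuture<String> command) {
            throw new IllegalStateException("Endpoint is closed");
        }
    }

    public static void main(String[] args) {
        PooledListener listener = new PooledListener();
        CompletableFuture<String> command = new CompletableFuture<>();
        listener.sentCommand = command;

        listener.operationComplete(false); // simulate a failed write

        System.out.println("completed exceptionally: " + command.isCompletedExceptionally()); // true
    }
}
```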
That's fixed now; snapshot builds are available.
Hi Mark, thanks a lot for fixing that part of the code. I'd like to reopen the issue because, unfortunately, we still see it happen and I am not able to reproduce it. One important thing: we want to control reconnection on our own, so we have disabled automatic reconnection. My hypothesis was the following:
Actually, after reading the netty code, I think this hypothesis was wrong, at least as far as I understood the code.
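For reference, a minimal sketch of disabling automatic reconnection in Lettuce, assuming the `ClientOptions` `autoReconnect` setting is what the comment above refers to:

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

public class ManualReconnect {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");
        // Disable auto-reconnect so the application controls reconnection itself.
        client.setOptions(ClientOptions.builder().autoReconnect(false).build());
    }
}
```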
All netty channel interaction that happens from inside netty is single-threaded, meaning there is no multi-threaded aspect when working inside of a ChannelHandler.
From a disconnect perspective, I think At-Most-Once mode makes a difference here. This ticket was addressing At-Least-Once mode.
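To illustrate the single-threaded contract, here is a minimal Netty handler sketch (illustrative, not lettuce code); all callbacks for one channel run on that channel's single EventLoop thread:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Every callback of a given Channel runs on that Channel's EventLoop thread,
// so handler state needs no synchronization across callbacks.
class SingleThreadedContract extends ChannelInboundHandlerAdapter {

    private int reads; // safe without volatile/locks: only the EventLoop touches it

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        assert ctx.channel().eventLoop().inEventLoop();
        reads++;
        ctx.fireChannelRead(msg);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        // Runs on the same thread as channelRead above.
        assert ctx.channel().eventLoop().inEventLoop();
        ctx.fireChannelInactive();
    }
}
```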
Hey Mark, I'm sorry, the bug was in my code. Thanks for the help, and apologies for bothering you!
Glad to hear you were able to figure it out. Happy to help.
Hi Mark, you probably already hate me, but I have another issue for you. :)
I'm using snapshots of version 5.0.3.
I noticed that when the connection is closed while commands are being written to the channel, some of these commands are not completed exceptionally.
I wrote a simple piece of code to reproduce it:
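The original snippet is not preserved here; a hypothetical reproducer along those lines, assuming Lettuce 5's async API and a local Redis, might look like this:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Fire a large burst of async commands, close the connection mid-write,
// then check whether every future completes.
public class CloseDuringWriteRepro {

    public static void main(String[] args) throws Exception {
        RedisClient client = RedisClient.create("redis://localhost");
        StatefulRedisConnection<String, String> connection = client.connect();

        int count = 100_000;
        AtomicInteger failed = new AtomicInteger();
        List<RedisFuture<String>> futures = new ArrayList<>(count);

        for (int i = 0; i < count; i++) {
            RedisFuture<String> future = connection.async().set("key-" + i, "value");
            future.exceptionally(t -> {
                failed.incrementAndGet();
                return null;
            });
            futures.add(future);
        }

        connection.close(); // close while commands are still being written

        for (RedisFuture<String> future : futures) {
            // With the bug, some futures never complete and this loop hangs.
            future.await(10, TimeUnit.SECONDS);
        }
        System.out.println("completed exceptionally: " + failed.get());
        client.shutdown();
    }
}
```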
When I execute it with e.g. 100,000 commands, it freezes because some commands are never completed. The interesting part is that the same number of commands is always completed exceptionally (e.g., 16384 for 100k).
Here is my guess about what happened: when CommandHandler.channelInactive is called, not all written commands have been added to the stack yet, i.e., not all AddToStack listeners have been processed. So, during this call: https://github.com/lettuce-io/lettuce-core/blob/5ef6722c2cf1e56edd328ffebf57e791f3ca1350/src/main/java/io/lettuce/core/protocol/CommandHandler.java#L303 some commands are not on the stack and then remain uncompleted.

I'm not sure how to fix it yet; I'll try to create a PR during the weekend. Any suggestions are welcome.
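To make the suspected sequence concrete, here is a self-contained simulation of the race (illustrative only, not the actual lettuce source):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.CompletableFuture;

// channelInactive drains the stack, but a command whose AddToStack listener
// has not run yet is not on the stack and is therefore never completed.
public class StackRaceSketch {

    public static void main(String[] args) {
        Deque<CompletableFuture<String>> stack = new ArrayDeque<>();

        CompletableFuture<String> onStack = new CompletableFuture<>();   // write listener already ran
        CompletableFuture<String> inFlight = new CompletableFuture<>();  // listener still pending

        stack.add(onStack); // AddToStack ran for this command only

        // channelInactive(): exceptionally complete everything on the stack.
        while (!stack.isEmpty()) {
            stack.poll().completeExceptionally(new IllegalStateException("Connection closed"));
        }

        System.out.println("onStack completed:  " + onStack.isDone());  // true
        System.out.println("inFlight completed: " + inFlight.isDone()); // false -> caller waits forever
    }
}
```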
Thanks!