IndexOutOfBoundsException in CommandHandler.channelRead #1078
Comments
Observing the output, you are working with large chunks of data (8 MB increments). What worries me is that the stack size remains constant, as this can be an indicator that commands are not decoded properly. Without decoding commands, the aggregation buffer keeps growing. Can you provide a reproducer? |
I will try to get a simple reproducer; however, the data size is not huge, it is generated JSON of a couple of hundred KB. There is also a capacity normalization happening in the buffer that rounds to the nearest power of 2 or so, so it takes many requests to reach the next allocation. Also, is there a possibility that a response does not get decoded if the command timed out? I saw await being used. |
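As an aside, a rough sketch of why resizes are so infrequent with payloads of a few hundred KB: the exact growth policy is an assumption here (loosely modeled on Netty's power-of-two normalization in AbstractByteBufAllocator#calculateNewCapacity), but the point is that capacity grows in large jumps.

```java
public class BufferGrowthSketch {

    // Assumed growth policy for illustration only: double a starting capacity
    // until the requested size fits. Netty's real normalization lives in
    // AbstractByteBufAllocator#calculateNewCapacity.
    static int nextPowerOfTwoCapacity(int minCapacity) {
        int capacity = 64;
        while (capacity < minCapacity) {
            capacity <<= 1; // double until the requested size fits
        }
        return capacity;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwoCapacity(300_000)); // 524288  (512 KiB)
        System.out.println(nextPowerOfTwoCapacity(600_000)); // 1048576 (1 MiB)
    }
}
```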
I think I understand now why it never goes down: the input buffer can hold a partial payload for a bulk/byte response. For example, my payload is 1845 bytes, so the decoder reads the length (size of the message) and expects 1845 bytes, but the buffer only has 1200 bytes, so decode simply returns false. The next time it tries to decode, another chunk has been added to the buffer that is also incomplete, and this continues until there are no calls to Redis for a while. Could a possible solution be a circular buffer, or a check for cleanup when the buffer goes for a resize? |
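A minimal sketch of that partial-payload situation, using Netty's ByteBuf API (this is not Lettuce's actual decoder; header parsing is omitted and the 1845/1200 byte figures are taken from the comment above):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class PartialBulkDecodeSketch {

    // Returns true only when the complete bulk payload is available; otherwise
    // it consumes nothing and waits for the next network chunk.
    static boolean tryDecodeBulk(ByteBuf aggregation) {
        int declaredLength = 1845; // length announced by the bulk reply header
        if (aggregation.readableBytes() < declaredLength) {
            return false; // partial payload: bytes stay buffered, buffer keeps growing
        }
        aggregation.skipBytes(declaredLength); // complete payload: consume it
        return true;
    }

    public static void main(String[] args) {
        ByteBuf aggregation = Unpooled.buffer();
        aggregation.writeBytes(new byte[1200]);         // first chunk: 1200 of 1845 bytes
        System.out.println(tryDecodeBulk(aggregation)); // false -> 1200 bytes remain buffered
        aggregation.writeBytes(new byte[645]);          // remainder arrives
        System.out.println(tryDecodeBulk(aggregation)); // true  -> payload consumed
        aggregation.release();
    }
}
```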
In 5.2.0 we've introduced a new method that discards read bytes even if the decode was unsuccessful. Care to upgrade to the 5.2.0 snapshots? See #906. |
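A hedged sketch of the general Netty mechanism behind such a change (tryDecodeOneResponse is a hypothetical stand-in for the real protocol decoder; see #906 for the actual implementation):

```java
import io.netty.buffer.ByteBuf;

public class DiscardReadBytesSketch {

    // Hypothetical decode step: a successful decode advances the reader index
    // past one complete response.
    static boolean tryDecodeOneResponse(ByteBuf aggregation) {
        // ... real decoding omitted ...
        return false;
    }

    // After the decode loop, shift any remaining (possibly partial) bytes to
    // the front of the buffer so the writer index stops creeping toward
    // maxCapacity, even when the last decode attempt was unsuccessful.
    static void decodeAndCompact(ByteBuf aggregation) {
        while (tryDecodeOneResponse(aggregation)) {
            // each successful decode moves aggregation.readerIndex() forward
        }
        if (aggregation.readerIndex() > 0) {
            aggregation.discardReadBytes(); // reclaim the already-decoded prefix
        }
    }
}
```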
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue. |
Bug Report
When Lettuce is used with the Spring Boot default cache type under concurrent load, it throws:
Redis exception; nested exception is io.lettuce.core.RedisException: java.lang.IndexOutOfBoundsException: writerIndex(2147483642) + minWritableBytes(1842) exceeds maxCapacity(2147483647): PooledUnsafeDirectByteBuf(ridx: 2146661676, widx: 2147483642, cap: 2147483647)
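For clarity, the arithmetic behind the message, with the values taken directly from the exception:

```java
// The pending write would push the buffer past Integer.MAX_VALUE, the hard cap
// for a Netty ByteBuf capacity.
long writerIndex      = 2_147_483_642L; // current write position
long minWritableBytes =         1_842L; // bytes needed for the next response chunk
long maxCapacity      = 2_147_483_647L; // Integer.MAX_VALUE
// 2_147_483_642 + 1_842 = 2_147_485_484 > 2_147_483_647 -> IndexOutOfBoundsException
boolean exceeds = writerIndex + minWritableBytes > maxCapacity; // true
```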
Current Behavior
With a bit of debugging I realized that this is a buffer created in the CommandHandler object, which seems to live as long as the connection does, and its size keeps increasing when there is a response from Redis (in io.lettuce.core.protocol.CommandHandler#channelRead). I also monitored the stack size, which stays within 200.
Dumping the writer index:
Let me know if you need any more info, or if you can provide some pointers to resolve/fix.
Stack trace
Expected behavior/code
Environment
Possible Solution
Additional context