IndexOutOfBoundsException in CommandHandler.channelRead #1078

Closed
dileepbapat opened this issue Jul 11, 2019 · 5 comments
Labels
type: bug A general bug

Comments

@dileepbapat

Bug Report

When used with the Spring Boot default cache type, under concurrent load it throws:

Redis exception; nested exception is io.lettuce.core.RedisException: java.lang.IndexOutOfBoundsException: writerIndex(2147483642) + minWritableBytes(1842) exceeds maxCapacity(2147483647): PooledUnsafeDirectByteBuf(ridx: 2146661676, widx: 2147483642, cap: 2147483647)"

Current Behavior

With a bit of debugging I realized that the buffer is created in the CommandHandler object, which lives as long as the connection does, and that it keeps growing whenever a response arrives from Redis (in io.lettuce.core.protocol.CommandHandler#channelRead). I also monitored the command stack size, which stays within 200.

Dumping the writer index:

writerIndex:8387482
writerIndex:16777126
writerIndex:20971118
writerIndex:25165378
writerIndex:29356774
writerIndex:33554002
writerIndex:37712882
writerIndex:41942218
writerIndex:46136326
writerIndex:50330434
writerIndex:54525898
writerIndex:58720006
...
writerIndex:264241152
...

Let me know if you need any more info, or if you can provide some pointers to resolve/fix.
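
The rough shape of the load is something like the following hypothetical sketch (not a verified reproducer; the host, key name and payload size are made up for illustration):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical load pattern only: many concurrent reads of a few-hundred-KB
// value over a single Lettuce connection.
public class ConcurrentReadLoad {

    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();

        // Store a value roughly the size of the generated JSON responses.
        char[] chars = new char[300_000];
        Arrays.fill(chars, 'x');
        connection.sync().set("cache:sample", new String(chars));

        ExecutorService pool = Executors.newFixedThreadPool(32);
        for (int i = 0; i < 100_000; i++) {
            pool.submit(() -> connection.sync().get("cache:sample"));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        connection.close();
        client.shutdown();
    }
}
```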

Stack trace
java.lang.IndexOutOfBoundsException: writerIndex(2147483644) + minWritableBytes(1842) exceeds maxCapacity(2147483647): PooledUnsafeDirectByteBuf(ridx: 2147269192, widx: 2147483644, cap: 2147483647)
	at io.netty.buffer.AbstractByteBuf.ensureWritable0(AbstractByteBuf.java:276)
	at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:265)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1079)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1072)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1062)
	at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:506)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808)
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:417)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:317)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

Expected behavior/code

Environment

  • Lettuce version(s): 5.0.3.RELEASE, 5.1.7.RELEASE
  • Redis version: 5.0.3, 5.0.5

Possible Solution

Additional context

@dileepbapat dileepbapat added the type: bug A general bug label Jul 11, 2019
@dileepbapat dileepbapat changed the title io.lettuce.core.RedisException: java.lang.IndexOutOfBoundsException: writerIndex(2147483642) + minWritableBytes(1842) exceeds maxCapacity(2147483647): PooledUnsafeDirectByteBuf(ridx: 2146661676, widx: 2147483642, cap: 2147483647)" io.lettuce.core.RedisException: java.lang.IndexOutOfBoundsException: writerIndex(2147483642) + minWritableBytes(1842) exceeds maxCapacity(2147483647) Jul 11, 2019
@mp911de mp911de changed the title io.lettuce.core.RedisException: java.lang.IndexOutOfBoundsException: writerIndex(2147483642) + minWritableBytes(1842) exceeds maxCapacity(2147483647) ndexOutOfBoundsException in CommandHandler.channelRead Jul 11, 2019
@mp911de mp911de changed the title ndexOutOfBoundsException in CommandHandler.channelRead IndexOutOfBoundsException in CommandHandler.channelRead Jul 11, 2019
@mp911de
Collaborator

mp911de commented Jul 11, 2019

CommandHandler keeps an aggregation buffer that is used to aggregate multiple response chunks when reading from the transport channel. The buffer gets consolidated after decoding commands:

https://github.com/lettuce-io/lettuce-core/blob/5.1.x/src/main/java/io/lettuce/core/protocol/CommandHandler.java#L614-L616

From the output it looks like you are working with large chunks of data (8 MB increments). What worries me is that the stack size remains constant, as this can be an indicator that commands are not decoded properly. Without decoding commands, the aggregation buffer keeps growing.
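
In essence, the pattern looks like the following simplified sketch (not the actual CommandHandler code; decode() stands in for the protocol decoder):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Simplified sketch of the aggregation-buffer pattern: incoming chunks are
// appended to a long-lived buffer, decode() consumes complete replies, and
// discardReadBytes() reclaims the consumed bytes afterwards.
public class AggregatingHandler extends ChannelInboundHandlerAdapter {

    private ByteBuf aggregation;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        aggregation = ctx.alloc().buffer();
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        aggregation.release();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf chunk = (ByteBuf) msg;
        try {
            aggregation.writeBytes(chunk);      // writerIndex advances per chunk
            while (decode(aggregation)) {
                // readerIndex advances for every fully decoded reply
            }
        } finally {
            chunk.release();
        }
        // Consolidation: without this step the buffer grows with every read.
        aggregation.discardReadBytes();
    }

    // Placeholder decoder: returns true if one complete reply was consumed,
    // false if only a partial reply is available.
    private boolean decode(ByteBuf buffer) {
        return false;
    }
}
```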

Can you provide a reproducer?

@dileepbapat
Author

I will try to get a simple reproducer. The data size is not huge, though; it is generated JSON of a couple of hundred KB. However, there is a normalizing allocation happening in the buffer which rounds up to the nearest power of 2 or so, so it takes many requests before the next allocation happens.

Also, is there a possibility that a response doesn't get decoded if the command timed out? I saw await being used.

@dileepbapat
Author

dileepbapat commented Jul 11, 2019

I think I understand now why it never reaches buffer.discardReadBytes();

The input buffer can hold a partial payload for a bulk/byte response. For example, my payload is 1845 bytes, so the decoder reads the length (size of the message) and expects 1845 bytes; if the buffer only has 1200 bytes, decode simply returns false. The next time it tries to decode, another chunk has been appended to the buffer that is also incomplete, and this continues until there are no calls to Redis for a while.

A possible solution could be a circular buffer, or checking for cleanup when the buffer goes for a resize?
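
The growth pattern can be simulated in isolation with Netty's ByteBuf. This is a rough sketch under assumptions: it uses a simple length-prefixed frame format rather than the real Redis protocol, and it models consolidation as happening only when no partial frame is left over.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Simulation: frames of 4-byte length prefix + 1845-byte payload arrive in
// 1200-byte chunks, so the tail of the aggregation buffer almost always holds
// a partial frame. If discardReadBytes() only runs when decoding left nothing
// partial behind, the writerIndex keeps climbing toward the total bytes received.
public class BufferGrowthSimulation {

    public static void main(String[] args) {
        ByteBuf source = Unpooled.buffer();
        byte[] payload = new byte[1845];
        for (int i = 0; i < 1000; i++) {
            source.writeInt(payload.length);
            source.writeBytes(payload);
        }

        ByteBuf aggregation = Unpooled.buffer();
        byte[] chunk = new byte[1200];          // simulated network read, smaller than a frame

        while (source.readableBytes() >= chunk.length) {
            source.readBytes(chunk);
            aggregation.writeBytes(chunk);      // channelRead appends the chunk

            if (decode(aggregation)) {
                // Consolidation only when nothing partial remains, which
                // almost never happens while traffic keeps flowing.
                aggregation.discardReadBytes();
            }
        }
        // readerIndex is close to writerIndex, yet none of it was reclaimed.
        System.out.println("writerIndex:" + aggregation.writerIndex());
        System.out.println("readerIndex:" + aggregation.readerIndex());

        source.release();
        aggregation.release();
    }

    // Skips complete frames; returns true only if no partial frame remains.
    private static boolean decode(ByteBuf in) {
        while (in.readableBytes() >= 4) {
            int length = in.getInt(in.readerIndex());
            if (in.readableBytes() < 4 + length) {
                return false;                   // partial payload: decoding gives up here
            }
            in.skipBytes(4 + length);
        }
        return in.readableBytes() == 0;
    }
}
```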

@mp911de
Collaborator

mp911de commented Jul 12, 2019

In 5.2.0 we've introduced a new method that discards read bytes even if the decode was unsuccessful. Care to upgrade to the 5.2.0 snapshots? See #906.

@mp911de
Collaborator

mp911de commented Jul 19, 2019

Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.

@mp911de mp911de closed this as completed Jul 19, 2019