
RedisClient.shutdown hangs because event loops terminate before connections are closed #998

Closed
Poorva17 opened this issue Mar 11, 2019 · 1 comment
Labels: type: bug (A general bug)
Milestone: 5.1.6

Comments

Poorva17 (Contributor)

Bug Report

Current Behavior

RedisClient.shutdown hangs after upgrading from 5.0.5 to 5.1.5.

Input Code

val client = RedisClient.create()

//create multiple redis connections
//CRUD operations
....

client.shutdown()
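
For reference, a roughly equivalent sketch of the reproduction in plain Java (host, keys and commands are placeholders; the essential part is opening several connections, running some commands and then calling shutdown()):

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class ShutdownRepro {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // create multiple Redis connections
        StatefulRedisConnection<String, String> connection1 = client.connect();
        StatefulRedisConnection<String, String> connection2 = client.connect();

        // CRUD operations
        connection1.sync().set("key", "value");
        connection2.sync().get("key");

        // hangs on 5.1.5, completes on 5.0.5
        client.shutdown();
    }
}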

Expected behavior/code

RedisClient should shut down gracefully.

Environment

  • Lettuce version(s): 5.1.5.RELEASE

Possible Solution

Comparing the shutdown implementation in 5.0.5 and 5.1.5, we see the following change in 5.1.5:

while (!closeableResources.isEmpty()) {
    Closeable closeableResource = closeableResources.iterator().next();

    // Additional check has been added for AsyncCloseable, which returns a Future.
    if (closeableResource instanceof AsyncCloseable) {
        closeFutures.add(((AsyncCloseable) closeableResource).closeAsync());
    } else {
        try {
            closeableResource.close();
        } catch (Exception e) {
            logger.debug("Exception on Close: " + e.getMessage(), e);
        }
    }
    closeableResources.remove(closeableResource);
}

// Future for closing channels
closeFutures.add(toCompletableFuture(channels.close()));

Shutdown creates multiple futures, e.g. closeableResources.closeAsync, channels.close and clientResources.shutdown. These futures run concurrently, which means that in some cases an event loop can be shut down before the channels or other resources have been closed, which causes the exception below.
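
A minimal illustration of that failure mode with plain JDK executors (not Lettuce internals): once an executor has terminated, any attempt to run a listener-notification task on it is rejected, which matches the RejectedExecutionException in the stack trace below.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class TerminatedExecutorDemo {

    public static void main(String[] args) {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();

        // the "event loop" is shut down while close futures are still in flight
        eventLoop.shutdownNow();

        try {
            // completing a close future tries to notify its listeners on the executor
            eventLoop.execute(() -> System.out.println("notify close listener"));
        } catch (RejectedExecutionException e) {
            System.out.println("Listener notification rejected: " + e);
        }
    }
}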

Additional context

We are getting the following exception:

2019-03-11T12:28:55.299Z ERROR:slf4j  (DefaultPromise.java 770) - Failed to submit a listener notification task. Event loop shut down? [Stacktrace] java.util.concurrent.RejectedExecutionException: event executor terminated
	at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:845)
	at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:328)
	at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:321)
	at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:756)
	at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:768)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:432)
	at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:94)
	at io.netty.channel.group.DefaultChannelGroupFuture.setSuccess0(DefaultChannelGroupFuture.java:205)
	at io.netty.channel.group.DefaultChannelGroupFuture.access$400(DefaultChannelGroupFuture.java:40)
	at io.netty.channel.group.DefaultChannelGroupFuture$1.operationComplete(DefaultChannelGroupFuture.java:74)
	at io.netty.channel.group.DefaultChannelGroupFuture$1.operationComplete(DefaultChannelGroupFuture.java:47)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:485)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:103)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:982)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:699)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:615)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1350)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.access$1100(AbstractChannelHandlerContext.java:38)
	at io.netty.channel.AbstractChannelHandlerContext$13.run(AbstractChannelHandlerContext.java:613)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:495)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:834)

This works fine with Lettuce 5.0.5.RELEASE and lower versions.

@mp911de mp911de added the type: bug A general bug label Mar 12, 2019
@mp911de mp911de added this to the 5.1.6 milestone Mar 12, 2019
mp911de (Collaborator) commented Mar 12, 2019

Thanks a lot for tracking down this issue. The solution would be to first await closing of the resources and then shut down the event loops.
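
A hypothetical sketch of that ordering (names are illustrative, not the actual Lettuce internals): complete all connection/channel close futures first, and only then trigger the event-loop shutdown, instead of starting both concurrently.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

class OrderedShutdownSketch {

    static CompletableFuture<Void> closeThenShutdown(List<CompletableFuture<Void>> closeFutures,
            Supplier<CompletableFuture<Void>> shutdownEventLoops) {

        // wait for every connection/channel close future to complete
        CompletableFuture<Void> allClosed = CompletableFuture
                .allOf(closeFutures.toArray(new CompletableFuture[0]));

        // chain instead of racing: the event loops are only asked to terminate
        // after all close futures have completed
        return allClosed.thenCompose(ignored -> shutdownEventLoops.get());
    }
}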

Poorva17 added a commit to tmtsoftware/csw that referenced this issue Mar 12, 2019
@mp911de mp911de changed the title from "RedisClient.shutdown hangs after upgrade to 5.1.5" to "RedisClient.shutdown hangs because event loops terminate before connections are closed" Mar 14, 2019
mp911de pushed a commit that referenced this issue Mar 14, 2019
ClientResources are released after awaiting channel close completion to avoid event loop termination before the channels had a chance to get closed.

Co-authored-by: Pritam Kadam <[email protected]>
Original pull request: #1000.
mp911de added a commit that referenced this issue Mar 14, 2019
Add author tags. Reformat. Add unit tests.

Original pull request: #1000
@mp911de mp911de closed this as completed Mar 14, 2019