This repository has been archived by the owner on May 22, 2019. It is now read-only.

Control the thread pool's queue size #86

Open
KMax opened this issue Jul 12, 2016 · 3 comments

Comments

@KMax
Contributor

KMax commented Jul 12, 2016

If we look at NettyWampClientConnectorProvider.createScheduler and go deeper into the implementation, we find that it uses SingleThreadEventExecutor. SingleThreadEventExecutor uses new LinkedBlockingQueue<Runnable>() as the queue for tasks submitted for execution.

This means the thread pool used to publish messages has an unbounded queue, so if the client publishes too many messages it may at some point die with an OutOfMemoryError.
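For illustration, here is a minimal sketch (plain JDK, not the jawampa API; the queue capacity of 10,000 and the single worker thread are assumptions) of the difference between the unbounded default and a bounded pool:

```java
import java.util.concurrent.*;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        // The scheduler effectively uses new LinkedBlockingQueue<Runnable>(),
        // which grows without limit. A bounded pool makes overflow explicit:
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                1, 1,                                       // single worker, like SingleThreadEventExecutor
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(10_000),  // assumed capacity: at most 10k queued tasks
                new ThreadPoolExecutor.AbortPolicy());      // reject instead of growing the heap

        try {
            for (int i = 0; i < 100_000; i++) {
                bounded.execute(() -> {
                    try { Thread.sleep(1); } catch (InterruptedException ignored) { }
                });
            }
        } catch (RejectedExecutionException e) {
            // The submitter gets an immediate, visible failure instead of the
            // process slowly eating heap until an OutOfMemoryError.
            System.err.println("Queue full: " + e);
        } finally {
            bounded.shutdownNow();
        }
    }
}
```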

I tried to override the NettyWampClientConnectorProvider.createScheduler method to use a thread pool with a fixed-size queue, but for an unknown reason the client then wasn't even able to connect to the broker.

Are there any recommendations on how to limit the size of the queue?

@KMax KMax changed the title from "Control the queue size" to "Control the thread pool's queue size" on Jul 12, 2016
@the20login

I don't think this is a jawampa issue; it looks more like a netty API problem. I mean, jawampa just uses one of the default EventLoopGroup implementations, so if you need another implementation you should ask in the netty repo, not jawampa.

Also, did you really hit an OOM, or is this a theoretical bug? As far as I know, EventLoopGroup worker threads just listen on a selector, so their number correlates mostly with the connection count, not the message count. I mean, if you have only a single TCP connection (as in the jawampa client), you don't really need more than one thread on the selector. If you have a lot of heavy messages, you could use an intermediate processing thread pool with whatever properties you need.
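For illustration, a minimal sketch of such an intermediate pool (plain JDK ThreadPoolExecutor; the sizes and the publish hand-off are assumptions, not jawampa API):

```java
import java.util.concurrent.*;

public class IntermediatePool {
    public static void main(String[] args) {
        // Bounded queue + CallerRunsPolicy: when the queue is full, the producer
        // thread runs the task itself, which naturally throttles it (backpressure).
        ExecutorService processing = new ThreadPoolExecutor(
                2, 2,                                    // assumed: two processing threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1_000), // assumed: bounded hand-off queue
                new ThreadPoolExecutor.CallerRunsPolicy());

        processing.execute(() -> {
            // ... heavy message processing here, then hand the result
            // to the WAMP client, e.g. client.publish(topic, payload);
        });

        processing.shutdown();
    }
}
```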

Please correct me if I am wrong.

@Matthias247
Owner

Unfortunately I think WAMP itself is not the best protocol for such considerations, because there is no application-level flow control, and due to some guarantees that WAMP implementations have to make (don't lose or reorder messages), the best thing we can do is queue messages at several places.

From my point of view, what could be monitored is the size of the outgoing message queue (implemented in QueuingConnectionController). But the only sensible reaction would be to close the connection once the queue size gets too big. For incoming messages there are currently no means to provide backpressure. It could work by changing from a push to a pull model for receiving messages from the networking abstraction, but that would be a bigger change.
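Sketched roughly, the close-on-overflow idea could look like this (the real QueuingConnectionController differs; the class and method names here are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical: a send queue that closes the connection instead of growing
// without bound. Dropping individual messages is not an option, since WAMP
// must not lose or reorder messages.
class MonitoredSendQueue {
    private static final int MAX_QUEUED = 50_000;  // assumed limit
    private final Queue<Object> outgoing = new ArrayDeque<>();

    synchronized boolean offer(Object message) {
        if (outgoing.size() >= MAX_QUEUED) {
            closeConnection();  // the only sensible reaction on overflow
            return false;
        }
        return outgoing.add(message);
    }

    private void closeConnection() {
        // tear down the underlying channel / transport here
    }
}
```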

@KMax
Contributor Author

KMax commented Jul 15, 2016

@the20login Yes, you're right, this is more a question for Netty than for jawampa. It wasn't a theoretical question: I was actually analysing the memory consumption of an app and found this.

@Matthias247, @the20login I already use a separate thread pool to process messages before publishing them to the broker, but since the queue of jawampa's thread pool is unbounded, I will probably need to add an additional fixed-size queue in front of it to control memory consumption (see the sketch below).
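For illustration, such an extra queue could be a simple blocking buffer in front of the client (the capacity and the publish call are assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical bounded buffer in front of jawampa's unbounded scheduler:
// producers block in put() when the buffer is full, which caps memory use.
class BoundedPublishBuffer {
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(5_000); // assumed capacity

    void enqueue(String payload) throws InterruptedException {
        buffer.put(payload);  // blocks when full => bounded memory on the producer side
    }

    void startDrainer() {
        Thread drainer = new Thread(() -> {
            try {
                while (true) {
                    String payload = buffer.take();
                    // hand off to jawampa here, e.g. client.publish(topic, payload);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "publish-drainer");
        drainer.setDaemon(true);
        drainer.start();
    }
}
```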

@Matthias247 I also tried to override the NettyWampClientConnectorProvider.createScheduler method to use a ThreadPoolExecutor with a fixed-size queue, but it didn't work for some reason: the client couldn't connect to the broker. I haven't found the cause yet; maybe I need more time to understand it.
