Observable.cache() creates indefinite number of Threads with Schedulers.io() #2191
You seem to create 100 caches and wait for them asynchronously in a tight loop, and the observable elements wait on blocking gets. This delays their actions, and the IO scheduler has less opportunity to reuse its pool. Schedulers.from is a weaker construct and allows thread hopping, so it is more likely that an idle worker can pick up more work. @zsxwing can you check if we leak workers because of Async?
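As a rough illustration of that reuse mechanism (a minimal sketch, not code from this issue; the class name is made up): an io() worker hands its cached thread back for reuse only once the worker is unsubscribed, so anything that keeps workers occupied while blocking pushes the scheduler toward creating fresh threads.

import rx.Scheduler;
import rx.schedulers.Schedulers;

// Minimal sketch of io() worker reuse in RxJava 1.x; illustrative only.
public class IoWorkerReuseSketch {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            Scheduler.Worker worker = Schedulers.io().createWorker();
            worker.schedule(() -> System.out.println(Thread.currentThread()));
            Thread.sleep(50);     // let the scheduled action finish
            worker.unsubscribe(); // releases the cached thread so the next worker can reuse it
        }
    }
}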
Hi @akarnokd, I appreciate the quick response. I can certainly agree that this example is a bit contrived; I've been trying to reproduce an issue I found in real code in a much simpler form, and this is the best I could do on short notice. It would certainly be great to hear back on the potential leak. By the way, in case it helps, the issue seems to be exclusive to Schedulers.io(). For example, if I use Schedulers.computation() I get thread re-use; sample result of the program (last few entries, like before):
Schedulers.computation() uses a fixed set of worker threads in a round-robin fashion. I saw you commented out a sleep in the main loop; could you place it after the thread.start call, with 10 ms, to give that thread some chance?
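A quick way to see that round-robin behavior is to schedule a batch of items on Schedulers.computation() and watch the printed thread names cycle through a small fixed pool. This is a hedged sketch, not the reporter's program; the class name and loop count are illustrative.

import rx.Observable;
import rx.schedulers.Schedulers;

// Illustrative sketch: the printed threads cycle through a bounded computation pool.
public class ComputationRoundRobinSketch {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 20; i++) {
            Observable.just(i)
                    .subscribeOn(Schedulers.computation())
                    .subscribe(v -> System.out.println(v + " on " + Thread.currentThread()));
        }
        Thread.sleep(500); // give the asynchronous prints time to complete before exiting
    }
}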
I attempted to use the computation scheduler to test the theory that perhaps a leak at a higher level would cause the program to block. The program worked as expected, though, and had a similar outcome to the cached-thread executor scheduler, showing obvious signs of thread re-use. I tried the sleep with the Schedulers.io() setup, but I get the same result as before: no obvious signs of thread re-use. I tried increasing from 10 ms to 100-1000 ms and still no luck. Very similar outcome to before:
A bug in
Hi @zsxwing, is the fix ready for testing?
@michelbetancourt Sure, you can test PR #2238 yourself. I think it will be fixed in the next release, once @benjchristensen merges it.
I've been looking at this problem as well. I found it in both OnSubscribeCache and OperatorMulticast so far. I fixed it by adding unsubscribe logic to the Subscribers.from/create factories, and by using Subscribers.from on the Subject in OperatorMulticast instead of the anonymous inner class (which then corrects the OnSubscribeCache case as well). 7c45286 is the right commit; the previous one was missing an import statement.
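As a rough sketch of the Subscribers.from idea (illustrative only; this is not the actual commit, and the class and variable names are assumptions): a Subject is itself an Observer, so it can be handed to Subscribers.from rather than wrapped in an anonymous Subscriber subclass.

import rx.Observable;
import rx.Subscriber;
import rx.observers.Subscribers;
import rx.subjects.ReplaySubject;

// Illustrative sketch only; not the RxJava patch itself.
public class SubjectDelegationSketch {
    public static void main(String[] args) {
        ReplaySubject<Integer> subject = ReplaySubject.create();

        // A Subject implements Observer, so Subscribers.from can wrap it directly
        // instead of an anonymous Subscriber that forwards each event by hand.
        Subscriber<Integer> delegate = Subscribers.from(subject);

        Observable.just(1, 2, 3).subscribe(delegate);

        // ReplaySubject replays the cached events to late subscribers.
        subject.subscribe(v -> System.out.println("replayed: " + v));
    }
}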
@duncani To fix OperatorMulticast, the relevant line is:
source.subscribe(subscription);
Could you also write a test and send a PR?
@zsxwing, I have confirmed that your fix addresses the original issue: use of the cache operator with Schedulers.io() leaking threads.
Fix for #2191 - OperatorMulticast fails to unsubscribe from source
Hi,
It would appear to me that there's an issue when using Observable.cache() with Schedulers.io(). The issue is that an indefinite number of threads are created instead of threads being re-used.
This can eventually lead to the following fatal scenario: "java.lang.OutOfMemoryError: Unable to create new native thread".
I did notice that when using Observable.cache() and Schedulers.from(Executors.newCachedThreadPool()), Rx is able to re-use threads.
Other than thread-caching configuration, it's not clear to me why these two schedulers produce such wildly different thread-caching results and behavior. I can only assume that this behavior is not intended; it would be great to hear whether that's the case.
Here's a sample program I've put together that demonstrates the two different results. It's a very simple example that creates multiple observables and ensures that they are subscribed to and that the result is read. I'm using SettableFuture to mock a lag between calls.
Result with Schedulers.from(Executors.newCachedThreadPool()) (last few items displayed, notice the re-use in threads):
Thread[pool-1-thread-18,5,main]
Thread[pool-1-thread-15,5,main]
Thread[pool-1-thread-20,5,main]
Thread[pool-1-thread-7,5,main]
Thread[pool-1-thread-6,5,main]
Thread[pool-1-thread-11,5,main]
Thread[pool-1-thread-17,5,main]
Thread[pool-1-thread-4,5,main]
Thread[pool-1-thread-21,5,main]
Thread[pool-1-thread-1,5,main]
Thread[pool-1-thread-18,5,main]
Thread[pool-1-thread-10,5,main]
Thread[pool-1-thread-7,5,main]
Thread[pool-1-thread-11,5,main]
Thread[pool-1-thread-20,5,main]
Result with Schedulers.io() (last few items displayed, notice that the number is incremental, no re-use):
Thread[RxCachedThreadScheduler-187,5,main]
Thread[RxCachedThreadScheduler-196,5,main]
Thread[RxCachedThreadScheduler-189,5,main]
Thread[RxCachedThreadScheduler-198,5,main]
Thread[RxCachedThreadScheduler-191,5,main]
Thread[RxCachedThreadScheduler-200,5,main]
Thread[RxCachedThreadScheduler-193,5,main]
Thread[RxCachedThreadScheduler-195,5,main]
Thread[RxCachedThreadScheduler-197,5,main]
Thread[RxCachedThreadScheduler-199,5,main]
public class RxIndefiniateThreads {
}
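A minimal sketch of the kind of program described above (the constant CALLS, the use of Observable.create for the blocking get, and the thread/variable names are illustrative assumptions; the original program may have used the rxjava-async-util helpers instead):

import com.google.common.util.concurrent.SettableFuture;
import rx.Observable;
import rx.schedulers.Schedulers;

// Illustrative reconstruction of the scenario, not the original source.
public class RxIndefiniteThreadsSketch {

    private static final int CALLS = 100; // "100 caches", per the discussion above

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < CALLS; i++) {
            // SettableFuture mocks a lag between subscribing and the value arriving.
            SettableFuture<String> future = SettableFuture.create();

            Observable<String> cached = Observable.<String>create(subscriber -> {
                try {
                    subscriber.onNext(future.get()); // blocking get, as described above
                    subscriber.onCompleted();
                } catch (Exception e) {
                    subscriber.onError(e);
                }
            })
            .subscribeOn(Schedulers.io()) // swap for Schedulers.from(Executors.newCachedThreadPool()) to compare
            .cache();

            cached.subscribe(v -> System.out.println(Thread.currentThread()));

            // Complete the future from another thread to release the blocking get.
            Thread thread = new Thread(() -> future.set("done"));
            thread.start();
            // Thread.sleep(10); // the sleep suggested earlier in the discussion

            cached.toBlocking().first(); // read the cached result before the next iteration
        }
    }
}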
dependencies:
com.google.guava:guava:18.0
io.reactivex:rxjava:1.0.3
io.reactivex:rxjava-async-util:0.21.0