ObserveOn Operator with Backpressure #835
Conversation
@headinthebox This relates to discussions you and I have had about eliminating implicit buffers to enable natural back-pressure unless operators such as … are used.
RxJava-pull-requests #756 FAILURE
@benjchristensen Will do! We might consider adding an explicit buffer operator to decouple producers and consumers, so that if people want they can get the old behavior of observeOn, with all the caveats about buffer bloat of course. In .NET you can do that with .ToEnumerable().ToObservable(), but that is quite a hack, and may not give you exactly what you want.
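A plain-Java sketch (hypothetical, not RxJava's API) of what such an explicit decoupling buffer would do: an unbounded queue between producer and consumer restores the old observeOn behavior, buffer bloat included.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of an explicit decoupling buffer: the producer is
// never blocked, so items pile up if the consumer is slow -- exactly the
// "buffer bloat" caveat mentioned above.
public class UnboundedHandoff<T> {
    private final LinkedBlockingQueue<T> queue = new LinkedBlockingQueue<>();

    // Producer side: never blocks.
    public void produce(T item) {
        queue.offer(item);
    }

    // Consumer side: takes items at its own pace.
    public T consume() throws InterruptedException {
        return queue.take();
    }

    // Visible backlog -- the unbounded growth back-pressure prevents.
    public int backlog() {
        return queue.size();
    }
}
```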
That test is non-deterministic and needs to be fixed.
I'm open to that. Are you thinking just an overload of …?
You are thinking of buffer that takes a scheduler then?
I was thinking so, but looking at the existing signature Observable<List<T>> buffer(int count), that would not line up well with Observable<T> buffer(Scheduler s). Otherwise we could have …. I don't think we have enough use cases to warrant it.
Yes, plus that …
I did some speed testing with a 1M range observed on newThread:

- This PR: …
- Semaphore + queue with permits (observeOn code is here: …): …
- Semaphore + queue with permits and batch drain of the queue (code: …); 1: 95 000 ops/s

Using batch drain appears to be more efficient when the queue size is > 16 on my machine. I think the reason for this is that permits are released in larger numbers, so producers less frequently need to wait for a 0 -> 1 permit transition. The two latter TransferQueue-based implementations fluctuate quite a bit between 600k and 1200k ops/s for some reason. (Tested on an i7 920 @ 2.66 GHz, Win7 x64, 6 GB DDR3 @ 1333 MHz, Java 7u51 x64.)

I tried to improve the performance of the 1-permit case via TransferQueue, but I couldn't get reliable data exchange, nor could I implement proper interruption.
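The benchmarked implementations themselves aren't shown above, but the "semaphore + queue with batch drain" idea can be sketched with plain java.util.concurrent (class and method names here are mine, not the benchmarked code):

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

// Sketch of "semaphore + queue with permits and batch drain": the
// producer acquires one permit per item (blocking when the consumer is
// behind), and the consumer drains the whole queue in one pass, then
// releases that many permits at once -- so producers rarely wait on a
// 0 -> 1 permit transition.
public class BatchDrainQueue<T> {
    private final Semaphore permits;
    private final BlockingQueue<T> queue;

    public BatchDrainQueue(int capacity) {
        this.permits = new Semaphore(capacity);
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Producer side: blocks when all permits are taken (back-pressure).
    public void put(T item) throws InterruptedException {
        permits.acquire();
        queue.add(item); // cannot overflow: bounded by the permits
    }

    // Consumer side: drain everything currently queued into the sink,
    // then release all the permits in bulk. Returns the batch size.
    public int drainTo(Queue<T> sink) {
        int n = 0;
        T item;
        while ((item = queue.poll()) != null) {
            sink.add(item);
            n++;
        }
        permits.release(n);
        return n;
    }
}
```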
See ReactiveX#713. It was causing non-deterministic behavior, random test failures, and poor performance.
ObserveOn was the wrong mechanism for delaying behavior as it was relying on the buffering of observeOn. Now using delay() to delay the group since observeOn no longer buffers.
Since we are blocking the producer's on* notifications, we need to interrupt it on unsubscribe events. I need to do this on the data structure and not the thread, as the thread could change for each onNext, and that could have unexpected consequences.
The ObserveOn operator is for moving where it executes, not making it async. SubscribeOn makes it async.
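As a rough plain-Java analogy of that distinction (this is not RxJava's API, just an illustration): subscribeOn decides which thread runs the producer, while observeOn only decides which thread observes each notification.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Plain-java analogy of the observeOn / subscribeOn distinction.
public class ThreadingAnalogy {

    // subscribeOn-like: the whole production loop runs on the worker,
    // making the sequence async relative to the caller.
    static Future<List<String>> produceAsync(ExecutorService worker) {
        return worker.submit(() -> {
            List<String> threadNames = new ArrayList<>();
            for (int i = 0; i < 3; i++) {
                threadNames.add(Thread.currentThread().getName());
            }
            return threadNames;
        });
    }

    // observeOn-like: production stays on the caller's thread; each item
    // is handed one-at-a-time to the worker for observation. The caller
    // blocks per item, mirroring the back-pressure this PR introduces.
    static List<String> observeOnWorker(ExecutorService worker) throws Exception {
        List<String> observedOn = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 3; i++) {
            worker.submit(() -> observedOn.add(Thread.currentThread().getName())).get();
        }
        return observedOn;
    }
}
```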
I have updated the code to support …. The buffer allows getting async behavior and increased throughput. However, it's still slower than the previous implementation with an unbounded queue.
I'm going to proceed with the merge as this gets the functionality and API to what is wanted, and the performance is adequate for the use case. I'd be interested if anyone else wants to look at how to further improve the performance.
I've been playing with this over the weekend with 0.17.0-RC1 and think we need to revert the default back-pressure behavior. I think it is relevant as an option to be applied, but it's too risky as the default behavior if the source ….

I'm going to submit a change today that splits out …. Further discussion of providing tools for back-pressure will be picked up in the 0.18 dev cycle.
This is a re-implementation of ObserveOn done for 3 purposes:

1. …
2. use the lift style so the Subscription flows through correctly
3. …

Unit tests are passing on my machine for a full build, but I'm interested in user experience, particularly from Android users where observeOn behavior is mission critical.

This WILL change behavior and possibly cause problems if observeOn was being used to make something async (which is not the purpose of observeOn). The subscribeOn operator is the one to use for making something concurrent. [Update: Per the discussions below, we may want another operator or an observeOn overload that uses a queue to allow async behavior and decouple producer/consumer. The observeOn operator, however, should by default only do what it says: switch threads.]

This is a complete re-write, so I may very well have created nasty bugs that we don't yet have unit tests for, and I'd appreciate a thorough review and testing.
One possible concern is performance. In the case of a synchronous firehose (not the normal use case, but a use case nonetheless), this is far slower, because each onNext is now sent one-at-a-time across thread boundaries, whereas before it would put everything into a queue on one side and then read it all from the queue on the other side.

Thus, for this code that synchronously firehoses 100,000 values, the current code without back-pressure is about 7 times faster:

However, this code has the same performance:

The difference being that we're not just filling up the queue in the second case, as it has latency.
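The referenced snippets aren't preserved above, but the two hand-off styles can be sketched with stdlib primitives (a hypothetical illustration, not the original benchmark code): a SynchronousQueue forces a per-item rendezvous like the back-pressured observeOn, while an unbounded queue lets a firehose dump everything and the consumer read it back in bulk.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of the two hand-off styles discussed above.
public class FirehoseHandoff {

    // Back-pressured style: every item is a rendezvous -- the producer
    // blocks until the consumer thread has taken the item.
    static int perItemRendezvous(int count) throws InterruptedException {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        AtomicInteger received = new AtomicInteger();
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < count; i++) {
                    q.take();
                    received.incrementAndGet();
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        for (int i = 0; i < count; i++) {
            q.put(i); // blocks per item until the consumer takes it
        }
        consumer.join();
        return received.get();
    }

    // Old observeOn style: the firehose dumps everything into an
    // unbounded queue, then the consumer drains it at its own pace.
    static int bufferedFirehose(int count) throws InterruptedException {
        LinkedBlockingQueue<Integer> q = new LinkedBlockingQueue<>();
        for (int i = 0; i < count; i++) {
            q.offer(i); // never blocks
        }
        AtomicInteger received = new AtomicInteger();
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < count; i++) {
                q.poll();
                received.incrementAndGet();
            }
        });
        consumer.start();
        consumer.join();
        return received.get();
    }
}
```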
I believe this is okay, as the intent of observeOn is moving across thread boundaries, such as for UI events, and eliminating the bad behavior of buffer bloat is the right thing to do.

I'm sure we can find some performance improvements in this code, but it's worth pointing out how the change in behavior can affect it.