observeOn: allow configurable buffer size #3777
Conversation
     * @see #observeOn(Scheduler)
     * @see #observeOn(Scheduler, boolean)
     */
    public final Observable<T> observeOn(Scheduler scheduler, int bufferSize) {
Please add an observeOn(Scheduler, boolean delayError, int bufferSize) overload as well.
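For illustration, a minimal sketch of what such an overload could look like, assuming it forwards both parameters to the same OperatorObserveOn used by the existing overloads (the exact constructor signature is an assumption, not code from this PR):

    /**
     * Hypothetical sketch only: mirrors the existing observeOn overloads by passing
     * the delay-error flag and the buffer size straight to the operator.
     */
    public final Observable<T> observeOn(Scheduler scheduler, boolean delayError, int bufferSize) {
        return lift(new OperatorObserveOn<T>(scheduler, delayError, bufferSize));
    }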
I still don't see the observeOn(Scheduler, boolean delayError, int bufferSize) overload.
👍
@@ -523,14 +523,21 @@ public boolean hasNext() {
        assertEquals(7, generated.get());
    }

    /**
     * {@link BackpressureTests.testObserveOnWithSlowConsumer} also covers the case with default buffer size.
nit: this link won't work, I guess
👍 though I'd leave the old test for the default buffer size and just add a new one
Rebased, fixed the javadoc link, and a spelling mistake. @artem-zinnatullin I reused this one to avoid redundancy, as the code path is virtually the same, and there are tests (the ones referenced in the @link) which use the default size. Let me know if you still prefer the two cases and I'll fix that.
Yeah, I understand; I'm just afraid that someone may change the linked tests and then we won't cover the overload. // one day we'll set up code coverage, I hope
    @Test
    public void testQueueFullEmitsError() {
        final CountDownLatch latch = new CountDownLatch(1);
        // randomize buffer size, note that underlying implementations may be tuning the real size to a power of 2
        // which can lead to unexpected results when adding excess capacity (e.g.: see ConcurrentCircularArrayQueue)
        final int capacity = (int) Math.pow(2, new Random().nextInt(10));
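For context on the power-of-two note in the comment above: array-backed queues typically round the requested capacity up to the next power of two, so the effective capacity can be larger than the one requested. An illustrative computation (not the queue's actual code) of that rounding in plain Java:

    // round a requested capacity up to the next power of two (assumes requested >= 1);
    // e.g. nextPowerOfTwo(100) == 128, nextPowerOfTwo(128) == 128
    static int nextPowerOfTwo(int requested) {
        return 1 << (32 - Integer.numberOfLeadingZeros(requested - 1));
    }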
I agree with @artem-zinnatullin, leave this test as it was and create a new one with a loop over some queue sizes:
for (int i = 1; i <= 1024; i = i * 2) {
final int capacity = i;
// ...
}
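A hedged sketch of how such a looped test might look; the firehose source, the TestSubscriber with a zero initial request, and the timeout are illustrative assumptions, not the exact test added in this PR:

    @Test
    public void testQueueFullEmitsErrorWithVaryingBufferSize() {
        for (int i = 1; i <= 1024; i = i * 2) {
            final int capacity = i;
            // a subscriber that requests nothing, so the observeOn queue cannot drain
            TestSubscriber<Integer> ts = TestSubscriber.create(0);
            Observable.create(new Observable.OnSubscribe<Integer>() {
                @Override
                public void call(Subscriber<? super Integer> s) {
                    // firehose that ignores backpressure and overflows the queue
                    for (int j = 0; j < capacity * 2 && !s.isUnsubscribed(); j++) {
                        s.onNext(j);
                    }
                    if (!s.isUnsubscribed()) {
                        s.onCompleted();
                    }
                }
            })
            .observeOn(Schedulers.computation(), capacity)
            .subscribe(ts);
            ts.awaitTerminalEvent(5, TimeUnit.SECONDS);
            ts.assertError(MissingBackpressureException.class);
        }
    }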
Agh, thanks @akarnokd. I didn't notice I had changed computers, so I was back on the 1st patch and lost the overload. Resubmitted based on the one with all the overloads, and added the extra test instead of rewriting the existing one. Hopefully fine now.
👍
@@ -581,6 +581,69 @@ public void onNext(Integer t) {
    }

    @Test
    public void testQueueFullEmitsErrorWithVaryingBufferSize() {
nit: the test prefix is not needed
I've seen some failures in the test locally, and there is also a bug in the test (the loop should start at 2, not 1). I can't look into it right now but will take a look ASAP.
👍
I'm merging this. The changes are algorithmically solid, so I'm not sure what failures you're seeing locally.
Thanks @akarnokd. The failure was that no error notification was emitted at some random sizes, but I did notice that Travis didn't complain. Ping me if it surfaces.
The observeOn operator is backed by a small queue of 128 slots that may
overflow quickly on slow producers. This could only be avoided by
adding a backpressure operator before the observeOn (not only
inconvenient, but also taking a perf. hit as it forces hops between two
queues).
This patch allows modifying the default queue size on the observeOn
operator.
Fixes: #3751
Signed-off-by: Galo Navarro <[email protected]>
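For reference, a hedged usage example of the new parameter (the 1024-slot size and the computation scheduler are arbitrary choices for illustration, not values prescribed by this patch):

    // ask observeOn for a 1024-slot queue instead of the default 128,
    // avoiding a separate backpressure-operator hop in front of it
    Observable.range(1, 10000)
            .observeOn(Schedulers.computation(), 1024)
            .subscribe(new Action1<Integer>() {
                @Override
                public void call(Integer value) {
                    // slow consumer work would go here
                }
            });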