Confusion when using Backpressure operators #3751
Comments
It seems your source doesn't emit enough values, thus the default buffer of 128 elements in `observeOn` never overflows. There is a PR in limbo that tries to address this buffer behavior by allowing dropping, but if you want to queue on disk, you have to write a custom operator.
I've bumped into the queueing-on-disk use case a few times but haven't implemented anything. I'll have a look (probably in a couple of weeks).
I see. The challenge with the PR above is that currently the overflow function does not supply the item(s) which caused the overflow; it is just a void action (`Action0`).

Based on @akarnokd's comment, we should not use the buffer backpressure; ideally it seems the current drop implementation could work for us.

The term "buffer" fooled me a bit in the API docs, and I assumed that the capacity controls the size after which the source observable starts to overflow. Would it make sense to clarify that documentation at least, and mention that there is an internal buffer which can actually hold more items than what you specify as your overflow buffer?

Thanks for the quick reply!
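The limitation described above can be illustrated without RxJava. In this plain-JDK sketch (all names hypothetical, not part of any RxJava API), the overflow callback is a void `Runnable`, analogous to the `Action0` mentioned: it can only signal *that* an overflow happened, never *which* item was rejected, so there is nothing to hand to disk storage.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical illustration: a void overflow action cannot supply the
// rejected item, so the caller cannot persist it for a later retry.
public class VoidOverflowDemo {
    // Offer 'items' integers into a bounded queue without draining it;
    // invoke the void callback on each rejection and return how many fit.
    public static int offerAll(int capacity, int items, Runnable onOverflow) {
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(capacity);
        int accepted = 0;
        for (int i = 0; i < items; i++) {
            if (queue.offer(i)) {
                accepted++;
            } else {
                onOverflow.run(); // we know an overflow happened, but not with which item
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        int[] overflows = {0};
        int accepted = offerAll(2, 5, () -> overflows[0]++);
        // capacity 2, five offers: 2 accepted, 3 overflow signals with no payload
        System.out.println("accepted=" + accepted + " overflows=" + overflows[0]);
    }
}
```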
I've hit this a number of times and generally ended up turning most `.observeOn()` calls into `.onBackpressureBuffer().observeOn()`. I guess the ability to control the 128-slot buffer as an optional parameter to `observeOn` would be a nice addition.
@srvaroa PR welcome. |
The observeOn operator is backed by a small queue of 128 slots that may overflow quickly with slow consumers. This could only be avoided by adding a backpressure operator before the observeOn (not only inconvenient, but also taking a perf hit, as it forces hops between two queues). This patch allows modifying the default queue size on the observeOn operator.

Fixes: ReactiveX#3751
Signed-off-by: Galo Navarro <[email protected]>
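The extra hop the patch avoids can be sketched without RxJava. In this plain-JDK stand-in (all names hypothetical), a small fixed consumer queue is fronted by a larger buffer queue, mimicking the `.onBackpressureBuffer(n).observeOn()` workaround: every item is enqueued twice, which is the per-item cost the PR removes by making the consumer queue itself resizable.

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical stand-in (plain JDK, not RxJava): a large buffer queue feeding
// a small fixed "consumer" queue. Counts total enqueue operations ("hops")
// to show each item pays for two queue transitions.
public class TwoHopDemo {
    // Push a burst through buffer -> consumer queue; return {delivered, hops}.
    public static int[] run(int bufferCap, int consumerCap, int burst) {
        Queue<Integer> buffer = new ArrayBlockingQueue<>(bufferCap);
        Queue<Integer> consumer = new ArrayBlockingQueue<>(consumerCap);
        int hops = 0, delivered = 0;
        for (int i = 0; i < burst; i++) {
            if (buffer.offer(i)) hops++;   // hop 1: into the front buffer
        }
        Integer item;
        while ((item = buffer.poll()) != null) {
            while (!consumer.offer(item)) {
                consumer.poll();           // consumer frees one slot
                delivered++;
            }
            hops++;                        // hop 2: into the consumer queue
        }
        delivered += consumer.size();      // drain whatever is still queued
        return new int[] { delivered, hops };
    }

    public static void main(String[] args) {
        int[] r = run(256, 128, 200);
        // Nothing is lost, but 200 items cost 400 enqueues (two hops each).
        System.out.println("delivered=" + r[0] + " hops=" + r[1]);
    }
}
```

With a single, appropriately sized queue the same burst would cost one enqueue per item, which is the point of making the `observeOn` queue size configurable.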
Hey,

We have a use case in which a consumer might not be able to process items as fast as they are emitted from a source observable. I understood that in this case a backpressure operator, either `onBackPressureBuffer()` or `onBackPressureDrop()`, might be useful. In case of overflow / drop, we would like to store the items to local storage and try processing them later, when the consumer is again able to handle the input rate. Our consumer is actually a remote REST call which might time out or not be available, in which case we retry.

Anyway, I tried alternative ways to address the problem but I can't find a suitable way to solve it. To illustrate my testing, here is some code:

In `testOnBackPressureDrop()` I would assume that after the `emitter` has queued some items, it would start dropping them. However, it seems that the backpressure operator's subscription gets a request size of 128 items. 128 items in memory is in this case far too much for us, and we would like to control the size of the requested items.

In `testOnBackPressureBuffer()` I would assume that the `emitter` would overflow after emitting more than two items into the buffer.

However, in neither of the cases do I experience an overflow or dropped items. Also, I realized that when using `onBackPressureBuffer()`, on overflow the observable emits `onError()`. To me that wouldn't be an option, since I want the `emitter` to continue and I want to deal with the problem myself.

Could you please explain what we are missing here, or are we trying to do something that is not yet even possible? E.g., is the API missing an operator like `onBackPressureBufferAndDrop(int capacity, Action1 onDrop)`?

I wrote my tests based on the documentation in https://github.com/ReactiveX/RxJava/wiki/Backpressure
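The test code referenced above did not survive in this copy of the issue. As a sketch of the operator the question asks for (`onBackPressureBufferAndDrop`-like semantics, which is not an existing RxJava operator), here is a plain-JDK stand-in in which overflow hands the rejected item to a callback, so it could be written to local storage and retried later. All class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.function.Consumer;

// Hypothetical sketch of buffer-then-drop semantics: items beyond 'capacity'
// are neither silently lost nor turned into onError -- each rejected item is
// handed to onDrop, which could persist it to disk for a later retry.
public class BufferAndDrop<T> {
    private final ArrayBlockingQueue<T> buffer;
    private final Consumer<T> onDrop;

    public BufferAndDrop(int capacity, Consumer<T> onDrop) {
        this.buffer = new ArrayBlockingQueue<>(capacity);
        this.onDrop = onDrop;
    }

    /** Buffer the item if there is room; otherwise pass it to onDrop. */
    public void offer(T item) {
        if (!buffer.offer(item)) {
            onDrop.accept(item);
        }
    }

    /** The consumer (e.g. the remote REST call) drains from here. */
    public T poll() {
        return buffer.poll();
    }

    public static void main(String[] args) {
        List<Integer> droppedToDisk = new ArrayList<>();  // stand-in for local storage
        BufferAndDrop<Integer> op = new BufferAndDrop<>(2, droppedToDisk::add);
        for (int i = 1; i <= 5; i++) op.offer(i);
        // capacity 2: items 1 and 2 are buffered, 3..5 go to the drop callback
        System.out.println("buffered=" + op.poll() + "," + op.poll()
                + " dropped=" + droppedToDisk);
    }
}
```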