Buffer timeout with fair backpressure #3332
Conversation
@chemicL Can you update us regarding what's left in this PR? This feature would really help me and I'd like to help if possible.
@almogtavor I'm waiting for @OlegDokuka to find some time to review the change. Also, I needed to move on to a different subject and couldn't spend more time thinking about JCStress tests. If you have ideas on how to improve confidence in the concurrent execution of this operator, suggestions for what scenarios to test would be useful. So far I ran the …
@OlegDokuka @chemicL can you tell me approximately when this will get reviewed? This feature will help a lot 😊
Upvoting this; I am also blocked by this and the fix would help. Cheering for you guys!
Hello team, I heard there is a workaround using a window operation + concatMap instead of bufferTimeout, which can still yield the exception as of 3.5.x. Would it be possible to share the workaround, please?
Do you mean you see issues in windowTimeout? If so, can you open an issue which reproduces the failure? Thanks.
So basically, in order to perform some internal batch operation within a flux, for a number or a duration (on purpose not using the words bufferTimeout or windowTimeout here), one has the following possibilities: use bufferTimeout -> very high likelihood of encountering "Could not emit buffer due to lack of requests". May I ask if my understanding is correct, please?
Force-pushed from dc5589f to befec06.
As long as the consumer is slower than the producer, yes.
The performance impact depends on the workload, but in general yes: windowTimeout has a lot more to do, as for every window it goes through the entire reactive lifecycle to drain the results, while the buffer is a simple data structure that can be navigated in an imperative manner. We aim to deliver this operator in the 3.5.7 release.
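To make the failure mode concrete, here is a toy model (plain JDK, not Reactor code; all names are hypothetical) of the demand accounting described in this thread: the downstream requests `demand` buffers, the old operator requests `demand * maxSize` items upstream, and a timer that flushes partial buffers ends up producing more buffers than were requested.

```java
import java.util.ArrayList;
import java.util.List;

public class DemandModel {
    public static void main(String[] args) {
        int maxSize = 3;
        long demand = 2;                         // downstream asked for 2 buffers
        long upstreamRequest = demand * maxSize; // old strategy: request 6 items

        // Suppose items arrive slowly, so a timeout fires after every 2 items:
        List<List<Integer>> buffers = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int i = 1; i <= upstreamRequest; i++) {
            current.add(i);
            boolean timeout = (i % 2 == 0);      // timer flushes a partial buffer
            if (current.size() == maxSize || timeout) {
                buffers.add(current);
                current = new ArrayList<>();
            }
        }
        System.out.println("requested buffers: " + demand);
        System.out.println("produced buffers:  " + buffers.size());
        // 3 buffers were produced for a demand of 2 -> this is where the old
        // operator signals "Could not emit buffer due to lack of requests".
    }
}
```

With timeouts disabled (only full buffers emitted), the same loop would produce exactly 2 buffers, which is why the `n * maxSize` accounting only breaks once the timer gets involved.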
Very clear explanation, @chemicL.
Build failure attributed to a flaky test. |
This change adds a variant of bufferTimeout that honors the backpressure of a slow downstream Subscriber better than the current implementation (which simply errors in the face of backpressure combined with timeouts).

The problem with the existing implementation is that it requests n * maxSize items from the upstream, with the assumption that the items will fill the buffers that were requested. However, if timeouts occur, the number of buffers created and delivered can exceed the accumulated demand. In that case, a backpressure error happens.

The new variant prefetches 4 * maxSize items from the upstream. Internally it uses a queue, which is used to satisfy the downstream demand with no more buffers than have been requested. It uses a watermark to replenish the queue so that buffers can be delivered efficiently when the demand increases.
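The prefetch-with-watermark idea can be sketched with plain JDK code. This is a deliberately simplified model (class and field names are hypothetical, not the PR's actual implementation): keep up to `prefetch` items queued, and once consumption drains past a watermark, request a replenishing batch from upstream instead of requesting item-by-item.

```java
import java.util.ArrayDeque;

public class WatermarkPrefetch {
    final int prefetch;        // e.g. 4 * maxSize
    final int limit;           // watermark: replenish after this many consumed
    final ArrayDeque<Integer> queue = new ArrayDeque<>();
    int consumedSinceRequest;  // items drained since the last replenish
    int totalRequested;        // total demand signalled upstream

    WatermarkPrefetch(int prefetch) {
        this.prefetch = prefetch;
        this.limit = prefetch - (prefetch >> 2); // replenish at 75% consumed
        request(prefetch);                       // initial prefetch
    }

    void request(int n) { totalRequested += n; }

    // Upstream delivers one item into the internal queue.
    void onNext(int item) { queue.add(item); }

    // Downstream drains one item; replenish in a batch past the watermark.
    Integer poll() {
        Integer item = queue.poll();
        if (item != null && ++consumedSinceRequest >= limit) {
            request(consumedSinceRequest);
            consumedSinceRequest = 0;
        }
        return item;
    }

    public static void main(String[] args) {
        // prefetch = 4 * maxSize with maxSize = 2, so limit = 6
        WatermarkPrefetch p = new WatermarkPrefetch(8);
        for (int i = 0; i < 8; i++) p.onNext(i); // upstream fills the prefetch
        for (int i = 0; i < 6; i++) p.poll();    // consume past the watermark
        System.out.println("total requested: " + p.totalRequested);
    }
}
```

Batching the replenishment this way keeps the upstream busy without ever signalling more demand than the queue can absorb, which is the property that lets the operator emit only as many buffers as the downstream has actually requested.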