Async scheduling of batched updates #34295
Conversation
1818673 to d9b7bb9 (compare)
Size Change: +94 B (0%) Total Size: 1.04 MB
I'm okay if it's just a temporary solution. Maybe add a comment stating that explicitly though?
…n until we can get into a better place with async thunks
```js
// Each request( api ) is pretty fast, but when there's a lot of them it may block the browser for a few
// seconds. Let's split this long, blocking task into bite-sized pieces scheduled separately to give the
// browser a space for processing other tasks.
```
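The chunking described in that comment can be sketched roughly like this. This is illustrative only: `items`, `processItem`, and `chunkSize` are placeholder names, and `setTimeout` stands in for whatever yield-to-browser mechanism the PR ends up using.

```javascript
// Hypothetical sketch: process items in small chunks, yielding back to the
// event loop between chunks so the browser can handle input and rendering
// instead of being blocked by one long synchronous task.
async function processInChunks( items, processItem, chunkSize = 10 ) {
	const results = [];
	for ( let i = 0; i < items.length; i += chunkSize ) {
		for ( const item of items.slice( i, i + chunkSize ) ) {
			results.push( processItem( item ) );
		}
		// Yield to the event loop; in a browser this could also be
		// requestAnimationFrame or a setImmediate polyfill.
		await new Promise( ( resolve ) => setTimeout( resolve, 0 ) );
	}
	return results;
}
```

Each chunk still runs synchronously, but the awaited timeout between chunks gives the browser a chance to paint and handle input.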
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Can you clarify what the request calls are doing here? Can you share examples? I have trouble understanding why this would block the UI.
Why does it take four seconds to invoke all of the calls to
Yeah we should finish this 🙂
Here's a screenshot from the profiler @youknowriad @noisysocks: All the blocking comes from what happens inside
The fix here is three-fold:
This is the part that is a bit concerning to me and also unexpected. If I call a function that returns a promise, why is this happening synchronously? Do you know more here? Maybe there's something to be fixed at a lower level (without refactoring all actions to thunks) to solve this?
To solve this kind of issue I recently introduced
@youknowriad I'd have to confirm that, but I believe this is what happens:
An even shorter example would be:

```js
function fun() {
	for ( let i = 0; i < 1000000; i++ ) {
		document.createElement( "div" );
	}
	return Promise.resolve( "success" );
}

for ( let i = 0; i < 10; i++ ) {
	fun().then( console.log );
	console.log( "fun() called" );
}
// logs "fun() called" 10 times, then "success" 10 times
```

Even though a promise is returned each time, there is still a lot of synchronous work to do in a single tick.
Rungen seems to be synchronous-first: only when it encounters a promise does it await it:
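That synchronous-first behavior can be illustrated with a toy generator runner (this is not rungen's actual implementation, just a sketch of the pattern): it steps through yielded values synchronously and only defers when a yielded value is a promise.

```javascript
// Toy generator runner: steps synchronously through yielded values and only
// pauses when a yielded value is a promise — roughly how a rungen-style
// runtime behaves.
function run( generator ) {
	return new Promise( ( resolve ) => {
		function step( value ) {
			const { value: yielded, done } = generator.next( value );
			if ( done ) {
				resolve( yielded );
			} else if ( yielded instanceof Promise ) {
				yielded.then( step ); // defer only here
			} else {
				step( yielded ); // sync control action: keep going immediately
			}
		}
		step();
	} );
}

const order = [];
function* routine() {
	order.push( 'sync work' );   // runs synchronously inside run()
	yield 'a control action';    // still synchronous
	order.push( 'more sync work' );
	yield Promise.resolve();     // only now does the runner defer
	order.push( 'after promise' );
}
run( routine() );
order.push( 'caller continues' );
// order at this point: [ 'sync work', 'more sync work', 'caller continues' ]
```

Everything up to the first yielded promise runs in the caller's tick, which matches the blocking behavior seen in the profiler.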
@adamziel that makes sense. If this synchronous work done before the API fetch involves multiple "dispatch" calls, we may consider using

The solution in this PR is a bit weird though: you're basically doing this:

So what you're doing is splitting the sync work into chunks (which has nothing to do with rungen really). For me, I fear this indicates the way batching is performed right now is not great and too clever. I understand the reasoning about the generic
That's on purpose and not the real issue here; the rerendering on each dispatch is likely the issue.
Actually since calling
@adamziel The solution I shared above will probably stop working if
The synchronous dispatches are caused by the `__unstableAcquireStoreLock` call, the very first thing every `saveEntityRecord` does, and the `ENQUEUE_LOCK_REQUEST` action dispatch it performs.

Every `ENQUEUE_LOCK_REQUEST` dispatch adds an item to the `state.locks.requests` array in `core-data`, and causes the `core-data` store to fire an update event. That's a lot of `useSelect` React updates that, in the case of locking, are guaranteed to be noops.

I think the fact that the locking machine's state is part of `core-data` state and fires subscriber updates is very inefficient. The `state.locks` state is 100% private, as there are no public selectors that look at it. Only the acquire/release actions look at the state, and even these are private to the `core-data` store.
The locking machine doesn't need to be implemented with Redux at all. It could be a plain object with internal state and two methods:

```ts
interface Locks {
	acquire( path, exclusive ): Promise< Lock >;
	release( lock: Lock ): void;
}
```
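A rough, self-contained sketch of what such a Redux-free lock manager could look like. The queueing and exclusivity semantics here (one exclusive holder or many shared holders per path) are my assumption for illustration, not the actual core-data implementation:

```javascript
// Sketch of a plain-object lock manager: no Redux, no store subscribers,
// just internal state and promises for pending acquisitions.
function createLocks() {
	// path -> { exclusiveHeld, sharedCount, queue }
	const state = new Map();

	function getEntry( path ) {
		if ( ! state.has( path ) ) {
			state.set( path, { exclusiveHeld: false, sharedCount: 0, queue: [] } );
		}
		return state.get( path );
	}

	function canGrant( entry, exclusive ) {
		if ( exclusive ) {
			return ! entry.exclusiveHeld && entry.sharedCount === 0;
		}
		return ! entry.exclusiveHeld;
	}

	function grant( entry, exclusive ) {
		if ( exclusive ) {
			entry.exclusiveHeld = true;
		} else {
			entry.sharedCount++;
		}
	}

	function processQueue( entry ) {
		while ( entry.queue.length && canGrant( entry, entry.queue[ 0 ].exclusive ) ) {
			const next = entry.queue.shift();
			grant( entry, next.exclusive );
			next.resolve( { path: next.path, exclusive: next.exclusive } );
		}
	}

	return {
		acquire( path, exclusive ) {
			const entry = getEntry( path );
			if ( canGrant( entry, exclusive ) ) {
				grant( entry, exclusive );
				return Promise.resolve( { path, exclusive } );
			}
			// Lock is held: queue the request and resolve it on release.
			return new Promise( ( resolve ) => {
				entry.queue.push( { path, exclusive, resolve } );
			} );
		},
		release( lock ) {
			const entry = getEntry( lock.path );
			if ( lock.exclusive ) {
				entry.exclusiveHeld = false;
			} else {
				entry.sharedCount--;
			}
			processQueue( entry );
		},
	};
}
```

Because nothing here touches Redux, acquiring and releasing locks would never trigger `useSelect` re-renders.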
```js
const awaitNextFrame = () =>
	__unstableAwaitPromise(
		new Promise( ( resolve ) =>
			window.requestAnimationFrame( () => resolve() )
		)
	);
```
`requestAnimationFrame` is not the best function to call here. It always waits ~16 ms for the next frame, even if the browser is not busy with anything else. Dispatching 60 parallel requests will be spread over exactly one second. `setTimeout` or `setImmediate` would be better.
Also, this could only happen after the first `saveEntityRecord`, which would already trigger the progress indicator.
`setTimeout` might not be better either: some browsers will only fire the callback after a certain amount of time (mostly 1 second) if the tab is in the background. `setImmediate` is non-standard, so probably not the best either. I wonder if using `Promise.resolve()` to schedule a microtask would work here?
`Promise.resolve()` wouldn't prevent freezing of the UI. It schedules a microtask right after the current script finishes running, and no other event, like user input or an animation frame, can squeeze in between them.

`setImmediate`, appropriately polyfilled, is probably the best. Just be careful not to choose a polyfill that uses `queueMicrotask` or something equivalent. The popular `async` package has a polyfill that does exactly that, so the danger is real.
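The microtask-vs-task distinction being debated here is easy to demonstrate (Node example; in a browser, painting and input handling happen between tasks, never between microtasks):

```javascript
const order = [];

setTimeout( () => order.push( 'setTimeout (task)' ), 0 );
Promise.resolve().then( () => order.push( 'Promise.resolve (microtask)' ) );
order.push( 'synchronous' );

// All queued microtasks drain before the next task runs, which is why
// yielding with Promise.resolve() never lets the browser paint or handle
// input in between chunks of work.
setTimeout( () => {
	console.log( order );
	// → [ 'synchronous', 'Promise.resolve (microtask)', 'setTimeout (task)' ]
}, 10 );
```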
This makes sense to me, I'm not sure locking should be done using "dispatch" actions. We could still have a "lock container" that is specific per registry/store though, to avoid globals.
I'm working on a PR that moves the locks away from the
store specific is good enough but
Yeah I agree, I'd rather use registry.batch in some way too so that we can avoid all the store updates. cc @noisysocks too. As we mentioned in this PR, there is plenty of related work going on. Depending on where it takes us, we may have to take different approaches (e.g. registry.batch won't play nicely with async
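For context, the idea behind a `registry.batch`-style API is to suspend subscriber notifications while many dispatches happen, then notify once at the end. A self-contained sketch of that pattern (not `@wordpress/data`'s actual implementation):

```javascript
// Minimal store with batch(): subscribers are notified once per batch
// instead of once per dispatch. Illustrative only.
function createStore() {
	const listeners = new Set();
	let batching = false;
	let pendingNotify = false;

	function notify() {
		if ( batching ) {
			pendingNotify = true;
			return;
		}
		listeners.forEach( ( listener ) => listener() );
	}

	return {
		subscribe( listener ) {
			listeners.add( listener );
			return () => listeners.delete( listener );
		},
		dispatch() {
			// ...state update would happen here...
			notify();
		},
		batch( callback ) {
			batching = true;
			try {
				callback();
			} finally {
				batching = false;
				if ( pendingNotify ) {
					pendingNotify = false;
					notify();
				}
			}
		},
	};
}

const store = createStore();
let updates = 0;
store.subscribe( () => updates++ );
store.batch( () => {
	store.dispatch();
	store.dispatch();
	store.dispatch();
} );
// updates is now 1 instead of 3
```

This collapses the flood of no-op `useSelect` updates into a single notification, which is exactly the problem the lock dispatches create.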
One note there: I would still like to immediately propagate store updates from the very first
We no longer use rungen, and the locking machine also no longer depends on Redux, which makes this PR outdated. I'm going to close it then. The freeze is much less noticeable these days as well.
Description

Calling `__experimentalBatch()` with a few dozen requests causes the UI to freeze like that:

[screenshot]

This is because the saving state hinges on `isSavingEntityRecord()`, which changes only once the appropriate redux actions are dispatched. Currently we do it in a blocking way like this:

[screenshot]

Which is fine for a small number of requests. Unfortunately, for many requests it takes some seconds:

[screenshot]

In this PR I propose an async way of enqueuing updates. After every `request( api )` call, we await the next frame. This allows the browser to move on to other tasks, update the UI, switch to the "Saving..." state, etc.

Note that this is a one-off fix for something that I believe is a symptom of using `redux-rungen` instead of `async/await`. I explored refactoring that with @jsnajdr – it would make core-data async by default and potentially alleviate this problem. I'd love to pick up that work again – see #33201.

How has this been tested?

Types of changes
Types of changes
Bug fix (non-breaking change which fixes an issue)