wait for all futs to clear from ExecutionPipeline before dropping lifetime_guard #14224
Conversation
It's in theory safer this way and it avoids error logs before
-for (block, fut) in itertools::zip_eq(ordered_blocks, futs) {
+// wait for all futs so that lifetime_guard is guaranteed to be dropped only
+// after all executor calls are over
+for (block, fut) in itertools::zip_eq(&ordered_blocks, futs) {
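For context, here is a minimal sketch of the pattern this diff introduces, using hypothetical names (`drain_pipeline`, `LifetimeGuard`, plain tokio `JoinHandle`s) rather than the actual ExecutionPipeline types: the futures are already spawned, so they run in parallel regardless, and the loop only controls when the guard is released.

```rust
// A sketch, not the real pipeline code: await every already-spawned future so
// that `lifetime_guard` cannot be dropped while any executor call is in flight.
use tokio::task::JoinHandle;

struct LifetimeGuard; // stands in for whatever keeps the executor alive

async fn drain_pipeline(
    block_ids: Vec<u64>,
    futs: Vec<JoinHandle<()>>,
    lifetime_guard: LifetimeGuard,
) {
    assert_eq!(block_ids.len(), futs.len()); // what itertools::zip_eq enforces
    for (block_id, fut) in block_ids.into_iter().zip(futs) {
        // Awaiting sequentially also keeps the per-block logs in block order.
        match fut.await {
            Ok(()) => println!("post-processing for block {} finished", block_id),
            Err(join_err) => println!("task for block {} panicked: {}", block_id, join_err),
        }
    }
    // Only after every spawned future has completed is the guard released.
    drop(lifetime_guard);
}
```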
How about putting these futs in a FuturesOrdered and polling them all together?
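For illustration, a sketch of what that suggestion might look like, assuming the futs are tokio `JoinHandle`s (the real types may differ): a `futures::stream::FuturesOrdered` yields results in insertion order while polling everything it holds concurrently.

```rust
// Sketch of the FuturesOrdered alternative, with hypothetical handle types.
use futures::stream::{FuturesOrdered, StreamExt};
use tokio::task::JoinHandle;

async fn drain_together(futs: Vec<JoinHandle<u64>>) {
    // Results come back in the order the handles were inserted,
    // even if the underlying tasks finish out of order.
    let mut ordered: FuturesOrdered<JoinHandle<u64>> = futs.into_iter().collect();
    while let Some(res) = ordered.next().await {
        match res {
            Ok(block_id) => println!("block {} done", block_id),
            Err(join_err) => println!("task panicked: {}", join_err),
        }
    }
}
```

Since the tasks are already spawned onto the runtime, they execute in parallel either way; the choice only affects how and in what order completions are observed, which is the point made in the replies below.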
doesn't matter, they are already spawned and running in parallel?
They run in parallel, but I notice there is a future here where we do some post-processing with an .await, and it's possible that in the future this future will do other things. On the other hand, polling them one by one keeps the logs sequential; otherwise we risk reading unordered logs when debugging. Let's leave it as-is then.
Also, not sure whether the post-processing steps even want to be sequential. For example, does the mempool tolerate seeing the notifications out of order?
this fixes the lifetime issue, so it is probably necessary. But I am a bit worried: if we have multiple blocks and the first one fails, we need to wait on the execution of all the others just to discard them, and then retry them all together. That seems like something that could cause issues down the line.
@igor-aptos It seems that if an earlier block fails, the later ones will almost always fail immediately anyway. If the VM or DB returns an error, it's probably not recoverable (even switching to state sync might not get around the underlying issue, like a full disk); and if block fetching times out, a later block should time out at about the same time, right? Anyway, let's see how it works out, and maybe re-evaluate whether we should bundle several block ids together in the first place, as you suggested on Slack.
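To make the trade-off concrete, here is a hedged sketch (again with hypothetical types, not the real pipeline code) of what "wait for all futs even when an early one fails" looks like: the first error is remembered, the remaining futures are still drained so nothing outlives the guard, and only then is the error returned so the caller can retry the whole batch.

```rust
// Sketch only: drain every spawned future before reporting the first failure.
use tokio::task::JoinHandle;

async fn drain_with_failures(futs: Vec<JoinHandle<Result<(), String>>>) -> Result<(), String> {
    let mut first_err: Option<String> = None;
    for fut in futs {
        match fut.await {
            Ok(Ok(())) => {},
            // Remember the first failure, but keep draining so that nothing the
            // executor is still doing outlives the caller's lifetime guard.
            Ok(Err(e)) => { first_err.get_or_insert(e); },
            Err(join_err) => { first_err.get_or_insert(join_err.to_string()); },
        }
    }
    match first_err {
        Some(e) => Err(e), // the caller would then retry the whole batch
        None => Ok(()),
    }
}
```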
✅ Forge suite
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation and see the GitHub Action logs for details
Description
Wait for all futs to clear from ExecutionPipeline before dropping lifetime_guard. It's in theory safer this way and it avoids error logs before

Type of Change

Which Components or Systems Does This Change Impact?

How Has This Been Tested?
existing coverage, forge