Background
Currently a validator prepares a block synchronously: it receives its assignment, and when it is time to propose, it packages all relevant attestations, deposits, etc., and finally broadcasts the block. While this process is optimized to pack attestations as quickly as possible and to generate the deposit trie efficiently, doing all of this in lock-step does cause issues. If any part of our block proposal routine takes longer than usual, it can lead to a late broadcast and, in the worst case, an orphaned block.
With the merge coming up soon, optimizing our block proposal process will become even more important, as retrieving the execution payload will add a non-trivial amount of time to block production.
Description
One solution would be to keep all of these objects 'hot'. Fetching attestations and deposits would then simply mean reading from a pre-computed list, up to the maximum per-block limit.
Attestations would be continuously validated and processed in a pre-proposal pool. A separate routine, running once per slot (or any other sensible period), would pack and verify attestations ahead of time and place them into this pool.
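A minimal sketch of the idea, with a stand-in `Attestation` type and placeholder packing logic (all names here are hypothetical, not Prysm's actual types or APIs): a background goroutine refreshes a cached, pre-packed set once per tick, and the proposer's fetch becomes a cheap bounded copy instead of doing the packing work inline.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Attestation is a simplified stand-in for the real beacon attestation type.
type Attestation struct {
	Slot uint64
}

// PreProposalPool holds attestations that have already been validated and
// packed, ready for a proposer to read directly.
type PreProposalPool struct {
	mu     sync.RWMutex
	packed []Attestation
}

// Refresh replaces the cached set with a freshly packed batch. In the real
// design this is where the existing aggregation/validation logic would run.
func (p *PreProposalPool) Refresh(atts []Attestation) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.packed = atts
}

// Fetch returns up to max attestations without any packing work,
// which is the point of keeping the pool "hot".
func (p *PreProposalPool) Fetch(max int) []Attestation {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if len(p.packed) < max {
		max = len(p.packed)
	}
	out := make([]Attestation, max)
	copy(out, p.packed)
	return out
}

// runPacker refreshes the pool once per interval until done is closed.
// In practice the interval would be tied to the slot clock.
func runPacker(p *PreProposalPool, interval time.Duration, done <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			// Placeholder for the "validate + pack" step.
			p.Refresh([]Attestation{{Slot: 1}, {Slot: 1}})
		}
	}
}

func main() {
	pool := &PreProposalPool{}
	done := make(chan struct{})
	go runPacker(pool, 10*time.Millisecond, done)
	time.Sleep(50 * time.Millisecond)
	close(done)
	fmt.Println(len(pool.Fetch(128)))
}
```

The RWMutex keeps the proposer's read path cheap even while the packer is running; the actual resource-usage question raised below is about how expensive `Refresh` is when run every slot.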
Deposits would be easily retrievable from a deposit pool. A separate deposit routine would watch for eth1data changes and, whenever the voted eth1data changes, rebuild the trie accordingly. With that trie in place, all valid deposits for the period could be arranged ahead of time, making it much faster for a proposer to fetch the required number of deposits.
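The deposit side can be sketched similarly (again with hypothetical, simplified types; the real work would be regenerating the deposit trie, which is elided here): the routine only rebuilds when the voted eth1data root actually changes, and the proposer's fetch is an ordered prefix copy.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// Deposit and Eth1Data are simplified stand-ins for the real types.
type Deposit struct {
	Index uint64
}

type Eth1Data struct {
	Root string // root identifying the voted eth1 data
}

// DepositPool caches deposits pre-arranged for the currently voted eth1data,
// so a proposer only copies a prefix of an already-sorted slice.
type DepositPool struct {
	mu       sync.RWMutex
	lastRoot string
	ordered  []Deposit
}

// MaybeRebuild re-sorts the valid deposits only when the voted eth1data has
// actually changed. In the real design this is where the deposit trie would
// be regenerated. It reports whether a rebuild happened.
func (p *DepositPool) MaybeRebuild(data Eth1Data, valid []Deposit) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if data.Root == p.lastRoot {
		return false // nothing changed; skip the expensive rebuild
	}
	sorted := append([]Deposit(nil), valid...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Index < sorted[j].Index })
	p.ordered = sorted
	p.lastRoot = data.Root
	return true
}

// FetchUpTo returns at most max deposits in index order.
func (p *DepositPool) FetchUpTo(max int) []Deposit {
	p.mu.RLock()
	defer p.mu.RUnlock()
	if len(p.ordered) < max {
		max = len(p.ordered)
	}
	return append([]Deposit(nil), p.ordered[:max]...)
}

func main() {
	pool := &DepositPool{}
	deposits := []Deposit{{Index: 2}, {Index: 0}, {Index: 1}}
	fmt.Println(pool.MaybeRebuild(Eth1Data{Root: "a"}, deposits)) // prints true: first rebuild
	fmt.Println(pool.MaybeRebuild(Eth1Data{Root: "a"}, deposits)) // prints false: root unchanged
	fmt.Println(pool.FetchUpTo(2))
}
```

Gating the rebuild on the eth1data root is what keeps the background routine from doing trie regeneration every cycle when nothing has changed.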
Open Questions
Would these background routines lead to higher-than-expected resource usage? Packing attestations and generating deposit tries are not cheap, and doing this continuously (rather than sparsely) might degrade general beacon node performance.
What improvement in time-to-proposal could the average validator expect?
How effective would it be, given that attestations have a tight window in which to arrive (for the smallest inclusion distance and therefore the highest reward)?
Given how sensitive this part of the codebase is, could producing block proposals this way have adverse effects on the network (e.g. higher inclusion distances for validators)?