BlocksByRange under WS #2131
Conversation
This isn't possible. The sync has to start from the state provided by the user and go forward from there. Clients that respond with …
Yeah. For the record, our current strategy is going to be to ban any node that sends us this error (at least for the first 5 months). The ban lasts ~40 mins, which should be enough time for the peer to backfill, etc.
Yes, this is correct. We can't enforce that users must show up with a state that is exactly …. Note that, for a backfill of blocks only to that boundary, the only validity check (without getting an old state) that a node can do is to check that these blocks form a hash chain. Because the WS state is trusted, this check is enough to show validity of the blocks for storage and future serving. Note that if you backfill all the way to genesis, you could then check other validity conditions and could produce historic states.
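A minimal sketch of that hash-chain check in spec-style Python (assuming the pyspec's `hash_tree_root` is in scope and phase 0 `SignedBeaconBlock` fields; the function name here is hypothetical):

```python
from typing import Sequence

def verify_backfill_hash_chain(anchor_block, blocks: Sequence) -> bool:
    """
    Check that `blocks` (oldest first) form a hash chain terminating at
    `anchor_block`, the trusted block from the WS checkpoint. No state is
    needed: each child's `parent_root` must commit to the block before it.
    """
    child = anchor_block
    # Walk backwards from the trusted anchor towards the oldest block.
    for block in reversed(blocks):
        if child.message.parent_root != hash_tree_root(block.message):
            return False
        child = block
    return True
```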
Why are the two related? When we ask for …
It should also do a signature check, else it can be poisoned with blocks that have the correct root but an invalid signature. This should be possible since we have the proposer index from the block and the validator set of the WS state. The point here is that if we want to ensure that blocks stay around, it's more likely to happen if the easiest thing to do is to get and store the blocks; starting range sync from the WS state and then "maybe" backfilling makes it less likely that the blocks will stay around. To make the sync requirement even stronger, it would make some sense to bake in "probe" range requests even when synced, and disconnect any client that does not serve the blocks; otherwise, already-synced clients have little to lose (being disconnected by an unsynced client is a small loss; being disconnected by a synced client means a clear and present risk that your attestations will not make it through).
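In the same spec-style Python, the proposer-signature check described above might look like the sketch below, assuming the pyspec's `get_domain`, `compute_signing_root`, `compute_epoch_at_slot`, `bls`, and `DOMAIN_BEACON_PROPOSER` are in scope (at phase 0 a single fork version keeps the domain computation simple):

```python
def verify_backfilled_block_signature(ws_state, signed_block) -> bool:
    """
    Verify a backfilled block's proposer signature using only the validator
    set of the trusted WS state; no historic state is required because the
    proposer index comes from the block itself.
    """
    block = signed_block.message
    proposer = ws_state.validators[block.proposer_index]
    # Use the domain at the block's own epoch, not the WS state's epoch.
    domain = get_domain(ws_state, DOMAIN_BEACON_PROPOSER,
                        compute_epoch_at_slot(block.slot))
    signing_root = compute_signing_root(block, domain)
    return bls.Verify(proposer.pubkey, signing_root, signed_block.signature)
```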
Because you have no way to validate a block from …. You can check that the blocks being served to you form a chain, and you can even validate the proposer signature against the validator set you have from your WS state, but you cannot validate that the sequence of blocks being sent to you is valid w.r.t. the state transition, nor can you validate that the blocks will ultimately reach the chain you decided was valid through your input of the WS state. Thus an attacker can serve you many blocks before you can know whether they are valuable/valid, and can thereby cause you to consume extra bandwidth.
Yes, agreed. Because you are storing the …
## Issue Addressed

Related to #1891. The error is not in the spec yet (see ethereum/consensus-specs#2131).

## Proposed Changes

Implement the proposed error, banning peers that send it.

## Additional Info

NA
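As a rough illustration of the proposed client behaviour (a sketch with hypothetical names, in Python rather than Lighthouse's actual Rust): on receiving the new error code for a BlocksByRange request, temporarily ban the peer, roughly matching the ~40 min ban described earlier in the thread.

```python
import time

RESOURCE_UNAVAILABLE = 3        # proposed response code (not yet in the spec)
BAN_DURATION_SECONDS = 40 * 60  # ~40 min: enough time for the peer to backfill

banned_until: dict = {}

def on_blocks_by_range_error(peer_id: str, response_code: int) -> None:
    # A peer answering ResourceUnavailable cannot serve the range we need,
    # so ban it long enough that it has a chance to finish backfilling.
    if response_code == RESOURCE_UNAVAILABLE:
        banned_until[peer_id] = time.time() + BAN_DURATION_SECONDS

def is_banned(peer_id: str) -> bool:
    return time.time() < banned_until.get(peer_id, 0.0)
```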
LGTM.
There was at one point an error code that could be returned for any BlocksByRange request the node couldn't fulfil because it didn't yet have the blocks. I still think that would be very useful, as it makes it explicit that the node doesn't have the blocks, rather than returning an empty response, which could mean either that the node doesn't have the blocks or that there were no blocks in the range.
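To illustrate the ambiguity (a hypothetical client-side sketch; code `0` is `Success` in the phase 0 p2p spec, and `3` is the `ResourceUnavailable` code proposed here):

```python
SUCCESS = 0
RESOURCE_UNAVAILABLE = 3  # proposed

def interpret_blocks_by_range(response_code: int, blocks: list) -> str:
    if response_code == RESOURCE_UNAVAILABLE:
        # Explicit: the peer does not (yet) have the requested blocks.
        return "peer-missing-blocks"
    if response_code == SUCCESS and not blocks:
        # Ambiguous without the explicit code: were the requested slots
        # simply empty (skipped), or does the peer silently lack the blocks?
        return "empty-range-or-missing-blocks"
    return "ok"
```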
Generally looks good! Left some minor notes/comments.
This is something we can fix, I think: turning …
Co-authored-by: Alex Stokes <[email protected]>
Co-authored-by: Jacek Sieka <[email protected]>
Added the …
Partially addresses #2116.

Adds `MIN_EPOCHS_FOR_BLOCK_REQUESTS` for the minimum expectation of the epoch range for serving blocks (and thus how far a new node must backfill).

I opted not to advertise the served range in `Status` (…) or in ENR, since we don't want to modify `Status` or `MetaData` (likely with a protocol ID update) this near to genesis launch. `MIN_EPOCHS_FOR_BLOCK_REQUESTS` gives us 4+ months to upgrade the req/resp to make this info available.

The main thing the above solution won't capture during the first 4 months is the event that a node is back-filling blocks from a checkpoint state to genesis. In such a case, the peer might not be able to respond to BlocksByRange requests successfully. To handle this, I suggest we spec an error code for this case. This will carry forward in the future when BlocksByRange requests are made outside of the advertised range. We can either use `2 (ServerError)` or define `3 (ResourceUnavailable)`.

Questions:

- Should we use `2` or define `3`?
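A sketch of how the constant could constrain serving on the responder side (hypothetical `store` helpers; the concrete value was still open at this point, and the constant was later specced as 33024 epochs, roughly five months):

```python
SLOTS_PER_EPOCH = 32
GENESIS_SLOT = 0
MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024  # assumed value; see note above

SUCCESS = 0
RESOURCE_UNAVAILABLE = 3  # the error code proposed in this issue

def handle_blocks_by_range(store, current_slot: int, start_slot: int, count: int):
    """
    Serve a BlocksByRange request, signalling ResourceUnavailable when the
    request falls inside the range we are obliged to serve but have not yet
    backfilled. `store.oldest_available_slot`, `store.has_block`, and
    `store.get_block` are hypothetical helpers.
    """
    oldest_required_slot = max(
        GENESIS_SLOT,
        current_slot - MIN_EPOCHS_FOR_BLOCK_REQUESTS * SLOTS_PER_EPOCH,
    )
    if oldest_required_slot <= start_slot < store.oldest_available_slot:
        # We must serve this range but are still backfilling it.
        return RESOURCE_UNAVAILABLE, []
    requested = range(start_slot, start_slot + count)
    # Empty (skipped) slots are simply omitted from a successful response.
    return SUCCESS, [store.get_block(slot) for slot in requested
                     if store.has_block(slot)]
```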