Initial p2p sync implementation #1674
Conversation
crates/storage/src/connection.rs (outdated)

```rust
pub fn next_ancestor(
    &self,
    block: BlockNumber,
) -> anyhow::Result<Option<(BlockNumber, BlockHash)>> {
    block::next_ancestor(self, block)
}

pub fn next_ancestor_without_parent(
    &self,
    block: BlockNumber,
) -> anyhow::Result<Option<(BlockNumber, BlockHash)>> {
    block::next_ancestor_without_parent(self, block)
}
```
Better names for these would be appreciated.
I really like where this is heading 👍
This lets us associate data with a specific peer.
```rust
Direction::Backward => start.parent().unwrap_or_default(),
};

yield PeerData::new(peer, signed_header);
```
Formatting nit
Suggested change:

```rust
Direction::Backward => start.parent().unwrap_or_default(),
};
yield PeerData::new(peer, signed_header);
```
crates/pathfinder/src/sync/p2p.rs (outdated)

```rust
/// Performs [analysis](Self::analyse) of the [LocalState] by comparing it with a given L1 checkpoint, and
/// and [handles](Self::handle) the result.
```
Typo nit
Suggested change:

```rust
/// Performs [analysis](Self::analyse) of the [LocalState] by comparing it with a given L1 checkpoint,
/// and [handles](Self::handle) the result.
```
Co-authored-by: Nikša Sporin <[email protected]>
Initial stab at a p2p sync implementation. This only implements the header sync portion and is not yet connected to the binary at all.
Note
This has changed significantly from the original demo. The original used tasks connected via channels, whereas this new design uses streams and stream adapters, which I think is much easier to read.
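To make the difference concrete, here is a minimal, hypothetical sketch of the adapter-style pipeline, using std `Iterator` adapters as a stand-in for async streams (the actual PR uses stream adapters; all names and stages below are illustrative only):

```rust
// Hypothetical sketch: Iterator adapters standing in for async Stream
// adapters. Each stage wraps the previous one, so the whole pipeline reads
// top-to-bottom instead of being spread across channel-connected tasks.
fn main() {
    // Stand-in for raw header numbers arriving from peers.
    let raw_headers = vec![1u64, 2, 3, 4, 5];

    let processed: Vec<u64> = raw_headers
        .into_iter()
        .filter(|h| h % 2 == 1) // stand-in for a verification stage
        .map(|h| h * 10)        // stand-in for a decode/transform stage
        .collect();

    println!("{:?}", processed); // [10, 30, 50]
}
```

The design benefit is that each stage is a local transformation of the stream above it, rather than an independent task whose inputs and outputs have to be traced through channels.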
Design
The driving idea is that we can speed up disk IO by only syncing one type of data at a time, e.g. only block headers, then transaction data, and so on. This has the additional advantage of keeping processing simple, since much less needs to be done by a single process.
This PR implements the sync skeleton and then the header sync portion of it.
Nomenclature
I use two "definitions" throughout the code and comments:
EthereumStateUpdate
Design
To keep things simple and easy to reason about, but also secure, the sync process will only consider blocks that are secured by L1. This enables the following algorithm:
Data is also persisted to disk in chunks.
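A hypothetical sketch of the chunked persistence idea: buffer a fixed number of items and commit each batch in one step rather than writing every header individually. The chunk size, function names, and data are illustrative, not pathfinder's actual API:

```rust
// Illustrative chunk size; the real value would be tuned for IO throughput.
const CHUNK_SIZE: usize = 3;

fn persist_chunk(chunk: &[u64]) {
    // Stand-in for "write this batch inside a single storage transaction".
    println!("committing {} headers: {:?}", chunk.len(), chunk);
}

fn main() {
    let headers: Vec<u64> = (1..=7).collect();
    // std's slice::chunks yields full batches plus a final partial one.
    for chunk in headers.chunks(CHUNK_SIZE) {
        persist_chunk(chunk);
    }
}
```

Batching like this amortizes the per-write overhead (fsyncs, transaction setup) across many headers, which is where the disk IO win comes from.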
Known issues
This only syncs up to the latest L1 checkpoint, so this might leave a rather large gap of unsynced blocks still. I don't think we should worry about that at this stage. I prefer having a more secure sync, supplemented by using a non-sync approach for the rest. Ideally the gap would be small enough that we can reach the tip of L2 from the L1 checkpoint in one swoop.
There is a missing loop which should repeat the overall sync procedure, since after the first iteration we will again be behind the next L1 checkpoint. But that's easy to add.
RPC will be dodgy while this sync happens, but I think that's fine.
We need to be careful about any storage invariants we may break during sync. But if we find any such cases, we should prefer adjusting storage instead of malforming p2p sync to fit it.
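The missing outer loop mentioned in the known issues would have roughly this shape: repeat the sync pass until the local head catches up with the latest L1 checkpoint. Every function and value here is a stand-in for illustration, not the real pathfinder API:

```rust
// Stand-in: in reality the checkpoint is read from L1.
fn latest_l1_checkpoint() -> u64 {
    10
}

// Stand-in for one full sync pass (headers, transactions, state, ...).
fn sync_to(checkpoint: u64, local_head: &mut u64) {
    *local_head = checkpoint;
}

fn main() {
    let mut local_head = 0u64;
    loop {
        let checkpoint = latest_l1_checkpoint();
        if local_head >= checkpoint {
            // Caught up with L1; a non-checkpoint strategy covers the
            // remaining gap up to the L2 tip.
            break;
        }
        sync_to(checkpoint, &mut local_head);
    }
    assert_eq!(local_head, 10);
}
```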
Current status
However, I think that to keep the PR manageable this is a good place to stop for now.