[Merged by Bors] - Deposit Cache Finalization & Fast WS Sync #2915
Conversation
Looks like a few of the tests aren't compiling currently.
I found myself really wanting fast deposit sync the other day so I decided to come review this! 🤩
It's an awesome feature and you've done great work specifying and implementing it. I've left some comments that I think will help us get this closer to merging.
Before we merge I'd also like to run this on a lot of nodes (Prater would be good), and get Pawan or Paul to have a look over it as well (we can tag 'em when the time comes).
## Proposed Changes

Add a `lighthouse db` command with three initial subcommands:

- `lighthouse db version`: print the database schema version.
- `lighthouse db migrate --to N`: manually upgrade (or downgrade!) the database to a different version.
- `lighthouse db inspect --column C`: log the key and size in bytes of every value in a given `DBColumn`.

This PR lays the groundwork for other changes, namely:

- Mark's fast-deposit sync (#2915), for which I think we should implement a database downgrade (from v9 to v8).
- My `tree-states` work, which already implements a downgrade (v10 to v8).
- Standalone purge commands like `lighthouse db purge-dht` per #2824.

## Additional Info

I updated the `strum` crate to 0.24.0, which necessitated some changes in the network code to remove calls to deprecated methods.

Thanks to @winksaville for the motivation, and implementation work that I used as a source of inspiration (#2685).
Looking really good!
There are only two things I'd like to resolve before we merge:
- What to do about the database schema change, i.e. this comment. I think you mentioned something in Amsterdam about why this wasn't required, but I didn't follow the reasoning. Do you think we should take the `--eth1-purge-cache` approach for downgrades? I still prefer the explicit schema migration, which would be v9 -> v10, with a downgrade v10 -> v9. The `lighthouse db migrate` command is in `unstable` now, so we'd just need to add the upgrade and downgrade to `schema_change.rs`. And we'll still have the option of removing them later, as in Mac's recent PR [Merged by Bors] - Remove DB migrations for legacy database schemas #3181.
- Whether we can `impl Encode/Decode for Option<T>` (this comment). I'll ping Paul to bring his attention to it.
The other thing to discuss is release timing. I think we could either ship this in v2.3.0 along with the v9 schema upgrade for #3157, so that most users only perceive a single upgrade (v8 -> v10), or we wait until after the merge. Currently there's no urgent push for v2.3.0, so we may be able to get this in before.
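One way to picture the stepwise migration being discussed (`lighthouse db migrate --to N`, with each upgrade having a matching downgrade) is as a chain of single-version hops. The sketch below is purely illustrative: the function name and the supported version pairs are assumptions for demonstration, not the real `schema_change.rs` code.

```rust
// Hypothetical sketch of stepwise schema migration: every hop moves exactly
// one version, so any `--to N` target is reached by chaining single-step
// upgrades or downgrades, and each step can be reversed.
fn migrate_schema(from: u64, to: u64) -> Result<Vec<String>, String> {
    let mut current = from;
    let mut steps = Vec::new();
    while current != to {
        let next = if current < to { current + 1 } else { current - 1 };
        // A real implementation would rewrite on-disk data here and only
        // then persist the new version number.
        match (current, next) {
            (8, 9) | (9, 8) | (9, 10) | (10, 9) => {
                steps.push(format!("v{current} -> v{next}"));
            }
            _ => return Err(format!("no migration from v{current} to v{next}")),
        }
        current = next;
    }
    Ok(steps)
}

fn main() {
    // Downgrading v10 -> v8 runs two single-version steps.
    let steps = migrate_schema(10, 8).unwrap();
    assert_eq!(steps, vec!["v10 -> v9".to_string(), "v9 -> v8".to_string()]);
    println!("{steps:?}");
}
```

The appeal of this shape is that removing support for an old schema later (as in #3181) just means deleting arms from the `match`.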
beacon_node/eth1/src/service.rs
Outdated
#[tokio::main]
pub async fn first_success_blocking<'a, F, O, R>(
I reckon let's delete it. Sometimes I put snippets like this in a GitHub gist in case I ever want to come back to them.
consensus/merkle_proof/src/lib.rs
Outdated
};
}
if deposits == (0x1 << level) {
    return Ok(MerkleTree::Finalized(finalized_branches[0]));
My bad, I agree the way it is now is better
bors r+
## Summary

The deposit cache now has the ability to finalize deposits. This will cause it to drop unneeded deposit logs and hashes in the deposit Merkle tree that are no longer required to construct deposit proofs. The cache is finalized whenever the latest finalized checkpoint has a new `Eth1Data` with all deposits imported.

This has three benefits:

1. Improves the speed of constructing Merkle proofs for deposits, as we can just replay deposits since the last finalized checkpoint instead of all historical deposits when re-constructing the Merkle tree.
2. Significantly faster weak subjectivity sync, as the deposit cache can be transferred to the newly syncing node in compressed form. The Merkle tree that stores `N` finalized deposits requires a maximum of `log2(N)` hashes. The newly syncing node then only needs to download deposits since the last finalized checkpoint to have a full tree.
3. Future proofing in preparation for [EIP-4444](https://eips.ethereum.org/EIPS/eip-4444), as execution nodes will no longer be required to store logs permanently, so we won't always have all historical logs available to us.

## More Details

Image to illustrate how the deposit contract Merkle tree evolves and finalizes along with the resulting `DepositTreeSnapshot`:

![image](https://user-images.githubusercontent.com/37123614/151465302-5fc56284-8a69-4998-b20e-45db3934ac70.png)

## Other Considerations

I've changed the structure of the `SszDepositCache`, so once you load & save your database from this version of Lighthouse, you will no longer be able to load it from older versions.

Co-authored-by: ethDreamer <[email protected]>
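The `log2(N)` claim in the summary is the incremental Merkle tree trick also used by the Eth1 deposit contract: only one branch hash per tree level needs to be kept for a finalized prefix of leaves. Below is a minimal, self-contained sketch of that idea; the hash function is a non-cryptographic stand-in and the type names are hypothetical, not Lighthouse's actual `DepositTreeSnapshot` types.

```rust
// Toy sketch of the "finalized" incremental Merkle tree idea: a tree with N
// appended leaves needs at most DEPTH (= log2 of capacity) stored hashes.
// `combine` is a NON-cryptographic stand-in for SHA-256.

const DEPTH: usize = 8; // toy depth (capacity 256); the real deposit tree uses 32

fn combine(left: u64, right: u64) -> u64 {
    // Stand-in for hash(left || right); the +1 keeps empty subtrees distinct.
    left.wrapping_mul(1_000_003).wrapping_add(right).wrapping_add(1)
}

fn zero_hashes() -> [u64; DEPTH] {
    // zero_hashes[h] = root of an all-zero subtree of height h.
    let mut z = [0u64; DEPTH];
    let mut node = 0u64;
    for h in 0..DEPTH {
        z[h] = node;
        node = combine(node, node);
    }
    z
}

struct IncrementalTree {
    branch: [u64; DEPTH], // at most DEPTH stored hashes, regardless of leaf count
    count: u64,
}

impl IncrementalTree {
    fn new() -> Self {
        Self { branch: [0; DEPTH], count: 0 }
    }

    /// Append a leaf, touching only O(log N) stored hashes
    /// (same scheme as the Eth1 deposit contract's incremental tree).
    fn push(&mut self, leaf: u64) {
        self.count += 1;
        let mut size = self.count;
        let mut node = leaf;
        for h in 0..DEPTH {
            if size % 2 == 1 {
                self.branch[h] = node;
                return;
            }
            node = combine(self.branch[h], node);
            size /= 2;
        }
        panic!("tree is full");
    }

    /// Recompute the root from the DEPTH stored branch hashes.
    fn root(&self) -> u64 {
        let zeros = zero_hashes();
        let mut size = self.count;
        let mut node = 0u64;
        for h in 0..DEPTH {
            node = if size % 2 == 1 {
                combine(self.branch[h], node)
            } else {
                combine(node, zeros[h])
            };
            size /= 2;
        }
        node
    }
}

/// Naive full-tree root, used only to check the incremental version.
fn naive_root(leaves: &[u64], height: usize) -> u64 {
    if height == 0 {
        return leaves.first().copied().unwrap_or(0);
    }
    let half = 1usize << (height - 1);
    let (l, r) = if leaves.len() <= half {
        (leaves, &[][..])
    } else {
        leaves.split_at(half)
    };
    combine(naive_root(l, height - 1), naive_root(r, height - 1))
}

fn main() {
    let leaves: Vec<u64> = (1..=100).collect();
    let mut tree = IncrementalTree::new();
    for &leaf in &leaves {
        tree.push(leaf);
    }
    // 100 leaves, but only DEPTH = 8 hashes stored.
    assert_eq!(tree.root(), naive_root(&leaves, DEPTH));
    println!("roots match for {} leaves with {} stored hashes", tree.count, DEPTH);
}
```

In this toy model, finalizing a prefix of deposits roughly amounts to keeping just the branch array plus the deposit count, then replaying only post-finalization deposits on top.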
Build failed:
dang, when you have a sec @ethDreamer looks like there are some fixups required after the latest refactor
clippy unhappy as well
@michaelsproul looks like another spurious failure of the Windows tests.
let's try bors! bors r+
Pull request successfully merged into `unstable`. Build succeeded:
It appears DappNode's sync URL does not yet support this. Is there a recommended alternative URL for DappNode?
DappNode's default URL is https://checkpoint-sync.dappnode.io
I've tried several URLs (the main one, DappNode's, and a couple of others) and I'm getting the same error about 4881.
@michaelsproul it doesn't ignore the error for me. I don't know if it's because it's for the Gnosis network, but it shuts down on that error (#4559).
@icculp That's not the true error; there must be something else amiss. If you get an exit code 132 then you need the portable binary. If the checkpoint sync URL is timing out, then you can lengthen the timeout with `--checkpoint-sync-url-timeout`.
Well, I am in the same situation with Gnosis. I did try to add the `--checkpoint-sync-url-timeout` flag, but the result is still the same error:
@fjvva The issue there is that the endpoint isn't returning a block:
It's this issue. I asked the Gnosis devs and they're aware of it and will push a fix. In the meantime, you'd be best off trying another checkpoint sync provider, although for Gnosis there don't seem to be many. I got a different error from gateway.fm's server. Might just need to wait for the fix.
@michaelsproul thanks for the heads up. I guess Gnosis users will just have to wait a bit then.

Edit: If anyone else has the same problem, this checkpoint provider works: https://checkpoint-sync-gnosis.dappnode.io/
Hi there,

```
Aug 10 21:08:16.544 INFO Logging to file path: "/var/lib/lighthouse/beacon/logs/beacon.log"
Aug 10 21:08:16.544 INFO Lighthouse started version: Lighthouse/v4.2.0-c547a11
Aug 10 21:08:16.544 INFO Configured for network name: gnosis
Aug 10 21:08:16.546 INFO Data directory initialised datadir: /var/lib/lighthouse
Aug 10 21:08:16.546 INFO Deposit contract address: 0x0b98057ea310f4d31f2a452b414647007d1645d9, deploy_block: 19469077
Aug 10 21:08:16.722 INFO Starting checkpoint sync remote_url: https://checkpoint.gnosischain.com/, service: beacon
Aug 10 21:08:16.850 WARN Remote BN does not support EIP-4881 fast deposit sync, error: Error fetching deposit snapshot from remote: ServerMessage(ErrorMessage { code: 415, message: "unsupported content-type: application/octet-stream", stacktraces: [] }), service: beacon
Aug 10 21:08:16.885 CRIT Failed to start beacon node reason: Unable to parse SSZ: OffsetSkipsVariableBytes(388). Ensure the checkpoint-sync-url refers to a node for the correct network
Aug 10 21:08:16.885 INFO Internal shutdown received reason: Failed to start beacon node
Aug 10 21:08:16.885 INFO Shutting down.. reason: Failure("Failed to start beacon node")
Failed to start beacon node
```

```
Aug 10 21:44:04.213 INFO Logging to file path: "/var/lib/lighthouse/beacon/logs/beacon.log"
Aug 10 21:44:04.214 INFO Lighthouse started version: Lighthouse/v4.2.0-c547a11
Aug 10 21:44:04.214 INFO Configured for network name: gnosis
Aug 10 21:44:04.215 INFO Data directory initialised datadir: /var/lib/lighthouse
Aug 10 21:44:04.216 INFO Deposit contract address: 0x0b98057ea310f4d31f2a452b414647007d1645d9, deploy_block: 19469077
Aug 10 21:44:04.268 INFO Starting checkpoint sync remote_url: https://checkpoint-sync-gnosis.dappnode.io/, service: beacon
Aug 10 21:44:04.726 CRIT Failed to start beacon node reason: Unable to parse SSZ: OffsetSkipsVariableBytes(388). Ensure the checkpoint-sync-url refers to a node for the correct network
Aug 10 21:44:04.727 INFO Internal shutdown received reason: Failed to start beacon node
Aug 10 21:44:04.729 INFO Shutting down.. reason: Failure("Failed to start beacon node")
Failed to start beacon node
```

I'm using … I appreciate your support.
@chilcano You need Lighthouse v4.3.0, because the Shapella hard fork has happened on Gnosis.
Thanks @michaelsproul
I'll lock this thread now so that it doesn't continue as a generic checkpoint sync debugging thread. If anyone else stumbles upon the EIP-4881 warning or can't get checkpoint sync to work, please open a new issue or contact us on Discord.