
[Merged by Bors] - Adjust beacon node timeouts for validator client HTTP requests #2352


Conversation

@macladson (Member) commented May 19, 2021

Issue Addressed

Resolves #2313

Proposed Changes

Provide `BeaconNodeHttpClient` with a dedicated `Timeouts` struct.
This will allow granular adjustment of the timeout duration for different calls made from the VC to the BN. These can be either a constant value or a ratio of the slot duration.

Improve timeout performance by using these adjusted timeout durations only when a fallback endpoint is available.

Add a CLI flag called `use-long-timeouts` to revert to the old behavior.

Additional Info

Additionally, set the default `BeaconNodeHttpClient` timeouts to be the slot duration of the network, rather than a constant 12 seconds. This allows them to adjust to different network specifications.
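
To make the shape of this concrete, here is a minimal sketch of a per-endpoint `Timeouts` struct derived from the slot duration. The field names, ratios, and constructor are illustrative assumptions, not the exact code merged in this PR:

```rust
use std::time::Duration;

/// Illustrative per-call VC -> BN timeouts (field names assumed).
#[derive(Clone)]
pub struct Timeouts {
    pub attestation: Duration,
    pub attester_duties: Duration,
    pub proposal: Duration,
}

impl Timeouts {
    /// Derive each timeout as a fraction of the slot duration (a ratio of 3
    /// gives 4s on a 12s-slot network). With `use_long_timeouts`, fall back
    /// to the full slot duration for every call.
    pub fn new(slot_duration: Duration, use_long_timeouts: bool) -> Self {
        let of_ratio = |ratio: u32| {
            if use_long_timeouts {
                slot_duration
            } else {
                slot_duration / ratio
            }
        };
        Self {
            attestation: of_ratio(3),
            attester_duties: of_ratio(4),
            proposal: of_ratio(2),
        }
    }
}
```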

@macladson (Member, Author) commented May 19, 2021

A few additional notes:

  • As per the reqwest documentation, timeout durations can be adjusted on a per-request basis without overwriting the default client timeout (see the sketch after this list).

  • We should probably have a discussion around what kind of timeouts are desired here to ensure proper fallback behavior. As a start, I've put in a ratio of 3 (a 4-second timeout on mainnet-spec networks) for all calls relevant to attestation production.

  • Something we should also keep in mind is the use of remote beacon nodes (like Infura) and the need to account for the additional latency involved. A timeout suitable for a local node might not be suitable for a remote one. To get a sense of the constraints we are working with, I ran this overnight using an Infura beacon node with a 1-second timeout per attestation request and didn't have any attestation timeouts, so I suspect we will generally be fine.
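
For reference, a sketch of the reqwest pattern from the first bullet: `RequestBuilder::timeout` overrides the client-wide default for a single request only. The URL and duration below are placeholders:

```rust
use reqwest::{Client, Error, Response};
use std::time::Duration;

// The per-request timeout applies to this request only; other requests
// made through the same client keep the client's default timeout.
async fn get_with_timeout(client: &Client, url: &str, timeout: Duration) -> Result<Response, Error> {
    client.get(url).timeout(timeout).send().await
}
```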

macladson marked this pull request as draft May 19, 2021 02:34
@macladson (Member, Author) commented May 19, 2021

Just had an attestation timeout in the eth1-simulator-ubuntu CI test the first time it ran (but not the second). Not sure if it was a one-off issue or if it means that this specific test isn't consistently able to handle shorter timeouts.

macladson marked this pull request as ready for review May 19, 2021 03:54
paulhauner added the ready-for-review label May 19, 2021
macladson marked this pull request as draft May 20, 2021 06:05
paulhauner added the work-in-progress label and removed the ready-for-review label May 20, 2021
macladson changed the title from "Adjust timeout duration for VC attestation duties" to "Adjust beacon node timeouts for validator client HTTP requests" Jun 3, 2021
macladson marked this pull request as ready for review June 4, 2021 03:29
paulhauner added the ready-for-review and v1.5.0 labels and removed the work-in-progress label Jun 4, 2021
@paulhauner (Member) left a comment

Heyo! Looking good, I'm sure this would work fine as-is. I've just made a few nitpicks.

I've also added some metrics via this commit: 0e7ac92. I'd suggest cherry-picking it here so we can have them in the future too.

I'm running some metrics on a couple of Prater nodes. Here's the info so far:

[screenshot: http_times]

We can let it run over the weekend and see what sort of times we're seeing on those requests. From an initial glance it looks like your suggestions are pretty good. I'll give you access to the metrics nodes when we're both online again (probably Monday).

Review threads (outdated, resolved):
  • common/eth2/src/lib.rs (×2)
  • validator_client/src/cli.rs
  • validator_client/src/lib.rs (×2)
paulhauner added the waiting-on-author label and removed the ready-for-review label Jun 18, 2021
@macladson (Member, Author) commented:

Thanks for the suggestions, and for the additional metrics! Let me know if you feel the timeout durations need any more adjustment.

@paulhauner (Member) commented Jun 21, 2021

Here's the latest times, over 2 days on Prater:

[screenshot: http_times2]

I'll make a commit with some suggestions based off this and drop it here 🙂

@paulhauner (Member) commented Jun 21, 2021

Actually, I think this view is better. It shows the maximum times during the last 2 days on Prater:

[screenshot: http_times_vals]

@paulhauner (Member) commented:

Here's my suggested timeouts 🙂 Please double check and let me know if I've done anything stupid 🙏

paulhauner@02e4b82

@macladson (Member, Author) commented:

Looks good to me! I'll pull it in.

@paulhauner (Member) commented:

I'm running this on our European Prater nodes. After that's been running a while, I'll merge this in :)

@paulhauner (Member) commented:

This ran well on the testnet nodes! I'd like to check with @michaelsproul before merging. We might want to block this on #2279.

paulhauner added the blocked label and removed the waiting-on-author label Jul 7, 2021
@paulhauner (Member) commented:

#2279 should merge today or early next week, we can merge this after 🙂

@paulhauner (Member) commented:

#2279 has merged! 🎉 No conflicts visible yet, let's try bors.

bors r+

bors bot pushed a commit that referenced this pull request Jul 12, 2021
paulhauner added the ready-for-merge label Jul 12, 2021
bors bot changed the title to "[Merged by Bors] - Adjust beacon node timeouts for validator client HTTP requests" Jul 12, 2021
bors bot closed this Jul 12, 2021
AgeManning added a commit that referenced this pull request Jul 13, 2021
* Adjust beacon node timeouts for validator client HTTP requests (#2352)

* Use read_recursive locks in database (#2417)

## Issue Addressed

Closes #2245

## Proposed Changes

Replace all calls to `RwLock::read` in the `store` crate with `RwLock::read_recursive`.

## Additional Info

* Unfortunately we can't run the deadlock detector on CI because it's pinned to an old Rust 1.51.0 nightly which cannot compile Lighthouse (one of our deps uses `ptr::addr_of!` which is too new). A fun side-project at some point might be to update the deadlock detector.
* The reason I think we haven't seen this deadlock (at all?) in practice is that _writes_ to the database's split point are quite infrequent, and a concurrent write is required to trigger the deadlock. The split point is only written when finalization advances, which is once per epoch (every ~6 minutes), and state reads are also quite sporadic. Perhaps we've just been incredibly lucky, or there's something about the timing of state reads vs database migration that protects us.
* I wrote a few small programs to demo the deadlock, and the effectiveness of the `read_recursive` fix: https://github.com/michaelsproul/relock_deadlock_mvp
* [The docs for `read_recursive`](https://docs.rs/lock_api/0.4.2/lock_api/struct.RwLock.html#method.read_recursive) warn of starvation for writers. I think in order for starvation to occur the database would have to be spammed with so many state reads that it's unable to ever clear them all and find time for a write, in which case migration of states to the freezer would cease. If an attack could be performed to trigger this starvation then it would likely trigger a deadlock in the current code, and I think ceasing migration is preferable to deadlocking in this extreme situation. In practice neither should occur due to protection from spammy peers at the network layer. Nevertheless, it would be prudent to run this change on the testnet nodes to check that it doesn't cause accidental starvation.
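
As an illustration of the hazard described above (a simplified sketch, not the actual `store` code): with parking_lot's fair `read`, a nested read can deadlock if a writer queues between the two acquisitions, whereas `read_recursive` skips the fairness check:

```rust
use parking_lot::RwLock;

fn double_read(lock: &RwLock<u64>) -> u64 {
    let outer = lock.read();
    // If another thread calls `lock.write()` at this point, a second
    // `lock.read()` would queue behind that writer, which in turn waits
    // for `outer` to drop: deadlock. `read_recursive` ignores the queued
    // writer, so it is safe to nest alongside an existing read guard.
    let inner = lock.read_recursive();
    *outer + *inner
}
```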

* Return more detail when invalid data is found in the DB during startup (#2445)

## Issue Addressed

- Resolves #2444

## Proposed Changes

Adds some more detail to the error message returned when the `BeaconChainBuilder` is unable to access or decode block/state objects during startup.

## Additional Info

NA

* Use hardware acceleration for SHA256 (#2426)

## Proposed Changes

Modify the SHA256 implementation in `eth2_hashing` so that it switches between `ring` and `sha2` to take advantage of [x86_64 SHA extensions](https://en.wikipedia.org/wiki/Intel_SHA_extensions). The extensions are available on modern Intel and AMD CPUs, and seem to provide a considerable speed-up: on my Ryzen 5950X it dropped state tree hashing times by about 30% from 35ms to 25ms (on Prater).

## Additional Info

The extensions became available in the `sha2` crate [last year](https://www.reddit.com/r/rust/comments/hf2vcx/ann_rustcryptos_sha1_and_sha2_now_support/), and are not available in Ring, which uses a [pure Rust implementation of sha2](https://github.com/briansmith/ring/blob/main/src/digest/sha2.rs). Ring is faster on CPUs that lack the extensions, so I've implemented a runtime switch to use `sha2` only when the extensions are available. The runtime switching seems to impose a minuscule penalty (see the benchmarks linked below).
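
A sketch of the runtime-switch idea (assumed and simplified, not the actual `eth2_hashing` code):

```rust
use sha2::{Digest, Sha256};

pub fn hash(data: &[u8]) -> [u8; 32] {
    // Use sha2 (which emits SHA-NI instructions) only when the CPU
    // advertises the SHA extensions; otherwise prefer ring.
    #[cfg(target_arch = "x86_64")]
    {
        if std::is_x86_feature_detected!("sha") {
            return Sha256::digest(data).into();
        }
    }
    let digest = ring::digest::digest(&ring::digest::SHA256, data);
    let mut out = [0u8; 32];
    out.copy_from_slice(digest.as_ref());
    out
}
```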

* Start a release checklist (#2270)

## Issue Addressed

NA

## Proposed Changes

Add a checklist to the release draft created by CI. I know @michaelsproul was also working on this and I suspect @realbigsean also might have useful input.

## Additional Info

NA

* Serious banning

* fmt

Co-authored-by: Mac L <[email protected]>
Co-authored-by: Paul Hauner <[email protected]>
Co-authored-by: Michael Sproul <[email protected]>
AgeManning added a commit that referenced this pull request Jul 13, 2021
macladson deleted the vc-timeout-adjustment branch July 14, 2021 07:17
AgeManning added a commit that referenced this pull request Jul 15, 2021
AgeManning added a commit that referenced this pull request Jul 15, 2021
paulhauner added a commit to paulhauner/lighthouse that referenced this pull request Jul 22, 2021
paulhauner added a commit to paulhauner/lighthouse that referenced this pull request Aug 2, 2021
commit c5786a8
Author: realbigsean <[email protected]>
Date:   Sat Jul 31 03:50:52 2021 +0000

    Doppelganger detection (sigp#2230)

    ## Issue Addressed

    Resolves sigp#2069

    ## Proposed Changes

    - Adds a `--doppelganger-detection` flag
    - Adds a `lighthouse/seen_validators` endpoint, which will make it so the lighthouse VC is not interoperable with other client beacon nodes if the `--doppelganger-detection` flag is used, but hopefully this will become standardized. Relevant Eth2 API repo issue: ethereum/beacon-APIs#64
    - If the `--doppelganger-detection` flag is used, the VC will wait until the beacon node is synced, and then wait an additional 2 epochs. The reason for this is to make sure the beacon node is able to subscribe to the subnets our validators should be attesting on. I think an alternative would be to have the beacon node subscribe to all subnets for 2+ epochs on startup by default.

    ## Additional Info

    I'd like to add tests and would appreciate feedback.

    TODO:  handle validators started via the API, potentially make this default behavior

    Co-authored-by: realbigsean <[email protected]>
    Co-authored-by: Michael Sproul <[email protected]>
    Co-authored-by: Paul Hauner <[email protected]>
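
A sketch of the start-up delay described in that commit (names and wiring assumed, not the actual VC code):

```rust
use std::time::Duration;

// Wait until the BN reports synced, then a further two epochs, before
// allowing the validator client to start signing.
async fn doppelganger_startup_delay(
    seconds_per_slot: u64,
    slots_per_epoch: u64,
    is_synced: impl Fn() -> bool,
) {
    while !is_synced() {
        tokio::time::sleep(Duration::from_secs(seconds_per_slot)).await;
    }
    let two_epochs = 2 * slots_per_epoch * seconds_per_slot;
    tokio::time::sleep(Duration::from_secs(two_epochs)).await;
}
```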

commit 834ee98
Author: SaNNNNNNNN <[email protected]>
Date:   Sat Jul 31 02:24:09 2021 +0000

    Fix flag in redundancy docs (sigp#2482)

    Replace all --process-all-attestations with --import-all-attestations


commit 303deb9
Author: realbigsean <[email protected]>
Date:   Fri Jul 30 01:11:47 2021 +0000

    Rust 1.54.0 lints (sigp#2483)

    ## Issue Addressed

    N/A

    ## Proposed Changes

    - Removing a bunch of unnecessary references
    - Updated `Error::VariantError` to `Error::Variant`
    - There were additional enum variant lints that I ignored, because I thought our variant names were fine
    - removed `MonitoredValidator`'s `pubkey` field, because I couldn't find it used anywhere. It looks like we just use the string version of the pubkey (the `id` field) if there is no index

    ## Additional Info

    Co-authored-by: realbigsean <[email protected]>

commit 8efd9fc
Author: Paul Hauner <[email protected]>
Date:   Thu Jul 29 04:38:26 2021 +0000

    Add `AttesterCache` for attestation production (sigp#2478)

    ## Issue Addressed

    - Resolves sigp#2169

    ## Proposed Changes

    Adds the `AttesterCache` to allow validators to produce attestations for older slots. Presently, some arbitrary restrictions can force validators to receive an error when attesting to a slot earlier than the present one. This can cause attestation misses when there is excessive load on the validator client or time sync issues between the VC and BN.

    ## Additional Info

    NA

commit 1d4f90e
Author: Michael Sproul <[email protected]>
Date:   Thu Jul 29 02:16:54 2021 +0000

    Bump tests to v1.1.0-beta.2 (sigp#2481)

    ## Proposed Changes

    Bump spec tests to v1.1.0-beta.2, for conformance with the latest spec release: https://github.com/ethereum/eth2.0-specs/releases/tag/v1.1.0-beta.2

    ## Additional Info

    We already happen to be compatible with the latest spec change that requires sync contributions to have at least one bit set. I'm gonna call it foresight on @realbigsean's part 😎

    https://github.com/sigp/lighthouse/blob/6e3ca48cb934a63cafdc940068825a48cba740d1/beacon_node/beacon_chain/src/sync_committee_verification.rs#L285-L288

commit 923486f
Author: Michael Sproul <[email protected]>
Date:   Wed Jul 28 05:40:21 2021 +0000

    Use bulk verification for sync_aggregate signature (sigp#2415)

    ## Proposed Changes

    Add the `sync_aggregate` from `BeaconBlock` to the bulk signature verifier for blocks. This necessitates a new signature set constructor for the sync aggregate, which is different from the others due to the use of [`eth2_fast_aggregate_verify`](https://github.com/ethereum/eth2.0-specs/blob/v1.1.0-alpha.7/specs/altair/bls.md#eth2_fast_aggregate_verify) for sync aggregates, per [`process_sync_aggregate`](https://github.com/ethereum/eth2.0-specs/blob/v1.1.0-alpha.7/specs/altair/beacon-chain.md#sync-aggregate-processing). I made the choice to return an optional signature set, with `None` representing the case where the signature is valid on account of being the point at infinity (requires no further checking).

    To "dogfood" the changes and prevent duplication, the consensus logic now uses the signature set approach as well whenever it is required to verify signatures (which should only be in testing AFAIK). The EF tests pass with the code as it exists currently, but failed before I adapted the `eth2_fast_aggregate_verify` changes (which is good).

    As a result of this change Altair block processing should be a little faster, and importantly, we will no longer accidentally verify signatures when replaying blocks, e.g. when replaying blocks from the database.

commit 6e3ca48
Author: Paul Hauner <[email protected]>
Date:   Tue Jul 27 07:01:01 2021 +0000

    Cache participating indices for Altair epoch processing (sigp#2416)

    ## Issue Addressed

    NA

    ## Proposed Changes

    This PR addresses two things:

    1. Allows the `ValidatorMonitor` to work with Altair states.
    2. Optimizes `altair::process_epoch` (see [code](https://github.com/paulhauner/lighthouse/blob/participation-cache/consensus/state_processing/src/per_epoch_processing/altair/participation_cache.rs) for description)

    ## Breaking Changes

    The breaking changes in this PR revolve around one premise:

    *After the Altair fork, it's no longer possible (given only a `BeaconState`) to identify if a validator had *any* attestation included during some epoch. The best we can do is see if that validator made the "timely" source/target/head flags.*

    Whilst this seems annoying, it's not actually too bad. Finalization is based upon "timely target" attestations, so that's really the most important thing. Although there's *some* value in knowing if a validator had *any* attestation included, it's far more important to know about "timely target" participation, since this is what affects finality and justification.

    For simplicity and consistency, I've also removed the ability to determine if *any* attestation was included from metrics and API endpoints. Now, all Altair and non-Altair states will simply report on the head/target attestations.

    The following section details where we've removed fields and provides replacement values.

    ### Breaking Changes: Prometheus Metrics

    Some participation metrics have been removed and replaced. Some were removed since they are no longer relevant to Altair (e.g., total attesting balance) and others replaced with gwei values instead of pre-computed values. This provides more flexibility at display-time (e.g., Grafana).

    The following metrics were added as replacements:

    - `beacon_participation_prev_epoch_head_attesting_gwei_total`
    - `beacon_participation_prev_epoch_target_attesting_gwei_total`
    - `beacon_participation_prev_epoch_source_attesting_gwei_total`
    - `beacon_participation_prev_epoch_active_gwei_total`

    The following metrics were removed:

    - `beacon_participation_prev_epoch_attester`
       - instead use `beacon_participation_prev_epoch_source_attesting_gwei_total / beacon_participation_prev_epoch_active_gwei_total`.
    - `beacon_participation_prev_epoch_target_attester`
       - instead use `beacon_participation_prev_epoch_target_attesting_gwei_total / beacon_participation_prev_epoch_active_gwei_total`.
    - `beacon_participation_prev_epoch_head_attester`
       - instead use `beacon_participation_prev_epoch_head_attesting_gwei_total / beacon_participation_prev_epoch_active_gwei_total`.

    The `beacon_participation_prev_epoch_attester` endpoint has been removed. Users should instead use the pre-existing `beacon_participation_prev_epoch_target_attester`.

    ### Breaking Changes: HTTP API

    The `/lighthouse/validator_inclusion/{epoch}/{validator_id}` endpoint loses the following fields:

    - `current_epoch_attesting_gwei` (use `current_epoch_target_attesting_gwei` instead)
    - `previous_epoch_attesting_gwei` (use `previous_epoch_target_attesting_gwei` instead)

    The `/lighthouse/validator_inclusion/{epoch}/{validator_id}` endpoint loses the following fields:

    - `is_current_epoch_attester` (use `is_current_epoch_target_attester` instead)
    - `is_previous_epoch_attester` (use `is_previous_epoch_target_attester` instead)
    - `is_active_in_current_epoch` becomes `is_active_unslashed_in_current_epoch`.
    - `is_active_in_previous_epoch` becomes `is_active_unslashed_in_previous_epoch`.

    ## Additional Info

    NA

    ## TODO

    - [x] Deal with total balances
    - [x] Update validator_inclusion API
    - [ ] Ensure `beacon_participation_prev_epoch_target_attester` and `beacon_participation_prev_epoch_head_attester` work before Altair

    Co-authored-by: realbigsean <[email protected]>

commit f5bdca0
Author: Michael Sproul <[email protected]>
Date:   Tue Jul 27 05:43:35 2021 +0000

    Update to spec v1.1.0-beta.1 (sigp#2460)

    ## Proposed Changes

    Update to the latest version of the Altair spec, which includes new tests and a tweak to the target sync aggregators.

    ## Additional Info

    This change is _not_ required for the imminent Altair devnet, and is waiting on the merge of sigp#2321 to unstable.

    Co-authored-by: Paul Hauner <[email protected]>

commit 84e6d71
Author: Michael Sproul <[email protected]>
Date:   Fri Jul 23 00:23:53 2021 +0000

    Tree hash caching and optimisations for Altair (sigp#2459)

    ## Proposed Changes

    Remove the remaining Altair `FIXME`s from consensus land.

    1. Implement tree hash caching for the participation lists. This required some light type manipulation, including removing the `TreeHash` bound from `CachedTreeHash` which was purely descriptive.
    2. Plumb the proposer index through Altair attestation processing, to avoid calculating it for _every_ attestation (potentially 128ms on large networks). This duplicates some work from sigp#2431, but with the aim of getting it in sooner, particularly for the Altair devnets.
    3. Removes two FIXMEs related to `superstruct` and cloning, which are unlikely to be particularly detrimental and will be tracked here instead: sigp/superstruct#5

commit 74aa99c
Author: Michael Sproul <[email protected]>
Date:   Thu Jul 22 01:37:01 2021 +0000

    Document BN API security considerations (sigp#2470)

    ## Issue Addressed

    Closes sigp#2468

    ## Proposed Changes

    Document security considerations for the beacon node API, with strong recommendations against exposing it to the internet.

commit 63923ea
Author: Michael Sproul <[email protected]>
Date:   Wed Jul 21 07:10:52 2021 +0000

    Bump discv5 to v0.1.0-beta.8 (sigp#2471)

    ## Proposed Changes

    Update discv5 to fix bugs seen on `altair-devnet-1`

commit 17b6d7c
Author: Mac L <[email protected]>
Date:   Wed Jul 21 07:10:51 2021 +0000

    Add `http-address` flag to VC (sigp#2467)

    ## Issue Addressed

    sigp#2454

    ## Proposed Changes

    Adds the `--http-address` flag to allow the user to use custom HTTP addresses. This can be helpful for certain Docker setups.

    Since using custom HTTP addresses is unsafe due to the server being unencrypted, `--unencrypted-http-transport` was also added as a safety flag and must be used in tandem with `--http-address`. This is to ensure the user is aware of the risks associated with using non-local HTTP addresses.
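
A clap (v2-style) sketch of how such a flag pairing could be wired, with `requires` enforcing the explicit opt-in; the wiring is assumed, not the exact merged code:

```rust
use clap::{App, Arg};

fn app<'a, 'b>() -> App<'a, 'b> {
    App::new("validator_client")
        .arg(
            Arg::with_name("http-address")
                .long("http-address")
                .value_name("ADDRESS")
                .takes_value(true)
                // Refuse a custom address unless the user explicitly
                // acknowledges the unencrypted transport.
                .requires("unencrypted-http-transport"),
        )
        .arg(
            Arg::with_name("unencrypted-http-transport")
                .long("unencrypted-http-transport")
                .help("Opt in to serving the HTTP API over an unencrypted transport"),
        )
}
```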

commit bcf8ba6
Author: realbigsean <[email protected]>
Date:   Wed Jul 21 03:24:23 2021 +0000

    Add lcli Dockerfile and auto-build to CI (sigp#2469)

    ## Issue Addressed

    Resolves: sigp#2087

    ## Proposed Changes

    - Add a `Dockerfile` to the `lcli` directory
    - Add a GitHub Actions job to build and push an `lcli` docker image on pushes to `unstable` and `stable`

    ## Additional Info

    It's a little awkward but `lcli` requires the full project scope so must be built:
    - from the `lighthouse` dir with: `docker build -f ./lcli/Dockerfile .`
    - from the `lcli` dir with: `docker build -f ./Dockerfile ../`

    Didn't include `libssl-dev` or `ca-certificates`; `lcli` doesn't need these, right?

    Co-authored-by: realbigsean <[email protected]>
    Co-authored-by: Michael Sproul <[email protected]>
    Co-authored-by: Michael Sproul <[email protected]>

commit 9a8320b
Merge: b0f5c4c 08fedbf
Author: Age Manning <[email protected]>
Date:   Thu Jul 15 18:15:07 2021 +1000

    Merge pull request sigp#2389 from sigp/network-1.5

    Network Updates for 1.5

commit 08fedbf
Author: Age Manning <[email protected]>
Date:   Thu Jul 15 10:53:59 2021 +1000

    Libp2p Connection Limit (sigp#2455)

    * Get libp2p to handle connection limits

    * fmt

commit 6818a94
Author: Age Manning <[email protected]>
Date:   Wed Jul 14 16:54:44 2021 +1000

    Discovery update (sigp#2458)

commit 381befb
Author: Age Manning <[email protected]>
Date:   Wed Jul 14 12:59:24 2021 +1000

    Ensure disconnecting peers are added to the peerdb (sigp#2451)

commit 059d9ec
Author: Age Manning <[email protected]>
Date:   Tue Jul 13 15:37:52 2021 +1000

    Gossipsub scoring improvements (sigp#2391)

    * Tweak gossipsub parameters for improved scoring

    * Modify gossip history

    * Update settings

    * Make mesh window constant

    * Decrease the mesh message deliveries weight

    * Fmt

commit c62810b
Author: Age Manning <[email protected]>
Date:   Tue Jul 13 14:37:25 2021 +1000

    Update to Libp2p to 39.1 (sigp#2448)

    * Adjust beacon node timeouts for validator client HTTP requests (sigp#2352)

    * Use read_recursive locks in database (sigp#2417)

    * Return more detail when invalid data is found in the DB during startup (sigp#2445)

    * Use hardware acceleration for SHA256 (sigp#2426)

    * Start a release checklist (sigp#2270)

    * Serious banning

    * fmt

    Co-authored-by: Mac L <[email protected]>
    Co-authored-by: Paul Hauner <[email protected]>
    Co-authored-by: Michael Sproul <[email protected]>

commit 3c0d322
Author: Age Manning <[email protected]>
Date:   Tue Jul 13 10:48:33 2021 +1000

    Global Network Behaviour Refactor (sigp#2442)

    * Network upgrades (sigp#2345)

    * Discovery patch (sigp#2382)

    * Upgrade libp2p and unstable gossip

    * Network protocol upgrades

    * Correct dependencies, reduce incoming bucket limit

    * Clean up dirty DHT entries before repopulating

    * Update cargo lock

    * Update lockfile

    * Update ENR dep

    * Update deps to specific versions

    * Update test dependencies

    * Update docker rust, and remote signer tests

    * More remote signer test fixes

    * Temp commit

    * Update discovery

    * Remove cached enrs after dialing

    * Increase the session capacity, for improved efficiency

    * Bleeding edge discovery (sigp#2435)

    * Update discovery banning logic and tokio

    * Update to latest discovery

    * Shift to latest discovery

    * Fmt

    * Initial re-factor of the behaviour

    * More progress

    * Missed changes

    * First draft

    * Discovery as a behaviour

    * Adding back event waker (not convinced it's necessary, but have made this many changes already)

    * Corrections

    * Speed up discovery

    * Remove double log

    * Fmt

    * After disconnect inform swarm about ban

    * More fmt

    * Appease clippy

    * Improve ban handling

    * Update tests

    * Update cargo.lock

    * Correct tests

    * Downgrade log

commit 6422632
Author: Pawan Dhananjay <[email protected]>
Date:   Fri Jul 9 08:18:29 2021 +0530

    Relax requirement for enr fork digest predicate (sigp#2433)

commit c1d2e35
Author: Age Manning <[email protected]>
Date:   Wed Jul 7 18:18:44 2021 +1000

    Bleeding edge discovery (sigp#2435)

    * Update discovery banning logic and tokio

    * Update to latest discovery

    * Shift to latest discovery

    * Fmt

commit f4bc9db
Author: Age Manning <[email protected]>
Date:   Tue Jun 15 14:53:35 2021 +1000

    Change the window mode of yamux (sigp#2390)

commit 6fb48b4
Author: Age Manning <[email protected]>
Date:   Tue Jun 15 14:40:43 2021 +1000

    Discovery patch (sigp#2382)

    * Upgrade libp2p and unstable gossip

    * Network protocol upgrades

    * Correct dependencies, reduce incoming bucket limit

    * Clean up dirty DHT entries before repopulating

    * Update cargo lock

    * Update lockfile

    * Update ENR dep

    * Update deps to specific versions

    * Update test dependencies

    * Update docker rust, and remote signer tests

    * More remote signer test fixes

    * Temp commit

    * Update discovery

    * Remove cached enrs after dialing

    * Increase the session capacity, for improved efficiency

commit 4aa06c9
Author: Age Manning <[email protected]>
Date:   Thu Jun 3 11:11:33 2021 +1000

    Network upgrades (sigp#2345)

commit b0f5c4c
Author: Paul Hauner <[email protected]>
Date:   Thu Jul 15 04:22:06 2021 +0000

    Clarify eth1 error message (sigp#2461)

    ## Issue Addressed

    - Closes sigp#2452

    ## Proposed Changes

    Addresses: sigp#2452 (comment)

    ## Additional Info

    NA

commit a3a7f39
Author: realbigsean <[email protected]>
Date:   Thu Jul 15 00:52:02 2021 +0000

    [Altair] Sync committee pools (sigp#2321)

    Add pools supporting sync committees:
    - naive sync aggregation pool
    - observed sync contributions pool
    - observed sync contributors pool
    - observed sync aggregators pool

    Add SSZ types and tests related to sync committee signatures.

    Co-authored-by: Michael Sproul <[email protected]>
    Co-authored-by: realbigsean <[email protected]>

commit 8fa6e46
Author: Michael Sproul <[email protected]>
Date:   Wed Jul 14 05:24:10 2021 +0000

    Update direct libsecp256k1 dependencies (sigp#2456)

    ## Proposed Changes

    * Remove direct dependencies on vulnerable `libsecp256k1 0.3.5`
    * Ignore the RUSTSEC issue until it is resolved in sigp#2389

commit fc4c611
Author: Paul Hauner <[email protected]>
Date:   Wed Jul 14 05:24:09 2021 +0000

    Remove msg about longer sync with remote eth1 nodes (sigp#2453)

    ## Issue Addressed

    - Resolves sigp#2452

    ## Proposed Changes

    I've seen a few people confused by this and I don't think the message is really worth it.

    ## Additional Info

    NA