undefined behavior after calling VecDeque::shrink_to #108453
Labels
C-bug (Category: This is a bug.)
I-unsound (Issue: A soundness hole (worst kind of bug), see: https://en.wikipedia.org/wiki/Soundness)
regression-from-stable-to-stable (Performance or correctness regression from one stable version to another.)
T-libs (Relevant to the library team, which will review and decide on the PR/issue.)
Comments
trinity-1686a changed the title from "undefined behavior after calling VeqDeque::shrink_to" to "undefined behavior after calling VecDeque::shrink_to" on Feb 25, 2023
Noratrieb added the T-libs (Relevant to the library team, which will review and decide on the PR/issue.) label on Feb 25, 2023
tmiasko added the regression-from-stable-to-stable (Performance or correctness regression from one stable version to another.) and I-unsound (Issue: A soundness hole (worst kind of bug), see: https://en.wikipedia.org/wiki/Soundness) labels on Feb 25, 2023
rustbot added the I-prioritize (Issue: Indicates that prioritization has been requested for this issue.) label on Feb 25, 2023
I can submit a PR to fix it later today. I was under the impression that the tests covered all cases for …
Sp00ph added a commit to Sp00ph/rust that referenced this issue on Feb 26, 2023
This adds both a test specific to rust-lang#108453 as well as an exhaustive test that goes through all possible combinations of head index, length and target capacity for a deque with capacity 16.
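The commit message above describes the brute-force strategy; below is a minimal sketch of such a test, assuming a made-up test name and a push/pop trick for positioning the head. This is not the test the PR actually adds, just an illustration of the approach; running it under Miri (`cargo +nightly miri test`) is what would surface out-of-bounds accesses as undefined behavior.

```rust
use std::collections::VecDeque;

/// Sketch of an exhaustive check over head index, length and target capacity
/// for a small deque (illustrative only, not the actual libstd test).
#[test]
fn shrink_to_exhaustive_sketch() {
    const CAP: usize = 16;
    for head in 0..CAP {
        for len in 0..=CAP {
            for target in 0..=CAP {
                let mut deque: VecDeque<usize> = VecDeque::with_capacity(CAP);
                // Advance the internal head index without changing the length
                // by cycling placeholder elements through the deque.
                for _ in 0..head {
                    deque.push_back(0);
                    deque.pop_front();
                }
                // Fill the deque; for larger `head` and `len` values the
                // contents wrap around the end of the allocation.
                for i in 0..len {
                    deque.push_back(i);
                }
                deque.shrink_to(target);
                // The shrunken deque must still hold exactly 0..len in order.
                assert!(deque.iter().copied().eq(0..len));
            }
        }
    }
}
```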
cuviper pushed a commit to cuviper/rust that referenced this issue on Mar 3, 2023
This adds both a test specific to rust-lang#108453 as well as an exhaustive test that goes through all possible combinations of head index, length and target capacity for a deque with capacity 16. (cherry picked from commit 9e22516)
workingjubilee removed the I-prioritize (Issue: Indicates that prioritization has been requested for this issue.) label on Mar 3, 2023
bors bot pushed a commit to sigp/lighthouse that referenced this issue on Apr 18, 2023
## Issue Addressed

There was a [`VecDeque` bug](rust-lang/rust#108453) in some recent versions of the Rust standard library (1.67.0 & 1.67.1) that could cause Lighthouse to panic (reported by `@Sea Monkey` on Discord). See full logs below.

The issue was likely introduced in Rust 1.67.0 and [fixed](rust-lang/rust#108475) in 1.68, and we were able to reproduce the panic ourselves using [@michaelsproul's fuzz tests](https://github.com/michaelsproul/lighthouse/blob/fuzz-lru-time-cache/beacon_node/lighthouse_network/src/peer_manager/fuzz.rs#L111) on both Rust 1.67.0 and 1.67.1.

Users that use our Docker images or binaries are unlikely to be affected, as our Docker images were built with `1.66` and the latest binaries were built with the latest stable (`1.68.2`). It likely impacts users that build from source using Rust versions 1.67.x.

## Proposed Changes

Bump the Rust version (MSRV) to the latest stable, `1.68.2`.

## Additional Info

From `@Sea Monkey` on the Lighthouse Discord:

> Crash on goerli using `unstable` `dd124b2d6804d02e4e221f29387a56775acccd08`

```
thread 'tokio-runtime-worker' panicked at 'Key must exist', /mnt/goerli/goerli/lighthouse/common/lru_cache/src/time.rs:68:28
stack backtrace:
Apr 15 09:37:36.993 WARN Peer sent invalid block in single block lookup, peer_id: 16Uiu2HAm6ZuyJpVpR6y51X4Enbp8EhRBqGycQsDMPX7e5XfPYznG, error: WouldRevertFinalizedSlot { block_slot: Slot(5420212), finalized_slot: Slot(5420224) }, root: 0x10f6…3165, service: sync
   0: rust_begin_unwind
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:64:14
   2: core::panicking::panic_display
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:135:5
   3: core::panicking::panic_str
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:119:5
   4: core::option::expect_failed
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/option.rs:1879:5
   5: lru_cache::time::LRUTimeCache<Key>::raw_remove
   6: lighthouse_network::peer_manager::PeerManager<TSpec>::handle_ban_operation
   7: lighthouse_network::peer_manager::PeerManager<TSpec>::handle_score_action
   8: lighthouse_network::peer_manager::PeerManager<TSpec>::report_peer
   9: network::service::NetworkService<T>::spawn_service::{{closure}}
  10: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
  11: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
  12: <futures_util::future::future::flatten::Flatten<Fut,<Fut as core::future::future::Future>::Output> as core::future::future::Future>::poll
  13: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  14: tokio::runtime::task::core::Core<T,S>::poll
  15: tokio::runtime::task::harness::Harness<T,S>::poll
  16: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run
  18: tokio::macros::scoped_tls::ScopedKey<T>::set
  19: tokio::runtime::scheduler::multi_thread::worker::run
  20: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  21: tokio::runtime::task::core::Core<T,S>::poll
  22: tokio::runtime::task::harness::Harness<T,S>::poll
  23: tokio::runtime::blocking::pool::Inner::run
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Apr 15 09:37:37.069 INFO Saved DHT state service: network
Apr 15 09:37:37.070 INFO Network service shutdown service: network
Apr 15 09:37:37.132 CRIT Task panic. This is a bug! advice: Please check above for a backtrace and notify the developers, message: <none>, task_name: network
Apr 15 09:37:37.132 INFO Internal shutdown received reason: Panic (fatal error)
Apr 15 09:37:37.133 INFO Shutting down.. reason: Failure("Panic (fatal error)")
Apr 15 09:37:37.135 WARN Unable to free worker error: channel closed, msg: did not free worker, shutdown may be underway
Apr 15 09:37:39.350 INFO Saved beacon chain to disk service: beacon Panic (fatal error)
```
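As background on how a `VecDeque` soundness bug can surface as the "Key must exist" panic above: a time-ordered cache typically keeps a set of keys and a deque of expirations in lockstep, and panics when that invariant breaks. The toy type below is not Lighthouse's actual `lru_cache` crate; the names and structure are invented for illustration, showing where such an invariant check would fire if the deque's contents were corrupted.

```rust
use std::collections::{HashSet, VecDeque};

/// Toy expiration queue: every key queued in `order` must also be in `set`.
struct ToyTimeCache {
    set: HashSet<u64>,
    order: VecDeque<u64>,
}

impl ToyTimeCache {
    fn new() -> Self {
        Self { set: HashSet::new(), order: VecDeque::new() }
    }

    fn insert(&mut self, key: u64) {
        if self.set.insert(key) {
            self.order.push_back(key);
        }
    }

    fn pop_expired(&mut self) -> Option<u64> {
        let key = self.order.pop_front()?;
        // Invariant: anything queued in `order` is present in `set`.
        // If the deque yields a key that was never inserted (for example
        // because its buffer was corrupted by a shrink), this fires with
        // the same kind of "Key must exist" panic seen in the logs above.
        assert!(self.set.remove(&key), "Key must exist");
        Some(key)
    }
}
```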
xenowits pushed a commit to xenowits/lighthouse that referenced this issue on May 16, 2023
ghost pushed a commit to oone-world/lighthouse that referenced this issue on Jul 13, 2023
Woodpile37 pushed a commit to Woodpile37/lighthouse that referenced this issue on Jan 6, 2024
I tried this code:
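The snippet from the report is not reproduced here; the following is a hedged reconstruction of the kind of program described, with the capacity, element values, and push/pop sequence all assumed. It is meant to be run under Miri (`cargo +nightly miri run`), which is the tool that reports reads of undefined bytes.

```rust
use std::collections::VecDeque;

fn main() {
    // Hypothetical reconstruction; the exact sizes and sequence are assumptions.
    let mut queue: VecDeque<u8> = VecDeque::with_capacity(16);

    // Push and pop so the logical head sits partway into the buffer and the
    // live elements wrap around the end of the allocation.
    for _ in 0..12 {
        queue.push_back(1);
    }
    for _ in 0..8 {
        queue.pop_front();
    }
    for _ in 0..10 {
        queue.push_back(1);
    }

    // On Rust 1.67.x this could leave the deque's bookkeeping referring to
    // slots outside the shrunken allocation.
    queue.shrink_to(0);

    // No zero is ever pushed, so observing one here (or, under Miri, a read
    // of an undefined byte) is the misbehavior this issue describes.
    for byte in &queue {
        assert_eq!(*byte, 1);
    }
}
```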
I expected to see this happen: no error
Instead, this happened:
note: no zero was pushed, this is reading an undefined byte.
Meta
rustc --version --verbose:

Additional information
I believe this happens because the following case is not handled in `shrink_to`. In this case, `self.head < target_cap`, so two of the "cases of interest" are not a match, and `tail_outside == false`, so the last case isn't matched either.
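As a rough illustration of that case analysis, here is a toy model of the decisions a ring-buffer shrink has to make. It is not the standard library's `shrink_to` implementation; the function name, case boundaries, and return strings are invented for this sketch.

```rust
/// Describe what has to happen when a ring buffer with `old_cap` slots,
/// first element at index `head`, and `len` live elements is shrunk so that
/// only the first `new_cap` slots remain (illustrative model only).
fn shrink_plan(head: usize, len: usize, old_cap: usize, new_cap: usize) -> &'static str {
    assert!(len <= new_cap && new_cap <= old_cap);
    if head + len <= new_cap {
        // All elements already sit inside the first `new_cap` slots.
        "nothing to move"
    } else if head >= new_cap {
        // The whole block of elements starts at or past the new end;
        // it has to be copied down into the shrunken allocation.
        "move the head block into the new allocation"
    } else {
        // The head is inside the new allocation but some elements spill past
        // `new_cap`, either contiguously or wrapped around the old buffer end.
        // Those trailing elements must be re-wrapped to the front. A sub-case
        // missed here is exactly what leaves live elements outside the new
        // allocation, which is the kind of gap this issue reports.
        "wrap the elements that lie past new_cap to the front"
    }
}

fn main() {
    // Head inside the new capacity, contents wrapped around the old end:
    // a configuration that can fall through an incomplete case analysis.
    println!("{}", shrink_plan(12, 8, 16, 14));
}
```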