Fix the inconsistency check in get_entries_in_data_block() #27195
Conversation
Looks good!
Pull request has been modified.
automerge label removed due to a CI failure
#### Problem

get_entries_in_data_block() panics when there's an inconsistency between slot_meta and data_shred. However, as we don't lock on reads, reading across multiple column families is not atomic (especially for older slots) and thus does not guarantee consistency, as the background cleanup service could purge the slot in the middle. Such a panic was reported in #26980 when the validator serves a high load of RPC calls.

#### Summary of Changes

This PR makes get_entries_in_data_block() panic only when the inconsistency between slot-meta and data-shred happens on a slot newer than lowest_cleanup_slot.

(cherry picked from commit 6d12bb6)
no test? ;)
@yhchiang-sol This function is called from the replay stage when processing shreds for completed sets for the first time. The commit is suppressing the panic, but if the slot is incorrectly cleaned up, then the culprit is somewhere else!
@behzadnouri Thanks for raising it. Let me revisit the purge logic a bit.
@behzadnouri The purge is determined by the ledger_cleanup_service, where anything older than the max-ledger-shreds boundary is purged. Is there any atomic variable that stores the current replay status so that the purge process can honor it? Although it will then keep more shreds than configured.
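For illustration only, here is a minimal sketch of what such a shared variable could look like. The names (ReplayProgress, record, purge_boundary) are hypothetical and not part of the actual validator; this is just the idea in the question above, under the assumption that replay publishes its progress and cleanup clamps to it.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Hypothetical shared marker for the highest slot replay has finished with.
pub struct ReplayProgress(AtomicU64);

impl ReplayProgress {
    pub fn new() -> Self {
        Self(AtomicU64::new(0))
    }

    /// The replay stage bumps this as it completes slots.
    pub fn record(&self, slot: u64) {
        self.0.fetch_max(slot, Ordering::Relaxed);
    }

    /// The cleanup side clamps its purge boundary so it never removes slots
    /// replay is still working on, at the cost of keeping more shreds than
    /// the configured size limit alone would imply.
    pub fn purge_boundary(&self, size_based_boundary: u64) -> u64 {
        size_based_boundary.min(self.0.load(Ordering::Relaxed))
    }
}
```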
In any case, I will do some code study and file a PR to make the cleanup service honor the replay progress.
Hey @behzadnouri. @steviez and I chatted a little bit about this, and we found that the replay stage isn't the only call-stack that might invoke get_slot_entries_with_shred_info(); the rpc::get_block path can also reach it. We also double-checked the ledger_cleanup_service code, and it never purges anything newer than the root. So everything is good.
Thanks for posting the summary @yhchiang-sol. Poking around a little more, there is another codepath that will hit get_entries_in_data_block(). Here is the implementation for get_rooted_block():

```rust
pub fn get_rooted_block(
    ...
) -> Result<VersionedConfirmedBlock> {
    datapoint_info!("blockstore-rpc-api", ("method", "get_rooted_block", String));
    let _lock = self.check_lowest_cleanup_slot(slot)?;
    if self.is_root(slot) {
        return self.get_complete_block(slot, require_previous_blockhash);
    }
    Err(BlockstoreError::SlotNotRooted)
}
```
And the implementation for check_lowest_cleanup_slot():

```rust
fn check_lowest_cleanup_slot(&self, slot: Slot) -> Result<std::sync::RwLockReadGuard<Slot>> {
    // lowest_cleanup_slot is the last slot that was not cleaned up by LedgerCleanupService
    let lowest_cleanup_slot = self.lowest_cleanup_slot.read().unwrap();
    if *lowest_cleanup_slot > 0 && *lowest_cleanup_slot >= slot {
        return Err(BlockstoreError::SlotCleanedUp);
    }
    // Make caller hold this lock properly; otherwise LedgerCleanupService can purge/compact
    // needed slots here at any given moment
    Ok(lowest_cleanup_slot)
}
```

Note that the lock is held for as long as the returned guard stays alive, which is what protects the slot from being purged while the caller is still reading it.
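To make that locking discipline concrete, here is a hedged sketch of the caller-side pattern this enables; read_meta and read_shreds are hypothetical stand-ins for the separate column-family reads, not actual Blockstore methods.

```rust
// Sketch only: binding the guard returned by check_lowest_cleanup_slot()
// keeps the RwLock read-held for the whole scope, so LedgerCleanupService
// cannot advance lowest_cleanup_slot and purge `slot` between the two
// reads below.
fn read_slot_consistently(&self, slot: Slot) -> Result<(SlotMeta, Vec<Shred>)> {
    let _lock = self.check_lowest_cleanup_slot(slot)?;
    let meta = self.read_meta(slot)?; // hypothetical helper
    let shreds = self.read_shreds(slot)?; // hypothetical helper
    Ok((meta, shreds))
}
```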
However, if the node was serving RPC requests on very old slots, those very old slots should have been rooted, so I think the codepath would have hit the route that takes the lowest_cleanup_slot lock.
I have seen this panic a lot on the cluster from non-rpc nodes. The way this commit is silencing the panic prevents us from identifying and fixing the root cause here. I don't know about the RPC call path, but we need to make sure the other call paths (replay and anything else) don't simply ignore the error here.
Sure thing. I will create a follow-up PR to make sure those critical code paths panic for this type of error.
@behzadnouri: Can I ask whether this has been seen recently, or has it been there for a while? In addition, does this happen during the initial catch-up period or not?
Originally I was baking #27498, which makes ledger-cleanup-service honor the confirmed-slot progress, but I found that ledger-cleanup-service already honors the root, so it never purges anything newer than the root inside find_slots_to_clean.
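A condensed, hedged sketch of that root-honoring behavior; the parameter names approximate rather than reproduce the real find_slots_to_clean.

```rust
/// Sketch: whatever slot range the size budget (max-ledger-shreds) would
/// allow purging, the end of the range is clamped at the highest root, so
/// nothing newer than the root is ever selected for cleanup.
fn find_slots_to_clean(
    lowest_slot: u64,
    size_based_boundary: u64, // highest slot the size budget would purge
    highest_root: u64,
) -> Option<(u64, u64)> {
    let end = size_based_boundary.min(highest_root);
    (end >= lowest_slot).then(|| (lowest_slot, end))
}
```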
There were some in the last 30 days of metrics I was querying, but I don't remember the exact days.
https://discord.com/channels/428295358100013066/439194979856809985/1070199891609014292 This is now panicking on mainnet on v1.15.
This may be related to a reduced …
I started peeking around the code a little bit since I had touched some of these items recently and things are still fairly fresh. As a general refresher (for my own sake too), the issue stems from an inconsistency between the meta data and the shreds for a slot: for some slot, the SlotMeta claims shreds that the data column no longer contains. Here are some miscellaneous notes; LCS = LedgerCleanupService:
Reading the meta information and reading the shreds are separate queries to rocks, so it is seemingly the case that a slot is getting cleaned up between the reads. We could grab a lock (as check_lowest_cleanup_slot() does) to prevent this. If we get steps to repro, I think the following things would be helpful:
At the moment, my money is on …
#### Problem

get_entries_in_data_block() panics when there's an inconsistency between slot_meta and data_shred.

However, as we don't lock on reads, reading across multiple column families is not atomic (especially for older slots) and thus does not guarantee consistency, as the background cleanup service could purge the slot in the middle. Such a panic was reported in #26980 when the validator serves a high load of RPC calls.

#### Summary of Changes

This PR makes get_entries_in_data_block() panic only when the inconsistency between slot-meta and data-shred happens on a slot newer than lowest_cleanup_slot.
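For readers skimming the thread, here is a minimal, self-contained sketch of the post-fix shape of the check. It approximates the idea rather than the exact diff, and the String error path is a stand-in for the real BlockstoreError variant.

```rust
use std::sync::RwLock;

/// If a shred promised by SlotMeta's completed ranges is missing, panic only
/// when the slot is newer than lowest_cleanup_slot (such a slot cannot have
/// been purged, so the inconsistency is a real bug); otherwise assume a race
/// with LedgerCleanupService and return a recoverable error.
fn check_shred_presence(
    shred: Option<Vec<u8>>,
    slot: u64,
    index: u64,
    lowest_cleanup_slot: &RwLock<u64>,
) -> Result<Vec<u8>, String> {
    match shred {
        Some(payload) => Ok(payload),
        None => {
            let lowest = *lowest_cleanup_slot.read().unwrap();
            if slot > lowest {
                // Genuine slot_meta / data_shred inconsistency: keep the panic.
                panic!("missing data shred at slot {slot}, index {index}");
            }
            // Slot was likely purged between the SlotMeta read and this read.
            Err(format!("slot {slot} cleaned up while reading shred {index}"))
        }
    }
}
```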