Fix ref-count for multiple stores to the same pubkey in a slot, fixes zero lamport purge detection #12462
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master   #12462     +/-  ##
=========================================
  Coverage    82.0%    82.0%
=========================================
  Files         354      354
  Lines       82719    82732      +13
=========================================
+ Hits        67896    67913      +17
+ Misses      14823    14819       -4
runtime/src/accounts_db.rs (Outdated)
        .into_iter()
        .map(|account| account.meta.pubkey)
        .collect::<Vec<Pubkey>>(),
    )
})
.collect()
how about reduce()-ing here like this, which rayon will run in parallel?

let pubkeys: HashSet<(Slot, Pubkey)> =
    ...
    .reduce(HashSet::new, |mut reduced, pubkeys| {
        reduced.extend(pubkeys);
        reduced
    });

so that we can remove the need to construct cleaned_slot_keys later?
updated!
LGTM with nits, awesome bug fixing. I guess you found this by carefully reading the code while fighting the snapshot bloat issue at #12194.
I guess this bug doesn't manifest as clear errors; its only symptom is dangling AppendVecs, which keep the snapshot from shrinking as expected.
@ryoqun I actually found it while writing those snapshot hash mismatch tests :)
Problem
Storing to the same pubkey multiple times in a slot keeps only one entry alive for that (pubkey, slot) pair in the AccountsIndex, but increments the ref count once per store. As a result, the zero-lamport purge logic cannot accurately detect whether an account can be purged.
Summary of Changes
Only increment ref count once per store per pubkey/slot.
Fixes #