
modify geyser account iter at snapshot load #960

Merged
jeffwashington merged 1 commit into anza-xyz:master on Apr 22, 2024

Conversation

jeffwashington

Problem

Stop mmapping storage files. Storages will no longer be able to return refs to many accounts at a time.

Summary of Changes

Populate geyser at snapshot load using scans, which can handle the refs appropriately. This requires two passes to deal with duplicates effectively.
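The two-pass scan described above can be sketched as follows. This is an illustrative sketch, not the PR's actual code: the pass-1 map plays the role of the `accounts_duplicate` map in the diff, and string keys stand in for account pubkeys.

```rust
use std::collections::HashMap;

/// Sketch of two-pass duplicate handling over the accounts in one storage.
///
/// Pass 1 records, for each pubkey, the index of its last occurrence
/// (later occurrences overwrite earlier ones). Pass 2 keeps only the
/// occurrence whose index matches that record, so earlier duplicates are
/// skipped and each pubkey is notified exactly once for this slot.
fn indices_to_notify(pubkeys: &[&str]) -> Vec<usize> {
    // Pass 1: last occurrence index per pubkey (the `accounts_duplicate` role).
    let mut last_index: HashMap<&str, usize> = HashMap::new();
    for (i, key) in pubkeys.iter().copied().enumerate() {
        last_index.insert(key, i);
    }
    // Pass 2: skip any instance that is not the last one for its pubkey.
    pubkeys
        .iter()
        .copied()
        .enumerate()
        .filter(|(i, key)| last_index[*key] == *i)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    // "alice" appears twice; only its last instance (index 2) survives.
    let keys = ["alice", "bob", "alice", "carol"];
    assert_eq!(indices_to_notify(&keys), vec![1, 2, 3]);
    println!("{:?}", indices_to_notify(&keys));
}
```

The two passes are needed because a scan visits accounts in storage order, so when it reaches the first copy of a duplicated pubkey it cannot know a later copy exists unless an earlier pass has recorded one.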


@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 81.8%. Comparing base (14d6c79) to head (9a90f08).
Report is 5 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff            @@
##           master     #960     +/-   ##
=========================================
- Coverage    81.9%    81.8%   -0.1%     
=========================================
  Files         853      853             
  Lines      231797   231813     +16     
=========================================
- Hits       189845   189813     -32     
- Misses      41952    42000     +48     

Comment on lines +117 to +122
if let Some(highest_i) = accounts_duplicate.get(account.pubkey()) {
    if highest_i != &i {
        // this pubkey is in this storage twice and the current instance is not the last one, so we skip it.
        // We only send unique accounts in this slot to `notify_filtered_accounts`
        return;
    }


Do we need to move where account_len is incremented? Or maybe, before we call return, we also decrement? Otherwise I think we'll over-count the number of accounts if there are duplicates.

jeffwashington (Author)


I wondered this, too. In the previous impl, we incremented account_len for each account in `accounts.for_each(|account| {`, which would have included duplicates already. The impl in this PR should have exactly the same behavior as the previous impl.


Gotcha. Yeah, that's fair. Won't be an issue anywhere until we have ancient append vecs. And if we make pack the default too, then by then we'll never hit this issue (and could remove this duplicates handling entirely).

jeffwashington (Author)


Even still, ancient append vecs should in practice never have duplicates, but I think it is possible.


@brooksprumo brooksprumo left a comment


:shipit:


@jeffwashington jeffwashington merged commit 862c79e into anza-xyz:master Apr 22, 2024
38 checks passed