Fix sector expiration #1721
Changes from all commits
@@ -449,6 +449,10 @@ where
     while let Some(()) = archived_segments_receiver.next().await {
         let archived_segment_header = last_archived_segment.load(Ordering::SeqCst);
+        trace!(
+            segment_index = %archived_segment_header.segment_index(),
+            "New archived segment received",
+        );

         // It is fine to take a synchronous read lock here because the only time
         // write lock is taken is during plotting, which we know doesn't happen

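Aside on the logging syntax used throughout these hunks: in the tracing crate, prefixing a field with % records it via its Display implementation (? would use Debug). A minimal standalone sketch, not code from this PR, assuming the usual tracing dependency and with a plain u64 standing in for SegmentIndex:

```rust
use tracing::trace;

// `%value` records a field with its Display impl; `?value` would use Debug instead.
fn log_new_segment(segment_index: u64) {
    trace!(segment_index = %segment_index, "New archived segment received");
}

fn main() {
    // With no subscriber installed the event is a no-op; any installed
    // tracing subscriber would receive it at TRACE level.
    log_new_segment(42);
}
```
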
@@ -460,7 +464,23 @@ where
             .collect_into(&mut sectors_to_check);
         for (sector_index, history_size) in sectors_to_check.drain(..) {
             if let Some(sector_expire_at) = sectors_expire_at.get(&sector_index) {
-                if *sector_expire_at >= archived_segment_header.segment_index() {
+                trace!(
+                    %sector_index,
+                    %history_size,
+                    %sector_expire_at,
+                    "Checking sector for expiration"
+                );
+                // +1 means we will start replotting a bit before it actually expires to avoid
+                // storing expired sectors
+                if *sector_expire_at
+                    <= (archived_segment_header.segment_index() + SegmentIndex::ONE)
+                {
+                    debug!(
+                        %sector_index,
+                        %history_size,
+                        %sector_expire_at,
+                        "Sector expires soon #1, scheduling replotting"
+                    );
                     // Time to replot
                     sector_indices_to_replot.push(sector_index);
                 }

Review comment (on the + SegmentIndex::ONE line): Can this + 1 be made a method like …?

Reply: Possibly, do you think it would be more readable? Rust doesn't have increments and I got used to it eventually.

Reply: Just preference, this is fine too.

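Regarding the thread above about naming the + 1: a hypothetical sketch of what such a method could look like. The SegmentIndex newtype here is a self-contained stand-in and expires_by is a made-up name, not an existing method in the codebase:

```rust
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct SegmentIndex(u64);

impl SegmentIndex {
    const ONE: Self = Self(1);

    /// `true` if a sector expiring at `self` should be replotted now that `current`
    /// has been archived, leaving one segment of headroom before actual expiration.
    fn expires_by(self, current: Self) -> bool {
        self <= Self(current.0 + Self::ONE.0)
    }
}

fn main() {
    let current = SegmentIndex(10);
    assert!(SegmentIndex(11).expires_by(current)); // expires next segment: replot early
    assert!(SegmentIndex(10).expires_by(current)); // already at expiration: replot
    assert!(!SegmentIndex(12).expires_by(current)); // still has headroom
}
```

Whether this reads better than the explicit + SegmentIndex::ONE plus a comment is exactly the preference question discussed above.
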
@@ -471,6 +491,12 @@ where
             .sector_expiration_check(min_sector_lifetime)
             .map(|expiration_check_history_size| expiration_check_history_size.segment_index())
         {
+            trace!(
+                %sector_index,
+                %history_size,
+                %expiration_check_segment_index,
+                "Determined sector expiration check segment index"
+            );
             let maybe_sector_expiration_check_segment_commitment =
                 if let Some(segment_commitment) =
                     archived_segment_commitments_cache.get(&expiration_check_segment_index)

@@ -508,12 +534,32 @@ where
                     metadata; qed",
                 );

+                trace!(
+                    %sector_index,
+                    %history_size,
+                    sector_expire_at = %expiration_history_size.segment_index(),
+                    "Determined sector expiration segment index"
+                );
+                // +1 means we will start replotting a bit before it actually expires to avoid
+                // storing expired sectors
                 if expiration_history_size.segment_index()
-                    >= archived_segment_header.segment_index()
+                    <= (archived_segment_header.segment_index() + SegmentIndex::ONE)
                 {
+                    debug!(
+                        %sector_index,
+                        %history_size,
+                        sector_expire_at = %expiration_history_size.segment_index(),
+                        "Sector expires soon #2, scheduling replotting"
+                    );
                     // Time to replot
                     sector_indices_to_replot.push(sector_index);
                 } else {
+                    trace!(
+                        %sector_index,
+                        %history_size,
+                        sector_expire_at = %expiration_history_size.segment_index(),
+                        "Sector expires later, remembering sector expiration"
+                    );
                     // Store expiration so we don't have to recalculate it later
                     sectors_expire_at
                         .insert(sector_index, expiration_history_size.segment_index());

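Taken together, the hunks above implement: check the cached expiration first; for sectors without a cached value, derive the expiration and either schedule replotting or remember it for later. A minimal sketch of that overall pattern with simplified stand-in types (deriving the expiration from segment commitments and sector metadata is elided behind a closure; this is not the farmer's actual code):

```rust
use std::collections::HashMap;

type SectorIndex = u16;
type SegmentIndex = u64;

/// Decide whether `sector_index` needs replotting once `current_segment` has been
/// archived, remembering derived expirations so later segments only need a comparison.
fn check_sector(
    sector_index: SectorIndex,
    current_segment: SegmentIndex,
    sectors_expire_at: &mut HashMap<SectorIndex, SegmentIndex>,
    derive_expiration: impl FnOnce() -> SegmentIndex,
    sector_indices_to_replot: &mut Vec<SectorIndex>,
) {
    // Earlier hunk: a cached expiration only needs a cheap comparison
    if let Some(expire_at) = sectors_expire_at.get(&sector_index) {
        // +1 schedules replotting one segment before the sector actually expires
        if *expire_at <= current_segment + 1 {
            sector_indices_to_replot.push(sector_index);
        }
        return;
    }

    // This hunk: derive the expiration, then either replot now or remember it
    let expire_at = derive_expiration();
    if expire_at <= current_segment + 1 {
        sector_indices_to_replot.push(sector_index);
    } else {
        // Store expiration so we don't have to recalculate it later
        sectors_expire_at.insert(sector_index, expire_at);
    }
}

fn main() {
    let mut sectors_expire_at = HashMap::new();
    let mut to_replot = Vec::new();
    // A sector expiring at segment 11 is scheduled as soon as segment 10 is archived
    check_sector(0, 10, &mut sectors_expire_at, || 11, &mut to_replot);
    assert_eq!(to_replot, vec![0]);
}
```
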
Review comment: Unrelated to this change (GitHub doesn't allow comments on unchanged lines): L612 above (block_number.checked_sub(&confirmation_depth_k.into())...). This looks like an edge case where we won't archive blocks when the u32 wraps around. When this happens, there could be a slow path that just walks back the parent hashes to get to the block that is k_conf_depth old. Or we could opportunistically cache the block hashes as import notifications come in (this won't work on restart or reorg), then fall back to the slow path.

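To make the edge case concrete, a minimal sketch with plain u32 standing in for the block number type and a made-up function name: checked_sub returns None rather than wrapping, so blocks whose number is smaller than confirmation_depth_k are simply not selected for archiving.

```rust
// Sketch only: `block_to_archive` is a made-up name, and plain u32::checked_sub
// stands in for the call quoted in the comment above.
fn block_to_archive(block_number: u32, confirmation_depth_k: u32) -> Option<u32> {
    // None instead of wrapping when the chain is younger than confirmation_depth_k
    // (or, hypothetically, if the block number ever wrapped back to small values).
    block_number.checked_sub(confirmation_depth_k)
}

fn main() {
    assert_eq!(block_to_archive(5, 100), None); // nothing deep enough to archive yet
    assert_eq!(block_to_archive(150, 100), Some(50)); // archive the block 100 blocks back
}
```
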
Reply: u32 will never wrap around, our block number type is u32.

Reply: I mean when it reaches u32::MAX and goes back ..

Reply: Block number will never reach u32::MAX.

Reply: Is it because mainnet needs to be running for a very long time (at 1 block per 6 sec) to get there? Nice problem to have though :-

Reply: Exactly. Even if blocks are produced every second, running out of u32 will not be an issue to worry about for quite a few decades. I hope Subspace lives long enough for this to be a real issue.
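For concreteness, the back-of-envelope numbers behind "quite a few decades" (a standalone sketch, not code from the repository, assuming the ~6 s block time mentioned above):

```rust
// Rough estimate of how long it takes a u32 block number to reach u32::MAX.
fn main() {
    let max_blocks = u32::MAX as u64; // 4_294_967_295
    let seconds_per_year = 365u64 * 24 * 60 * 60; // ignore leap years for a rough estimate

    let years_at_6s = max_blocks * 6 / seconds_per_year;
    let years_at_1s = max_blocks / seconds_per_year;

    // Prints roughly 817 and 136 years respectively, so wrap-around is not a
    // practical concern for the foreseeable future.
    println!("~{years_at_6s} years at 6 s/block, ~{years_at_1s} years at 1 s/block");
}
```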