Small cleanup in BlobStoreRepository#finalizeSnapshot #99635
Conversation
Reorders the operations into their logical order and adds a few TODOs for gaps that this cleanup exposes.
After going through this during onboarding, I think there's room for improvement...
Pinging @elastic/es-distributed (Team:Distributed)
.<MetadataWriteResult>andThen((l, existingRepositoryData) -> {
    final int existingSnapshotCount = existingRepositoryData.getSnapshotIds().size();
    if (existingSnapshotCount >= maxSnapshotCount) {
        throw new RepositoryException(
NB slight change in exception semantics (see the change to BlobStoreSizeLimitIT) - previously this exception would be thrown directly to the outer listener, but now it's wrapped in a SnapshotException first. I claim that the new behaviour is preferable: it's useful to report that this specific snapshot failed, as well as the cause.
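To make the behaviour change concrete, here is a minimal standalone sketch of the wrapping pattern. The class and method names below are made up for illustration and are not the actual Elasticsearch types: the point is only that the low-level failure is wrapped in an exception that names the failing snapshot before it reaches the outer listener, so the caller sees both which snapshot failed and the underlying cause.

```java
// Hypothetical stand-in for a snapshot-scoped exception; not the real Elasticsearch class.
final class SnapshotFailedException extends RuntimeException {
    SnapshotFailedException(String snapshotName, Throwable cause) {
        super("failed to finalize snapshot [" + snapshotName + "]", cause);
    }
}

final class WrapExample {
    // Pretend finalization step: the limit check that previously threw straight to the outer listener.
    static void finalizeSnapshot(String snapshotName, java.util.function.Consumer<Exception> outerListener) {
        try {
            throw new IllegalStateException("repository holds too many snapshots");
        } catch (Exception e) {
            // New behaviour: report the specific snapshot that failed, keeping the original cause.
            outerListener.accept(new SnapshotFailedException(snapshotName, e));
        }
    }

    public static void main(String[] args) {
        finalizeSnapshot("snap-1", e -> {
            System.out.println(e.getMessage());            // names the snapshot that failed
            System.out.println(e.getCause().getMessage()); // preserves the underlying reason
        });
    }
}
```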
LGTM
I think the new style of chaining is easier to read than the step-based approach.
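As a rough illustration of what "chained" means here, the sketch below uses CompletableFuture as a stand-in for Elasticsearch's listener utilities, and the step names are invented. Each asynchronous step reads as a continuation of the previous one, and the outcome is reported in a single place at the end, rather than being spread across numbered steps.

```java
import java.util.concurrent.CompletableFuture;

final class ChainingExample {
    public static void main(String[] args) {
        CompletableFuture
            .supplyAsync(ChainingExample::loadExistingRepositoryData) // step 1: read current repository state
            .thenApply(ChainingExample::checkSnapshotLimit)           // step 2: validate against limits
            .thenApply(ChainingExample::writeMetadata)                // step 3: write the new metadata
            .whenComplete((result, failure) -> {                      // single place that reports the outcome
                if (failure != null) {
                    System.out.println("snapshot failed: " + failure.getCause().getMessage());
                } else {
                    System.out.println("snapshot finalized: " + result);
                }
            })
            .join();
    }

    static int loadExistingRepositoryData() {
        return 3; // pretend: current snapshot count read from the repository
    }

    static int checkSnapshotLimit(int existingSnapshotCount) {
        int maxSnapshotCount = 500;
        if (existingSnapshotCount >= maxSnapshotCount) {
            throw new IllegalStateException("too many snapshots");
        }
        return existingSnapshotCount;
    }

    static String writeMetadata(int existingSnapshotCount) {
        return "generation-" + (existingSnapshotCount + 1); // pretend: new repository generation
    }
}
```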
    allMetaListeners.acquire(),
    () -> GLOBAL_METADATA_FORMAT.write(clusterMetadata, blobContainer(), snapshotId.getUUID(), compress)
)
// NB failure of writeIndexGen doesn't guarantee the update failed, so we cannot safely clean anything up on failure
For my learning: could you please expand a bit on what this means? If the index-N file fails to update, there will be no root-level reference to the new shard generation and data files? If so, why is it not safe to clean up?
It's a good question. I opened #99645 to answer this in the Javadocs so that others can find this info in future too. Please take a look, and if you need more detail then let's discuss there.
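For readers following along, here is a hedged sketch of the hazard the comment describes, with made-up names: a write can fail from the client's point of view (for example, a timeout) while still having taken effect on the blob store, so the new index-N may already reference the blobs that a cleanup would delete.

```java
// Illustrative only; "writeIndexGen" and the outcomes below are simplified stand-ins.
final class AmbiguousWriteExample {
    enum Outcome { APPLIED, NOT_APPLIED, UNKNOWN }

    static Outcome writeIndexGen() {
        // Pretend the request timed out: we cannot tell whether the new
        // index-N blob was written before the connection dropped.
        return Outcome.UNKNOWN;
    }

    public static void main(String[] args) {
        switch (writeIndexGen()) {
            case APPLIED -> System.out.println("success: nothing to clean up");
            case NOT_APPLIED -> System.out.println("definite failure: the new blobs are unreferenced and could be removed");
            case UNKNOWN -> System.out.println("ambiguous failure: the new index-N may exist, so deleting the blobs it references is unsafe");
        }
    }
}
```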