Restore from Individual Shard Snapshot Files in Parallel #48110
Merged
original-brownbear
merged 49 commits into
elastic:master
from
original-brownbear:async-restore
Oct 30, 2019
Conversation
original-brownbear
added
:Distributed Coordination/Snapshot/Restore
Anything directly related to the `_snapshot/*` APIs
team-discuss
labels
Oct 16, 2019
Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)
original-brownbear
changed the title
Restore from Snapshots in Parallel
Restore from Individual Shard Snapshot Files in Parallel
Oct 16, 2019
The code here was needlessly complicated because it enqueued all file uploads up-front. Instead, we can use a cleaner worker + queue pattern by taking the max parallelism from the thread-pool info. I also slightly simplified the rethrow and listener handling (a step listener is pointless when you add the callback in the next line), since I noticed that we were needlessly rethrowing in the same code and that wasn't worth a separate PR.
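The worker + queue pattern described above can be sketched roughly as follows. This is a hypothetical, simplified illustration (class and method names are invented, not the actual Elasticsearch code): instead of enqueueing every file task up-front, at most `maxParallelism` workers poll a shared queue until it is drained.

```java
import java.util.Collection;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the worker + queue pattern: a bounded number of
// workers drain a shared queue of file tasks, rather than submitting one
// task per file up-front.
public class WorkerQueueSketch {

    static int restoreAll(Collection<String> fileNames, int maxParallelism)
            throws InterruptedException {
        Queue<String> queue = new ConcurrentLinkedQueue<>(fileNames);
        AtomicInteger restored = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(maxParallelism);
        for (int i = 0; i < maxParallelism; i++) {
            pool.execute(() -> {
                String file;
                // Each worker keeps polling until the queue is empty.
                while ((file = queue.poll()) != null) {
                    restored.incrementAndGet(); // stand-in for downloading one file
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return restored.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // maxParallelism would come from the SNAPSHOT thread-pool info.
        int n = restoreAll(List.of("seg_0", "seg_1", "seg_2"), 2);
        System.out.println("restored " + n + " files");
    }
}
```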
Jenkins run elasticsearch-ci/2 (unrelated ML failure)
tlrx
approved these changes
Oct 30, 2019
LGTM, thanks Armin.
Thanks all!
original-brownbear
added a commit
to original-brownbear/elasticsearch
that referenced
this pull request
Oct 30, 2019
Make restoring shard snapshots run in parallel on the `SNAPSHOT` thread-pool.
original-brownbear
added a commit
that referenced
this pull request
Oct 30, 2019
original-brownbear
added a commit
to original-brownbear/elasticsearch
that referenced
this pull request
Nov 1, 2019
Follow up to elastic#48110 cleaning up the redundant future uses that were left over from that change.
original-brownbear
added a commit
to original-brownbear/elasticsearch
that referenced
this pull request
Nov 1, 2019
With the changes in elastic#48110 there is no more need to block a generic thread when waiting for the multi file transfer in `CcrRepository`.
original-brownbear
added a commit
that referenced
this pull request
Nov 1, 2019
With the changes in #48110 there is no more need to block a generic thread when waiting for the multi file transfer in `CcrRepository`.
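The follow-up above, which stops blocking a generic thread while waiting for the multi-file transfer, amounts to swapping a blocking wait for a completion callback. A minimal hypothetical sketch of that shape (names invented, not the `CcrRepository` code):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: instead of parking a generic-pool thread on a
// blocking get() until the multi-file transfer finishes, register a
// callback that runs when the transfer completes.
public class NonBlockingWaitSketch {

    static String result;

    public static void main(String[] args) {
        CompletableFuture<String> transfer = new CompletableFuture<>();
        // Before: transfer.get() would block a generic thread here.
        // After: completion is handled asynchronously via a listener.
        transfer.thenAccept(outcome -> result = outcome);
        transfer.complete("all files transferred"); // simulated completion
        System.out.println(result);
    }
}
```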
original-brownbear
added a commit
to original-brownbear/elasticsearch
that referenced
this pull request
Nov 1, 2019
With the changes in elastic#48110 there is no more need to block a generic thread when waiting for the multi file transfer in `CcrRepository`.
original-brownbear
added a commit
that referenced
this pull request
Nov 1, 2019
original-brownbear
added a commit
that referenced
this pull request
Nov 2, 2019
Follow up to #48110 cleaning up the redundant future uses that were left over from that change.
original-brownbear
added a commit
to original-brownbear/elasticsearch
that referenced
this pull request
Nov 2, 2019
Follow up to elastic#48110 cleaning up the redundant future uses that were left over from that change.
original-brownbear
added a commit
that referenced
this pull request
Nov 2, 2019
Labels
:Distributed Coordination/Snapshot/Restore
Anything directly related to the `_snapshot/*` APIs
>enhancement
v7.6.0
v8.0.0-alpha1
The code in this PR is meant to illustrate the amount of change necessary to allow faster restores and to demonstrate the required code changes, rather than to be reviewed as-is, since it does not limit concurrency in any way.
In #42791 we fixed the order in which files are uploaded to snapshots, making snapshots upload the individual file for each shard in parallel and working shard-by-shard in terms of ordering the uploads for various shards in the snapshot.
For restores from snapshots, however, we currently run all shards in parallel, using only a single thread per shard to download files. This is needlessly inefficient and significantly slows down restores from cloud repositories.
I think we should move to the same ordering for restores: parallelize by files and order by shards.
This should significantly speed up restores for individual shards (especially those with many files) and also speed up the restore process end-to-end: if we order by shards, the first primaries are restored more quickly, so replica recovery can run in parallel with the restore more efficiently.
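The "parallelize by files, order by shards" idea can be sketched as follows. This is a hypothetical illustration (names invented, not the real restore code): shards are completed one after another so early primaries finish quickly, while the files of the current shard download in parallel.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of "parallelize by files, order by shards":
// restore shards in order, but download each shard's files in parallel.
public class OrderedRestoreSketch {

    static List<String> restore(List<List<String>> shards, int parallelism)
            throws InterruptedException {
        List<String> completionOrder = new ArrayList<>();
        for (int shard = 0; shard < shards.size(); shard++) {
            ExecutorService pool = Executors.newFixedThreadPool(parallelism);
            for (String file : shards.get(shard)) {
                pool.execute(() -> { /* stand-in for downloading one file */ });
            }
            pool.shutdown();
            // Wait for this shard's files before moving to the next shard,
            // so the first primaries become available as early as possible.
            pool.awaitTermination(10, TimeUnit.SECONDS);
            completionOrder.add("shard-" + shard);
        }
        return completionOrder;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(restore(
            List.of(List.of("a", "b"), List.of("c")), 2));
    }
}
```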