Nodes drop their copy of auto-expanded data when coming up, only to sync it again #1873
Comments
Yeah, I was just hit by this one too. Wonder what happens if you disable reallocation before shutting down?
Perhaps scheduling the deletion of the physical shard files when a shard is no longer allocated on a node could help here. Then there would be a time window in which the master node can react to the node rejoining, and the deletion of the physical shard files can be cancelled.
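To illustrate that suggestion, a deferred deletion with a cancellation window could look roughly like the sketch below. This is not Elasticsearch code; the class, the five-minute grace period, and the shard-id/path handling are made up for illustration.

```java
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative only: defer deleting a shard's files so the deletion can be
// cancelled if the master re-allocates the shard to this node in time.
public class DeferredShardDeleter {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();

    // Called when a shard is no longer allocated here: delete after a grace period.
    public void scheduleDeletion(String shardId, Path shardPath) {
        ScheduledFuture<?> future = scheduler.schedule(() -> deleteFiles(shardPath), 5, TimeUnit.MINUTES);
        pending.put(shardId, future);
    }

    // Called when the shard is allocated back to this node: keep the files.
    public void cancelDeletion(String shardId) {
        ScheduledFuture<?> future = pending.remove(shardId);
        if (future != null) {
            future.cancel(false);
        }
    }

    private void deleteFiles(Path shardPath) {
        // Actual file removal is omitted in this sketch.
        System.out.println("Deleting shard files under " + shardPath);
    }
}
```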
@ywelsch, @clintongormley, is this issue going to be addressed for 2.4.x any time soon? Is this issue a problem with 5.x?
@portante I just tested with ES
Pinging @elastic/es-distributed
Auto-expands replicas in the same cluster state update (instead of a follow-up reroute) where nodes are added or removed. Closes #1873, fixing an issue where nodes drop their copy of auto-expanded data when coming up, only to sync it again later.
When you have an index with
index.auto_expand_replicas=0-all
running on 3 nodes and you bring one node down, the master will reduce the number of replicas from 2 to 1. Then, when the node that just went down comes back up, ElasticSearch on that node will drop its local copy of the auto-expanded shard data (the shards are no longer allocated to it) and then sync the same data again once the replica count is expanded back. Instead, ElasticSearch should keep the existing copy on disk and reuse it when the node rejoins.
This would improve recovery time in setups where a relatively small index is kept available on all nodes for capacity reasons, and you bring up a new node that should serve search requests right away.
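For reference, the setting in question can be applied to an existing index roughly as follows. This is a minimal sketch using the low-level Java REST client shipped with later Elasticsearch versions (this issue predates it); the index name my-index and the localhost:9200 endpoint are placeholders.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Sketch: enable auto-expansion of replicas to all data nodes for one index.
// "my-index" and localhost:9200 are placeholder values.
public class AutoExpandReplicasExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/my-index/_settings");
            request.setJsonEntity("{\"index\": {\"auto_expand_replicas\": \"0-all\"}}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```

With 0-all, the index keeps one copy of each shard on every data node, which is exactly the situation this issue describes when nodes leave and rejoin the cluster.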