Shrink should not touch max_retries #47719
Conversation
Shrink would set `max_retries=1` in order to avoid retrying. This, however, sticks to the shrunk index afterwards, causing issues when a shard copy later fails to allocate even once. While there is no new node to allocate to and a retry would likely fail again, the downside of having `max_retries=1` afterwards outweighs the benefit of not retrying the failed shrink a few times. This change ensures shrink no longer sets `max_retries`.
Pinging @elastic/es-distributed (:Distributed/Allocation)
@elasticmachine run elasticsearch-ci/packaging-sample-matrix
Thanks @henningandersen, I agree with your assessment, though I left some comments.
server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java
server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java
If `max_retries` was set on the source, it is unlikely to be wanted on the target too; instead the new index will rely on the default.
…touch_max_retries
@elasticmachine run elasticsearch-ci/packaging-sample
LGTM.
Thanks @jasontedor
Shrink would set `max_retries=1` in order to avoid retrying. This, however, sticks to the shrunk index afterwards, causing issues when a shard copy later fails to allocate even once. Avoiding a retry of a shrink makes sense since there is no new node to allocate to and a retry would likely fail again. However, the downside of having `max_retries=1` afterwards outweighs the benefit of not retrying the failed shrink a few times. This change ensures shrink no longer sets `max_retries`, and also makes all resize operations (shrink, clone, split) leave the setting at its default value rather than copying it from the source.
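The gist of the change can be illustrated with a minimal sketch: when building the resize target's settings from the source index, the `index.allocation.max_retries` setting is simply left out so the new index falls back to the default. This is not the actual Elasticsearch code (the real logic lives in `MetaDataCreateIndexService`); the class and method names here are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

public class ResizeSettingsSketch {
    // Real Elasticsearch setting key; the surrounding code is a sketch.
    static final String MAX_RETRIES = "index.allocation.max_retries";

    // Hypothetical helper: copy the source index settings to the resize
    // target, but drop max_retries so the target relies on the default
    // instead of inheriting, say, max_retries=1 from an earlier shrink.
    static Map<String, String> targetSettings(Map<String, String> source) {
        Map<String, String> target = new HashMap<>(source);
        target.remove(MAX_RETRIES);
        return target;
    }

    public static void main(String[] args) {
        Map<String, String> source = new HashMap<>();
        source.put("index.number_of_shards", "4");
        source.put(MAX_RETRIES, "1"); // e.g. left over on the source index

        Map<String, String> target = targetSettings(source);
        System.out.println(target.containsKey(MAX_RETRIES)); // false
    }
}
```

With this behavior, a shard of the resized index that later fails to allocate once is still eligible for the default number of allocation retries, instead of being stuck at `max_retries=1`.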