Add known issue docs for #75598 #79221
Conversation
Adds a description of elastic#75598, and the mitigation, to the release notes of versions 7.13.2 through 7.14.0.
e690314 to 4ca78e9
Pinging @elastic/es-docs (Team:Docs)
Pinging @elastic/es-distributed (Team:Distributed)
LGTM. Aside from some minor wording nits, I think we should include a snippet for the setting update. Thanks @DaveCTurner!
causing future restore operations to fail. To mitigate this problem, prevent
concurrent snapshot operations by setting
`snapshot.max_concurrent_operations: 1`.
Since the remediation step is a single API call, I'd include it here. If you'd rather not do that, I'd at least state you can update snapshot.max_concurrent_operations
using the update cluster settings API (with a link).
[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "snapshot.max_concurrent_operations" : 1
  }
}
----
👍 good idea.
* Snapshot and restore: If a running snapshot is cancelled while a
previously-started snapshot is still ongoing and a later snapshot is enqueued
then there is a risk that some shard data may be lost from the repository,
causing future restore operations to fail. To mitigate this problem, prevent
concurrent snapshot operations by setting
`snapshot.max_concurrent_operations: 1`.
Minor edits to reword some passive voice. There is still some passive voice in here, but I think this reads better. Feel free to ignore if you want, though.
Suggested change:

* Snapshot and restore: If you cancel a running snapshot while a
previously-started snapshot is still ongoing and a later snapshot is enqueued,
the repository may lose some shard data. This can cause future restore
operations to fail. To mitigate this problem, set
`snapshot.max_concurrent_operations` to `1` to prevent concurrent snapshot
operations.
I've left the first bit of passive voice in there ("if a running snapshot is cancelled" etc) since users will typically hit this when snapshots are being run by other components (SLM or ILM for instance) rather than when running snapshots themselves.
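A possible follow-up not spelled out in this PR's wording (a sketch, assuming the mitigation was applied as a persistent setting via the snippet above): once the cluster is running a fixed version, the limit could be lifted again by resetting the setting to `null` through the same update cluster settings API.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "snapshot.max_concurrent_operations" : null
  }
}
----

Resetting a persistent setting to `null` restores its default, so snapshots should be able to run concurrently again after the upgrade.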
e673f36 to b211069
Sorry, I messed up a merge and brought in some commits from a different branch. Force-pushed to fix it, but didn't change any reviewed commits.
No worries at all. Still looks good. Thanks!
The known-issue docs give the impression that an upgrade will restore the lost data in the repository. This isn't the case, so this commit clarifies this in the docs.

Relates elastic#73456
Relates elastic#75598
Relates elastic#79221