[DOCS] Add max open shards error to 'Size your shards' #77287
Conversation
Appends a common error to our 'Size your shards' docs, along with extra resources commonly viewed from [this Elastic Discuss](https://discuss.elastic.co/t/how-to-fix-hitting-maximum-shards-open-error/200502/2). It's among the top 4 most-viewed errors on Elastic Discuss over the last 30 days. Kindly assist with fixing the resource links. I'm debating including [the cluster setting you can temporarily override](https://www.elastic.co/guide/en/elasticsearch/reference/7.14/modules-cluster.html#cluster-shard-limit), but have left it off so far. Would love your thoughts!
Pinging @elastic/es-docs (Team:Docs)
Thanks for this PR @stefnestor. I agree that we should document this error. However, I'm not sure we want to include the external links here. Those links aren't necessarily maintained, and I don't want to link users to outdated info. I created a new troubleshooting section and included some information about temporarily increasing the limit.

@henningandersen Do you mind taking a look at these changes at your convenience? Thanks!
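For reference, the override under discussion centers on the `cluster.max_shards_per_node` setting linked above. A minimal sketch of the settings body such a temporary increase would send (the value `1200` and the choice of a `transient` setting are illustrative, not a recommendation from this PR):

```python
# Sketch of the request body for temporarily raising the cluster-wide
# shard limit via the cluster settings API. The setting name comes from
# the docs linked above; the value 1200 is purely illustrative.
import json

body = json.dumps(
    {"transient": {"cluster.max_shards_per_node": 1200}},
    indent=2,
)
print(body)
```

Transient settings reset on a full cluster restart, which fits the "this increase should only be temporary" guidance below.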
Pinging @elastic/es-distributed (Team:Distributed)
LGTM, just one nit.
> This increase should only be temporary. As a long-term solution, we recommend
> you add non-frozen data nodes or <<reduce-cluster-shard-count,reduce your
> cluster's shard count>>.
I wonder if, rather than "non-frozen", we should recommend adding nodes for the tier that is oversharded. While today it is correct that adding a cold node will also avoid the limit for all your hot data, it is not really a solution to the underlying oversharding problem of having too many hot shards for the number of hot nodes.
Good point! I'll revise this to:

> we recommend you add nodes to the oversharded data tier or ...
Changes:

* Adds a troubleshooting section and documents the `maximum shards open` error.
* Retitles the `Fix an oversharded cluster` section to `Reduce a cluster's shard count`.

Co-authored-by: James Rodewig <[email protected]>
Co-authored-by: Stef Nestor <[email protected]>
Relates to this Elastic Discuss topic.
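The error this PR documents follows from the documented cluster-wide limit: `cluster.max_shards_per_node` multiplied by the number of non-frozen data nodes. A minimal sketch of that arithmetic (the function name and figures are illustrative, not Elasticsearch code):

```python
def shards_remaining(max_shards_per_node: int, data_nodes: int, open_shards: int) -> int:
    """How many more shards may open before hitting the cluster-wide limit,
    assuming limit = max_shards_per_node * number of non-frozen data nodes."""
    return max_shards_per_node * data_nodes - open_shards

# Example: 3 data nodes at the default of 1000 shards per node,
# with 2950 shards (primaries + replicas) already open.
print(shards_remaining(1000, 3, 2950))  # prints 50
```

This is also why the reviewer's point about tiers matters: adding a node to any non-frozen tier raises the global limit, even when the oversharding is concentrated in one tier.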