primary weight factor should exchange primary and replica at the same time in a small cluster in some cases #8060
Comments
cc: @dreamer-89
Thank you @kkewwei for opening this issue.
There are a few limitations today that prevent primary shard balancing. One of them prevents primary shard movement when the target node already contains a replica copy; this is known as the SameShardAllocationDecider. We do have an issue open to fix this, tracked in #6481. If you are interested, please feel free to pick it up and work on it.
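The same-shard rule described above can be sketched as a simple predicate. This is an illustrative Python sketch, not OpenSearch source; the names `can_allocate` and the `cluster` structure are hypothetical, chosen only to show why a primary cannot move to a node that already holds a replica of the same shard.

```python
# Hedged sketch of the SameShardAllocationDecider rule: a shard copy may
# not be allocated to a node that already holds another copy of that shard.
# All names here are illustrative, not the real OpenSearch API.

def can_allocate(shard_id, target_node, cluster):
    """Deny allocation if target_node already holds any copy of shard_id."""
    for copy in cluster.get(target_node, []):
        if copy["shard"] == shard_id:
            return False  # same shard already on node -> decision is NO
    return True

# node1 already holds a replica of shard0, so the primary cannot move there,
# while node2 (empty) would be an acceptable target.
cluster = {
    "node0": [{"shard": "shard0", "primary": True}],
    "node1": [{"shard": "shard0", "primary": False}],
    "node2": [],
}
assert can_allocate("shard0", "node1", cluster) is False
assert can_allocate("shard0", "node2", cluster) is True
```

This is exactly the check that blocks primary rebalancing in the three-node scenario reported here: every candidate node already has a copy, so no move is ever allowed.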
We are actually using
@dreamer-89 sorry to reopen the issue.
The cluster has three nodes: node0, node1, node2; every index has only one primary and one replica. The allocation is: In RebalanceConstraints, we don't set If we should add
@dreamer-89 please confirm it in your spare time.
Thank you @kkewwei for the comment, the deep dive, and for sharing your use case in detail. Today, rebalancing of primary shards is performed at the index level, i.e. primary shards belonging to the same index are distributed equally across nodes. The use case you shared above relates to primary balance across all indices. This is a separate problem that we have yet to solve, and it is tracked in #6642.
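The per-index vs. cluster-wide distinction above can be made concrete with a small counting sketch. The data and helper names below are illustrative assumptions: each index looks balanced when counted in isolation, yet one node holds every primary when counted across all indices.

```python
# Sketch of the gap between index-level primary balance (what exists today,
# per the comment above) and cluster-wide primary balance across all
# indices (#6642). Data and helper names are hypothetical.

from collections import Counter

# (index, shard, node) triples for primary shards only.
primaries = [
    ("index1", 0, "node0"),
    ("index2", 0, "node0"),
    ("index3", 0, "node0"),
]

def primaries_per_node_for_index(index):
    """Primary count per node, scoped to a single index."""
    return Counter(n for i, _, n in primaries if i == index)

def primaries_per_node_total():
    """Primary count per node across every index in the cluster."""
    return Counter(n for _, _, n in primaries)

# Each index is "balanced" in isolation (a single primary, nothing to move)...
assert primaries_per_node_for_index("index1") == Counter({"node0": 1})
# ...yet node0 still holds every primary in the cluster.
assert primaries_per_node_total() == Counter({"node0": 3})
```

An index-level balancer sees nothing to do in the first view; only the second, cluster-wide view reveals the skew the reporter is describing.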
Is your feature request related to a problem? Please describe.
We are very much looking forward to the primary weight factor introduced in #6017, but it does not seem to work in small clusters. For example:
index1 settings:
The cluster has three nodes: node0, node1, node2, the allocation of the index1 is:
```
        node0  node1  node2
shard0    p      r      r
shard1    p      r      r
shard2    p      r      r
```
All 3 primary shards are on node0.
When we apply the settings:
No primary rebalance happens as expected; the reason is that a copy of the shard is already allocated to every target node.
Describe the solution you'd like
There are good reasons to rebalance primary shards. When rebalancing a primary whose target node already holds the replica shard, we should exchange the primary and replica at the same time.
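The proposed exchange can be sketched as a role swap rather than a data move: promote the target node's replica and demote the source node's primary, so no shard data needs to relocate. This is a minimal illustrative sketch; `swap_primary` and the `shards` structure are hypothetical names, not OpenSearch APIs.

```python
# Hedged sketch of the proposal: when the target node already holds a
# replica, swap primary/replica roles instead of moving shard data.
# All names are illustrative.

def swap_primary(shards, shard_id, source, target):
    """Promote target's replica of shard_id and demote source's primary."""
    src = shards[(shard_id, source)]
    tgt = shards[(shard_id, target)]
    assert src["primary"] and not tgt["primary"]
    src["primary"], tgt["primary"] = False, True

shards = {
    ("shard0", "node0"): {"primary": True},
    ("shard0", "node1"): {"primary": False},
}
swap_primary(shards, "shard0", "node0", "node1")
assert shards[("shard0", "node0")]["primary"] is False
assert shards[("shard0", "node1")]["primary"] is True
```

Because both copies already hold the same data, a swap like this sidesteps the SameShardAllocationDecider entirely: nothing is allocated to a new node, only the primary role changes hands.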
In addition, I'm a little confused about why we do not introduce `cluster.primary.shard.balance.constraint` in rebalance; the feature seems useful in certain scenarios.