[Zen2] Update documentation for Zen2 #34714

Merged
Commits (115 in total; diff shown from 96 commits)
621774a  Add some docs on cluster coordination (DaveCTurner, Oct 22, 2018)
56d050f  Review/rework (DaveCTurner, Oct 23, 2018)
aa6df51  More review feedback (DaveCTurner, Oct 23, 2018)
7c9db23  Bootstrapping explanation as NOTE (DaveCTurner, Oct 23, 2018)
830eca7  Rename to 'POST /_cluster/force_local_node_takeover' (DaveCTurner, Oct 23, 2018)
d91c924  WIP rolling restarts (DaveCTurner, Oct 25, 2018)
d03103a  Reorder bootstrap section (DaveCTurner, Oct 25, 2018)
c762dba  Finish section on migration/restarts (DaveCTurner, Oct 25, 2018)
8ca2e75  Different auto-config heuristics (DaveCTurner, Oct 25, 2018)
d8ec40b  Move/rework section on cluster maintenance (DaveCTurner, Oct 25, 2018)
a498e42  More rewording (DaveCTurner, Oct 25, 2018)
eb0aa2f  Moar reword (DaveCTurner, Oct 25, 2018)
ebcbe3c  Review feedback (DaveCTurner, Oct 31, 2018)
27a9ffb  Split sentence (DaveCTurner, Oct 31, 2018)
f43414d  Retire -> withdraw vote (DaveCTurner, Nov 2, 2018)
1fef44e  Typo, and better UUIDs (DaveCTurner, Nov 2, 2018)
10020db  Width (DaveCTurner, Nov 2, 2018)
05dc68a  Comments & width (DaveCTurner, Nov 2, 2018)
529a94a  Reorder (DaveCTurner, Nov 2, 2018)
dd35159  Reformat JSON (DaveCTurner, Nov 2, 2018)
40649bd  Better API for bootstrapping (DaveCTurner, Nov 2, 2018)
e88656c  Rewording (DaveCTurner, Nov 2, 2018)
697bbca  Merge branch 'zen2' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Nov 26, 2018)
8de22f1  Update APIs (DaveCTurner, Nov 26, 2018)
b54c0c1  isn't (DaveCTurner, Nov 26, 2018)
cb38a39  Merge branch 'zen2' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Nov 29, 2018)
dfd64f9  Add wait_for_removal parameter (DaveCTurner, Nov 29, 2018)
0973de8  rename withdrawal to exclusions (ywelsch, Dec 4, 2018)
e8d9656  Rename tombstones to exclusions (ywelsch, Dec 4, 2018)
208d463  Reformat (DaveCTurner, Dec 5, 2018)
de98cba  Expand section about quorums (DaveCTurner, Dec 5, 2018)
12c2b4b  Simplify bootstrapping docs (DaveCTurner, Dec 5, 2018)
d4763ee  Oops (DaveCTurner, Dec 5, 2018)
09b6293  Merge branch 'master' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Dec 7, 2018)
3fc691f  Command line also ok (DaveCTurner, Dec 7, 2018)
ca73f1f  Refactor docs (ywelsch, Dec 9, 2018)
01e7555  Merge remote-tracking branch 'elastic/master' into zen2-docs (ywelsch, Dec 9, 2018)
f3a8b93  put all in one doc (ywelsch, Dec 10, 2018)
e10d760  remove coordination.asciidoc (ywelsch, Dec 10, 2018)
ff1e87c  Merge branch 'master' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Dec 10, 2018)
024c9b2  Adapt docker instructions (ywelsch, Dec 11, 2018)
02b607c  adapt other uses of minimum_master_nodes (ywelsch, Dec 11, 2018)
60d64b4  Whitespace (DaveCTurner, Dec 11, 2018)
6102d5b  Cluster formation module forms clusters (DaveCTurner, Dec 11, 2018)
9fa0844  Rewording of summary (DaveCTurner, Dec 11, 2018)
fb1e7d3  Link to plugins page (DaveCTurner, Dec 11, 2018)
bfc7d16  Tweaks to discovery section (DaveCTurner, Dec 11, 2018)
2ed2c39  More on bootstrapping (DaveCTurner, Dec 11, 2018)
68d9ef5  Expand on cluster name (DaveCTurner, Dec 11, 2018)
a2b4d38  Expand on 'default configuration' for auto-bootstrapping (DaveCTurner, Dec 11, 2018)
bb6ef8e  Master-ineligible (DaveCTurner, Dec 11, 2018)
cbd33ff  Emphasize when you need voting exclusions (DaveCTurner, Dec 11, 2018)
b91519c  More on publishing (DaveCTurner, Dec 11, 2018)
7540e4e  Add lag detection bit (DaveCTurner, Dec 11, 2018)
d04b7ad  Tweaks (DaveCTurner, Dec 11, 2018)
8635a41  Hyphen? (DaveCTurner, Dec 11, 2018)
1189440  Consistentify with the `node.name` setting. (DaveCTurner, Dec 11, 2018)
7c7e7af  Add note on disconnections bypassing fault detection (DaveCTurner, Dec 11, 2018)
d48eccc  Add breaking changes (DaveCTurner, Dec 11, 2018)
43a6dcc  Reword (DaveCTurner, Dec 11, 2018)
e6087e9  Split up discovery depending on master-eligibility (DaveCTurner, Dec 11, 2018)
02b7ebd  Use the leader/follower terminology less (DaveCTurner, Dec 11, 2018)
7714003  fix link (ywelsch, Dec 11, 2018)
dddc3cf  smaller changes (ywelsch, Dec 12, 2018)
1888c97  Rewrite publishing bit (DaveCTurner, Dec 12, 2018)
b8997b1  Merge branch 'master' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Dec 12, 2018)
b1e98bd  Skip attempts to destroy the test cluster (DaveCTurner, Dec 12, 2018)
a1c9843  Rewording (DaveCTurner, Dec 12, 2018)
4b40b34  Weaken recommendation for removing bootstrap setting (DaveCTurner, Dec 13, 2018)
e6b7401  Merge branch 'master' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Dec 17, 2018)
041494c  Rework discovery settings (DaveCTurner, Dec 17, 2018)
f438a28  Add link to discovery settings docs (DaveCTurner, Dec 17, 2018)
4180e00  Emphasize again that this is only for new clusters (DaveCTurner, Dec 17, 2018)
76ec76c  Reformat (DaveCTurner, Dec 17, 2018)
8d1b118  Define 'cluster bootstrapping' (DaveCTurner, Dec 17, 2018)
2df3878  Weaken recommendation further, with more qualification (DaveCTurner, Dec 17, 2018)
17be8bb  Clarify that auto-bootstrapping will only find local nodes (DaveCTurner, Dec 17, 2018)
7ca6cc8  +automatically (DaveCTurner, Dec 17, 2018)
2fdb92f  Shorter sentences (DaveCTurner, Dec 17, 2018)
c4fd335  Add 'batch of' (DaveCTurner, Dec 17, 2018)
1d69b0a  Link up bootstrapping/setting initial quorum sections a bit (DaveCTurner, Dec 17, 2018)
9003c04  Remove note on migration and TODO (DaveCTurner, Dec 17, 2018)
771cf61  Fix ref to voting exclusions (DaveCTurner, Dec 17, 2018)
14de23c  Apply suggestions from code review (lcawl, Dec 18, 2018)
58c2a52  Apply suggestions from code review (lcawl, Dec 18, 2018)
98f1485  FIXUP missed suggestion (lcawl, Dec 18, 2018)
ebe1a1f  Reformat (DaveCTurner, Dec 18, 2018)
2cac91f  local ports (DaveCTurner, Dec 18, 2018)
b4dd874  Apply suggestions from code review (lcawl, Dec 18, 2018)
9d787f9  Reword 'at startup' (DaveCTurner, Dec 18, 2018)
400b2e4  Add redirects (DaveCTurner, Dec 18, 2018)
802a413  Split up monolith (DaveCTurner, Dec 18, 2018)
0ea5488  Rework the discovery module front page (DaveCTurner, Dec 18, 2018)
00a0145  Better front page (DaveCTurner, Dec 18, 2018)
9cdc18f  Reorder sections (DaveCTurner, Dec 18, 2018)
a9848ab  Merge branch 'master' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Dec 19, 2018)
4b55b1e  Suggested changes to adding & removing nodes (lcawl, Dec 20, 2018)
6778c0a  Suggested changes to bootstrapping doc (lcawl, Dec 20, 2018)
b349806  Suggested changes to discovery docs (lcawl, Dec 20, 2018)
ccba8ba  Apply suggestions from code review (lcawl, Dec 20, 2018)
8835ef2  Suggested title changes (lcawl, Dec 20, 2018)
0643929  Suggested changes to publishing docs (lcawl, Dec 20, 2018)
dea0d59  Merge branch 'master' into 2018-10-22-cluster-coordination-docs (DaveCTurner, Dec 20, 2018)
442a7a7  Further updates to publishing.asciidoc (DaveCTurner, Dec 20, 2018)
14194c6  Suggested changes to quorums.asciidoc (lcawl, Dec 20, 2018)
e466ed0  Add headings (DaveCTurner, Dec 20, 2018)
8e34a77  Move recommendation up in bootstrapping doc (DaveCTurner, Dec 20, 2018)
ec4e739  Combine discovery overviews (DaveCTurner, Dec 20, 2018)
f4a41db  Update docs/reference/setup/important-settings/discovery-settings.asc… (lcawl, Dec 20, 2018)
6af3721  Change title (DaveCTurner, Dec 20, 2018)
0b2b63c  Clarify the difference between a split brain and an even network part… (DaveCTurner, Dec 20, 2018)
f24c1d9  Add 'that half' (DaveCTurner, Dec 20, 2018)
64564c0  Move elections overview to quorums page (DaveCTurner, Dec 20, 2018)
852caed  Fix up broken link (DaveCTurner, Dec 20, 2018)
94bc24b  _hosts_ providers (DaveCTurner, Dec 20, 2018)
5 changes: 2 additions & 3 deletions docs/plugins/discovery.asciidoc
@@ -1,8 +1,8 @@
[[discovery]]
== Discovery Plugins

Discovery plugins extend Elasticsearch by adding new discovery mechanisms that
can be used instead of {ref}/modules-discovery-zen.html[Zen Discovery].
Discovery plugins extend Elasticsearch by adding new host providers that
can be used to extend the {ref}/modules-discovery.html[cluster formation module].

[float]
==== Core discovery plugins
@@ -26,7 +26,6 @@ The Google Compute Engine discovery plugin uses the GCE API for unicast discover

A number of discovery plugins have been contributed by our community:

* https://github.com/shikhar/eskka[eskka Discovery Plugin] (by Shikhar Bhushan)
* https://github.com/fabric8io/elasticsearch-cloud-kubernetes[Kubernetes Discovery Plugin] (by Jimmi Dyson, http://fabric8.io[fabric8])

include::discovery-ec2.asciidoc[]
2 changes: 2 additions & 0 deletions docs/reference/migration/migrate_7_0.asciidoc
@@ -11,6 +11,7 @@ See also <<release-highlights>> and <<es-release-notes>>.

* <<breaking_70_aggregations_changes>>
* <<breaking_70_cluster_changes>>
* <<breaking_70_discovery_changes>>
* <<breaking_70_indices_changes>>
* <<breaking_70_mappings_changes>>
* <<breaking_70_search_changes>>
@@ -44,6 +45,7 @@ Elasticsearch 6.x in order to be readable by Elasticsearch 7.x.
include::migrate_7_0/aggregations.asciidoc[]
include::migrate_7_0/analysis.asciidoc[]
include::migrate_7_0/cluster.asciidoc[]
include::migrate_7_0/discovery.asciidoc[]
include::migrate_7_0/indices.asciidoc[]
include::migrate_7_0/mappings.asciidoc[]
include::migrate_7_0/search.asciidoc[]
9 changes: 0 additions & 9 deletions docs/reference/migration/migrate_7_0/cluster.asciidoc
@@ -25,12 +25,3 @@ Clusters now have soft limits on the total number of open shards in the cluster
based on the number of nodes and the `cluster.max_shards_per_node` cluster
setting, to prevent accidental operations that would destabilize the cluster.
More information can be found in the <<misc-cluster,documentation for that setting>>.

[float]
==== Discovery configuration is required in production
Production deployments of Elasticsearch now require at least one of the following settings
to be specified in the `elasticsearch.yml` configuration file:

- `discovery.zen.ping.unicast.hosts`
- `discovery.zen.hosts_provider`
- `cluster.initial_master_nodes`
40 changes: 40 additions & 0 deletions docs/reference/migration/migrate_7_0/discovery.asciidoc
@@ -0,0 +1,40 @@
[float]
[[breaking_70_discovery_changes]]
=== Discovery changes

[float]
==== Cluster bootstrapping is required if discovery is configured

The first time a cluster is started, `cluster.initial_master_nodes` must be set
to perform cluster bootstrapping. It should contain the names of the
master-eligible nodes in the initial cluster and be defined on every
master-eligible node in the cluster. See <<discovery-settings,the discovery
settings summary>> for an example, and the
<<modules-discovery-bootstrap-cluster,cluster bootstrapping reference
documentation>> describes this setting in more detail.
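
For example, a minimal sketch of this setting for a cluster of three
master-eligible nodes (the node names here are illustrative, not part of the
original text) would be:

[source,yaml]
--------------------------------------------------
cluster.initial_master_nodes:
  - master-a
  - master-b
  - master-c
--------------------------------------------------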

The `discovery.zen.minimum_master_nodes` setting is required during a rolling
upgrade from 6.x, but can be removed in all other circumstances.

[float]
==== Removing master-eligible nodes sometimes requires voting exclusions

If you wish to remove half or more of the master-eligible nodes from a cluster,
you must first exclude the affected nodes from the voting configuration using
the <<modules-discovery-adding-removing-nodes,voting config exclusions API>>.
If you remove fewer than half of the master-eligible nodes at the same time,
voting exclusions are not required. If you remove only master-ineligible nodes
such as data-only nodes or coordinating-only nodes, voting exclusions are not
required. Likewise, if you add nodes to the cluster, voting exclusions are not
required.
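
As a sketch (again with illustrative node names), an exclusion is added for
each master-eligible node that is about to be removed, for example:

[source,js]
--------------------------------------------------
POST /_cluster/voting_config_exclusions/master-a,master-b
--------------------------------------------------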

[float]
==== Discovery configuration is required in production

Production deployments of Elasticsearch now require at least one of the
following settings to be specified in the `elasticsearch.yml` configuration
file:

- `discovery.zen.ping.unicast.hosts`
- `discovery.zen.hosts_provider`
- `cluster.initial_master_nodes`
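
For example, a minimal sketch of such a configuration (the seed addresses are
illustrative) might be:

[source,yaml]
--------------------------------------------------
discovery.zen.ping.unicast.hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
--------------------------------------------------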
12 changes: 6 additions & 6 deletions docs/reference/modules.asciidoc
@@ -18,13 +18,13 @@ These settings can be dynamically updated on a live cluster with the

The modules in this section are:

<<modules-cluster,Cluster-level routing and shard allocation>>::
<<modules-discovery,Discovery and cluster formation>>::

Settings to control where, when, and how shards are allocated to nodes.
How nodes discover each other, elect a master and form a cluster.

<<modules-discovery,Discovery>>::
<<modules-cluster,Shard allocation and cluster-level routing>>::

How nodes discover each other to form a cluster.
Settings to control where, when, and how shards are allocated to nodes.

<<modules-gateway,Gateway>>::

@@ -85,10 +85,10 @@ The modules in this section are:
--


include::modules/cluster.asciidoc[]

include::modules/discovery.asciidoc[]

include::modules/cluster.asciidoc[]

include::modules/gateway.asciidoc[]

include::modules/http.asciidoc[]
2 changes: 1 addition & 1 deletion docs/reference/modules/cluster.asciidoc
@@ -1,5 +1,5 @@
[[modules-cluster]]
== Cluster
== Shard allocation and cluster-level routing

One of the main roles of the master is to decide which shards to allocate to
which nodes, and when to move shards between nodes in order to rebalance the
87 changes: 66 additions & 21 deletions docs/reference/modules/discovery.asciidoc
@@ -1,30 +1,75 @@
[[modules-discovery]]
== Discovery
== Discovery and cluster formation

The discovery module is responsible for discovering nodes within a
cluster, as well as electing a master node.
The discovery and cluster formation module is responsible for discovering
nodes, electing a master, forming a cluster, and publishing the cluster state
each time it changes. It is integrated with other modules. For example, all
communication between nodes is done using the <<modules-transport,transport>>
module. This module is divided into the following sections:

Note, Elasticsearch is a peer to peer based system, nodes communicate
with one another directly if operations are delegated / broadcast. All
the main APIs (index, delete, search) do not communicate with the master
node. The responsibility of the master node is to maintain the global
cluster state, and act if nodes join or leave the cluster by reassigning
shards. Each time a cluster state is changed, the state is made known to
the other nodes in the cluster (the manner depends on the actual
discovery implementation).
<<modules-discovery-hosts-providers>>::

[float]
=== Settings
Discovery is the process where nodes find each other when the master is
unknown, such as when a node has just started up or when the previous
master has failed.

The `cluster.name` allows to create separated clusters from one another.
The default value for the cluster name is `elasticsearch`, though it is
recommended to change this to reflect the logical group name of the
cluster running.
<<modules-discovery-bootstrap-cluster>>::

include::discovery/azure.asciidoc[]
Bootstrapping a cluster is required when an Elasticsearch cluster starts up
for the very first time. In <<dev-vs-prod-mode,development mode>>, with no
discovery settings configured, this is automatically performed by the nodes
themselves. As this auto-bootstrapping is
<<modules-discovery-quorums,inherently unsafe>>, running a node in
<<dev-vs-prod-mode,production mode>> requires bootstrapping to be
explicitly configured via the
<<modules-discovery-bootstrap-cluster,`cluster.initial_master_nodes`
setting>>.

include::discovery/ec2.asciidoc[]
<<modules-discovery-adding-removing-nodes,Adding and removing master-eligible nodes>>::

include::discovery/gce.asciidoc[]
It is recommended to have a small and fixed number of master-eligible nodes
in a cluster, and to scale the cluster up and down by adding and removing
master-ineligible nodes only. However there are situations in which it may
be desirable to add or remove some master-eligible nodes to or from a
cluster. This section describes the process for adding or removing
master-eligible nodes, including the extra steps that need to be performed
when removing more than half of the master-eligible nodes at the same time.

<<cluster-state-publishing>>::

Cluster state publishing is the process by which the elected master node
updates the cluster state on all the other nodes in the cluster.

<<no-master-block>>::

The no-master block is put in place when there is no known elected master,
and can be configured to determine which operations should be rejected when
it is in place.

Advanced settings::

There are settings that allow advanced users to influence the
<<master-election,master election>> and <<fault-detection,fault detection>>
processes.

<<modules-discovery-quorums>>::

This section describes the detailed design behind the master election and
auto-reconfiguration logic.

include::discovery/discovery.asciidoc[]

include::discovery/bootstrapping.asciidoc[]

include::discovery/adding-removing-nodes.asciidoc[]

include::discovery/publishing.asciidoc[]

include::discovery/no-master-block.asciidoc[]

include::discovery/master-election.asciidoc[]

Contributor:

I think it would be great to have the following example here. Consider you have 3 master eligible nodes - A, B, C and auto_shrink is set to true. In this case, the voting configuration will be {A, B, C}. Now consider node C fails, the voting configuration is not changed in this case, because there would be less than 3 nodes if node C is removed. Now master-eligible node D connects to the cluster, in this case, node C will be atomically replaced with node D in the voting configuration - {A, B, D}.

Contributor Author:

I think we should wait and see about this. I am worried that introducing this one example will raise more questions than it answers, and do not want to introduce a much broader selection of examples. This particular example is spelled out here:

check(nodes("a", "b", "c"), conf("a", "b", "e"), true, conf("a", "b", "c"));

However as you can see from that test case there are many other examples to think about.

include::discovery/fault-detection.asciidoc[]

include::discovery/quorums.asciidoc[]

include::discovery/zen.asciidoc[]
121 changes: 121 additions & 0 deletions docs/reference/modules/discovery/adding-removing-nodes.asciidoc
@@ -0,0 +1,121 @@
[[modules-discovery-adding-removing-nodes]]
=== Adding and removing nodes

Contributor:

It seems like the majority of this information pertains only to removing nodes. It's not necessary immediately, but at some point I think it would be good to split this into two separate pages -- one about adding nodes (with lots of details about how that differs depending on platform and node type, etc) and one about removing nodes (which would be most of this content).

Contributor Author:

There just isn't really a lot to say about adding nodes in this context. I added headings to divide the page up in e466ed0.

As nodes are added or removed Elasticsearch maintains an optimal level of fault
tolerance by automatically updating the cluster's _voting configuration_, which
is the set of master-eligible nodes whose responses are counted when making
decisions such as electing a new master or committing a new cluster state.
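
As an illustration (a sketch; the exact field name is an assumption and may
vary by version), the current committed voting configuration can be inspected
in the cluster state:

[source,js]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
--------------------------------------------------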

It is recommended to have a small and fixed number of master-eligible nodes in a
cluster, and to scale the cluster up and down by adding and removing
master-ineligible nodes only. However there are situations in which it may be
desirable to add or remove some master-eligible nodes to or from a cluster.

If you wish to add some master-eligible nodes to your cluster, simply configure
the new nodes to find the existing cluster and start them up. Elasticsearch will
add the new nodes to the voting configuration if it is appropriate to do so.
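
For instance, a new master-eligible node joining an existing cluster needs
only discovery settings that point at some of the existing nodes (a sketch;
the names and addresses are illustrative):

[source,yaml]
--------------------------------------------------
cluster.name: my-cluster
node.name: master-d
discovery.zen.ping.unicast.hosts: ["master-a:9300", "master-b:9300"]
--------------------------------------------------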

When removing master-eligible nodes, it is important not to remove too many all
at the same time. For instance, if there are currently seven master-eligible
nodes and you wish to reduce this to three, it is not possible simply to stop
four of the nodes at once: to do so would leave only three nodes remaining,
which is less than half of the voting configuration, which means the cluster
cannot take any further actions.

As long as there are at least three master-eligible nodes in the cluster, as a
general rule it is best to remove nodes one-at-a-time, allowing enough time for
the cluster to <<modules-discovery-quorums,auto-adjust>> the voting
configuration and adapt the fault tolerance level to the new set of nodes.

If there are only two master-eligible nodes remaining then neither node can be
safely removed since both are required to reliably make progress, so you must
first inform Elasticsearch that one of the nodes should not be part of the
voting configuration, and that the voting power should instead be given to
other nodes, allowing the excluded node to be taken offline without preventing
the other node from making progress. A node which is added to a voting
configuration exclusion list still works normally, but Elasticsearch will try
to remove it from the voting configuration so its vote is no longer required.
Importantly, Elasticsearch will never automatically move a node on the voting
exclusions list back into the voting configuration. Once an excluded node has
been successfully auto-reconfigured out of the voting configuration, it is safe
to shut it down without affecting the cluster's master-level availability. A
node can be added to the voting configuration exclusion list using the
following API:

[source,js]
--------------------------------------------------
# Add node to voting configuration exclusions list and wait for the system to
# auto-reconfigure the node out of the voting configuration up to the default
# timeout of 30 seconds
POST /_cluster/voting_config_exclusions/node_name

# Add node to voting configuration exclusions list and wait for
# auto-reconfiguration up to one minute
POST /_cluster/voting_config_exclusions/node_name?timeout=1m
--------------------------------------------------
// CONSOLE
// TEST[skip:this would break the test cluster if executed]

The node that should be added to the exclusions list is specified using
<<cluster-nodes,node filters>> in place of `node_name` here. If a call to the
voting configuration exclusions API fails then the call can safely be retried.
Only a successful response guarantees that the node has actually been removed
from the voting configuration and will not be reinstated.

Although the voting configuration exclusions API is most useful for down-scaling
a two-node to a one-node cluster, it is also possible to use it to remove
multiple master-eligible nodes all at the same time. Adding multiple nodes
to the exclusions list has the system try to auto-reconfigure all of these nodes
out of the voting configuration, allowing them to be safely shut down while
keeping the cluster available. In the example described above, shrinking a
seven-master-node cluster down to only have three master nodes, you could add
four nodes to the exclusions list, wait for confirmation, and then shut them
down simultaneously.
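
Sketching that example (with illustrative node names), the four outgoing
master-eligible nodes could be excluded in a single call:

[source,js]
--------------------------------------------------
POST /_cluster/voting_config_exclusions/master-d,master-e,master-f,master-g?timeout=5m
--------------------------------------------------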

NOTE: Voting exclusions are only required when removing at least half of the
master-eligible nodes from a cluster in a short time period. They are not
required when removing master-ineligible nodes, nor are they required when
removing fewer than half of the master-eligible nodes.

Adding an exclusion for a node creates an entry for that node in the voting
configuration exclusions list, which has the system automatically try to
reconfigure the voting configuration to remove that node and prevents it from
returning to the voting configuration once it has been removed. The current list of
exclusions is stored in the cluster state and can be inspected as follows:

[source,js]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions
--------------------------------------------------
// CONSOLE

This list is limited in size by the following setting:

`cluster.max_voting_config_exclusions`::

Sets a limit on the number of voting configuration exclusions at any one
time. Defaults to `10`.
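
Assuming this setting is dynamically updatable, as other cluster-wide limits
are (an assumption, so treat this as a sketch), it could be adjusted via the
cluster settings API:

[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_voting_config_exclusions": 20
  }
}
--------------------------------------------------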

Since voting configuration exclusions are persistent and limited in number, they
must be cleaned up. Normally an exclusion is added when performing some
maintenance on the cluster, and the exclusions should be cleaned up when the
maintenance is complete. Clusters should have no voting configuration exclusions
in normal operation.

If a node is excluded from the voting configuration because it is to be shut
down permanently then its exclusion can be removed once it has shut down and
been removed from the cluster. Exclusions can also be cleared if they were
created in error or were only required temporarily:

[source,js]
--------------------------------------------------
# Wait for all the nodes with voting configuration exclusions to be removed from
# the cluster and then remove all the exclusions, allowing any node to return to
# the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions

# Immediately remove all the voting configuration exclusions, allowing any node
# to return to the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
--------------------------------------------------
// CONSOLE
5 changes: 0 additions & 5 deletions docs/reference/modules/discovery/azure.asciidoc

This file was deleted.
