diff --git a/docs/reference/cluster/voting-exclusions.asciidoc b/docs/reference/cluster/voting-exclusions.asciidoc
index dbb5432a28052..4393821b4f6bd 100644
--- a/docs/reference/cluster/voting-exclusions.asciidoc
+++ b/docs/reference/cluster/voting-exclusions.asciidoc
@@ -1,10 +1,11 @@
 [[voting-config-exclusions]]
 == Voting configuration exclusions API
 ++++
-Voting configuration exclusions
+Voting Configuration Exclusions
 ++++
 
-Adds or removes nodes from the voting configuration exclusion list.
+Adds or removes master-eligible nodes from the
+<>.
 
 [float]
 === Request
@@ -28,16 +29,20 @@ DELETE /_cluster/voting_config_exclusions
 [float]
 === Description
 
-If the <>
-is `true`, the <> automatically
-shrinks when you remove master-eligible nodes from the cluster.
-
-If the `cluster.auto_shrink_voting_configuration` setting is `false`, you must
-use this API to remove departed nodes from the voting configuration manually.
-It adds an entry for that node in the voting configuration exclusions list. The
-cluster then tries to reconfigure the voting configuration to remove that node
-and to prevent it from returning.
-
+If the <> is `true`, and there are more than three master-eligible nodes in the
+cluster, and you remove fewer than half of the master-eligible nodes in the
+cluster at once, then the <>
+automatically shrinks when you remove master-eligible nodes from the cluster.
+
+If the `cluster.auto_shrink_voting_configuration` setting is `false`, or you
+wish to shrink the voting configuration to contain fewer than three nodes, or
+you wish to remove half or more of the master-eligible nodes in the cluster at
+once, you must use this API to remove departed nodes from the voting
+configuration manually. The API adds an entry for each departed node to the
+voting configuration exclusions list. The cluster then tries to reconfigure the
+voting configuration to remove those nodes and to prevent them from returning.
+
 If the API fails, you can safely retry it. Only a successful response
 guarantees that the node has been removed from the voting configuration and
 will not be reinstated.
@@ -47,11 +52,11 @@ master-eligible nodes from a cluster in a short time period. They are not
 required when removing master-ineligible nodes or fewer than half of the
 master-eligible nodes.
 
-The
-<>
-limits the size of the voting configuration exclusion list. The default value is
-`10`. Since voting configuration exclusions are persistent and limited in number,
-you must clean up the list.
+The <> limits the size of the voting configuration exclusion list. The
+default value is `10`. Since voting configuration exclusions are persistent and
+limited in number, you must clear the voting configuration exclusions list once
+the exclusions are no longer required.
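+
+For example, once the exclusions are no longer required you can clear the list
+with the `DELETE` request shown in the Request section above; it is repeated
+here only as an illustration of the clean-up step:
+
+[source,console]
+--------------------------------------------------
+DELETE /_cluster/voting_config_exclusions
+--------------------------------------------------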
 
 For more information, see <>.
diff --git a/docs/reference/modules/discovery.asciidoc b/docs/reference/modules/discovery.asciidoc
index 0886f4c338da0..d3e0d4fe84751 100644
--- a/docs/reference/modules/discovery.asciidoc
+++ b/docs/reference/modules/discovery.asciidoc
@@ -15,8 +15,13 @@ module. This module is divided into the following sections:
 
 <>::
 
-    This section describes the detailed design behind the master election and
-    auto-reconfiguration logic.
+    This section describes how {es} uses a quorum-based voting mechanism to
+    make decisions even if some nodes are unavailable.
+
+<>::
+
+    This section describes the concept of voting configurations, which {es}
+    automatically updates as nodes leave and join the cluster.
 
 <>::
 
@@ -44,7 +49,11 @@ module. This module is divided into the following sections:
 
     Cluster state publishing is the process by which the elected master node
     updates the cluster state on all the other nodes in the cluster.
-
+
+<>::
+
+    {es} performs health checks to detect and remove faulty nodes.
+
 <>::
 
     There are settings that enable users to influence the discovery, cluster
@@ -64,4 +73,4 @@ include::discovery/publishing.asciidoc[]
 
 include::discovery/fault-detection.asciidoc[]
 
-include::discovery/discovery-settings.asciidoc[]
\ No newline at end of file
+include::discovery/discovery-settings.asciidoc[]
diff --git a/docs/reference/modules/discovery/discovery-settings.asciidoc b/docs/reference/modules/discovery/discovery-settings.asciidoc
index dbfb38c98ad6f..494c5ac225b87 100644
--- a/docs/reference/modules/discovery/discovery-settings.asciidoc
+++ b/docs/reference/modules/discovery/discovery-settings.asciidoc
@@ -5,11 +5,12 @@ Discovery and cluster formation are affected by the following settings:
 
 `cluster.auto_shrink_voting_configuration`::
 
-    Controls whether the <> sheds
-    departed nodes automatically, as long as it still contains at least 3 nodes.
-    The default value is `true`. If set to `false`, the voting configuration
-    never shrinks automatically; you must remove departed nodes manually with
-    the <>.
+    Controls whether the <>
+    sheds departed nodes automatically, as long as it still contains at least 3
+    nodes. The default value is `true`. If set to `false`, the voting
+    configuration never shrinks automatically and you must remove departed
+    nodes manually with the <>.
 
 [[master-election-settings]]`cluster.election.back_off_time`::
 
@@ -160,9 +161,11 @@ APIs are not be blocked and can run on any available node.
 
    Provides a list of master-eligible nodes in the cluster. The list contains
    either an array of hosts or a comma-delimited string. Each value has the
-    format `host:port` or `host`, where `port` defaults to the setting `transport.profiles.default.port`. Note that IPv6 hosts must be bracketed.
+    format `host:port` or `host`, where `port` defaults to the setting
+    `transport.profiles.default.port`. Note that IPv6 hosts must be bracketed.
    The default value is `127.0.0.1, [::1]`. See <>.
 
 `discovery.zen.ping.unicast.hosts.resolve_timeout`::
 
-    Sets the amount of time to wait for DNS lookups on each round of discovery. This is specified as a <> and defaults to `5s`.
\ No newline at end of file
+    Sets the amount of time to wait for DNS lookups on each round of discovery.
+    This is specified as a <> and defaults to `5s`.
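+
+For illustration only, the two `discovery.zen.ping.unicast.hosts` settings
+described above might be configured in `elasticsearch.yml` as follows; the host
+names, addresses, and timeout value are placeholders, not recommendations:
+
+[source,yaml]
+--------------------------------------------------
+# Placeholder hosts; a port defaults to transport.profiles.default.port when
+# omitted, and IPv6 addresses must be bracketed.
+discovery.zen.ping.unicast.hosts:
+   - 192.168.1.10:9300
+   - seeds.mydomain.com
+   - "[::1]"
+discovery.zen.ping.unicast.hosts.resolve_timeout: 10s
+--------------------------------------------------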
diff --git a/docs/reference/modules/discovery/fault-detection.asciidoc b/docs/reference/modules/discovery/fault-detection.asciidoc
index b696cdb8f7ca2..9062444b80d6c 100644
--- a/docs/reference/modules/discovery/fault-detection.asciidoc
+++ b/docs/reference/modules/discovery/fault-detection.asciidoc
@@ -2,8 +2,9 @@
 === Cluster fault detection
 
 The elected master periodically checks each of the nodes in the cluster to
-ensure that they are still connected and healthy. Each node in the cluster also periodically checks the health of the elected master. These checks
-are known respectively as _follower checks_ and _leader checks_.
+ensure that they are still connected and healthy. Each node in the cluster also
+periodically checks the health of the elected master. These checks are known
+respectively as _follower checks_ and _leader checks_.
 
 Elasticsearch allows these checks to occasionally fail or timeout without
 taking any action. It considers a node to be faulty only after a number of
@@ -16,4 +17,4 @@ and retry setting values and attempts to remove the node from the cluster.
 Similarly, if a node detects that the elected master has disconnected, this
 situation is treated as an immediate failure. The node bypasses the timeout and
 retry settings and restarts its discovery phase to try and find or elect a new
-master.
\ No newline at end of file
+master.
diff --git a/docs/reference/modules/discovery/quorums.asciidoc b/docs/reference/modules/discovery/quorums.asciidoc
index 40e31f06aa59f..1a1954454268c 100644
--- a/docs/reference/modules/discovery/quorums.asciidoc
+++ b/docs/reference/modules/discovery/quorums.asciidoc
@@ -18,13 +18,13 @@ cluster. In many cases you can do this simply by starting or stopping the nodes
 as required. See <>.
 
 As nodes are added or removed Elasticsearch maintains an optimal level of fault
-tolerance by updating the cluster's _voting configuration_, which is the set of
-master-eligible nodes whose responses are counted when making decisions such as
-electing a new master or committing a new cluster state. A decision is made only
-after more than half of the nodes in the voting configuration have responded.
-Usually the voting configuration is the same as the set of all the
-master-eligible nodes that are currently in the cluster. However, there are some
-situations in which they may be different.
+tolerance by updating the cluster's
+<>, which is the set of master-eligible nodes whose responses are
+counted when making decisions such as electing a new master or committing a new
+cluster state. A decision is made only after more than half of the nodes in the
+voting configuration have responded. Usually the voting configuration is the
+same as the set of all the master-eligible nodes that are currently in the
+cluster. However, there are some situations in which they may be different.
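+
+One way to see which nodes are currently in the voting configuration is to
+inspect the cluster state. The request below is an illustrative sketch; it
+assumes the committed voting configuration is reported under the cluster
+state's `metadata.cluster_coordination.last_committed_config` key:
+
+[source,console]
+--------------------------------------------------
+# Illustrative request; the filter path used here is an assumption.
+GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
+--------------------------------------------------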
 
 To be sure that the cluster remains available you **must not stop half or more
 of the nodes in the voting configuration at the same time**. As long as more
@@ -38,46 +38,6 @@ cluster-state update that adjusts the voting configuration to match, and this
 can take a short time to complete. It is important to wait for this adjustment
 to complete before removing more nodes from the cluster.
 
-[float]
-==== Setting the initial quorum
-
-When a brand-new cluster starts up for the first time, it must elect its first
-master node. To do this election, it needs to know the set of master-eligible
-nodes whose votes should count. This initial voting configuration is known as
-the _bootstrap configuration_ and is set in the
-<>.
-
-It is important that the bootstrap configuration identifies exactly which nodes
-should vote in the first election. It is not sufficient to configure each node
-with an expectation of how many nodes there should be in the cluster. It is also
-important to note that the bootstrap configuration must come from outside the
-cluster: there is no safe way for the cluster to determine the bootstrap
-configuration correctly on its own.
-
-If the bootstrap configuration is not set correctly, when you start a brand-new
-cluster there is a risk that you will accidentally form two separate clusters
-instead of one. This situation can lead to data loss: you might start using both
-clusters before you notice that anything has gone wrong and it is impossible to
-merge them together later.
-
-NOTE: To illustrate the problem with configuring each node to expect a certain
-cluster size, imagine starting up a three-node cluster in which each node knows
-that it is going to be part of a three-node cluster. A majority of three nodes
-is two, so normally the first two nodes to discover each other form a cluster
-and the third node joins them a short time later. However, imagine that four
-nodes were erroneously started instead of three. In this case, there are enough
-nodes to form two separate clusters. Of course if each node is started manually
-then it's unlikely that too many nodes are started. If you're using an automated
-orchestrator, however, it's certainly possible to get into this situation--
-particularly if the orchestrator is not resilient to failures such as network
-partitions.
-
-The initial quorum is only required the very first time a whole cluster starts
-up. New nodes joining an established cluster can safely obtain all the
-information they need from the elected master. Nodes that have previously been
-part of a cluster will have stored to disk all the information that is required
-when they restart.
-
 [float]
 ==== Master elections
 
@@ -103,3 +63,4 @@ and then started again then it will automatically recover, such as during a
 <>. There is no need to take any further action with the APIs described
 here in these cases, because the set of master nodes is not changing
 permanently.
+
diff --git a/docs/reference/modules/discovery/voting.asciidoc b/docs/reference/modules/discovery/voting.asciidoc
index 1f71cc4b8810f..84aab16b8ed30 100644
--- a/docs/reference/modules/discovery/voting.asciidoc
+++ b/docs/reference/modules/discovery/voting.asciidoc
@@ -1,11 +1,11 @@
 [[modules-discovery-voting]]
 === Voting configurations
 
-Each {es} cluster has a _voting configuration_, which is the set of
+Each {es} cluster has a _voting configuration_, which is the set of
 <> whose responses are counted when making
-decisions such as electing a new master or committing a new cluster
-state. Decisions are made only after a _quorum_ (more than half) of the nodes in
-the voting configuration respond.
+decisions such as electing a new master or committing a new cluster state.
+Decisions are made only after a majority (more than half) of the nodes in the
+voting configuration respond.
 
 Usually the voting configuration is the same as the set of all the
 master-eligible nodes that are currently in the cluster. However, there are some
@@ -98,3 +98,43 @@ nodes, however, the cluster is still only fully tolerant to the loss of one
 node, but quorum-based decisions require votes from two of the three voting
 nodes. In the event of an even split, one half will contain two of the three
 voting nodes so that half will remain available.
+
+[float]
+==== Setting the initial voting configuration
+
+When a brand-new cluster starts up for the first time, it must elect its first
+master node. To do this election, it needs to know the set of master-eligible
+nodes whose votes should count. This initial voting configuration is known as
+the _bootstrap configuration_ and is set in the
+<>.
+
+It is important that the bootstrap configuration identifies exactly which nodes
+should vote in the first election. It is not sufficient to configure each node
+with an expectation of how many nodes there should be in the cluster. It is also
+important to note that the bootstrap configuration must come from outside the
+cluster: there is no safe way for the cluster to determine the bootstrap
+configuration correctly on its own.
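+
+For illustration only, the bootstrap configuration for a new three-node cluster
+might be set with the `cluster.initial_master_nodes` setting in
+`elasticsearch.yml` on each of the initial master-eligible nodes; this sketch
+assumes the setting described in the cluster bootstrapping documentation, and
+the node names below are placeholders for the nodes that should vote in the
+first election:
+
+[source,yaml]
+--------------------------------------------------
+# Placeholder node names; list the master-eligible nodes that
+# should form the initial voting configuration.
+cluster.initial_master_nodes:
+   - master-node-a
+   - master-node-b
+   - master-node-c
+--------------------------------------------------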
+
+If the bootstrap configuration is not set correctly, when you start a brand-new
+cluster there is a risk that you will accidentally form two separate clusters
+instead of one. This situation can lead to data loss: you might start using both
+clusters before you notice that anything has gone wrong and it is impossible to
+merge them together later.
+
+NOTE: To illustrate the problem with configuring each node to expect a certain
+cluster size, imagine starting up a three-node cluster in which each node knows
+that it is going to be part of a three-node cluster. A majority of three nodes
+is two, so normally the first two nodes to discover each other form a cluster
+and the third node joins them a short time later. However, imagine that four
+nodes were erroneously started instead of three. In this case, there are enough
+nodes to form two separate clusters. Of course if each node is started manually
+then it's unlikely that too many nodes are started. If you're using an automated
+orchestrator, however, it's certainly possible to get into this situation--
+particularly if the orchestrator is not resilient to failures such as network
+partitions.
+
+The initial quorum is only required the very first time a whole cluster starts
+up. New nodes joining an established cluster can safely obtain all the
+information they need from the elected master. Nodes that have previously been
+part of a cluster will have stored to disk all the information that is required
+when they restart.