From 27925cb6239eb548e5c3900099e99da60e7f203c Mon Sep 17 00:00:00 2001
From: James Rodewig
Date: Mon, 11 May 2020 12:32:24 -0400
Subject: [PATCH 1/3] [DOCS] Relocate `shard allocation` module content

---
 docs/plugins/discovery-ec2.asciidoc           |  8 +++---
 docs/reference/index-modules.asciidoc         |  6 ++---
 .../allocation/filtering.asciidoc             |  4 +--
 docs/reference/modules.asciidoc               |  8 +-----
 docs/reference/modules/cluster.asciidoc       | 24 ++++++++++-------
 .../cluster/allocation_awareness.asciidoc     | 10 +++----
 .../cluster/allocation_filtering.asciidoc     |  9 +++----
 .../modules/cluster/disk_allocator.asciidoc   |  4 +--
 docs/reference/modules/cluster/misc.asciidoc  | 17 ++++++------
 .../cluster/shards_allocation.asciidoc        | 24 +++++++----------
 docs/reference/monitoring/exporters.asciidoc  |  3 +--
 docs/reference/redirects.asciidoc             | 27 ++++++++++++++++++-
 .../search/request/preference.asciidoc        |  2 +-
 docs/reference/setup.asciidoc                 |  2 ++
 .../upgrade/disable-shard-alloc.asciidoc      |  4 +--
 15 files changed, 84 insertions(+), 68 deletions(-)

diff --git a/docs/plugins/discovery-ec2.asciidoc b/docs/plugins/discovery-ec2.asciidoc
index a3b0c6812ac7f..a3190cff9224b 100644
--- a/docs/plugins/discovery-ec2.asciidoc
+++ b/docs/plugins/discovery-ec2.asciidoc
@@ -236,7 +236,8 @@ The `discovery-ec2` plugin can automatically set the `aws_availability_zone`
 node attribute to the availability zone of each node. This node attribute
 allows you to ensure that each shard has copies allocated redundantly across
 multiple availability zones by using the
-{ref}/allocation-awareness.html[Allocation Awareness] feature.
+{ref}/modules-cluster.html#shard-allocation-awareness[Allocation Awareness]
+feature.
 
 In order to enable the automatic definition of the `aws_availability_zone`
 attribute, set `cloud.node.auto_attributes` to `true`. For example:
@@ -327,8 +328,9 @@ labelled as `Moderate` or `Low`.
 
 * It is a good idea to distribute your nodes across multiple
 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability
-zones] and use {ref}/allocation-awareness.html[shard allocation awareness] to
-ensure that each shard has copies in more than one availability zone.
+zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard
+allocation awareness] to ensure that each shard has copies in more than one
+availability zone.
 
 * Do not span a cluster across regions. {es} expects that node-to-node
 connections within a cluster are reasonably reliable and offer high bandwidth
diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc
index 4fb31e31e8554..45817515312a7 100644
--- a/docs/reference/index-modules.asciidoc
+++ b/docs/reference/index-modules.asciidoc
@@ -105,7 +105,7 @@ specific index module:
     for the upper bound (e.g. `0-all`). Defaults to `false` (i.e. disabled).
     Note that the auto-expanded number of replicas only takes
     <<shard-allocation-filtering,allocation filtering>> rules into account, but ignores
-    any other allocation rules such as <<allocation-awareness,shard allocation awareness>>
+    any other allocation rules such as <<shard-allocation-awareness,shard allocation awareness>>
     and <<allocation-total-shards,total shards per node>>, and this can lead to the
     cluster health becoming `YELLOW` if the applicable rules prevent all the
     replicas from being allocated.
@@ -178,8 +178,8 @@ specific index module:
 `index.blocks.read_only_allow_delete`::
 
     Similar to `index.blocks.read_only` but also allows deleting the index to
-    free up resources. The <<disk-allocator,disk-based shard allocator>> may
-    add and remove this block automatically.
+    free up resources. The <<disk-based-shard-allocation,disk-based shard
+    allocator>> may add and remove this block automatically.
 
 `index.blocks.read`::
diff --git a/docs/reference/index-modules/allocation/filtering.asciidoc b/docs/reference/index-modules/allocation/filtering.asciidoc
index f5a4ce31d38fd..12ae0e64ebaa9 100644
--- a/docs/reference/index-modules/allocation/filtering.asciidoc
+++ b/docs/reference/index-modules/allocation/filtering.asciidoc
@@ -3,8 +3,8 @@
 
 You can use shard allocation filters to control where {es} allocates shards of
 a particular index. These per-index filters are applied in conjunction with
-<<allocation-filtering,cluster-wide allocation filtering>> and
-<<allocation-awareness,allocation awareness>>.
+<<cluster-shard-allocation-filtering,cluster-wide allocation filtering>> and
+<<shard-allocation-awareness,allocation awareness>>.
 
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
diff --git a/docs/reference/modules.asciidoc b/docs/reference/modules.asciidoc
index 2ab54762e6df2..1feafcbe3d30b 100644
--- a/docs/reference/modules.asciidoc
+++ b/docs/reference/modules.asciidoc
@@ -21,13 +21,7 @@ The modules in this section are:
 
 <<modules-discovery,Discovery and cluster formation>>::
 
     How nodes discover each other, elect a master and form a cluster.
-
-<<modules-cluster,Shard allocation and cluster-level routing>>::
-
-    Settings to control where, when, and how shards are allocated to nodes.
 --
 
-include::modules/discovery.asciidoc[]
-
-include::modules/cluster.asciidoc[]
+include::modules/discovery.asciidoc[]
\ No newline at end of file
diff --git a/docs/reference/modules/cluster.asciidoc b/docs/reference/modules/cluster.asciidoc
index 810ed7c4a34b4..ba0ea765c608f 100644
--- a/docs/reference/modules/cluster.asciidoc
+++ b/docs/reference/modules/cluster.asciidoc
@@ -1,5 +1,9 @@
 [[modules-cluster]]
-== Shard allocation and cluster-level routing
+=== Cluster-level shard allocation and routing settings
+
+_Shard allocation_ is the process of allocating shards to nodes. This can
+happen during initial recovery, replica allocation, rebalancing, or
+when nodes are added or removed.
 
 One of the main roles of the master is to decide which shards to allocate to
 which nodes, and when to move shards between nodes in order to rebalance the
@@ -7,21 +11,21 @@ cluster. There are a number of settings available to control the shard
 allocation process:
 
-* <<shards-allocation>> lists the settings to control the allocation and
+* <<cluster-shard-allocation-settings>> control allocation and
   rebalancing operations.
 
-* <<disk-allocator>> explains how Elasticsearch takes available disk space
-  into account, and the related settings.
+* <<disk-based-shard-allocation>> explains how Elasticsearch takes available
+  disk space into account, and the related settings.
 
-* <<allocation-awareness>> and <<forced-awareness>> control how shards can
-  be distributed across different racks or availability zones.
+* <<shard-allocation-awareness>> and <<forced-awareness>> control how shards
+  can be distributed across different racks or availability zones.
 
-* <<allocation-filtering>> allows certain nodes or groups of nodes excluded
-  from allocation so that they can be decommissioned.
+* <<cluster-shard-allocation-filtering>> allows certain nodes or groups of
+  nodes to be excluded from allocation so that they can be decommissioned.
 
-Besides these, there are a few other <<misc-cluster,miscellaneous cluster-level settings>>.
+Besides these, there are a few other <<misc-cluster-settings,miscellaneous cluster-level settings>>.
 
-All of the settings in this section are _dynamic_ settings which can be
+All of these settings are _dynamic_ and can be
 updated on a live cluster with the <<cluster-update-settings,cluster-update-settings>> API.
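
Editor's note: the relocated `cluster.asciidoc` page above closes by saying that all of these settings are _dynamic_ and can be changed through the cluster update settings API. As a minimal sketch of what such an update looks like in practice (assuming a running cluster, and using the `cluster.routing.allocation.enable` setting that this patch documents later), the request below restricts allocation to primary shards only. It is illustrative only and is not part of the diff.

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
--------------------------------------------------

Setting the value back to `all` (the default) re-enables allocation for every shard type. A `transient` update is cleared by a full cluster restart, while a `persistent` update survives one.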
diff --git a/docs/reference/modules/cluster/allocation_awareness.asciidoc b/docs/reference/modules/cluster/allocation_awareness.asciidoc
index 2d81be8a87ecd..f961b2e59ce17 100644
--- a/docs/reference/modules/cluster/allocation_awareness.asciidoc
+++ b/docs/reference/modules/cluster/allocation_awareness.asciidoc
@@ -1,5 +1,5 @@
-[[allocation-awareness]]
-=== Shard allocation awareness
+[[shard-allocation-awareness]]
+==== Shard allocation awareness
 
 You can use custom node attributes as _awareness attributes_ to enable {es}
 to take your physical hardware configuration into account when allocating shards.
@@ -22,9 +22,8 @@ allocated in each location. If the number of nodes in each location is
 unbalanced and there are a lot of replicas, replica shards might be left
 unassigned.
 
-[float]
 [[enabling-awareness]]
-==== Enabling shard allocation awareness
+===== Enabling shard allocation awareness
 
 To enable shard allocation awareness:
 
@@ -76,9 +75,8 @@ allocates the lost shard copies to nodes in `rack_one`. To prevent multiple
 copies of a particular shard from being allocated in the same location, you
 can enable forced awareness.
 
-[float]
 [[forced-awareness]]
-==== Forced awareness
+===== Forced awareness
 
 By default, if one location fails, Elasticsearch assigns all of the missing
 replica shards to the remaining locations. While you might have sufficient
diff --git a/docs/reference/modules/cluster/allocation_filtering.asciidoc b/docs/reference/modules/cluster/allocation_filtering.asciidoc
index 51a66a0e4cf0d..a7ca63d70c695 100644
--- a/docs/reference/modules/cluster/allocation_filtering.asciidoc
+++ b/docs/reference/modules/cluster/allocation_filtering.asciidoc
@@ -1,10 +1,10 @@
-[[allocation-filtering]]
-=== Cluster-level shard allocation filtering
+[[cluster-shard-allocation-filtering]]
+==== Cluster-level shard allocation filtering
 
 You can use cluster-level shard allocation filters to control where {es}
 allocates shards from any index. These cluster wide filters are applied in
 conjunction with <<shard-allocation-filtering,per-index allocation filtering>>
-and <<allocation-awareness,allocation awareness>>.
+and <<shard-allocation-awareness,allocation awareness>>.
 
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
@@ -28,9 +28,8 @@ PUT _cluster/settings
 }
 --------------------------------------------------
 
-[float]
 [[cluster-routing-settings]]
-==== Cluster routing settings
+===== Cluster routing settings
 
 `cluster.routing.allocation.include.{attribute}`::
diff --git a/docs/reference/modules/cluster/disk_allocator.asciidoc b/docs/reference/modules/cluster/disk_allocator.asciidoc
index d4018352e7c98..03d46521eb7fe 100644
--- a/docs/reference/modules/cluster/disk_allocator.asciidoc
+++ b/docs/reference/modules/cluster/disk_allocator.asciidoc
@@ -1,5 +1,5 @@
-[[disk-allocator]]
-=== Disk-based shard allocation
+[[disk-based-shard-allocation]]
+==== Disk-based shard allocation settings
 
 Elasticsearch considers the available disk space on a node before deciding
 whether to allocate new shards to that node or to actively relocate shards away
diff --git a/docs/reference/modules/cluster/misc.asciidoc b/docs/reference/modules/cluster/misc.asciidoc
index 32803bf12bc31..1c8a2c781f641 100644
--- a/docs/reference/modules/cluster/misc.asciidoc
+++ b/docs/reference/modules/cluster/misc.asciidoc
@@ -1,8 +1,8 @@
-[[misc-cluster]]
-=== Miscellaneous cluster settings
+[[misc-cluster-settings]]
+==== Miscellaneous cluster settings
 
 [[cluster-read-only]]
-==== Metadata
+===== Metadata
 
 An entire cluster may be set to read-only with the following _dynamic_
 setting:
@@ -23,8 +23,7 @@ API can make the cluster read-write again.
 
 [[cluster-shard-limit]]
-
-==== Cluster Shard Limit
+===== Cluster shard limit
 
 There is a soft limit on the number of shards in a cluster, based on the number
 of nodes in the cluster. This is intended to prevent operations which may
@@ -66,7 +65,7 @@ This allows the creation of indices during cluster creation if dedicated master
 nodes are set up before data nodes.
 
 [[user-defined-data]]
-==== User Defined Cluster Metadata
+===== User-defined cluster metadata
 
 User-defined metadata can be stored and retrieved using the Cluster Settings API.
 This can be used to store arbitrary, infrequently-changing data about the cluster
@@ -92,7 +91,7 @@ metadata will be viewable by anyone with access to the {es} logs.
 
 [[cluster-max-tombstones]]
-==== Index Tombstones
+===== Index tombstones
 
 The cluster state maintains index tombstones to explicitly denote indices that
 have been deleted. The number of tombstones maintained in the cluster state is
@@ -109,7 +108,7 @@ than 500 deletes. We think that is rare, thus the default. Tombstones don't take
 up much space, but we also think that a number like 50,000 is probably too big.
 
 [[cluster-logger]]
-==== Logger
+===== Logger
 
 The settings which control logging can be updated dynamically with the
 `logger.` prefix. For instance, to increase the logging level of the
@@ -127,7 +126,7 @@ PUT /_cluster/settings
 
 [[persistent-tasks-allocation]]
-==== Persistent Tasks Allocations
+===== Persistent tasks allocations
 
 Plugins can create a kind of tasks called persistent tasks. Those tasks are
 usually long-live tasks and are stored in the cluster state, allowing the
diff --git a/docs/reference/modules/cluster/shards_allocation.asciidoc b/docs/reference/modules/cluster/shards_allocation.asciidoc
index 7513142cb86ae..a1c0df7aa43fd 100644
--- a/docs/reference/modules/cluster/shards_allocation.asciidoc
+++ b/docs/reference/modules/cluster/shards_allocation.asciidoc
@@ -1,15 +1,9 @@
-[[shards-allocation]]
-=== Cluster level shard allocation
-
-Shard allocation is the process of allocating shards to nodes. This can
-happen during initial recovery, replica allocation, rebalancing, or
-when nodes are added or removed.
-
-[float]
-=== Shard allocation settings
+[[cluster-shard-allocation-settings]]
+==== Cluster-level shard allocation settings
 
 The following _dynamic_ settings may be used to control shard allocation and
 recovery:
 
+[[cluster.routing.allocation.enable]]
 `cluster.routing.allocation.enable`::
 +
 --
@@ -58,8 +52,8 @@ one of the active allocation ids in the cluster state. Defaults to `false`,
 meaning that no check is performed by default. This setting only applies if
 multiple nodes are started on the same machine.
 
-[float]
-=== Shard rebalancing settings
+[[shards-rebalancing-settings]]
+==== Shard rebalancing settings
 
 The following _dynamic_ settings may be used to control the rebalancing of
 shards across the cluster:
@@ -94,11 +88,11 @@ Specify when shard rebalancing is allowed:
 
     allowed cluster wide. Defaults to `2`. Note that this setting
     only controls the number of concurrent shard relocations due
    to imbalances in the cluster. This setting does not limit shard
-    relocations due to <<disk-allocator,disk-based shard allocation>>
-    or <<allocation-filtering,shard allocation filtering>>.
+    relocations due to <<disk-based-shard-allocation,disk-based
+    shard allocation>> or <<shard-allocation-filtering,shard allocation filtering>>.
 
-[float]
-=== Shard balancing heuristics
+[[shards-rebalancing-heuristics]]
+==== Shard balancing heuristics settings
 
 The following settings are used together to determine where to place each
 shard. The cluster is balanced when no allowed rebalancing operation can bring the weight
diff --git a/docs/reference/monitoring/exporters.asciidoc b/docs/reference/monitoring/exporters.asciidoc
index e1a27641b6e75..e64997b1a8a75 100644
--- a/docs/reference/monitoring/exporters.asciidoc
+++ b/docs/reference/monitoring/exporters.asciidoc
@@ -74,8 +74,7 @@ feature is triggered, it makes all indices (including monitoring indices)
 read-only until the issue is fixed and a user manually makes the index
 writeable again. While an active monitoring index is read-only, it will
 naturally fail to write (index) new data and will continuously log errors that indicate the write
-failure. For more information, see
-{ref}/disk-allocator.html[Disk-based Shard Allocation].
+failure. For more information, see <<disk-based-shard-allocation>>.
 
 [float]
 [[es-monitoring-default-exporter]]
diff --git a/docs/reference/redirects.asciidoc b/docs/reference/redirects.asciidoc
index c3ec903ea7a2f..c2f41de540290 100644
--- a/docs/reference/redirects.asciidoc
+++ b/docs/reference/redirects.asciidoc
@@ -502,4 +502,29 @@
 [role="exclude",id="modules-gateway-dangling-indices"]
 === Dangling indices
 
-See <<dangling-indices>>.
\ No newline at end of file
+See <<dangling-indices>>.
+
+[role="exclude",id="shards-allocation"]
+=== Cluster-level shard allocation
+
+See <<cluster-shard-allocation-settings>>.
+
+[role="exclude",id="disk-allocator"]
+=== Disk-based shard allocation
+
+See <<disk-based-shard-allocation>>.
+
+[role="exclude",id="allocation-awareness"]
+=== Shard allocation awareness
+
+See <<shard-allocation-awareness>>.
+
+[role="exclude",id="allocation-filtering"]
+=== Cluster-level shard allocation filtering
+
+See <<cluster-shard-allocation-filtering>>.
+
+[role="exclude",id="misc-cluster"]
+=== Miscellaneous cluster settings
+
+See <<misc-cluster-settings>>.
\ No newline at end of file
diff --git a/docs/reference/search/request/preference.asciidoc b/docs/reference/search/request/preference.asciidoc
index 8462748de4c5a..7c64bf8d2ce19 100644
--- a/docs/reference/search/request/preference.asciidoc
+++ b/docs/reference/search/request/preference.asciidoc
@@ -3,7 +3,7 @@ Controls a `preference` of the shard copies on which to execute the search.
 By default, Elasticsearch selects from the available shard copies in an
-unspecified order, taking the <<allocation-awareness,allocation awareness>> and
+unspecified order, taking the <<shard-allocation-awareness,allocation awareness>> and
 <<search-adaptive-replica,adaptive replica selection>> configuration into
 account. However, it may sometimes be desirable to try and route certain
 searches to certain sets of shard copies.
diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc
index 3c29af1c173cd..34e9305f1279c 100644
--- a/docs/reference/setup.asciidoc
+++ b/docs/reference/setup.asciidoc
@@ -49,6 +49,8 @@ include::settings/audit-settings.asciidoc[]
 
 include::modules/indices/circuit_breaker.asciidoc[]
 
+include::modules/cluster.asciidoc[]
+
 include::settings/ccr-settings.asciidoc[]
 
 include::modules/indices/fielddata.asciidoc[]
diff --git a/docs/reference/upgrade/disable-shard-alloc.asciidoc b/docs/reference/upgrade/disable-shard-alloc.asciidoc
index 8f238a2c2c6a5..239c75aa2b88d 100644
--- a/docs/reference/upgrade/disable-shard-alloc.asciidoc
+++ b/docs/reference/upgrade/disable-shard-alloc.asciidoc
@@ -4,8 +4,8 @@ When you shut down a node, the allocation process waits for
 starting to replicate the shards on that node to other nodes in the cluster,
 which can involve a lot of I/O. Since the node is shortly going to be
 restarted, this I/O is unnecessary. You can avoid racing the clock by
-<<shards-allocation,disabling allocation>> of replicas before shutting down
-the node:
+<<cluster-shard-allocation-settings,disabling allocation>> of replicas before
+shutting down the node:
 
 [source,console]
 --------------------------------------------------

From 4f4c8941c9a4318816d8ba29b462a79a52c275be Mon Sep 17 00:00:00 2001
From: James Rodewig
Date: Mon, 11 May 2020 17:21:27 -0400
Subject: [PATCH 2/3] review feedback

---
 docs/reference/modules/cluster/misc.asciidoc              | 2 +-
 docs/reference/modules/cluster/shards_allocation.asciidoc | 2 +-
 docs/reference/upgrade/disable-shard-alloc.asciidoc       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/reference/modules/cluster/misc.asciidoc b/docs/reference/modules/cluster/misc.asciidoc
index 1c8a2c781f641..f0302219d6bc7 100644
--- a/docs/reference/modules/cluster/misc.asciidoc
+++ b/docs/reference/modules/cluster/misc.asciidoc
@@ -126,7 +126,7 @@ PUT /_cluster/settings
 
 [[persistent-tasks-allocation]]
-===== Persistent tasks allocations
+===== Persistent tasks allocation
 
 Plugins can create a kind of tasks called persistent tasks. Those tasks are
 usually long-live tasks and are stored in the cluster state, allowing the
diff --git a/docs/reference/modules/cluster/shards_allocation.asciidoc b/docs/reference/modules/cluster/shards_allocation.asciidoc
index a1c0df7aa43fd..1ea20dbd2e114 100644
--- a/docs/reference/modules/cluster/shards_allocation.asciidoc
+++ b/docs/reference/modules/cluster/shards_allocation.asciidoc
@@ -3,7 +3,7 @@
 The following _dynamic_ settings may be used to control shard allocation and
 recovery:
 
-[[cluster.routing.allocation.enable]]
+[[cluster-routing-allocation-enable]]
 `cluster.routing.allocation.enable`::
 +
 --
diff --git a/docs/reference/upgrade/disable-shard-alloc.asciidoc b/docs/reference/upgrade/disable-shard-alloc.asciidoc
index 239c75aa2b88d..56461fa999720 100644
--- a/docs/reference/upgrade/disable-shard-alloc.asciidoc
+++ b/docs/reference/upgrade/disable-shard-alloc.asciidoc
@@ -4,7 +4,7 @@ When you shut down a node, the allocation process waits for
 starting to replicate the shards on that node to other nodes in the cluster,
 which can involve a lot of I/O. Since the node is shortly going to be
 restarted, this I/O is unnecessary. You can avoid racing the clock by
-<<cluster-shard-allocation-settings,disabling allocation>> of replicas before
+<<cluster-routing-allocation-enable,disabling allocation>> of replicas before
 shutting down the node:
 
 [source,console]

From 9eb7276e38abc6cb0dcb8cb2a10897deaca06ac5 Mon Sep 17 00:00:00 2001
From: James Rodewig
Date: Mon, 11 May 2020 17:22:00 -0400
Subject: [PATCH 3/3] review feedback

---
 docs/reference/modules/cluster/misc.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/modules/cluster/misc.asciidoc b/docs/reference/modules/cluster/misc.asciidoc
index f0302219d6bc7..6986254fa1c6b 100644
--- a/docs/reference/modules/cluster/misc.asciidoc
+++ b/docs/reference/modules/cluster/misc.asciidoc
@@ -129,7 +129,7 @@ PUT /_cluster/settings
 
 ===== Persistent tasks allocation
 
 Plugins can create a kind of tasks called persistent tasks. Those tasks are
-usually long-live tasks and are stored in the cluster state, allowing the
+usually long-lived tasks and are stored in the cluster state, allowing the
 tasks to be revived after a full cluster restart.
 
 Every time a persistent task is created, the master node takes care of
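
Editor's note: patches 2 and 3 above touch the persistent tasks allocation section. As a hedged sketch of the control that section documents in the Elasticsearch reference, assuming the `cluster.persistent_tasks.allocation.enable` setting (a real cluster setting, but named here from the docs rather than from this diff), the request below stops the master from assigning persistent tasks to nodes; setting it back to `all` (the default) resumes assignment. Like the earlier example, it is illustrative and not part of the patch.

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
--------------------------------------------------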