[DOCS] Relocate shard allocation module content #56535

Merged · 3 commits · May 12, 2020
8 changes: 5 additions & 3 deletions docs/plugins/discovery-ec2.asciidoc
@@ -236,7 +236,8 @@ The `discovery-ec2` plugin can automatically set the `aws_availability_zone`
node attribute to the availability zone of each node. This node attribute
allows you to ensure that each shard has copies allocated redundantly across
multiple availability zones by using the
-{ref}/allocation-awareness.html[Allocation Awareness]
+{ref}/modules-cluster.html#shard-allocation-awareness[Allocation Awareness]
+feature.

In order to enable the automatic definition of the `aws_availability_zone`
attribute, set `cloud.node.auto_attributes` to `true`. For example:
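
(The example itself is collapsed in this hunk. As a sketch of the settings involved — assuming standard `elasticsearch.yml` usage, not lines taken from this diff:)

[source,yaml]
--------------------------------------------------
# Tag each node with its EC2 availability zone, then use that
# attribute for shard allocation awareness.
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
--------------------------------------------------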
@@ -327,8 +328,9 @@ labelled as `Moderate` or `Low`.

* It is a good idea to distribute your nodes across multiple
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability
-zones] and use {ref}/allocation-awareness.html[shard allocation awareness] to
-ensure that each shard has copies in more than one availability zone.
+zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard
+allocation awareness] to ensure that each shard has copies in more than one
+availability zone.

* Do not span a cluster across regions. {es} expects that node-to-node
connections within a cluster are reasonably reliable and offer high bandwidth
6 changes: 3 additions & 3 deletions docs/reference/index-modules.asciidoc
@@ -105,7 +105,7 @@ specific index module:
for the upper bound (e.g. `0-all`). Defaults to `false` (i.e. disabled).
Note that the auto-expanded number of replicas only takes
<<shard-allocation-filtering,allocation filtering>> rules into account, but ignores
-any other allocation rules such as <<allocation-awareness,shard allocation awareness>>
+any other allocation rules such as <<shard-allocation-awareness,shard allocation awareness>>
and <<allocation-total-shards,total shards per node>>, and this can lead to the
cluster health becoming `YELLOW` if the applicable rules prevent all the replicas
from being allocated.
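
(For context: the auto-expand behavior discussed in this hunk is a per-index setting; a minimal sketch, with `my-index` as a placeholder name:)

[source,console]
--------------------------------------------------
PUT /my-index/_settings
{
  "index.auto_expand_replicas": "0-all"
}
--------------------------------------------------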
@@ -178,8 +178,8 @@ specific index module:
`index.blocks.read_only_allow_delete`::

Similar to `index.blocks.read_only` but also allows deleting the index to
-free up resources. The <<disk-allocator,disk-based shard allocator>> may
-add and remove this block automatically.
+free up resources. The <<disk-based-shard-allocation,disk-based shard
+allocator>> may add and remove this block automatically.

`index.blocks.read`::

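
(For context: the block named in the hunk above can also be cleared manually once resources are freed; a sketch, with `my-index` as a placeholder:)

[source,console]
--------------------------------------------------
PUT /my-index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
--------------------------------------------------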
4 changes: 2 additions & 2 deletions docs/reference/index-modules/allocation/filtering.asciidoc
@@ -3,8 +3,8 @@

You can use shard allocation filters to control where {es} allocates shards of
a particular index. These per-index filters are applied in conjunction with
-<<allocation-filtering, cluster-wide allocation filtering>> and
-<<allocation-awareness, allocation awareness>>.
+<<cluster-shard-allocation-filtering, cluster-wide allocation filtering>> and
+<<shard-allocation-awareness, allocation awareness>>.

Shard allocation filters can be based on custom node attributes or the built-in
`_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
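
(For context: an index-level allocation filter of the kind this file documents looks like the following sketch — the `size` attribute and `big` value are illustrative:)

[source,console]
--------------------------------------------------
PUT /test/_settings
{
  "index.routing.allocation.include.size": "big"
}
--------------------------------------------------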
8 changes: 1 addition & 7 deletions docs/reference/modules.asciidoc
@@ -21,13 +21,7 @@ The modules in this section are:
<<modules-discovery,Discovery and cluster formation>>::

How nodes discover each other, elect a master and form a cluster.
-
-<<modules-cluster,Shard allocation and cluster-level routing>>::
-
-Settings to control where, when, and how shards are allocated to nodes.
--


-include::modules/discovery.asciidoc[]
-
-include::modules/cluster.asciidoc[]
+include::modules/discovery.asciidoc[]
24 changes: 14 additions & 10 deletions docs/reference/modules/cluster.asciidoc
@@ -1,27 +1,31 @@
[[modules-cluster]]
-== Shard allocation and cluster-level routing
+=== Cluster-level shard allocation and routing settings

+_Shard allocation_ is the process of allocating shards to nodes. This can
+happen during initial recovery, replica allocation, rebalancing, or
+when nodes are added or removed.
+
One of the main roles of the master is to decide which shards to allocate to
which nodes, and when to move shards between nodes in order to rebalance the
cluster.

There are a number of settings available to control the shard allocation process:

-* <<shards-allocation>> lists the settings to control the allocation and
+* <<cluster-shard-allocation-settings>> control allocation and
rebalancing operations.

-* <<disk-allocator>> explains how Elasticsearch takes available disk space
-into account, and the related settings.
+* <<disk-based-shard-allocation>> explains how Elasticsearch takes available
+disk space into account, and the related settings.

-* <<allocation-awareness>> and <<forced-awareness>> control how shards can
-be distributed across different racks or availability zones.
+* <<shard-allocation-awareness>> and <<forced-awareness>> control how shards
+can be distributed across different racks or availability zones.

-* <<allocation-filtering>> allows certain nodes or groups of nodes excluded
-from allocation so that they can be decommissioned.
+* <<cluster-shard-allocation-filtering>> allows certain nodes or groups of
+nodes excluded from allocation so that they can be decommissioned.

-Besides these, there are a few other <<misc-cluster,miscellaneous cluster-level settings>>.
+Besides these, there are a few other <<misc-cluster-settings,miscellaneous cluster-level settings>>.

-All of the settings in this section are _dynamic_ settings which can be
+All of these settings are _dynamic_ and can be
updated on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API.

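
(For context: the dynamic settings this page lists are all updated through the cluster settings API; a minimal sketch — the setting shown is one of those documented below:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
--------------------------------------------------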
10 changes: 4 additions & 6 deletions docs/reference/modules/cluster/allocation_awareness.asciidoc
@@ -1,5 +1,5 @@
-[[allocation-awareness]]
-=== Shard allocation awareness
+[[shard-allocation-awareness]]
+==== Shard allocation awareness

You can use custom node attributes as _awareness attributes_ to enable {es}
to take your physical hardware configuration into account when allocating shards.
@@ -22,9 +22,8 @@ allocated in each location. If the number of nodes in each location is
unbalanced and there are a lot of replicas, replica shards might be left
unassigned.

-[float]
[[enabling-awareness]]
-==== Enabling shard allocation awareness
+===== Enabling shard allocation awareness

To enable shard allocation awareness:

@@ -76,9 +75,8 @@ allocates the lost shard copies to nodes in `rack_one`. To prevent multiple
copies of a particular shard from being allocated in the same location, you can
enable forced awareness.

-[float]
[[forced-awareness]]
-==== Forced awareness
+===== Forced awareness

By default, if one location fails, Elasticsearch assigns all of the missing
replica shards to the remaining locations. While you might have sufficient
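
(For context: once nodes carry a custom attribute such as `node.attr.rack_id`, awareness and forced awareness are driven by dynamic settings like this sketch — the `rack_id` values are illustrative:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id",
    "cluster.routing.allocation.awareness.force.rack_id.values": "rack_one,rack_two"
  }
}
--------------------------------------------------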
9 changes: 4 additions & 5 deletions docs/reference/modules/cluster/allocation_filtering.asciidoc
@@ -1,10 +1,10 @@
-[[allocation-filtering]]
-=== Cluster-level shard allocation filtering
+[[cluster-shard-allocation-filtering]]
+==== Cluster-level shard allocation filtering

You can use cluster-level shard allocation filters to control where {es}
allocates shards from any index. These cluster wide filters are applied in
conjunction with <<shard-allocation-filtering, per-index allocation filtering>>
-and <<allocation-awareness, allocation awareness>>.
+and <<shard-allocation-awareness, allocation awareness>>.

Shard allocation filters can be based on custom node attributes or the built-in
`_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
@@ -28,9 +28,8 @@ PUT _cluster/settings
}
--------------------------------------------------

-[float]
[[cluster-routing-settings]]
-==== Cluster routing settings
+===== Cluster routing settings

`cluster.routing.allocation.include.{attribute}`::

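
(For context: the most common use of these settings is decommissioning a node, as in this sketch — the IP address is a placeholder:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1"
  }
}
--------------------------------------------------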
4 changes: 2 additions & 2 deletions docs/reference/modules/cluster/disk_allocator.asciidoc
@@ -1,5 +1,5 @@
-[[disk-allocator]]
-=== Disk-based shard allocation
+[[disk-based-shard-allocation]]
+==== Disk-based shard allocation settings

Elasticsearch considers the available disk space on a node before deciding
whether to allocate new shards to that node or to actively relocate shards away
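
(For context: the settings this file covers are the disk watermarks; a sketch using their documented default values:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
--------------------------------------------------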
19 changes: 9 additions & 10 deletions docs/reference/modules/cluster/misc.asciidoc
@@ -1,8 +1,8 @@
-[[misc-cluster]]
-=== Miscellaneous cluster settings
+[[misc-cluster-settings]]
+==== Miscellaneous cluster settings

[[cluster-read-only]]
-==== Metadata
+===== Metadata

An entire cluster may be set to read-only with the following _dynamic_ setting:
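
(The setting itself is collapsed in this hunk; it is the `cluster.blocks.read_only` setting, e.g. in this sketch:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.blocks.read_only": true
  }
}
--------------------------------------------------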

@@ -23,8 +23,7 @@ API can make the cluster read-write again.


[[cluster-shard-limit]]
-
-==== Cluster Shard Limit
+===== Cluster shard limit

There is a soft limit on the number of shards in a cluster, based on the number
of nodes in the cluster. This is intended to prevent operations which may
@@ -66,7 +65,7 @@ This allows the creation of indices during cluster creation if dedicated master
nodes are set up before data nodes.
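
(For context: this limit is the `cluster.max_shards_per_node` setting — an assumption here, since the setting name itself is collapsed in this diff; a sketch of raising it:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
--------------------------------------------------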

[[user-defined-data]]
-==== User Defined Cluster Metadata
+===== User-defined cluster metadata

User-defined metadata can be stored and retrieved using the Cluster Settings API.
This can be used to store arbitrary, infrequently-changing data about the cluster
@@ -92,7 +91,7 @@ metadata will be viewable by anyone with access to the
{es} logs.

[[cluster-max-tombstones]]
-==== Index Tombstones
+===== Index tombstones

The cluster state maintains index tombstones to explicitly denote indices that
have been deleted. The number of tombstones maintained in the cluster state is
@@ -109,7 +108,7 @@ than 500 deletes. We think that is rare, thus the default. Tombstones don't take
up much space, but we also think that a number like 50,000 is probably too big.
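
(For context: the tombstone count is governed by the `cluster.indices.tombstones.size` setting; a sketch of raising it in `elasticsearch.yml` — assumed usage, not lines from this diff:)

[source,yaml]
--------------------------------------------------
# Keep up to 1000 index tombstones in the cluster state.
cluster.indices.tombstones.size: 1000
--------------------------------------------------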

[[cluster-logger]]
-==== Logger
+===== Logger

The settings which control logging can be updated dynamically with the
`logger.` prefix. For instance, to increase the logging level of the
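
(The example body is collapsed in the hunk below; it presumably resembles this sketch, which raises the `indices.recovery` logger to `DEBUG`:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
--------------------------------------------------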
@@ -127,10 +126,10 @@ PUT /_cluster/settings


[[persistent-tasks-allocation]]
-==== Persistent Tasks Allocations
+===== Persistent tasks allocation

Plugins can create a kind of tasks called persistent tasks. Those tasks are
-usually long-live tasks and are stored in the cluster state, allowing the
+usually long-lived tasks and are stored in the cluster state, allowing the
tasks to be revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
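
(For context: persistent-task assignment can be toggled with the `cluster.persistent_tasks.allocation.enable` setting; a sketch that pauses assignment:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
--------------------------------------------------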
24 changes: 9 additions & 15 deletions docs/reference/modules/cluster/shards_allocation.asciidoc
@@ -1,15 +1,9 @@
-[[shards-allocation]]
-=== Cluster level shard allocation
-
-Shard allocation is the process of allocating shards to nodes. This can
-happen during initial recovery, replica allocation, rebalancing, or
-when nodes are added or removed.
-
-[float]
-=== Shard allocation settings
+[[cluster-shard-allocation-settings]]
+==== Cluster-level shard allocation settings

The following _dynamic_ settings may be used to control shard allocation and recovery:

+[[cluster-routing-allocation-enable]]
`cluster.routing.allocation.enable`::
+
--
@@ -58,8 +52,8 @@ one of the active allocation ids in the cluster state.
Defaults to `false`, meaning that no check is performed by default. This
setting only applies if multiple nodes are started on the same machine.

-[float]
-=== Shard rebalancing settings
+[[shards-rebalancing-settings]]
+==== Shard rebalancing settings

The following _dynamic_ settings may be used to control the rebalancing of
shards across the cluster:
@@ -94,11 +88,11 @@ Specify when shard rebalancing is allowed:
allowed cluster wide. Defaults to `2`. Note that this setting
only controls the number of concurrent shard relocations due
to imbalances in the cluster. This setting does not limit shard
-relocations due to <<allocation-filtering,allocation filtering>>
-or <<forced-awareness,forced awareness>>.
+relocations due to <<cluster-shard-allocation-filtering,allocation
+filtering>> or <<forced-awareness,forced awareness>>.

-[float]
-=== Shard balancing heuristics
+[[shards-rebalancing-heuristics]]
+==== Shard balancing heuristics settings

The following settings are used together to determine where to place each
shard. The cluster is balanced when no allowed rebalancing operation can bring the weight
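
(For context: the balancing heuristics this hunk renames are controlled by dynamic weight factors; a sketch using their documented default values:)

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.balance.shard": 0.45,
    "cluster.routing.allocation.balance.index": 0.55,
    "cluster.routing.allocation.balance.threshold": 1.0
  }
}
--------------------------------------------------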
3 changes: 1 addition & 2 deletions docs/reference/monitoring/exporters.asciidoc
@@ -74,8 +74,7 @@ feature is triggered, it makes all indices (including monitoring indices)
read-only until the issue is fixed and a user manually makes the index writeable
again. While an active monitoring index is read-only, it will naturally fail to
write (index) new data and will continuously log errors that indicate the write
-failure. For more information, see
-{ref}/disk-allocator.html[Disk-based Shard Allocation].
+failure. For more information, see <<disk-based-shard-allocation>>.

[float]
[[es-monitoring-default-exporter]]
27 changes: 26 additions & 1 deletion docs/reference/redirects.asciidoc
@@ -502,4 +502,29 @@ See <<search-search>>.
[role="exclude",id="modules-gateway-dangling-indices"]
=== Dangling indices

-See <<modules-gateway-dangling-indices>>.
+See <<modules-gateway-dangling-indices>>.
+
+[role="exclude",id="shards-allocation"]
+=== Cluster-level shard allocation
+
+See <<cluster-shard-allocation-settings>>.
+
+[role="exclude",id="disk-allocator"]
+=== Disk-based shard allocation
+
+See <<disk-based-shard-allocation>>.
+
+[role="exclude",id="allocation-awareness"]
+=== Shard allocation awareness
+
+See <<shard-allocation-awareness>>.
+
+[role="exclude",id="allocation-filtering"]
+=== Cluster-level shard allocation filtering
+
+See <<cluster-shard-allocation-filtering>>.
+
+[role="exclude",id="misc-cluster"]
+=== Miscellaneous cluster settings
+
+See <<misc-cluster-settings>>.
2 changes: 1 addition & 1 deletion docs/reference/search/request/preference.asciidoc
@@ -3,7 +3,7 @@

Controls a `preference` of the shard copies on which to execute the search. By
default, Elasticsearch selects from the available shard copies in an
-unspecified order, taking the <<allocation-awareness,allocation awareness>> and
+unspecified order, taking the <<shard-allocation-awareness,allocation awareness>> and
<<search-adaptive-replica,adaptive replica selection>> configuration into
account. However, it may sometimes be desirable to try and route certain
searches to certain sets of shard copies.
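
(For context: a `preference` is passed as a query parameter on the search request, as in this sketch — `my-index` is a placeholder and `_local` is one of the built-in values:)

[source,console]
--------------------------------------------------
GET /my-index/_search?preference=_local
--------------------------------------------------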
2 changes: 2 additions & 0 deletions docs/reference/setup.asciidoc
@@ -49,6 +49,8 @@ include::settings/audit-settings.asciidoc[]

include::modules/indices/circuit_breaker.asciidoc[]

+include::modules/cluster.asciidoc[]
+
include::settings/ccr-settings.asciidoc[]

include::modules/indices/fielddata.asciidoc[]
4 changes: 2 additions & 2 deletions docs/reference/upgrade/disable-shard-alloc.asciidoc
@@ -4,8 +4,8 @@ When you shut down a node, the allocation process waits for
starting to replicate the shards on that node to other nodes in the cluster,
which can involve a lot of I/O. Since the node is shortly going to be
restarted, this I/O is unnecessary. You can avoid racing the clock by
-<<shards-allocation, disabling allocation>> of replicas before shutting down
-the node:
+<<cluster-routing-allocation-enable,disabling allocation>> of replicas before
+shutting down the node:

[source,console]
--------------------------------------------------
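# NOTE: the body of this request is collapsed in the PR diff; this
# reconstruction assumes the standard rolling-upgrade snippet that
# disables replica allocation by allowing only primaries.
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
--------------------------------------------------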