diff --git a/docs/reference/cluster.asciidoc b/docs/reference/cluster.asciidoc index f92e364bae102..cfa2d5a6488d7 100644 --- a/docs/reference/cluster.asciidoc +++ b/docs/reference/cluster.asciidoc @@ -104,3 +104,5 @@ include::cluster/tasks.asciidoc[] include::cluster/nodes-hot-threads.asciidoc[] include::cluster/allocation-explain.asciidoc[] + +include::cluster/voting-exclusions.asciidoc[] diff --git a/docs/reference/cluster/voting-exclusions.asciidoc b/docs/reference/cluster/voting-exclusions.asciidoc new file mode 100644 index 0000000000000..fcef8113912c4 --- /dev/null +++ b/docs/reference/cluster/voting-exclusions.asciidoc @@ -0,0 +1,76 @@ +[[voting-config-exclusions]] +== Voting configuration exclusions API +++++ +Voting Configuration Exclusions +++++ + +Adds or removes master-eligible nodes from the +<>. + +[float] +=== Request + +`POST _cluster/voting_config_exclusions/` + + +`DELETE _cluster/voting_config_exclusions` + +[float] +=== Path parameters + +`node_name`:: + A <> that identifies {es} nodes. + +[float] +=== Description + +By default, if there are more than three master-eligible nodes in the cluster +and you remove fewer than half of the master-eligible nodes in the cluster at +once, the <> automatically +shrinks. + +If you want to shrink the voting configuration to contain fewer than three nodes +or to remove half or more of the master-eligible nodes in the cluster at once, +you must use this API to remove departed nodes from the voting configuration +manually. It adds an entry for that node in the voting configuration exclusions +list. The cluster then tries to reconfigure the voting configuration to remove +that node and to prevent it from returning. + +If the API fails, you can safely retry it. Only a successful response +guarantees that the node has been removed from the voting configuration and will +not be reinstated. 
+ +NOTE: Voting exclusions are required only when you remove at least half of the +master-eligible nodes from a cluster in a short time period. They are not +required when removing master-ineligible nodes or fewer than half of the +master-eligible nodes. + +The <> limits the size of the voting configuration exclusion list. The +default value is `10`. Since voting configuration exclusions are persistent and +limited in number, you must clear the voting config exclusions list once the +exclusions are no longer required. + +There is also a +<>, +which is set to true by default. If it is set to false, you must use this API to +maintain the voting configuration. + +For more information, see <>. + +[float] +=== Examples + +Add `nodeId1` to the voting configuration exclusions list: +[source,js] +-------------------------------------------------- +POST /_cluster/voting_config_exclusions/nodeId1 +-------------------------------------------------- +// CONSOLE +// TEST[catch:bad_request] + +Remove all exclusions from the list: +[source,js] +-------------------------------------------------- +DELETE /_cluster/voting_config_exclusions +-------------------------------------------------- +// CONSOLE \ No newline at end of file diff --git a/docs/reference/ml/aggregations.asciidoc b/docs/reference/ml/aggregations.asciidoc index 47db536db014b..3f09022d17eaa 100644 --- a/docs/reference/ml/aggregations.asciidoc +++ b/docs/reference/ml/aggregations.asciidoc @@ -8,7 +8,7 @@ and to configure your jobs to analyze aggregated data. One of the benefits of aggregating data this way is that {es} automatically distributes these calculations across your cluster. You can then feed this -aggregated data into {xpackml} instead of raw results, which +aggregated data into the {ml-features} instead of raw results, which reduces the volume of data that must be considered while detecting anomalies. There are some limitations to using aggregations in {dfeeds}, however. 
diff --git a/docs/reference/ml/apis/resultsresource.asciidoc b/docs/reference/ml/apis/resultsresource.asciidoc index 8962129c73966..f2533bbd07345 100644 --- a/docs/reference/ml/apis/resultsresource.asciidoc +++ b/docs/reference/ml/apis/resultsresource.asciidoc @@ -269,7 +269,7 @@ probability of this occurrence. There can be many anomaly records depending on the characteristics and size of the input data. In practice, there are often too many to be able to manually -process them. The {xpackml} features therefore perform a sophisticated +process them. The {ml-features} therefore perform a sophisticated aggregation of the anomaly records into buckets. The number of record results depends on the number of anomalies found in each diff --git a/docs/reference/ml/configuring.asciidoc b/docs/reference/ml/configuring.asciidoc index a7773b5681f89..9304a93d360c7 100644 --- a/docs/reference/ml/configuring.asciidoc +++ b/docs/reference/ml/configuring.asciidoc @@ -2,12 +2,12 @@ [[ml-configuring]] == Configuring machine learning -If you want to use {xpackml} features, there must be at least one {ml} node in +If you want to use {ml-features}, there must be at least one {ml} node in your cluster and all master-eligible nodes must have {ml} enabled. By default, all nodes are {ml} nodes. For more information about these settings, see {ref}/modules-node.html#modules-node-xpack[{ml} nodes]. -To use the {xpackml} features to analyze your data, you must create a job and +To use the {ml-features} to analyze your data, you must create a job and send your data to that job. 
* If your data is stored in {es}: diff --git a/docs/reference/ml/functions.asciidoc b/docs/reference/ml/functions.asciidoc index e32470c6827b6..48e56bb4627ee 100644 --- a/docs/reference/ml/functions.asciidoc +++ b/docs/reference/ml/functions.asciidoc @@ -2,7 +2,7 @@ [[ml-functions]] == Function reference -The {xpackml} features include analysis functions that provide a wide variety of +The {ml-features} include analysis functions that provide a wide variety of flexible ways to analyze data for anomalies. When you create jobs, you specify one or more detectors, which define the type of diff --git a/docs/reference/ml/functions/count.asciidoc b/docs/reference/ml/functions/count.asciidoc index 3365a0923a8b0..404ed7f2d94a3 100644 --- a/docs/reference/ml/functions/count.asciidoc +++ b/docs/reference/ml/functions/count.asciidoc @@ -14,7 +14,7 @@ in one field is unusual, as opposed to the total count. Use high-sided functions if you want to monitor unusually high event rates. Use low-sided functions if you want to look at drops in event rate. -The {xpackml} features include the following count functions: +The {ml-features} include the following count functions: * xref:ml-count[`count`, `high_count`, `low_count`] * xref:ml-nonzero-count[`non_zero_count`, `high_non_zero_count`, `low_non_zero_count`] diff --git a/docs/reference/ml/functions/geo.asciidoc b/docs/reference/ml/functions/geo.asciidoc index 3698ab7c0590e..130e17d85dcfe 100644 --- a/docs/reference/ml/functions/geo.asciidoc +++ b/docs/reference/ml/functions/geo.asciidoc @@ -5,7 +5,7 @@ The geographic functions detect anomalies in the geographic location of the input data. -The {xpackml} features include the following geographic function: `lat_long`. +The {ml-features} include the following geographic function: `lat_long`. NOTE: You cannot create forecasts for jobs that contain geographic functions. 
You also cannot add rules with conditions to detectors that use geographic @@ -72,7 +72,7 @@ For example, JSON data might contain the following transaction coordinates: In {es}, location data is likely to be stored in `geo_point` fields. For more information, see {ref}/geo-point.html[Geo-point datatype]. This data type is not -supported natively in {xpackml} features. You can, however, use Painless scripts +supported natively in {ml-features}. You can, however, use Painless scripts in `script_fields` in your {dfeed} to transform the data into an appropriate format. For example, the following Painless script transforms `"coords": {"lat" : 41.44, "lon":90.5}` into `"lat-lon": "41.44,90.5"`: diff --git a/docs/reference/ml/functions/info.asciidoc b/docs/reference/ml/functions/info.asciidoc index 2c3117e0e5644..c75440f238ff5 100644 --- a/docs/reference/ml/functions/info.asciidoc +++ b/docs/reference/ml/functions/info.asciidoc @@ -6,7 +6,7 @@ that is contained in strings within a bucket. These functions can be used as a more sophisticated method to identify incidences of data exfiltration or C2C activity, when analyzing the size in bytes of the data might not be sufficient. -The {xpackml} features include the following information content functions: +The {ml-features} include the following information content functions: * `info_content`, `high_info_content`, `low_info_content` diff --git a/docs/reference/ml/functions/metric.asciidoc b/docs/reference/ml/functions/metric.asciidoc index 9d6f3515a029c..7868d4b780a40 100644 --- a/docs/reference/ml/functions/metric.asciidoc +++ b/docs/reference/ml/functions/metric.asciidoc @@ -6,7 +6,7 @@ The metric functions include functions such as mean, min and max. These values are calculated for each bucket. Field values that cannot be converted to double precision floating point numbers are ignored. 
-The {xpackml} features include the following metric functions: +The {ml-features} include the following metric functions: * <> * <> diff --git a/docs/reference/ml/functions/rare.asciidoc b/docs/reference/ml/functions/rare.asciidoc index 1531285a7add2..87c212fbd1275 100644 --- a/docs/reference/ml/functions/rare.asciidoc +++ b/docs/reference/ml/functions/rare.asciidoc @@ -27,7 +27,7 @@ with shorter bucket spans typically being measured in minutes, not hours. for typical data. ==== -The {xpackml} features include the following rare functions: +The {ml-features} include the following rare functions: * <> * <> @@ -85,7 +85,7 @@ different rare status codes compared to the population is regarded as highly anomalous. This analysis is based on the number of different status code values, not the count of occurrences. -NOTE: To define a status code as rare the {xpackml} features look at the number +NOTE: To define a status code as rare the {ml-features} look at the number of distinct status codes that occur, not the number of times the status code occurs. If a single client IP experiences a single unique status code, this is rare, even if it occurs for that client IP in every bucket. diff --git a/docs/reference/ml/functions/sum.asciidoc b/docs/reference/ml/functions/sum.asciidoc index 7a95ad63fccee..9313a60a01a6c 100644 --- a/docs/reference/ml/functions/sum.asciidoc +++ b/docs/reference/ml/functions/sum.asciidoc @@ -11,7 +11,7 @@ If want to look at drops in totals, use low-sided functions. If your data is sparse, use `non_null_sum` functions. Buckets without values are ignored; buckets with a zero value are analyzed. 
-The {xpackml} features include the following sum functions: +The {ml-features} include the following sum functions: * xref:ml-sum[`sum`, `high_sum`, `low_sum`] * xref:ml-nonnull-sum[`non_null_sum`, `high_non_null_sum`, `low_non_null_sum`] diff --git a/docs/reference/ml/functions/time.asciidoc b/docs/reference/ml/functions/time.asciidoc index ac8199307f130..026d29d85d3d7 100644 --- a/docs/reference/ml/functions/time.asciidoc +++ b/docs/reference/ml/functions/time.asciidoc @@ -6,7 +6,7 @@ The time functions detect events that happen at unusual times, either of the day or of the week. These functions can be used to find unusual patterns of behavior, typically associated with suspicious user activity. -The {xpackml} features include the following time functions: +The {ml-features} include the following time functions: * <> * <> diff --git a/docs/reference/ml/transforms.asciidoc b/docs/reference/ml/transforms.asciidoc index 66c55d72b14f2..6fc67fa7c4e4b 100644 --- a/docs/reference/ml/transforms.asciidoc +++ b/docs/reference/ml/transforms.asciidoc @@ -569,7 +569,7 @@ GET _ml/datafeeds/datafeed-test4/_preview // TEST[skip:needs-licence] In {es}, location data can be stored in `geo_point` fields but this data type is -not supported natively in {xpackml} analytics. This example of a script field +not supported natively in {ml} analytics. This example of a script field transforms the data into an appropriate format. For more information, see <>. diff --git a/docs/reference/modules/discovery.asciidoc b/docs/reference/modules/discovery.asciidoc index 78e8e82f7e84f..d3e0d4fe84751 100644 --- a/docs/reference/modules/discovery.asciidoc +++ b/docs/reference/modules/discovery.asciidoc @@ -13,6 +13,16 @@ module. This module is divided into the following sections: unknown, such as when a node has just started up or when the previous master has failed. +<>:: + + This section describes how {es} uses a quorum-based voting mechanism to + make decisions even if some nodes are unavailable. 
+ +<>:: + + This section describes the concept of voting configurations, which {es} + automatically updates as nodes leave and join the cluster. + <>:: Bootstrapping a cluster is required when an Elasticsearch cluster starts up @@ -40,11 +50,10 @@ module. This module is divided into the following sections: Cluster state publishing is the process by which the elected master node updates the cluster state on all the other nodes in the cluster. -<>:: +<>:: + + {es} performs health checks to detect and remove faulty nodes. - This section describes the detailed design behind the master election and - auto-reconfiguration logic. - <>:: There are settings that enable users to influence the discovery, cluster @@ -52,14 +61,16 @@ module. This module is divided into the following sections: include::discovery/discovery.asciidoc[] +include::discovery/quorums.asciidoc[] + +include::discovery/voting.asciidoc[] + include::discovery/bootstrapping.asciidoc[] include::discovery/adding-removing-nodes.asciidoc[] include::discovery/publishing.asciidoc[] -include::discovery/quorums.asciidoc[] - include::discovery/fault-detection.asciidoc[] -include::discovery/discovery-settings.asciidoc[] \ No newline at end of file +include::discovery/discovery-settings.asciidoc[] diff --git a/docs/reference/modules/discovery/adding-removing-nodes.asciidoc b/docs/reference/modules/discovery/adding-removing-nodes.asciidoc index a52cf1e2e7467..3b416ea51d223 100644 --- a/docs/reference/modules/discovery/adding-removing-nodes.asciidoc +++ b/docs/reference/modules/discovery/adding-removing-nodes.asciidoc @@ -12,6 +12,7 @@ cluster, and to scale the cluster up and down by adding and removing master-ineligible nodes only. However there are situations in which it may be desirable to add or remove some master-eligible nodes to or from a cluster. 
+[[modules-discovery-adding-nodes]] ==== Adding master-eligible nodes If you wish to add some nodes to your cluster, simply configure the new nodes @@ -24,6 +25,7 @@ cluster. You can use the `cluster.join.timeout` setting to configure how long a node waits after sending a request to join a cluster. Its default value is `30s`. See <>. +[[modules-discovery-removing-nodes]] ==== Removing master-eligible nodes When removing master-eligible nodes, it is important not to remove too many all @@ -50,7 +52,7 @@ will never automatically move a node on the voting exclusions list back into the voting configuration. Once an excluded node has been successfully auto-reconfigured out of the voting configuration, it is safe to shut it down without affecting the cluster's master-level availability. A node can be added -to the voting configuration exclusion list using the following API: +to the voting configuration exclusion list using the <> API. For example: [source,js] -------------------------------------------------- diff --git a/docs/reference/modules/discovery/discovery-settings.asciidoc b/docs/reference/modules/discovery/discovery-settings.asciidoc index 381974b5498d8..494c5ac225b87 100644 --- a/docs/reference/modules/discovery/discovery-settings.asciidoc +++ b/docs/reference/modules/discovery/discovery-settings.asciidoc @@ -3,6 +3,15 @@ Discovery and cluster formation are affected by the following settings: +`cluster.auto_shrink_voting_configuration`:: + + Controls whether the <> + sheds departed nodes automatically, as long as it still contains at least 3 + nodes. The default value is `true`. If set to `false`, the voting + configuration never shrinks automatically and you must remove departed + nodes manually with the <>. + [[master-election-settings]]`cluster.election.back_off_time`:: Sets the amount to increase the upper bound on the wait before an election @@ -152,9 +161,11 @@ APIs are not be blocked and can run on any available node. 
Provides a list of master-eligible nodes in the cluster. The list contains either an array of hosts or a comma-delimited string. Each value has the - format `host:port` or `host`, where `port` defaults to the setting `transport.profiles.default.port`. Note that IPv6 hosts must be bracketed. + format `host:port` or `host`, where `port` defaults to the setting + `transport.profiles.default.port`. Note that IPv6 hosts must be bracketed. The default value is `127.0.0.1, [::1]`. See <>. `discovery.zen.ping.unicast.hosts.resolve_timeout`:: - Sets the amount of time to wait for DNS lookups on each round of discovery. This is specified as a <> and defaults to `5s`. \ No newline at end of file + Sets the amount of time to wait for DNS lookups on each round of discovery. + This is specified as a <> and defaults to `5s`. diff --git a/docs/reference/modules/discovery/fault-detection.asciidoc b/docs/reference/modules/discovery/fault-detection.asciidoc index b696cdb8f7ca2..9062444b80d6c 100644 --- a/docs/reference/modules/discovery/fault-detection.asciidoc +++ b/docs/reference/modules/discovery/fault-detection.asciidoc @@ -2,8 +2,9 @@ === Cluster fault detection The elected master periodically checks each of the nodes in the cluster to -ensure that they are still connected and healthy. Each node in the cluster also periodically checks the health of the elected master. These checks -are known respectively as _follower checks_ and _leader checks_. +ensure that they are still connected and healthy. Each node in the cluster also +periodically checks the health of the elected master. These checks are known +respectively as _follower checks_ and _leader checks_. Elasticsearch allows these checks to occasionally fail or timeout without taking any action. It considers a node to be faulty only after a number of @@ -16,4 +17,4 @@ and retry setting values and attempts to remove the node from the cluster. 
Similarly, if a node detects that the elected master has disconnected, this situation is treated as an immediate failure. The node bypasses the timeout and retry settings and restarts its discovery phase to try and find or elect a new -master. \ No newline at end of file +master. diff --git a/docs/reference/modules/discovery/quorums.asciidoc b/docs/reference/modules/discovery/quorums.asciidoc index 8f3b74be05d9d..1a1954454268c 100644 --- a/docs/reference/modules/discovery/quorums.asciidoc +++ b/docs/reference/modules/discovery/quorums.asciidoc @@ -18,13 +18,13 @@ cluster. In many cases you can do this simply by starting or stopping the nodes as required. See <>. As nodes are added or removed Elasticsearch maintains an optimal level of fault -tolerance by updating the cluster's _voting configuration_, which is the set of -master-eligible nodes whose responses are counted when making decisions such as -electing a new master or committing a new cluster state. A decision is made only -after more than half of the nodes in the voting configuration have responded. -Usually the voting configuration is the same as the set of all the -master-eligible nodes that are currently in the cluster. However, there are some -situations in which they may be different. +tolerance by updating the cluster's <>, which is the set of master-eligible nodes whose responses are +counted when making decisions such as electing a new master or committing a new +cluster state. A decision is made only after more than half of the nodes in the +voting configuration have responded. Usually the voting configuration is the +same as the set of all the master-eligible nodes that are currently in the +cluster. However, there are some situations in which they may be different. To be sure that the cluster remains available you **must not stop half or more of the nodes in the voting configuration at the same time**. 
As long as more @@ -38,46 +38,6 @@ cluster-state update that adjusts the voting configuration to match, and this can take a short time to complete. It is important to wait for this adjustment to complete before removing more nodes from the cluster. -[float] -==== Setting the initial quorum - -When a brand-new cluster starts up for the first time, it must elect its first -master node. To do this election, it needs to know the set of master-eligible -nodes whose votes should count. This initial voting configuration is known as -the _bootstrap configuration_ and is set in the -<>. - -It is important that the bootstrap configuration identifies exactly which nodes -should vote in the first election. It is not sufficient to configure each node -with an expectation of how many nodes there should be in the cluster. It is also -important to note that the bootstrap configuration must come from outside the -cluster: there is no safe way for the cluster to determine the bootstrap -configuration correctly on its own. - -If the bootstrap configuration is not set correctly, when you start a brand-new -cluster there is a risk that you will accidentally form two separate clusters -instead of one. This situation can lead to data loss: you might start using both -clusters before you notice that anything has gone wrong and it is impossible to -merge them together later. - -NOTE: To illustrate the problem with configuring each node to expect a certain -cluster size, imagine starting up a three-node cluster in which each node knows -that it is going to be part of a three-node cluster. A majority of three nodes -is two, so normally the first two nodes to discover each other form a cluster -and the third node joins them a short time later. However, imagine that four -nodes were erroneously started instead of three. In this case, there are enough -nodes to form two separate clusters. Of course if each node is started manually -then it's unlikely that too many nodes are started. 
If you're using an automated -orchestrator, however, it's certainly possible to get into this situation-- -particularly if the orchestrator is not resilient to failures such as network -partitions. - -The initial quorum is only required the very first time a whole cluster starts -up. New nodes joining an established cluster can safely obtain all the -information they need from the elected master. Nodes that have previously been -part of a cluster will have stored to disk all the information that is required -when they restart. - [float] ==== Master elections @@ -104,92 +64,3 @@ and then started again then it will automatically recover, such as during a action with the APIs described here in these cases, because the set of master nodes is not changing permanently. -[float] -==== Automatic changes to the voting configuration - -Nodes may join or leave the cluster, and Elasticsearch reacts by automatically -making corresponding changes to the voting configuration in order to ensure that -the cluster is as resilient as possible. - -The default auto-reconfiguration -behaviour is expected to give the best results in most situations. The current -voting configuration is stored in the cluster state so you can inspect its -current contents as follows: - -[source,js] --------------------------------------------------- -GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config --------------------------------------------------- -// CONSOLE - -NOTE: The current voting configuration is not necessarily the same as the set of -all available master-eligible nodes in the cluster. Altering the voting -configuration involves taking a vote, so it takes some time to adjust the -configuration as nodes join or leave the cluster. 
Also, there are situations -where the most resilient configuration includes unavailable nodes, or does not -include some available nodes, and in these situations the voting configuration -differs from the set of available master-eligible nodes in the cluster. - -Larger voting configurations are usually more resilient, so Elasticsearch -normally prefers to add master-eligible nodes to the voting configuration after -they join the cluster. Similarly, if a node in the voting configuration -leaves the cluster and there is another master-eligible node in the cluster that -is not in the voting configuration then it is preferable to swap these two nodes -over. The size of the voting configuration is thus unchanged but its -resilience increases. - -It is not so straightforward to automatically remove nodes from the voting -configuration after they have left the cluster. Different strategies have -different benefits and drawbacks, so the right choice depends on how the cluster -will be used. You can control whether the voting configuration automatically shrinks by using the following setting: - -`cluster.auto_shrink_voting_configuration`:: - - Defaults to `true`, meaning that the voting configuration will automatically - shrink, shedding departed nodes, as long as it still contains at least 3 - nodes. If set to `false`, the voting configuration never automatically - shrinks; departed nodes must be removed manually using the - <>. - -NOTE: If `cluster.auto_shrink_voting_configuration` is set to `true`, the -recommended and default setting, and there are at least three master-eligible -nodes in the cluster, then Elasticsearch remains capable of processing -cluster-state updates as long as all but one of its master-eligible nodes are -healthy. - -There are situations in which Elasticsearch might tolerate the loss of multiple -nodes, but this is not guaranteed under all sequences of failures. 
If this -setting is set to `false` then departed nodes must be removed from the voting -configuration manually, using the -<>, to achieve -the desired level of resilience. - -No matter how it is configured, Elasticsearch will not suffer from a "split-brain" inconsistency. -The `cluster.auto_shrink_voting_configuration` setting affects only its availability in the -event of the failure of some of its nodes, and the administrative tasks that -must be performed as nodes join and leave the cluster. - -[float] -==== Even numbers of master-eligible nodes - -There should normally be an odd number of master-eligible nodes in a cluster. -If there is an even number, Elasticsearch leaves one of them out of the voting -configuration to ensure that it has an odd size. This omission does not decrease -the failure-tolerance of the cluster. In fact, improves it slightly: if the -cluster suffers from a network partition that divides it into two equally-sized -halves then one of the halves will contain a majority of the voting -configuration and will be able to keep operating. If all of the master-eligible -nodes' votes were counted, neither side would contain a strict majority of the -nodes and so the cluster would not be able to make any progress. - -For instance if there are four master-eligible nodes in the cluster and the -voting configuration contained all of them, any quorum-based decision would -require votes from at least three of them. This situation means that the cluster -can tolerate the loss of only a single master-eligible node. If this cluster -were split into two equal halves, neither half would contain three -master-eligible nodes and the cluster would not be able to make any progress. -If the voting configuration contains only three of the four master-eligible -nodes, however, the cluster is still only fully tolerant to the loss of one -node, but quorum-based decisions require votes from two of the three voting -nodes. 
In the event of an even split, one half will contain two of the three -voting nodes so that half will remain available. diff --git a/docs/reference/modules/discovery/voting.asciidoc b/docs/reference/modules/discovery/voting.asciidoc new file mode 100644 index 0000000000000..7c6ea0c1cc985 --- /dev/null +++ b/docs/reference/modules/discovery/voting.asciidoc @@ -0,0 +1,140 @@ +[[modules-discovery-voting]] +=== Voting configurations + +Each {es} cluster has a _voting configuration_, which is the set of +<> whose responses are counted when making +decisions such as electing a new master or committing a new cluster state. +Decisions are made only after a majority (more than half) of the nodes in the +voting configuration respond. + +Usually the voting configuration is the same as the set of all the +master-eligible nodes that are currently in the cluster. However, there are some +situations in which they may be different. + +IMPORTANT: To ensure the cluster remains available, you **must not stop half or +more of the nodes in the voting configuration at the same time**. As long as more +than half of the voting nodes are available, the cluster can work normally. For +example, if there are three or four master-eligible nodes, the cluster +can tolerate one unavailable node. If there are two or fewer master-eligible +nodes, they must all remain available. + +After a node joins or leaves the cluster, {es} reacts by automatically making +corresponding changes to the voting configuration in order to ensure that the +cluster is as resilient as possible. It is important to wait for this adjustment +to complete before you remove more nodes from the cluster. For more information, +see <>. 
+ +The current voting configuration is stored in the cluster state so you can +inspect its current contents as follows: + +[source,js] +-------------------------------------------------- +GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config +-------------------------------------------------- +// CONSOLE + +NOTE: The current voting configuration is not necessarily the same as the set of +all available master-eligible nodes in the cluster. Altering the voting +configuration involves taking a vote, so it takes some time to adjust the +configuration as nodes join or leave the cluster. Also, there are situations +where the most resilient configuration includes unavailable nodes or does not +include some available nodes. In these situations, the voting configuration +differs from the set of available master-eligible nodes in the cluster. + +Larger voting configurations are usually more resilient, so Elasticsearch +normally prefers to add master-eligible nodes to the voting configuration after +they join the cluster. Similarly, if a node in the voting configuration +leaves the cluster and there is another master-eligible node in the cluster that +is not in the voting configuration then it is preferable to swap these two nodes +over. The size of the voting configuration is thus unchanged but its +resilience increases. + +It is not so straightforward to automatically remove nodes from the voting +configuration after they have left the cluster. Different strategies have +different benefits and drawbacks, so the right choice depends on how the cluster +will be used. You can control whether the voting configuration automatically +shrinks by using the +<>. 
+ +NOTE: If `cluster.auto_shrink_voting_configuration` is set to `true` (which is +the default and recommended value) and there are at least three master-eligible +nodes in the cluster, Elasticsearch remains capable of processing cluster state +updates as long as all but one of its master-eligible nodes are healthy. + +There are situations in which Elasticsearch might tolerate the loss of multiple +nodes, but this is not guaranteed under all sequences of failures. If the +`cluster.auto_shrink_voting_configuration` setting is `false`, you must remove +departed nodes from the voting configuration manually. Use the +<> to achieve the desired level +of resilience. + +No matter how it is configured, Elasticsearch will not suffer from a +"split-brain" inconsistency. The `cluster.auto_shrink_voting_configuration` +setting affects only its availability in the event of the failure of some of its +nodes and the administrative tasks that must be performed as nodes join and +leave the cluster. + +[float] +==== Even numbers of master-eligible nodes + +There should normally be an odd number of master-eligible nodes in a cluster. +If there is an even number, Elasticsearch leaves one of them out of the voting +configuration to ensure that it has an odd size. This omission does not decrease +the failure-tolerance of the cluster. In fact, it improves it slightly: if the +cluster suffers from a network partition that divides it into two equally-sized +halves, then one of the halves will contain a majority of the voting +configuration and will be able to keep operating. If all of the votes from +master-eligible nodes were counted, neither side would contain a strict majority +of the nodes and so the cluster would not be able to make any progress. + +For instance, if there are four master-eligible nodes in the cluster and the +voting configuration contains all of them, any quorum-based decision would +require votes from at least three of them.
This situation means that the cluster
+can tolerate the loss of only a single master-eligible node. If this cluster
+were split into two equal halves, neither half would contain three
+master-eligible nodes and the cluster would not be able to make any progress.
+If the voting configuration contains only three of the four master-eligible
+nodes, however, the cluster is still only fully tolerant to the loss of one
+node, but quorum-based decisions require votes from two of the three voting
+nodes. In the event of an even split, one half will contain two of the three
+voting nodes so that half will remain available.
+
+[float]
+==== Setting the initial voting configuration
+
+When a brand-new cluster starts up for the first time, it must elect its first
+master node. To hold this election, it needs to know the set of master-eligible
+nodes whose votes should count. This initial voting configuration is known as
+the _bootstrap configuration_ and is set in the
+<>.
+
+It is important that the bootstrap configuration identifies exactly which nodes
+should vote in the first election. It is not sufficient to configure each node
+with an expectation of how many nodes there should be in the cluster. It is also
+important to note that the bootstrap configuration must come from outside the
+cluster: there is no safe way for the cluster to determine the bootstrap
+configuration correctly on its own.
+
+If the bootstrap configuration is not set correctly, when you start a brand-new
+cluster there is a risk that you will accidentally form two separate clusters
+instead of one. This situation can lead to data loss: you might start using both
+clusters before you notice that anything has gone wrong and it is impossible to
+merge them together later.
+
+NOTE: To illustrate the problem with configuring each node to expect a certain
+cluster size, imagine starting up a three-node cluster in which each node knows
+that it is going to be part of a three-node cluster.
A majority of three nodes
+is two, so normally the first two nodes to discover each other form a cluster
+and the third node joins them a short time later. However, imagine that four
+nodes were erroneously started instead of three. In this case, there are enough
+nodes to form two separate clusters. Of course, if each node is started manually
+then it's unlikely that too many nodes are started. If you're using an automated
+orchestrator, however, it's certainly possible to get into this situation--
+particularly if the orchestrator is not resilient to failures such as network
+partitions.
+
+The initial quorum is only required the very first time a whole cluster starts
+up. New nodes joining an established cluster can safely obtain all the
+information they need from the elected master. Nodes that have previously been
+part of a cluster will have stored to disk all the information that is required
+when they restart.
diff --git a/docs/reference/modules/ml-node.asciidoc b/docs/reference/modules/ml-node.asciidoc
index 9e4413e3a0c7e..5a907adfbbf3a 100644
--- a/docs/reference/modules/ml-node.asciidoc
+++ b/docs/reference/modules/ml-node.asciidoc
@@ -9,10 +9,9 @@ If {xpack} is installed, there is an additional node type:
 <>::
 A node that has `xpack.ml.enabled` and `node.ml` set to `true`, which is the
-default behavior when {xpack} is installed. If you want to use {xpackml}
-features, there must be at least one {ml} node in your cluster. For more
-information about {xpackml} features,
-see {xpack-ref}/xpack-ml.html[Machine Learning in the Elastic Stack].
+default behavior when {xpack} is installed. If you want to use {ml-features}, there must be at least one {ml} node in your cluster. For more
+information about {ml-features},
+see {stack-ov}/xpack-ml.html[Machine learning in the {stack}].
 
 IMPORTANT: Do not use the `node.ml` setting unless {xpack} is installed.
 Otherwise, the node fails to start.
@@ -88,11 +87,11 @@ node.ml: false <5> [[ml-node]] === [xpack]#Machine learning node# -The {xpackml} features provide {ml} nodes, which run jobs and handle {ml} API +The {ml-features} provide {ml} nodes, which run jobs and handle {ml} API requests. If `xpack.ml.enabled` is set to true and `node.ml` is set to `false`, the node can service API requests but it cannot run jobs. -If you want to use {xpackml} features in your cluster, you must enable {ml} +If you want to use {ml-features} in your cluster, you must enable {ml} (set `xpack.ml.enabled` to `true`) on all master-eligible nodes. Do not use these settings if you do not have {xpack} installed. diff --git a/plugins/examples/custom-settings/src/main/java/org/elasticsearch/example/customsettings/ExampleCustomSettingsConfig.java b/plugins/examples/custom-settings/src/main/java/org/elasticsearch/example/customsettings/ExampleCustomSettingsConfig.java index fafe3615f639d..ffc7b4366b587 100644 --- a/plugins/examples/custom-settings/src/main/java/org/elasticsearch/example/customsettings/ExampleCustomSettingsConfig.java +++ b/plugins/examples/custom-settings/src/main/java/org/elasticsearch/example/customsettings/ExampleCustomSettingsConfig.java @@ -49,7 +49,7 @@ public class ExampleCustomSettingsConfig { /** * A string setting that can be dynamically updated and that is validated by some logic */ - static final Setting VALIDATED_SETTING = Setting.simpleString("custom.validated", (value, settings) -> { + static final Setting VALIDATED_SETTING = Setting.simpleString("custom.validated", value -> { if (value != null && value.contains("forbidden")) { throw new IllegalArgumentException("Setting must not contain [forbidden]"); } diff --git a/server/src/main/java/org/elasticsearch/action/search/SearchRequest.java b/server/src/main/java/org/elasticsearch/action/search/SearchRequest.java index fd996b0aa5cdd..69b090fb89a5a 100644 --- a/server/src/main/java/org/elasticsearch/action/search/SearchRequest.java +++ 
b/server/src/main/java/org/elasticsearch/action/search/SearchRequest.java @@ -179,8 +179,7 @@ public SearchRequest(StreamInput in) throws IOException { if (in.getVersion().onOrAfter(Version.V_6_3_0)) { allowPartialSearchResults = in.readOptionalBoolean(); } - //TODO update version after backport - if (in.getVersion().onOrAfter(Version.V_7_0_0)) { + if (in.getVersion().onOrAfter(Version.V_6_7_0)) { localClusterAlias = in.readOptionalString(); if (localClusterAlias != null) { absoluteStartMillis = in.readVLong(); @@ -211,8 +210,7 @@ public void writeTo(StreamOutput out) throws IOException { if (out.getVersion().onOrAfter(Version.V_6_3_0)) { out.writeOptionalBoolean(allowPartialSearchResults); } - //TODO update version after backport - if (out.getVersion().onOrAfter(Version.V_7_0_0)) { + if (out.getVersion().onOrAfter(Version.V_6_7_0)) { out.writeOptionalString(localClusterAlias); if (localClusterAlias != null) { out.writeVLong(absoluteStartMillis); diff --git a/server/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/server/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index 9361c877d38e5..73e3c6b67eccf 100644 --- a/server/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/server/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -157,6 +157,10 @@ static Setting buildNumberOfShardsSetting() { public static final Setting INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING = Setting.intSetting("index.number_of_routing_shards", INDEX_NUMBER_OF_SHARDS_SETTING, 1, new Setting.Validator() { + @Override + public void validate(Integer value) { + } + @Override public void validate(Integer numRoutingShards, Map, Integer> settings) { Integer numShards = settings.get(INDEX_NUMBER_OF_SHARDS_SETTING); @@ -223,14 +227,14 @@ public Iterator> settings() { public static final String INDEX_ROUTING_INCLUDE_GROUP_PREFIX = "index.routing.allocation.include"; public static final String 
INDEX_ROUTING_EXCLUDE_GROUP_PREFIX = "index.routing.allocation.exclude"; public static final Setting.AffixSetting INDEX_ROUTING_REQUIRE_GROUP_SETTING = - Setting.prefixKeySetting(INDEX_ROUTING_REQUIRE_GROUP_PREFIX + ".", (key) -> - Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); + Setting.prefixKeySetting(INDEX_ROUTING_REQUIRE_GROUP_PREFIX + ".", key -> + Setting.simpleString(key, value -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); public static final Setting.AffixSetting INDEX_ROUTING_INCLUDE_GROUP_SETTING = - Setting.prefixKeySetting(INDEX_ROUTING_INCLUDE_GROUP_PREFIX + ".", (key) -> - Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); + Setting.prefixKeySetting(INDEX_ROUTING_INCLUDE_GROUP_PREFIX + ".", key -> + Setting.simpleString(key, value -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); public static final Setting.AffixSetting INDEX_ROUTING_EXCLUDE_GROUP_SETTING = - Setting.prefixKeySetting(INDEX_ROUTING_EXCLUDE_GROUP_PREFIX + ".", (key) -> - Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); + Setting.prefixKeySetting(INDEX_ROUTING_EXCLUDE_GROUP_PREFIX + ".", key -> + Setting.simpleString(key, value -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.IndexScope)); public static final Setting.AffixSetting INDEX_ROUTING_INITIAL_RECOVERY_GROUP_SETTING = Setting.prefixKeySetting("index.routing.allocation.initial_recovery.", key -> Setting.simpleString(key)); // this is only setable internally not a registered setting!! 
diff --git a/server/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java b/server/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java index ccd64827b32f6..b8d234e9f1086 100644 --- a/server/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java +++ b/server/src/main/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettings.java @@ -93,6 +93,10 @@ public DiskThresholdSettings(Settings settings, ClusterSettings clusterSettings) static final class LowDiskWatermarkValidator implements Setting.Validator { + @Override + public void validate(String value) { + } + @Override public void validate(String value, Map, String> settings) { final String highWatermarkRaw = settings.get(CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING); @@ -112,6 +116,10 @@ public Iterator> settings() { static final class HighDiskWatermarkValidator implements Setting.Validator { + @Override + public void validate(String value) { + } + @Override public void validate(String value, Map, String> settings) { final String lowWatermarkRaw = settings.get(CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING); @@ -131,6 +139,10 @@ public Iterator> settings() { static final class FloodStageValidator implements Setting.Validator { + @Override + public void validate(String value) { + } + @Override public void validate(String value, Map, String> settings) { final String lowWatermarkRaw = settings.get(CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING); diff --git a/server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java b/server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java index 053d696f6768c..7d24d46318585 100644 --- a/server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java +++ 
b/server/src/main/java/org/elasticsearch/cluster/routing/allocation/decider/FilterAllocationDecider.java
@@ -72,14 +72,14 @@ public class FilterAllocationDecider extends AllocationDecider {
     private static final String CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX = "cluster.routing.allocation.include";
     private static final String CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX = "cluster.routing.allocation.exclude";
     public static final Setting.AffixSetting<String> CLUSTER_ROUTING_REQUIRE_GROUP_SETTING =
-        Setting.prefixKeySetting(CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX + ".", (key) ->
-            Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope));
+        Setting.prefixKeySetting(CLUSTER_ROUTING_REQUIRE_GROUP_PREFIX + ".", key ->
+            Setting.simpleString(key, value -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope));
     public static final Setting.AffixSetting<String> CLUSTER_ROUTING_INCLUDE_GROUP_SETTING =
-        Setting.prefixKeySetting(CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX + ".", (key) ->
-            Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope));
+        Setting.prefixKeySetting(CLUSTER_ROUTING_INCLUDE_GROUP_PREFIX + ".", key ->
+            Setting.simpleString(key, value -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope));
     public static final Setting.AffixSetting<String> CLUSTER_ROUTING_EXCLUDE_GROUP_SETTING =
-        Setting.prefixKeySetting(CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX + ".", (key) ->
-            Setting.simpleString(key, (value, map) -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope));
+        Setting.prefixKeySetting(CLUSTER_ROUTING_EXCLUDE_GROUP_PREFIX + ".", key ->
+            Setting.simpleString(key, value -> IP_VALIDATOR.accept(key, value), Property.Dynamic, Property.NodeScope));
 
     /**
      * The set of {@link RecoverySource.Type} values for which the
diff --git a/server/src/main/java/org/elasticsearch/common/Numbers.java b/server/src/main/java/org/elasticsearch/common/Numbers.java
index 7561175f3fe35..27c1dd18e97b8 100644 --- a/server/src/main/java/org/elasticsearch/common/Numbers.java +++ b/server/src/main/java/org/elasticsearch/common/Numbers.java @@ -33,48 +33,6 @@ public final class Numbers { private static final BigInteger MIN_LONG_VALUE = BigInteger.valueOf(Long.MIN_VALUE); private Numbers() { - - } - - /** - * Converts a byte array to an short. - * - * @param arr The byte array to convert to an short - * @return The int converted - */ - public static short bytesToShort(byte[] arr) { - return (short) (((arr[0] & 0xff) << 8) | (arr[1] & 0xff)); - } - - public static short bytesToShort(BytesRef bytes) { - return (short) (((bytes.bytes[bytes.offset] & 0xff) << 8) | (bytes.bytes[bytes.offset + 1] & 0xff)); - } - - /** - * Converts a byte array to an int. - * - * @param arr The byte array to convert to an int - * @return The int converted - */ - public static int bytesToInt(byte[] arr) { - return (arr[0] << 24) | ((arr[1] & 0xff) << 16) | ((arr[2] & 0xff) << 8) | (arr[3] & 0xff); - } - - public static int bytesToInt(BytesRef bytes) { - return (bytes.bytes[bytes.offset] << 24) | ((bytes.bytes[bytes.offset + 1] & 0xff) << 16) | - ((bytes.bytes[bytes.offset + 2] & 0xff) << 8) | (bytes.bytes[bytes.offset + 3] & 0xff); - } - - /** - * Converts a byte array to a long. - * - * @param arr The byte array to convert to a long - * @return The long converter - */ - public static long bytesToLong(byte[] arr) { - int high = (arr[0] << 24) | ((arr[1] & 0xff) << 16) | ((arr[2] & 0xff) << 8) | (arr[3] & 0xff); - int low = (arr[4] << 24) | ((arr[5] & 0xff) << 16) | ((arr[6] & 0xff) << 8) | (arr[7] & 0xff); - return (((long) high) << 32) | (low & 0x0ffffffffL); } public static long bytesToLong(BytesRef bytes) { @@ -85,40 +43,6 @@ public static long bytesToLong(BytesRef bytes) { return (((long) high) << 32) | (low & 0x0ffffffffL); } - /** - * Converts a byte array to float. 
- * - * @param arr The byte array to convert to a float - * @return The float converted - */ - public static float bytesToFloat(byte[] arr) { - return Float.intBitsToFloat(bytesToInt(arr)); - } - - public static float bytesToFloat(BytesRef bytes) { - return Float.intBitsToFloat(bytesToInt(bytes)); - } - - /** - * Converts a byte array to double. - * - * @param arr The byte array to convert to a double - * @return The double converted - */ - public static double bytesToDouble(byte[] arr) { - return Double.longBitsToDouble(bytesToLong(arr)); - } - - public static double bytesToDouble(BytesRef bytes) { - return Double.longBitsToDouble(bytesToLong(bytes)); - } - - /** - * Converts an int to a byte array. - * - * @param val The int to convert to a byte array - * @return The byte array converted - */ public static byte[] intToBytes(int val) { byte[] arr = new byte[4]; arr[0] = (byte) (val >>> 24); @@ -160,16 +84,6 @@ public static byte[] longToBytes(long val) { return arr; } - /** - * Converts a float to a byte array. - * - * @param val The float to convert to a byte array - * @return The byte array converted - */ - public static byte[] floatToBytes(float val) { - return intToBytes(Float.floatToRawIntBits(val)); - } - /** * Converts a double to a byte array. 
* diff --git a/server/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java b/server/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java index 752a9d5aba1eb..b49f0f8225016 100644 --- a/server/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java +++ b/server/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java @@ -723,7 +723,7 @@ private boolean updateSettings(Settings toApply, Settings.Builder target, Settin } else if (get(key) == null) { throw new IllegalArgumentException(type + " setting [" + key + "], not recognized"); } else if (isDelete == false && canUpdate.test(key)) { - validate(key, toApply, false); // we might not have a full picture here do to a dependency validation + get(key).validateWithoutDependencies(toApply); // we might not have a full picture here do to a dependency validation settingsBuilder.copy(key, toApply); updates.copy(key, toApply); changed |= toApply.get(key).equals(target.get(key)) == false; diff --git a/server/src/main/java/org/elasticsearch/common/settings/Setting.java b/server/src/main/java/org/elasticsearch/common/settings/Setting.java index 127f06da1a44d..9c3762f857e4a 100644 --- a/server/src/main/java/org/elasticsearch/common/settings/Setting.java +++ b/server/src/main/java/org/elasticsearch/common/settings/Setting.java @@ -186,7 +186,7 @@ private void checkPropertyRequiresIndexScope(final EnumSet properties, * @param properties properties for this setting like scope, filtering... */ public Setting(Key key, Function defaultValue, Function parser, Property... properties) { - this(key, defaultValue, parser, (v, s) -> {}, properties); + this(key, defaultValue, parser, v -> {}, properties); } /** @@ -246,7 +246,7 @@ public Setting(String key, Function defaultValue, Function fallbackSetting, Function parser, Property... 
properties) { - this(key, fallbackSetting, fallbackSetting::getRaw, parser, (v, m) -> {}, properties); + this(key, fallbackSetting, fallbackSetting::getRaw, parser, v -> {}, properties); } /** @@ -354,6 +354,14 @@ boolean hasComplexMatcher() { return isGroupSetting(); } + /** + * Validate the current setting value only without dependencies with {@link Setting.Validator#validate(Object)}. + * @param settings a settings object for settings that has a default value depending on another setting if available + */ + void validateWithoutDependencies(Settings settings) { + validator.validate(get(settings, false)); + } + /** * Returns the default value string representation for this setting. * @param settings a settings object for settings that has a default value depending on another setting if available @@ -414,6 +422,7 @@ private T get(Settings settings, boolean validate) { } else { map = Collections.emptyMap(); } + validator.validate(parsed); validator.validate(parsed, map); } return parsed; @@ -805,8 +814,10 @@ public Map getAsMap(Settings settings) { } /** - * Represents a validator for a setting. The {@link #validate(Object, Map)} method is invoked with the value of this setting and a map - * from the settings specified by {@link #settings()}} to their values. All these values come from the same {@link Settings} instance. + * Represents a validator for a setting. The {@link #validate(Object)} method is invoked early in the update setting process with the + * value of this setting for a fail-fast validation. Later on, the {@link #validate(Object, Map)} method is invoked with the value of + * this setting and a map from the settings specified by {@link #settings()}} to their values. All these values come from the same + * {@link Settings} instance. * * @param the type of the {@link Setting} */ @@ -814,17 +825,28 @@ public Map getAsMap(Settings settings) { public interface Validator { /** - * The validation routine for this validator. 
+     * Validate this setting's value in isolation.
+     *
+     * @param value the value of this setting
+     */
+    void validate(T value);
+
+    /**
+     * Validate this setting against its dependencies, specified by {@link #settings()}. The default implementation does nothing,
+     * accepting any value as valid as long as it passes the validation in {@link #validate(Object)}.
      *
      * @param value the value of this setting
      * @param settings a map from the settings specified by {@link #settings()} to their values
      */
-    void validate(T value, Map<Setting<?>, T> settings);
+    default void validate(T value, Map<Setting<?>, T> settings) {
+    }
 
     /**
-     * The settings needed by this validator.
+     * The settings on which the validity of this setting depends. The values of the specified settings are passed to
+     * {@link #validate(Object, Map)}. By default this returns an empty iterator, indicating that this setting does not depend on any
+     * other settings.
      *
-     * @return the settings needed to validate; these can be used for cross-settings validation
+     * @return the settings on which the validity of this setting depends.
      */
     default Iterator<Setting<?>> settings() {
         return Collections.emptyIterator();
@@ -1021,8 +1043,8 @@ public static Setting<String> simpleString(String key, Property... properties) {
         return new Setting<>(key, s -> "", Function.identity(), properties);
     }
 
-    public static Setting<String> simpleString(String key, Function<String, String> parser, Property... properties) {
-        return new Setting<>(key, s -> "", parser, properties);
+    public static Setting<String> simpleString(String key, Validator<String> validator, Property... properties) {
+        return new Setting<>(new SimpleKey(key), null, s -> "", Function.identity(), validator, properties);
     }
 
     public static Setting<String> simpleString(String key, Setting<String> fallback, Property... properties) {
@@ -1037,10 +1059,6 @@ public static Setting<String> simpleString(
         return new Setting<>(key, fallback, parser, properties);
     }
 
-    public static Setting<String> simpleString(String key, Validator<String> validator, Property...
properties) { - return new Setting<>(new SimpleKey(key), null, s -> "", Function.identity(), validator, properties); - } - /** * Creates a new Setting instance with a String value * @@ -1279,9 +1297,9 @@ private ListSetting( super( new ListKey(key), fallbackSetting, - (s) -> Setting.arrayToParsableString(defaultStringValue.apply(s)), + s -> Setting.arrayToParsableString(defaultStringValue.apply(s)), parser, - (v,s) -> {}, + v -> {}, properties); this.defaultStringValue = defaultStringValue; } @@ -1339,7 +1357,7 @@ public static Setting timeSetting( fallbackSetting, fallbackSetting::getRaw, minTimeValueParser(key, minValue), - (v, s) -> {}, + v -> {}, properties); } diff --git a/server/src/main/java/org/elasticsearch/common/settings/Settings.java b/server/src/main/java/org/elasticsearch/common/settings/Settings.java index e8ba6d383d55e..ac43a1800b40f 100644 --- a/server/src/main/java/org/elasticsearch/common/settings/Settings.java +++ b/server/src/main/java/org/elasticsearch/common/settings/Settings.java @@ -1019,8 +1019,8 @@ public Builder put(String setting, double value) { * @param value The time value * @return The builder */ - public Builder put(String setting, long value, TimeUnit timeUnit) { - put(setting, timeUnit.toMillis(value) + "ms"); + public Builder put(final String setting, final long value, final TimeUnit timeUnit) { + put(setting, new TimeValue(value, timeUnit)); return this; } diff --git a/server/src/main/java/org/elasticsearch/threadpool/AutoQueueAdjustingExecutorBuilder.java b/server/src/main/java/org/elasticsearch/threadpool/AutoQueueAdjustingExecutorBuilder.java index 45f53006ecd59..bc17d52bcc236 100644 --- a/server/src/main/java/org/elasticsearch/threadpool/AutoQueueAdjustingExecutorBuilder.java +++ b/server/src/main/java/org/elasticsearch/threadpool/AutoQueueAdjustingExecutorBuilder.java @@ -75,8 +75,12 @@ public final class AutoQueueAdjustingExecutorBuilder extends ExecutorBuilder( minSizeKey, Integer.toString(minQueueSize), - (s) -> 
Setting.parseInt(s, 0, minSizeKey), + s -> Setting.parseInt(s, 0, minSizeKey), new Setting.Validator() { + @Override + public void validate(Integer value) { + } + @Override public void validate(Integer value, Map, Integer> settings) { if (value > settings.get(tempMaxQueueSizeSetting)) { @@ -94,8 +98,12 @@ public Iterator> settings() { this.maxQueueSizeSetting = new Setting<>( maxSizeKey, Integer.toString(maxQueueSize), - (s) -> Setting.parseInt(s, 0, maxSizeKey), + s -> Setting.parseInt(s, 0, maxSizeKey), new Setting.Validator() { + @Override + public void validate(Integer value) { + } + @Override public void validate(Integer value, Map, Integer> settings) { if (value < settings.get(tempMinQueueSizeSetting)) { diff --git a/server/src/main/java/org/elasticsearch/transport/RemoteClusterAware.java b/server/src/main/java/org/elasticsearch/transport/RemoteClusterAware.java index 5a874ba61a218..9b9243b612b74 100644 --- a/server/src/main/java/org/elasticsearch/transport/RemoteClusterAware.java +++ b/server/src/main/java/org/elasticsearch/transport/RemoteClusterAware.java @@ -121,7 +121,6 @@ public String getKey(final String key) { if (Strings.hasLength(s)) { parsePort(s); } - return s; }, Setting.Property.Deprecated, Setting.Property.Dynamic, diff --git a/server/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java b/server/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java index c345e34d20c3a..6786a630d86ab 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdaterTests.java @@ -29,12 +29,16 @@ import org.elasticsearch.test.ESTestCase; import java.util.ArrayList; +import java.util.HashSet; +import java.util.Iterator; import java.util.List; +import java.util.Map; import java.util.Set; import java.util.concurrent.atomic.AtomicReference; import 
java.util.stream.Collectors; import java.util.stream.Stream; +import static java.util.Arrays.asList; import static org.elasticsearch.common.settings.AbstractScopedSettings.ARCHIVED_SETTINGS_PREFIX; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasItem; @@ -221,24 +225,11 @@ public void testUpdateWithUnknownAndSettings() { // these are invalid settings that exist as either persistent or transient settings final int numberOfInvalidSettings = randomIntBetween(0, 7); - final List> invalidSettings = new ArrayList<>(numberOfInvalidSettings); - for (int i = 0; i < numberOfInvalidSettings; i++) { - final Setting invalidSetting = Setting.simpleString( - "invalid.setting" + i, - (value, settings) -> { - throw new IllegalArgumentException("invalid"); - }, - Property.NodeScope); - invalidSettings.add(invalidSetting); - } + final List> invalidSettings = invalidSettings(numberOfInvalidSettings); // these are unknown settings that exist as either persistent or transient settings final int numberOfUnknownSettings = randomIntBetween(0, 7); - final List> unknownSettings = new ArrayList<>(numberOfUnknownSettings); - for (int i = 0; i < numberOfUnknownSettings; i++) { - final Setting unknownSetting = Setting.simpleString("unknown.setting" + i, Property.NodeScope); - unknownSettings.add(unknownSetting); - } + final List> unknownSettings = unknownSettings(numberOfUnknownSettings); final Settings.Builder existingPersistentSettings = Settings.builder(); final Settings.Builder existingTransientSettings = Settings.builder(); @@ -393,24 +384,11 @@ public void testRemovingArchivedSettingsDoesNotRemoveNonArchivedInvalidOrUnknown // these are invalid settings that exist as either persistent or transient settings final int numberOfInvalidSettings = randomIntBetween(0, 7); - final List> invalidSettings = new ArrayList<>(numberOfInvalidSettings); - for (int i = 0; i < numberOfInvalidSettings; i++) { - final Setting invalidSetting = Setting.simpleString( - 
"invalid.setting" + i, - (value, settings) -> { - throw new IllegalArgumentException("invalid"); - }, - Property.NodeScope); - invalidSettings.add(invalidSetting); - } + final List> invalidSettings = invalidSettings(numberOfInvalidSettings); // these are unknown settings that exist as either persistent or transient settings final int numberOfUnknownSettings = randomIntBetween(0, 7); - final List> unknownSettings = new ArrayList<>(numberOfUnknownSettings); - for (int i = 0; i < numberOfUnknownSettings; i++) { - final Setting unknownSetting = Setting.simpleString("unknown.setting" + i, Property.NodeScope); - unknownSettings.add(unknownSetting); - } + final List> unknownSettings = unknownSettings(numberOfUnknownSettings); final Settings.Builder existingPersistentSettings = Settings.builder(); final Settings.Builder existingTransientSettings = Settings.builder(); @@ -511,4 +489,120 @@ public void testRemovingArchivedSettingsDoesNotRemoveNonArchivedInvalidOrUnknown } } + private static List> unknownSettings(int numberOfUnknownSettings) { + final List> unknownSettings = new ArrayList<>(numberOfUnknownSettings); + for (int i = 0; i < numberOfUnknownSettings; i++) { + unknownSettings.add(Setting.simpleString("unknown.setting" + i, Property.NodeScope)); + } + return unknownSettings; + } + + private static List> invalidSettings(int numberOfInvalidSettings) { + final List> invalidSettings = new ArrayList<>(numberOfInvalidSettings); + for (int i = 0; i < numberOfInvalidSettings; i++) { + invalidSettings.add(randomBoolean() ? 
invalidInIsolationSetting(i) : invalidWithDependenciesSetting(i)); + } + return invalidSettings; + } + + private static Setting invalidInIsolationSetting(int index) { + return Setting.simpleString("invalid.setting" + index, + new Setting.Validator() { + @Override + public void validate(String value) { + throw new IllegalArgumentException("Invalid in isolation setting"); + } + + @Override + public void validate(String value, Map, String> settings) { + } + }, + Property.NodeScope); + } + + private static Setting invalidWithDependenciesSetting(int index) { + return Setting.simpleString("invalid.setting" + index, + new Setting.Validator() { + @Override + public void validate(String value) { + } + + @Override + public void validate(String value, Map, String> settings) { + throw new IllegalArgumentException("Invalid with dependencies setting"); + } + }, + Property.NodeScope); + } + + private static class FooLowSettingValidator implements Setting.Validator { + @Override + public void validate(Integer value) { + } + + @Override + public void validate(Integer low, Map, Integer> settings) { + if (settings.containsKey(SETTING_FOO_HIGH) && low > settings.get(SETTING_FOO_HIGH)) { + throw new IllegalArgumentException("[low]=" + low + " is higher than [high]=" + settings.get(SETTING_FOO_HIGH)); + } + } + + @Override + public Iterator> settings() { + return asList(SETTING_FOO_LOW, SETTING_FOO_HIGH).iterator(); + } + } + + private static class FooHighSettingValidator implements Setting.Validator { + @Override + public void validate(Integer value) { + } + + @Override + public void validate(Integer high, Map, Integer> settings) { + if (settings.containsKey(SETTING_FOO_LOW) && high < settings.get(SETTING_FOO_LOW)) { + throw new IllegalArgumentException("[high]=" + high + " is lower than [low]=" + settings.get(SETTING_FOO_LOW)); + } + } + + @Override + public Iterator> settings() { + return asList(SETTING_FOO_LOW, SETTING_FOO_HIGH).iterator(); + } + } + + private static final Setting 
SETTING_FOO_LOW = new Setting<>("foo.low", "10", + Integer::valueOf, new FooLowSettingValidator(), Property.Dynamic, Setting.Property.NodeScope); + private static final Setting SETTING_FOO_HIGH = new Setting<>("foo.high", "100", + Integer::valueOf, new FooHighSettingValidator(), Property.Dynamic, Setting.Property.NodeScope); + + public void testUpdateOfValidationDependentSettings() { + final ClusterSettings settings = new ClusterSettings(Settings.EMPTY, new HashSet<>(asList(SETTING_FOO_LOW, SETTING_FOO_HIGH))); + final SettingsUpdater updater = new SettingsUpdater(settings); + final MetaData.Builder metaData = MetaData.builder().persistentSettings(Settings.EMPTY).transientSettings(Settings.EMPTY); + + ClusterState cluster = ClusterState.builder(new ClusterName("cluster")).metaData(metaData).build(); + + cluster = updater.updateSettings(cluster, Settings.builder().put(SETTING_FOO_LOW.getKey(), 20).build(), Settings.EMPTY, logger); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_LOW.getKey()), equalTo("20")); + + cluster = updater.updateSettings(cluster, Settings.builder().put(SETTING_FOO_HIGH.getKey(), 40).build(), Settings.EMPTY, logger); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_LOW.getKey()), equalTo("20")); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_HIGH.getKey()), equalTo("40")); + + cluster = updater.updateSettings(cluster, Settings.builder().put(SETTING_FOO_LOW.getKey(), 5).build(), Settings.EMPTY, logger); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_LOW.getKey()), equalTo("5")); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_HIGH.getKey()), equalTo("40")); + + cluster = updater.updateSettings(cluster, Settings.builder().put(SETTING_FOO_HIGH.getKey(), 8).build(), Settings.EMPTY, logger); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_LOW.getKey()), equalTo("5")); + assertThat(cluster.getMetaData().settings().get(SETTING_FOO_HIGH.getKey()), equalTo("8")); + + 
final ClusterState finalCluster = cluster;
+        Exception exception = expectThrows(IllegalArgumentException.class, () ->
+            updater.updateSettings(finalCluster, Settings.builder().put(SETTING_FOO_HIGH.getKey(), 2).build(), Settings.EMPTY, logger));
+
+        assertThat(exception.getMessage(), equalTo("[high]=2 is lower than [low]=5"));
+    }
+}
diff --git a/server/src/test/java/org/elasticsearch/action/search/SearchRequestTests.java b/server/src/test/java/org/elasticsearch/action/search/SearchRequestTests.java
index 3fb9b6ae4eb16..91f6c0c09cd20 100644
--- a/server/src/test/java/org/elasticsearch/action/search/SearchRequestTests.java
+++ b/server/src/test/java/org/elasticsearch/action/search/SearchRequestTests.java
@@ -76,8 +76,7 @@ public void testClusterAliasSerialization() throws IOException {
         SearchRequest searchRequest = createSearchRequest();
         Version version = VersionUtils.randomVersion(random());
         SearchRequest deserializedRequest = copyWriteable(searchRequest, namedWriteableRegistry, SearchRequest::new, version);
-        //TODO update version after backport
-        if (version.before(Version.V_7_0_0)) {
+        if (version.before(Version.V_6_7_0)) {
             assertNull(deserializedRequest.getLocalClusterAlias());
             assertAbsoluteStartMillisIsCurrentTime(deserializedRequest);
         } else {
@@ -86,11 +85,10 @@
         }
     }
 
-    //TODO rename and update version after backport
-    public void testReadFromPre7_0_0() throws IOException {
+    public void testReadFromPre6_7_0() throws IOException {
         String msg = "AAEBBWluZGV4AAAAAQACAAAA/////w8AAAAAAAAA/////w8AAAAAAAACAAAAAAABAAMCBAUBAAKABACAAQIAAA==";
         try (StreamInput in = StreamInput.wrap(Base64.getDecoder().decode(msg))) {
-            in.setVersion(VersionUtils.randomVersionBetween(random(), Version.V_6_4_0, VersionUtils.getPreviousVersion(Version.V_7_0_0)));
+            in.setVersion(VersionUtils.randomVersionBetween(random(), Version.V_6_4_0, VersionUtils.getPreviousVersion(Version.V_6_7_0)));
             SearchRequest searchRequest = new
SearchRequest(in);
             assertArrayEquals(new String[]{"index"}, searchRequest.indices());
             assertNull(searchRequest.getLocalClusterAlias());
diff --git a/server/src/test/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettingsTests.java b/server/src/test/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettingsTests.java
index 342fcea7ddef1..d9e157187d581 100644
--- a/server/src/test/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettingsTests.java
+++ b/server/src/test/java/org/elasticsearch/cluster/routing/allocation/DiskThresholdSettingsTests.java
@@ -26,6 +26,7 @@
 
 import java.util.Locale;
 
+import static org.hamcrest.CoreMatchers.equalTo;
 import static org.hamcrest.Matchers.containsString;
 import static org.hamcrest.Matchers.hasToString;
 import static org.hamcrest.Matchers.instanceOf;
@@ -203,4 +204,50 @@ public void testInvalidHighDiskThreshold() {
         assertThat(cause, hasToString(containsString("low disk watermark [85%] more than high disk watermark [75%]")));
     }
 
+    public void testSequenceOfUpdates() {
+        final ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
+        new DiskThresholdSettings(Settings.EMPTY, clusterSettings); // this has the effect of registering the settings updater
+
+        final Settings.Builder target = Settings.builder();
+
+        {
+            final Settings settings = Settings.builder()
+                .put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey(), "99%")
+                .build();
+            final Settings.Builder updates = Settings.builder();
+            assertTrue(clusterSettings.updateSettings(settings, target, updates, "transient"));
+            assertNull(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey()));
+            assertNull(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey()));
+
assertThat(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey()),
+                equalTo("99%"));
+        }
+
+        {
+            final Settings settings = Settings.builder()
+                .put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), "97%")
+                .build();
+            final Settings.Builder updates = Settings.builder();
+            assertTrue(clusterSettings.updateSettings(settings, target, updates, "transient"));
+            assertNull(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey()));
+            assertThat(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey()),
+                equalTo("97%"));
+            assertThat(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey()),
+                equalTo("99%"));
+        }
+
+        {
+            final Settings settings = Settings.builder()
+                .put(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey(), "95%")
+                .build();
+            final Settings.Builder updates = Settings.builder();
+            assertTrue(clusterSettings.updateSettings(settings, target, updates, "transient"));
+            assertThat(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey()),
+                equalTo("95%"));
+            assertThat(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey()),
+                equalTo("97%"));
+            assertThat(target.get(DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey()),
+                equalTo("99%"));
+        }
+    }
diff --git a/server/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java b/server/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java
index 9194a60382d0d..fc732fbd88e2e 100644
--- a/server/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java
+++ b/server/src/test/java/org/elasticsearch/common/settings/ScopedSettingsTests.java
@@ -37,7 +37,6 @@
 import java.util.Collections;
 import
java.util.HashMap;
 import java.util.HashSet;
-import java.util.Iterator;
 import java.util.LinkedHashSet;
 import java.util.List;
 import java.util.Map;
@@ -51,9 +50,7 @@
 
 import static org.hamcrest.CoreMatchers.containsString;
 import static org.hamcrest.CoreMatchers.equalTo;
-import static org.hamcrest.CoreMatchers.instanceOf;
 import static org.hamcrest.CoreMatchers.startsWith;
-import static org.hamcrest.Matchers.arrayWithSize;
 import static org.hamcrest.Matchers.hasToString;
 import static org.hamcrest.Matchers.sameInstance;
@@ -514,94 +511,6 @@ public void testApply() {
         assertEquals(15, bC.get());
     }
 
-    private static final Setting<Integer> FOO_BAR_LOW_SETTING = new Setting<>(
-        "foo.bar.low",
-        "1",
-        Integer::parseInt,
-        new FooBarLowValidator(),
-        Property.Dynamic,
-        Property.NodeScope);
-
-    private static final Setting<Integer> FOO_BAR_HIGH_SETTING = new Setting<>(
-        "foo.bar.high",
-        "2",
-        Integer::parseInt,
-        new FooBarHighValidator(),
-        Property.Dynamic,
-        Property.NodeScope);
-
-    static class FooBarLowValidator implements Setting.Validator<Integer> {
-        @Override
-        public void validate(Integer value, Map<Setting<?>, Integer> settings) {
-            final int high = settings.get(FOO_BAR_HIGH_SETTING);
-            if (value > high) {
-                throw new IllegalArgumentException("low [" + value + "] more than high [" + high + "]");
-            }
-        }
-
-        @Override
-        public Iterator<Setting<?>> settings() {
-            return Collections.singletonList(FOO_BAR_HIGH_SETTING).iterator();
-        }
-    }
-
-    static class FooBarHighValidator implements Setting.Validator<Integer> {
-        @Override
-        public void validate(Integer value, Map<Setting<?>, Integer> settings) {
-            final int low = settings.get(FOO_BAR_LOW_SETTING);
-            if (value < low) {
-                throw new IllegalArgumentException("high [" + value + "] less than low [" + low + "]");
-            }
-        }
-
-        @Override
-        public Iterator<Setting<?>> settings() {
-            return Collections.singletonList(FOO_BAR_LOW_SETTING).iterator();
-        }
-    }
-
-    public void testValidator() {
-        final AbstractScopedSettings service =
-            new ClusterSettings(Settings.EMPTY, new
HashSet<>(Arrays.asList(FOO_BAR_LOW_SETTING, FOO_BAR_HIGH_SETTING)));
-
-        final AtomicInteger consumerLow = new AtomicInteger();
-        final AtomicInteger consumerHigh = new AtomicInteger();
-
-        service.addSettingsUpdateConsumer(FOO_BAR_LOW_SETTING, consumerLow::set);
-
-        service.addSettingsUpdateConsumer(FOO_BAR_HIGH_SETTING, consumerHigh::set);
-
-        final Settings newSettings = Settings.builder().put("foo.bar.low", 17).put("foo.bar.high", 13).build();
-        {
-            final IllegalArgumentException e =
-                expectThrows(
-                    IllegalArgumentException.class,
-                    () -> service.validateUpdate(newSettings));
-            assertThat(e, hasToString(containsString("illegal value can't update [foo.bar.low] from [1] to [17]")));
-            assertNotNull(e.getCause());
-            assertThat(e.getCause(), instanceOf(IllegalArgumentException.class));
-            final IllegalArgumentException cause = (IllegalArgumentException) e.getCause();
-            assertThat(cause, hasToString(containsString("low [17] more than high [13]")));
-            assertThat(e.getSuppressed(), arrayWithSize(1));
-            assertThat(e.getSuppressed()[0], instanceOf(IllegalArgumentException.class));
-            final IllegalArgumentException suppressed = (IllegalArgumentException) e.getSuppressed()[0];
-            assertThat(suppressed, hasToString(containsString("illegal value can't update [foo.bar.high] from [2] to [13]")));
-            assertNotNull(suppressed.getCause());
-            assertThat(suppressed.getCause(), instanceOf(IllegalArgumentException.class));
-            final IllegalArgumentException suppressedCause = (IllegalArgumentException) suppressed.getCause();
-            assertThat(suppressedCause, hasToString(containsString("high [13] less than low [17]")));
-            assertThat(consumerLow.get(), equalTo(0));
-            assertThat(consumerHigh.get(), equalTo(0));
-        }
-
-        {
-            final IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> service.applySettings(newSettings));
-            assertThat(e, hasToString(containsString("illegal value can't update [foo.bar.low] from [1] to [17]")));
-            assertThat(consumerLow.get(), equalTo(0));
-            assertThat(consumerHigh.get(), equalTo(0));
-        }
-    }
-
     public void testGet() {
         ClusterSettings settings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
         // affix setting - complex matcher
diff --git a/server/src/test/java/org/elasticsearch/common/settings/SettingTests.java b/server/src/test/java/org/elasticsearch/common/settings/SettingTests.java
index 750c7148946fc..220392a952c29 100644
--- a/server/src/test/java/org/elasticsearch/common/settings/SettingTests.java
+++ b/server/src/test/java/org/elasticsearch/common/settings/SettingTests.java
@@ -204,12 +204,18 @@ public void testValidateStringSetting() {
 
     static class FooBarValidator implements Setting.Validator<String> {
 
-        public static boolean invoked;
+        public static boolean invokedInIsolation;
+        public static boolean invokedWithDependencies;
 
         @Override
-        public void validate(String value, Map<Setting<?>, String> settings) {
-            invoked = true;
+        public void validate(String value) {
+            invokedInIsolation = true;
             assertThat(value, equalTo("foo.bar value"));
+        }
+
+        @Override
+        public void validate(String value, Map<Setting<?>, String> settings) {
+            invokedWithDependencies = true;
             assertTrue(settings.keySet().contains(BAZ_QUX_SETTING));
             assertThat(settings.get(BAZ_QUX_SETTING), equalTo("baz.qux value"));
             assertTrue(settings.keySet().contains(QUUX_QUUZ_SETTING));
@@ -230,7 +236,8 @@ public void testValidator() {
             .put("quux.quuz", "quux.quuz value")
             .build();
         FOO_BAR_SETTING.get(settings);
-        assertTrue(FooBarValidator.invoked);
+        assertTrue(FooBarValidator.invokedInIsolation);
+        assertTrue(FooBarValidator.invokedWithDependencies);
     }
 
     public void testUpdateNotDynamic() {
@@ -934,7 +941,7 @@ public void testAffixMapUpdateWithNullSettingValue() {
         final Setting.AffixSetting<String> affixSetting =
             Setting.prefixKeySetting("prefix" + ".",
-                (key) -> Setting.simpleString(key, (value, map) -> {}, Property.Dynamic, Property.NodeScope));
+                key -> Setting.simpleString(key, Property.Dynamic, Property.NodeScope));
 
         final Consumer<Map<String, String>>
consumer = (map) -> {};
         final BiConsumer<String, String> validator = (s1, s2) -> {};
diff --git a/server/src/test/java/org/elasticsearch/common/settings/SettingsTests.java b/server/src/test/java/org/elasticsearch/common/settings/SettingsTests.java
index 27a9b00204203..802bceaa90812 100644
--- a/server/src/test/java/org/elasticsearch/common/settings/SettingsTests.java
+++ b/server/src/test/java/org/elasticsearch/common/settings/SettingsTests.java
@@ -47,6 +47,7 @@
 import java.util.Map;
 import java.util.NoSuchElementException;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.containsInAnyOrder;
@@ -744,4 +745,18 @@ public void testFractionalByteSizeValue() {
         assertThat(actual, equalTo(expected));
     }
 
+    public void testSetByTimeUnit() {
+        final Setting<TimeValue> setting =
+            Setting.timeSetting("key", TimeValue.parseTimeValue(randomTimeValue(0, 24, "h"), "key"), TimeValue.ZERO);
+        final TimeValue expected = new TimeValue(1500, TimeUnit.MICROSECONDS);
+        final Settings settings = Settings.builder().put("key", expected.getMicros(), TimeUnit.MICROSECONDS).build();
+        /*
+         * Previously we would internally convert the duration to a string by converting to milliseconds which could lose precision (e.g.,
+         * 1500 microseconds would be converted to 1ms). Effectively this test is then asserting that we no longer make this mistake when
+         * doing the internal string conversion. Instead, we convert to a duration using a method that does not lose the original unit.
+         */
+        final TimeValue actual = setting.get(settings);
+        assertThat(actual, equalTo(expected));
+    }
+}
diff --git a/server/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java b/server/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java
index d7ca90a90a3d9..c90d9319df30b 100644
--- a/server/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java
+++ b/server/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java
@@ -445,7 +445,7 @@ public void testRestoreWithDifferentMappingsAndSettings() throws Exception {
 
         logger.info("--> assert that old settings are restored");
         GetSettingsResponse getSettingsResponse = client.admin().indices().prepareGetSettings("test-idx").execute().actionGet();
-        assertThat(getSettingsResponse.getSetting("test-idx", "index.refresh_interval"), equalTo("10000ms"));
+        assertThat(getSettingsResponse.getSetting("test-idx", "index.refresh_interval"), equalTo("10s"));
     }
 
     public void testEmptySnapshot() throws Exception {
diff --git a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/XPackSettings.java b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/XPackSettings.java
index 111d8a9a68ca9..13cc4c121daf6 100644
--- a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/XPackSettings.java
+++ b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/XPackSettings.java
@@ -139,7 +139,7 @@ private XPackSettings() {
      * Do not allow insecure hashing algorithms to be used for password hashing
      */
     public static final Setting<String> PASSWORD_HASHING_ALGORITHM = new Setting<>(
-        "xpack.security.authc.password_hashing.algorithm", "bcrypt", Function.identity(), (v, s) -> {
+        "xpack.security.authc.password_hashing.algorithm", "bcrypt", Function.identity(), v -> {
         if (Hasher.getAvailableAlgoStoredHash().contains(v.toLowerCase(Locale.ROOT)) == false) {
             throw new IllegalArgumentException("Invalid algorithm: " + v + ".
Valid values for password hashing are " + Hasher.getAvailableAlgoStoredHash().toString());
diff --git a/x-pack/plugin/ml/qa/native-multi-node-tests/src/test/java/org/elasticsearch/xpack/ml/integration/MlNativeAutodetectIntegTestCase.java b/x-pack/plugin/ml/qa/native-multi-node-tests/src/test/java/org/elasticsearch/xpack/ml/integration/MlNativeAutodetectIntegTestCase.java
index e824fa2917012..c06810bbf2a0e 100644
--- a/x-pack/plugin/ml/qa/native-multi-node-tests/src/test/java/org/elasticsearch/xpack/ml/integration/MlNativeAutodetectIntegTestCase.java
+++ b/x-pack/plugin/ml/qa/native-multi-node-tests/src/test/java/org/elasticsearch/xpack/ml/integration/MlNativeAutodetectIntegTestCase.java
@@ -6,7 +6,6 @@
 package org.elasticsearch.xpack.ml.integration;
 
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest;
-import org.elasticsearch.action.get.GetResponse;
 import org.elasticsearch.action.search.SearchResponse;
 import org.elasticsearch.action.support.master.AcknowledgedResponse;
 import org.elasticsearch.client.Client;
@@ -348,17 +347,19 @@ protected void waitForecastToFinish(String jobId, String forecastId) throws Exce
     }
 
     protected ForecastRequestStats getForecastStats(String jobId, String forecastId) {
-        GetResponse getResponse = client().prepareGet()
-                .setIndex(AnomalyDetectorsIndex.jobResultsAliasedName(jobId))
-                .setId(ForecastRequestStats.documentId(jobId, forecastId))
-                .execute().actionGet();
+        SearchResponse searchResponse = client().prepareSearch(AnomalyDetectorsIndex.jobResultsAliasedName(jobId))
+                .setQuery(QueryBuilders.idsQuery().addIds(ForecastRequestStats.documentId(jobId, forecastId)))
+                .get();
 
-        if (getResponse.isExists() == false) {
+        if (searchResponse.getHits().getHits().length == 0) {
             return null;
         }
+
+        assertThat(searchResponse.getHits().getHits().length, equalTo(1));
+
         try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(
                 NamedXContentRegistry.EMPTY,
DeprecationHandler.THROW_UNSUPPORTED_OPERATION,
-                getResponse.getSourceAsBytesRef().streamInput())) {
+                searchResponse.getHits().getHits()[0].getSourceRef().streamInput())) {
             return ForecastRequestStats.STRICT_PARSER.apply(parser, null);
         } catch (IOException e) {
             throw new IllegalStateException(e);
@@ -398,7 +399,6 @@ protected long countForecastDocs(String jobId, String forecastId) {
 
     protected List<Forecast> getForecasts(String jobId, ForecastRequestStats forecastRequestStats) {
         List<Forecast> forecasts = new ArrayList<>();
-
         SearchResponse searchResponse = client().prepareSearch(AnomalyDetectorsIndex.jobResultsIndexPrefix() + "*")
                 .setSize((int) forecastRequestStats.getRecordCount())
                 .setQuery(QueryBuilders.boolQuery()
diff --git a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJob.java b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJob.java
index 64e8512baa5b0..35878f1199586 100644
--- a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJob.java
+++ b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJob.java
@@ -32,7 +32,7 @@
 import org.elasticsearch.xpack.core.ml.job.messages.Messages;
 import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
 import org.elasticsearch.xpack.core.ml.job.results.Bucket;
-import org.elasticsearch.xpack.core.security.user.SystemUser;
+import org.elasticsearch.xpack.core.security.user.XPackUser;
 import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetector;
 import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetectorFactory.BucketWithMissingData;
 import org.elasticsearch.xpack.ml.datafeed.extractor.DataExtractorFactory;
@@ -225,12 +225,12 @@ private Annotation createAnnotation(Date startTime, Date endTime, String msg) {
         Date currentTime = new Date(currentTimeSupplier.get());
         return new Annotation(msg,
             currentTime,
-            SystemUser.NAME,
+            XPackUser.NAME,
             startTime,
             endTime,
             jobId,
currentTime,
-            SystemUser.NAME,
+            XPackUser.NAME,
             "annotation");
     }
 
@@ -238,9 +238,11 @@ private String addAndSetDelayedDataAnnotation(Annotation annotation) {
         try (XContentBuilder xContentBuilder = annotation.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS)) {
             IndexRequest request = new IndexRequest(AnnotationIndex.WRITE_ALIAS_NAME);
             request.source(xContentBuilder);
-            IndexResponse response = client.index(request).actionGet();
-            lastDataCheckAnnotation = annotation;
-            return response.getId();
+            try (ThreadContext.StoredContext ignore = stashWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN)) {
+                IndexResponse response = client.index(request).actionGet();
+                lastDataCheckAnnotation = annotation;
+                return response.getId();
+            }
         } catch (IOException ex) {
             String errorMessage = "[" + jobId + "] failed to create annotation for delayed data checker.";
             LOGGER.error(errorMessage, ex);
@@ -251,7 +253,7 @@ private String addAndSetDelayedDataAnnotation(Annotation annotation) {
 
     private void updateAnnotation(Annotation annotation) {
         Annotation updatedAnnotation = new Annotation(lastDataCheckAnnotation);
-        updatedAnnotation.setModifiedUsername(SystemUser.NAME);
+        updatedAnnotation.setModifiedUsername(XPackUser.NAME);
         updatedAnnotation.setModifiedTime(new Date(currentTimeSupplier.get()));
         updatedAnnotation.setAnnotation(annotation.getAnnotation());
         updatedAnnotation.setTimestamp(annotation.getTimestamp());
@@ -260,8 +262,10 @@
             IndexRequest indexRequest = new IndexRequest(AnnotationIndex.WRITE_ALIAS_NAME);
             indexRequest.id(lastDataCheckAnnotationId);
             indexRequest.source(xContentBuilder);
-            client.index(indexRequest).actionGet();
-            lastDataCheckAnnotation = updatedAnnotation;
+            try (ThreadContext.StoredContext ignore = stashWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN)) {
+                client.index(indexRequest).actionGet();
+                lastDataCheckAnnotation = updatedAnnotation;
+            }
         } catch (IOException
ex) {
             String errorMessage = "[" + jobId + "] failed to update annotation for delayed data checker.";
             LOGGER.error(errorMessage, ex);
diff --git a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProvider.java b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProvider.java
index cc75d48b81c0b..17d173bf22fc6 100644
--- a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProvider.java
+++ b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProvider.java
@@ -490,20 +490,6 @@ private <T> T parseSearchHit(SearchHit hit, BiFunction<XContentParser, Void, T>
         }
     }
 
-    private <T> T parseGetHit(GetResponse getResponse, BiFunction<XContentParser, Void, T> objectParser,
-                              Consumer<Exception> errorHandler) {
-        BytesReference source = getResponse.getSourceAsBytesRef();
-
-        try (InputStream stream = source.streamInput();
-             XContentParser parser = XContentFactory.xContent(XContentType.JSON)
-                     .createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, stream)) {
-            return objectParser.apply(parser, null);
-        } catch (IOException e) {
-            errorHandler.accept(new ElasticsearchParseException("failed to parse " + getResponse.getType(), e));
-            return null;
-        }
-    }
-
     /**
      * Search for buckets with the parameters in the {@link BucketsQueryBuilder}
      * Uses the internal client, so runs as the _xpack user
@@ -957,19 +943,6 @@ private void searchSingleResult(String jobId, String resultDescription, S
                 ), client::search);
     }
 
-    private <T> void getResult(String jobId, String resultDescription, GetRequest get, BiFunction<XContentParser, Void, T> objectParser,
-                               Consumer<Result<T>> handler, Consumer<Exception> errorHandler, Supplier<T> notFoundSupplier) {
-
-        executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, get, ActionListener.wrap(getDocResponse -> {
-            if (getDocResponse.isExists()) {
-                handler.accept(new Result<>(getDocResponse.getIndex(), parseGetHit(getDocResponse, objectParser, errorHandler)));
-            } else {
-                LOGGER.trace("No {} for job with id
{}", resultDescription, jobId);
-                handler.accept(new Result<>(null, notFoundSupplier.get()));
-            }
-        }, errorHandler), client::get);
-    }
-
     private SearchRequestBuilder createLatestModelSizeStatsSearch(String indexName) {
         return client.prepareSearch(indexName)
                 .setSize(1)
@@ -1115,11 +1088,14 @@ public void scheduledEvents(ScheduledEventsQueryBuilder query, ActionListener handler, Consumer errorHandler) {
         String indexName = AnomalyDetectorsIndex.jobResultsAliasedName(jobId);
-        GetRequest getRequest = new GetRequest(indexName, ElasticsearchMappings.DOC_TYPE,
-                ForecastRequestStats.documentId(jobId, forecastId));
-
-        getResult(jobId, ForecastRequestStats.RESULTS_FIELD.getPreferredName(), getRequest, ForecastRequestStats.LENIENT_PARSER,
-                result -> handler.accept(result.result), errorHandler, () -> null);
+        SearchRequestBuilder forecastSearch = client.prepareSearch(indexName)
+            .setQuery(QueryBuilders.idsQuery().addIds(ForecastRequestStats.documentId(jobId, forecastId)));
+
+        searchSingleResult(jobId,
+            ForecastRequestStats.RESULTS_FIELD.getPreferredName(),
+            forecastSearch,
+            ForecastRequestStats.LENIENT_PARSER,result -> handler.accept(result.result),
+            errorHandler, () -> null);
     }
 
     public void getForecastStats(String jobId, Consumer<ForecastStats> handler, Consumer<Exception> errorHandler) {
diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJobTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJobTests.java
index 534681ff3c86a..2540ab8cde8ef 100644
--- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJobTests.java
+++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedJobTests.java
@@ -30,7 +30,7 @@
 import org.elasticsearch.xpack.core.ml.datafeed.extractor.DataExtractor;
 import org.elasticsearch.xpack.core.ml.job.messages.Messages;
 import org.elasticsearch.xpack.core.ml.job.results.Bucket;
-import org.elasticsearch.xpack.core.security.user.SystemUser;
+import
org.elasticsearch.xpack.core.security.user.XPackUser;
 import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetector;
 import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetectorFactory.BucketWithMissingData;
 import org.elasticsearch.xpack.ml.datafeed.extractor.DataExtractorFactory;
@@ -271,12 +271,12 @@ public void testRealtimeRun() throws Exception {
         Annotation expectedAnnotation = new Annotation(msg,
             new Date(currentTime),
-            SystemUser.NAME,
+            XPackUser.NAME,
             bucket.getTimestamp(),
             new Date((bucket.getEpoch() + bucket.getBucketSpan()) * 1000),
             jobId,
             new Date(currentTime),
-            SystemUser.NAME,
+            XPackUser.NAME,
             "annotation");
 
         IndexRequest request = new IndexRequest(AnnotationIndex.WRITE_ALIAS_NAME);
@@ -312,7 +312,7 @@ public void testRealtimeRun() throws Exception {
         Annotation updatedAnnotation = new Annotation(expectedAnnotation);
         updatedAnnotation.setAnnotation(msg);
         updatedAnnotation.setModifiedTime(new Date(currentTime));
-        updatedAnnotation.setModifiedUsername(SystemUser.NAME);
+        updatedAnnotation.setModifiedUsername(XPackUser.NAME);
         updatedAnnotation.setEndTimestamp(new Date((bucket2.getEpoch() + bucket2.getBucketSpan()) * 1000));
         try (XContentBuilder xContentBuilder = updatedAnnotation.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS)) {
             indexRequest.source(xContentBuilder);
diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java
index c2bda603724d6..8532cfc4feac4 100644
--- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java
+++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java
@@ -11,7 +11,6 @@
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
 import
org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;
-import org.elasticsearch.action.get.GetResponse;
 import org.elasticsearch.action.search.MultiSearchRequest;
 import org.elasticsearch.action.search.MultiSearchResponse;
 import org.elasticsearch.action.search.SearchRequest;
@@ -834,13 +833,6 @@ private JobResultsProvider createProvider(Client client) {
         return new JobResultsProvider(client, Settings.EMPTY);
     }
 
-    private static GetResponse createGetResponse(boolean exists, Map<String, Object> source) throws IOException {
-        GetResponse getResponse = mock(GetResponse.class);
-        when(getResponse.isExists()).thenReturn(exists);
-        when(getResponse.getSourceAsBytesRef()).thenReturn(BytesReference.bytes(XContentFactory.jsonBuilder().map(source)));
-        return getResponse;
-    }
-
     private static SearchResponse createSearchResponse(List<Map<String, Object>> source) throws IOException {
         SearchResponse response = mock(SearchResponse.class);
         List<SearchHit> list = new ArrayList<>();
diff --git a/x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/Exporter.java b/x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/Exporter.java
index 85a6da15177f9..34c069adb2a02 100644
--- a/x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/Exporter.java
+++ b/x-pack/plugin/monitoring/src/main/java/org/elasticsearch/xpack/monitoring/exporter/Exporter.java
@@ -25,11 +25,11 @@ public abstract class Exporter implements AutoCloseable {
 
     private static final Setting.AffixSetting<Boolean> ENABLED_SETTING =
             Setting.affixKeySetting("xpack.monitoring.exporters.","enabled",
-                    (key) -> Setting.boolSetting(key, true, Property.Dynamic, Property.NodeScope));
+                    key -> Setting.boolSetting(key, true, Property.Dynamic, Property.NodeScope));
 
     private static final Setting.AffixSetting<String> TYPE_SETTING =
             Setting.affixKeySetting("xpack.monitoring.exporters.","type",
-                    (key) -> Setting.simpleString(key, (v, s) -> {
+                    key -> Setting.simpleString(key, v -> {
                         switch (v) {
                             case
"":
                             case "http":
@@ -47,13 +47,13 @@ public abstract class Exporter implements AutoCloseable {
      */
     public static final Setting.AffixSetting<Boolean> USE_INGEST_PIPELINE_SETTING =
             Setting.affixKeySetting("xpack.monitoring.exporters.","use_ingest",
-                    (key) -> Setting.boolSetting(key, true, Property.Dynamic, Property.NodeScope));
+                    key -> Setting.boolSetting(key, true, Property.Dynamic, Property.NodeScope));
 
     /**
      * Every {@code Exporter} allows users to explicitly disable cluster alerts.
      */
     public static final Setting.AffixSetting<Boolean> CLUSTER_ALERTS_MANAGEMENT_SETTING =
             Setting.affixKeySetting("xpack.monitoring.exporters.", "cluster_alerts.management.enabled",
-                    (key) -> Setting.boolSetting(key, true, Property.Dynamic, Property.NodeScope));
+                    key -> Setting.boolSetting(key, true, Property.Dynamic, Property.NodeScope));
 
     /**
      * Every {@code Exporter} allows users to explicitly disable specific cluster alerts.
      *

@@ -61,14 +61,14 @@ public abstract class Exporter implements AutoCloseable {
      */
     public static final Setting.AffixSetting<List<String>> CLUSTER_ALERTS_BLACKLIST_SETTING = Setting
             .affixKeySetting("xpack.monitoring.exporters.", "cluster_alerts.management.blacklist",
-                (key) -> Setting.listSetting(key, Collections.emptyList(), Function.identity(), Property.Dynamic, Property.NodeScope));
+                key -> Setting.listSetting(key, Collections.emptyList(), Function.identity(), Property.Dynamic, Property.NodeScope));
 
     /**
      * Every {@code Exporter} allows users to use a different index time format.
      */
     private static final Setting.AffixSetting<String> INDEX_NAME_TIME_FORMAT_SETTING =
             Setting.affixKeySetting("xpack.monitoring.exporters.","index.name.time_format",
-                    (key) -> Setting.simpleString(key, Property.Dynamic, Property.NodeScope));
+                    key -> Setting.simpleString(key, Property.Dynamic, Property.NodeScope));
 
     private static final String INDEX_FORMAT = "YYYY.MM.dd";
diff --git a/x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/common/http/HttpSettings.java b/x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/common/http/HttpSettings.java
index f4f97df1d4fd8..af4a20d596cd0 100644
--- a/x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/common/http/HttpSettings.java
+++ b/x-pack/plugin/watcher/src/main/java/org/elasticsearch/xpack/watcher/common/http/HttpSettings.java
@@ -34,7 +34,7 @@ public class HttpSettings {
     private static final String SSL_KEY_PREFIX = "xpack.http.ssl.";
 
     static final Setting<String> PROXY_HOST = Setting.simpleString(PROXY_HOST_KEY, Property.NodeScope);
-    static final Setting<String> PROXY_SCHEME = Setting.simpleString(PROXY_SCHEME_KEY, (v, s) -> Scheme.parse(v), Property.NodeScope);
+    static final Setting<String> PROXY_SCHEME = Setting.simpleString(PROXY_SCHEME_KEY, Scheme::parse, Property.NodeScope);
     static final Setting<Integer> PROXY_PORT = Setting.intSetting(PROXY_PORT_KEY, 0, 0, 0xFFFF, Property.NodeScope);
     static final Setting<ByteSizeValue> MAX_HTTP_RESPONSE_SIZE =
Setting.byteSizeSetting("xpack.http.max_response_size",
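The API change running through the settings diffs above is the split of `Setting.Validator` into two phases: `validate(T value)` checks a value in isolation, and `validate(T value, Map<Setting<?>, T> settings)` cross-checks it against its declared dependencies. A minimal standalone sketch of that pattern, with no Elasticsearch dependencies and illustrative names (`Validator`, `LOW`, `foo.high` are not the real API):

```java
import java.util.Map;

public class TwoPhaseValidatorSketch {

    // Illustrative analog of the two validate methods added to Setting.Validator.
    interface Validator<T> {
        // Phase 1: validate the value in isolation.
        void validate(T value);

        // Phase 2: validate the value against dependent settings.
        void validate(T value, Map<String, T> dependencies);
    }

    // "low" must not exceed a dependent "high" value, mirroring the foo.low/foo.high tests.
    static final Validator<Integer> LOW = new Validator<Integer>() {
        @Override
        public void validate(Integer value) {
            if (value < 0) {
                throw new IllegalArgumentException("[low] must be non-negative");
            }
        }

        @Override
        public void validate(Integer low, Map<String, Integer> dependencies) {
            Integer high = dependencies.get("foo.high");
            if (high != null && low > high) {
                throw new IllegalArgumentException("[low]=" + low + " is higher than [high]=" + high);
            }
        }
    };

    public static void main(String[] args) {
        LOW.validate(5);                         // isolation check passes
        LOW.validate(5, Map.of("foo.high", 8));  // dependency check passes
        try {
            LOW.validate(9, Map.of("foo.high", 8));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());  // "[low]=9 is higher than [high]=8"
        }
    }
}
```

The value of the split, as the `invalidInIsolationSetting`/`invalidWithDependenciesSetting` tests exercise it, is that per-value errors surface even when dependent settings are absent, while cross-setting errors are only raised once all dependencies are available.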