diff --git a/docs/reference/api-conventions.asciidoc b/docs/reference/api-conventions.asciidoc index 82a6ef55c2638..26839eebe0581 100644 --- a/docs/reference/api-conventions.asciidoc +++ b/docs/reference/api-conventions.asciidoc @@ -42,6 +42,14 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] The defaults settings for the above parameters depend on the API being used. +Some indices (hereafter "system indices") are used by various system +modules and/or plugins to store state or configuration. These indices +are not intended to be accessed directly, and accessing them directly is +deprecated. In the next major version, access to these indices will no longer be +allowed to prevent accidental operations that may cause problems with +Elasticsearch features which depend on the consistency of data in these +indices. + Some multi-target APIs that can target indices also support the following query string parameter: diff --git a/docs/reference/docs/delete-by-query.asciidoc b/docs/reference/docs/delete-by-query.asciidoc index a65e75b1c37e7..cd5f0f6beaac0 100644 --- a/docs/reference/docs/delete-by-query.asciidoc +++ b/docs/reference/docs/delete-by-query.asciidoc @@ -53,13 +53,13 @@ POST /my-index-000001/_delete_by_query ==== {api-description-title} You can specify the query criteria in the request URI or the request body -using the same syntax as the <>. +using the same syntax as the <>. When you submit a delete by query request, {es} gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using `internal` versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version -conflict and the delete operation fails. +conflict and the delete operation fails. NOTE: Documents with a version equal to 0 cannot be deleted using delete by query because `internal` versioning does not support 0 as a valid @@ -70,18 +70,18 @@ requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts -and all failed requests are returned in the response. Any delete requests that -completed successfully still stick, they are not rolled back. +and all failed requests are returned in the response. Any delete requests that +completed successfully still stick, they are not rolled back. -You can opt to count version conflicts instead of halting and returning by -setting `conflicts` to `proceed`. +You can opt to count version conflicts instead of halting and returning by +setting `conflicts` to `proceed`. ===== Refreshing shards Specifying the `refresh` parameter refreshes all shards involved in the delete -by query once the request completes. This is different than the delete API's -`refresh` parameter, which causes just the shard that received the delete -request to be refreshed. Unlike the delete API, it does not support +by query once the request completes. This is different than the delete API's +`refresh` parameter, which causes just the shard that received the delete +request to be refreshed. Unlike the delete API, it does not support `wait_for`. [[docs-delete-by-query-task-api]] @@ -90,7 +90,7 @@ request to be refreshed. 
Unlike the delete API, it does not support If the request contains `wait_for_completion=false`, {es} performs some preflight checks, launches the request, and returns a <> you can use to cancel or get the status of the task. {es} creates a -record of this task as a document at `.tasks/task/${taskId}`. When you are +record of this task as a document at `.tasks/task/${taskId}`. When you are done with a task, you should delete the task document so {es} can reclaim the space. @@ -101,20 +101,20 @@ before proceeding with the request. See <> for details. `timeout` controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the <>. Delete by query uses scrolled searches, so you can also -specify the `scroll` parameter to control how long it keeps the search context +specify the `scroll` parameter to control how long it keeps the search context alive, for example `?scroll=10m`. The default is 5 minutes. ===== Throttling delete requests To control the rate at which delete by query issues batches of delete operations, you can set `requests_per_second` to any positive decimal number. This pads each -batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` +batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to disable throttling. -Throttling uses a wait time between batches so that the internal scroll requests -can be given a timeout that takes the request padding into account. The padding -time is the difference between the batch size divided by the -`requests_per_second` and the time spent writing. By default the batch size is +Throttling uses a wait time between batches so that the internal scroll requests +can be given a timeout that takes the request padding into account. The padding +time is the difference between the batch size divided by the +`requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`: [source,txt] @@ -123,9 +123,9 @@ target_time = 1000 / 500 per second = 2 seconds wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds -------------------------------------------------- -Since the batch is issued as a single `_bulk` request, large batch sizes -cause {es} to create many requests and wait before starting the next set. -This is "bursty" instead of "smooth". +Since the batch is issued as a single `_bulk` request, large batch sizes +cause {es} to create many requests and wait before starting the next set. +This is "bursty" instead of "smooth". [[docs-delete-by-query-slice]] ===== Slicing @@ -134,11 +134,11 @@ Delete by query supports <> to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts. -Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. -If you're slicing manually or otherwise tuning automatic slicing, keep in mind +Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. +If you're slicing manually or otherwise tuning automatic slicing, keep in mind that: -* Query performance is most efficient when the number of `slices` is equal to +* Query performance is most efficient when the number of `slices` is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many `slices` hurts performance. 
Setting `slices` higher than the number of shards generally does not improve efficiency @@ -171,15 +171,15 @@ Defaults to `true`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - + `conflicts`:: - (Optional, string) What to do if delete by query hits version conflicts: + (Optional, string) What to do if delete by query hits version conflicts: `abort` or `proceed`. Defaults to `abort`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] + Defaults to `open`. @@ -187,9 +187,9 @@ Defaults to `open`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] @@ -214,9 +214,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll_size] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_timeout] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=sort] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] @@ -226,7 +226,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stats] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeout] @@ -239,9 +239,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards ==== {api-request-body-title} `query`:: - (Optional, <>) Specifies the documents to delete + (Optional, <>) Specifies the documents to delete using the <>. - + [[docs-delete-by-query-api-response-body]] ==== Response body @@ -345,7 +345,7 @@ this is non-empty then the request aborted because of those failures. Delete by query is implemented using batches, and any failure causes the entire process to abort but all failures in the current batch are collected into the array. You can use the `conflicts` option to prevent reindex from aborting on -version conflicts. +version conflicts. [[docs-delete-by-query-api-example]] ==== {api-examples-title} @@ -377,7 +377,7 @@ POST /my-index-000001,my-index-000002/_delete_by_query // TEST[s/^/PUT my-index-000001\nPUT my-index-000002\n/] Limit the delete by query operation to shards that a particular routing -value: +value: [source,console] -------------------------------------------------- @@ -571,7 +571,7 @@ though these are all taken at approximately the same time. The value of `requests_per_second` can be changed on a running delete by query using the `_rethrottle` API. Rethrottling that speeds up the -query takes effect immediately but rethrotting that slows down the query +query takes effect immediately but rethrotting that slows down the query takes effect after completing the current batch to prevent scroll timeouts. @@ -670,6 +670,6 @@ POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel The task ID can be found using the <>. 
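For reference, the task ID used above can be located with a tasks listing such as the following (a minimal sketch; the `actions` filter assumes the standard delete by query action name, and `detailed=true` adds per-task status to the listing):

[source,console]
--------------------------------------------------
GET _tasks?detailed=true&actions=*/delete/byquery
--------------------------------------------------

Each task in the response is keyed by a `node_id:task_id` pair, which is the identifier passed to the `_cancel` request shown above.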
-Cancellation should happen quickly but might take a few seconds. The task status -API above will continue to list the delete by query task until this task checks that it +Cancellation should happen quickly but might take a few seconds. The task status +API above will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself. diff --git a/docs/reference/docs/update-by-query.asciidoc b/docs/reference/docs/update-by-query.asciidoc index 1cdfc24a408f8..69c9ded788174 100644 --- a/docs/reference/docs/update-by-query.asciidoc +++ b/docs/reference/docs/update-by-query.asciidoc @@ -4,7 +4,7 @@ Update by query ++++ -Updates documents that match the specified query. +Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes. @@ -50,33 +50,33 @@ POST my-index-000001/_update_by_query?conflicts=proceed ==== {api-description-title} You can specify the query criteria in the request URI or the request body -using the same syntax as the <>. +using the same syntax as the <>. When you submit an update by query request, {es} gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using -`internal` versioning. -When the versions match, the document is updated and the version number is incremented. -If a document changes between the time that the snapshot is taken and -the update operation is processed, it results in a version conflict and the operation fails. -You can opt to count version conflicts instead of halting and returning by -setting `conflicts` to `proceed`. +`internal` versioning. +When the versions match, the document is updated and the version number is incremented. +If a document changes between the time that the snapshot is taken and +the update operation is processed, it results in a version conflict and the operation fails. +You can opt to count version conflicts instead of halting and returning by +setting `conflicts` to `proceed`. NOTE: Documents with a version equal to 0 cannot be updated using update by query because `internal` versioning does not support 0 as a valid version number. While processing an update by query request, {es} performs multiple search -requests sequentially to find all of the matching documents. -A bulk update request is performed for each batch of matching documents. -Any query or update failures cause the update by query request to fail and +requests sequentially to find all of the matching documents. +A bulk update request is performed for each batch of matching documents. +Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick, they are not rolled back. ===== Refreshing shards -Specifying the `refresh` parameter refreshes all shards once the request completes. +Specifying the `refresh` parameter refreshes all shards once the request completes. This is different than the update API's `refresh` parameter, which causes just the shard -that received the request to be refreshed. Unlike the update API, it does not support +that received the request to be refreshed. Unlike the update API, it does not support `wait_for`. [[docs-update-by-query-task-api]] @@ -84,9 +84,9 @@ that received the request to be refreshed. 
Unlike the update API, it does not su If the request contains `wait_for_completion=false`, {es} performs some preflight checks, launches the request, and returns a -<> you can use to cancel or get the status of the task. -{es} creates a record of this task as a document at `.tasks/task/${taskId}`. -When you are done with a task, you should delete the task document so +<> you can use to cancel or get the status of the task. +{es} creates a record of this task as a document at `.tasks/task/${taskId}`. +When you are done with a task, you should delete the task document so {es} can reclaim the space. ===== Waiting for active shards @@ -96,20 +96,20 @@ before proceeding with the request. See <> for details. `timeout` controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the <>. Update by query uses scrolled searches, so you can also -specify the `scroll` parameter to control how long it keeps the search context +specify the `scroll` parameter to control how long it keeps the search context alive, for example `?scroll=10m`. The default is 5 minutes. ===== Throttling update requests To control the rate at which update by query issues batches of update operations, you can set `requests_per_second` to any positive decimal number. This pads each -batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` +batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to disable throttling. -Throttling uses a wait time between batches so that the internal scroll requests -can be given a timeout that takes the request padding into account. The padding -time is the difference between the batch size divided by the -`requests_per_second` and the time spent writing. By default the batch size is +Throttling uses a wait time between batches so that the internal scroll requests +can be given a timeout that takes the request padding into account. The padding +time is the difference between the batch size divided by the +`requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`: [source,txt] @@ -118,9 +118,9 @@ target_time = 1000 / 500 per second = 2 seconds wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds -------------------------------------------------- -Since the batch is issued as a single `_bulk` request, large batch sizes -cause {es} to create many requests and wait before starting the next set. -This is "bursty" instead of "smooth". +Since the batch is issued as a single `_bulk` request, large batch sizes +cause {es} to create many requests and wait before starting the next set. +This is "bursty" instead of "smooth". [[docs-update-by-query-slice]] ===== Slicing @@ -129,11 +129,11 @@ Update by query supports <> to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts. -Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. -If you're slicing manually or otherwise tuning automatic slicing, keep in mind +Setting `slices` to `auto` chooses a reasonable number for most data streams and indices. +If you're slicing manually or otherwise tuning automatic slicing, keep in mind that: -* Query performance is most efficient when the number of `slices` is equal to +* Query performance is most efficient when the number of `slices` is equal to the number of shards in the index or backing index. 
If that number is large (for example, 500), choose a lower number as too many `slices` hurts performance. Setting `slices` higher than the number of shards generally does not improve efficiency @@ -166,15 +166,15 @@ Defaults to `true`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyzer] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=analyze_wildcard] - + `conflicts`:: - (Optional, string) What to do if update by query hits version conflicts: + (Optional, string) What to do if update by query hits version conflicts: `abort` or `proceed`. Defaults to `abort`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=default_operator] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=df] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] + Defaults to `open`. @@ -182,9 +182,9 @@ Defaults to `open`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=from] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=max_docs] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=pipeline] @@ -211,9 +211,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll_size] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_timeout] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=slices] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=sort] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source] @@ -223,7 +223,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_excludes] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=source_includes] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=stats] - + include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=terminate_after] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=timeout] @@ -236,9 +236,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=wait_for_active_shards ==== {api-request-body-title} `query`:: - (Optional, <>) Specifies the documents to update + (Optional, <>) Specifies the documents to update using the <>. - + [[docs-update-by-query-api-response-body]] ==== Response body @@ -336,7 +336,7 @@ POST my-index-000001/_update_by_query?routing=1 -------------------------------------------------- // TEST[setup:my_index] -By default update by query uses scroll batches of 1000. +By default update by query uses scroll batches of 1000. You can change the batch size with the `scroll_size` parameter: [source,console] @@ -348,7 +348,7 @@ POST my-index-000001/_update_by_query?scroll_size=100 [[docs-update-by-query-api-source]] ===== Update the document source -Update by query supports scripts to update the document source. +Update by query supports scripts to update the document source. For example, the following request increments the `count` field for all documents with a `user.id` of `kimchy` in `my-index-000001`: @@ -390,16 +390,16 @@ operation that is performed: [horizontal] `noop`:: -Set `ctx.op = "noop"` if your script decides that it doesn't have to make any changes. +Set `ctx.op = "noop"` if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the `noop` counter. 
`delete`:: -Set `ctx.op = "delete"` if your script decides that the document should be deleted. +Set `ctx.op = "delete"` if your script decides that the document should be deleted. The update by query operation deletes the document and increments the `deleted` counter. Update by query only supports `update`, `noop`, and `delete`. Setting `ctx.op` to anything else is an error. Setting any other field in `ctx` is an error. -This API only enables you to modify the source of matching documents, you cannot move them. +This API only enables you to modify the source of matching documents, you cannot move them. [[docs-update-by-query-api-ingest-pipeline]] ===== Update documents using an ingest pipeline @@ -485,7 +485,7 @@ of operations that the reindex expects to perform. You can estimate the progress by adding the `updated`, `created`, and `deleted` fields. The request will finish when their sum is equal to the `total` field. -With the task id you can look up the task directly. The following example +With the task id you can look up the task directly. The following example retrieves information about task `r1A2WoRbTwKZ516z6NEs5A:36619`: [source,console] @@ -515,8 +515,8 @@ POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel The task ID can be found using the <>. -Cancellation should happen quickly but might take a few seconds. The task status -API above will continue to list the update by query task until this task checks +Cancellation should happen quickly but might take a few seconds. The task status +API above will continue to list the update by query task until this task checks that it has been cancelled and terminates itself. diff --git a/docs/reference/indices/refresh.asciidoc b/docs/reference/indices/refresh.asciidoc index f8259b9d5ba32..98fbd5268aef1 100644 --- a/docs/reference/indices/refresh.asciidoc +++ b/docs/reference/indices/refresh.asciidoc @@ -49,7 +49,7 @@ refresh operation completes. ==== Refreshes are resource-intensive. To ensure good cluster performance, -we recommend waiting for {es}'s periodic refresh +we recommend waiting for {es}'s periodic refresh rather than performing an explicit refresh when possible. diff --git a/docs/reference/indices/segments.asciidoc b/docs/reference/indices/segments.asciidoc index 3aba8683cce03..5fc0a9799a8b4 100644 --- a/docs/reference/indices/segments.asciidoc +++ b/docs/reference/indices/segments.asciidoc @@ -60,7 +60,7 @@ Defaults to `false`. ==== {api-response-body-title} ``:: -(String) +(String) include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment] `generation`:: @@ -83,7 +83,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=segment-size] (Integer) include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=memory] -`committed`:: +`committed`:: (Boolean) include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=committed] diff --git a/docs/reference/indices/stats.asciidoc b/docs/reference/indices/stats.asciidoc index 0b89de23c9830..6352da9cc34e4 100644 --- a/docs/reference/indices/stats.asciidoc +++ b/docs/reference/indices/stats.asciidoc @@ -33,7 +33,7 @@ more data streams and indices. By default, the returned statistics are index-level with `primaries` and `total` aggregations. -`primaries` are the values for only the primary shards. +`primaries` are the values for only the primary shards. `total` are the accumulated values for both primary and replica shards. 
To get shard-level statistics, diff --git a/docs/reference/indices/update-settings.asciidoc b/docs/reference/indices/update-settings.asciidoc index ea42112980322..d0b4fef89587e 100644 --- a/docs/reference/indices/update-settings.asciidoc +++ b/docs/reference/indices/update-settings.asciidoc @@ -147,7 +147,7 @@ and reopen the index. [NOTE] ==== -You cannot close the write index of a data stream. +You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the <> -IMPORTANT: {transforms-cap} support a subset of the functionality in +IMPORTANT: {transforms-cap} support a subset of the functionality in aggregations. See <>. -- @@ -681,7 +681,7 @@ Defines how to group the data. More than one grouping can be defined + -- * <<_date_histogram,Date histogram>> -* <<_geotile_grid,Geotile Grid>> +* <<_geotile_grid,Geotile Grid>> * <<_histogram,Histogram>> * <<_terms,Terms>> diff --git a/docs/reference/search/count.asciidoc b/docs/reference/search/count.asciidoc index 5bd58cbf9b94c..07bb447c42522 100644 --- a/docs/reference/search/count.asciidoc +++ b/docs/reference/search/count.asciidoc @@ -22,16 +22,16 @@ the <> works. [[search-count-api-desc]] ==== {api-description-title} -The count API allows you to execute a query and get the number of matches for -that query. The query can either -be provided using a simple query string as a parameter, or using the +The count API allows you to execute a query and get the number of matches for +that query. The query can either +be provided using a simple query string as a parameter, or using the <> defined within the request body. The count API supports <>. You can run a single count API search across multiple data streams and indices. -The operation is broadcast across all shards. For each shard id group, a replica -is chosen and executed against it. This means that replicas increase the +The operation is broadcast across all shards. For each shard id group, a replica +is chosen and executed against it. This means that replicas increase the scalability of count. @@ -74,7 +74,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=lenient] `min_score`:: (Optional, float) - Sets the minimum `_score` value that documents must have to be included in the + Sets the minimum `_score` value that documents must have to be included in the result. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] diff --git a/docs/reference/search/rank-eval.asciidoc b/docs/reference/search/rank-eval.asciidoc index 1970c2c568a73..2dfd6db9b5cbb 100644 --- a/docs/reference/search/rank-eval.asciidoc +++ b/docs/reference/search/rank-eval.asciidoc @@ -4,7 +4,7 @@ Ranking evaluation ++++ -Allows you to evaluate the quality of ranked search results over a set of +Allows you to evaluate the quality of ranked search results over a set of typical search queries. [[search-rank-eval-api-request]] @@ -18,46 +18,46 @@ typical search queries. [[search-rank-eval-api-desc]] ==== {api-description-title} -The ranking evaluation API allows you to evaluate the quality of ranked search +The ranking evaluation API allows you to evaluate the quality of ranked search results over a set of typical search queries. Given this set of queries and a list of manually rated documents, the `_rank_eval` endpoint calculates and returns typical information retrieval metrics like _mean reciprocal rank_, _precision_ or _discounted cumulative gain_. 
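As a quick orientation before the detailed walkthrough below, a minimal `_rank_eval` request could look like the following (a sketch only; it reuses the index, query, and document IDs from the later examples on this page and evaluates them with the `precision` metric described further down):

[source,console]
--------------------------------------------------
GET /my-index-000001/_rank_eval
{
  "requests": [
    {
      "id": "amsterdam_query",
      "request": { "query": { "match": { "text": "amsterdam" } } },
      "ratings": [
        { "_index": "my-index-000001", "_id": "doc1", "rating": 0 },
        { "_index": "my-index-000001", "_id": "doc2", "rating": 3 }
      ]
    }
  ],
  "metric": {
    "precision": {
      "k": 10,
      "relevant_rating_threshold": 1
    }
  }
}
--------------------------------------------------

The response reports the metric value for each query under `details`, along with an overall value calculated across all queries.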
-Search quality evaluation starts with looking at the users of your search -application, and the things that they are searching for. Users have a specific -_information need_; for example, they are looking for gift in a web shop or want -to book a flight for their next holiday. They usually enter some search terms -into a search box or some other web form. All of this information, together with -meta information about the user (for example the browser, location, earlier -preferences and so on) then gets translated into a query to the underlying +Search quality evaluation starts with looking at the users of your search +application, and the things that they are searching for. Users have a specific +_information need_; for example, they are looking for gift in a web shop or want +to book a flight for their next holiday. They usually enter some search terms +into a search box or some other web form. All of this information, together with +meta information about the user (for example the browser, location, earlier +preferences and so on) then gets translated into a query to the underlying search system. -The challenge for search engineers is to tweak this translation process from -user entries to a concrete query, in such a way that the search results contain -the most relevant information with respect to the user's information need. This -can only be done if the search result quality is evaluated constantly across a -representative test suite of typical user queries, so that improvements in the -rankings for one particular query don't negatively affect the ranking for +The challenge for search engineers is to tweak this translation process from +user entries to a concrete query, in such a way that the search results contain +the most relevant information with respect to the user's information need. This +can only be done if the search result quality is evaluated constantly across a +representative test suite of typical user queries, so that improvements in the +rankings for one particular query don't negatively affect the ranking for other types of queries. In order to get started with search quality evaluation, you need three basic things: -. A collection of documents you want to evaluate your query performance against, +. A collection of documents you want to evaluate your query performance against, usually one or more data streams or indices. . A collection of typical search requests that users enter into your system. . A set of document ratings that represent the documents' relevance with respect to a search request. - -It is important to note that one set of document ratings is needed per test -query, and that the relevance judgements are based on the information need of + +It is important to note that one set of document ratings is needed per test +query, and that the relevance judgements are based on the information need of the user that entered the query. -The ranking evaluation API provides a convenient way to use this information in -a ranking evaluation request to calculate different search evaluation metrics. -This gives you a first estimation of your overall search quality, as well as a -measurement to optimize against when fine-tuning various aspect of the query +The ranking evaluation API provides a convenient way to use this information in +a ranking evaluation request to calculate different search evaluation metrics. 
+This gives you a first estimation of your overall search quality, as well as a +measurement to optimize against when fine-tuning various aspect of the query generation in your application. @@ -97,7 +97,7 @@ In its most basic form, a request to the `_rank_eval` endpoint has two sections: ----------------------------- GET /my-index-000001/_rank_eval { - "requests": [ ... ], <1> + "requests": [ ... ], <1> "metric": { <2> "mean_reciprocal_rank": { ... } <3> } @@ -109,7 +109,7 @@ GET /my-index-000001/_rank_eval <2> definition of the evaluation metric to calculate <3> a specific metric and its parameters -The request section contains several search requests typical to your +The request section contains several search requests typical to your application, along with the document ratings for each particular search request. [source,js] @@ -122,7 +122,7 @@ GET /my-index-000001/_rank_eval "request": { <2> "query": { "match": { "text": "amsterdam" } } }, - "ratings": [ <3> + "ratings": [ <3> { "_index": "my-index-000001", "_id": "doc1", "rating": 0 }, { "_index": "my-index-000001", "_id": "doc2", "rating": 3 }, { "_index": "my-index-000001", "_id": "doc3", "rating": 1 } @@ -150,38 +150,38 @@ GET /my-index-000001/_rank_eval - `_id`: The document ID. - `rating`: The document's relevance with regard to this search request. -A document `rating` can be any integer value that expresses the relevance of the -document on a user-defined scale. For some of the metrics, just giving a binary -rating (for example `0` for irrelevant and `1` for relevant) will be sufficient, +A document `rating` can be any integer value that expresses the relevance of the +document on a user-defined scale. For some of the metrics, just giving a binary +rating (for example `0` for irrelevant and `1` for relevant) will be sufficient, while other metrics can use a more fine-grained scale. ===== Template-based ranking evaluation -As an alternative to having to provide a single query per test request, it is -possible to specify query templates in the evaluation request and later refer to -them. This way, queries with a similar structure that differ only in their -parameters don't have to be repeated all the time in the `requests` section. -In typical search systems, where user inputs usually get filled into a small +As an alternative to having to provide a single query per test request, it is +possible to specify query templates in the evaluation request and later refer to +them. This way, queries with a similar structure that differ only in their +parameters don't have to be repeated all the time in the `requests` section. +In typical search systems, where user inputs usually get filled into a small set of query templates, this helps make the evaluation request more succinct. [source,js] -------------------------------- GET /my-index-000001/_rank_eval -{ +{ [...] 
"templates": [ { "id": "match_one_field_query", <1> "template": { <2> - "inline": { - "query": { + "inline": { + "query": { "match": { "{{field}}": { "query": "{{query_string}}" }} } } } } - ], + ], "requests": [ { "id": "amsterdam_query", @@ -197,7 +197,7 @@ GET /my-index-000001/_rank_eval -------------------------------- // NOTCONSOLE -<1> the template id +<1> the template id <2> the template definition to use <3> a reference to a previously defined template <4> the parameters to use to fill the template @@ -205,7 +205,7 @@ GET /my-index-000001/_rank_eval ===== Available evaluation metrics -The `metric` section determines which of the available evaluation metrics +The `metric` section determines which of the available evaluation metrics will be used. The following metrics are supported: [discrete] @@ -254,8 +254,8 @@ The `precision` metric takes the following optional parameters [cols="<,<",options="header",] |======================================================================= |Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter -in the query. Defaults to 10. +|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter +in the query. Defaults to 10. |`relevant_rating_threshold` |sets the rating threshold above which documents are considered to be "relevant". Defaults to `1`. |`ignore_unlabeled` |controls how unlabeled documents in the search results are counted. @@ -318,10 +318,10 @@ in the query. Defaults to 10. [discrete] ===== Mean reciprocal rank -For every query in the test suite, this metric calculates the reciprocal of the -rank of the first relevant document. For example, finding the first relevant -result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for -each query is averaged across all queries in the test suite to give the +For every query in the test suite, this metric calculates the reciprocal of the +rank of the first relevant document. For example, finding the first relevant +result in position 3 means the reciprocal rank is 1/3. The reciprocal rank for +each query is averaged across all queries in the test suite to give the {wikipedia}/Mean_reciprocal_rank[mean reciprocal rank]. [source,console] @@ -349,7 +349,7 @@ The `mean_reciprocal_rank` metric takes the following optional parameters [cols="<,<",options="header",] |======================================================================= |Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter +|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter in the query. Defaults to 10. |`relevant_rating_threshold` |Sets the rating threshold above which documents are considered to be "relevant". Defaults to `1`. @@ -359,13 +359,13 @@ in the query. Defaults to 10. [discrete] ===== Discounted cumulative gain (DCG) -In contrast to the two metrics above, -{wikipedia}/Discounted_cumulative_gain[discounted cumulative gain] +In contrast to the two metrics above, +{wikipedia}/Discounted_cumulative_gain[discounted cumulative gain] takes both the rank and the rating of the search results into account. -The assumption is that highly relevant documents are more useful for the user -when appearing at the top of the result list. 
Therefore, the DCG formula reduces -the contribution that high ratings for documents on lower search ranks have on +The assumption is that highly relevant documents are more useful for the user +when appearing at the top of the result list. Therefore, the DCG formula reduces +the contribution that high ratings for documents on lower search ranks have on the overall DCG metric. [source,console] @@ -393,7 +393,7 @@ The `dcg` metric takes the following optional parameters: [cols="<,<",options="header",] |======================================================================= |Parameter |Description -|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter +|`k` |sets the maximum number of documents retrieved per query. This value will act in place of the usual `size` parameter in the query. Defaults to 10. |`normalize` | If set to `true`, this metric will calculate the {wikipedia}/Discounted_cumulative_gain#Normalized_DCG[Normalized DCG]. |======================================================================= @@ -402,26 +402,26 @@ in the query. Defaults to 10. [discrete] ===== Expected Reciprocal Rank (ERR) -Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank -for the graded relevance case (Olivier Chapelle, Donald Metzler, Ya Zhang, and -Pierre Grinspan. 2009. +Expected Reciprocal Rank (ERR) is an extension of the classical reciprocal rank +for the graded relevance case (Olivier Chapelle, Donald Metzler, Ya Zhang, and +Pierre Grinspan. 2009. https://olivier.chapelle.cc/pub/err.pdf[Expected reciprocal rank for graded relevance].) -It is based on the assumption of a cascade model of search, in which a user -scans through ranked search results in order and stops at the first document -that satisfies the information need. For this reason, it is a good metric for -question answering and navigation queries, but less so for survey-oriented -information needs where the user is interested in finding many relevant +It is based on the assumption of a cascade model of search, in which a user +scans through ranked search results in order and stops at the first document +that satisfies the information need. For this reason, it is a good metric for +question answering and navigation queries, but less so for survey-oriented +information needs where the user is interested in finding many relevant documents in the top k results. -The metric models the expectation of the reciprocal of the position at which a +The metric models the expectation of the reciprocal of the position at which a user stops reading through the result list. This means that a relevant document -in a top ranking position will have a large contribution to the overall score. -However, the same document will contribute much less to the score if it appears -in a lower rank; even more so if there are some relevant (but maybe less relevant) -documents preceding it. In this way, the ERR metric discounts documents that -are shown after very relevant documents. This introduces a notion of dependency -in the ordering of relevant documents that e.g. Precision or DCG don't account +in a top ranking position will have a large contribution to the overall score. +However, the same document will contribute much less to the score if it appears +in a lower rank; even more so if there are some relevant (but maybe less relevant) +documents preceding it. In this way, the ERR metric discounts documents that +are shown after very relevant documents. 
This introduces a notion of dependency +in the ordering of relevant documents that e.g. Precision or DCG don't account for. [source,console] @@ -458,9 +458,9 @@ in the query. Defaults to 10. ===== Response format -The response of the `_rank_eval` endpoint contains the overall calculated result -for the defined quality metric, a `details` section with a breakdown of results -for each query in the test suite and an optional `failures` section that shows +The response of the `_rank_eval` endpoint contains the overall calculated result +for the defined quality metric, a `details` section with a breakdown of results +for each query in the test suite and an optional `failures` section that shows potential errors of individual queries. The response has the following format: [source,js] diff --git a/docs/reference/search/search-shards.asciidoc b/docs/reference/search/search-shards.asciidoc index 5fee6f24fc1df..f44e129c0def6 100644 --- a/docs/reference/search/search-shards.asciidoc +++ b/docs/reference/search/search-shards.asciidoc @@ -186,5 +186,5 @@ The API returns the following result: // TESTRESPONSE[s/0TvkCyF7TAmM1wHP4a42-A/$body.shards.1.0.allocation_id.id/] // TESTRESPONSE[s/fMju3hd1QHWmWrIgFnI4Ww/$body.shards.0.0.allocation_id.id/] -Because of the specified routing values, +Because of the specified routing values, the search is only executed against two of the shards. diff --git a/docs/reference/search/search-template.asciidoc b/docs/reference/search/search-template.asciidoc index 87948e4665f95..19be9ce73fcd3 100644 --- a/docs/reference/search/search-template.asciidoc +++ b/docs/reference/search/search-template.asciidoc @@ -29,17 +29,17 @@ GET _search/template [[search-template-api-desc]] ==== {api-description-title} -The `/_search/template` endpoint allows you to use the mustache language to pre- -render search requests, before they are executed and fill existing templates +The `/_search/template` endpoint allows you to use the mustache language to pre- +render search requests, before they are executed and fill existing templates with template parameters. For more information on how Mustache templating and what kind of templating you can do with it check out the https://mustache.github.io/mustache.5.html[online documentation of the mustache project]. -NOTE: The mustache language is implemented in {es} as a sandboxed scripting -language, hence it obeys settings that may be used to enable or disable scripts -per type and context as described in the +NOTE: The mustache language is implemented in {es} as a sandboxed scripting +language, hence it obeys settings that may be used to enable or disable scripts +per type and context as described in the <>. @@ -57,17 +57,17 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=allow-no-indices] Defaults to `true`. `ccs_minimize_roundtrips`:: - (Optional, boolean) If `true`, network round-trips are minimized for + (Optional, boolean) If `true`, network round-trips are minimized for cross-cluster search requests. Defaults to `true`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=expand-wildcards] `explain`:: - (Optional, boolean) If `true`, the response includes additional details about + (Optional, boolean) If `true`, the response includes additional details about score computation as part of a hit. Defaults to `false`. 
`ignore_throttled`:: - (Optional, boolean) If `true`, specified concrete, expanded or aliased indices + (Optional, boolean) If `true`, specified concrete, expanded or aliased indices are not included in the response when throttled. Defaults to `true`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailable] @@ -75,11 +75,11 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=index-ignore-unavailab include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=preference] `profile`:: - (Optional, boolean) If `true`, the query execution is profiled. Defaults + (Optional, boolean) If `true`, the query execution is profiled. Defaults to `false`. `rest_total_hits_as_int`:: - (Optional, boolean) If `true`, `hits.total` are rendered as an integer in + (Optional, boolean) If `true`, `hits.total` are rendered as an integer in the response. Defaults to `false`. include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=routing] @@ -89,9 +89,9 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=scroll] include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=search_type] `typed_keys`:: - (Optional, boolean) If `true`, aggregation and suggester names are + (Optional, boolean) If `true`, aggregation and suggester names are prefixed by their respective types in the response. Defaults to `false`. - + [[search-template-api-request-body]] ==== {api-request-body-title} @@ -128,7 +128,7 @@ POST _scripts/ ////////////////////////// -The API returns the following result if the template has been successfully +The API returns the following result if the template has been successfully created: [source,console-result] @@ -198,7 +198,7 @@ GET _search/template [[_validating_templates]] ==== Validating a search template -A template can be rendered in a response with given parameters by using the +A template can be rendered in a response with given parameters by using the following request: [source,console] @@ -603,7 +603,7 @@ query as a string instead: ===== Encoding URLs The `{{#url}}value{{/url}}` function can be used to encode a string value -in a HTML encoding form as defined in by the +in a HTML encoding form as defined in by the https://www.w3.org/TR/html4/[HTML specification]. As an example, it is useful to encode a URL: @@ -657,7 +657,7 @@ Allows to execute several search template requests. [[multi-search-template-api-desc]] ==== {api-description-title} -Allows to execute several search template requests within the same API using the +Allows to execute several search template requests within the same API using the `_msearch/template` endpoint. The format of the request is similar to the <> works -If the query is invalid, `valid` will be `false`. Here the query is invalid -because {es} knows the `post_date` field should be a date due to dynamic +If the query is invalid, `valid` will be `false`. Here the query is invalid +because {es} knows the `post_date` field should be a date due to dynamic mapping, and 'foo' does not correctly parse into a date: [source,console] @@ -154,7 +154,7 @@ GET my-index-000001/_validate/query ===== The explain parameter -An `explain` parameter can be specified to get more detailed information about +An `explain` parameter can be specified to get more detailed information about why a query failed: [source,console] @@ -194,8 +194,8 @@ The API returns the following response: ===== The rewrite parameter -When the query is valid, the explanation defaults to the string representation -of that query. 
With `rewrite` set to `true`, the explanation is more detailed +When the query is valid, the explanation defaults to the string representation +of that query. With `rewrite` set to `true`, the explanation is more detailed showing the actual Lucene query that will be executed. [source,console] diff --git a/modules/kibana/src/main/java/org/elasticsearch/kibana/KibanaPlugin.java b/modules/kibana/src/main/java/org/elasticsearch/kibana/KibanaPlugin.java index 2d3b13989ffeb..186da3f827fcb 100644 --- a/modules/kibana/src/main/java/org/elasticsearch/kibana/KibanaPlugin.java +++ b/modules/kibana/src/main/java/org/elasticsearch/kibana/KibanaPlugin.java @@ -133,6 +133,11 @@ public String getName() { return "kibana_" + super.getName(); } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override public List routes() { return super.routes().stream() diff --git a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexSourceTargetValidationTests.java b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexSourceTargetValidationTests.java index a471ce7e8e4fa..3b9e055dd1a83 100644 --- a/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexSourceTargetValidationTests.java +++ b/modules/reindex/src/test/java/org/elasticsearch/index/reindex/ReindexSourceTargetValidationTests.java @@ -36,6 +36,7 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.indices.SystemIndices; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.test.ESTestCase; import java.util.HashMap; @@ -61,7 +62,8 @@ public class ReindexSourceTargetValidationTests extends ESTestCase { .put(index("baz"), true) .put(index("source", "source_multi"), true) .put(index("source2", "source_multi"), true)).build(); - private static final IndexNameExpressionResolver INDEX_NAME_EXPRESSION_RESOLVER = new IndexNameExpressionResolver(); + private static final IndexNameExpressionResolver INDEX_NAME_EXPRESSION_RESOLVER = + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); private static final AutoCreateIndex AUTO_CREATE_INDEX = new AutoCreateIndex(Settings.EMPTY, new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), INDEX_NAME_EXPRESSION_RESOLVER, new SystemIndices(new HashMap<>())); diff --git a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/70_throttle.yml b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/70_throttle.yml index ad400ec718c87..4e877765c7cf2 100644 --- a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/70_throttle.yml +++ b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/70_throttle.yml @@ -74,6 +74,8 @@ --- "Rethrottle to -1 which turns off throttling": + - skip: + features: warnings # Throttling happens between each scroll batch so we need to control the size of the batch by using a single shard # and a small batch size on the request - do: @@ -95,6 +97,7 @@ index: test body: { "text": "test" } - do: + indices.refresh: {} - do: @@ -121,6 +124,8 @@ task_id: $task - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: diff --git a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/80_slices.yml 
b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/80_slices.yml index f4a1a9805632a..884e19c363be9 100644 --- a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/80_slices.yml +++ b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/delete_by_query/80_slices.yml @@ -62,6 +62,8 @@ --- "Multiple slices with wait_for_completion=false": + - skip: + features: warnings - do: index: index: test @@ -151,8 +153,12 @@ # Only the "parent" reindex task wrote its status to the tasks index though - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks @@ -165,6 +171,8 @@ --- "Multiple slices with rethrottle": + - skip: + features: warnings - do: index: index: test @@ -196,7 +204,8 @@ id: 6 body: { "text": "test" } - do: - indices.refresh: {} + indices.refresh: + index: test # Start the task with a requests_per_second that should make it take a very long time - do: @@ -259,8 +268,12 @@ # Only the "parent" reindex task wrote its status to the tasks index though - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks diff --git a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/reindex/80_slices.yml b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/reindex/80_slices.yml index 150ec2a4be45c..52d40aa9469ff 100644 --- a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/reindex/80_slices.yml +++ b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/reindex/80_slices.yml @@ -58,6 +58,8 @@ --- "Multiple slices with wait_for_completion=false": + - skip: + features: warnings - do: index: index: source @@ -160,8 +162,12 @@ # Only the "parent" reindex task wrote its status to the tasks index though - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks @@ -170,6 +176,8 @@ --- "Multiple slices with rethrottle": + - skip: + features: warnings - do: index: index: source @@ -272,8 +280,12 @@ # Only the "parent" reindex task wrote its status to the tasks index though - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks diff --git a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/update_by_query/70_slices.yml 
b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/update_by_query/70_slices.yml index 3e8d82f13d36c..234d2d712efb3 100644 --- a/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/update_by_query/70_slices.yml +++ b/modules/reindex/src/yamlRestTest/resources/rest-api-spec/test/update_by_query/70_slices.yml @@ -54,6 +54,8 @@ --- "Multiple slices with wait_for_completion=false": + - skip: + features: warnings - do: index: index: test @@ -143,8 +145,12 @@ # Only the "parent" reindex task wrote its status to the tasks index though - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks @@ -152,6 +158,8 @@ --- "Multiple slices with rethrottle": + - skip: + features: warnings - do: index: index: test @@ -246,8 +254,12 @@ # Only the "parent" reindex task wrote its status to the tasks index though - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" indices.refresh: {} - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks diff --git a/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java b/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java index 389bf7bd0ef91..26005d6921661 100644 --- a/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java +++ b/qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java @@ -36,6 +36,7 @@ import org.elasticsearch.common.xcontent.support.XContentMapValues; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.test.NotEqualMessageBuilder; +import org.elasticsearch.test.XContentTestUtils; import org.elasticsearch.test.rest.ESRestTestCase; import org.elasticsearch.test.rest.yaml.ObjectPath; import org.junit.Before; @@ -56,6 +57,7 @@ import static java.util.Collections.emptyMap; import static java.util.Collections.singletonList; import static java.util.Collections.singletonMap; +import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ENFORCEMENT_VERSION; import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING; import static org.elasticsearch.cluster.routing.allocation.decider.MaxRetryAllocationDecider.SETTING_ALLOCATION_MAX_RETRY; import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; @@ -63,6 +65,7 @@ import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.greaterThanOrEqualTo; +import static org.hamcrest.Matchers.hasKey; import static org.hamcrest.Matchers.hasSize; import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.notNullValue; @@ -303,7 +306,7 @@ public void testShrink() throws IOException { shrinkIndexRequest.setJsonEntity("{\"settings\": {\"index.number_of_shards\": 1}}"); client().performRequest(shrinkIndexRequest); - client().performRequest(new Request("POST", "/_refresh")); + 
refreshAllIndices(); } else { numDocs = countOfIndexedRandomDocuments(); } @@ -379,7 +382,7 @@ public void testShrinkAfterUpgrade() throws IOException { numDocs = countOfIndexedRandomDocuments(); } - client().performRequest(new Request("POST", "/_refresh")); + refreshAllIndices(); Map response = entityAsMap(client().performRequest(new Request("GET", "/" + index + "/_search"))); assertNoFailures(response); @@ -1386,67 +1389,102 @@ public void testResize() throws Exception { } } - public void testCreateSystemIndexInOldVersion() throws Exception { - assumeTrue("only run on old cluster", isRunningAgainstOldCluster()); - // create index - Request createTestIndex = new Request("PUT", "/test_index_old"); - createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0}}"); - client().performRequest(createTestIndex); - - Request bulk = new Request("POST", "/_bulk"); - bulk.addParameter("refresh", "true"); - bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\"}}\n" + - "{\"f1\": \"v1\", \"f2\": \"v2\"}\n"); - client().performRequest(bulk); - - // start a async reindex job - Request reindex = new Request("POST", "/_reindex"); - reindex.setJsonEntity( - "{\n" + - " \"source\":{\n" + - " \"index\":\"test_index_old\"\n" + - " },\n" + - " \"dest\":{\n" + - " \"index\":\"test_index_reindex\"\n" + - " }\n" + - "}"); - reindex.addParameter("wait_for_completion", "false"); - Map response = entityAsMap(client().performRequest(reindex)); - String taskId = (String) response.get("task"); - - // wait for task - Request getTask = new Request("GET", "/_tasks/" + taskId); - getTask.addParameter("wait_for_completion", "true"); - client().performRequest(getTask); - - // make sure .tasks index exists - assertBusy(() -> { + @SuppressWarnings("unchecked") + public void testSystemIndexMetadataIsUpgraded() throws Exception { + final String systemIndexWarning = "this request accesses system indices: [.tasks], but in a future major version, direct " + + "access to system indices will be prevented by default"; + if (isRunningAgainstOldCluster()) { + // create index + Request createTestIndex = new Request("PUT", "/test_index_old"); + createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0}}"); + client().performRequest(createTestIndex); + + Request bulk = new Request("POST", "/_bulk"); + bulk.addParameter("refresh", "true"); + bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\"}}\n" + + "{\"f1\": \"v1\", \"f2\": \"v2\"}\n"); + client().performRequest(bulk); + + // start a async reindex job + Request reindex = new Request("POST", "/_reindex"); + reindex.setJsonEntity( + "{\n" + + " \"source\":{\n" + + " \"index\":\"test_index_old\"\n" + + " },\n" + + " \"dest\":{\n" + + " \"index\":\"test_index_reindex\"\n" + + " }\n" + + "}"); + reindex.addParameter("wait_for_completion", "false"); + Map response = entityAsMap(client().performRequest(reindex)); + String taskId = (String) response.get("task"); + + // wait for task + Request getTask = new Request("GET", "/_tasks/" + taskId); + getTask.addParameter("wait_for_completion", "true"); + client().performRequest(getTask); + + // make sure .tasks index exists Request getTasksIndex = new Request("GET", "/.tasks"); - assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200)); - }); - } - - @SuppressWarnings("unchecked" + - "") - public void testSystemIndexGetsUpdatedMetadata() throws Exception { - assumeFalse("only run in upgraded cluster", isRunningAgainstOldCluster()); - - assertBusy(() -> { - 
Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata"); - Map response = entityAsMap(client().performRequest(clusterStateRequest)); - Map metadata = (Map) response.get("metadata"); - assertNotNull(metadata); - Map indices = (Map) metadata.get("indices"); - assertNotNull(indices); - - Map tasksIndex = (Map) indices.get(".tasks"); - assertNotNull(tasksIndex); - assertThat(tasksIndex.get("system"), is(true)); - - Map testIndex = (Map) indices.get("test_index_old"); - assertNotNull(testIndex); - assertThat(testIndex.get("system"), is(false)); - }); + getTasksIndex.addParameter("allow_no_indices", "false"); + + getTasksIndex.setOptions(expectVersionSpecificWarnings(v -> { + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); + assertBusy(() -> { + try { + assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200)); + } catch (ResponseException e) { + throw new AssertionError(".tasks index does not exist yet"); + } + }); + + // If we are on 7.x create an alias that includes both a system index and a non-system index so we can be sure it gets + // upgraded properly. If we're already on 8.x, skip this part of the test. + if (minimumNodeVersion().before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) { + // Create an alias to make sure it gets upgraded properly + Request putAliasRequest = new Request("POST", "/_aliases"); + putAliasRequest.setJsonEntity("{\n" + + " \"actions\": [\n" + + " {\"add\": {\"index\": \".tasks\", \"alias\": \"test-system-alias\"}},\n" + + " {\"add\": {\"index\": \"test_index_reindex\", \"alias\": \"test-system-alias\"}}\n" + + " ]\n" + + "}"); + assertThat(client().performRequest(putAliasRequest).getStatusLine().getStatusCode(), is(200)); + } + } else { + assertBusy(() -> { + Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata"); + Map indices = new XContentTestUtils.JsonMapView(entityAsMap(client().performRequest(clusterStateRequest))) + .get("metadata.indices"); + + // Make sure our non-system index is still non-system + assertThat(new XContentTestUtils.JsonMapView(indices).get("test_index_old.system"), is(false)); + + // Can't get the .tasks index via JsonMapView because it splits on `.` + assertThat(indices, hasKey(".tasks")); + XContentTestUtils.JsonMapView tasksIndex = new XContentTestUtils.JsonMapView((Map) indices.get(".tasks")); + assertThat(tasksIndex.get("system"), is(true)); + + // If .tasks was created in a 7.x version, it should have an alias on it that we need to make sure got upgraded properly. 
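
The created-version gate in the hunk below works because `index.version.created` is stored in index settings as an integer version id rather than a dotted version string. A minimal, self-contained sketch of that comparison, assuming the `org.elasticsearch.Version` utilities already used in this change (the `7100299` id is purely illustrative and corresponds to 7.10.2):

[source,java]
----
import org.elasticsearch.Version;

public class CreatedVersionCheckSketch {
    public static void main(String[] args) {
        // settings.index.version.created holds an integer version id, e.g. "7100299" for 7.10.2
        String createdVersionString = "7100299"; // illustrative value, not read from a real cluster
        Version createdVersion = Version.fromId(Integer.parseInt(createdVersionString));

        // The legacy-alias assertions only apply when .tasks predates system index enforcement (8.0.0 here)
        boolean createdBeforeEnforcement = createdVersion.before(Version.V_8_0_0);
        System.out.println("check legacy alias: " + createdBeforeEnforcement); // prints true
    }
}
----
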
+ final String tasksCreatedVersionString = tasksIndex.get("settings.index.version.created"); + assertThat(tasksCreatedVersionString, notNullValue()); + final Version tasksCreatedVersion = Version.fromId(Integer.parseInt(tasksCreatedVersionString)); + if (tasksCreatedVersion.before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) { + // Verify that the alias survived the upgrade + Request getAliasRequest = new Request("GET", "/_alias/test-system-alias"); + getAliasRequest.setOptions(expectVersionSpecificWarnings(v -> { + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); + Map aliasResponse = entityAsMap(client().performRequest(getAliasRequest)); + assertThat(aliasResponse, hasKey(".tasks")); + assertThat(aliasResponse, hasKey("test_index_reindex")); + } + }); + } } public void testEnableSoftDeletesOnRestore() throws Exception { diff --git a/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/SystemIndicesUpgradeIT.java b/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/SystemIndicesUpgradeIT.java index 348c39b141ef9..65f9c66bff549 100644 --- a/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/SystemIndicesUpgradeIT.java +++ b/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/SystemIndicesUpgradeIT.java @@ -19,77 +19,115 @@ package org.elasticsearch.upgrades; +import org.elasticsearch.Version; import org.elasticsearch.client.Request; +import org.elasticsearch.client.ResponseException; +import org.elasticsearch.test.XContentTestUtils.JsonMapView; import java.util.Map; +import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ENFORCEMENT_VERSION; +import static org.hamcrest.Matchers.hasKey; import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.notNullValue; public class SystemIndicesUpgradeIT extends AbstractRollingTestCase { - public void testOldDoesntHaveSystemIndexMetadata() throws Exception { - assumeTrue("only run in old cluster", CLUSTER_TYPE == ClusterType.OLD); - // create index - Request createTestIndex = new Request("PUT", "/test_index_old"); - createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0}}"); - client().performRequest(createTestIndex); + @SuppressWarnings("unchecked") + public void testSystemIndicesUpgrades() throws Exception { + final String systemIndexWarning = "this request accesses system indices: [.tasks], but in a future major version, direct " + + "access to system indices will be prevented by default"; + if (CLUSTER_TYPE == ClusterType.OLD) { + // create index + Request createTestIndex = new Request("PUT", "/test_index_old"); + createTestIndex.setJsonEntity("{\"settings\": {\"index.number_of_replicas\": 0}}"); + client().performRequest(createTestIndex); - Request bulk = new Request("POST", "/_bulk"); - bulk.addParameter("refresh", "true"); - bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\"}}\n" + - "{\"f1\": \"v1\", \"f2\": \"v2\"}\n"); - client().performRequest(bulk); + Request bulk = new Request("POST", "/_bulk"); + bulk.addParameter("refresh", "true"); + bulk.setJsonEntity("{\"index\": {\"_index\": \"test_index_old\"}}\n" + + "{\"f1\": \"v1\", \"f2\": \"v2\"}\n"); + client().performRequest(bulk); - // start a async reindex job - Request reindex = new Request("POST", "/_reindex"); - reindex.setJsonEntity( - "{\n" + - " \"source\":{\n" + - " \"index\":\"test_index_old\"\n" + - " },\n" + - " \"dest\":{\n" + - " \"index\":\"test_index_reindex\"\n" + - " }\n" + - "}"); - reindex.addParameter("wait_for_completion", "false"); - 
Map response = entityAsMap(client().performRequest(reindex)); - String taskId = (String) response.get("task"); + // start a async reindex job + Request reindex = new Request("POST", "/_reindex"); + reindex.setJsonEntity( + "{\n" + + " \"source\":{\n" + + " \"index\":\"test_index_old\"\n" + + " },\n" + + " \"dest\":{\n" + + " \"index\":\"test_index_reindex\"\n" + + " }\n" + + "}"); + reindex.addParameter("wait_for_completion", "false"); + Map response = entityAsMap(client().performRequest(reindex)); + String taskId = (String) response.get("task"); - // wait for task - Request getTask = new Request("GET", "/_tasks/" + taskId); - getTask.addParameter("wait_for_completion", "true"); - client().performRequest(getTask); + // wait for task + Request getTask = new Request("GET", "/_tasks/" + taskId); + getTask.addParameter("wait_for_completion", "true"); + client().performRequest(getTask); - // make sure .tasks index exists - assertBusy(() -> { + // make sure .tasks index exists Request getTasksIndex = new Request("GET", "/.tasks"); - assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200)); - }); - } + getTasksIndex.addParameter("allow_no_indices", "false"); - public void testMixedCluster() { - assumeTrue("nothing to do in mixed cluster", CLUSTER_TYPE == ClusterType.MIXED); - } + getTasksIndex.setOptions(expectVersionSpecificWarnings(v -> { + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); + assertBusy(() -> { + try { + assertThat(client().performRequest(getTasksIndex).getStatusLine().getStatusCode(), is(200)); + } catch (ResponseException e) { + throw new AssertionError(".tasks index does not exist yet"); + } + }); - @SuppressWarnings("unchecked") - public void testUpgradedCluster() throws Exception { - assumeTrue("only run on upgraded cluster", CLUSTER_TYPE == ClusterType.UPGRADED); + // If we are on 7.x create an alias that includes both a system index and a non-system index so we can be sure it gets + // upgraded properly. If we're already on 8.x, skip this part of the test. 
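
The `minimumNodeVersion()` gate below exists because the alias validation added to `IndexAbstraction` later in this diff rejects aliases that span both system and non-system indices once a referenced system index was created on or after 8.0.0. A simplified sketch of that rule, using a stand-in record instead of `IndexMetadata`; it paraphrases the validation rather than reproducing it:

[source,java]
----
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AliasSystemStatusSketch {
    // Stand-in for IndexMetadata carrying only what the rule needs
    record Idx(String name, boolean system, boolean createdOnOrAfter8) {}

    static void validateAlias(String aliasName, List<Idx> referencedIndices) {
        Map<Boolean, List<Idx>> bySystemStatus = referencedIndices.stream()
            .collect(Collectors.groupingBy(Idx::system));
        boolean mixed = bySystemStatus.containsKey(true) && bySystemStatus.containsKey(false);
        // Mixed aliases are only rejected once a referenced system index is new enough to be enforced
        if (mixed && bySystemStatus.get(true).stream().anyMatch(Idx::createdOnOrAfter8)) {
            throw new IllegalStateException("alias [" + aliasName
                + "] refers to both system indices and non-system indices");
        }
    }

    public static void main(String[] args) {
        // A 7.x-created system index plus a regular index is still tolerated, so the old-cluster
        // branch of the test can create the alias and the upgraded-cluster branch can find it.
        validateAlias("test-system-alias",
            List.of(new Idx(".tasks", true, false), new Idx("test_index_reindex", false, false)));
    }
}
----
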
+ if (minimumNodeVersion().before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) { + // Create an alias to make sure it gets upgraded properly + Request putAliasRequest = new Request("POST", "/_aliases"); + putAliasRequest.setJsonEntity("{\n" + + " \"actions\": [\n" + + " {\"add\": {\"index\": \".tasks\", \"alias\": \"test-system-alias\"}},\n" + + " {\"add\": {\"index\": \"test_index_reindex\", \"alias\": \"test-system-alias\"}}\n" + + " ]\n" + + "}"); + assertThat(client().performRequest(putAliasRequest).getStatusLine().getStatusCode(), is(200)); + } + } else if (CLUSTER_TYPE == ClusterType.UPGRADED) { + assertBusy(() -> { + Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata"); + Map indices = new JsonMapView(entityAsMap(client().performRequest(clusterStateRequest))) + .get("metadata.indices"); - assertBusy(() -> { - Request clusterStateRequest = new Request("GET", "/_cluster/state/metadata"); - Map response = entityAsMap(client().performRequest(clusterStateRequest)); - Map metadata = (Map) response.get("metadata"); - assertNotNull(metadata); - Map indices = (Map) metadata.get("indices"); - assertNotNull(indices); + // Make sure our non-system index is still non-system + assertThat(new JsonMapView(indices).get("test_index_old.system"), is(false)); - Map tasksIndex = (Map) indices.get(".tasks"); - assertNotNull(tasksIndex); - assertThat(tasksIndex.get("system"), is(true)); + // Can't get the .tasks index via JsonMapView because it splits on `.` + assertThat(indices, hasKey(".tasks")); + JsonMapView tasksIndex = new JsonMapView((Map) indices.get(".tasks")); + assertThat(tasksIndex.get("system"), is(true)); - Map testIndex = (Map) indices.get("test_index_old"); - assertNotNull(testIndex); - assertThat(testIndex.get("system"), is(false)); - }); + // If .tasks was created in a 7.x version, it should have an alias on it that we need to make sure got upgraded properly. 
+ final String tasksCreatedVersionString = tasksIndex.get("settings.index.version.created"); + assertThat(tasksCreatedVersionString, notNullValue()); + final Version tasksCreatedVersion = Version.fromId(Integer.parseInt(tasksCreatedVersionString)); + if (tasksCreatedVersion.before(SYSTEM_INDEX_ENFORCEMENT_VERSION)) { + // Verify that the alias survived the upgrade + Request getAliasRequest = new Request("GET", "/_alias/test-system-alias"); + getAliasRequest.setOptions(expectVersionSpecificWarnings(v -> { + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); + Map aliasResponse = entityAsMap(client().performRequest(getAliasRequest)); + assertThat(aliasResponse, hasKey(".tasks")); + assertThat(aliasResponse, hasKey("test_index_reindex")); + } + }); + } } } diff --git a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml index 10fded326855c..e987413087ebb 100644 --- a/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml +++ b/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/10_basic.yml @@ -87,9 +87,13 @@ --- "Find a task result record from the old cluster": - skip: - features: headers + features: + - headers + - warnings - do: + warnings: + - "this request accesses system indices: [.tasks], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .tasks diff --git a/qa/smoke-test-http/src/test/java/org/elasticsearch/http/SystemIndexRestIT.java b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/SystemIndexRestIT.java new file mode 100644 index 0000000000000..12bbd12485853 --- /dev/null +++ b/qa/smoke-test-http/src/test/java/org/elasticsearch/http/SystemIndexRestIT.java @@ -0,0 +1,173 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.http; + +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.client.Request; +import org.elasticsearch.client.RequestOptions; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.IndexScopedSettings; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsFilter; +import org.elasticsearch.indices.SystemIndexDescriptor; +import org.elasticsearch.plugins.Plugin; +import org.elasticsearch.plugins.SystemIndexPlugin; +import org.elasticsearch.rest.BaseRestHandler; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.rest.RestHandler; +import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.rest.action.RestStatusToXContentListener; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.function.Supplier; + +import static org.elasticsearch.test.rest.ESRestTestCase.entityAsMap; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.hasKey; +import static org.hamcrest.Matchers.is; + +public class SystemIndexRestIT extends HttpSmokeTestCase { + + @Override + protected Collection> nodePlugins() { + List> plugins = new ArrayList<>(super.nodePlugins()); + plugins.add(SystemIndexTestPlugin.class); + return plugins; + } + + public void testSystemIndexAccessBlockedByDefault() throws Exception { + // create index + { + Request putDocRequest = new Request("POST", "/_sys_index_test/add_doc/42"); + Response resp = getRestClient().performRequest(putDocRequest); + assertThat(resp.getStatusLine().getStatusCode(), equalTo(201)); + } + + + // make sure the system index now exists + assertBusy(() -> { + Request searchRequest = new Request("GET", "/" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + "/_count"); + searchRequest.setOptions(expectWarnings("this request accesses system indices: [" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + + "], but in a future major version, direct access to system indices will be prevented by default")); + + // Disallow no indices to cause an exception if the flag above doesn't work + searchRequest.addParameter("allow_no_indices", "false"); + searchRequest.setJsonEntity("{\"query\": {\"match\": {\"some_field\": \"some_value\"}}}"); + + final Response searchResponse = getRestClient().performRequest(searchRequest); + assertThat(searchResponse.getStatusLine().getStatusCode(), is(200)); + Map responseMap = entityAsMap(searchResponse); + assertThat(responseMap, hasKey("count")); + assertThat(responseMap.get("count"), equalTo(1)); + }); + + // And with a partial wildcard + assertDeprecationWarningOnAccess(".test-*", SystemIndexTestPlugin.SYSTEM_INDEX_NAME); + + // And with a total wildcard + assertDeprecationWarningOnAccess(randomFrom("*", "_all"), SystemIndexTestPlugin.SYSTEM_INDEX_NAME); + + // Try to index a doc directly + { + String expectedWarning = "this request accesses system indices: [" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + "], but in a " + + "future major version, direct access to system indices will be prevented by default"; + Request putDocDirectlyRequest = new 
Request("PUT", "/" + SystemIndexTestPlugin.SYSTEM_INDEX_NAME + "/_doc/43"); + putDocDirectlyRequest.setJsonEntity("{\"some_field\": \"some_other_value\"}"); + putDocDirectlyRequest.setOptions(expectWarnings(expectedWarning)); + Response response = getRestClient().performRequest(putDocDirectlyRequest); + assertThat(response.getStatusLine().getStatusCode(), equalTo(201)); + } + } + + private void assertDeprecationWarningOnAccess(String queryPattern, String warningIndexName) throws IOException { + String expectedWarning = "this request accesses system indices: [" + warningIndexName + "], but in a " + + "future major version, direct access to system indices will be prevented by default"; + Request searchRequest = new Request("GET", "/" + queryPattern + randomFrom("/_count", "/_search")); + searchRequest.setJsonEntity("{\"query\": {\"match\": {\"some_field\": \"some_value\"}}}"); + // Disallow no indices to cause an exception if this resolves to zero indices, so that we're sure it resolved the index + searchRequest.addParameter("allow_no_indices", "false"); + searchRequest.setOptions(expectWarnings(expectedWarning)); + + Response response = getRestClient().performRequest(searchRequest); + assertThat(response.getStatusLine().getStatusCode(), equalTo(200)); + } + + private RequestOptions expectWarnings(String expectedWarning) { + return RequestOptions.DEFAULT.toBuilder() + .setWarningsHandler(w -> w.contains(expectedWarning) == false || w.size() != 1) + .build(); + } + + + public static class SystemIndexTestPlugin extends Plugin implements SystemIndexPlugin { + + public static final String SYSTEM_INDEX_NAME = ".test-system-idx"; + + @Override + public List getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings, + IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, + IndexNameExpressionResolver indexNameExpressionResolver, + Supplier nodesInCluster) { + return List.of(new AddDocRestHandler()); + } + + @Override + public Collection getSystemIndexDescriptors(Settings settings) { + return Collections.singletonList(new SystemIndexDescriptor(SYSTEM_INDEX_NAME, "System indices for tests")); + } + + public static class AddDocRestHandler extends BaseRestHandler { + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + + @Override + public String getName() { + return "system_index_test_doc_adder"; + } + + @Override + public List routes() { + return List.of(new Route(RestRequest.Method.POST, "/_sys_index_test/add_doc/{id}")); + } + + @Override + protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException { + IndexRequest indexRequest = new IndexRequest(SYSTEM_INDEX_NAME); + indexRequest.id(request.param("id")); + indexRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE); + indexRequest.source(Map.of("some_field", "some_value")); + return channel -> client.index(indexRequest, + new RestStatusToXContentListener<>(channel, r -> r.getLocation(indexRequest.routing()))); + } + } + } +} diff --git a/server/src/internalClusterTest/java/org/elasticsearch/action/IndicesRequestIT.java b/server/src/internalClusterTest/java/org/elasticsearch/action/IndicesRequestIT.java index e11f97d9c3498..e95072dbb4a15 100644 --- a/server/src/internalClusterTest/java/org/elasticsearch/action/IndicesRequestIT.java +++ b/server/src/internalClusterTest/java/org/elasticsearch/action/IndicesRequestIT.java @@ -392,7 +392,7 @@ public void testFlush() { 
internalCluster().coordOnlyNodeClient().admin().indices().flush(flushRequest).actionGet(); clearInterceptedActions(); - String[] indices = new IndexNameExpressionResolver() + String[] indices = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) .concreteIndexNames(client().admin().cluster().prepareState().get().getState(), flushRequest); assertIndicesSubset(Arrays.asList(indices), indexShardActions); } @@ -417,7 +417,7 @@ public void testRefresh() { internalCluster().coordOnlyNodeClient().admin().indices().refresh(refreshRequest).actionGet(); clearInterceptedActions(); - String[] indices = new IndexNameExpressionResolver() + String[] indices = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) .concreteIndexNames(client().admin().cluster().prepareState().get().getState(), refreshRequest); assertIndicesSubset(Arrays.asList(indices), indexShardActions); } diff --git a/server/src/internalClusterTest/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java b/server/src/internalClusterTest/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java index fae7f365a7be6..2f66e26b17e5b 100644 --- a/server/src/internalClusterTest/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java +++ b/server/src/internalClusterTest/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java @@ -54,6 +54,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentParser; @@ -818,7 +819,7 @@ public void testRestoreShrinkIndex() throws Exception { public void testSnapshotWithDateMath() { final String repo = "repo"; - final IndexNameExpressionResolver nameExpressionResolver = new IndexNameExpressionResolver(); + final IndexNameExpressionResolver nameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); final String snapshotName = ""; logger.info("--> creating repository"); diff --git a/server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java b/server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java index 4fbf8e84e62bd..67fcd04b0c72b 100644 --- a/server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java +++ b/server/src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java @@ -25,27 +25,39 @@ import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.AliasMetadata; +import org.elasticsearch.cluster.metadata.IndexMetadata; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.indices.SystemIndices; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; 
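
The get-aliases changes in the hunks below rely on a thread-context convention introduced elsewhere in this diff: system index access is considered allowed unless the `_system_index_access_allowed` header has been set to `false` (which `RestController` does for REST handlers that have not opted in and that lack the internal `X-elastic-product-origin` header), and name resolution performed inside a restorable stored context does not leak deprecation warnings into the response. A rough sketch of both pieces, using the real `ThreadContext` and `Booleans` utilities but otherwise illustrative:

[source,java]
----
import org.elasticsearch.common.Booleans;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

public class SystemIndexAccessGateSketch {
    static final String HEADER = "_system_index_access_allowed"; // same key as in this diff

    public static void main(String[] args) {
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);

        // No header present: access is allowed and no deprecation warning needs to be re-added
        boolean allowed = Booleans.parseBoolean(threadContext.getHeader(HEADER), true);
        System.out.println("allowed by default: " + allowed); // prints true

        // RestController puts "false" for handlers that have not opted in
        threadContext.putHeader(HEADER, Boolean.FALSE.toString());

        // Deprecation warnings recorded inside this block are discarded when the captured
        // context is restored on close, because preserveResponseHeaders is false
        try (ThreadContext.StoredContext ignored = threadContext.newStoredContext(false)) {
            // ... resolve concrete index names here; warnings do not leak out ...
        }
    }
}
----

This is why `masterOperation` below resolves the concrete names inside a stored context and then re-adds a warning only for the system indices that actually end up in the response.
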
import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; +import java.util.Iterator; import java.util.List; +import java.util.stream.Collectors; public class TransportGetAliasesAction extends TransportMasterNodeReadAction { + private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(TransportGetAliasesAction.class); + + private final SystemIndices systemIndices; @Inject public TransportGetAliasesAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool, ActionFilters actionFilters, - IndexNameExpressionResolver indexNameExpressionResolver) { + IndexNameExpressionResolver indexNameExpressionResolver, SystemIndices systemIndices) { super(GetAliasesAction.NAME, transportService, clusterService, threadPool, actionFilters, GetAliasesRequest::new, indexNameExpressionResolver); + this.systemIndices = systemIndices; } @Override @@ -56,8 +68,9 @@ protected String executor() { @Override protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) { + // Resolve with system index access since we're just checking blocks return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, - indexNameExpressionResolver.concreteIndexNames(state, request)); + indexNameExpressionResolver.concreteIndexNamesWithSystemIndexAccess(state, request)); } @Override @@ -67,16 +80,25 @@ protected GetAliasesResponse read(StreamInput in) throws IOException { @Override protected void masterOperation(Task task, GetAliasesRequest request, ClusterState state, ActionListener listener) { - String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request); + String[] concreteIndices; + // Switch to a context which will drop any deprecation warnings, because there may be indices resolved here which are not + // returned in the final response. We'll add warnings back later if necessary in checkSystemIndexAccess. + try (ThreadContext.StoredContext ignore = threadPool.getThreadContext().newStoredContext(false)) { + concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, request); + } + final boolean systemIndexAccessAllowed = indexNameExpressionResolver.isSystemIndexAccessAllowed(); ImmutableOpenMap> aliases = state.metadata().findAliases(request, concreteIndices); - listener.onResponse(new GetAliasesResponse(postProcess(request, concreteIndices, aliases))); + listener.onResponse(new GetAliasesResponse(postProcess(request, concreteIndices, aliases, state, + systemIndexAccessAllowed, systemIndices))); } /** * Fills alias result with empty entries for requested indices when no specific aliases were requested. 
     */
    static ImmutableOpenMap<String, List<AliasMetadata>> postProcess(GetAliasesRequest request, String[] concreteIndices,
-                                                                     ImmutableOpenMap<String, List<AliasMetadata>> aliases) {
+                                                                     ImmutableOpenMap<String, List<AliasMetadata>> aliases,
+                                                                     ClusterState state, boolean systemIndexAccessAllowed,
+                                                                     SystemIndices systemIndices) {
         boolean noAliasesSpecified = request.getOriginalAliases() == null || request.getOriginalAliases().length == 0;
         ImmutableOpenMap.Builder<String, List<AliasMetadata>> mapBuilder = ImmutableOpenMap.builder(aliases);
         for (String index : concreteIndices) {
@@ -85,7 +107,40 @@ static ImmutableOpenMap<String, List<AliasMetadata>> postProcess(GetAliasesReque
                 assert previous == null;
             }
         }
-        return mapBuilder.build();
+        final ImmutableOpenMap<String, List<AliasMetadata>> finalResponse = mapBuilder.build();
+        if (systemIndexAccessAllowed == false) {
+            checkSystemIndexAccess(request, systemIndices, state, finalResponse);
+        }
+        return finalResponse;
+    }
+
+    private static void checkSystemIndexAccess(GetAliasesRequest request, SystemIndices systemIndices, ClusterState state,
+                                               ImmutableOpenMap<String, List<AliasMetadata>> aliasesMap) {
+        List<String> systemIndicesNames = new ArrayList<>();
+        for (Iterator<String> it = aliasesMap.keysIt(); it.hasNext(); ) {
+            String indexName = it.next();
+            IndexMetadata index = state.metadata().index(indexName);
+            if (index != null && index.isSystem()) {
+                systemIndicesNames.add(indexName);
+            }
+        }
+        if (systemIndicesNames.isEmpty() == false) {
+            deprecationLogger.deprecate("open_system_index_access",
+                "this request accesses system indices: {}, but in a future major version, direct access to system " +
+                    "indices will be prevented by default", systemIndicesNames);
+        } else {
+            checkSystemAliasAccess(request, systemIndices);
+        }
     }
 
+    private static void checkSystemAliasAccess(GetAliasesRequest request, SystemIndices systemIndices) {
+        final List<String> systemAliases = Arrays.stream(request.aliases())
+            .filter(alias -> systemIndices.isSystemIndex(alias))
+            .collect(Collectors.toList());
+        if (systemAliases.isEmpty() == false) {
+            deprecationLogger.deprecate("open_system_alias_access",
+                "this request accesses aliases with names reserved for system indices: {}, but in a future major version, direct " +
+                    "access to system indices and their aliases will not be allowed", systemAliases);
+        }
+    }
 }
diff --git a/server/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java b/server/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java
index b5fc85d6234e5..d914d6ad3037a 100644
--- a/server/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java
+++ b/server/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java
@@ -220,6 +220,17 @@ protected abstract Response newResponse(Request request, int totalShards, int su
      */
     protected abstract ClusterBlockException checkRequestBlock(ClusterState state, Request request, String[] concreteIndices);
 
+    /**
+     * Resolves a list of concrete index names. Override this if index names should be resolved differently than normal.
+ * + * @param clusterState the cluster state + * @param request the underlying request + * @return a list of concrete index names that this action should operate on + */ + protected String[] resolveConcreteIndexNames(ClusterState clusterState, Request request) { + return indexNameExpressionResolver.concreteIndexNames(clusterState, request); + } + @Override protected void doExecute(Task task, Request request, ActionListener listener) { new AsyncAction(task, request, listener).start(); @@ -249,7 +260,7 @@ protected AsyncAction(Task task, Request request, ActionListener liste throw globalBlockException; } - String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, request); + String[] concreteIndices = resolveConcreteIndexNames(clusterState, request); ClusterBlockException requestBlockException = checkRequestBlock(clusterState, request, concreteIndices); if (requestBlockException != null) { throw requestBlockException; diff --git a/server/src/main/java/org/elasticsearch/cluster/ClusterModule.java b/server/src/main/java/org/elasticsearch/cluster/ClusterModule.java index 6bcadf73e489b..a1ce60745fb4e 100644 --- a/server/src/main/java/org/elasticsearch/cluster/ClusterModule.java +++ b/server/src/main/java/org/elasticsearch/cluster/ClusterModule.java @@ -23,10 +23,10 @@ import org.elasticsearch.cluster.action.index.NodeMappingRefreshAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; import org.elasticsearch.cluster.metadata.ComponentTemplateMetadata; +import org.elasticsearch.cluster.metadata.ComposableIndexTemplateMetadata; import org.elasticsearch.cluster.metadata.DataStreamMetadata; import org.elasticsearch.cluster.metadata.IndexGraveyard; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.ComposableIndexTemplateMetadata; import org.elasticsearch.cluster.metadata.Metadata; import org.elasticsearch.cluster.metadata.MetadataDeleteIndexService; import org.elasticsearch.cluster.metadata.MetadataIndexAliasesService; @@ -68,6 +68,7 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.gateway.GatewayAllocator; import org.elasticsearch.ingest.IngestMetadata; @@ -108,13 +109,13 @@ public class ClusterModule extends AbstractModule { final ShardsAllocator shardsAllocator; public ClusterModule(Settings settings, ClusterService clusterService, List clusterPlugins, - ClusterInfoService clusterInfoService, SnapshotsInfoService snapshotsInfoService) { + ClusterInfoService clusterInfoService, SnapshotsInfoService snapshotsInfoService, ThreadContext threadContext) { this.clusterPlugins = clusterPlugins; this.deciderList = createAllocationDeciders(settings, clusterService.getClusterSettings(), clusterPlugins); this.allocationDeciders = new AllocationDeciders(deciderList); this.shardsAllocator = createShardsAllocator(settings, clusterService.getClusterSettings(), clusterPlugins); this.clusterService = clusterService; - this.indexNameExpressionResolver = new IndexNameExpressionResolver(); + this.indexNameExpressionResolver = new IndexNameExpressionResolver(threadContext); this.allocationService = new AllocationService(allocationDeciders, shardsAllocator, clusterInfoService, snapshotsInfoService); } diff --git 
a/server/src/main/java/org/elasticsearch/cluster/metadata/IndexAbstraction.java b/server/src/main/java/org/elasticsearch/cluster/metadata/IndexAbstraction.java index c739b49b8f016..161ada3c74bdd 100644 --- a/server/src/main/java/org/elasticsearch/cluster/metadata/IndexAbstraction.java +++ b/server/src/main/java/org/elasticsearch/cluster/metadata/IndexAbstraction.java @@ -79,7 +79,7 @@ public interface IndexAbstraction { boolean isHidden(); /** - * @return whether this index abstraction is hidden or not + * @return whether this index abstraction should be treated as a system index or not */ boolean isSystem(); @@ -290,6 +290,29 @@ public void computeAndValidateAliasProperties() { Strings.collectionToCommaDelimitedString(nonHiddenOn) + "]; alias must have the same is_hidden setting " + "on all indices"); } + + // Validate system status + + final Map> groupedBySystemStatus = referenceIndexMetadatas.stream() + .collect(Collectors.groupingBy(IndexMetadata::isSystem)); + // If the alias has either all system or all non-system, then no more validation is required + if (isNonEmpty(groupedBySystemStatus.get(false)) && isNonEmpty(groupedBySystemStatus.get(true))) { + final List newVersionSystemIndices = groupedBySystemStatus.get(true).stream() + .filter(i -> i.getCreationVersion().onOrAfter(IndexNameExpressionResolver.SYSTEM_INDEX_ENFORCEMENT_VERSION)) + .map(i -> i.getIndex().getName()) + .sorted() // reliable error message for testing + .collect(Collectors.toList()); + + if (newVersionSystemIndices.isEmpty() == false) { + final List nonSystemIndices = groupedBySystemStatus.get(false).stream() + .map(i -> i.getIndex().getName()) + .sorted() // reliable error message for testing + .collect(Collectors.toList()); + throw new IllegalStateException("alias [" + aliasName + "] refers to both system indices " + newVersionSystemIndices + + " and non-system indices: " + nonSystemIndices + ", but aliases must refer to either system or" + + " non-system indices, not both"); + } + } } private boolean isNonEmpty(List idxMetas) { diff --git a/server/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java b/server/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java index 67b891002a653..13063d5486b1d 100644 --- a/server/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java +++ b/server/src/main/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolver.java @@ -20,18 +20,22 @@ package org.elasticsearch.cluster.metadata; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.common.Booleans; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.time.DateFormatter; import org.elasticsearch.common.time.DateMathParser; import org.elasticsearch.common.time.DateUtils; import org.elasticsearch.common.util.CollectionUtils; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; @@ -59,19 +63,37 @@ import 
java.util.stream.StreamSupport; public class IndexNameExpressionResolver { + private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(IndexNameExpressionResolver.class); public static final String EXCLUDED_DATA_STREAMS_KEY = "es.excluded_ds"; + public static final String SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY = "_system_index_access_allowed"; + public static final Version SYSTEM_INDEX_ENFORCEMENT_VERSION = Version.V_8_0_0; private final DateMathExpressionResolver dateMathExpressionResolver = new DateMathExpressionResolver(); private final WildcardExpressionResolver wildcardExpressionResolver = new WildcardExpressionResolver(); private final List expressionResolvers = List.of(dateMathExpressionResolver, wildcardExpressionResolver); + private final ThreadContext threadContext; + + public IndexNameExpressionResolver(ThreadContext threadContext) { + this.threadContext = Objects.requireNonNull(threadContext, "Thread Context must not be null"); + } + /** * Same as {@link #concreteIndexNames(ClusterState, IndicesOptions, String...)}, but the index expressions and options * are encapsulated in the specified request. */ public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { - Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams()); + Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams(), + isSystemIndexAccessAllowed()); + return concreteIndexNames(context, request.indices()); + } + + /** + * Same as {@link #concreteIndexNames(ClusterState, IndicesRequest)}, but access to system indices is always allowed. + */ + public String[] concreteIndexNamesWithSystemIndexAccess(ClusterState state, IndicesRequest request) { + Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams(), true); return concreteIndexNames(context, request.indices()); } @@ -80,7 +102,8 @@ public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { * are encapsulated in the specified request and resolves data streams. */ public Index[] concreteIndices(ClusterState state, IndicesRequest request) { - Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams()); + Context context = new Context(state, request.indicesOptions(), false, false, request.includeDataStreams(), + isSystemIndexAccessAllowed()); return concreteIndices(context, request.indices()); } @@ -98,22 +121,23 @@ public Index[] concreteIndices(ClusterState state, IndicesRequest request) { * indices options in the context don't allow such a case. */ public String[] concreteIndexNames(ClusterState state, IndicesOptions options, String... indexExpressions) { - Context context = new Context(state, options); + Context context = new Context(state, options, isSystemIndexAccessAllowed()); return concreteIndexNames(context, indexExpressions); } public String[] concreteIndexNames(ClusterState state, IndicesOptions options, boolean includeDataStreams, String... 
indexExpressions) { - Context context = new Context(state, options, false, false, includeDataStreams); + Context context = new Context(state, options, false, false, includeDataStreams, isSystemIndexAccessAllowed()); return concreteIndexNames(context, indexExpressions); } public String[] concreteIndexNames(ClusterState state, IndicesOptions options, IndicesRequest request) { - Context context = new Context(state, options, false, false, request.includeDataStreams()); + Context context = new Context(state, options, false, false, request.includeDataStreams(), isSystemIndexAccessAllowed()); return concreteIndexNames(context, request.indices()); } public List dataStreamNames(ClusterState state, IndicesOptions options, String... indexExpressions) { - Context context = new Context(state, options, false, false, true, true); + // Allow system index access - they'll be filtered out below as there's no such thing (yet) as system data streams + Context context = new Context(state, options, false, false, true, true, true); if (indexExpressions == null || indexExpressions.length == 0) { indexExpressions = new String[]{"*"}; } @@ -145,7 +169,8 @@ public Index[] concreteIndices(ClusterState state, IndicesOptions options, Strin } public Index[] concreteIndices(ClusterState state, IndicesOptions options, boolean includeDataStreams, String... indexExpressions) { - Context context = new Context(state, options, false, false, includeDataStreams); + Context context = new Context(state, options, false, false, includeDataStreams, + isSystemIndexAccessAllowed()); return concreteIndices(context, indexExpressions); } @@ -162,7 +187,8 @@ public Index[] concreteIndices(ClusterState state, IndicesOptions options, boole * indices options in the context don't allow such a case. */ public Index[] concreteIndices(ClusterState state, IndicesRequest request, long startTime) { - Context context = new Context(state, request.indicesOptions(), startTime, false, false, request.includeDataStreams(), false); + Context context = new Context(state, request.indicesOptions(), startTime, false, false, request.includeDataStreams(), false, + isSystemIndexAccessAllowed()); return concreteIndices(context, request.indices()); } @@ -282,9 +308,26 @@ Index[] concreteIndices(Context context, String... 
indexExpressions) { } throw infe; } + checkSystemIndexAccess(context, metadata, concreteIndices, indexExpressions); return concreteIndices.toArray(new Index[concreteIndices.size()]); } + private void checkSystemIndexAccess(Context context, Metadata metadata, Set concreteIndices, String[] originalPatterns) { + if (context.isSystemIndexAccessAllowed() == false) { + final List resolvedSystemIndices = concreteIndices.stream() + .map(metadata::index) + .filter(IndexMetadata::isSystem) + .map(i -> i.getIndex().getName()) + .sorted() // reliable order for testing + .collect(Collectors.toList()); + if (resolvedSystemIndices.isEmpty() == false) { + deprecationLogger.deprecate("open_system_index_access", + "this request accesses system indices: {}, but in a future major version, direct access to system " + + "indices will be prevented by default", resolvedSystemIndices); + } + } + } + private static boolean shouldTrackConcreteIndex(Context context, IndicesOptions options, IndexMetadata index) { if (index.getState() == IndexMetadata.State.CLOSE) { if (options.forbidClosedIndices() && options.ignoreUnavailable() == false) { @@ -369,7 +412,7 @@ public Index concreteWriteIndex(ClusterState state, IndicesOptions options, Stri options.allowAliasesToMultipleIndices(), options.forbidClosedIndices(), options.ignoreAliases(), options.ignoreThrottled()); - Context context = new Context(state, combinedOptions, false, true, includeDataStreams); + Context context = new Context(state, combinedOptions, false, true, includeDataStreams, isSystemIndexAccessAllowed()); Index[] indices = concreteIndices(context, index); if (allowNoIndices && indices.length == 0) { return null; @@ -386,7 +429,7 @@ public Index concreteWriteIndex(ClusterState state, IndicesOptions options, Stri * If the data stream, index or alias contains date math then that is resolved too. */ public boolean hasIndexAbstraction(String indexAbstraction, ClusterState state) { - Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true); + Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true, isSystemIndexAccessAllowed()); String resolvedAliasOrIndex = dateMathExpressionResolver.resolveExpression(indexAbstraction, context); return state.metadata().getIndicesLookup().containsKey(resolvedAliasOrIndex); } @@ -397,14 +440,14 @@ public boolean hasIndexAbstraction(String indexAbstraction, ClusterState state) public String resolveDateMathExpression(String dateExpression) { // The data math expression resolver doesn't rely on cluster state or indices options, because // it just resolves the date math to an actual date. - return dateMathExpressionResolver.resolveExpression(dateExpression, new Context(null, null)); + return dateMathExpressionResolver.resolveExpression(dateExpression, new Context(null, null, isSystemIndexAccessAllowed())); } /** * Resolve an array of expressions to the set of indices and aliases that these expressions match. */ public Set resolveExpressions(ClusterState state, String... 
expressions) { - Context context = new Context(state, IndicesOptions.lenientExpandOpen(), true, false, true); + Context context = new Context(state, IndicesOptions.lenientExpandOpen(), true, false, true, isSystemIndexAccessAllowed()); List resolvedExpressions = Arrays.asList(expressions); for (ExpressionResolver expressionResolver : expressionResolvers) { resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions); @@ -498,7 +541,7 @@ public String[] indexAliases(ClusterState state, String index, Predicate> resolveSearchRouting(ClusterState state, @Nullable String routing, String... expressions) { List resolvedExpressions = expressions != null ? Arrays.asList(expressions) : Collections.emptyList(); - Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true); + Context context = new Context(state, IndicesOptions.lenientExpandOpen(), false, false, true, isSystemIndexAccessAllowed()); for (ExpressionResolver expressionResolver : expressionResolvers) { resolvedExpressions = expressionResolver.resolve(context, resolvedExpressions); } @@ -650,6 +693,15 @@ boolean isPatternMatchingAllIndices(Metadata metadata, String[] indicesOrAliases return false; } + /** + * Determines whether or not system index access should be allowed in the current context. + * + * @return True if system index access should be allowed, false otherwise. + */ + public boolean isSystemIndexAccessAllowed() { + return Booleans.parseBoolean(threadContext.getHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY), true); + } + public static class Context { private final ClusterState state; @@ -659,27 +711,30 @@ public static class Context { private final boolean resolveToWriteIndex; private final boolean includeDataStreams; private final boolean preserveDataStreams; + private final boolean isSystemIndexAccessAllowed; - Context(ClusterState state, IndicesOptions options) { - this(state, options, System.currentTimeMillis()); + Context(ClusterState state, IndicesOptions options, boolean isSystemIndexAccessAllowed) { + this(state, options, System.currentTimeMillis(), isSystemIndexAccessAllowed); } Context(ClusterState state, IndicesOptions options, boolean preserveAliases, boolean resolveToWriteIndex, - boolean includeDataStreams) { - this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, false); + boolean includeDataStreams, boolean isSystemIndexAccessAllowed) { + this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, false, + isSystemIndexAccessAllowed); } Context(ClusterState state, IndicesOptions options, boolean preserveAliases, boolean resolveToWriteIndex, - boolean includeDataStreams, boolean preserveDataStreams) { - this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, preserveDataStreams); + boolean includeDataStreams, boolean preserveDataStreams, boolean isSystemIndexAccessAllowed) { + this(state, options, System.currentTimeMillis(), preserveAliases, resolveToWriteIndex, includeDataStreams, preserveDataStreams, + isSystemIndexAccessAllowed); } - Context(ClusterState state, IndicesOptions options, long startTime) { - this(state, options, startTime, false, false, false, false); + Context(ClusterState state, IndicesOptions options, long startTime, boolean isSystemIndexAccessAllowed) { + this(state, options, startTime, false, false, false, false, isSystemIndexAccessAllowed); } protected Context(ClusterState state, IndicesOptions 
options, long startTime, boolean preserveAliases, boolean resolveToWriteIndex, - boolean includeDataStreams, boolean preserveDataStreams) { + boolean includeDataStreams, boolean preserveDataStreams, boolean isSystemIndexAccessAllowed) { this.state = state; this.options = options; this.startTime = startTime; @@ -687,6 +742,7 @@ protected Context(ClusterState state, IndicesOptions options, long startTime, bo this.resolveToWriteIndex = resolveToWriteIndex; this.includeDataStreams = includeDataStreams; this.preserveDataStreams = preserveDataStreams; + this.isSystemIndexAccessAllowed = isSystemIndexAccessAllowed; } public ClusterState getState() { @@ -725,6 +781,13 @@ public boolean includeDataStreams() { public boolean isPreserveDataStreams() { return preserveDataStreams; } + + /** + * Used to determine if it is allowed to access system indices in this context (e.g. for this request). + */ + public boolean isSystemIndexAccessAllowed() { + return isSystemIndexAccessAllowed; + } } private interface ExpressionResolver { diff --git a/server/src/main/java/org/elasticsearch/indices/SystemIndices.java b/server/src/main/java/org/elasticsearch/indices/SystemIndices.java index 71a21b277763c..ae4a64111a588 100644 --- a/server/src/main/java/org/elasticsearch/indices/SystemIndices.java +++ b/server/src/main/java/org/elasticsearch/indices/SystemIndices.java @@ -47,7 +47,6 @@ * to reduce the locations within the code that need to deal with {@link SystemIndexDescriptor}s. */ public class SystemIndices { - private static final Map> SERVER_SYSTEM_INDEX_DESCRIPTORS = Map.of( TaskResultsService.class.getName(), List.of(new SystemIndexDescriptor(TASK_INDEX + "*", "Task Result Index")) ); diff --git a/server/src/main/java/org/elasticsearch/node/Node.java b/server/src/main/java/org/elasticsearch/node/Node.java index 9e556bfd152cc..ebdb3c651a5df 100644 --- a/server/src/main/java/org/elasticsearch/node/Node.java +++ b/server/src/main/java/org/elasticsearch/node/Node.java @@ -408,7 +408,7 @@ protected Node(final Environment initialEnvironment, final InternalSnapshotsInfoService snapshotsInfoService = new InternalSnapshotsInfoService(settings, clusterService, repositoriesServiceReference::get, rerouteServiceReference::get); final ClusterModule clusterModule = new ClusterModule(settings, clusterService, clusterPlugins, clusterInfoService, - snapshotsInfoService); + snapshotsInfoService, threadPool.getThreadContext()); modules.add(clusterModule); IndicesModule indicesModule = new IndicesModule(pluginsService.filterPlugins(MapperPlugin.class)); modules.add(indicesModule); diff --git a/server/src/main/java/org/elasticsearch/rest/RestController.java b/server/src/main/java/org/elasticsearch/rest/RestController.java index 4c2d2114af4de..e1ca179460794 100644 --- a/server/src/main/java/org/elasticsearch/rest/RestController.java +++ b/server/src/main/java/org/elasticsearch/rest/RestController.java @@ -53,6 +53,7 @@ import java.util.function.UnaryOperator; import java.util.stream.Collectors; +import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY; import static org.elasticsearch.rest.BytesRestResponse.TEXT_CONTENT_TYPE; import static org.elasticsearch.rest.RestStatus.BAD_REQUEST; import static org.elasticsearch.rest.RestStatus.INTERNAL_SERVER_ERROR; @@ -64,6 +65,7 @@ public class RestController implements HttpServerTransport.Dispatcher { private static final Logger logger = LogManager.getLogger(RestController.class); private static final DeprecationLogger 
deprecationLogger = DeprecationLogger.getLogger(RestController.class);
+    private static final String ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER = "X-elastic-product-origin";
 
     private static final BytesReference FAVICON_RESPONSE;
 
@@ -245,6 +247,13 @@ private void dispatchRequest(RestRequest request, RestChannel channel, RestHandl
             if (handler.allowsUnsafeBuffers() == false) {
                 request.ensureSafeBuffers();
             }
+            if (handler.allowSystemIndexAccessByDefault() == false && request.header(ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER) == null) {
+                // The ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER indicates that the request is coming from an Elastic product with a plan
+                // to move away from direct access to system indices, and thus deprecation warnings should not be emitted.
+                // This header is intended for internal use only.
+                client.threadPool().getThreadContext().putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+            }
+
             handler.handleRequest(request, responseChannel, client);
         } catch (Exception e) {
             responseChannel.sendResponse(new BytesRestResponse(responseChannel, e));
diff --git a/server/src/main/java/org/elasticsearch/rest/RestHandler.java b/server/src/main/java/org/elasticsearch/rest/RestHandler.java
index 711ce34bac08f..054c618876314 100644
--- a/server/src/main/java/org/elasticsearch/rest/RestHandler.java
+++ b/server/src/main/java/org/elasticsearch/rest/RestHandler.java
@@ -90,6 +90,15 @@ default List<ReplacedRoute> replacedRoutes() {
         return Collections.emptyList();
     }
+
+    /**
+     * Controls whether requests handled by this class are allowed to access system indices by default.
+     * @return {@code true} if requests handled by this class should be allowed to access system indices.
+     */
+    default boolean allowSystemIndexAccessByDefault() {
+        return false;
+    }
+
 
     class Route {
 
         private final String path;
diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java
index a156270d0d7ce..55ce587e3d8f5 100644
--- a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java
+++ b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterAllocationExplainAction.java
@@ -55,6 +55,11 @@ public String getName() {
         return "cluster_allocation_explain_action";
     }
 
+    @Override
+    public boolean allowSystemIndexAccessByDefault() {
+        return true;
+    }
+
     @Override
     public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
         ClusterAllocationExplainRequest req;
diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java
index e43dd4e7c9428..242728e9ed085 100644
--- a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java
+++ b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterHealthAction.java
@@ -52,6 +52,11 @@ public String getName() {
         return "cluster_health_action";
     }
 
+    @Override
+    public boolean allowSystemIndexAccessByDefault() {
+        return true;
+    }
+
     @Override
     public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
         final ClusterHealthRequest clusterHealthRequest = fromRequest(request);
diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java
b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java index e5b749c56ad40..50f63d651ba14 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterRerouteAction.java @@ -70,6 +70,11 @@ public String getName() { return "cluster_reroute_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClusterRerouteRequest clusterRerouteRequest = createRequest(request); diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java index de74532ee6d0c..2b592874f07d1 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/cluster/RestClusterStateAction.java @@ -69,6 +69,11 @@ public List routes() { new Route(GET, "/_cluster/state/{metric}/{indices}")); } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { final ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest(); diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java index ad732f531806e..2e5bc82d9b05e 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetAliasesAction.java @@ -73,7 +73,8 @@ public String getName() { } static RestResponse buildRestResponse(boolean aliasesExplicitlyRequested, String[] requestedAliases, - ImmutableOpenMap> responseAliasMap, XContentBuilder builder) throws Exception { + ImmutableOpenMap> responseAliasMap, + XContentBuilder builder) throws Exception { final Set indicesToDisplay = new HashSet<>(); final Set returnedAliasNames = new HashSet<>(); for (final ObjectObjectCursor> cursor : responseAliasMap) { diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java index b26299f19767b..0c7b897d1c09b 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesShardStoresAction.java @@ -55,6 +55,11 @@ public String getName() { return "indices_shard_stores_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { IndicesShardStoresRequest indicesShardStoresRequest = new IndicesShardStoresRequest( diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java index 2f63b7b5b47ab..67a88f2cd246f 100644 --- 
a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsAction.java @@ -57,6 +57,11 @@ public String getName() { return "indices_stats_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + static final Map> METRICS; static { diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java index 5f8a9424773e4..1bd1348e497c7 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestRecoveryAction.java @@ -49,6 +49,11 @@ public String getName() { return "recovery_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { diff --git a/server/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java b/server/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java index 451d67d246017..81aa0d6c55dd8 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/cat/RestAliasAction.java @@ -48,6 +48,11 @@ public String getName() { return "cat_alias_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final GetAliasesRequest getAliasesRequest = request.hasParam("alias") ? 
diff --git a/server/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java b/server/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java index 45ea2a90ef4e4..d2aca04bbdbf4 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/cat/RestHealthAction.java @@ -44,11 +44,17 @@ public String getName() { return "cat_health_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override protected void documentation(StringBuilder sb) { sb.append("/_cat/health\n"); } + @Override public RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { ClusterHealthRequest clusterHealthRequest = new ClusterHealthRequest(); diff --git a/server/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java b/server/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java index d313cb900f613..5637918353dda 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java @@ -81,6 +81,11 @@ public String getName() { return "cat_indices_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override protected void documentation(StringBuilder sb) { sb.append("/_cat/indices\n"); diff --git a/server/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java b/server/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java index c3f5c3c9d0ff3..08be5c7a7f957 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/cat/RestSegmentsAction.java @@ -55,6 +55,11 @@ public String getName() { return "cat_segments_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override protected RestChannelConsumer doCatRequest(final RestRequest request, final NodeClient client) { final String[] indices = Strings.splitStringByCommaToArray(request.param("index")); diff --git a/server/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java b/server/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java index 959aeac6ff0ac..ef6b6cf60c639 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java @@ -72,6 +72,11 @@ public String getName() { return "cat_shards_action"; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return true; + } + @Override protected void documentation(StringBuilder sb) { sb.append("/_cat/shards\n"); @@ -203,7 +208,7 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("path.data", "alias:pd,dataPath;default:false;text-align:right;desc:shard data path"); table.addCell("path.state", "alias:ps,statsPath;default:false;text-align:right;desc:shard state path"); - + table.addCell("bulk.total_operations", "alias:bto,bulkTotalOperations;default:false;text-align:right;desc:number of bulk shard ops"); table.addCell("bulk.total_time", "alias:btti,bulkTotalTime;default:false;text-align:right;desc:time spend in shard bulk"); @@ -367,7 +372,7 @@ Table buildTable(RestRequest request, ClusterStateResponse state, IndicesStatsRe table.addCell(getOrNull(shardStats, ShardStats::getDataPath, s -> s)); 
table.addCell(getOrNull(shardStats, ShardStats::getStatePath, s -> s)); - + table.addCell(getOrNull(commonStats, CommonStats::getBulk, BulkStats::getTotalOperations)); table.addCell(getOrNull(commonStats, CommonStats::getBulk, BulkStats::getTotalTime)); table.addCell(getOrNull(commonStats, CommonStats::getBulk, BulkStats::getTotalSizeInBytes)); diff --git a/server/src/test/java/org/elasticsearch/action/ActionModuleTests.java b/server/src/test/java/org/elasticsearch/action/ActionModuleTests.java index aa0e592e01a79..15d2aff8d8787 100644 --- a/server/src/test/java/org/elasticsearch/action/ActionModuleTests.java +++ b/server/src/test/java/org/elasticsearch/action/ActionModuleTests.java @@ -31,6 +31,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.settings.SettingsModule; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.ActionPlugin.ActionHandler; import org.elasticsearch.rest.RestChannel; @@ -107,9 +108,10 @@ protected FakeAction() { public void testSetupRestHandlerContainsKnownBuiltin() { SettingsModule settings = new SettingsModule(Settings.EMPTY); UsageService usageService = new UsageService(); - ActionModule actionModule = new ActionModule(settings.getSettings(), new IndexNameExpressionResolver(), - settings.getIndexScopedSettings(), settings.getClusterSettings(), settings.getSettingsFilter(), null, emptyList(), null, - null, usageService, null); + ActionModule actionModule = new ActionModule(settings.getSettings(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), settings.getIndexScopedSettings(), + settings.getClusterSettings(), settings.getSettingsFilter(), null, emptyList(), null, + null, usageService, null); actionModule.initRestHandlers(null); // At this point the easiest way to confirm that a handler is loaded is to try to register another one on top of it and to fail Exception e = expectThrows(IllegalArgumentException.class, () -> @@ -146,9 +148,10 @@ public String getName() { ThreadPool threadPool = new TestThreadPool(getTestName()); try { UsageService usageService = new UsageService(); - ActionModule actionModule = new ActionModule(settings.getSettings(), new IndexNameExpressionResolver(), - settings.getIndexScopedSettings(), settings.getClusterSettings(), settings.getSettingsFilter(), threadPool, - singletonList(dupsMainAction), null, null, usageService, null); + ActionModule actionModule = new ActionModule(settings.getSettings(), + new IndexNameExpressionResolver(threadPool.getThreadContext()), settings.getIndexScopedSettings(), + settings.getClusterSettings(), settings.getSettingsFilter(), threadPool, singletonList(dupsMainAction), + null, null, usageService, null); Exception e = expectThrows(IllegalArgumentException.class, () -> actionModule.initRestHandlers(null)); assertThat(e.getMessage(), startsWith("Cannot replace existing handler for [/] for method: GET")); } finally { @@ -180,9 +183,10 @@ public List getRestHandlers(Settings settings, RestController restC ThreadPool threadPool = new TestThreadPool(getTestName()); try { UsageService usageService = new UsageService(); - ActionModule actionModule = new ActionModule(settings.getSettings(), new IndexNameExpressionResolver(), - settings.getIndexScopedSettings(), settings.getClusterSettings(), settings.getSettingsFilter(), threadPool, - singletonList(registersFakeHandler), null, null, usageService, null); + 
ActionModule actionModule = new ActionModule(settings.getSettings(), + new IndexNameExpressionResolver(threadPool.getThreadContext()), settings.getIndexScopedSettings(), + settings.getClusterSettings(), settings.getSettingsFilter(), threadPool, singletonList(registersFakeHandler), + null, null, usageService, null); actionModule.initRestHandlers(null); // At this point the easiest way to confirm that a handler is loaded is to try to register another one on top of it and to fail Exception e = expectThrows(IllegalArgumentException.class, () -> diff --git a/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsActionTests.java b/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsActionTests.java index 221a9a1114c65..1343c30749b20 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportAddVotingConfigExclusionsActionTests.java @@ -41,6 +41,7 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.transport.MockTransport; import org.elasticsearch.threadpool.TestThreadPool; @@ -131,7 +132,7 @@ public void setupForTest() { clusterSettings = new ClusterSettings(nodeSettings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); new TransportAddVotingConfigExclusionsAction(nodeSettings, clusterSettings, transportService, clusterService, threadPool, - new ActionFilters(emptySet()), new IndexNameExpressionResolver()); // registers action + new ActionFilters(emptySet()), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); // registers action transportService.start(); transportService.acceptIncomingRequests(); diff --git a/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsActionTests.java b/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsActionTests.java index 4c7fcbe7e2f56..e7fd68b6c0d0a 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/cluster/configuration/TransportClearVotingConfigExclusionsActionTests.java @@ -35,6 +35,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.transport.MockTransport; import org.elasticsearch.threadpool.TestThreadPool; @@ -95,7 +96,7 @@ public void setupForTest() { TransportService.NOOP_TRANSPORT_INTERCEPTOR, boundTransportAddress -> localNode, null, emptySet()); new TransportClearVotingConfigExclusionsAction(transportService, clusterService, threadPool, new ActionFilters(emptySet()), - new IndexNameExpressionResolver()); // registers action + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); // registers action transportService.start(); transportService.acceptIncomingRequests(); diff --git 
a/server/src/test/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesActionTests.java b/server/src/test/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesActionTests.java index 967d94c15d6df..5af88f6fc768c 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesActionTests.java @@ -18,8 +18,14 @@ */ package org.elasticsearch.action.admin.indices.alias.get; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.AliasMetadata; +import org.elasticsearch.cluster.metadata.IndexMetadata; +import org.elasticsearch.cluster.metadata.Metadata; import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.indices.SystemIndexDescriptor; +import org.elasticsearch.indices.SystemIndices; import org.elasticsearch.test.ESTestCase; import java.util.Collections; @@ -28,6 +34,7 @@ import static org.hamcrest.Matchers.equalTo; public class TransportGetAliasesActionTests extends ESTestCase { + private final SystemIndices EMPTY_SYSTEM_INDICES = new SystemIndices(Collections.emptyMap()); public void testPostProcess() { GetAliasesRequest request = new GetAliasesRequest(); @@ -35,7 +42,8 @@ public void testPostProcess() { .fPut("b", Collections.singletonList(new AliasMetadata.Builder("y").build())) .build(); ImmutableOpenMap> result = - TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases); + TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases, ClusterState.EMPTY_STATE, false, + EMPTY_SYSTEM_INDICES); assertThat(result.size(), equalTo(3)); assertThat(result.get("a").size(), equalTo(0)); assertThat(result.get("b").size(), equalTo(1)); @@ -46,7 +54,8 @@ public void testPostProcess() { aliases = ImmutableOpenMap.>builder() .fPut("b", Collections.singletonList(new AliasMetadata.Builder("y").build())) .build(); - result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases); + result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases, ClusterState.EMPTY_STATE, false, + EMPTY_SYSTEM_INDICES); assertThat(result.size(), equalTo(3)); assertThat(result.get("a").size(), equalTo(0)); assertThat(result.get("b").size(), equalTo(1)); @@ -56,9 +65,129 @@ public void testPostProcess() { aliases = ImmutableOpenMap.>builder() .fPut("b", Collections.singletonList(new AliasMetadata.Builder("y").build())) .build(); - result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases); + result = TransportGetAliasesAction.postProcess(request, new String[]{"a", "b", "c"}, aliases, ClusterState.EMPTY_STATE, false, + EMPTY_SYSTEM_INDICES); assertThat(result.size(), equalTo(1)); assertThat(result.get("b").size(), equalTo(1)); } + public void testDeprecationWarningEmittedForTotalWildcard() { + ClusterState state = systemIndexTestClusterState(); + + GetAliasesRequest request = new GetAliasesRequest(); + ImmutableOpenMap> aliases = ImmutableOpenMap.>builder() + .fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build())) + .fPut("c", Collections.singletonList(new AliasMetadata.Builder("d").build())) + .build(); + final String[] concreteIndices = {"a", ".b", "c"}; + assertEquals(state.metadata().findAliases(request, concreteIndices), aliases); + ImmutableOpenMap> result = + 
TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES); + assertThat(result.size(), equalTo(3)); + assertThat(result.get("a").size(), equalTo(0)); + assertThat(result.get(".b").size(), equalTo(1)); + assertThat(result.get("c").size(), equalTo(1)); + assertWarnings("this request accesses system indices: [.b], but in a future major version, direct access to system " + + "indices will be prevented by default"); + } + + public void testDeprecationWarningEmittedWhenSystemIndexIsRequested() { + ClusterState state = systemIndexTestClusterState(); + + GetAliasesRequest request = new GetAliasesRequest(); + request.indices(".b"); + ImmutableOpenMap> aliases = ImmutableOpenMap.>builder() + .fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build())) + .build(); + final String[] concreteIndices = {".b"}; + assertEquals(state.metadata().findAliases(request, concreteIndices), aliases); + ImmutableOpenMap> result = + TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES); + assertThat(result.size(), equalTo(1)); + assertThat(result.get(".b").size(), equalTo(1)); + assertWarnings("this request accesses system indices: [.b], but in a future major version, direct access to system " + + "indices will be prevented by default"); + } + + public void testDeprecationWarningEmittedWhenSystemIndexIsRequestedByAlias() { + ClusterState state = systemIndexTestClusterState(); + + GetAliasesRequest request = new GetAliasesRequest(".y"); + ImmutableOpenMap> aliases = ImmutableOpenMap.>builder() + .fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build())) + .build(); + final String[] concreteIndices = {"a", ".b", "c"}; + assertEquals(state.metadata().findAliases(request, concreteIndices), aliases); + ImmutableOpenMap> result = + TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES); + assertThat(result.size(), equalTo(1)); + assertThat(result.get(".b").size(), equalTo(1)); + assertWarnings("this request accesses system indices: [.b], but in a future major version, direct access to system " + + "indices will be prevented by default"); + } + + public void testDeprecationWarningNotEmittedWhenSystemAccessAllowed() { + ClusterState state = systemIndexTestClusterState(); + + GetAliasesRequest request = new GetAliasesRequest(".y"); + ImmutableOpenMap> aliases = ImmutableOpenMap.>builder() + .fPut(".b", Collections.singletonList(new AliasMetadata.Builder(".y").build())) + .build(); + final String[] concreteIndices = {"a", ".b", "c"}; + assertEquals(state.metadata().findAliases(request, concreteIndices), aliases); + ImmutableOpenMap> result = + TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, true, EMPTY_SYSTEM_INDICES); + assertThat(result.size(), equalTo(1)); + assertThat(result.get(".b").size(), equalTo(1)); + } + + /** + * Ensures that deprecation warnings are not emitted when only non-system indices are requested. + */ + public void testDeprecationWarningNotEmittedWhenOnlyNonsystemIndexRequested() { + ClusterState state = systemIndexTestClusterState(); + + GetAliasesRequest request = new GetAliasesRequest(); + request.indices("c"); + ImmutableOpenMap> aliases = ImmutableOpenMap.>builder() + .fPut("c", Collections.singletonList(new AliasMetadata.Builder("d").build())) + .build(); + final String[] concreteIndices = {"c"}; + assertEquals(state.metadata().findAliases(request, concreteIndices), aliases); + ImmutableOpenMap> result = +
TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, EMPTY_SYSTEM_INDICES); + assertThat(result.size(), equalTo(1)); + assertThat(result.get("c").size(), equalTo(1)); + } + + public void testDeprecationWarningEmittedWhenRequestingNonExistingAliasInSystemPattern() { + ClusterState state = systemIndexTestClusterState(); + SystemIndices systemIndices = new SystemIndices(Collections.singletonMap(this.getTestName(), + Collections.singletonList(new SystemIndexDescriptor(".y", "an index that doesn't exist")))); + + GetAliasesRequest request = new GetAliasesRequest(".y"); + ImmutableOpenMap> aliases = ImmutableOpenMap.>builder() + .build(); + final String[] concreteIndices = {}; + assertEquals(state.metadata().findAliases(request, concreteIndices), aliases); + ImmutableOpenMap> result = + TransportGetAliasesAction.postProcess(request, concreteIndices, aliases, state, false, systemIndices); + assertThat(result.size(), equalTo(0)); + assertWarnings("this request accesses aliases with names reserved for system indices: [.y], but in a future major version, direct " + + "access to system indices and their aliases will not be allowed"); + } + + public ClusterState systemIndexTestClusterState() { + return ClusterState.builder(ClusterState.EMPTY_STATE) + .metadata(Metadata.builder() + .put(IndexMetadata.builder("a").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0)) + .put(IndexMetadata.builder(".b").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0) + .system(true).putAlias(AliasMetadata.builder(".y"))) + .put(IndexMetadata.builder("c").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(0) + .putAlias(AliasMetadata.builder("d"))) + .build()) + .build(); + } + + } diff --git a/server/src/test/java/org/elasticsearch/action/admin/indices/get/GetIndexActionTests.java b/server/src/test/java/org/elasticsearch/action/admin/indices/get/GetIndexActionTests.java index 0ff854af20c54..1594fa1bff5e1 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/indices/get/GetIndexActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/indices/get/GetIndexActionTests.java @@ -31,6 +31,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.settings.SettingsModule; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.test.ESSingleNodeTestCase; @@ -122,6 +123,10 @@ protected void doMasterOperation(GetIndexRequest request, String[] concreteIndic } static class Resolver extends IndexNameExpressionResolver { + Resolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { return request.indices(); diff --git a/server/src/test/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestTests.java b/server/src/test/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestTests.java index 9b50d2ee2f93b..b3d54c200a78c 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestTests.java @@ -28,6 +28,8 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.metadata.Metadata; import
org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.Index; import org.elasticsearch.test.ESTestCase; @@ -89,7 +91,8 @@ public void testResolveIndicesWithWriteIndexOnlyAndDataStreamsAndWriteAliases() tuple("alias2", List.of(tuple("index2", false), tuple("index3", true))) )); PutMappingRequest request = new PutMappingRequest().indices("foo", "alias1", "alias2").writeIndexOnly(true); - Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, new IndexNameExpressionResolver()); + Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); List indexNames = Arrays.stream(indices).map(Index::getName).collect(Collectors.toList()); IndexAbstraction expectedDs = cs.metadata().getIndicesLookup().get("foo"); // should resolve the data stream and each alias to their respective write indices @@ -109,7 +112,8 @@ public void testResolveIndicesWithoutWriteIndexOnlyAndDataStreamsAndWriteAliases tuple("alias2", List.of(tuple("index2", false), tuple("index3", true))) )); PutMappingRequest request = new PutMappingRequest().indices("foo", "alias1", "alias2"); - Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, new IndexNameExpressionResolver()); + Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); List indexNames = Arrays.stream(indices).map(Index::getName).collect(Collectors.toList()); IndexAbstraction expectedDs = cs.metadata().getIndicesLookup().get("foo"); List expectedIndices = expectedDs.getIndices().stream().map(im -> im.getIndex().getName()).collect(Collectors.toList()); @@ -131,7 +135,8 @@ public void testResolveIndicesWithWriteIndexOnlyAndDataStreamAndIndex() { tuple("alias2", List.of(tuple("index2", false), tuple("index3", true))) )); PutMappingRequest request = new PutMappingRequest().indices("foo", "index3").writeIndexOnly(true); - Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, new IndexNameExpressionResolver()); + Index[] indices = TransportPutMappingAction.resolveIndices(cs, request, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); List indexNames = Arrays.stream(indices).map(Index::getName).collect(Collectors.toList()); IndexAbstraction expectedDs = cs.metadata().getIndicesLookup().get("foo"); List expectedIndices = expectedDs.getIndices().stream().map(im -> im.getIndex().getName()).collect(Collectors.toList()); @@ -154,7 +159,8 @@ public void testResolveIndicesWithWriteIndexOnlyAndNoSingleWriteIndex() { )); PutMappingRequest request = new PutMappingRequest().indices("*").writeIndexOnly(true); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, - () -> TransportPutMappingAction.resolveIndices(cs2, request, new IndexNameExpressionResolver())); + () -> TransportPutMappingAction.resolveIndices(cs2, request, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)))); assertThat(e.getMessage(), containsString("The index expression [*] and options provided did not point to a single write-index")); } @@ -172,7 +178,8 @@ public void testResolveIndicesWithWriteIndexOnlyAndAliasWithoutWriteIndex() { )); PutMappingRequest request = new PutMappingRequest().indices("alias2").writeIndexOnly(true); IllegalArgumentException e = 
expectThrows(IllegalArgumentException.class, - () -> TransportPutMappingAction.resolveIndices(cs2, request, new IndexNameExpressionResolver())); + () -> TransportPutMappingAction.resolveIndices(cs2, request, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)))); assertThat(e.getMessage(), containsString("no write index is defined for alias [alias2]")); } diff --git a/server/src/test/java/org/elasticsearch/action/admin/indices/resolve/ResolveIndexTests.java b/server/src/test/java/org/elasticsearch/action/admin/indices/resolve/ResolveIndexTests.java index 5a33643202d09..acf6d97f5878e 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/indices/resolve/ResolveIndexTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/indices/resolve/ResolveIndexTests.java @@ -34,6 +34,7 @@ import org.elasticsearch.cluster.metadata.Metadata; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.test.ESTestCase; import java.util.ArrayList; @@ -69,7 +70,8 @@ public class ResolveIndexTests extends ESTestCase { }; private Metadata metadata = buildMetadata(dataStreams, indices); - private IndexAbstractionResolver resolver = new IndexAbstractionResolver(new IndexNameExpressionResolver()); + private IndexAbstractionResolver resolver = new IndexAbstractionResolver( + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); public void testResolveStarWithDefaultOptions() { String[] names = new String[] {"*"}; diff --git a/server/src/test/java/org/elasticsearch/action/admin/indices/rollover/MetadataRolloverServiceTests.java b/server/src/test/java/org/elasticsearch/action/admin/indices/rollover/MetadataRolloverServiceTests.java index 66846af2d9096..d9a853a4eb45e 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/indices/rollover/MetadataRolloverServiceTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/indices/rollover/MetadataRolloverServiceTests.java @@ -51,6 +51,7 @@ import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.env.Environment; import org.elasticsearch.index.Index; @@ -296,7 +297,7 @@ public void testDataStreamValidation() throws IOException { public void testGenerateRolloverIndexName() { String invalidIndexName = randomAlphaOfLength(10) + "A"; - IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); expectThrows(IllegalArgumentException.class, () -> MetadataRolloverService.generateRolloverIndexName(invalidIndexName, indexNameExpressionResolver)); int num = randomIntBetween(0, 100); diff --git a/server/src/test/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsActionTests.java b/server/src/test/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsActionTests.java index db443cf97f126..8975cbac98656 100644 --- a/server/src/test/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/admin/indices/settings/get/GetSettingsActionTests.java @@ -31,6 +31,7 @@ import 
org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.common.settings.SettingsModule; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.tasks.Task; import org.elasticsearch.test.ESTestCase; @@ -132,6 +133,10 @@ public void testIncludeDefaultsWithFiltering() { } static class Resolver extends IndexNameExpressionResolver { + Resolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { return request.indices(); diff --git a/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionIngestTests.java b/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionIngestTests.java index aca9e9d495bf0..78f7dbabbe252 100644 --- a/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionIngestTests.java +++ b/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionIngestTests.java @@ -50,6 +50,7 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.common.util.concurrent.EsExecutors; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.IndexingPressure; @@ -145,9 +146,9 @@ class TestTransportBulkAction extends TransportBulkAction { null, new ActionFilters(Collections.emptySet()), null, new AutoCreateIndex( SETTINGS, new ClusterSettings(SETTINGS, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), - new IndexNameExpressionResolver(), - new SystemIndices(Map.of())), - new IndexingPressure(SETTINGS), new SystemIndices(Map.of()) + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + new SystemIndices(Map.of()) + ), new IndexingPressure(SETTINGS), new SystemIndices(Map.of()) ); } @Override diff --git a/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTookTests.java b/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTookTests.java index d0772f36c4039..792b1aed322ef 100644 --- a/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTookTests.java +++ b/server/src/test/java/org/elasticsearch/action/bulk/TransportBulkActionTookTests.java @@ -39,6 +39,7 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexingPressure; @@ -209,6 +210,10 @@ public void onFailure(Exception e) { } static class Resolver extends IndexNameExpressionResolver { + Resolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { return request.indices(); diff --git a/server/src/test/java/org/elasticsearch/action/get/TransportMultiGetActionTests.java b/server/src/test/java/org/elasticsearch/action/get/TransportMultiGetActionTests.java index 3a0f3ba116c9a..32e7de7f402b9 100644 --- a/server/src/test/java/org/elasticsearch/action/get/TransportMultiGetActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/get/TransportMultiGetActionTests.java @@ -38,6 +38,7 @@ 
import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; @@ -223,6 +224,10 @@ private static Task createTask() { static class Resolver extends IndexNameExpressionResolver { + Resolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public Index concreteSingleIndex(ClusterState state, IndicesRequest request) { return new Index("index1", randomBase64UUID()); diff --git a/server/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java b/server/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java index 19b53e2f8d380..fab9fd33c05ac 100644 --- a/server/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java +++ b/server/src/test/java/org/elasticsearch/action/search/MultiSearchActionTookTests.java @@ -32,6 +32,7 @@ import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.tasks.Task; import org.elasticsearch.tasks.TaskManager; @@ -191,6 +192,10 @@ void executeSearch(final Queue requests, final AtomicArray getResults() { } class MyResolver extends IndexNameExpressionResolver { + MyResolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { return request.indices(); diff --git a/server/src/test/java/org/elasticsearch/action/support/master/TransportMasterNodeActionTests.java b/server/src/test/java/org/elasticsearch/action/support/master/TransportMasterNodeActionTests.java index 77f617ca3f9fd..05bad21c4f0f6 100644 --- a/server/src/test/java/org/elasticsearch/action/support/master/TransportMasterNodeActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/support/master/TransportMasterNodeActionTests.java @@ -43,7 +43,9 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.discovery.MasterNotDiscoveredException; import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.rest.RestStatus; @@ -172,7 +174,7 @@ class Action extends TransportMasterNodeAction { Action(String actionName, TransportService transportService, ClusterService clusterService, ThreadPool threadPool) { super(actionName, transportService, clusterService, threadPool, - new ActionFilters(new HashSet<>()), Request::new, new IndexNameExpressionResolver()); + new ActionFilters(new HashSet<>()), Request::new, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } @Override diff --git a/server/src/test/java/org/elasticsearch/action/support/replication/BroadcastReplicationTests.java b/server/src/test/java/org/elasticsearch/action/support/replication/BroadcastReplicationTests.java index f2f376de3ca72..e154ea86f84a0 100644 --- 
a/server/src/test/java/org/elasticsearch/action/support/replication/BroadcastReplicationTests.java +++ b/server/src/test/java/org/elasticsearch/action/support/replication/BroadcastReplicationTests.java @@ -42,6 +42,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.PageCacheRecycler; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.core.internal.io.IOUtils; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.breaker.CircuitBreakerService; @@ -103,7 +104,7 @@ threadPool, new NetworkService(Collections.emptyList()), PageCacheRecycler.NON_R transportService.start(); transportService.acceptIncomingRequests(); broadcastReplicationAction = new TestBroadcastReplicationAction(clusterService, transportService, - new ActionFilters(new HashSet<>()), new IndexNameExpressionResolver()); + new ActionFilters(new HashSet<>()), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } @Override diff --git a/server/src/test/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationActionTests.java b/server/src/test/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationActionTests.java index 6ad629c9e56df..85554a381760a 100644 --- a/server/src/test/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/support/single/instance/TransportInstanceSingleOperationActionTests.java @@ -39,7 +39,9 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.test.ESTestCase; @@ -136,6 +138,10 @@ protected ShardIterator shards(ClusterState clusterState, Request request) { } class MyResolver extends IndexNameExpressionResolver { + MyResolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public String[] concreteIndexNames(ClusterState state, IndicesRequest request) { return request.indices(); diff --git a/server/src/test/java/org/elasticsearch/action/termvectors/TransportMultiTermVectorsActionTests.java b/server/src/test/java/org/elasticsearch/action/termvectors/TransportMultiTermVectorsActionTests.java index ba51419bfb928..92fe3f16f0718 100644 --- a/server/src/test/java/org/elasticsearch/action/termvectors/TransportMultiTermVectorsActionTests.java +++ b/server/src/test/java/org/elasticsearch/action/termvectors/TransportMultiTermVectorsActionTests.java @@ -39,6 +39,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; @@ -224,6 +225,10 @@ private static Task createTask() { static class Resolver extends IndexNameExpressionResolver { + Resolver() { + super(new ThreadContext(Settings.EMPTY)); + } + @Override public Index 
concreteSingleIndex(ClusterState state, IndicesRequest request) { return new Index("index1", randomBase64UUID()); diff --git a/server/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java b/server/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java index fadf86ff206a1..bac8d08a45f46 100644 --- a/server/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/ClusterModuleTests.java @@ -49,6 +49,7 @@ import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsModule; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.gateway.GatewayAllocator; import org.elasticsearch.plugins.ClusterPlugin; import org.elasticsearch.test.gateway.TestGatewayAllocator; @@ -63,8 +64,23 @@ public class ClusterModuleTests extends ModuleTestCase { private ClusterInfoService clusterInfoService = EmptyClusterInfoService.INSTANCE; - private ClusterService clusterService = new ClusterService(Settings.EMPTY, - new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), null); + private ClusterService clusterService; + private ThreadContext threadContext; + + @Override + public void setUp() throws Exception { + super.setUp(); + threadContext = new ThreadContext(Settings.EMPTY); + clusterService = new ClusterService(Settings.EMPTY, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), null); + } + + @Override + public void tearDown() throws Exception { + super.tearDown(); + clusterService.close(); + } + static class FakeAllocationDecider extends AllocationDecider { protected FakeAllocationDecider() { } @@ -119,7 +135,7 @@ public void testRegisterAllocationDeciderDuplicate() { public Collection createAllocationDeciders(Settings settings, ClusterSettings clusterSettings) { return Collections.singletonList(new EnableAllocationDecider(settings, clusterSettings)); } - }), clusterInfoService, null)); + }), clusterInfoService, null, threadContext)); assertEquals(e.getMessage(), "Cannot specify allocation decider [" + EnableAllocationDecider.class.getName() + "] twice"); } @@ -131,7 +147,7 @@ public void testRegisterAllocationDecider() { public Collection createAllocationDeciders(Settings settings, ClusterSettings clusterSettings) { return Collections.singletonList(new FakeAllocationDecider()); } - }), clusterInfoService, null); + }), clusterInfoService, null, threadContext); assertTrue(module.deciderList.stream().anyMatch(d -> d.getClass().equals(FakeAllocationDecider.class))); } @@ -143,7 +159,7 @@ public Map> getShardsAllocators(Settings setti return Collections.singletonMap(name, supplier); } } - ), clusterInfoService, null); + ), clusterInfoService, null, threadContext); } public void testRegisterShardsAllocator() { @@ -161,7 +177,7 @@ public void testRegisterShardsAllocatorAlreadyRegistered() { public void testUnknownShardsAllocator() { Settings settings = Settings.builder().put(ClusterModule.SHARDS_ALLOCATOR_TYPE_SETTING.getKey(), "dne").build(); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> - new ClusterModule(settings, clusterService, Collections.emptyList(), clusterInfoService, null)); + new ClusterModule(settings, clusterService, Collections.emptyList(), clusterInfoService, null, threadContext)); assertEquals("Unknown ShardsAllocator [dne]", e.getMessage()); } @@ -204,13 +220,14 @@ public void testAllocationDeciderOrder() { 
public void testRejectsReservedExistingShardsAllocatorName() { final ClusterModule clusterModule = new ClusterModule(Settings.EMPTY, clusterService, - List.of(existingShardsAllocatorPlugin(GatewayAllocator.ALLOCATOR_NAME)), clusterInfoService, null); + List.of(existingShardsAllocatorPlugin(GatewayAllocator.ALLOCATOR_NAME)), clusterInfoService, null, threadContext); expectThrows(IllegalArgumentException.class, () -> clusterModule.setExistingShardsAllocators(new TestGatewayAllocator())); } public void testRejectsDuplicateExistingShardsAllocatorName() { final ClusterModule clusterModule = new ClusterModule(Settings.EMPTY, clusterService, - List.of(existingShardsAllocatorPlugin("duplicate"), existingShardsAllocatorPlugin("duplicate")), clusterInfoService, null); + List.of(existingShardsAllocatorPlugin("duplicate"), existingShardsAllocatorPlugin("duplicate")), clusterInfoService, null, + threadContext); expectThrows(IllegalArgumentException.class, () -> clusterModule.setExistingShardsAllocators(new TestGatewayAllocator())); } diff --git a/server/src/test/java/org/elasticsearch/cluster/health/ClusterStateHealthTests.java b/server/src/test/java/org/elasticsearch/cluster/health/ClusterStateHealthTests.java index 39bb85fda8da9..86c8c77d38c12 100644 --- a/server/src/test/java/org/elasticsearch/cluster/health/ClusterStateHealthTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/health/ClusterStateHealthTests.java @@ -48,6 +48,7 @@ import org.elasticsearch.common.io.stream.BytesStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.gateway.TestGatewayAllocator; @@ -79,7 +80,8 @@ import static org.hamcrest.Matchers.lessThanOrEqualTo; public class ClusterStateHealthTests extends ESTestCase { - private final IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + private final IndexNameExpressionResolver indexNameExpressionResolver = + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); private static ThreadPool threadPool; diff --git a/server/src/test/java/org/elasticsearch/cluster/metadata/DateMathExpressionResolverTests.java b/server/src/test/java/org/elasticsearch/cluster/metadata/DateMathExpressionResolverTests.java index ee1a93646cfe3..527c6618f6bc8 100644 --- a/server/src/test/java/org/elasticsearch/cluster/metadata/DateMathExpressionResolverTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/metadata/DateMathExpressionResolverTests.java @@ -43,7 +43,8 @@ public class DateMathExpressionResolverTests extends ESTestCase { private final DateMathExpressionResolver expressionResolver = new DateMathExpressionResolver(); private final Context context = new Context( - ClusterState.builder(new ClusterName("_name")).build(), IndicesOptions.strictExpand() + ClusterState.builder(new ClusterName("_name")).build(), IndicesOptions.strictExpand(), + false ); public void testNormal() throws Exception { @@ -146,7 +147,7 @@ public void testExpression_CustomTimeZoneInIndexName() throws Exception { // rounding to today 00:00 now = DateTime.now(UTC).withHourOfDay(0).withMinuteOfHour(0).withSecondOfMinute(0); } - Context context = new Context(this.context.getState(), this.context.getOptions(), now.getMillis()); + Context context = new Context(this.context.getState(), this.context.getOptions(), 
now.getMillis(), false); List results = expressionResolver.resolve(context, Arrays.asList("<.marvel-{now/d{yyyy.MM.dd|" + timeZone.getID() + "}}>")); assertThat(results.size(), equalTo(1)); logger.info("timezone: [{}], now [{}], name: [{}]", timeZone, now, results.get(0)); diff --git a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexAbstractionTests.java b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexAbstractionTests.java index ca8033cbfdfe8..6c58692d305b7 100644 --- a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexAbstractionTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexAbstractionTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.Version; import org.elasticsearch.common.Nullable; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.VersionUtils; import java.util.Objects; @@ -32,16 +33,18 @@ public class IndexAbstractionTests extends ESTestCase { + public static final String SYSTEM_ALIAS_NAME = "system_alias"; + public void testHiddenAliasValidation() { final String hiddenAliasName = "hidden_alias"; AliasMetadata hiddenAliasMetadata = new AliasMetadata.Builder(hiddenAliasName).isHidden(true).build(); - IndexMetadata hidden1 = buildIndexWithAlias("hidden1", hiddenAliasName, true); - IndexMetadata hidden2 = buildIndexWithAlias("hidden2", hiddenAliasName, true); - IndexMetadata hidden3 = buildIndexWithAlias("hidden3", hiddenAliasName, true); + IndexMetadata hidden1 = buildIndexWithAlias("hidden1", hiddenAliasName, true, Version.CURRENT, false); + IndexMetadata hidden2 = buildIndexWithAlias("hidden2", hiddenAliasName, true, Version.CURRENT, false); + IndexMetadata hidden3 = buildIndexWithAlias("hidden3", hiddenAliasName, true, Version.CURRENT, false); - IndexMetadata indexWithNonHiddenAlias = buildIndexWithAlias("nonhidden1", hiddenAliasName, false); - IndexMetadata indexWithUnspecifiedAlias = buildIndexWithAlias("nonhidden2", hiddenAliasName, null); + IndexMetadata indexWithNonHiddenAlias = buildIndexWithAlias("nonhidden1", hiddenAliasName, false, Version.CURRENT, false); + IndexMetadata indexWithUnspecifiedAlias = buildIndexWithAlias("nonhidden2", hiddenAliasName, null, Version.CURRENT, false); { IndexAbstraction.Alias allHidden = new IndexAbstraction.Alias(hiddenAliasMetadata, hidden1); @@ -116,13 +119,97 @@ public void testHiddenAliasValidation() { } } - private IndexMetadata buildIndexWithAlias(String indexName, String aliasName, @Nullable Boolean aliasIsHidden) { + public void testSystemAliasValidationMixedVersionSystemAndRegularFails() { + final Version random7xVersion = VersionUtils.randomVersionBetween(random(), Version.V_7_0_0, + VersionUtils.getPreviousVersion(Version.V_8_0_0)); + final AliasMetadata aliasMetadata = new AliasMetadata.Builder(SYSTEM_ALIAS_NAME).build(); + final IndexMetadata currentVersionSystem = buildIndexWithAlias(".system1", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + final IndexMetadata oldVersionSystem = buildIndexWithAlias(".oldVersionSystem", SYSTEM_ALIAS_NAME, null, random7xVersion, true); + final IndexMetadata regularIndex = buildIndexWithAlias("regular1", SYSTEM_ALIAS_NAME, false, Version.CURRENT, false); + + IndexAbstraction.Alias mixedVersionSystemAndRegular = new IndexAbstraction.Alias(aliasMetadata, currentVersionSystem); + mixedVersionSystemAndRegular.addIndex(oldVersionSystem); + mixedVersionSystemAndRegular.addIndex(regularIndex); + IllegalStateException exception = expectThrows(IllegalStateException.class, + () -> 
mixedVersionSystemAndRegular.computeAndValidateAliasProperties()); + assertThat(exception.getMessage(), containsString("alias [" + SYSTEM_ALIAS_NAME + + "] refers to both system indices [" + currentVersionSystem.getIndex().getName() + "] and non-system indices: [" + + regularIndex.getIndex().getName() + "], but aliases must refer to either system or non-system indices, not both")); + } + + public void testSystemAliasValidationNewSystemAndRegularFails() { + final AliasMetadata aliasMetadata = new AliasMetadata.Builder(SYSTEM_ALIAS_NAME).build(); + final IndexMetadata currentVersionSystem = buildIndexWithAlias(".system1", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + final IndexMetadata regularIndex = buildIndexWithAlias("regular1", SYSTEM_ALIAS_NAME, false, Version.CURRENT, false); + + IndexAbstraction.Alias systemAndRegular = new IndexAbstraction.Alias(aliasMetadata, currentVersionSystem); + systemAndRegular.addIndex(regularIndex); + IllegalStateException exception = expectThrows(IllegalStateException.class, + () -> systemAndRegular.computeAndValidateAliasProperties()); + assertThat(exception.getMessage(), containsString("alias [" + SYSTEM_ALIAS_NAME + + "] refers to both system indices [" + currentVersionSystem.getIndex().getName() + "] and non-system indices: [" + + regularIndex.getIndex().getName() + "], but aliases must refer to either system or non-system indices, not both")); + } + + public void testSystemAliasOldSystemAndNewRegular() { + final Version random7xVersion = VersionUtils.randomVersionBetween(random(), Version.V_7_0_0, + VersionUtils.getPreviousVersion(Version.V_8_0_0)); + final AliasMetadata aliasMetadata = new AliasMetadata.Builder(SYSTEM_ALIAS_NAME).build(); + final IndexMetadata oldVersionSystem = buildIndexWithAlias(".oldVersionSystem", SYSTEM_ALIAS_NAME, null, random7xVersion, true); + final IndexMetadata regularIndex = buildIndexWithAlias("regular1", SYSTEM_ALIAS_NAME, false, Version.CURRENT, false); + + IndexAbstraction.Alias oldAndRegular = new IndexAbstraction.Alias(aliasMetadata, oldVersionSystem); + oldAndRegular.addIndex(regularIndex); + oldAndRegular.computeAndValidateAliasProperties(); // Should be ok + } + + public void testSystemIndexValidationAllRegular() { + final Version random7xVersion = VersionUtils.randomVersionBetween(random(), Version.V_7_0_0, + VersionUtils.getPreviousVersion(Version.V_8_0_0)); + final AliasMetadata aliasMetadata = new AliasMetadata.Builder(SYSTEM_ALIAS_NAME).build(); + final IndexMetadata currentVersionSystem = buildIndexWithAlias(".system1", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + final IndexMetadata currentVersionSystem2 = buildIndexWithAlias(".system2", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + final IndexMetadata oldVersionSystem = buildIndexWithAlias(".oldVersionSystem", SYSTEM_ALIAS_NAME, null, random7xVersion, true); + + IndexAbstraction.Alias allRegular = new IndexAbstraction.Alias(aliasMetadata, currentVersionSystem); + allRegular.addIndex(currentVersionSystem2); + allRegular.addIndex(oldVersionSystem); + allRegular.computeAndValidateAliasProperties(); // Should be ok + } + + public void testSystemAliasValidationAllSystemSomeOld() { + final Version random7xVersion = VersionUtils.randomVersionBetween(random(), Version.V_7_0_0, + VersionUtils.getPreviousVersion(Version.V_8_0_0)); + final AliasMetadata aliasMetadata = new AliasMetadata.Builder(SYSTEM_ALIAS_NAME).build(); + final IndexMetadata currentVersionSystem = buildIndexWithAlias(".system1", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + 
final IndexMetadata currentVersionSystem2 = buildIndexWithAlias(".system2", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + final IndexMetadata oldVersionSystem = buildIndexWithAlias(".oldVersionSystem", SYSTEM_ALIAS_NAME, null, random7xVersion, true); + + IndexAbstraction.Alias allSystemMixed = new IndexAbstraction.Alias(aliasMetadata, currentVersionSystem); + allSystemMixed.addIndex(currentVersionSystem2); + allSystemMixed.addIndex(oldVersionSystem); + allSystemMixed.computeAndValidateAliasProperties(); // Should be ok + } + + public void testSystemAliasValidationAll8x() { + final AliasMetadata aliasMetadata = new AliasMetadata.Builder(SYSTEM_ALIAS_NAME).build(); + final IndexMetadata currentVersionSystem = buildIndexWithAlias(".system1", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + final IndexMetadata currentVersionSystem2 = buildIndexWithAlias(".system2", SYSTEM_ALIAS_NAME, null, Version.CURRENT, true); + + IndexAbstraction.Alias allSystemCurrent = new IndexAbstraction.Alias(aliasMetadata, currentVersionSystem); + allSystemCurrent.addIndex(currentVersionSystem2); + allSystemCurrent.computeAndValidateAliasProperties(); // Should be ok + } + + private IndexMetadata buildIndexWithAlias(String indexName, String aliasName, @Nullable Boolean aliasIsHidden, + Version indexCreationVersion, boolean isSystem) { final AliasMetadata.Builder aliasMetadata = new AliasMetadata.Builder(aliasName); if (Objects.nonNull(aliasIsHidden) || randomBoolean()) { aliasMetadata.isHidden(aliasIsHidden); } return new IndexMetadata.Builder(indexName) - .settings(settings(Version.CURRENT)) + .settings(settings(indexCreationVersion)) + .system(isSystem) .numberOfShards(1) .numberOfReplicas(0) .putAlias(aliasMetadata) diff --git a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverAliasIterationTests.java b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverAliasIterationTests.java index 13d3cfd6cea95..5fd417c1eec50 100644 --- a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverAliasIterationTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverAliasIterationTests.java @@ -19,10 +19,13 @@ package org.elasticsearch.cluster.metadata; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; + public class IndexNameExpressionResolverAliasIterationTests extends IndexNameExpressionResolverTests { - protected IndexNameExpressionResolver getIndexNameExpressionResolver() { - return new IndexNameExpressionResolver() { + protected IndexNameExpressionResolver createIndexNameExpressionResolver() { + return new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) { @Override boolean iterateIndexAliases(int indexAliasesSize, int resolvedExpressionsSize) { return true; diff --git a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverExpressionsIterationTests.java b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverExpressionsIterationTests.java index 00d46aad0e8cd..79760be1fecbc 100644 --- a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverExpressionsIterationTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverExpressionsIterationTests.java @@ -19,10 +19,13 @@ package org.elasticsearch.cluster.metadata; +import org.elasticsearch.common.settings.Settings; +import 
org.elasticsearch.common.util.concurrent.ThreadContext; + public class IndexNameExpressionResolverExpressionsIterationTests extends IndexNameExpressionResolverTests { - protected IndexNameExpressionResolver getIndexNameExpressionResolver() { - return new IndexNameExpressionResolver() { + protected IndexNameExpressionResolver createIndexNameExpressionResolver() { + return new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) { @Override boolean iterateIndexAliases(int indexAliasesSize, int resolvedExpressionsSize) { return false; diff --git a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java index 8dcd79d781ad1..ade851cf15f9d 100644 --- a/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/metadata/IndexNameExpressionResolverTests.java @@ -26,6 +26,7 @@ import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.cluster.ClusterName; @@ -33,6 +34,7 @@ import org.elasticsearch.cluster.metadata.IndexMetadata.State; import org.elasticsearch.common.Strings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexSettings; @@ -48,10 +50,12 @@ import java.util.List; import java.util.Set; import java.util.function.Function; +import java.util.stream.Collectors; import static org.elasticsearch.cluster.DataStreamTestHelper.createBackingIndex; import static org.elasticsearch.cluster.DataStreamTestHelper.createTimestampField; import static org.elasticsearch.cluster.metadata.IndexMetadata.INDEX_HIDDEN_SETTING; +import static org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY; import static org.elasticsearch.common.util.set.Sets.newHashSet; import static org.hamcrest.Matchers.arrayContaining; import static org.hamcrest.Matchers.arrayContainingInAnyOrder; @@ -67,15 +71,21 @@ public class IndexNameExpressionResolverTests extends ESTestCase { private IndexNameExpressionResolver indexNameExpressionResolver; + private ThreadContext threadContext; - protected IndexNameExpressionResolver getIndexNameExpressionResolver() { - return new IndexNameExpressionResolver(); + private ThreadContext createThreadContext() { + return new ThreadContext(Settings.EMPTY); + } + + protected IndexNameExpressionResolver createIndexNameExpressionResolver(ThreadContext threadContext) { + return new IndexNameExpressionResolver(threadContext); } @Override public void setUp() throws Exception { super.setUp(); - indexNameExpressionResolver = getIndexNameExpressionResolver(); + threadContext = createThreadContext(); + indexNameExpressionResolver = createIndexNameExpressionResolver(threadContext); } public void testIndexOptionsStrict() { @@ -89,7 +99,7 @@ public void testIndexOptionsStrict() { IndicesOptions[] indicesOptions = new IndicesOptions[]{ IndicesOptions.strictExpandOpen(), IndicesOptions.strictExpand()}; for (IndicesOptions options : indicesOptions) 
{ - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo"); assertEquals(1, results.length); assertEquals("foo", results[0]); @@ -138,26 +148,27 @@ public void testIndexOptionsStrict() { assertEquals("foo", results[0]); } - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(3, results.length); results = indexNameExpressionResolver.concreteIndexNames(context, (String[])null); assertEquals(3, results.length); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand(), false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(4, results.length); results = indexNameExpressionResolver.concreteIndexNames(context, (String[])null); assertEquals(4, results.length); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*"); assertEquals(3, results.length); assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo")); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpand(), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*"); assertEquals(4, results.length); assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo", "foofoo-closed")); @@ -173,7 +184,7 @@ public void testIndexOptionsLenient() { IndicesOptions[] indicesOptions = new IndicesOptions[]{IndicesOptions.lenientExpandOpen(), IndicesOptions.lenientExpand()}; for (IndicesOptions options : indicesOptions) { - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo"); assertEquals(1, results.length); assertEquals("foo", results[0]); @@ -210,20 +221,21 @@ public void testIndexOptionsLenient() { assertEquals("foo", results[0]); } - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(3, results.length); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand(), false); results = indexNameExpressionResolver.concreteIndexNames(context, 
Strings.EMPTY_ARRAY); assertEquals(Arrays.toString(results), 4, results.length); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*"); assertEquals(3, results.length); assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo")); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpand(), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foofoo*"); assertEquals(4, results.length); assertThat(results, arrayContainingInAnyOrder("foo", "foobar", "foofoo", "foofoo-closed")); @@ -242,7 +254,7 @@ public void testIndexOptionsAllowUnavailableDisallowEmpty() { IndicesOptions[] indicesOptions = new IndicesOptions[]{expandOpen, expand}; for (IndicesOptions options : indicesOptions) { - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo"); assertEquals(1, results.length); assertEquals("foo", results[0]); @@ -264,11 +276,11 @@ public void testIndexOptionsAllowUnavailableDisallowEmpty() { } } - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, expandOpen); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, expandOpen, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(3, results.length); - context = new IndexNameExpressionResolver.Context(state, expand); + context = new IndexNameExpressionResolver.Context(state, expand, false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(4, results.length); } @@ -286,7 +298,7 @@ public void testIndexOptionsWildcardExpansion() { // Only closed IndicesOptions options = IndicesOptions.fromOptions(false, true, false, true, false); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(1, results.length); assertEquals("foo", results[0]); @@ -311,7 +323,7 @@ public void testIndexOptionsWildcardExpansion() { // Only open options = IndicesOptions.fromOptions(false, true, true, false, false); - context = new IndexNameExpressionResolver.Context(state, options); + context = new IndexNameExpressionResolver.Context(state, options, false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(2, results.length); assertThat(results, arrayContainingInAnyOrder("bar", "foobar")); @@ -335,7 +347,7 @@ public void testIndexOptionsWildcardExpansion() { // Open and closed options = IndicesOptions.fromOptions(false, true, true, true, false); - context = new IndexNameExpressionResolver.Context(state, options); + context = new IndexNameExpressionResolver.Context(state, options, false); results = indexNameExpressionResolver.concreteIndexNames(context, 
Strings.EMPTY_ARRAY); assertEquals(3, results.length); assertThat(results, arrayContainingInAnyOrder("bar", "foobar", "foo")); @@ -374,7 +386,7 @@ public void testIndexOptionsWildcardExpansion() { // open closed and hidden options = IndicesOptions.fromOptions(false, true, true, true, true); - context = new IndexNameExpressionResolver.Context(state, options); + context = new IndexNameExpressionResolver.Context(state, options, false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(7, results.length); assertThat(results, arrayContainingInAnyOrder("bar", "foobar", "foo", "hidden", "hidden-closed", ".hidden", ".hidden-closed")); @@ -416,7 +428,7 @@ public void testIndexOptionsWildcardExpansion() { // open and hidden options = IndicesOptions.fromOptions(false, true, true, false, true); - context = new IndexNameExpressionResolver.Context(state, options); + context = new IndexNameExpressionResolver.Context(state, options, false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(4, results.length); assertThat(results, arrayContainingInAnyOrder("bar", "foobar", "hidden", ".hidden")); @@ -435,7 +447,7 @@ public void testIndexOptionsWildcardExpansion() { // closed and hidden options = IndicesOptions.fromOptions(false, true, false, true, true); - context = new IndexNameExpressionResolver.Context(state, options); + context = new IndexNameExpressionResolver.Context(state, options, false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertEquals(3, results.length); assertThat(results, arrayContainingInAnyOrder("foo", "hidden-closed", ".hidden-closed")); @@ -454,7 +466,7 @@ public void testIndexOptionsWildcardExpansion() { // only hidden options = IndicesOptions.fromOptions(false, true, false, false, true); - context = new IndexNameExpressionResolver.Context(state, options); + context = new IndexNameExpressionResolver.Context(state, options, false); results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertThat(results, emptyArray()); @@ -468,7 +480,7 @@ public void testIndexOptionsWildcardExpansion() { assertThat(results, arrayContainingInAnyOrder("hidden-closed")); options = IndicesOptions.fromOptions(false, false, true, true, true); - IndexNameExpressionResolver.Context context2 = new IndexNameExpressionResolver.Context(state, options); + IndexNameExpressionResolver.Context context2 = new IndexNameExpressionResolver.Context(state, options, false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context2, "-*")); assertThat(infe.getResourceId().toString(), equalTo("[-*]")); @@ -485,7 +497,7 @@ public void testIndexOptionsNoExpandWildcards() { //ignore unavailable and allow no indices { IndicesOptions noExpandLenient = IndicesOptions.fromOptions(true, true, false, false, randomBoolean()); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandLenient); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandLenient, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "baz*"); assertThat(results, emptyArray()); @@ -507,7 +519,7 @@ public void testIndexOptionsNoExpandWildcards() { //ignore unavailable but don't allow no indices { IndicesOptions noExpandDisallowEmpty = IndicesOptions.fromOptions(true, false, false, false, 
randomBoolean()); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandDisallowEmpty); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandDisallowEmpty, false); { IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, @@ -532,7 +544,7 @@ public void testIndexOptionsNoExpandWildcards() { //error on unavailable but allow no indices { IndicesOptions noExpandErrorUnavailable = IndicesOptions.fromOptions(false, true, false, false, randomBoolean()); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandErrorUnavailable); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandErrorUnavailable, false); { String[] results = indexNameExpressionResolver.concreteIndexNames(context, "baz*"); assertThat(results, emptyArray()); @@ -558,7 +570,7 @@ public void testIndexOptionsNoExpandWildcards() { //error on both unavailable and no indices { IndicesOptions noExpandStrict = IndicesOptions.fromOptions(false, false, false, false, randomBoolean()); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandStrict); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, noExpandStrict, false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "baz*")); assertThat(infe.getIndex().getName(), equalTo("baz*")); @@ -585,7 +597,7 @@ public void testIndexOptionsSingleIndexNoExpandWildcards() { { IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "baz*")); assertThat(infe.getIndex().getName(), equalTo("baz*")); @@ -593,7 +605,7 @@ public void testIndexOptionsSingleIndexNoExpandWildcards() { { IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "foo", "baz*")); assertThat(infe.getIndex().getName(), equalTo("baz*")); @@ -601,7 +613,7 @@ public void testIndexOptionsSingleIndexNoExpandWildcards() { { IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "foofoobar")); assertThat(e.getMessage(), containsString("alias [foofoobar] has more than one index associated with it")); @@ -609,7 +621,7 @@ public void testIndexOptionsSingleIndexNoExpandWildcards() { { IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed()); + new 
IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false); IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "foo", "foofoobar")); assertThat(e.getMessage(), containsString("alias [foofoobar] has more than one index associated with it")); @@ -617,7 +629,7 @@ public void testIndexOptionsSingleIndexNoExpandWildcards() { { IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false); IndexClosedException ince = expectThrows(IndexClosedException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "foofoo-closed", "foofoobar")); assertThat(ince.getMessage(), equalTo("closed")); @@ -625,7 +637,7 @@ public void testIndexOptionsSingleIndexNoExpandWildcards() { } IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictSingleIndexNoExpandForbidClosed(), false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo", "barbaz"); assertEquals(2, results.length); assertThat(results, arrayContainingInAnyOrder("foo", "foofoo")); @@ -635,7 +647,7 @@ public void testIndexOptionsEmptyCluster() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(Metadata.builder().build()).build(); IndicesOptions options = IndicesOptions.strictExpandOpen(); - final IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options); + final IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, Strings.EMPTY_ARRAY); assertThat(results, emptyArray()); @@ -656,7 +668,7 @@ public void testIndexOptionsEmptyCluster() { final IndexNameExpressionResolver.Context context2 = - new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); results = indexNameExpressionResolver.concreteIndexNames(context2, Strings.EMPTY_ARRAY); assertThat(results, emptyArray()); results = indexNameExpressionResolver.concreteIndexNames(context2, "foo"); @@ -667,7 +679,7 @@ public void testIndexOptionsEmptyCluster() { assertThat(results, emptyArray()); final IndexNameExpressionResolver.Context context3 = - new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, false, true, false)); + new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, false, true, false), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context3, Strings.EMPTY_ARRAY)); assertThat(infe.getResourceId().toString(), equalTo("[_all]")); @@ -692,7 +704,8 @@ public void testConcreteIndicesIgnoreIndicesOneMissingIndex() { .put(indexBuilder("testXXX")) .put(indexBuilder("kuku")); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen()); + 
IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "testZZZ")); @@ -704,7 +717,8 @@ public void testConcreteIndicesIgnoreIndicesOneMissingIndexOtherFound() { .put(indexBuilder("testXXX")) .put(indexBuilder("kuku")); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testXXX", "testZZZ")), equalTo(newHashSet("testXXX"))); @@ -715,7 +729,8 @@ public void testConcreteIndicesIgnoreIndicesAllMissing() { .put(indexBuilder("testXXX")) .put(indexBuilder("kuku")); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "testMo", "testMahdy")); @@ -727,7 +742,8 @@ public void testConcreteIndicesIgnoreIndicesEmptyRequest() { .put(indexBuilder("testXXX")) .put(indexBuilder("kuku")); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, new String[]{})), equalTo(newHashSet("kuku", "testXXX"))); } @@ -735,7 +751,7 @@ public void testConcreteIndicesNoIndicesErrorMessage() { Metadata.Builder mdBuilder = Metadata.builder(); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, - IndicesOptions.fromOptions(false, false, true, true)); + IndicesOptions.fromOptions(false, false, true, true), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndices(context, new String[]{})); assertThat(infe.getMessage(), is("no such index [null] and no indices exist")); @@ -745,7 +761,7 @@ public void testConcreteIndicesNoIndicesErrorMessageNoExpand() { Metadata.Builder mdBuilder = Metadata.builder(); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, - IndicesOptions.fromOptions(false, false, false, false)); + IndicesOptions.fromOptions(false, false, false, false), false); IndexNotFoundException infe = expectThrows(IndexNotFoundException.class, () -> indexNameExpressionResolver.concreteIndices(context, new String[]{})); assertThat(infe.getMessage(), 
is("no such index [_all] and no indices exist")); @@ -761,16 +777,16 @@ public void testConcreteIndicesWildcardExpansion() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.Context context = - new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, false)); + new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, false), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")), equalTo(new HashSet())); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false)); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")), equalTo(newHashSet("testXXX", "testXXY"))); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true)); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")), equalTo(newHashSet("testXYY"))); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, true)); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, true), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")), equalTo(newHashSet("testXXX", "testXXY", "testXYY"))); } @@ -788,7 +804,7 @@ public void testConcreteIndicesWildcardWithNegation() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, - IndicesOptions.fromOptions(true, true, true, true)); + IndicesOptions.fromOptions(true, true, true, true), false); assertThat(newHashSet(indexNameExpressionResolver.concreteIndexNames(context, "testX*")), equalTo(newHashSet("testXXX", "testXXY", "testXYY"))); @@ -1076,7 +1092,7 @@ public void testConcreteIndicesAllPatternRandom() { { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(Metadata.builder().build()).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions, false); // with no indices, asking for all indices should return empty list or exception, depending on indices options if (indicesOptions.allowNoIndices()) { @@ -1095,7 +1111,7 @@ public void testConcreteIndicesAllPatternRandom() { .put(indexBuilder("bbb").state(State.OPEN).putAlias(AliasMetadata.builder("bbb_alias1"))) .put(indexBuilder("ccc").state(State.CLOSE).putAlias(AliasMetadata.builder("ccc_alias1"))); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions, false); if (indicesOptions.expandWildcardsOpen() || indicesOptions.expandWildcardsClosed() || indicesOptions.allowNoIndices()) { String[] concreteIndices = 
indexNameExpressionResolver.concreteIndexNames(context, allIndices); assertThat(concreteIndices, notNullValue()); @@ -1125,7 +1141,7 @@ public void testConcreteIndicesWildcardNoMatch() { .put(indexBuilder("bbb").state(State.OPEN).putAlias(AliasMetadata.builder("bbb_alias1"))) .put(indexBuilder("ccc").state(State.CLOSE).putAlias(AliasMetadata.builder("ccc_alias1"))); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, indicesOptions, false); // asking for non existing wildcard pattern should return empty list or exception if (indicesOptions.allowNoIndices()) { @@ -1254,20 +1270,20 @@ public void testIndexOptionsFailClosedIndicesAndAliases() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.Context contextICE = - new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed()); + new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed(), false); expectThrows(IndexClosedException.class, () -> indexNameExpressionResolver.concreteIndexNames(contextICE, "foo1-closed")); expectThrows(IndexClosedException.class, () -> indexNameExpressionResolver.concreteIndexNames(contextICE, "foobar1-closed")); IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, contextICE.getOptions().allowNoIndices(), contextICE.getOptions().expandWildcardsOpen(), - contextICE.getOptions().expandWildcardsClosed(), contextICE.getOptions())); + contextICE.getOptions().expandWildcardsClosed(), contextICE.getOptions()), false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "foo1-closed"); assertThat(results, emptyArray()); results = indexNameExpressionResolver.concreteIndexNames(context, "foobar1-closed"); assertThat(results, emptyArray()); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foo1-closed"); assertThat(results, arrayWithSize(1)); assertThat(results, arrayContaining("foo1-closed")); @@ -1277,7 +1293,7 @@ public void testIndexOptionsFailClosedIndicesAndAliases() { assertThat(results, arrayContaining("foo1-closed")); // testing an alias pointing to three indices: - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpenAndForbidClosed(), false); try { indexNameExpressionResolver.concreteIndexNames(context, "foobar2-closed"); fail("foo2-closed should be closed, but it is open"); @@ -1287,12 +1303,12 @@ public void testIndexOptionsFailClosedIndicesAndAliases() { context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, context.getOptions().allowNoIndices(), context.getOptions().expandWildcardsOpen(), - context.getOptions().expandWildcardsClosed(), context.getOptions())); + context.getOptions().expandWildcardsClosed(), context.getOptions()), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foobar2-closed"); 
assertThat(results, arrayWithSize(1)); assertThat(results, arrayContaining("foo3")); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); results = indexNameExpressionResolver.concreteIndexNames(context, "foobar2-closed"); assertThat(results, arrayWithSize(3)); assertThat(results, arrayContainingInAnyOrder("foo1-closed", "foo2-closed", "foo3")); @@ -1305,7 +1321,7 @@ public void testDedupConcreteIndices() { IndicesOptions[] indicesOptions = new IndicesOptions[]{ IndicesOptions.strictExpandOpen(), IndicesOptions.strictExpand(), IndicesOptions.lenientExpandOpen(), IndicesOptions.strictExpandOpenAndForbidClosed()}; for (IndicesOptions options : indicesOptions) { - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options); + IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, options, false); String[] results = indexNameExpressionResolver.concreteIndexNames(context, "index1", "index1", "alias1"); assertThat(results, equalTo(new String[]{"index1"})); } @@ -1325,11 +1341,12 @@ public void testFilterClosedIndicesOnAliases() { .put(indexBuilder("test-1").state(IndexMetadata.State.CLOSE).putAlias(AliasMetadata.builder("alias-1"))); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); String[] strings = indexNameExpressionResolver.concreteIndexNames(context, "alias-*"); assertArrayEquals(new String[] {"test-0"}, strings); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen()); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.strictExpandOpen(), false); strings = indexNameExpressionResolver.concreteIndexNames(context, "alias-*"); assertArrayEquals(new String[] {"test-0"}, strings); @@ -1739,7 +1756,8 @@ public void testIndicesAliasesRequestTargetDataStreams() { public void testInvalidIndex() { Metadata.Builder mdBuilder = Metadata.builder().put(indexBuilder("test")); ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); InvalidIndexNameException iine = expectThrows(InvalidIndexNameException.class, () -> indexNameExpressionResolver.concreteIndexNames(context, "_foo")); @@ -1810,6 +1828,86 @@ public void testIgnoreThrottled() { } } + public void testFullWildcardSystemIndexResolutionAllowed() { + ClusterState state = systemIndexTestClusterState(); + SearchRequest request = new SearchRequest(randomFrom("*", "_all")); + + List indexNames = resolveConcreteIndexNameList(state, request); + assertThat(indexNames, containsInAnyOrder("some-other-index", ".ml-stuff", ".ml-meta", ".watches")); + } + + public void testWildcardSystemIndexResolutionMultipleMatchesAllowed() { + ClusterState state = systemIndexTestClusterState(); + SearchRequest request = new SearchRequest(".w*"); + + List indexNames = 
resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder(".watches"));
+    }
+
+    public void testWildcardSystemIndexResolutionSingleMatchAllowed() {
+        ClusterState state = systemIndexTestClusterState();
+        SearchRequest request = new SearchRequest(".ml-*");
+
+        List indexNames = resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder(".ml-meta", ".ml-stuff"));
+    }
+
+    public void testSingleSystemIndexResolutionAllowed() {
+        ClusterState state = systemIndexTestClusterState();
+        SearchRequest request = new SearchRequest(".ml-meta");
+
+        List indexNames = resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder(".ml-meta"));
+    }
+
+    public void testFullWildcardSystemIndexResolutionDeprecated() {
+        threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+        ClusterState state = systemIndexTestClusterState();
+        SearchRequest request = new SearchRequest(randomFrom("*", "_all"));
+
+        List indexNames = resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder("some-other-index", ".ml-stuff", ".ml-meta", ".watches"));
+        assertWarnings("this request accesses system indices: [.ml-meta, .ml-stuff, .watches], but in a future major version, " +
+            "direct access to system indices will be prevented by default");
+
+    }
+
+    public void testSingleSystemIndexResolutionDeprecated() {
+        threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+        ClusterState state = systemIndexTestClusterState();
+        SearchRequest request = new SearchRequest(".ml-meta");
+
+        List indexNames = resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder(".ml-meta"));
+        assertWarnings("this request accesses system indices: [.ml-meta], but in a future major version, direct access " +
+            "to system indices will be prevented by default");
+
+    }
+
+    public void testWildcardSystemIndexResolutionSingleMatchDeprecated() {
+        threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+        ClusterState state = systemIndexTestClusterState();
+        SearchRequest request = new SearchRequest(".w*");
+
+        List indexNames = resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder(".watches"));
+        assertWarnings("this request accesses system indices: [.watches], but in a future major version, direct access " +
+            "to system indices will be prevented by default");
+
+    }
+
+    public void testWildcardSystemIndexResolutionMultipleMatchesDeprecated() {
+        threadContext.putHeader(SYSTEM_INDEX_ACCESS_CONTROL_HEADER_KEY, Boolean.FALSE.toString());
+        ClusterState state = systemIndexTestClusterState();
+        SearchRequest request = new SearchRequest(".ml-*");
+
+        List indexNames = resolveConcreteIndexNameList(state, request);
+        assertThat(indexNames, containsInAnyOrder(".ml-meta", ".ml-stuff"));
+        assertWarnings("this request accesses system indices: [.ml-meta, .ml-stuff], but in a future major version, direct access " +
+            "to system indices will be prevented by default");
+
+    }
+
     public void testDataStreams() {
         final String dataStreamName = "my-data-stream";
         IndexMetadata index1 = createBackingIndex(dataStreamName, 1).build();
@@ -2043,4 +2141,21 @@ public void testDataStreamsNames() {
         names = indexNameExpressionResolver.dataStreamNames(state, IndicesOptions.lenientExpand(), "*", "-*");
         assertThat(names, empty());
     }
+
+    private ClusterState systemIndexTestClusterState() {
+        Settings settings
= Settings.builder().build(); + Metadata.Builder mdBuilder = Metadata.builder() + .put(indexBuilder(".ml-meta", settings).state(State.OPEN).system(true)) + .put(indexBuilder(".watches", settings).state(State.OPEN).system(true)) + .put(indexBuilder(".ml-stuff", settings).state(State.OPEN).system(true)) + .put(indexBuilder("some-other-index").state(State.OPEN)); + return ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); + } + + private List resolveConcreteIndexNameList(ClusterState state, SearchRequest request) { + return Arrays + .stream(indexNameExpressionResolver.concreteIndices(state, request)) + .map(i -> i.getName()) + .collect(Collectors.toList()); + } } diff --git a/server/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java b/server/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java index b68db18a1cec2..b0fd3c2523280 100644 --- a/server/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java +++ b/server/src/test/java/org/elasticsearch/cluster/metadata/WildcardExpressionResolverTests.java @@ -48,7 +48,8 @@ public void testConvertWildcardsJustIndicesTests() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testXXX"))), equalTo(newHashSet("testXXX"))); assertThat(newHashSet(resolver.resolve(context, Arrays.asList("testXXX", "testYYY"))), equalTo(newHashSet("testXXX", "testYYY"))); assertThat(newHashSet(resolver.resolve(context, Arrays.asList("testXXX", "ku*"))), equalTo(newHashSet("testXXX", "kuku"))); @@ -76,7 +77,8 @@ public void testConvertWildcardsTests() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); assertThat(newHashSet(resolver.resolve(context, Arrays.asList("testYY*", "alias*"))), equalTo(newHashSet("testXXX", "testXYY", "testYYY"))); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("-kuku"))), equalTo(newHashSet("-kuku"))); @@ -99,12 +101,12 @@ public void testConvertWildcardsOpenClosedIndicesTests() { IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver(); IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, - IndicesOptions.fromOptions(true, true, true, true)); + IndicesOptions.fromOptions(true, true, true, true), false); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testX*"))), equalTo(newHashSet("testXXX", "testXXY", "testXYY"))); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, 
false, true)); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, false, true), false); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testX*"))), equalTo(newHashSet("testXYY"))); - context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false)); + context = new IndexNameExpressionResolver.Context(state, IndicesOptions.fromOptions(true, true, true, false), false); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("testX*"))), equalTo(newHashSet("testXXX", "testXXY"))); } @@ -121,7 +123,8 @@ public void testMultipleWildcards() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("test*X*"))), equalTo(newHashSet("testXXX", "testXXY", "testXYY"))); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("test*X*Y"))), equalTo(newHashSet("testXXY", "testXYY"))); @@ -140,7 +143,8 @@ public void testAll() { ClusterState state = ClusterState.builder(new ClusterName("_name")).metadata(mdBuilder).build(); IndexNameExpressionResolver.WildcardExpressionResolver resolver = new IndexNameExpressionResolver.WildcardExpressionResolver(); - IndexNameExpressionResolver.Context context = new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen()); + IndexNameExpressionResolver.Context context = + new IndexNameExpressionResolver.Context(state, IndicesOptions.lenientExpandOpen(), false); assertThat(newHashSet(resolver.resolve(context, Collections.singletonList("_all"))), equalTo(newHashSet("testXXX", "testXYY", "testYYY"))); } @@ -158,15 +162,15 @@ public void testResolveAliases() { IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, false, false, false); IndexNameExpressionResolver.Context indicesAndAliasesContext = - new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions); + new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions, false); // ignoreAliases option is set, WildcardExpressionResolver throws error when IndicesOptions skipAliasesIndicesOptions = IndicesOptions.fromOptions(true, true, true, false, true, false, true, false); IndexNameExpressionResolver.Context skipAliasesLenientContext = - new IndexNameExpressionResolver.Context(state, skipAliasesIndicesOptions); + new IndexNameExpressionResolver.Context(state, skipAliasesIndicesOptions, false); // ignoreAliases option is set, WildcardExpressionResolver resolves the provided expressions only against the defined indices IndicesOptions errorOnAliasIndicesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true, false); IndexNameExpressionResolver.Context skipAliasesStrictContext = - new IndexNameExpressionResolver.Context(state, errorOnAliasIndicesOptions); + new IndexNameExpressionResolver.Context(state, errorOnAliasIndicesOptions, false); { List indices = resolver.resolve(indicesAndAliasesContext, 
Collections.singletonList("foo_a*")); @@ -232,7 +236,7 @@ public void testResolveDataStreams() { IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, false, false, false); IndexNameExpressionResolver.Context indicesAndAliasesContext = - new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions); + new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions, false); // data streams are not included but expression matches the data stream List indices = resolver.resolve(indicesAndAliasesContext, Collections.singletonList("foo_*")); @@ -247,7 +251,7 @@ public void testResolveDataStreams() { IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, false, false, false); IndexNameExpressionResolver.Context indicesAliasesAndDataStreamsContext = new IndexNameExpressionResolver.Context(state, - indicesAndAliasesOptions, false, false, true); + indicesAndAliasesOptions, false, false, true, false); // data stream's corresponding backing indices are resolved List indices = resolver.resolve(indicesAliasesAndDataStreamsContext, Collections.singletonList("foo_*")); @@ -264,7 +268,7 @@ public void testResolveDataStreams() { IndicesOptions indicesAliasesAndExpandHiddenOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), true, false, true, true, false, false, false); IndexNameExpressionResolver.Context indicesAliasesDataStreamsAndHiddenIndices = new IndexNameExpressionResolver.Context(state, - indicesAliasesAndExpandHiddenOptions, false, false, true); + indicesAliasesAndExpandHiddenOptions, false, false, true, false); // data stream's corresponding backing indices are resolved List indices = resolver.resolve(indicesAliasesDataStreamsAndHiddenIndices, Collections.singletonList("foo_*")); @@ -290,12 +294,12 @@ public void testMatchesConcreteIndicesWildcardAndAliases() { // expressions against the defined indices and aliases IndicesOptions indicesAndAliasesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, false, false); IndexNameExpressionResolver.Context indicesAndAliasesContext = - new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions); + new IndexNameExpressionResolver.Context(state, indicesAndAliasesOptions, false); // ignoreAliases option is set, WildcardExpressionResolver resolves the provided expressions // only against the defined indices IndicesOptions onlyIndicesOptions = IndicesOptions.fromOptions(false, false, true, false, true, false, true, false); - IndexNameExpressionResolver.Context onlyIndicesContext = new IndexNameExpressionResolver.Context(state, onlyIndicesOptions); + IndexNameExpressionResolver.Context onlyIndicesContext = new IndexNameExpressionResolver.Context(state, onlyIndicesOptions, false); { Set matches = IndexNameExpressionResolver.WildcardExpressionResolver.matches(indicesAndAliasesContext, diff --git a/server/src/test/java/org/elasticsearch/index/IndexModuleTests.java b/server/src/test/java/org/elasticsearch/index/IndexModuleTests.java index c2a8425134cf3..19b526b30a182 100644 --- a/server/src/test/java/org/elasticsearch/index/IndexModuleTests.java +++ b/server/src/test/java/org/elasticsearch/index/IndexModuleTests.java @@ -48,6 +48,7 @@ import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.PageCacheRecycler; import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; +import 
org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.core.internal.io.IOUtils; import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; @@ -179,7 +180,7 @@ public void testWrapperIsBound() throws IOException { engineFactory, Collections.emptyMap(), () -> true, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), Collections.emptyMap()); module.setReaderWrapper(s -> new Wrapper()); @@ -201,7 +202,7 @@ public void testRegisterIndexStore() throws IOException { final Map indexStoreFactories = singletonMap( "foo_store", new FooFunction()); final IndexModule module = new IndexModule(indexSettings, emptyAnalysisRegistry, new InternalEngineFactory(), indexStoreFactories, - () -> true, new IndexNameExpressionResolver(), Collections.emptyMap()); + () -> true, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), Collections.emptyMap()); final IndexService indexService = newIndexService(module); assertThat(indexService.getDirectoryFactory(), instanceOf(FooFunction.class)); @@ -514,7 +515,7 @@ public void testRegisterCustomRecoveryStateFactory() throws IOException { new InternalEngineFactory(), Collections.emptyMap(), () -> true, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), recoveryStateFactories); final IndexService indexService = newIndexService(module); @@ -535,7 +536,7 @@ private ShardRouting createInitializedShardRouting() { private static IndexModule createIndexModule(IndexSettings indexSettings, AnalysisRegistry emptyAnalysisRegistry) { return new IndexModule(indexSettings, emptyAnalysisRegistry, new InternalEngineFactory(), Collections.emptyMap(), () -> true, - new IndexNameExpressionResolver(), Collections.emptyMap()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), Collections.emptyMap()); } class CustomQueryCache implements QueryCache { diff --git a/server/src/test/java/org/elasticsearch/index/query/SearchIndexNameMatcherTests.java b/server/src/test/java/org/elasticsearch/index/query/SearchIndexNameMatcherTests.java index eacb0f641404f..3a3a60287c89c 100644 --- a/server/src/test/java/org/elasticsearch/index/query/SearchIndexNameMatcherTests.java +++ b/server/src/test/java/org/elasticsearch/index/query/SearchIndexNameMatcherTests.java @@ -28,6 +28,7 @@ import org.elasticsearch.cluster.metadata.Metadata; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.test.ESTestCase; import org.junit.Before; @@ -49,8 +50,10 @@ public void setUpMatchers() { ClusterService clusterService = mock(ClusterService.class); when(clusterService.state()).thenReturn(state); - matcher = new SearchIndexNameMatcher("index1", "", clusterService, new IndexNameExpressionResolver()); - remoteMatcher = new SearchIndexNameMatcher("index1", "cluster", clusterService, new IndexNameExpressionResolver()); + matcher = new SearchIndexNameMatcher("index1", "", clusterService, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); + remoteMatcher = new SearchIndexNameMatcher("index1", "cluster", clusterService, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } private static IndexMetadata.Builder indexBuilder(String index) { diff --git a/server/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java 
b/server/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java index 290281ef94fa8..7468718025f28 100644 --- a/server/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java +++ b/server/src/test/java/org/elasticsearch/indices/cluster/ClusterStateChanges.java @@ -83,6 +83,7 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; import org.elasticsearch.env.TestEnvironment; @@ -155,7 +156,7 @@ public ClusterStateChanges(NamedXContentRegistry xContentRegistry, ThreadPool th shardStartedClusterStateTaskExecutor = new ShardStateAction.ShardStartedClusterStateTaskExecutor(allocationService, null, logger); ActionFilters actionFilters = new ActionFilters(Collections.emptySet()); - IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); DestructiveOperations destructiveOperations = new DestructiveOperations(SETTINGS, clusterSettings); Environment environment = TestEnvironment.newEnvironment(SETTINGS); Transport transport = mock(Transport.class); // it's not used diff --git a/server/src/test/java/org/elasticsearch/rest/BaseRestHandlerTests.java b/server/src/test/java/org/elasticsearch/rest/BaseRestHandlerTests.java index c412af527507f..f5a44e40c1057 100644 --- a/server/src/test/java/org/elasticsearch/rest/BaseRestHandlerTests.java +++ b/server/src/test/java/org/elasticsearch/rest/BaseRestHandlerTests.java @@ -22,6 +22,7 @@ import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.Table; import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.json.JsonXContent; @@ -29,6 +30,8 @@ import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.rest.FakeRestChannel; import org.elasticsearch.test.rest.FakeRestRequest; +import org.elasticsearch.threadpool.TestThreadPool; +import org.elasticsearch.threadpool.ThreadPool; import java.io.IOException; import java.util.Collections; @@ -39,9 +42,24 @@ import static org.hamcrest.core.StringContains.containsString; import static org.hamcrest.object.HasToString.hasToString; -import static org.mockito.Mockito.mock; public class BaseRestHandlerTests extends ESTestCase { + private NodeClient mockClient; + private ThreadPool threadPool; + + @Override + public void setUp() throws Exception { + super.setUp(); + threadPool = new TestThreadPool(this.getClass().getSimpleName() + "ThreadPool"); + mockClient = new NodeClient(Settings.EMPTY, threadPool); + } + + @Override + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + mockClient.close(); + } public void testOneUnconsumedParameters() throws Exception { final AtomicBoolean executed = new AtomicBoolean(); @@ -69,7 +87,7 @@ public List routes() { RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build(); RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); final IllegalArgumentException e = - 
expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class))); + expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient)); assertThat(e, hasToString(containsString("request [/] contains unrecognized parameter: [unconsumed]"))); assertFalse(executed.get()); } @@ -101,7 +119,7 @@ public List routes() { RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build(); RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); final IllegalArgumentException e = - expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class))); + expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient)); assertThat(e, hasToString(containsString("request [/] contains unrecognized parameters: [unconsumed-first], [unconsumed-second]"))); assertFalse(executed.get()); } @@ -145,7 +163,7 @@ public List routes() { RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build(); RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); final IllegalArgumentException e = - expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class))); + expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient)); assertThat( e, hasToString(containsString( @@ -188,7 +206,7 @@ public List routes() { params.put("response_param", randomAlphaOfLength(8)); RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build(); RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); - handler.handleRequest(request, channel, mock(NodeClient.class)); + handler.handleRequest(request, channel, mockClient); assertTrue(executed.get()); } @@ -218,7 +236,7 @@ public List routes() { params.put("human", null); RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build(); RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); - handler.handleRequest(request, channel, mock(NodeClient.class)); + handler.handleRequest(request, channel, mockClient); assertTrue(executed.get()); } @@ -262,7 +280,7 @@ public List routes() { params.put("time", randomAlphaOfLength(8)); RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).withParams(params).build(); RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); - handler.handleRequest(request, channel, mock(NodeClient.class)); + handler.handleRequest(request, channel, mockClient); assertTrue(executed.get()); } @@ -291,7 +309,7 @@ public List routes() { .withContent(new BytesArray(builder.toString()), XContentType.JSON) .build(); final RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); - handler.handleRequest(request, channel, mock(NodeClient.class)); + handler.handleRequest(request, channel, mockClient); assertTrue(executed.get()); } } @@ -317,7 +335,7 @@ public List routes() { final RestRequest request = new FakeRestRequest.Builder(xContentRegistry()).build(); final RestChannel channel = new FakeRestChannel(request, randomBoolean(), 1); - handler.handleRequest(request, channel, mock(NodeClient.class)); + handler.handleRequest(request, channel, mockClient); assertTrue(executed.get()); } @@ -346,7 +364,7 @@ public List routes() { .build(); final RestChannel channel = new 
FakeRestChannel(request, randomBoolean(), 1); final IllegalArgumentException e = - expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mock(NodeClient.class))); + expectThrows(IllegalArgumentException.class, () -> handler.handleRequest(request, channel, mockClient)); assertThat(e, hasToString(containsString("request [GET /] does not support having a body"))); assertFalse(executed.get()); } diff --git a/server/src/test/java/org/elasticsearch/rest/RestControllerTests.java b/server/src/test/java/org/elasticsearch/rest/RestControllerTests.java index 8c8e83e5bfe42..0b756acb5ef86 100644 --- a/server/src/test/java/org/elasticsearch/rest/RestControllerTests.java +++ b/server/src/test/java/org/elasticsearch/rest/RestControllerTests.java @@ -34,6 +34,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.common.xcontent.yaml.YamlXContent; +import org.elasticsearch.core.internal.io.IOUtils; import org.elasticsearch.http.HttpInfo; import org.elasticsearch.http.HttpRequest; import org.elasticsearch.http.HttpResponse; @@ -41,8 +42,10 @@ import org.elasticsearch.http.HttpStats; import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestRequest; import org.elasticsearch.usage.UsageService; +import org.junit.After; import org.junit.Before; import java.io.IOException; @@ -76,6 +79,7 @@ public class RestControllerTests extends ESTestCase { private RestController restController; private HierarchyCircuitBreakerService circuitBreakerService; private UsageService usageService; + private NodeClient client; @Before public void setup() { @@ -92,7 +96,8 @@ public void setup() { inFlightRequestsBreaker = circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); HttpServerTransport httpServerTransport = new TestHttpServerTransport(); - restController = new RestController(Collections.emptySet(), null, null, circuitBreakerService, usageService); + client = new NoOpNodeClient(this.getTestName()); + restController = new RestController(Collections.emptySet(), null, client, circuitBreakerService, usageService); restController.registerHandler(RestRequest.Method.GET, "/", (request, channel, client) -> channel.sendResponse( new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY))); @@ -103,8 +108,13 @@ public void setup() { httpServerTransport.start(); } + @After + public void teardown() throws IOException { + IOUtils.close(client); + } + public void testApplyRelevantHeaders() throws Exception { - final ThreadContext threadContext = new ThreadContext(Settings.EMPTY); + final ThreadContext threadContext = client.threadPool().getThreadContext(); Set headers = new HashSet<>(Arrays.asList(new RestHeaderDefinition("header.1", true), new RestHeaderDefinition("header.2", true))); final RestController restController = new RestController(headers, null, null, circuitBreakerService, usageService); @@ -140,7 +150,7 @@ public MethodHandlers next() { } public void testRequestWithDisallowedMultiValuedHeader() { - final ThreadContext threadContext = new ThreadContext(Settings.EMPTY); + final ThreadContext threadContext = client.threadPool().getThreadContext(); Set headers = new HashSet<>(Arrays.asList(new RestHeaderDefinition("header.1", true), new RestHeaderDefinition("header.2", false))); final RestController 
restController = new RestController(headers, null, null, circuitBreakerService, usageService); @@ -154,10 +164,10 @@ public void testRequestWithDisallowedMultiValuedHeader() { } public void testRequestWithDisallowedMultiValuedHeaderButSameValues() { - final ThreadContext threadContext = new ThreadContext(Settings.EMPTY); + final ThreadContext threadContext = client.threadPool().getThreadContext(); Set headers = new HashSet<>(Arrays.asList(new RestHeaderDefinition("header.1", true), new RestHeaderDefinition("header.2", false))); - final RestController restController = new RestController(headers, null, null, circuitBreakerService, usageService); + final RestController restController = new RestController(headers, null, client, circuitBreakerService, usageService); Map> restHeaders = new HashMap<>(); restHeaders.put("header.1", Collections.singletonList("boo")); restHeaders.put("header.2", List.of("foo", "foo")); @@ -238,11 +248,11 @@ public void testRestHandlerWrapper() throws Exception { h -> { assertSame(handler, h); return (RestRequest request, RestChannel channel, NodeClient client) -> wrapperCalled.set(true); - }, null, circuitBreakerService, usageService); + }, client, circuitBreakerService, usageService); restController.registerHandler(RestRequest.Method.GET, "/wrapped", handler); RestRequest request = testRestRequest("/wrapped", "{}", XContentType.JSON); AssertingChannel channel = new AssertingChannel(request, true, RestStatus.BAD_REQUEST); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); httpServerTransport.start(); assertTrue(wrapperCalled.get()); assertFalse(handlerCalled.get()); @@ -254,7 +264,7 @@ public void testDispatchRequestAddsAndFreesBytesOnSuccess() { RestRequest request = testRestRequest("/", content, XContentType.JSON); AssertingChannel channel = new AssertingChannel(request, true, RestStatus.OK); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); assertEquals(0, inFlightRequestsBreaker.getUsed()); @@ -266,7 +276,7 @@ public void testDispatchRequestAddsAndFreesBytesOnError() { RestRequest request = testRestRequest("/error", content, XContentType.JSON); AssertingChannel channel = new AssertingChannel(request, true, RestStatus.BAD_REQUEST); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); assertEquals(0, inFlightRequestsBreaker.getUsed()); @@ -279,7 +289,7 @@ public void testDispatchRequestAddsAndFreesBytesOnlyOnceOnError() { RestRequest request = testRestRequest("/error", content, XContentType.JSON); ExceptionThrowingChannel channel = new ExceptionThrowingChannel(request, true); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); assertEquals(0, inFlightRequestsBreaker.getUsed()); @@ -291,7 +301,7 @@ public void testDispatchRequestLimitsBytes() { RestRequest request = testRestRequest("/", content, XContentType.JSON); AssertingChannel channel = new AssertingChannel(request, true, 
RestStatus.TOO_MANY_REQUESTS); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); assertEquals(1, inFlightRequestsBreaker.getTrippedCount()); assertEquals(0, inFlightRequestsBreaker.getUsed()); @@ -307,7 +317,7 @@ public void testDispatchRequiresContentTypeForRequestsWithContent() { new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY))); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -316,7 +326,7 @@ public void testDispatchDoesNotRequireContentTypeForRequestsWithoutContent() { AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.OK); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -334,7 +344,7 @@ public void handleRequest(RestRequest request, RestChannel channel, NodeClient c }); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -345,7 +355,7 @@ public void testDispatchUnsupportedContentType() { AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.NOT_ACCEPTABLE); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -369,7 +379,7 @@ public boolean supportsContentStream() { }); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -394,7 +404,7 @@ public boolean supportsContentStream() { }); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -415,7 +425,7 @@ public boolean supportsContentStream() { }); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -436,7 +446,7 @@ public boolean supportsContentStream() { } }); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -458,7 +468,7 @@ public boolean 
supportsContentStream() { } }); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); } @@ -467,7 +477,7 @@ public void testDispatchBadRequest() { final AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.BAD_REQUEST); restController.dispatchBadRequest( channel, - new ThreadContext(Settings.EMPTY), + client.threadPool().getThreadContext(), randomBoolean() ? new IllegalStateException("bad request") : new Throwable("bad request")); assertTrue(channel.getSendResponseCalled()); assertThat(channel.getRestResponse().content().utf8ToString(), containsString("bad request")); @@ -499,7 +509,7 @@ public boolean canTripCircuitBreaker() { assertFalse(channel.getSendResponseCalled()); assertFalse(restRequest.isContentConsumed()); - restController.dispatchRequest(restRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(restRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); assertFalse("RestController must not consume request content", restRequest.isContentConsumed()); @@ -508,7 +518,7 @@ public boolean canTripCircuitBreaker() { public void testDispatchBadRequestUnknownCause() { final FakeRestRequest fakeRestRequest = new FakeRestRequest.Builder(NamedXContentRegistry.EMPTY).build(); final AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.BAD_REQUEST); - restController.dispatchBadRequest(channel, new ThreadContext(Settings.EMPTY), null); + restController.dispatchBadRequest(channel, client.threadPool().getThreadContext(), null); assertTrue(channel.getSendResponseCalled()); assertThat(channel.getRestResponse().content().utf8ToString(), containsString("unknown cause")); } @@ -519,7 +529,7 @@ public void testFavicon() { .withPath("/favicon.ico") .build(); final AssertingChannel channel = new AssertingChannel(fakeRestRequest, false, RestStatus.OK); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); assertThat(channel.getRestResponse().contentType(), containsString("image/x-icon")); } @@ -531,7 +541,7 @@ public void testFaviconWithWrongHttpMethod() { .withPath("/favicon.ico") .build(); final AssertingChannel channel = new AssertingChannel(fakeRestRequest, true, RestStatus.METHOD_NOT_ALLOWED); - restController.dispatchRequest(fakeRestRequest, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(fakeRestRequest, channel, client.threadPool().getThreadContext()); assertTrue(channel.getSendResponseCalled()); assertThat(channel.getRestResponse().getHeaders().containsKey("Allow"), equalTo(true)); assertThat(channel.getRestResponse().getHeaders().get("Allow"), hasItem(equalTo(RestRequest.Method.GET.toString()))); @@ -604,7 +614,7 @@ public Exception getInboundException() { final AssertingChannel channel = new AssertingChannel(request, true, RestStatus.METHOD_NOT_ALLOWED); assertFalse(channel.getSendResponseCalled()); - restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + restController.dispatchRequest(request, channel, client.threadPool().getThreadContext()); 
assertTrue(channel.getSendResponseCalled()); assertThat(channel.getRestResponse().getHeaders().containsKey("Allow"), equalTo(true)); assertThat(channel.getRestResponse().getHeaders().get("Allow"), hasItem(equalTo(RestRequest.Method.GET.toString()))); diff --git a/server/src/test/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeActionTests.java b/server/src/test/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeActionTests.java index 8c3c9ae03cb2b..52569047c7680 100644 --- a/server/src/test/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeActionTests.java +++ b/server/src/test/java/org/elasticsearch/rest/action/admin/indices/RestAnalyzeActionTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.rest.action.admin.indices; import org.elasticsearch.action.admin.indices.analyze.AnalyzeAction; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -26,6 +27,7 @@ import org.elasticsearch.index.analysis.NameOrDefinition; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestRequest; import java.io.IOException; @@ -95,8 +97,10 @@ public void testParseXContentForAnalyzeRequestWithInvalidJsonThrowsException() { RestAnalyzeAction action = new RestAnalyzeAction(); RestRequest request = new FakeRestRequest.Builder(xContentRegistry()) .withContent(new BytesArray("{invalid_json}"), XContentType.JSON).build(); - IOException e = expectThrows(IOException.class, () -> action.handleRequest(request, null, null)); - assertThat(e.getMessage(), containsString("expecting double-quote")); + try (NodeClient client = new NoOpNodeClient(this.getClass().getSimpleName())) { + IOException e = expectThrows(IOException.class, () -> action.handleRequest(request, null, client)); + assertThat(e.getMessage(), containsString("expecting double-quote")); + } } public void testParseXContentForAnalyzeRequestWithUnknownParamThrowsException() throws Exception { diff --git a/server/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java b/server/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java index fabd9e36051f8..c28bf08b27dc5 100644 --- a/server/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java +++ b/server/src/test/java/org/elasticsearch/rest/action/document/RestBulkActionTests.java @@ -19,8 +19,11 @@ package org.elasticsearch.rest.action.document; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkResponse; import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.bytes.BytesArray; @@ -28,15 +31,14 @@ import org.elasticsearch.rest.RestChannel; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestRequest; -import org.hamcrest.CustomMatcher; -import org.mockito.Mockito; import java.util.HashMap; import java.util.Map; -import static org.mockito.Matchers.any; -import static org.mockito.Matchers.argThat; +import static org.hamcrest.Matchers.equalTo; +import static 
org.hamcrest.Matchers.hasSize; import static org.mockito.Mockito.mock; /** @@ -45,32 +47,34 @@ public class RestBulkActionTests extends ESTestCase { public void testBulkPipelineUpsert() throws Exception { - final NodeClient mockClient = mock(NodeClient.class); - final Map params = new HashMap<>(); - params.put("pipeline", "timestamps"); - new RestBulkAction(settings(Version.CURRENT).build()) - .handleRequest( - new FakeRestRequest.Builder( - xContentRegistry()).withPath("my_index/_bulk").withParams(params) - .withContent( - new BytesArray( - "{\"index\":{\"_id\":\"1\"}}\n" + - "{\"field1\":\"val1\"}\n" + - "{\"update\":{\"_id\":\"2\"}}\n" + - "{\"script\":{\"source\":\"ctx._source.counter++;\"},\"upsert\":{\"field1\":\"upserted_val\"}}\n" - ), - XContentType.JSON - ).withMethod(RestRequest.Method.POST).build(), - mock(RestChannel.class), mockClient - ); - Mockito.verify(mockClient) - .bulk(argThat(new CustomMatcher("Pipeline in upsert request") { - @Override - public boolean matches(final Object item) { - BulkRequest request = (BulkRequest) item; - UpdateRequest update = (UpdateRequest) request.requests().get(1); - return "timestamps".equals(update.upsertRequest().getPipeline()); - } - }), any()); + SetOnce bulkCalled = new SetOnce<>(); + try (NodeClient verifyingClient = new NoOpNodeClient(this.getTestName()) { + @Override + public void bulk(BulkRequest request, ActionListener listener) { + bulkCalled.set(true); + assertThat(request.requests(), hasSize(2)); + UpdateRequest updateRequest = (UpdateRequest) request.requests().get(1); + assertThat(updateRequest.upsertRequest().getPipeline(), equalTo("timestamps")); + } + }) { + final Map params = new HashMap<>(); + params.put("pipeline", "timestamps"); + new RestBulkAction(settings(Version.CURRENT).build()) + .handleRequest( + new FakeRestRequest.Builder( + xContentRegistry()).withPath("my_index/_bulk").withParams(params) + .withContent( + new BytesArray( + "{\"index\":{\"_id\":\"1\"}}\n" + + "{\"field1\":\"val1\"}\n" + + "{\"update\":{\"_id\":\"2\"}}\n" + + "{\"script\":{\"source\":\"ctx._source.counter++;\"},\"upsert\":{\"field1\":\"upserted_val\"}}\n" + ), + XContentType.JSON + ).withMethod(RestRequest.Method.POST).build(), + mock(RestChannel.class), verifyingClient + ); + assertThat(bulkCalled.get(), equalTo(true)); + } } } diff --git a/server/src/test/java/org/elasticsearch/rest/action/document/RestIndexActionTests.java b/server/src/test/java/org/elasticsearch/rest/action/document/RestIndexActionTests.java index bb549b64c724f..cafe96a0c4d55 100644 --- a/server/src/test/java/org/elasticsearch/rest/action/document/RestIndexActionTests.java +++ b/server/src/test/java/org/elasticsearch/rest/action/document/RestIndexActionTests.java @@ -19,8 +19,8 @@ package org.elasticsearch.rest.action.document; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.Version; -import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.cluster.ClusterName; @@ -36,13 +36,11 @@ import org.elasticsearch.test.rest.FakeRestRequest; import org.elasticsearch.test.rest.RestActionTestCase; import org.junit.Before; -import org.mockito.ArgumentCaptor; import java.util.concurrent.atomic.AtomicReference; import static org.hamcrest.Matchers.equalTo; -import static org.mockito.Matchers.any; -import static org.mockito.Mockito.verify; +import static org.hamcrest.Matchers.instanceOf; public class RestIndexActionTests extends RestActionTestCase { @@ -76,6 
+74,13 @@ public void testAutoIdDefaultsToOptypeIndexForOlderVersions() { } private void checkAutoIdOpType(Version minClusterVersion, DocWriteRequest.OpType expectedOpType) { + SetOnce executeCalled = new SetOnce<>(); + verifyingClient.setExecuteVerifier((actionType, request) -> { + assertThat(request, instanceOf(IndexRequest.class)); + assertThat(((IndexRequest) request).opType(), equalTo(expectedOpType)); + executeCalled.set(true); + return null; + }); RestRequest autoIdRequest = new FakeRestRequest.Builder(xContentRegistry()) .withMethod(RestRequest.Method.POST) .withPath("/some_index/_doc") @@ -86,9 +91,6 @@ private void checkAutoIdOpType(Version minClusterVersion, DocWriteRequest.OpType .add(new DiscoveryNode("test", buildNewFakeTransportAddress(), minClusterVersion)) .build()).build()); dispatchRequest(autoIdRequest); - ArgumentCaptor argumentCaptor = ArgumentCaptor.forClass(IndexRequest.class); - verify(nodeClient).index(argumentCaptor.capture(), any(ActionListener.class)); - IndexRequest indexRequest = argumentCaptor.getValue(); - assertEquals(expectedOpType, indexRequest.opType()); + assertThat(executeCalled.get(), equalTo(true)); } } diff --git a/server/src/test/java/org/elasticsearch/search/scroll/RestClearScrollActionTests.java b/server/src/test/java/org/elasticsearch/search/scroll/RestClearScrollActionTests.java index 03e9a242d35c0..8618b6b8de873 100644 --- a/server/src/test/java/org/elasticsearch/search/scroll/RestClearScrollActionTests.java +++ b/server/src/test/java/org/elasticsearch/search/scroll/RestClearScrollActionTests.java @@ -19,26 +19,24 @@ package org.elasticsearch.search.scroll; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.search.RestClearScrollAction; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestChannel; import org.elasticsearch.test.rest.FakeRestRequest; -import org.mockito.ArgumentCaptor; import java.util.Collections; -import java.util.List; import static org.hamcrest.Matchers.equalTo; -import static org.mockito.Matchers.any; -import static org.mockito.Matchers.anyObject; -import static org.mockito.Mockito.doNothing; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.verify; +import static org.hamcrest.Matchers.hasSize; public class RestClearScrollActionTests extends ESTestCase { @@ -51,21 +49,23 @@ public void testParseClearScrollRequestWithInvalidJsonThrowsException() throws E } public void testBodyParamsOverrideQueryStringParams() throws Exception { - NodeClient nodeClient = mock(NodeClient.class); - doNothing().when(nodeClient).searchScroll(any(), any()); - - RestClearScrollAction action = new RestClearScrollAction(); - RestRequest request = new FakeRestRequest.Builder(xContentRegistry()) + SetOnce scrollCalled = new SetOnce<>(); + try (NodeClient nodeClient = new NoOpNodeClient(this.getTestName()) { + @Override + public void clearScroll(ClearScrollRequest request, ActionListener listener) { + scrollCalled.set(true); + assertThat(request.getScrollIds(), hasSize(1)); + assertThat(request.getScrollIds().get(0), equalTo("BODY")); + } + }) { + 
RestClearScrollAction action = new RestClearScrollAction(); + RestRequest request = new FakeRestRequest.Builder(xContentRegistry()) .withParams(Collections.singletonMap("scroll_id", "QUERY_STRING")) .withContent(new BytesArray("{\"scroll_id\": [\"BODY\"]}"), XContentType.JSON).build(); - FakeRestChannel channel = new FakeRestChannel(request, false, 0); - action.handleRequest(request, channel, nodeClient); + FakeRestChannel channel = new FakeRestChannel(request, false, 0); + action.handleRequest(request, channel, nodeClient); - ArgumentCaptor argument = ArgumentCaptor.forClass(ClearScrollRequest.class); - verify(nodeClient).clearScroll(argument.capture(), anyObject()); - ClearScrollRequest clearScrollRequest = argument.getValue(); - List scrollIds = clearScrollRequest.getScrollIds(); - assertEquals(1, scrollIds.size()); - assertEquals("BODY", scrollIds.get(0)); + assertThat(scrollCalled.get(), equalTo(true)); + } } } diff --git a/server/src/test/java/org/elasticsearch/search/scroll/RestSearchScrollActionTests.java b/server/src/test/java/org/elasticsearch/search/scroll/RestSearchScrollActionTests.java index 6c58d32b8b8bb..986c72c1bbe9e 100644 --- a/server/src/test/java/org/elasticsearch/search/scroll/RestSearchScrollActionTests.java +++ b/server/src/test/java/org/elasticsearch/search/scroll/RestSearchScrollActionTests.java @@ -19,6 +19,9 @@ package org.elasticsearch.search.scroll; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.action.search.SearchScrollRequest; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.bytes.BytesArray; @@ -26,19 +29,14 @@ import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.action.search.RestSearchScrollAction; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestChannel; import org.elasticsearch.test.rest.FakeRestRequest; -import org.mockito.ArgumentCaptor; import java.util.HashMap; import java.util.Map; import static org.hamcrest.Matchers.equalTo; -import static org.mockito.Matchers.any; -import static org.mockito.Matchers.anyObject; -import static org.mockito.Mockito.doNothing; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.verify; public class RestSearchScrollActionTests extends ESTestCase { @@ -51,23 +49,26 @@ public void testParseSearchScrollRequestWithInvalidJsonThrowsException() throws } public void testBodyParamsOverrideQueryStringParams() throws Exception { - NodeClient nodeClient = mock(NodeClient.class); - doNothing().when(nodeClient).searchScroll(any(), any()); - - RestSearchScrollAction action = new RestSearchScrollAction(); - Map params = new HashMap<>(); - params.put("scroll_id", "QUERY_STRING"); - params.put("scroll", "1000m"); - RestRequest request = new FakeRestRequest.Builder(xContentRegistry()) + SetOnce scrollCalled = new SetOnce<>(); + try (NodeClient nodeClient = new NoOpNodeClient(this.getTestName()) { + @Override + public void searchScroll(SearchScrollRequest request, ActionListener listener) { + scrollCalled.set(true); + assertThat(request.scrollId(), equalTo("BODY")); + assertThat(request.scroll().keepAlive().getStringRep(), equalTo("1m")); + } + }) { + RestSearchScrollAction action = new RestSearchScrollAction(); + Map params = new HashMap<>(); + params.put("scroll_id", "QUERY_STRING"); + params.put("scroll", "1000m"); + RestRequest request = new 
FakeRestRequest.Builder(xContentRegistry()) .withParams(params) .withContent(new BytesArray("{\"scroll_id\":\"BODY\", \"scroll\":\"1m\"}"), XContentType.JSON).build(); - FakeRestChannel channel = new FakeRestChannel(request, false, 0); - action.handleRequest(request, channel, nodeClient); + FakeRestChannel channel = new FakeRestChannel(request, false, 0); + action.handleRequest(request, channel, nodeClient); - ArgumentCaptor argument = ArgumentCaptor.forClass(SearchScrollRequest.class); - verify(nodeClient).searchScroll(argument.capture(), anyObject()); - SearchScrollRequest searchScrollRequest = argument.getValue(); - assertEquals("BODY", searchScrollRequest.scrollId()); - assertEquals("1m", searchScrollRequest.scroll().keepAlive().getStringRep()); + assertThat(scrollCalled.get(), equalTo(true)); + } } } diff --git a/server/src/test/java/org/elasticsearch/snapshots/SnapshotResiliencyTests.java b/server/src/test/java/org/elasticsearch/snapshots/SnapshotResiliencyTests.java index 31a230bfb4a2f..fbe84963b71ed 100644 --- a/server/src/test/java/org/elasticsearch/snapshots/SnapshotResiliencyTests.java +++ b/server/src/test/java/org/elasticsearch/snapshots/SnapshotResiliencyTests.java @@ -145,6 +145,7 @@ import org.elasticsearch.common.util.PageCacheRecycler; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.env.Environment; import org.elasticsearch.env.NodeEnvironment; @@ -1478,7 +1479,8 @@ public void onFailure(final Exception e) { }, a -> node, null, emptySet() ); - final IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + final IndexNameExpressionResolver indexNameExpressionResolver = + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); repositoriesService = new RepositoriesService( settings, clusterService, transportService, Collections.singletonMap(FsRepository.TYPE, getRepoFactory(environment)), emptyMap(), threadPool diff --git a/server/src/test/java/org/elasticsearch/usage/UsageServiceTests.java b/server/src/test/java/org/elasticsearch/usage/UsageServiceTests.java index 947a64b3d34e2..83b596b364ccc 100644 --- a/server/src/test/java/org/elasticsearch/usage/UsageServiceTests.java +++ b/server/src/test/java/org/elasticsearch/usage/UsageServiceTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.rest.RestRequest; import org.elasticsearch.search.aggregations.support.AggregationUsageService; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestRequest; import java.util.Collections; @@ -104,20 +105,22 @@ public void testRestUsage() throws Exception { usageService.addRestHandler(handlerD); usageService.addRestHandler(handlerE); usageService.addRestHandler(handlerF); - handlerA.handleRequest(restRequest, null, null); - handlerB.handleRequest(restRequest, null, null); - handlerA.handleRequest(restRequest, null, null); - handlerA.handleRequest(restRequest, null, null); - handlerB.handleRequest(restRequest, null, null); - handlerC.handleRequest(restRequest, null, null); - handlerC.handleRequest(restRequest, null, null); - handlerD.handleRequest(restRequest, null, null); - handlerA.handleRequest(restRequest, null, null); - handlerB.handleRequest(restRequest, null, null); - handlerE.handleRequest(restRequest, 
null, null); - handlerF.handleRequest(restRequest, null, null); - handlerC.handleRequest(restRequest, null, null); - handlerD.handleRequest(restRequest, null, null); + try (NodeClient client = new NoOpNodeClient(this.getClass().getSimpleName() + "TestClient")) { + handlerA.handleRequest(restRequest, null, client); + handlerB.handleRequest(restRequest, null, client); + handlerA.handleRequest(restRequest, null, client); + handlerA.handleRequest(restRequest, null, client); + handlerB.handleRequest(restRequest, null, client); + handlerC.handleRequest(restRequest, null, client); + handlerC.handleRequest(restRequest, null, client); + handlerD.handleRequest(restRequest, null, client); + handlerA.handleRequest(restRequest, null, client); + handlerB.handleRequest(restRequest, null, client); + handlerE.handleRequest(restRequest, null, client); + handlerF.handleRequest(restRequest, null, client); + handlerC.handleRequest(restRequest, null, client); + handlerD.handleRequest(restRequest, null, client); + } Map restUsage = usageService.getRestUsageStats(); assertThat(restUsage, notNullValue()); assertThat(restUsage.size(), equalTo(6)); diff --git a/test/framework/src/main/java/org/elasticsearch/test/client/NoOpClient.java b/test/framework/src/main/java/org/elasticsearch/test/client/NoOpClient.java index d034fa0c5e3e3..af7370cdeb537 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/client/NoOpClient.java +++ b/test/framework/src/main/java/org/elasticsearch/test/client/NoOpClient.java @@ -20,10 +20,10 @@ package org.elasticsearch.test.client; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.action.ActionType; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.action.ActionType; import org.elasticsearch.client.support.AbstractClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.threadpool.TestThreadPool; @@ -32,7 +32,10 @@ import java.util.concurrent.TimeUnit; /** - * Client that always responds with {@code null} to every request. Override this for testing. + * Client that always responds with {@code null} to every request. Override {@link #doExecute(ActionType, ActionRequest, ActionListener)} + * for testing. + * + * See also {@link NoOpNodeClient} if you need to mock a {@link org.elasticsearch.client.node.NodeClient}. */ public class NoOpClient extends AbstractClient { /** @@ -43,7 +46,7 @@ public NoOpClient(ThreadPool threadPool) { } /** - * Create a new {@link TestThreadPool} for this client. + * Create a new {@link TestThreadPool} for this client. This {@linkplain TestThreadPool} is terminated on {@link #close()}. */ public NoOpClient(String testName) { super(Settings.EMPTY, new TestThreadPool(testName)); diff --git a/test/framework/src/main/java/org/elasticsearch/test/client/NoOpNodeClient.java b/test/framework/src/main/java/org/elasticsearch/test/client/NoOpNodeClient.java new file mode 100644 index 0000000000000..4edc7e6114971 --- /dev/null +++ b/test/framework/src/main/java/org/elasticsearch/test/client/NoOpNodeClient.java @@ -0,0 +1,110 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.test.client; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.action.ActionType; +import org.elasticsearch.action.support.TransportAction; +import org.elasticsearch.client.Client; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskListener; +import org.elasticsearch.tasks.TaskManager; +import org.elasticsearch.threadpool.TestThreadPool; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterService; + +import java.util.Map; +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +/** + * Client that always responds with {@code null} to every request. Override {@link #doExecute(ActionType, ActionRequest, ActionListener)}, + * {@link #executeLocally(ActionType, ActionRequest, ActionListener)}, or {@link #executeLocally(ActionType, ActionRequest, TaskListener)} + * for testing. + * + * See also {@link NoOpClient} if you do not specifically need a {@link NodeClient}. + */ +public class NoOpNodeClient extends NodeClient { + + /** + * Build with {@link ThreadPool}. This {@linkplain ThreadPool} is terminated on {@link #close()}. + */ + public NoOpNodeClient(ThreadPool threadPool) { + super(Settings.EMPTY, threadPool); + } + + /** + * Create a new {@link TestThreadPool} for this client. This {@linkplain TestThreadPool} is terminated on {@link #close()}.
+ */ + public NoOpNodeClient(String testName) { + super(Settings.EMPTY, new TestThreadPool(testName)); + } + + @Override + public + void doExecute(ActionType action, Request request, ActionListener listener) { + listener.onResponse(null); + } + + @Override + public void initialize(Map actions, TaskManager taskManager, Supplier localNodeId, + RemoteClusterService remoteClusterService, NamedWriteableRegistry namedWriteableRegistry) { + throw new UnsupportedOperationException("cannot initialize " + this.getClass().getSimpleName()); + } + + @Override + public + Task executeLocally(ActionType action, Request request, ActionListener listener) { + listener.onResponse(null); + return null; + } + + @Override + public + Task executeLocally(ActionType action, Request request, TaskListener listener) { + listener.onResponse(null, null); + return null; + } + + @Override + public String getLocalNodeId() { + return null; + } + + @Override + public Client getRemoteClusterClient(String clusterAlias) { + return null; + } + + @Override + public void close() { + try { + ThreadPool.terminate(threadPool(), 10, TimeUnit.SECONDS); + } catch (Exception e) { + throw new ElasticsearchException(e.getMessage(), e); + } + } +} diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java index 11ae1ae9c05dd..ec81c4c268319 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java @@ -648,6 +648,20 @@ protected static void wipeAllIndices() throws IOException { try { final Request deleteRequest = new Request("DELETE", "*"); deleteRequest.addParameter("expand_wildcards", "open,closed" + (includeHidden ? ",hidden" : "")); + RequestOptions allowSystemIndexAccessWarningOptions = RequestOptions.DEFAULT.toBuilder() + .setWarningsHandler(warnings -> { + if (warnings.size() == 0) { + return false; + } else if (warnings.size() > 1) { + return true; + } + // We don't know exactly which indices we're cleaning up in advance, so just accept all system index access warnings. + final String warning = warnings.get(0); + final boolean isSystemIndexWarning = warning.contains("this request accesses system indices") + && warning.contains("but in a future major version, direct access to system indices will be prevented by default"); + return isSystemIndexWarning == false; + }).build(); + deleteRequest.setOptions(allowSystemIndexAccessWarningOptions); final Response response = adminClient().performRequest(deleteRequest); try (InputStream is = response.getEntity().getContent()) { assertTrue((boolean) XContentHelper.convertToMap(XContentType.JSON.xContent(), is, true).get("acknowledged")); @@ -798,7 +812,17 @@ private void wipeRollupJobs() throws IOException { protected void refreshAllIndices() throws IOException { boolean includeHidden = minimumNodeVersion().onOrAfter(Version.V_7_7_0); Request refreshRequest = new Request("POST", "/_refresh"); - refreshRequest.addParameter("expand_wildcards", "open,closed" + (includeHidden ? ",hidden" : "")); + refreshRequest.addParameter("expand_wildcards", "open" + (includeHidden ? 
",hidden" : "")); + // Allow system index deprecation warnings + refreshRequest.setOptions(RequestOptions.DEFAULT.toBuilder().setWarningsHandler(warnings -> { + if (warnings.isEmpty()) { + return false; + } else if (warnings.size() > 1) { + return true; + } else { + return warnings.get(0).startsWith("this request accesses system indices:") == false; + } + })); client().performRequest(refreshRequest); } diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/RestActionTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/rest/RestActionTestCase.java index a5d932a3d1a3d..0577ad0c23441 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/RestActionTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/RestActionTestCase.java @@ -19,19 +19,26 @@ package org.elasticsearch.test.rest; -import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.action.ActionType; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.tasks.TaskListener; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.usage.UsageService; +import org.junit.After; import org.junit.Before; import java.util.Collections; - -import static org.mockito.Mockito.mock; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.BiFunction; /** * A common base class for Rest*ActionTests. Provides access to a {@link RestController} @@ -39,17 +46,22 @@ */ public abstract class RestActionTestCase extends ESTestCase { private RestController controller; - protected NodeClient nodeClient; + protected VerifyingClient verifyingClient; @Before public void setUpController() { - nodeClient = mock(NodeClient.class); + verifyingClient = new VerifyingClient(this.getTestName()); controller = new RestController(Collections.emptySet(), null, - nodeClient, + verifyingClient, new NoneCircuitBreakerService(), new UsageService()); } + @After + public void tearDownController() { + verifyingClient.close(); + } + /** * A test {@link RestController}. This controller can be used to register and delegate * to handlers, but uses a mock client and cannot carry out the full request. @@ -66,4 +78,76 @@ protected void dispatchRequest(RestRequest request) { ThreadContext threadContext = new ThreadContext(Settings.EMPTY); controller.dispatchRequest(request, channel, threadContext); } + + /** + * A mocked {@link org.elasticsearch.client.node.NodeClient} which can be easily reconfigured to verify arbitrary verification + * functions, and can be reset to allow reconfiguration partway through a test without having to construct a new object. + * + * By default, will throw {@link AssertionError} when any execution method is called, unless configured otherwise using + * {@link #setExecuteVerifier(BiFunction)} or {@link #setExecuteLocallyVerifier(BiFunction)}. 
+ */ + public static class VerifyingClient extends NoOpNodeClient { + AtomicReference executeVerifier = new AtomicReference<>(); + AtomicReference executeLocallyVerifier = new AtomicReference<>(); + + public VerifyingClient(String testName) { + super(testName); + reset(); + } + + /** + * Clears any previously set verifier functions set by {@link #setExecuteVerifier(BiFunction)} and/or + * {@link #setExecuteLocallyVerifier(BiFunction)}. These functions are replaced with functions which will throw an + * {@link AssertionError} if called. + */ + public void reset() { + executeVerifier.set((arg1, arg2) -> { + throw new AssertionError(); + }); + executeLocallyVerifier.set((arg1, arg2) -> { + throw new AssertionError(); + }); + } + + /** + * Sets the function that will be called when {@link #doExecute(ActionType, ActionRequest, ActionListener)} is called. The given + * function should return either a subclass of {@link ActionResponse} or {@code null}. + * @param verifier A function which is called in place of {@link #doExecute(ActionType, ActionRequest, ActionListener)} + */ + public + void setExecuteVerifier(BiFunction, Request, Void> verifier) { + executeVerifier.set(verifier); + } + + @Override + public + void doExecute(ActionType action, Request request, ActionListener listener) { + listener.onResponse((Response) executeVerifier.get().apply(action, request)); + } + + /** + * Sets the function that will be called when {@link #executeLocally(ActionType, ActionRequest, TaskListener)}is called. The given + * function should return either a subclass of {@link ActionResponse} or {@code null}. + * @param verifier A function which is called in place of {@link #executeLocally(ActionType, ActionRequest, TaskListener)} + */ + public + void setExecuteLocallyVerifier(BiFunction, Request, Void> verifier) { + executeLocallyVerifier.set(verifier); + } + + @Override + public + Task executeLocally(ActionType action, Request request, ActionListener listener) { + listener.onResponse((Response) executeLocallyVerifier.get().apply(action, request)); + return null; + } + + @Override + public + Task executeLocally(ActionType action, Request request, TaskListener listener) { + listener.onResponse(null, (Response) executeLocallyVerifier.get().apply(action, request)); + return null; + } + + } } diff --git a/x-pack/docs/en/watcher/managing-watches.asciidoc b/x-pack/docs/en/watcher/managing-watches.asciidoc index aa4c71a0dcdca..828fe8ab0b3b6 100644 --- a/x-pack/docs/en/watcher/managing-watches.asciidoc +++ b/x-pack/docs/en/watcher/managing-watches.asciidoc @@ -33,4 +33,4 @@ GET .watches/_search "size" : 100 } -------------------------------------------------- -// TEST[setup:my_active_watch] +// TEST[skip:deprecation warning] diff --git a/x-pack/docs/en/watcher/troubleshooting.asciidoc b/x-pack/docs/en/watcher/troubleshooting.asciidoc index e6c193896f4a0..352884298fdb0 100644 --- a/x-pack/docs/en/watcher/troubleshooting.asciidoc +++ b/x-pack/docs/en/watcher/troubleshooting.asciidoc @@ -18,7 +18,7 @@ do that by submitting the following request: -------------------------------------------------- GET .watches/_mapping -------------------------------------------------- -// TEST[setup:my_active_watch] +// TEST[skip:deprecation warning] If the index mappings are missing, follow these steps to restore the correct mappings: @@ -33,7 +33,7 @@ mappings: -------------------------------------------------- DELETE .watches -------------------------------------------------- -// TEST[skip:index deletion] +// TEST[skip:index deletion and 
deprecation warning] -- . Disable direct access to the `.watches` index: .. Stop the Elasticsearch node. @@ -62,4 +62,4 @@ Keep in mind that there's no built-in validation of scripts that you add to a watch. Buggy or deliberately malicious scripts can negatively impact {watcher} performance. For example, if you add multiple watches with buggy script conditions in a short period of time, {watcher} might be temporarily unable to -process watches until the bad watches time out. \ No newline at end of file +process watches until the bad watches time out. diff --git a/x-pack/plugin/async-search/src/test/java/org/elasticsearch/xpack/search/RestSubmitAsyncSearchActionTests.java b/x-pack/plugin/async-search/src/test/java/org/elasticsearch/xpack/search/RestSubmitAsyncSearchActionTests.java index 045dd153f9853..e51c8a1054c9c 100644 --- a/x-pack/plugin/async-search/src/test/java/org/elasticsearch/xpack/search/RestSubmitAsyncSearchActionTests.java +++ b/x-pack/plugin/async-search/src/test/java/org/elasticsearch/xpack/search/RestSubmitAsyncSearchActionTests.java @@ -5,26 +5,24 @@ */ package org.elasticsearch.xpack.search; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionType; +import org.apache.lucene.util.SetOnce; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.test.rest.FakeRestRequest; import org.elasticsearch.test.rest.RestActionTestCase; import org.elasticsearch.xpack.core.search.action.SubmitAsyncSearchRequest; import org.junit.Before; -import org.mockito.ArgumentCaptor; import java.io.IOException; import java.util.HashMap; import java.util.Map; import java.util.function.Function; -import static org.mockito.Matchers.any; -import static org.mockito.Mockito.reset; -import static org.mockito.Mockito.verify; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.instanceOf; public class RestSubmitAsyncSearchActionTests extends RestActionTestCase { @@ -42,26 +40,31 @@ public void setUpAction() { */ @SuppressWarnings("unchecked") public void testRequestParameterDefaults() throws IOException { + SetOnce executeCalled = new SetOnce<>(); + verifyingClient.setExecuteLocallyVerifier((actionType, request) -> { + assertThat(request, instanceOf(SubmitAsyncSearchRequest.class)); + SubmitAsyncSearchRequest submitRequest = (SubmitAsyncSearchRequest) request; + assertThat(submitRequest.getWaitForCompletionTimeout(), equalTo(TimeValue.timeValueSeconds(1))); + assertThat(submitRequest.isKeepOnCompletion(), equalTo(false)); + assertThat(submitRequest.getKeepAlive(), equalTo(TimeValue.timeValueDays(5))); + // check parameters we implicitly set in the SubmitAsyncSearchRequest ctor + assertThat(submitRequest.getSearchRequest().isCcsMinimizeRoundtrips(), equalTo(false)); + assertThat(submitRequest.getSearchRequest().getBatchedReduceSize(), equalTo(5)); + assertThat(submitRequest.getSearchRequest().requestCache(), equalTo(true)); + assertThat(submitRequest.getSearchRequest().getPreFilterShardSize().intValue(), equalTo(1)); + executeCalled.set(true); + return null; + }); RestRequest submitAsyncRestRequest = new FakeRestRequest.Builder(xContentRegistry()) .withMethod(RestRequest.Method.POST) .withPath("/test_index/_async_search") .withContent(new BytesArray("{}"), XContentType.JSON) .build(); 
dispatchRequest(submitAsyncRestRequest); - ArgumentCaptor argumentCaptor = ArgumentCaptor.forClass(SubmitAsyncSearchRequest.class); - verify(nodeClient).executeLocally(any(ActionType.class), argumentCaptor.capture(), any(ActionListener.class)); - SubmitAsyncSearchRequest submitRequest = argumentCaptor.getValue(); - assertEquals(TimeValue.timeValueSeconds(1), submitRequest.getWaitForCompletionTimeout()); - assertFalse(submitRequest.isKeepOnCompletion()); - assertEquals(TimeValue.timeValueDays(5), submitRequest.getKeepAlive()); - // check parameters we implicitly set in the SubmitAsyncSearchRequest ctor - assertFalse(submitRequest.getSearchRequest().isCcsMinimizeRoundtrips()); - assertEquals(5, submitRequest.getSearchRequest().getBatchedReduceSize()); - assertEquals(true, submitRequest.getSearchRequest().requestCache()); - assertEquals(1, submitRequest.getSearchRequest().getPreFilterShardSize().intValue()); + assertThat(executeCalled.get(), equalTo(true)); } - public void testParameters() throws IOException { + public void testParameters() throws Exception { String tvString = randomTimeValue(1, 100); doTestParameter("keep_alive", tvString, TimeValue.parseTimeValue(tvString, ""), SubmitAsyncSearchRequest::getKeepAlive); doTestParameter("wait_for_completion_timeout", tvString, TimeValue.parseTimeValue(tvString, ""), @@ -79,18 +82,26 @@ public void testParameters() throws IOException { @SuppressWarnings("unchecked") private void doTestParameter(String paramName, String paramValue, T expectedValue, - Function valueAccessor) { + Function valueAccessor) throws Exception { + SetOnce executeCalled = new SetOnce<>(); + verifyingClient.setExecuteLocallyVerifier((actionType, request) -> { + assertThat(request, instanceOf(SubmitAsyncSearchRequest.class)); + assertThat(valueAccessor.apply((SubmitAsyncSearchRequest) request), equalTo(expectedValue)); + executeCalled.set(true); + return null; + }); Map params = new HashMap<>(); params.put(paramName, paramValue); RestRequest submitAsyncRestRequest = new FakeRestRequest.Builder(xContentRegistry()).withMethod(RestRequest.Method.POST) .withPath("/test_index/_async_search") .withParams(params) .withContent(new BytesArray("{}"), XContentType.JSON).build(); - ArgumentCaptor argumentCaptor = ArgumentCaptor.forClass(SubmitAsyncSearchRequest.class); - dispatchRequest(submitAsyncRestRequest); - verify(nodeClient).executeLocally(any(ActionType.class), argumentCaptor.capture(), any(ActionListener.class)); - SubmitAsyncSearchRequest submitRequest = argumentCaptor.getValue(); - assertEquals(expectedValue, valueAccessor.apply(submitRequest)); - reset(nodeClient); + + // Get a new context each time, so we don't get exceptions due to trying to add the same header multiple times + try (ThreadContext.StoredContext context = verifyingClient.threadPool().getThreadContext().stashContext()) { + dispatchRequest(submitAsyncRestRequest); + } + assertThat(executeCalled.get(), equalTo(true)); + verifyingClient.reset(); } } diff --git a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ClientHelper.java b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ClientHelper.java index d5c817f33e95b..76f5956b17a5a 100644 --- a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ClientHelper.java +++ b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ClientHelper.java @@ -70,6 +70,7 @@ public static Map filterSecurityHeaders(Map head public static final String IDP_ORIGIN = "idp"; public static final String STACK_ORIGIN = "stack"; public static 
final String SEARCHABLE_SNAPSHOTS_ORIGIN = "searchable_snapshots"; + public static final String LOGSTASH_MANAGEMENT_ORIGIN = "logstash_management"; private ClientHelper() {} diff --git a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ilm/GenerateSnapshotNameStep.java b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ilm/GenerateSnapshotNameStep.java index 1f35c8d56645d..68629927139ae 100644 --- a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ilm/GenerateSnapshotNameStep.java +++ b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ilm/GenerateSnapshotNameStep.java @@ -132,7 +132,7 @@ public ResolverContext() { } public ResolverContext(long startTime) { - super(null, null, startTime, false, false, false, false); + super(null, null, startTime, false, false, false, false, false); } @Override diff --git a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/common/validation/SourceDestValidatorTests.java b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/common/validation/SourceDestValidatorTests.java index 74d08082ad838..807c3cc506d2c 100644 --- a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/common/validation/SourceDestValidatorTests.java +++ b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/common/validation/SourceDestValidatorTests.java @@ -22,6 +22,7 @@ import org.elasticsearch.common.CheckedConsumer; import org.elasticsearch.common.ValidationException; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.license.License; import org.elasticsearch.license.RemoteClusterLicenseChecker; import org.elasticsearch.license.XPackLicenseState; @@ -91,7 +92,7 @@ public class SourceDestValidatorTests extends ESTestCase { private final TransportService transportService = MockTransportService.createNewService(Settings.EMPTY, Version.CURRENT, threadPool); private final RemoteClusterService remoteClusterService = transportService.getRemoteClusterService(); private final SourceDestValidator simpleNonRemoteValidator = new SourceDestValidator( - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, null, "node_id", @@ -571,7 +572,7 @@ public void testRemoteSourceBasic() throws InterruptedException { Context context = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, remoteClusterLicenseCheckerBasic, new String[] { REMOTE_BASIC + ":" + "SOURCE_1" }, @@ -595,7 +596,7 @@ public void testRemoteSourcePlatinum() throws InterruptedException { final Context context = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, new RemoteClusterLicenseChecker(clientWithBasicLicense, operationMode -> XPackLicenseState.isAllowedByOperationMode(operationMode, License.OperationMode.PLATINUM)), @@ -625,7 +626,7 @@ public void testRemoteSourcePlatinum() throws InterruptedException { final Context context2 = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, new RemoteClusterLicenseChecker(clientWithPlatinumLicense, operationMode -> 
XPackLicenseState.isAllowedByOperationMode(operationMode, License.OperationMode.PLATINUM)), @@ -646,7 +647,7 @@ public void testRemoteSourcePlatinum() throws InterruptedException { final Context context3 = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, new RemoteClusterLicenseChecker(clientWithPlatinumLicense, operationMode -> XPackLicenseState.isAllowedByOperationMode(operationMode, License.OperationMode.PLATINUM)), @@ -668,7 +669,7 @@ public void testRemoteSourcePlatinum() throws InterruptedException { final Context context4 = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, new RemoteClusterLicenseChecker(clientWithTrialLicense, operationMode -> XPackLicenseState.isAllowedByOperationMode(operationMode, License.OperationMode.PLATINUM)), @@ -692,7 +693,7 @@ public void testRemoteSourceLicenseInActive() throws InterruptedException { final Context context = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, new RemoteClusterLicenseChecker(clientWithExpiredBasicLicense, operationMode -> XPackLicenseState.isAllowedByOperationMode(operationMode, License.OperationMode.PLATINUM)), @@ -719,7 +720,7 @@ public void testRemoteSourceDoesNotExist() throws InterruptedException { Context context = spy( new SourceDestValidator.Context( CLUSTER_STATE, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), remoteClusterService, new RemoteClusterLicenseChecker(clientWithExpiredBasicLicense, operationMode -> XPackLicenseState.isAllowedByOperationMode(operationMode, License.OperationMode.PLATINUM)), diff --git a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/deprecation/DeprecationInfoActionResponseTests.java b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/deprecation/DeprecationInfoActionResponseTests.java index 59006f30e75c8..985cb356efc0d 100644 --- a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/deprecation/DeprecationInfoActionResponseTests.java +++ b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/deprecation/DeprecationInfoActionResponseTests.java @@ -17,6 +17,7 @@ import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -78,7 +79,7 @@ public void testFrom() throws IOException { new TransportAddress(TransportAddress.META_ADDRESS, 9300), "test"); ClusterState state = ClusterState.builder(ClusterName.DEFAULT).metadata(metadata).build(); List datafeeds = Collections.singletonList(DatafeedConfigTests.createRandomizedDatafeedConfig("foo")); - IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, true, true); boolean clusterIssueFound = randomBoolean(); diff 
--git a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/utils/MlIndexAndAliasTests.java b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/utils/MlIndexAndAliasTests.java index 25dda8e196db8..7af3f75fb685b 100644 --- a/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/utils/MlIndexAndAliasTests.java +++ b/x-pack/plugin/core/src/test/java/org/elasticsearch/xpack/core/ml/utils/MlIndexAndAliasTests.java @@ -280,7 +280,8 @@ public void testIndexNameComparator() { private void createIndexAndAliasIfNecessary(ClusterState clusterState) { MlIndexAndAlias.createIndexAndAliasIfNecessary( - client, clusterState, new IndexNameExpressionResolver(), TEST_INDEX_PREFIX, TEST_INDEX_ALIAS, listener); + client, clusterState, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + TEST_INDEX_PREFIX, TEST_INDEX_ALIAS, listener); } @SuppressWarnings("unchecked") diff --git a/x-pack/plugin/data-streams/src/main/java/org/elasticsearch/xpack/datastreams/action/DataStreamsStatsTransportAction.java b/x-pack/plugin/data-streams/src/main/java/org/elasticsearch/xpack/datastreams/action/DataStreamsStatsTransportAction.java index bccac4e2ed711..18be5fce55ccd 100644 --- a/x-pack/plugin/data-streams/src/main/java/org/elasticsearch/xpack/datastreams/action/DataStreamsStatsTransportAction.java +++ b/x-pack/plugin/data-streams/src/main/java/org/elasticsearch/xpack/datastreams/action/DataStreamsStatsTransportAction.java @@ -95,17 +95,13 @@ protected ClusterBlockException checkRequestBlock( return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, concreteIndices); } - private List dataStreamNames(ClusterState clusterState, DataStreamsStatsAction.Request request) { - String[] requestIndices = request.indices(); - if (requestIndices == null || requestIndices.length == 0) { - requestIndices = new String[] { "*" }; - } - return indexNameExpressionResolver.dataStreamNames(clusterState, request.indicesOptions(), requestIndices); - } - @Override - protected ShardsIterator shards(ClusterState clusterState, DataStreamsStatsAction.Request request, String[] concreteIndices) { - List abstractionNames = dataStreamNames(clusterState, request); + protected String[] resolveConcreteIndexNames(ClusterState clusterState, DataStreamsStatsAction.Request request) { + List abstractionNames = indexNameExpressionResolver.dataStreamNames( + clusterState, + request.indicesOptions(), + request.indices() + ); SortedMap indicesLookup = clusterState.getMetadata().getIndicesLookup(); String[] concreteDatastreamIndices = abstractionNames.stream().flatMap(abstractionName -> { @@ -119,7 +115,12 @@ protected ShardsIterator shards(ClusterState clusterState, DataStreamsStatsActio return Stream.empty(); } }).toArray(String[]::new); - return clusterState.getRoutingTable().allShards(concreteDatastreamIndices); + return concreteDatastreamIndices; + } + + @Override + protected ShardsIterator shards(ClusterState clusterState, DataStreamsStatsAction.Request request, String[] concreteIndices) { + return clusterState.getRoutingTable().allShards(concreteIndices); } @Override @@ -171,7 +172,11 @@ protected DataStreamsStatsAction.Response newResponse( // Collect the number of backing indices from the cluster state. If every shard operation for an index fails, // or if a backing index simply has no shards allocated, it would be excluded from the counts if we only used // shard results to calculate. 
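Aside, before the change below: the comment above motivates deriving backing-index counts from the cluster state rather than from per-shard responses. The following is a minimal, illustrative sketch only (the class and method names are invented for illustration, and the `IndexAbstraction` API is assumed to match the 7.x line this change targets); it shows the same resolution pattern of expanding data stream expressions via `dataStreamNames` and counting backing indices from the metadata lookup:

[source,java]
--------------------------------------------------
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;

import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexAbstraction;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;

final class DataStreamBackingIndexCountSketch {

    // Resolve the requested expressions to concrete data stream names and count their backing
    // indices from cluster metadata, so a stream is still represented even when every shard-level
    // operation for it failed or a backing index has no allocated shards.
    static Map<String, Integer> countBackingIndices(
        ClusterState state,
        IndexNameExpressionResolver resolver,
        IndicesOptions indicesOptions,
        String... expressions
    ) {
        List<String> dataStreamNames = resolver.dataStreamNames(state, indicesOptions, expressions);
        SortedMap<String, IndexAbstraction> indicesLookup = state.getMetadata().getIndicesLookup();
        Map<String, Integer> counts = new HashMap<>();
        for (String name : dataStreamNames) {
            IndexAbstraction abstraction = indicesLookup.get(name);
            if (abstraction != null && abstraction.getType() == IndexAbstraction.Type.DATA_STREAM) {
                counts.put(name, abstraction.getIndices().size());
            }
        }
        return counts;
    }
}
--------------------------------------------------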
- List abstractionNames = dataStreamNames(clusterState, request); + List abstractionNames = indexNameExpressionResolver.dataStreamNames( + clusterState, + request.indicesOptions(), + request.indices() + ); for (String abstractionName : abstractionNames) { IndexAbstraction indexAbstraction = indicesLookup.get(abstractionName); assert indexAbstraction != null; diff --git a/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/DeleteDataStreamTransportActionTests.java b/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/DeleteDataStreamTransportActionTests.java index 66f4d9acbc2b4..d51a8137f33fa 100644 --- a/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/DeleteDataStreamTransportActionTests.java +++ b/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/DeleteDataStreamTransportActionTests.java @@ -17,6 +17,8 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.snapshots.Snapshot; import org.elasticsearch.snapshots.SnapshotId; @@ -37,7 +39,7 @@ public class DeleteDataStreamTransportActionTests extends ESTestCase { - private final IndexNameExpressionResolver iner = new IndexNameExpressionResolver(); + private final IndexNameExpressionResolver iner = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); public void testDeleteDataStream() { final String dataStreamName = "my-data-stream"; diff --git a/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/GetDataStreamsTransportActionTests.java b/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/GetDataStreamsTransportActionTests.java index e4c2cb000a15f..9f231d30ae5cd 100644 --- a/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/GetDataStreamsTransportActionTests.java +++ b/x-pack/plugin/data-streams/src/test/java/org/elasticsearch/xpack/datastreams/action/GetDataStreamsTransportActionTests.java @@ -11,6 +11,8 @@ import org.elasticsearch.cluster.metadata.DataStream; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.xpack.core.action.GetDataStreamAction; @@ -27,7 +29,11 @@ public void testGetDataStream() { final String dataStreamName = "my-data-stream"; ClusterState cs = getClusterStateWithDataStreams(List.of(new Tuple<>(dataStreamName, 1)), List.of()); GetDataStreamAction.Request req = new GetDataStreamAction.Request(new String[] { dataStreamName }); - List dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + List dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(1)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamName)); } @@ -40,24 +46,40 @@ public void testGetDataStreamsWithWildcards() { ); GetDataStreamAction.Request req = new 
GetDataStreamAction.Request(new String[] { dataStreamNames[1].substring(0, 5) + "*" }); - List dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + List dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(1)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamNames[1])); req = new GetDataStreamAction.Request(new String[] { "*" }); - dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(2)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamNames[1])); assertThat(dataStreams.get(1).getName(), equalTo(dataStreamNames[0])); req = new GetDataStreamAction.Request((String[]) null); - dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(2)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamNames[1])); assertThat(dataStreams.get(1).getName(), equalTo(dataStreamNames[0])); req = new GetDataStreamAction.Request(new String[] { "matches-none*" }); - dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(0)); } @@ -69,25 +91,37 @@ public void testGetDataStreamsWithoutWildcards() { ); GetDataStreamAction.Request req = new GetDataStreamAction.Request(new String[] { dataStreamNames[0], dataStreamNames[1] }); - List dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + List dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(2)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamNames[1])); assertThat(dataStreams.get(1).getName(), equalTo(dataStreamNames[0])); req = new GetDataStreamAction.Request(new String[] { dataStreamNames[1] }); - dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(1)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamNames[1])); req = new GetDataStreamAction.Request(new String[] { dataStreamNames[0] }); - dataStreams = GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req); + dataStreams = GetDataStreamsTransportAction.getDataStreams( + cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + req + ); assertThat(dataStreams.size(), equalTo(1)); assertThat(dataStreams.get(0).getName(), equalTo(dataStreamNames[0])); GetDataStreamAction.Request req2 = new GetDataStreamAction.Request(new String[] { "foo" }); IndexNotFoundException e = expectThrows( IndexNotFoundException.class, - () -> 
GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req2) + () -> GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), req2) ); assertThat(e.getMessage(), containsString("no such index [foo]")); } @@ -98,7 +132,7 @@ public void testGetNonexistentDataStream() { GetDataStreamAction.Request req = new GetDataStreamAction.Request(new String[] { dataStreamName }); IndexNotFoundException e = expectThrows( IndexNotFoundException.class, - () -> GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(), req) + () -> GetDataStreamsTransportAction.getDataStreams(cs, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), req) ); assertThat(e.getMessage(), containsString("no such index [" + dataStreamName + "]")); } diff --git a/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/AbstractEnrichProcessor.java b/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/AbstractEnrichProcessor.java index fc4d0691b39a3..525770d1fbd97 100644 --- a/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/AbstractEnrichProcessor.java +++ b/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/AbstractEnrichProcessor.java @@ -9,6 +9,7 @@ import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.client.Client; +import org.elasticsearch.client.OriginSettingClient; import org.elasticsearch.cluster.routing.Preference; import org.elasticsearch.index.query.ConstantScoreQueryBuilder; import org.elasticsearch.index.query.QueryBuilder; @@ -25,6 +26,8 @@ import java.util.Map; import java.util.function.BiConsumer; +import static org.elasticsearch.xpack.core.ClientHelper.ENRICH_ORIGIN; + public abstract class AbstractEnrichProcessor extends AbstractProcessor { private final String policyName; @@ -188,8 +191,9 @@ int getMaxMatches() { } private static BiConsumer> createSearchRunner(Client client) { + Client originClient = new OriginSettingClient(client, ENRICH_ORIGIN); return (req, handler) -> { - client.execute( + originClient.execute( EnrichCoordinatorProxyAction.INSTANCE, req, ActionListener.wrap(resp -> { handler.accept(resp, null); }, e -> { handler.accept(null, e); }) diff --git a/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/EnrichPolicyRunner.java b/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/EnrichPolicyRunner.java index faf893b42e00a..211557083e899 100644 --- a/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/EnrichPolicyRunner.java +++ b/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/EnrichPolicyRunner.java @@ -29,8 +29,10 @@ import org.elasticsearch.action.admin.indices.segments.ShardSegments; import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsRequest; import org.elasticsearch.action.bulk.BulkItemResponse; +import org.elasticsearch.action.support.ContextPreservingActionListener; import org.elasticsearch.action.support.master.AcknowledgedResponse; import org.elasticsearch.client.Client; +import org.elasticsearch.client.OriginSettingClient; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.AliasMetadata; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; @@ -62,6 +64,8 @@ import java.util.Set; import java.util.function.LongSupplier; +import static org.elasticsearch.xpack.core.ClientHelper.ENRICH_ORIGIN; + 
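For readers unfamiliar with the origin mechanism used throughout the enrich changes that follow: an `OriginSettingClient` wraps a `Client` so that every request executed through it is stamped with the given origin in the thread context, letting the plugin reach its own system indices under its own identity rather than the calling user's. A minimal sketch, assuming only the `ClientHelper.ENRICH_ORIGIN` constant added earlier in this diff (the wrapper class name below is illustrative, not part of the change):

[source,java]
--------------------------------------------------
import org.elasticsearch.client.Client;
import org.elasticsearch.client.OriginSettingClient;

import static org.elasticsearch.xpack.core.ClientHelper.ENRICH_ORIGIN;

final class EnrichOriginClientSketch {

    // Requests sent through the returned client run under the Enrich plugin's origin
    // (for example, maintenance of the `.enrich-*` indices). Requests sent through the
    // original client keep the calling user's security context, which is what the policy
    // runner relies on when reading the user's source indices.
    static Client enrichOriginClient(Client client) {
        return new OriginSettingClient(client, ENRICH_ORIGIN);
    }
}
--------------------------------------------------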
public class EnrichPolicyRunner implements Runnable { private static final Logger logger = LogManager.getLogger(EnrichPolicyRunner.class); @@ -116,6 +120,7 @@ public void run() { final String[] sourceIndices = policy.getIndices().toArray(new String[0]); logger.debug("Policy [{}]: Checking source indices [{}]", policyName, sourceIndices); GetIndexRequest getIndexRequest = new GetIndexRequest().indices(sourceIndices); + // This call does not set the origin to ensure that the user executing the policy has permission to access the source index client.admin().indices().getIndex(getIndexRequest, new ActionListener<>() { @Override public void onResponse(GetIndexResponse getIndexResponse) { @@ -300,7 +305,7 @@ private void prepareAndCreateEnrichIndex() { CreateIndexRequest createEnrichIndexRequest = new CreateIndexRequest(enrichIndexName, enrichIndexSettings); createEnrichIndexRequest.mapping(resolveEnrichMapping(policy)); logger.debug("Policy [{}]: Creating new enrich index [{}]", policyName, enrichIndexName); - client.admin().indices().create(createEnrichIndexRequest, new ActionListener<>() { + enrichOriginClient().admin().indices().create(createEnrichIndexRequest, new ActionListener<>() { @Override public void onResponse(CreateIndexResponse createIndexResponse) { prepareReindexOperation(enrichIndexName); @@ -316,7 +321,7 @@ public void onFailure(Exception e) { private void prepareReindexOperation(final String destinationIndexName) { // Check to make sure that the enrich pipeline exists, and create it if it is missing. if (EnrichPolicyReindexPipeline.exists(clusterService.state()) == false) { - EnrichPolicyReindexPipeline.create(client, new ActionListener<>() { + EnrichPolicyReindexPipeline.create(enrichOriginClient(), new ActionListener<>() { @Override public void onResponse(AcknowledgedResponse acknowledgedResponse) { transferDataToEnrichIndex(destinationIndexName); @@ -350,67 +355,80 @@ private void transferDataToEnrichIndex(final String destinationIndexName) { reindexRequest.getDestination().source(new BytesArray(new byte[0]), XContentType.SMILE); reindexRequest.getDestination().routing("discard"); reindexRequest.getDestination().setPipeline(EnrichPolicyReindexPipeline.pipelineName()); - client.execute(ReindexAction.INSTANCE, reindexRequest, new ActionListener<>() { - @Override - public void onResponse(BulkByScrollResponse bulkByScrollResponse) { - // Do we want to fail the request if there were failures during the reindex process? - if (bulkByScrollResponse.getBulkFailures().size() > 0) { - logger.warn( - "Policy [{}]: encountered [{}] bulk failures. Turn on DEBUG logging for details.", - policyName, - bulkByScrollResponse.getBulkFailures().size() - ); - if (logger.isDebugEnabled()) { - for (BulkItemResponse.Failure failure : bulkByScrollResponse.getBulkFailures()) { - logger.debug( - new ParameterizedMessage( - "Policy [{}]: bulk index failed for index [{}], id [{}]", - policyName, - failure.getIndex(), - failure.getId() - ), - failure.getCause() + + // The ContextPreservingActionListener here is for the purpose of dropping the response headers, as we need this reindex to run + // in the security context of the user (rather than Enrich's security context) to ensure that DLS/FLS is correctly applied, but + // the reindex needs to access the `.enrich` index, which causes a deprecation warning. Since we drop the response headers, + // the deprecation warning is also dropped - but this is a hack and will not work once full protections of system indices are + // enabled. 
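The comment above describes the header-dropping workaround in prose. Shown in isolation as a minimal sketch (the class and method names are illustrative, not part of the change): wrapping the real listener in a `ContextPreservingActionListener` built from `newRestorableContext(false)` restores the caller's thread context when the response arrives, so response headers added while the action executed, such as the system index deprecation warning, are discarded.

[source,java]
--------------------------------------------------
import java.util.function.Supplier;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ContextPreservingActionListener;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.ReindexAction;
import org.elasticsearch.index.reindex.ReindexRequest;

final class DropResponseHeadersSketch {

    // Run a reindex in the calling user's security context, but drop any response headers
    // (for example, deprecation warnings) that the reindex adds to the thread context.
    static void reindexDroppingResponseHeaders(
        Client client,
        ReindexRequest reindexRequest,
        ActionListener<BulkByScrollResponse> listener
    ) {
        ThreadContext threadContext = client.threadPool().getThreadContext();
        // `false` means response headers are not preserved when the stored context is restored.
        Supplier<ThreadContext.StoredContext> restorableContext = threadContext.newRestorableContext(false);
        client.execute(ReindexAction.INSTANCE, reindexRequest,
            new ContextPreservingActionListener<>(restorableContext, listener));
    }
}
--------------------------------------------------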
+ client.execute( + ReindexAction.INSTANCE, + reindexRequest, + new ContextPreservingActionListener<>( + client.threadPool().getThreadContext().newRestorableContext(false), + new ActionListener<>() { + @Override + public void onResponse(BulkByScrollResponse bulkByScrollResponse) { + // Do we want to fail the request if there were failures during the reindex process? + if (bulkByScrollResponse.getBulkFailures().size() > 0) { + logger.warn( + "Policy [{}]: encountered [{}] bulk failures. Turn on DEBUG logging for details.", + policyName, + bulkByScrollResponse.getBulkFailures().size() ); - } - } - listener.onFailure(new ElasticsearchException("Encountered bulk failures during reindex process")); - } else if (bulkByScrollResponse.getSearchFailures().size() > 0) { - logger.warn( - "Policy [{}]: encountered [{}] search failures. Turn on DEBUG logging for details.", - policyName, - bulkByScrollResponse.getSearchFailures().size() - ); - if (logger.isDebugEnabled()) { - for (ScrollableHitSource.SearchFailure failure : bulkByScrollResponse.getSearchFailures()) { - logger.debug( - new ParameterizedMessage( - "Policy [{}]: search failed for index [{}], shard [{}] on node [{}]", - policyName, - failure.getIndex(), - failure.getShardId(), - failure.getNodeId() - ), - failure.getReason() + if (logger.isDebugEnabled()) { + for (BulkItemResponse.Failure failure : bulkByScrollResponse.getBulkFailures()) { + logger.debug( + new ParameterizedMessage( + "Policy [{}]: bulk index failed for index [{}], id [{}]", + policyName, + failure.getIndex(), + failure.getId() + ), + failure.getCause() + ); + } + } + listener.onFailure(new ElasticsearchException("Encountered bulk failures during reindex process")); + } else if (bulkByScrollResponse.getSearchFailures().size() > 0) { + logger.warn( + "Policy [{}]: encountered [{}] search failures. 
Turn on DEBUG logging for details.", + policyName, + bulkByScrollResponse.getSearchFailures().size() + ); + if (logger.isDebugEnabled()) { + for (ScrollableHitSource.SearchFailure failure : bulkByScrollResponse.getSearchFailures()) { + logger.debug( + new ParameterizedMessage( + "Policy [{}]: search failed for index [{}], shard [{}] on node [{}]", + policyName, + failure.getIndex(), + failure.getShardId(), + failure.getNodeId() + ), + failure.getReason() + ); + } + } + listener.onFailure(new ElasticsearchException("Encountered search failures during reindex process")); + } else { + logger.info( + "Policy [{}]: Transferred [{}] documents to enrich index [{}]", + policyName, + bulkByScrollResponse.getCreated(), + destinationIndexName ); + forceMergeEnrichIndex(destinationIndexName, 1); } } - listener.onFailure(new ElasticsearchException("Encountered search failures during reindex process")); - } else { - logger.info( - "Policy [{}]: Transferred [{}] documents to enrich index [{}]", - policyName, - bulkByScrollResponse.getCreated(), - destinationIndexName - ); - forceMergeEnrichIndex(destinationIndexName, 1); - } - } - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } - }); + @Override + public void onFailure(Exception e) { + listener.onFailure(e); + } + } + ) + ); } private void forceMergeEnrichIndex(final String destinationIndexName, final int attempt) { @@ -421,22 +439,24 @@ private void forceMergeEnrichIndex(final String destinationIndexName, final int attempt, maxForceMergeAttempts ); - client.admin().indices().forceMerge(new ForceMergeRequest(destinationIndexName).maxNumSegments(1), new ActionListener<>() { - @Override - public void onResponse(ForceMergeResponse forceMergeResponse) { - refreshEnrichIndex(destinationIndexName, attempt); - } + enrichOriginClient().admin() + .indices() + .forceMerge(new ForceMergeRequest(destinationIndexName).maxNumSegments(1), new ActionListener<>() { + @Override + public void onResponse(ForceMergeResponse forceMergeResponse) { + refreshEnrichIndex(destinationIndexName, attempt); + } - @Override - public void onFailure(Exception e) { - listener.onFailure(e); - } - }); + @Override + public void onFailure(Exception e) { + listener.onFailure(e); + } + }); } private void refreshEnrichIndex(final String destinationIndexName, final int attempt) { logger.debug("Policy [{}]: Refreshing enrich index [{}]", policyName, destinationIndexName); - client.admin().indices().refresh(new RefreshRequest(destinationIndexName), new ActionListener<>() { + enrichOriginClient().admin().indices().refresh(new RefreshRequest(destinationIndexName), new ActionListener<>() { @Override public void onResponse(RefreshResponse refreshResponse) { ensureSingleSegment(destinationIndexName, attempt); @@ -450,7 +470,7 @@ public void onFailure(Exception e) { } protected void ensureSingleSegment(final String destinationIndexName, final int attempt) { - client.admin().indices().segments(new IndicesSegmentsRequest(destinationIndexName), new ActionListener<>() { + enrichOriginClient().admin().indices().segments(new IndicesSegmentsRequest(destinationIndexName), new ActionListener<>() { @Override public void onResponse(IndicesSegmentResponse indicesSegmentResponse) { IndexSegments indexSegments = indicesSegmentResponse.getIndices().get(destinationIndexName); @@ -503,7 +523,7 @@ private void setIndexReadOnly(final String destinationIndexName) { logger.debug("Policy [{}]: Setting new enrich index [{}] to be read only", policyName, destinationIndexName); UpdateSettingsRequest 
request = new UpdateSettingsRequest(destinationIndexName).setPreserveExisting(true) .settings(Settings.builder().put("index.auto_expand_replicas", "0-all").put("index.blocks.write", "true")); - client.admin().indices().updateSettings(request, new ActionListener<>() { + enrichOriginClient().admin().indices().updateSettings(request, new ActionListener<>() { @Override public void onResponse(AcknowledgedResponse acknowledgedResponse) { waitForIndexGreen(destinationIndexName); @@ -518,7 +538,7 @@ public void onFailure(Exception e) { private void waitForIndexGreen(final String destinationIndexName) { ClusterHealthRequest request = new ClusterHealthRequest(destinationIndexName).waitForGreenStatus(); - client.admin().cluster().health(request, new ActionListener<>() { + enrichOriginClient().admin().cluster().health(request, new ActionListener<>() { @Override public void onResponse(ClusterHealthResponse clusterHealthResponse) { updateEnrichPolicyAlias(destinationIndexName); @@ -536,7 +556,7 @@ private void updateEnrichPolicyAlias(final String destinationIndexName) { logger.debug("Policy [{}]: Promoting new enrich index [{}] to alias [{}]", policyName, destinationIndexName, enrichIndexBase); GetAliasesRequest aliasRequest = new GetAliasesRequest(enrichIndexBase); ClusterState clusterState = clusterService.state(); - String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, aliasRequest); + String[] concreteIndices = indexNameExpressionResolver.concreteIndexNamesWithSystemIndexAccess(clusterState, aliasRequest); ImmutableOpenMap> aliases = clusterState.metadata().findAliases(aliasRequest, concreteIndices); IndicesAliasesRequest aliasToggleRequest = new IndicesAliasesRequest(); String[] indices = aliases.keys().toArray(String.class); @@ -544,7 +564,7 @@ private void updateEnrichPolicyAlias(final String destinationIndexName) { aliasToggleRequest.addAliasAction(IndicesAliasesRequest.AliasActions.remove().indices(indices).alias(enrichIndexBase)); } aliasToggleRequest.addAliasAction(IndicesAliasesRequest.AliasActions.add().index(destinationIndexName).alias(enrichIndexBase)); - client.admin().indices().aliases(aliasToggleRequest, new ActionListener<>() { + enrichOriginClient().admin().indices().aliases(aliasToggleRequest, new ActionListener<>() { @Override public void onResponse(AcknowledgedResponse acknowledgedResponse) { logger.info("Policy [{}]: Policy execution complete", policyName); @@ -559,4 +579,12 @@ public void onFailure(Exception e) { } }); } + + /** + * Use this client to access information at the access level of the Enrich plugin, rather than at the access level of the user. + * For example, use this client to access system indices (such as `.enrich*` indices). 
+ */ + private Client enrichOriginClient() { + return new OriginSettingClient(client, ENRICH_ORIGIN); + } } diff --git a/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/action/TransportDeleteEnrichPolicyAction.java b/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/action/TransportDeleteEnrichPolicyAction.java index cd72e0f07428a..d37d1cb603787 100644 --- a/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/action/TransportDeleteEnrichPolicyAction.java +++ b/x-pack/plugin/enrich/src/main/java/org/elasticsearch/xpack/enrich/action/TransportDeleteEnrichPolicyAction.java @@ -15,6 +15,7 @@ import org.elasticsearch.action.support.master.AcknowledgedResponse; import org.elasticsearch.action.support.master.TransportMasterNodeAction; import org.elasticsearch.client.Client; +import org.elasticsearch.client.OriginSettingClient; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; @@ -38,6 +39,8 @@ import java.util.ArrayList; import java.util.List; +import static org.elasticsearch.xpack.core.ClientHelper.ENRICH_ORIGIN; + public class TransportDeleteEnrichPolicyAction extends TransportMasterNodeAction { private final EnrichPolicyLocks enrichPolicyLocks; @@ -132,7 +135,7 @@ protected void masterOperation( GetIndexRequest indices = new GetIndexRequest().indices(EnrichPolicy.getBaseName(request.getName()) + "-*") .indicesOptions(IndicesOptions.lenientExpand()); - String[] concreteIndices = indexNameExpressionResolver.concreteIndexNames(state, indices); + String[] concreteIndices = indexNameExpressionResolver.concreteIndexNamesWithSystemIndexAccess(state, indices); deleteIndicesAndPolicy(concreteIndices, request.getName(), ActionListener.wrap((response) -> { enrichPolicyLocks.releasePolicy(request.getName()); @@ -153,7 +156,7 @@ private void deleteIndicesAndPolicy(String[] indices, String name, ActionListene // as the setting 'action.destructive_requires_name' may be set to true DeleteIndexRequest deleteRequest = new DeleteIndexRequest().indices(indices).indicesOptions(LENIENT_OPTIONS); - client.admin().indices().delete(deleteRequest, ActionListener.wrap((response) -> { + new OriginSettingClient(client, ENRICH_ORIGIN).admin().indices().delete(deleteRequest, ActionListener.wrap((response) -> { if (response.isAcknowledged() == false) { listener.onFailure( new ElasticsearchStatusException( diff --git a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/AbstractEnrichTestCase.java b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/AbstractEnrichTestCase.java index 76ba526ce4123..1091772cc65bf 100644 --- a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/AbstractEnrichTestCase.java +++ b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/AbstractEnrichTestCase.java @@ -11,6 +11,8 @@ import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESSingleNodeTestCase; import org.elasticsearch.xpack.core.enrich.EnrichPolicy; @@ -32,7 +34,7 @@ protected AtomicReference saveEnrichPolicy(String name, EnrichPolicy if (policy != null) { createSourceIndices(policy); } - IndexNameExpressionResolver resolver = 
new IndexNameExpressionResolver(); + IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); CountDownLatch latch = new CountDownLatch(1); AtomicReference error = new AtomicReference<>(); EnrichStore.putPolicy(name, policy, clusterService, resolver, e -> { diff --git a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyExecutorTests.java b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyExecutorTests.java index fa9d43246775e..94088caae50f8 100644 --- a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyExecutorTests.java +++ b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyExecutorTests.java @@ -19,6 +19,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.tasks.TaskManager; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.threadpool.TestThreadPool; @@ -141,7 +142,7 @@ public void testNonConcurrentPolicyExecution() throws InterruptedException { null, testTaskManager, testThreadPool, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), ESTestCase::randomNonNegativeLong ); @@ -198,7 +199,7 @@ public void testMaximumPolicyExecutionLimit() throws InterruptedException { null, testTaskManager, testThreadPool, - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), ESTestCase::randomNonNegativeLong ); diff --git a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyMaintenanceServiceTests.java b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyMaintenanceServiceTests.java index c26243fb9e40e..40c9c2e84d5ba 100644 --- a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyMaintenanceServiceTests.java +++ b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichPolicyMaintenanceServiceTests.java @@ -14,6 +14,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.json.JsonXContent; import org.elasticsearch.index.mapper.MapperService; @@ -122,7 +123,7 @@ private EnrichPolicy randomPolicy() { } private void addPolicy(String policyName, EnrichPolicy policy) throws InterruptedException { - IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); createSourceIndices(client(), policy); doSyncronously( (clusterService, exceptionConsumer) -> EnrichStore.putPolicy(policyName, policy, clusterService, resolver, exceptionConsumer) diff --git a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichProcessorFactoryTests.java b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichProcessorFactoryTests.java index c2f31b063dee7..2c0321b2e2e57 100644 --- a/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichProcessorFactoryTests.java +++ 
b/x-pack/plugin/enrich/src/test/java/org/elasticsearch/xpack/enrich/EnrichProcessorFactoryTests.java @@ -7,6 +7,7 @@ import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.Version; +import org.elasticsearch.client.Client; import org.elasticsearch.cluster.metadata.AliasMetadata; import org.elasticsearch.cluster.metadata.IndexMetadata; import org.elasticsearch.cluster.metadata.Metadata; @@ -14,6 +15,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.script.ScriptService; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpClient; import org.elasticsearch.xpack.core.enrich.EnrichPolicy; import org.junit.Before; @@ -41,51 +43,53 @@ public void initializeScriptService() { public void testCreateProcessorInstance() throws Exception { List enrichValues = List.of("globalRank", "tldRank", "tld"); EnrichPolicy policy = new EnrichPolicy(EnrichPolicy.MATCH_TYPE, null, List.of("source_index"), "my_key", enrichValues); - EnrichProcessorFactory factory = new EnrichProcessorFactory(null, scriptService); - factory.metadata = createMetadata("majestic", policy); - - Map config = new HashMap<>(); - config.put("policy_name", "majestic"); - config.put("field", "host"); - config.put("target_field", "entry"); - boolean keyIgnoreMissing = randomBoolean(); - if (keyIgnoreMissing || randomBoolean()) { - config.put("ignore_missing", keyIgnoreMissing); - } - - Boolean overrideEnabled = randomBoolean() ? null : randomBoolean(); - if (overrideEnabled != null) { - config.put("override", overrideEnabled); - } - - Integer maxMatches = null; - if (randomBoolean()) { - maxMatches = randomIntBetween(1, 128); - config.put("max_matches", maxMatches); - } - - int numRandomValues = randomIntBetween(1, 8); - List> randomValues = new ArrayList<>(numRandomValues); - for (int i = 0; i < numRandomValues; i++) { - randomValues.add(new Tuple<>(randomFrom(enrichValues), randomAlphaOfLength(4))); - } - - MatchProcessor result = (MatchProcessor) factory.create(Collections.emptyMap(), "_tag", null, config); - assertThat(result, notNullValue()); - assertThat(result.getPolicyName(), equalTo("majestic")); - assertThat(result.getField(), equalTo("host")); - assertThat(result.getTargetField(), equalTo("entry")); - assertThat(result.getMatchField(), equalTo("my_key")); - assertThat(result.isIgnoreMissing(), is(keyIgnoreMissing)); - if (overrideEnabled != null) { - assertThat(result.isOverrideEnabled(), is(overrideEnabled)); - } else { - assertThat(result.isOverrideEnabled(), is(true)); - } - if (maxMatches != null) { - assertThat(result.getMaxMatches(), equalTo(maxMatches)); - } else { - assertThat(result.getMaxMatches(), equalTo(1)); + try (Client client = new NoOpClient(this.getClass().getSimpleName() + "TestClient")) { + EnrichProcessorFactory factory = new EnrichProcessorFactory(client, scriptService); + factory.metadata = createMetadata("majestic", policy); + + Map config = new HashMap<>(); + config.put("policy_name", "majestic"); + config.put("field", "host"); + config.put("target_field", "entry"); + boolean keyIgnoreMissing = randomBoolean(); + if (keyIgnoreMissing || randomBoolean()) { + config.put("ignore_missing", keyIgnoreMissing); + } + + Boolean overrideEnabled = randomBoolean() ? 
null : randomBoolean(); + if (overrideEnabled != null) { + config.put("override", overrideEnabled); + } + + Integer maxMatches = null; + if (randomBoolean()) { + maxMatches = randomIntBetween(1, 128); + config.put("max_matches", maxMatches); + } + + int numRandomValues = randomIntBetween(1, 8); + List> randomValues = new ArrayList<>(numRandomValues); + for (int i = 0; i < numRandomValues; i++) { + randomValues.add(new Tuple<>(randomFrom(enrichValues), randomAlphaOfLength(4))); + } + + MatchProcessor result = (MatchProcessor) factory.create(Collections.emptyMap(), "_tag", null, config); + assertThat(result, notNullValue()); + assertThat(result.getPolicyName(), equalTo("majestic")); + assertThat(result.getField(), equalTo("host")); + assertThat(result.getTargetField(), equalTo("entry")); + assertThat(result.getMatchField(), equalTo("my_key")); + assertThat(result.isIgnoreMissing(), is(keyIgnoreMissing)); + if (overrideEnabled != null) { + assertThat(result.isOverrideEnabled(), is(overrideEnabled)); + } else { + assertThat(result.isOverrideEnabled(), is(true)); + } + if (maxMatches != null) { + assertThat(result.getMaxMatches(), equalTo(maxMatches)); + } else { + assertThat(result.getMaxMatches(), equalTo(1)); + } } } @@ -167,19 +171,21 @@ public void testUnsupportedPolicy() throws Exception { public void testCompactEnrichValuesFormat() throws Exception { List enrichValues = List.of("globalRank", "tldRank", "tld"); EnrichPolicy policy = new EnrichPolicy(EnrichPolicy.MATCH_TYPE, null, List.of("source_index"), "host", enrichValues); - EnrichProcessorFactory factory = new EnrichProcessorFactory(null, scriptService); - factory.metadata = createMetadata("majestic", policy); - - Map config = new HashMap<>(); - config.put("policy_name", "majestic"); - config.put("field", "host"); - config.put("target_field", "entry"); - - MatchProcessor result = (MatchProcessor) factory.create(Collections.emptyMap(), "_tag", null, config); - assertThat(result, notNullValue()); - assertThat(result.getPolicyName(), equalTo("majestic")); - assertThat(result.getField(), equalTo("host")); - assertThat(result.getTargetField(), equalTo("entry")); + try (Client client = new NoOpClient(this.getClass().getSimpleName() + "TestClient")) { + EnrichProcessorFactory factory = new EnrichProcessorFactory(client, scriptService); + factory.metadata = createMetadata("majestic", policy); + + Map config = new HashMap<>(); + config.put("policy_name", "majestic"); + config.put("field", "host"); + config.put("target_field", "entry"); + + MatchProcessor result = (MatchProcessor) factory.create(Collections.emptyMap(), "_tag", null, config); + assertThat(result, notNullValue()); + assertThat(result.getPolicyName(), equalTo("majestic")); + assertThat(result.getField(), equalTo("host")); + assertThat(result.getTargetField(), equalTo("entry")); + } } public void testNoTargetField() throws Exception { diff --git a/x-pack/plugin/logstash/src/main/java/org/elasticsearch/xpack/logstash/action/TransportDeletePipelineAction.java b/x-pack/plugin/logstash/src/main/java/org/elasticsearch/xpack/logstash/action/TransportDeletePipelineAction.java index e2b79747ec82e..46b5982196538 100644 --- a/x-pack/plugin/logstash/src/main/java/org/elasticsearch/xpack/logstash/action/TransportDeletePipelineAction.java +++ b/x-pack/plugin/logstash/src/main/java/org/elasticsearch/xpack/logstash/action/TransportDeletePipelineAction.java @@ -12,11 +12,14 @@ import org.elasticsearch.action.support.HandledTransportAction; import org.elasticsearch.action.support.WriteRequest; 
import org.elasticsearch.client.Client; +import org.elasticsearch.client.OriginSettingClient; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.tasks.Task; import org.elasticsearch.transport.TransportService; import org.elasticsearch.xpack.logstash.Logstash; +import static org.elasticsearch.xpack.core.ClientHelper.LOGSTASH_MANAGEMENT_ORIGIN; + public class TransportDeletePipelineAction extends HandledTransportAction { private final Client client; @@ -24,7 +27,7 @@ public class TransportDeletePipelineAction extends HandledTransportAction { private static final Logger logger = LogManager.getLogger(TransportGetPipelineAction.class); @@ -43,7 +46,7 @@ public class TransportGetPipelineAction extends HandledTransportAction { private final Client client; @@ -23,7 +26,7 @@ public class TransportPutPipelineAction extends HandledTransportAction { + if (warnings.isEmpty()) { + // There may not be an index to delete, in which case there's no warning + return false; + } else if (warnings.size() > 1) { + return true; + } + // We don't know exactly which indices we're cleaning up in advance, so just accept all system index access warnings. + final String warning = warnings.get(0); + final boolean isSystemIndexWarning = warning.contains("this request accesses system indices") + && warning.contains("but in a future major version, direct access to system indices will be prevented by default"); + return isSystemIndexWarning == false; + }).build(); + final Request deleteInferenceRequest = new Request("DELETE", InferenceIndexConstants.INDEX_PATTERN); + deleteInferenceRequest.setOptions(allowSystemIndexAccessWarningOptions); + client().performRequest(deleteInferenceRequest); + final Request deleteStatsRequest = new Request("DELETE", MlStatsIndex.indexPattern()); + client().performRequest(deleteStatsRequest); Request loggingSettings = new Request("PUT", "_cluster/settings"); loggingSettings.setJsonEntity("" + "{" + diff --git a/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/MlJobIT.java b/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/MlJobIT.java index 7f6892842ea89..721a566cb18c9 100644 --- a/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/MlJobIT.java +++ b/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/MlJobIT.java @@ -7,6 +7,7 @@ import org.apache.http.util.EntityUtils; import org.elasticsearch.client.Request; +import org.elasticsearch.client.RequestOptions; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.common.settings.Settings; @@ -34,7 +35,6 @@ import java.util.regex.Matcher; import java.util.regex.Pattern; -import static org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.hasEntry; @@ -818,7 +818,19 @@ public void testDelete_multipleRequest() throws Exception { } private String getAliases() throws IOException { - Response response = client().performRequest(new Request("GET", "/_aliases")); + final Request aliasesRequest = new Request("GET", "/_aliases"); + // Allow system index deprecation warnings - this can be removed once system indices are omitted from responses rather than + // triggering a deprecation 
warning. + aliasesRequest.setOptions(RequestOptions.DEFAULT.toBuilder().setWarningsHandler(warnings -> { + if (warnings.isEmpty()) { + return false; + } else if (warnings.size() > 1) { + return true; + } else { + return warnings.get(0).startsWith("this request accesses system indices:") == false; + } + }).build()); + Response response = client().performRequest(aliasesRequest); return EntityUtils.toString(response.getEntity()); } diff --git a/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/ModelSnapshotRetentionIT.java b/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/ModelSnapshotRetentionIT.java index 8ba21efcee20f..344a833df711a 100644 --- a/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/ModelSnapshotRetentionIT.java +++ b/x-pack/plugin/ml/qa/native-multi-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/ml/integration/ModelSnapshotRetentionIT.java @@ -20,7 +20,9 @@ import org.elasticsearch.action.support.WriteRequest; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.json.JsonXContent; @@ -67,7 +69,8 @@ public class ModelSnapshotRetentionIT extends MlNativeAutodetectIntegTestCase { @Before public void addMlState() { PlainActionFuture future = new PlainActionFuture<>(); - createStateIndexAndAliasIfNecessary(client(), ClusterState.EMPTY_STATE, new IndexNameExpressionResolver(), future); + createStateIndexAndAliasIfNecessary(client(), ClusterState.EMPTY_STATE, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), future); future.actionGet(); } diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/AutodetectResultProcessorIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/AutodetectResultProcessorIT.java index 4d0f147cee2b6..dc69fede7af9b 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/AutodetectResultProcessorIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/AutodetectResultProcessorIT.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.reindex.ReindexPlugin; @@ -135,7 +136,8 @@ public void createComponents() throws Exception { Settings.Builder builder = Settings.builder() .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), TimeValue.timeValueSeconds(1)); AnomalyDetectionAuditor auditor = new AnomalyDetectionAuditor(client(), getInstanceFromNode(ClusterService.class)); - jobResultsProvider = new JobResultsProvider(client(), builder.build(), new IndexNameExpressionResolver()); + jobResultsProvider = new JobResultsProvider(client(), builder.build(), + new IndexNameExpressionResolver(new 
ThreadContext(Settings.EMPTY))); renormalizer = mock(Renormalizer.class); process = mock(AutodetectProcess.class); capturedUpdateModelSnapshotOnJobRequests = new ArrayList<>(); @@ -175,7 +177,8 @@ protected void updateModelSnapshotOnJob(ModelSnapshot modelSnapshot) { // As a result they must create the index as part of the test setup. Do not // copy this setup to tests that run jobs in the way they are run in production. PlainActionFuture future = new PlainActionFuture<>(); - createStateIndexAndAliasIfNecessary(client(), ClusterState.EMPTY_STATE, new IndexNameExpressionResolver(), future); + createStateIndexAndAliasIfNecessary(client(), ClusterState.EMPTY_STATE, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), future); future.get(); } diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/EstablishedMemUsageIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/EstablishedMemUsageIT.java index 7bf20cea0b4c7..5f178bd7c0548 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/EstablishedMemUsageIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/EstablishedMemUsageIT.java @@ -13,6 +13,7 @@ import org.elasticsearch.cluster.service.MasterService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.xpack.core.ClientHelper; import org.elasticsearch.xpack.core.ml.action.PutJobAction; @@ -59,7 +60,7 @@ public void createComponents() { OriginSettingClient originSettingClient = new OriginSettingClient(client(), ClientHelper.ML_ORIGIN); ResultsPersisterService resultsPersisterService = new ResultsPersisterService(originSettingClient, clusterService, settings); - jobResultsProvider = new JobResultsProvider(client(), settings, new IndexNameExpressionResolver()); + jobResultsProvider = new JobResultsProvider(client(), settings, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); jobResultsPersister = new JobResultsPersister( originSettingClient, resultsPersisterService, new AnomalyDetectionAuditor(client(), clusterService)); } diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobResultsProviderIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobResultsProviderIT.java index af7362a9247b4..cac051247d5d8 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobResultsProviderIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobResultsProviderIT.java @@ -33,6 +33,7 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -112,7 +113,7 @@ public class JobResultsProviderIT extends MlSingleNodeTestCase { public void createComponents() throws Exception { Settings.Builder builder = Settings.builder() .put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(),
TimeValue.timeValueSeconds(1)); - jobProvider = new JobResultsProvider(client(), builder.build(), new IndexNameExpressionResolver()); + jobProvider = new JobResultsProvider(client(), builder.build(), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); ThreadPool tp = mock(ThreadPool.class); ClusterSettings clusterSettings = new ClusterSettings(builder.build(), new HashSet<>(Arrays.asList(InferenceProcessor.MAX_INFERENCE_PROCESSORS, @@ -916,7 +917,8 @@ private void indexModelSnapshot(ModelSnapshot snapshot) { private void indexQuantiles(Quantiles quantiles) { PlainActionFuture future = new PlainActionFuture<>(); - createStateIndexAndAliasIfNecessary(client(), ClusterState.EMPTY_STATE, new IndexNameExpressionResolver(), future); + createStateIndexAndAliasIfNecessary(client(), ClusterState.EMPTY_STATE, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), future); future.actionGet(); JobResultsPersister persister = new JobResultsPersister(new OriginSettingClient(client(), ClientHelper.ML_ORIGIN), resultsPersisterService, auditor); diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobStorageDeletionTaskIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobStorageDeletionTaskIT.java index 2f7868776638c..5c93960726106 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobStorageDeletionTaskIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/JobStorageDeletionTaskIT.java @@ -18,6 +18,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.threadpool.ThreadPool; @@ -74,7 +75,7 @@ public void createComponents() { ClusterService clusterService = new ClusterService(settings, clusterSettings, tp); OriginSettingClient originSettingClient = new OriginSettingClient(client(), ClientHelper.ML_ORIGIN); ResultsPersisterService resultsPersisterService = new ResultsPersisterService(originSettingClient, clusterService, settings); - jobResultsProvider = new JobResultsProvider(client(), settings, new IndexNameExpressionResolver()); + jobResultsProvider = new JobResultsProvider(client(), settings, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); jobResultsPersister = new JobResultsPersister( originSettingClient, resultsPersisterService, new AnomalyDetectionAuditor(client(), clusterService)); } diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlAutoUpdateServiceIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlAutoUpdateServiceIT.java index f0f3e66fe3f79..27ad678b21b7e 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlAutoUpdateServiceIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlAutoUpdateServiceIT.java @@ -16,6 +16,7 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.util.concurrent.ThreadContext; import 
org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.search.SearchModule; @@ -70,7 +71,7 @@ public void createComponents() throws Exception { public void testAutomaticModelUpdate() throws Exception { ensureGreen("_all"); - IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); client().prepareIndex(MlConfigIndex.indexName()) .setId(DatafeedConfig.documentId("farequote-datafeed-with-old-agg")) .setSource(AGG_WITH_OLD_DATE_HISTOGRAM_INTERVAL, XContentType.JSON) diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlConfigMigratorIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlConfigMigratorIT.java index d736af39a3eba..71d1066695bba 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlConfigMigratorIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/MlConfigMigratorIT.java @@ -29,6 +29,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.LoggingDeprecationHandler; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; @@ -75,7 +76,7 @@ public class MlConfigMigratorIT extends MlSingleNodeTestCase { - private final IndexNameExpressionResolver expressionResolver = new IndexNameExpressionResolver(); + private final IndexNameExpressionResolver expressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); private ClusterService clusterService; @Before diff --git a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/UnusedStatsRemoverIT.java b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/UnusedStatsRemoverIT.java index cf376e2467f6e..92c395eaa7836 100644 --- a/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/UnusedStatsRemoverIT.java +++ b/x-pack/plugin/ml/src/internalClusterTest/java/org/elasticsearch/xpack/ml/integration/UnusedStatsRemoverIT.java @@ -52,7 +52,8 @@ public class UnusedStatsRemoverIT extends BaseMlIntegTestCase { public void createComponents() { client = new OriginSettingClient(client(), ClientHelper.ML_ORIGIN); PlainActionFuture future = new PlainActionFuture<>(); - MlStatsIndex.createStatsIndexAndAliasIfNecessary(client(), clusterService().state(), new IndexNameExpressionResolver(), future); + MlStatsIndex.createStatsIndexAndAliasIfNecessary(client(), clusterService().state(), + new IndexNameExpressionResolver(client.threadPool().getThreadContext()), future); future.actionGet(); } diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportOpenJobActionTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportOpenJobActionTests.java index a495d9a9d13e1..fbd98b8d9657d 100644 --- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportOpenJobActionTests.java +++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportOpenJobActionTests.java @@ -25,6 
+25,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; @@ -92,7 +93,7 @@ public void testValidate_givenValidJob() { } public void testVerifyIndicesPrimaryShardsAreActive() { - final IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(); + final IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); Metadata.Builder metadata = Metadata.builder(); RoutingTable.Builder routingTable = RoutingTable.builder(); addIndices(metadata, routingTable); @@ -106,7 +107,7 @@ public void testVerifyIndicesPrimaryShardsAreActive() { metadata = new Metadata.Builder(cs.metadata()); routingTable = new RoutingTable.Builder(cs.routingTable()); - IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); String indexToRemove = randomFrom(indexNameExpressionResolver.concreteIndexNames(cs, IndicesOptions.lenientExpandOpen(), TransportOpenJobAction.indicesOfInterest(".ml-anomalies-shared"))); if (randomBoolean()) { @@ -157,7 +158,7 @@ public void testGetAssignment_GivenJobThatRequiresMigration() { TransportOpenJobAction.OpenJobPersistentTasksExecutor executor = new TransportOpenJobAction.OpenJobPersistentTasksExecutor( Settings.EMPTY, clusterService, mock(AutodetectProcessManager.class), mock(MlMemoryTracker.class), mock(Client.class), - new IndexNameExpressionResolver()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); OpenJobAction.JobParams params = new OpenJobAction.JobParams("missing_job_field"); assertEquals(TransportOpenJobAction.AWAITING_MIGRATION, executor.getAssignment(params, mock(ClusterState.class))); @@ -183,7 +184,7 @@ public void testGetAssignment_GivenUnavailableIndicesWithLazyNode() { TransportOpenJobAction.OpenJobPersistentTasksExecutor executor = new TransportOpenJobAction.OpenJobPersistentTasksExecutor( settings, clusterService, mock(AutodetectProcessManager.class), mock(MlMemoryTracker.class), mock(Client.class), - new IndexNameExpressionResolver()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); OpenJobAction.JobParams params = new OpenJobAction.JobParams("unavailable_index_with_lazy_node"); params.setJob(mock(Job.class)); @@ -210,7 +211,7 @@ public void testGetAssignment_GivenLazyJobAndNoGlobalLazyNodes() { TransportOpenJobAction.OpenJobPersistentTasksExecutor executor = new TransportOpenJobAction.OpenJobPersistentTasksExecutor( settings, clusterService, mock(AutodetectProcessManager.class), mock(MlMemoryTracker.class), mock(Client.class), - new IndexNameExpressionResolver()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); Job job = mock(Job.class); when(job.allowLazyOpen()).thenReturn(true); diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportStartDataFrameAnalyticsActionTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportStartDataFrameAnalyticsActionTests.java index b77a9d1cb6fef..7183550277a3f 100644 --- 
a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportStartDataFrameAnalyticsActionTests.java +++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/action/TransportStartDataFrameAnalyticsActionTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; @@ -89,7 +90,8 @@ public void testVerifyIndicesPrimaryShardsAreActive() { ClusterState cs = csBuilder.build(); assertThat( - TransportStartDataFrameAnalyticsAction.verifyIndicesPrimaryShardsAreActive(cs, new IndexNameExpressionResolver(), indexName), + TransportStartDataFrameAnalyticsAction.verifyIndicesPrimaryShardsAreActive(cs, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), indexName), empty()); metadata = new Metadata.Builder(cs.metadata()); @@ -109,7 +111,7 @@ public void testVerifyIndicesPrimaryShardsAreActive() { csBuilder.routingTable(routingTable.build()); csBuilder.metadata(metadata); List result = TransportStartDataFrameAnalyticsAction.verifyIndicesPrimaryShardsAreActive(csBuilder.build(), - new IndexNameExpressionResolver(), indexName); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), indexName); assertThat(result, contains(indexName)); } @@ -229,7 +231,7 @@ private static TaskExecutor createTaskExecutor() { mock(DataFrameAnalyticsManager.class), mock(DataFrameAnalyticsAuditor.class), mock(MlMemoryTracker.class), - new IndexNameExpressionResolver(), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), mock(IndexTemplateConfig.class)); } diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedConfigAutoUpdaterTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedConfigAutoUpdaterTests.java index 4ac742231e3c9..a986ee693bbe6 100644 --- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedConfigAutoUpdaterTests.java +++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedConfigAutoUpdaterTests.java @@ -21,6 +21,7 @@ import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.routing.UnassignedInfo; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.test.ESTestCase; @@ -49,7 +50,7 @@ public class DatafeedConfigAutoUpdaterTests extends ESTestCase { private DatafeedConfigProvider provider; private List datafeeds = new ArrayList<>(); - private IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(); + private IndexNameExpressionResolver indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); @Before public void setup() { diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedNodeSelectorTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedNodeSelectorTests.java index 183f0ad6eb3e1..46ac294be3efa 100644 --- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedNodeSelectorTests.java +++ 
b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/datafeed/DatafeedNodeSelectorTests.java @@ -25,7 +25,9 @@ import org.elasticsearch.cluster.routing.TestShardRouting; import org.elasticsearch.cluster.routing.UnassignedInfo; import org.elasticsearch.common.collect.Tuple; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.persistent.PersistentTasksCustomMetadata; @@ -64,7 +66,7 @@ public class DatafeedNodeSelectorTests extends ESTestCase { @Before public void init() { - resolver = new IndexNameExpressionResolver(); + resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); nodes = DiscoveryNodes.builder() .add(new DiscoveryNode("node_name", "node_id", new TransportAddress(InetAddress.getLoopbackAddress(), 9300), Collections.emptyMap(), Collections.emptySet(), Version.CURRENT)) diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/inference/TrainedModelStatsServiceTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/inference/TrainedModelStatsServiceTests.java index cf47e64bf178a..d417019d3f158 100644 --- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/inference/TrainedModelStatsServiceTests.java +++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/inference/TrainedModelStatsServiceTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.cluster.routing.UnassignedInfo; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.test.ESTestCase; @@ -46,7 +47,7 @@ public class TrainedModelStatsServiceTests extends ESTestCase { public void testVerifyIndicesExistAndPrimaryShardsAreActive() { String aliasName = MlStatsIndex.writeAlias(); String concreteIndex = ".ml-stats-000001"; - IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); { Metadata.Builder metadata = Metadata.builder(); @@ -136,7 +137,7 @@ public void testVerifyIndicesExistAndPrimaryShardsAreActive() { public void testUpdateStatsUpgradeMode() { String aliasName = MlStatsIndex.writeAlias(); String concreteIndex = ".ml-stats-000001"; - IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(); + IndexNameExpressionResolver resolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); // create a valid index routing so persistence will occur RoutingTable.Builder routingTableBuilder = RoutingTable.builder(); diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java index 49db60e884d61..d435848cbecc4 100644 --- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java +++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/persistence/JobResultsProviderTests.java @@ -877,7 +877,7 @@ public void testCreateTermFieldsMapping() throws IOException { } private JobResultsProvider createProvider(Client client) { - return new 
JobResultsProvider(client, Settings.EMPTY, new IndexNameExpressionResolver()); + return new JobResultsProvider(client, Settings.EMPTY, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } private static SearchResponse createSearchResponse(List> source) throws IOException { diff --git a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/process/autodetect/AutodetectProcessManagerTests.java b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/process/autodetect/AutodetectProcessManagerTests.java index fb6565e061a28..25f3762dd7fa9 100644 --- a/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/process/autodetect/AutodetectProcessManagerTests.java +++ b/x-pack/plugin/ml/src/test/java/org/elasticsearch/xpack/ml/job/process/autodetect/AutodetectProcessManagerTests.java @@ -726,7 +726,7 @@ private AutodetectProcessManager createManager(Settings settings) { return new AutodetectProcessManager(settings, client, threadPool, new NamedXContentRegistry(Collections.emptyList()), auditor, clusterService, jobManager, jobResultsProvider, jobResultsPersister, jobDataCountsPersister, annotationPersister, autodetectFactory, normalizerFactory, nativeStorageProvider, - new IndexNameExpressionResolver()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } private AutodetectProcessManager createSpyManagerAndCallProcessData(String jobId) { AutodetectProcessManager manager = createSpyManager(); diff --git a/x-pack/plugin/monitoring/src/test/java/org/elasticsearch/xpack/monitoring/collector/cluster/ClusterStatsCollectorTests.java b/x-pack/plugin/monitoring/src/test/java/org/elasticsearch/xpack/monitoring/collector/cluster/ClusterStatsCollectorTests.java index cb7ef41a36411..3c43158f8b754 100644 --- a/x-pack/plugin/monitoring/src/test/java/org/elasticsearch/xpack/monitoring/collector/cluster/ClusterStatsCollectorTests.java +++ b/x-pack/plugin/monitoring/src/test/java/org/elasticsearch/xpack/monitoring/collector/cluster/ClusterStatsCollectorTests.java @@ -19,6 +19,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.license.License; @@ -68,7 +69,7 @@ public void setUp() throws Exception { public void testShouldCollectReturnsFalseIfNotMaster() { final ClusterStatsCollector collector = new ClusterStatsCollector(Settings.EMPTY, clusterService, licenseState, client, licenseService, - new IndexNameExpressionResolver()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); assertThat(collector.shouldCollect(false), is(false)); } @@ -76,7 +77,7 @@ public void testShouldCollectReturnsFalseIfNotMaster() { public void testShouldCollectReturnsTrue() { final ClusterStatsCollector collector = new ClusterStatsCollector(Settings.EMPTY, clusterService, licenseState, client, licenseService, - new IndexNameExpressionResolver()); + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); assertThat(collector.shouldCollect(true), is(true)); } diff --git a/x-pack/plugin/ql/src/main/java/org/elasticsearch/xpack/ql/index/IndexResolver.java b/x-pack/plugin/ql/src/main/java/org/elasticsearch/xpack/ql/index/IndexResolver.java index 536e6c825a266..d75d38529958b 100644 --- 
a/x-pack/plugin/ql/src/main/java/org/elasticsearch/xpack/ql/index/IndexResolver.java +++ b/x-pack/plugin/ql/src/main/java/org/elasticsearch/xpack/ql/index/IndexResolver.java @@ -7,7 +7,6 @@ import com.carrotsearch.hppc.cursors.ObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; - import org.elasticsearch.ElasticsearchSecurityException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest; @@ -232,8 +231,8 @@ private void resolveIndices(String[] indices, String javaRegex, GetAliasesRespon } client.admin().indices().getIndex(indexRequest, - wrap(response -> filterResults(javaRegex, aliases, response, retrieveIndices, retrieveFrozenIndices, listener), - listener::onFailure)); + wrap(response -> filterResults(javaRegex, aliases, response, retrieveIndices, retrieveFrozenIndices, listener), + listener::onFailure)); } else { filterResults(javaRegex, aliases, null, false, false, listener); diff --git a/x-pack/plugin/security/src/internalClusterTest/java/org/elasticsearch/integration/DateMathExpressionIntegTests.java b/x-pack/plugin/security/src/internalClusterTest/java/org/elasticsearch/integration/DateMathExpressionIntegTests.java index 5f3df136d1727..4158c56546d89 100644 --- a/x-pack/plugin/security/src/internalClusterTest/java/org/elasticsearch/integration/DateMathExpressionIntegTests.java +++ b/x-pack/plugin/security/src/internalClusterTest/java/org/elasticsearch/integration/DateMathExpressionIntegTests.java @@ -18,6 +18,8 @@ import org.elasticsearch.client.Requests; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.common.settings.SecureString; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.test.SecurityIntegTestCase; @@ -59,7 +61,8 @@ protected String configRoles() { public void testDateMathExpressionsCanBeAuthorized() throws Exception { final String expression = ""; - final String expectedIndexName = new IndexNameExpressionResolver().resolveDateMathExpression(expression); + final String expectedIndexName = + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)).resolveDateMathExpression(expression); final boolean refeshOnOperation = randomBoolean(); Client client = client().filterWithHeader(Collections.singletonMap("Authorization", basicAuthHeaderValue("user1", USERS_PASSWD))); diff --git a/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/AuthorizationUtils.java b/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/AuthorizationUtils.java index 3570967db7ea2..a8670e5c139ef 100644 --- a/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/AuthorizationUtils.java +++ b/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authz/AuthorizationUtils.java @@ -24,17 +24,18 @@ import static org.elasticsearch.ingest.IngestService.INGEST_ORIGIN; import static org.elasticsearch.persistent.PersistentTasksService.PERSISTENT_TASK_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.ASYNC_SEARCH_ORIGIN; +import static org.elasticsearch.xpack.core.ClientHelper.DEPRECATION_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.ENRICH_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.IDP_ORIGIN; -import static org.elasticsearch.xpack.core.ClientHelper.SEARCHABLE_SNAPSHOTS_ORIGIN; -import static 
org.elasticsearch.xpack.core.ClientHelper.STACK_ORIGIN; -import static org.elasticsearch.xpack.core.ClientHelper.TRANSFORM_ORIGIN; -import static org.elasticsearch.xpack.core.ClientHelper.DEPRECATION_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.INDEX_LIFECYCLE_ORIGIN; +import static org.elasticsearch.xpack.core.ClientHelper.LOGSTASH_MANAGEMENT_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.MONITORING_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.ROLLUP_ORIGIN; +import static org.elasticsearch.xpack.core.ClientHelper.SEARCHABLE_SNAPSHOTS_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.SECURITY_ORIGIN; +import static org.elasticsearch.xpack.core.ClientHelper.STACK_ORIGIN; +import static org.elasticsearch.xpack.core.ClientHelper.TRANSFORM_ORIGIN; import static org.elasticsearch.xpack.core.ClientHelper.WATCHER_ORIGIN; public final class AuthorizationUtils { @@ -124,6 +125,7 @@ public static void switchUserBasedOnActionOriginAndExecute(ThreadContext threadC case INGEST_ORIGIN: case STACK_ORIGIN: case SEARCHABLE_SNAPSHOTS_ORIGIN: + case LOGSTASH_MANAGEMENT_ORIGIN: case TASKS_ORIGIN: // TODO use a more limited user for tasks securityContext.executeAsUser(XPackUser.INSTANCE, consumer, Version.CURRENT); break; diff --git a/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/rest/SecurityRestFilter.java b/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/rest/SecurityRestFilter.java index d2327dcb3e833..0ba39ae2d26f6 100644 --- a/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/rest/SecurityRestFilter.java +++ b/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/rest/SecurityRestFilter.java @@ -50,6 +50,11 @@ public SecurityRestFilter(XPackLicenseState licenseState, ThreadContext threadCo this.extractClientCertificate = extractClientCertificate; } + @Override + public boolean allowSystemIndexAccessByDefault() { + return restHandler.allowSystemIndexAccessByDefault(); + } + @Override public void handleRequest(RestRequest request, RestChannel channel, NodeClient client) throws Exception { if (licenseState.isSecurityEnabled() && request.method() != Method.OPTIONS) { diff --git a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/SecurityTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/SecurityTests.java index ce8e276ca5101..06f95e0ee7fc5 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/SecurityTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/SecurityTests.java @@ -132,7 +132,7 @@ protected SSLService getSslService() { when(client.threadPool()).thenReturn(threadPool); when(client.settings()).thenReturn(settings); return security.createComponents(client, threadPool, clusterService, mock(ResourceWatcherService.class), mock(ScriptService.class), - xContentRegistry(), env, new IndexNameExpressionResolver()); + xContentRegistry(), env, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } private Collection createComponentsWithSecurityNotExplicitlyEnabled(Settings testSettings, SecurityExtension... 
extensions) diff --git a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/AuthorizationServiceTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/AuthorizationServiceTests.java index 2077c15adba7a..61601c054c688 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/AuthorizationServiceTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/AuthorizationServiceTests.java @@ -271,7 +271,7 @@ public void setup() { roleMap.put(ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR.getName(), ReservedRolesStore.SUPERUSER_ROLE_DESCRIPTOR); authorizationService = new AuthorizationService(settings, rolesStore, clusterService, auditTrailService, new DefaultAuthenticationFailureHandler(Collections.emptyMap()), threadPool, new AnonymousUser(settings), - null, Collections.emptySet(), licenseState, new IndexNameExpressionResolver()); + null, Collections.emptySet(), licenseState, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } private void authorize(Authentication authentication, String action, TransportRequest request) { @@ -915,7 +915,7 @@ public void testDenialForAnonymousUser() throws IOException { final AnonymousUser anonymousUser = new AnonymousUser(settings); authorizationService = new AuthorizationService(settings, rolesStore, clusterService, auditTrailService, new DefaultAuthenticationFailureHandler(Collections.emptyMap()), threadPool, anonymousUser, null, Collections.emptySet(), - new XPackLicenseState(settings, () -> 0), new IndexNameExpressionResolver()); + new XPackLicenseState(settings, () -> 0), new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); RoleDescriptor role = new RoleDescriptor("a_all", null, new IndicesPrivileges[] { IndicesPrivileges.builder().indices("a").privileges("all").build() }, null); @@ -943,7 +943,8 @@ public void testDenialForAnonymousUserAuthorizationExceptionDisabled() throws IO final Authentication authentication = createAuthentication(new AnonymousUser(settings)); authorizationService = new AuthorizationService(settings, rolesStore, clusterService, auditTrailService, new DefaultAuthenticationFailureHandler(Collections.emptyMap()), threadPool, new AnonymousUser(settings), null, - Collections.emptySet(), new XPackLicenseState(settings, () -> 0), new IndexNameExpressionResolver()); + Collections.emptySet(), new XPackLicenseState(settings, () -> 0), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); RoleDescriptor role = new RoleDescriptor("a_all", null, new IndicesPrivileges[]{IndicesPrivileges.builder().indices("a").privileges("all").build()}, null); @@ -1687,7 +1688,8 @@ public void getUserPrivileges(Authentication authentication, AuthorizationInfo a when(licenseState.checkFeature(Feature.SECURITY_AUTHORIZATION_ENGINE)).thenReturn(true); authorizationService = new AuthorizationService(Settings.EMPTY, rolesStore, clusterService, auditTrailService, new DefaultAuthenticationFailureHandler(Collections.emptyMap()), threadPool, - new AnonymousUser(Settings.EMPTY), engine, Collections.emptySet(), licenseState, new IndexNameExpressionResolver()); + new AnonymousUser(Settings.EMPTY), engine, Collections.emptySet(), licenseState, + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); Authentication authentication; try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { authentication = createAuthentication(new User("test user", "a_all")); diff --git 
a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolverTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolverTests.java index 16e5505a286e8..9c3d7b036c3e6 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolverTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authz/IndicesAndAliasesResolverTests.java @@ -45,6 +45,7 @@ import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.protocol.xpack.graph.GraphExploreRequest; @@ -121,7 +122,7 @@ public void setup() { .put("cluster.remote.other_remote.seeds", "127.0.0.1:" + randomIntBetween(9351, 9399)) .build(); - indexNameExpressionResolver = new IndexNameExpressionResolver(); + indexNameExpressionResolver = new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)); final boolean withAlias = randomBoolean(); final String securityIndexName = SECURITY_MAIN_ALIAS + (withAlias ? "-" + randomAlphaOfLength(5) : ""); @@ -266,7 +267,8 @@ public void setup() { ClusterService clusterService = mock(ClusterService.class); when(clusterService.getClusterSettings()).thenReturn(new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); - defaultIndicesResolver = new IndicesAndAliasesResolver(settings, clusterService, new IndexNameExpressionResolver()); + defaultIndicesResolver = + new IndicesAndAliasesResolver(settings, clusterService, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))); } public void testDashIndicesAreAllowedInShardLevelRequests() { diff --git a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/SecurityBaseRestHandlerTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/SecurityBaseRestHandlerTests.java index b64e619f96bcb..48009f05e22bc 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/SecurityBaseRestHandlerTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/SecurityBaseRestHandlerTests.java @@ -11,6 +11,7 @@ import org.elasticsearch.license.XPackLicenseState; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestChannel; import org.elasticsearch.test.rest.FakeRestRequest; @@ -57,21 +58,22 @@ protected RestChannelConsumer innerPrepareRequest(RestRequest request, NodeClien }; FakeRestRequest fakeRestRequest = new FakeRestRequest(); FakeRestChannel fakeRestChannel = new FakeRestChannel(fakeRestRequest, randomBoolean(), securityDefaultEnabled ? 
0 : 1); - NodeClient client = mock(NodeClient.class); - assertFalse(consumerCalled.get()); - verifyZeroInteractions(licenseState); - handler.handleRequest(fakeRestRequest, fakeRestChannel, client); - - verify(licenseState).checkFeature(XPackLicenseState.Feature.SECURITY); - if (securityDefaultEnabled) { - assertTrue(consumerCalled.get()); - assertEquals(0, fakeRestChannel.responses().get()); - assertEquals(0, fakeRestChannel.errors().get()); - } else { + try (NodeClient client = new NoOpNodeClient(this.getTestName())) { assertFalse(consumerCalled.get()); - assertEquals(0, fakeRestChannel.responses().get()); - assertEquals(1, fakeRestChannel.errors().get()); + verifyZeroInteractions(licenseState); + handler.handleRequest(fakeRestRequest, fakeRestChannel, client); + + verify(licenseState).checkFeature(XPackLicenseState.Feature.SECURITY); + if (securityDefaultEnabled) { + assertTrue(consumerCalled.get()); + assertEquals(0, fakeRestChannel.responses().get()); + assertEquals(0, fakeRestChannel.errors().get()); + } else { + assertFalse(consumerCalled.get()); + assertEquals(0, fakeRestChannel.responses().get()); + assertEquals(1, fakeRestChannel.errors().get()); + } } } } diff --git a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestGetUserPrivilegesActionTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestGetUserPrivilegesActionTests.java index 0b3d9ed355e6c..e5ecaaf436b19 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestGetUserPrivilegesActionTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestGetUserPrivilegesActionTests.java @@ -15,6 +15,7 @@ import org.elasticsearch.license.XPackLicenseState; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestChannel; import org.elasticsearch.test.rest.FakeRestRequest; import org.elasticsearch.xpack.core.security.SecurityContext; @@ -45,7 +46,9 @@ public void testBasicLicense() throws Exception { when(licenseState.checkFeature(XPackLicenseState.Feature.SECURITY)).thenReturn(false); final FakeRestRequest request = new FakeRestRequest(); final FakeRestChannel channel = new FakeRestChannel(request, true, 1); - action.handleRequest(request, channel, mock(NodeClient.class)); + try (NodeClient nodeClient = new NoOpNodeClient(this.getTestName())) { + action.handleRequest(request, channel, nodeClient); + } assertThat(channel.capturedResponse(), notNullValue()); assertThat(channel.capturedResponse().status(), equalTo(RestStatus.FORBIDDEN)); assertThat(channel.capturedResponse().content().utf8ToString(), containsString("current license is non-compliant for [security]")); diff --git a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestHasPrivilegesActionTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestHasPrivilegesActionTests.java index 835d44ba28ff7..29fdba89705f0 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestHasPrivilegesActionTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/rest/action/user/RestHasPrivilegesActionTests.java @@ -16,6 +16,7 @@ import org.elasticsearch.rest.RestRequest; import org.elasticsearch.rest.RestStatus; import 
org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.client.NoOpNodeClient; import org.elasticsearch.test.rest.FakeRestChannel; import org.elasticsearch.test.rest.FakeRestRequest; import org.elasticsearch.xpack.core.security.SecurityContext; @@ -39,13 +40,14 @@ public void testBodyConsumed() throws Exception { final XPackLicenseState licenseState = mock(XPackLicenseState.class); final RestHasPrivilegesAction action = new RestHasPrivilegesAction(Settings.EMPTY, mock(SecurityContext.class), licenseState); - try (XContentBuilder bodyBuilder = JsonXContent.contentBuilder().startObject().endObject()) { + try (XContentBuilder bodyBuilder = JsonXContent.contentBuilder().startObject().endObject(); + NodeClient client = new NoOpNodeClient(this.getTestName())) { final RestRequest request = new FakeRestRequest.Builder(xContentRegistry()) .withPath("/_security/user/_has_privileges/") .withContent(new BytesArray(bodyBuilder.toString()), XContentType.JSON) .build(); final RestChannel channel = new FakeRestChannel(request, true, 1); - action.handleRequest(request, channel, mock(NodeClient.class)); + action.handleRequest(request, channel, client); } } @@ -54,13 +56,14 @@ public void testBasicLicense() throws Exception { final RestHasPrivilegesAction action = new RestHasPrivilegesAction(Settings.EMPTY, mock(SecurityContext.class), licenseState); when(licenseState.checkFeature(XPackLicenseState.Feature.SECURITY)).thenReturn(false); - try (XContentBuilder bodyBuilder = JsonXContent.contentBuilder().startObject().endObject()) { + try (XContentBuilder bodyBuilder = JsonXContent.contentBuilder().startObject().endObject(); + NodeClient client = new NoOpNodeClient(this.getTestName())) { final RestRequest request = new FakeRestRequest.Builder(xContentRegistry()) .withPath("/_security/user/_has_privileges/") .withContent(new BytesArray(bodyBuilder.toString()), XContentType.JSON) .build(); final FakeRestChannel channel = new FakeRestChannel(request, true, 1); - action.handleRequest(request, channel, mock(NodeClient.class)); + action.handleRequest(request, channel, client); assertThat(channel.capturedResponse(), notNullValue()); assertThat(channel.capturedResponse().status(), equalTo(RestStatus.FORBIDDEN)); assertThat( diff --git a/x-pack/plugin/sql/qa/server/security/src/test/java/org/elasticsearch/xpack/sql/qa/security/RestSqlSecurityIT.java b/x-pack/plugin/sql/qa/server/security/src/test/java/org/elasticsearch/xpack/sql/qa/security/RestSqlSecurityIT.java index abb12a3212241..e46dc26a3dc5d 100644 --- a/x-pack/plugin/sql/qa/server/security/src/test/java/org/elasticsearch/xpack/sql/qa/security/RestSqlSecurityIT.java +++ b/x-pack/plugin/sql/qa/server/security/src/test/java/org/elasticsearch/xpack/sql/qa/security/RestSqlSecurityIT.java @@ -60,13 +60,13 @@ public void queryWorksAsAdmin() throws Exception { ); expected.put("rows", Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(4, 5, 6))); - assertResponse(expected, runSql(null, mode, "SELECT * FROM test ORDER BY a")); + assertResponse(expected, runSql(null, mode, "SELECT * FROM test ORDER BY a", false)); } @Override public void expectMatchesAdmin(String adminSql, String user, String userSql) throws Exception { String mode = randomMode(); - assertResponse(runSql(null, mode, adminSql), runSql(user, mode, userSql)); + assertResponse(runSql(null, mode, adminSql, false), runSql(user, mode, userSql, false)); } @Override @@ -75,12 +75,14 @@ public void expectScrollMatchesAdmin(String adminSql, String user, String userSq Map adminResponse = runSql( null, new 
StringEntity(query(adminSql).mode(mode).fetchSize(1).toString(), ContentType.APPLICATION_JSON), - mode + mode, + false ); Map otherResponse = runSql( user, new StringEntity(query(adminSql).mode(mode).fetchSize(1).toString(), ContentType.APPLICATION_JSON), - mode + mode, + false ); String adminCursor = (String) adminResponse.remove("cursor"); @@ -92,12 +94,14 @@ public void expectScrollMatchesAdmin(String adminSql, String user, String userSq adminResponse = runSql( null, new StringEntity(cursor(adminCursor).mode(mode).toString(), ContentType.APPLICATION_JSON), - mode + mode, + false ); otherResponse = runSql( user, new StringEntity(cursor(otherCursor).mode(mode).toString(), ContentType.APPLICATION_JSON), - mode + mode, + false ); adminCursor = (String) adminResponse.remove("cursor"); otherCursor = (String) otherResponse.remove("cursor"); @@ -131,7 +135,7 @@ public void expectDescribe(Map> columns, String user) throw } expected.put("rows", rows); - assertResponse(expected, runSql(user, mode, "DESCRIBE test")); + assertResponse(expected, runSql(user, mode, "DESCRIBE test", false)); } @Override @@ -153,7 +157,8 @@ public void expectShowTables(List tables, String user) throws Exception } expected.put("rows", rows); - Map actual = runSql(user, mode, "SHOW TABLES"); + // Allow system index deprecation warnings, because this may return `.security*` indices. + Map actual = runSql(user, mode, "SHOW TABLES", true); /* * Security automatically creates either a `.security` or a * `.security6` index but it might not have created the index @@ -169,21 +174,21 @@ public void expectShowTables(List tables, String user) throws Exception @Override public void expectForbidden(String user, String sql) { - ResponseException e = expectThrows(ResponseException.class, () -> runSql(user, randomMode(), sql)); + ResponseException e = expectThrows(ResponseException.class, () -> runSql(user, randomMode(), sql, false)); assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(403)); assertThat(e.getMessage(), containsString("unauthorized")); } @Override public void expectUnknownIndex(String user, String sql) { - ResponseException e = expectThrows(ResponseException.class, () -> runSql(user, randomMode(), sql)); + ResponseException e = expectThrows(ResponseException.class, () -> runSql(user, randomMode(), sql, false)); assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(400)); assertThat(e.getMessage(), containsString("Unknown index")); } @Override public void expectUnknownColumn(String user, String sql, String column) throws Exception { - ResponseException e = expectThrows(ResponseException.class, () -> runSql(user, randomMode(), sql)); + ResponseException e = expectThrows(ResponseException.class, () -> runSql(user, randomMode(), sql, false)); assertThat(e.getMessage(), containsString("Unknown column [" + column + "]")); } @@ -195,17 +200,45 @@ public void checkNoMonitorMain(String user) throws Exception { expectMatchesAdmin("DESCRIBE test", user, "DESCRIBE test"); } - private static Map runSql(@Nullable String asUser, String mode, String sql) throws IOException { - return runSql(asUser, new StringEntity(query(sql).mode(mode).toString(), ContentType.APPLICATION_JSON), mode); + private static Map runSql( + @Nullable String asUser, + String mode, + String sql, + boolean allowSystemIndexDeprecationWarning + ) throws IOException { + return runSql( + asUser, + new StringEntity(query(sql).mode(mode).toString(), ContentType.APPLICATION_JSON), + mode, + allowSystemIndexDeprecationWarning + ); } - private 
static Map runSql(@Nullable String asUser, HttpEntity entity, String mode) throws IOException { + private static Map runSql( + @Nullable String asUser, + HttpEntity entity, + String mode, + boolean allowSystemIndexDeprecationWarning + ) throws IOException { Request request = new Request("POST", SQL_QUERY_REST_ENDPOINT); + RequestOptions.Builder options = request.getOptions().toBuilder(); if (asUser != null) { - RequestOptions.Builder options = request.getOptions().toBuilder(); options.addHeader("es-security-runas-user", asUser); - request.setOptions(options); } + if (allowSystemIndexDeprecationWarning) { + options.setWarningsHandler(warnings -> { + if (warnings.isEmpty()) { + // No warnings is OK + return false; + } else if (warnings.size() > 1) { + return true; + } else { + String warning = warnings.get(0); + return warning.startsWith("this request accesses system indices: ") == false; + } + }); + } + request.setOptions(options); request.setEntity(entity); return toMap(client().performRequest(request), mode); } @@ -251,7 +284,8 @@ public void testHijackScrollFails() throws Exception { Map adminResponse = RestActions.runSql( null, new StringEntity(query("SELECT * FROM test").mode(mode).fetchSize(1).toString(), ContentType.APPLICATION_JSON), - mode + mode, + false ); String cursor = (String) adminResponse.remove("cursor"); @@ -262,7 +296,8 @@ public void testHijackScrollFails() throws Exception { () -> RestActions.runSql( "full_access", new StringEntity(cursor(cursor).mode(mode).toString(), ContentType.APPLICATION_JSON), - mode + mode, + false ) ); // TODO return a better error message for bad scrolls diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/calendar_crud.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/calendar_crud.yml index ba85f9d093f8d..cfe0ead9e5040 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/calendar_crud.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/calendar_crud.yml @@ -389,6 +389,8 @@ --- "Test delete calendar deletes events": + - skip: + features: warnings - do: ml.put_calendar: @@ -425,6 +427,8 @@ # Check the event from calendar 1 is deleted - do: + warnings: + - "this request accesses system indices: [.ml-meta], but in a future major version, direct access to system indices will be prevented by default" count: index: .ml-meta body: @@ -436,6 +440,8 @@ - match: { count: 2 } - do: + warnings: + - "this request accesses system indices: [.ml-meta], but in a future major version, direct access to system indices will be prevented by default" count: index: .ml-meta body: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/custom_all_field.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/custom_all_field.yml index f6bf53fb289db..eefd9b937cbec 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/custom_all_field.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/custom_all_field.yml @@ -148,6 +148,7 @@ setup: - do: search: + index: .ml-anomalies-shared expand_wildcards: all rest_total_hits_as_int: true body: { query: { bool: { must: [ diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/delete_expired_data.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/delete_expired_data.yml index 4cd5adafd020c..b4b9dbc03e3c1 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/delete_expired_data.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/delete_expired_data.yml @@ -71,6 +71,8 @@ setup: job_id: 
not-a-job --- "Test delete expired data with job id": + - skip: + features: warnings - do: headers: Content-Type: application/json @@ -153,6 +155,8 @@ setup: job_id: delete-expired-data-a - do: + warnings: + - "this request accesses system indices: [.ml-config], but in a future major version, direct access to system indices will be prevented by default" headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" indices.refresh: {} diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/filter_crud.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/filter_crud.yml index e6ae375261b0c..130171a818409 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/filter_crud.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/filter_crud.yml @@ -3,6 +3,7 @@ setup: - skip: features: - headers + - warnings - do: headers: @@ -26,6 +27,8 @@ setup: } - do: + warnings: + - "this request accesses system indices: [.ml-meta], but in a future major version, direct access to system indices will be prevented by default" headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser indices.refresh: {} diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/index_layout.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/index_layout.yml index 2b8a9063b4ad3..64a613bd946d9 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/index_layout.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/index_layout.yml @@ -118,6 +118,7 @@ setup: headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser indices.refresh: + index: [".ml-anomalies*", ".ml-state*"] expand_wildcards: all - do: @@ -478,6 +479,7 @@ setup: headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser indices.refresh: + index: ["foo", ".ml-anomalies*"] expand_wildcards: all - do: @@ -528,6 +530,7 @@ setup: headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser indices.refresh: + index: ".ml-state*" expand_wildcards: all - do: @@ -621,6 +624,7 @@ setup: headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser indices.refresh: + index: [".ml-state*", ".ml-anomalies*"] expand_wildcards: all - do: @@ -706,6 +710,7 @@ setup: headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. 
the test setup superuser indices.refresh: + index: ".ml-state*" expand_wildcards: all - do: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_crud.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_crud.yml index 51b9c7fd9cfe5..15c698f1d94c2 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_crud.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_crud.yml @@ -3,9 +3,12 @@ setup: features: - headers - allowed_warnings + - warnings - do: allowed_warnings: - "index [.ml-inference-000003] matches multiple legacy templates [.ml-inference-000003, global], composable templates will only match a single template" + warnings: + - "this request accesses system indices: [.ml-inference-000003], but in a future major version, direct access to system indices will be prevented by default" headers: Content-Type: application/json Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_stats_crud.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_stats_crud.yml index e2da5db5b4495..2eb2430c2522c 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_stats_crud.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/inference_stats_crud.yml @@ -3,9 +3,12 @@ setup: features: - headers - allowed_warnings + - warnings - do: allowed_warnings: - "index [.ml-inference-000003] matches multiple legacy templates [.ml-inference-000003, global], composable templates will only match a single template" + warnings: + - "this request accesses system indices: [.ml-inference-000003], but in a future major version, direct access to system indices will be prevented by default" headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser index: @@ -23,6 +26,8 @@ setup: } - do: + warnings: + - "this request accesses system indices: [.ml-inference-000003], but in a future major version, direct access to system indices will be prevented by default" headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser index: @@ -39,6 +44,8 @@ setup: "doc_type": "trained_model_config" } - do: + warnings: + - "this request accesses system indices: [.ml-inference-000003], but in a future major version, direct access to system indices will be prevented by default" headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser index: @@ -56,9 +63,12 @@ setup: } - do: + warnings: + - "this request accesses system indices: [.ml-inference-000003], but in a future major version, direct access to system indices will be prevented by default" headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. 
the test setup superuser - indices.refresh: {} + indices.refresh: + index: ".ml-inference-*" - do: headers: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_crud.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_crud.yml index b8af6ad31d1f3..ec7528098db72 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_crud.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_crud.yml @@ -554,6 +554,7 @@ - do: indices.refresh: + index: ".ml-anomalies*" expand_wildcards: all - do: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_get_stats.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_get_stats.yml index 3ffd4087cb372..4313d48f0e146 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_get_stats.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/jobs_get_stats.yml @@ -267,7 +267,8 @@ setup: - do: headers: Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. the test setup superuser - indices.refresh: {} + indices.refresh: + index: ".ml-anomalies*" # This is testing that the documents with v5.4 IDs are fetched. # Ideally we would use the v5.4 type but we can't put a mapping diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/13_index_datemath.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/13_index_datemath.yml index 462b023d18cc0..2651519e5f785 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/13_index_datemath.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/13_index_datemath.yml @@ -67,7 +67,7 @@ teardown: - do: # superuser indices.refresh: - index: "_all" + index: "write-*" - do: # superuser search: @@ -104,7 +104,7 @@ teardown: - do: # superuser indices.refresh: - index: "_all" + index: "read-*" - do: # superuser search: @@ -129,7 +129,7 @@ teardown: - do: # superuser indices.refresh: - index: "_all" + index: "write-*" - do: # superuser search: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/15_auto_create.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/15_auto_create.yml index bbe6f42f8270f..77948a95a9665 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/15_auto_create.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/15_auto_create.yml @@ -59,7 +59,7 @@ teardown: - do: # superuser indices.refresh: - index: "_all" + index: "logs-*" - do: # superuser search: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/31_rollover_using_alias.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/31_rollover_using_alias.yml index 52b6259f7ccf0..fd9f6d1d46050 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/31_rollover_using_alias.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/security/authz/31_rollover_using_alias.yml @@ -90,7 +90,8 @@ teardown: } - do: - indices.refresh: {} + indices.refresh: + index: "write_manage_alias" # rollover using alias - do: @@ -127,7 +128,8 @@ teardown: } - do: - indices.refresh: {} + indices.refresh: + index: write_manage_alias # check alias points to the new index and the doc was indexed - do: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/set_security_user/10_small_users_one_index.yml 
b/x-pack/plugin/src/test/resources/rest-api-spec/test/set_security_user/10_small_users_one_index.yml index 80a1ea12dec3d..7442c74a9eae6 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/set_security_user/10_small_users_one_index.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/set_security_user/10_small_users_one_index.yml @@ -116,7 +116,8 @@ teardown: } - do: - indices.refresh: {} + indices.refresh: + index: shared_logs # Joe searches: - do: diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/users/10_basic.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/users/10_basic.yml index 9f992adde9670..dcee957b6c3fb 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/users/10_basic.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/users/10_basic.yml @@ -107,6 +107,8 @@ teardown: } --- "Test put user with password hash": + - skip: + features: warnings # Mostly this chain of put_user , search index, set value is to work around the fact that the # rest tests treat anything with a leading "$" as a stashed value, and bcrypt passwords start with "$" @@ -122,6 +124,8 @@ teardown: } - do: + warnings: + - "this request accesses system indices: [.security-7], but in a future major version, direct access to system indices will be prevented by default" get: index: .security id: user-bob diff --git a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformConfigurationIndexIT.java b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformConfigurationIndexIT.java index dfab389c4e423..68c8e249c1b99 100644 --- a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformConfigurationIndexIT.java +++ b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformConfigurationIndexIT.java @@ -9,6 +9,7 @@ import org.apache.http.entity.ContentType; import org.apache.http.entity.StringEntity; import org.elasticsearch.client.Request; +import org.elasticsearch.client.RequestOptions; import org.elasticsearch.client.Response; import org.elasticsearch.client.ResponseException; import org.elasticsearch.common.Strings; @@ -33,6 +34,9 @@ public class TransformConfigurationIndexIT extends TransformRestTestCase { */ public void testDeleteConfigurationLeftOver() throws IOException { String fakeTransformName = randomAlphaOfLengthBetween(5, 20); + final RequestOptions expectWarningOptions = expectWarnings("this request accesses system indices: [" + + TransformInternalIndexConstants.LATEST_INDEX_NAME + "], but in a future major version, direct access to system indices will " + + "be prevented by default"); try (XContentBuilder builder = jsonBuilder()) { builder.startObject(); @@ -43,12 +47,15 @@ public void testDeleteConfigurationLeftOver() throws IOException { final StringEntity entity = new StringEntity(Strings.toString(builder), ContentType.APPLICATION_JSON); Request req = new Request("PUT", TransformInternalIndexConstants.LATEST_INDEX_NAME + "/_doc/" + TransformConfig.documentId(fakeTransformName)); + req.setOptions(expectWarningOptions); req.setEntity(entity); client().performRequest(req); } // refresh the index - assertOK(client().performRequest(new Request("POST", TransformInternalIndexConstants.LATEST_INDEX_NAME + "/_refresh"))); + final Request refreshRequest = new Request("POST", 
TransformInternalIndexConstants.LATEST_INDEX_NAME + "/_refresh"); + refreshRequest.setOptions(expectWarningOptions); + assertOK(client().performRequest(refreshRequest)); Request deleteRequest = new Request("DELETE", getTransformEndpoint() + fakeTransformName); Response deleteResponse = client().performRequest(deleteRequest); diff --git a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformInternalIndexIT.java b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformInternalIndexIT.java index 82bed7ac0ea78..6f84d358a7e67 100644 --- a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformInternalIndexIT.java +++ b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformInternalIndexIT.java @@ -6,11 +6,10 @@ package org.elasticsearch.xpack.transform.integration; -import org.elasticsearch.action.get.GetRequest; -import org.elasticsearch.action.get.GetResponse; -import org.elasticsearch.action.index.IndexRequest; -import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.client.Request; import org.elasticsearch.client.RequestOptions; +import org.elasticsearch.client.Response; +import org.elasticsearch.client.ResponseException; import org.elasticsearch.client.RestHighLevelClient; import org.elasticsearch.client.indices.CreateIndexRequest; import org.elasticsearch.client.transform.GetTransformRequest; @@ -22,7 +21,6 @@ import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.search.SearchModule; import org.elasticsearch.test.rest.ESRestTestCase; import org.elasticsearch.xpack.core.transform.TransformField; @@ -36,7 +34,6 @@ import static org.elasticsearch.xpack.transform.persistence.TransformInternalIndex.addTransformsConfigMappings; import static org.hamcrest.Matchers.equalTo; -import static org.hamcrest.Matchers.is; public class TransformInternalIndexIT extends ESRestTestCase { @@ -79,14 +76,20 @@ public void testUpdateDeletesOldTransformConfig() throws Exception { + " } } } }," + "\"frequency\":\"1s\"" + "}"; - client.index(new IndexRequest(OLD_INDEX) - .id(TransformConfig.documentId(transformId)) - .source(config, XContentType.JSON) - .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE), - RequestOptions.DEFAULT); - GetResponse getResponse = client.get(new GetRequest(OLD_INDEX, TransformConfig.documentId(transformId)), - RequestOptions.DEFAULT); - assertThat(getResponse.isExists(), is(true)); + Request indexRequest = new Request("PUT", OLD_INDEX + "/_doc/" + TransformConfig.documentId(transformId)); + indexRequest.setOptions(expectWarnings("this request accesses system indices: [" + OLD_INDEX + "], but in a future major " + + "version, direct access to system indices will be prevented by default")); + indexRequest.addParameter("refresh", "true"); + indexRequest.setJsonEntity(config); + assertOK(client().performRequest(indexRequest)); + + { + Request getRequest = new Request("GET", OLD_INDEX + "/_doc/" + TransformConfig.documentId(transformId)); + getRequest.setOptions(expectWarnings("this request accesses system indices: [" + OLD_INDEX + "], but in a future major " + + "version, direct access to system 
indices will be prevented by default")); + Response getResponse = client().performRequest(getRequest); + assertOK(getResponse); + } GetTransformResponse response = client.transform() .getTransform(new GetTransformRequest(transformId), RequestOptions.DEFAULT); @@ -100,13 +103,27 @@ public void testUpdateDeletesOldTransformConfig() throws Exception { assertThat(updated.getTransformConfiguration().getDescription(), equalTo("updated")); // Old should now be gone - getResponse = client.get(new GetRequest(OLD_INDEX, TransformConfig.documentId(transformId)), RequestOptions.DEFAULT); - assertThat(getResponse.isExists(), is(false)); + { + Request getRequest = new Request("GET", OLD_INDEX + "/_doc/" + TransformConfig.documentId(transformId)); + getRequest.setOptions(expectWarnings("this request accesses system indices: [" + OLD_INDEX + "], but in a future major " + + "version, direct access to system indices will be prevented by default")); + try { + Response getResponse = client().performRequest(getRequest); + assertThat(getResponse.getStatusLine().getStatusCode(), equalTo(404)); + } catch (ResponseException e) { + // this is fine, we want it to 404 + assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(404)); + } + } // New should be here - getResponse = client.get(new GetRequest(CURRENT_INDEX, TransformConfig.documentId(transformId)), - RequestOptions.DEFAULT); - assertThat(getResponse.isExists(), is(true)); + { + Request getRequest = new Request("GET", CURRENT_INDEX + "/_doc/" + TransformConfig.documentId(transformId)); + getRequest.setOptions(expectWarnings("this request accesses system indices: [" + CURRENT_INDEX + "], but in a future major " + + "version, direct access to system indices will be prevented by default")); + Response getResponse = client().performRequest(getRequest); + assertOK(getResponse); + } } diff --git a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRestTestCase.java b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRestTestCase.java index 6e94e3225a6e1..68a9de13f2f6b 100644 --- a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRestTestCase.java +++ b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRestTestCase.java @@ -480,6 +480,8 @@ public void wipeTransforms() throws IOException { // the configuration index should be empty Request request = new Request("GET", TransformInternalIndexConstants.LATEST_INDEX_NAME + "/_search"); + request.setOptions(expectWarnings("this request accesses system indices: [" + TransformInternalIndexConstants.LATEST_INDEX_NAME + + "], but in a future major version, direct access to system indices will be prevented by default")); try { Response searchResponse = adminClient().performRequest(request); Map searchResult = entityAsMap(searchResponse); diff --git a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRobustnessIT.java b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRobustnessIT.java index 9d069eb5f9366..d2014c79fc333 100644 --- a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRobustnessIT.java +++ 
b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformRobustnessIT.java @@ -117,6 +117,10 @@ private int getNumberOfTransformTasks() throws IOException { } private void beEvilAndDeleteTheTransformIndex() throws IOException { - adminClient().performRequest(new Request("DELETE", TransformInternalIndexConstants.LATEST_INDEX_NAME)); + final Request deleteRequest = new Request("DELETE", TransformInternalIndexConstants.LATEST_INDEX_NAME); + deleteRequest.setOptions(expectWarnings("this request accesses system indices: [" + + TransformInternalIndexConstants.LATEST_INDEX_NAME + "], but in a future major version, direct access to system indices will " + + "be prevented by default")); + adminClient().performRequest(deleteRequest); } } diff --git a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformUsageIT.java b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformUsageIT.java index f6184e734e6d9..f8efe1d581c4d 100644 --- a/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformUsageIT.java +++ b/x-pack/plugin/transform/qa/single-node-tests/src/javaRestTest/java/org/elasticsearch/xpack/transform/integration/TransformUsageIT.java @@ -62,6 +62,9 @@ public void testUsage() throws Exception { + ":" + TransformStoredDoc.NAME ); + statsExistsRequest.setOptions(expectWarnings("this request accesses system indices: [" + + TransformInternalIndexConstants.LATEST_INDEX_NAME + "], but in a future major version, direct access to system indices will " + + "be prevented by default")); // Verify that we have one stat document assertBusy(() -> { Map hasStatsMap = entityAsMap(client().performRequest(statsExistsRequest)); @@ -120,7 +123,7 @@ public void testUsage() throws Exception { } } // Refresh the index so that statistics are searchable - refreshIndex(TransformInternalIndexConstants.LATEST_INDEX_VERSIONED_NAME); + refreshAllIndices(); }, 60, TimeUnit.SECONDS); stopTransform("test_usage_continuous", false); diff --git a/x-pack/plugin/transform/src/test/java/org/elasticsearch/xpack/transform/transforms/TransformPersistentTasksExecutorTests.java b/x-pack/plugin/transform/src/test/java/org/elasticsearch/xpack/transform/transforms/TransformPersistentTasksExecutorTests.java index d6f622b61823c..ce7d513cc1aaa 100644 --- a/x-pack/plugin/transform/src/test/java/org/elasticsearch/xpack/transform/transforms/TransformPersistentTasksExecutorTests.java +++ b/x-pack/plugin/transform/src/test/java/org/elasticsearch/xpack/transform/transforms/TransformPersistentTasksExecutorTests.java @@ -25,6 +25,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.Index; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.persistent.PersistentTasksCustomMetadata; @@ -178,7 +179,9 @@ public void testVerifyIndicesPrimaryShardsAreActive() { csBuilder.metadata(metadata); ClusterState cs = csBuilder.build(); - assertEquals(0, TransformPersistentTasksExecutor.verifyIndicesPrimaryShardsAreActive(cs, new IndexNameExpressionResolver()).size()); + assertEquals(0, + TransformPersistentTasksExecutor.verifyIndicesPrimaryShardsAreActive(cs, + new 
IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY))).size()); metadata = new Metadata.Builder(cs.metadata()); routingTable = new RoutingTable.Builder(cs.routingTable()); @@ -204,7 +207,7 @@ public void testVerifyIndicesPrimaryShardsAreActive() { csBuilder.metadata(metadata); List result = TransformPersistentTasksExecutor.verifyIndicesPrimaryShardsAreActive( csBuilder.build(), - new IndexNameExpressionResolver() + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) ); assertEquals(1, result.size()); assertEquals(indexToRemove, result.get(0)); @@ -391,7 +394,7 @@ public TransformPersistentTasksExecutor buildTaskExecutor() { mock(ThreadPool.class), clusterService, Settings.EMPTY, - new IndexNameExpressionResolver() + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)) ); } } diff --git a/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherRestTestCase.java b/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherRestTestCase.java index 42c6a4b6ecb81..c44f1ed29bae3 100644 --- a/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherRestTestCase.java +++ b/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherRestTestCase.java @@ -73,6 +73,12 @@ public final void stopWatcher() throws Exception { Request deleteWatchesIndexRequest = new Request("DELETE", ".watches"); deleteWatchesIndexRequest.addParameter("ignore_unavailable", "true"); + deleteWatchesIndexRequest.setOptions( + expectWarnings( + "this request accesses system indices: [.watches], but in a future major " + + "version, direct access to system indices will be prevented by default" + ) + ); ESRestTestCase.adminClient().performRequest(deleteWatchesIndexRequest); Request deleteWatchHistoryRequest = new Request("DELETE", ".watcher-history-*"); diff --git a/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherYamlSuiteTestCase.java b/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherYamlSuiteTestCase.java index d41579230d95f..4f59c01c4dd10 100644 --- a/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherYamlSuiteTestCase.java +++ b/x-pack/plugin/watcher/qa/common/src/main/java/org/elasticsearch/xpack/watcher/WatcherYamlSuiteTestCase.java @@ -8,6 +8,7 @@ import com.carrotsearch.randomizedtesting.annotations.Name; import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; import org.elasticsearch.client.Request; +import org.elasticsearch.client.RequestOptions; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.rest.ESRestTestCase; import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate; @@ -105,6 +106,12 @@ public final void stopWatcher() throws Exception { private static void deleteWatcherIndices() throws IOException { Request deleteWatchesIndexRequest = new Request("DELETE", ".watches"); deleteWatchesIndexRequest.addParameter("ignore_unavailable", "true"); + deleteWatchesIndexRequest.setOptions(RequestOptions.DEFAULT.toBuilder().setWarningsHandler(warnings -> { + final String expectedWaring = "this request accesses system indices: [.watches], but in a future major version, direct " + + "access to system indices will be prevented by default"; + // There might not be a warning if the .watches index doesn't exist + return (warnings.isEmpty() || warnings.get(0).equals(expectedWaring)) == false; + })); 
ESRestTestCase.adminClient().performRequest(deleteWatchesIndexRequest); Request deleteWatchHistoryRequest = new Request("DELETE", ".watcher-history-*"); diff --git a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/ack_watch/10_basic.yml b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/ack_watch/10_basic.yml index ed35d17984679..3a3f962ce6f54 100644 --- a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/ack_watch/10_basic.yml +++ b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/ack_watch/10_basic.yml @@ -1,5 +1,7 @@ --- "Test ack watch api": + - skip: + features: warnings - do: cluster.health: wait_for_status: yellow @@ -44,6 +46,8 @@ - match: { "status.actions.test_index.ack.state" : "awaits_successful_execution" } - do: + warnings: + - "this request accesses system indices: [.watches], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .watches diff --git a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/activate_watch/10_basic.yml b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/activate_watch/10_basic.yml index 5f09e7ef1847a..015310babd29d 100644 --- a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/activate_watch/10_basic.yml +++ b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/activate_watch/10_basic.yml @@ -1,5 +1,7 @@ --- "Test activate watch api": + - skip: + features: warnings - do: cluster.health: wait_for_status: yellow @@ -48,6 +50,8 @@ - match: { status.state.active : false } - do: + warnings: + - "this request accesses system indices: [.watches], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .watches @@ -69,6 +73,8 @@ - match: { status.state.active : true } - do: + warnings: + - "this request accesses system indices: [.watches], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .watches diff --git a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/delete_watch/10_basic.yml b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/delete_watch/10_basic.yml index 1e9526ab209fa..1b999a5eabe35 100644 --- a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/delete_watch/10_basic.yml +++ b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/delete_watch/10_basic.yml @@ -13,10 +13,12 @@ teardown: --- "Test delete watch api": + - skip: + features: warnings - do: watcher.put_watch: id: "my_watch" - body: > + body: > { "trigger": { "schedule": { @@ -52,6 +54,8 @@ teardown: - match: { found: true } - do: + warnings: + - "this request accesses system indices: [.watches], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .watches diff --git a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/get_watch/10_basic.yml b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/get_watch/10_basic.yml index 09b2230f04c60..913366853b974 100644 --- 
a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/get_watch/10_basic.yml +++ b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/get_watch/10_basic.yml @@ -13,6 +13,8 @@ teardown: --- "Test get watch api": + - skip: + features: warnings - do: watcher.put_watch: id: "my_watch" @@ -47,6 +49,8 @@ teardown: - match: { created: true } - do: + warnings: + - "this request accesses system indices: [.watches], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .watches diff --git a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/put_watch/80_put_get_watch_with_passwords.yml b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/put_watch/80_put_get_watch_with_passwords.yml index 02191f0b680a9..b66579d7b044e 100644 --- a/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/put_watch/80_put_get_watch_with_passwords.yml +++ b/x-pack/plugin/watcher/qa/rest/src/yamlRestTest/resources/rest-api-spec/test/watcher/put_watch/80_put_get_watch_with_passwords.yml @@ -117,6 +117,8 @@ setup: --- "Test putting a watch with a redacted password with old seq no returns an error": + - skip: + features: warnings # version 1 - do: watcher.put_watch: @@ -260,6 +262,8 @@ setup: } - do: + warnings: + - "this request accesses system indices: [.watches], but in a future major version, direct access to system indices will be prevented by default" search: rest_total_hits_as_int: true index: .watches diff --git a/x-pack/plugin/watcher/qa/with-monitoring/src/javaRestTest/java/org/elasticsearch/smoketest/MonitoringWithWatcherRestIT.java b/x-pack/plugin/watcher/qa/with-monitoring/src/javaRestTest/java/org/elasticsearch/smoketest/MonitoringWithWatcherRestIT.java index cac2268f20119..39e18f55909a5 100644 --- a/x-pack/plugin/watcher/qa/with-monitoring/src/javaRestTest/java/org/elasticsearch/smoketest/MonitoringWithWatcherRestIT.java +++ b/x-pack/plugin/watcher/qa/with-monitoring/src/javaRestTest/java/org/elasticsearch/smoketest/MonitoringWithWatcherRestIT.java @@ -6,6 +6,7 @@ package org.elasticsearch.smoketest; import org.elasticsearch.client.Request; +import org.elasticsearch.client.RequestOptions; import org.elasticsearch.client.Response; import org.elasticsearch.common.Strings; import org.elasticsearch.test.rest.ESRestTestCase; @@ -33,13 +34,26 @@ public class MonitoringWithWatcherRestIT extends ESRestTestCase { @After public void cleanExporters() throws Exception { - Request request = new Request("PUT", "/_cluster/settings"); - request.setJsonEntity(Strings.toString(jsonBuilder().startObject() + Request cleanupSettingsRequest = new Request("PUT", "/_cluster/settings"); + cleanupSettingsRequest.setJsonEntity(Strings.toString(jsonBuilder().startObject() .startObject("transient") .nullField("xpack.monitoring.exporters.*") .endObject().endObject())); - adminClient().performRequest(request); - adminClient().performRequest(new Request("DELETE", "/.watch*")); + adminClient().performRequest(cleanupSettingsRequest); + final Request deleteRequest = new Request("DELETE", "/.watch*"); + RequestOptions allowSystemIndexAccessWarningOptions = RequestOptions.DEFAULT.toBuilder() + .setWarningsHandler(warnings -> { + if (warnings.size() != 1) { + return true; + } + // We don't know exactly which indices we're cleaning up in advance, so just accept all system index access warnings. 
+ final String warning = warnings.get(0); + final boolean isSystemIndexWarning = warning.contains("this request accesses system indices") + && warning.contains("but in a future major version, direct access to system indices will be prevented by default"); + return isSystemIndexWarning == false; + }).build(); + deleteRequest.setOptions(allowSystemIndexAccessWarningOptions); + adminClient().performRequest(deleteRequest); } public void testThatLocalExporterAddsWatches() throws Exception { @@ -86,8 +100,11 @@ private void assertMonitoringWatchHasBeenOverWritten(String watchId) throws Exce private void assertTotalWatchCount(int expectedWatches) throws Exception { assertBusy(() -> { - assertOK(client().performRequest(new Request("POST", "/.watches/_refresh"))); - ObjectPath path = ObjectPath.createFromResponse(client().performRequest(new Request("POST", "/.watches/_count"))); + refreshAllIndices(); + final Request countRequest = new Request("POST", "/.watches/_count"); + countRequest.setOptions(expectWarnings("this request accesses system indices: [.watches], but in a future major " + + "version, direct access to system indices will be prevented by default")); + ObjectPath path = ObjectPath.createFromResponse(client().performRequest(countRequest)); int count = path.evaluate("count"); assertThat(count, is(expectedWatches)); }); diff --git a/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/WatcherPluginTests.java b/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/WatcherPluginTests.java index 2ea9a5be843bc..cf078e9e1b092 100644 --- a/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/WatcherPluginTests.java +++ b/x-pack/plugin/watcher/src/test/java/org/elasticsearch/xpack/watcher/WatcherPluginTests.java @@ -7,6 +7,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.env.TestEnvironment; import org.elasticsearch.index.IndexModule; import org.elasticsearch.index.IndexSettings; @@ -76,7 +77,7 @@ public void testWatcherDisabledTests() throws Exception { AnalysisRegistry registry = new AnalysisRegistry(TestEnvironment.newEnvironment(settings), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap(), emptyMap()); IndexModule indexModule = new IndexModule(indexSettings, registry, new InternalEngineFactory(), Collections.emptyMap(), - () -> true, new IndexNameExpressionResolver(), Collections.emptyMap()); + () -> true, new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), Collections.emptyMap()); // this will trip an assertion if the watcher indexing operation listener is null (which it is) but we try to add it watcher.onIndexModule(indexModule); diff --git a/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/FullClusterRestartIT.java b/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/FullClusterRestartIT.java index 3190e1cf4e07f..febcda14bf7f5 100644 --- a/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/FullClusterRestartIT.java +++ b/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/FullClusterRestartIT.java @@ -102,7 +102,10 @@ public void testSecurityNativeRealm() throws Exception { createRole(true); } else { waitForYellow(".security"); - Response settingsResponse = client().performRequest(new Request("GET", 
"/.security/_settings/index.format")); + final Request getSettingsRequest = new Request("GET", "/.security/_settings/index.format"); + getSettingsRequest.setOptions(expectWarnings("this request accesses system indices: [.security-7], but in a future major " + + "version, direct access to system indices will be prevented by default")); + Response settingsResponse = client().performRequest(getSettingsRequest); Map settingsResponseMap = entityAsMap(settingsResponse); logger.info("settings response map {}", settingsResponseMap); final String concreteSecurityIndex; @@ -176,7 +179,10 @@ public void testWatcher() throws Exception { logger.info("checking that the Watches index is the correct version"); - Response settingsResponse = client().performRequest(new Request("GET", "/.watches/_settings/index.format")); + final Request getSettingsRequest = new Request("GET", "/.watches/_settings/index.format"); + getSettingsRequest.setOptions(expectWarnings("this request accesses system indices: [.watches], but in a future major " + + "version, direct access to system indices will be prevented by default")); + Response settingsResponse = client().performRequest(getSettingsRequest); Map settingsResponseMap = entityAsMap(settingsResponse); logger.info("settings response map {}", settingsResponseMap); final String concreteWatchesIndex; diff --git a/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/MlConfigIndexMappingsFullClusterRestartIT.java b/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/MlConfigIndexMappingsFullClusterRestartIT.java index 4b80bec3a6cb7..c3c450d4f805b 100644 --- a/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/MlConfigIndexMappingsFullClusterRestartIT.java +++ b/x-pack/qa/full-cluster-restart/src/test/java/org/elasticsearch/xpack/restart/MlConfigIndexMappingsFullClusterRestartIT.java @@ -71,6 +71,12 @@ public void testMlConfigIndexMappingsAfterMigration() throws Exception { private void assertThatMlConfigIndexDoesNotExist() { Request getIndexRequest = new Request("GET", ".ml-config"); + getIndexRequest.setOptions(expectVersionSpecificWarnings(v -> { + final String systemIndexWarning = "this request accesses system indices: [.ml-config], but in a future major version, direct " + + "access to system indices will be prevented by default"; + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); ResponseException e = expectThrows(ResponseException.class, () -> client().performRequest(getIndexRequest)); assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(404)); } @@ -98,6 +104,12 @@ private void createAnomalyDetectorJob(String jobId) throws IOException { @SuppressWarnings("unchecked") private Map getConfigIndexMappings() throws Exception { Request getIndexMappingsRequest = new Request("GET", ".ml-config/_mappings"); + getIndexMappingsRequest.setOptions(expectVersionSpecificWarnings(v -> { + final String systemIndexWarning = "this request accesses system indices: [.ml-config], but in a future major version, direct " + + "access to system indices will be prevented by default"; + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); Response getIndexMappingsResponse = client().performRequest(getIndexMappingsRequest); assertThat(getIndexMappingsResponse.getStatusLine().getStatusCode(), equalTo(200)); diff --git a/x-pack/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/MlMappingsUpgradeIT.java 
b/x-pack/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/MlMappingsUpgradeIT.java index 5997f63a0145b..00cfe0f4d4de5 100644 --- a/x-pack/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/MlMappingsUpgradeIT.java +++ b/x-pack/qa/rolling-upgrade/src/test/java/org/elasticsearch/upgrades/MlMappingsUpgradeIT.java @@ -163,6 +163,8 @@ private void assertUpgradedConfigMappings() throws Exception { assertBusy(() -> { Request getMappings = new Request("GET", ".ml-config/_mappings"); + getMappings.setOptions(expectWarnings("this request accesses system indices: [.ml-config], but in a future major " + + "version, direct access to system indices will be prevented by default")); Response response = client().performRequest(getMappings); Map responseLevel = entityAsMap(response); diff --git a/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml b/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml index dd0639d0a65db..265f3547b6d65 100644 --- a/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml +++ b/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/mixed_cluster/10_basic.yml @@ -25,7 +25,8 @@ body: { foo: 2 } - do: - indices.refresh: {} + indices.refresh: + index: upgraded_scroll - do: search: diff --git a/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/80_transform_jobs_crud.yml b/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/80_transform_jobs_crud.yml index 8e4a540a92cf6..9d2f27565d2ad 100644 --- a/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/80_transform_jobs_crud.yml +++ b/x-pack/qa/rolling-upgrade/src/test/resources/rest-api-spec/test/upgraded_cluster/80_transform_jobs_crud.yml @@ -279,6 +279,8 @@ setup: --- "Test index mappings for latest internal index and audit index": + - skip: + features: warnings - do: transform.put_transform: transform_id: "upgraded-simple-transform" @@ -295,6 +297,8 @@ setup: - match: { acknowledged: true } - do: + warnings: + - "this request accesses system indices: [.transform-internal-005], but in a future major version, direct access to system indices will be prevented by default" indices.get_mapping: index: .transform-internal-005 - match: { \.transform-internal-005.mappings.dynamic: "false" } diff --git a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/10_templated_role_query.yml b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/10_templated_role_query.yml index 84d8d98e27384..4dcc8c847c464 100644 --- a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/10_templated_role_query.yml +++ b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/10_templated_role_query.yml @@ -125,7 +125,8 @@ setup: } - do: - indices.refresh: {} + indices.refresh: + index: foobar --- teardown: diff --git a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/11_templated_role_query_runas.yml b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/11_templated_role_query_runas.yml index 2f4755943aa2d..b3948028f4144 100644 --- a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/11_templated_role_query_runas.yml +++ 
b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/11_templated_role_query_runas.yml @@ -125,7 +125,8 @@ setup: } - do: - indices.refresh: {} + indices.refresh: + index: foobar --- teardown: diff --git a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/20_small_users_one_index.yml b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/20_small_users_one_index.yml index e3f706570a22a..4e38838f5dce1 100644 --- a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/20_small_users_one_index.yml +++ b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/20_small_users_one_index.yml @@ -107,7 +107,8 @@ teardown: } - do: - indices.refresh: {} + indices.refresh: + index: shared_logs # Joe searches: - do: @@ -177,7 +178,8 @@ teardown: } - do: - indices.refresh: {} + indices.refresh: + index: shared_logs # Joe searches: - do: diff --git a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/30_search_template.yml b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/30_search_template.yml index a208bda67cfe2..1ce18208a1085 100644 --- a/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/30_search_template.yml +++ b/x-pack/qa/smoke-test-security-with-mustache/src/test/resources/rest-api-spec/test/30_search_template.yml @@ -44,7 +44,8 @@ setup: title: "contains some words too" - do: - indices.refresh: {} + indices.refresh: + index: ["foobar", "unauthorized_index"] --- teardown: diff --git a/x-pack/qa/src/main/java/org/elasticsearch/xpack/test/rest/IndexMappingTemplateAsserter.java b/x-pack/qa/src/main/java/org/elasticsearch/xpack/test/rest/IndexMappingTemplateAsserter.java index a073f07698afb..19c15a5463390 100644 --- a/x-pack/qa/src/main/java/org/elasticsearch/xpack/test/rest/IndexMappingTemplateAsserter.java +++ b/x-pack/qa/src/main/java/org/elasticsearch/xpack/test/rest/IndexMappingTemplateAsserter.java @@ -82,16 +82,16 @@ public static void assertMlMappingsMatchTemplates(RestClient client) throws IOEx statsIndexException.add("properties.hyperparameters.properties.regularization_soft_tree_depth_tolerance.type"); statsIndexException.add("properties.hyperparameters.properties.regularization_tree_size_penalty_multiplier.type"); - assertLegacyTemplateMatchesIndexMappings(client, ".ml-config", ".ml-config", false, configIndexExceptions); + assertLegacyTemplateMatchesIndexMappings(client, ".ml-config", ".ml-config", false, configIndexExceptions, true); // the true parameter means the index may not have been created - assertLegacyTemplateMatchesIndexMappings(client, ".ml-meta", ".ml-meta", true, Collections.emptySet()); - assertLegacyTemplateMatchesIndexMappings(client, ".ml-stats", ".ml-stats-000001", true, statsIndexException); - assertLegacyTemplateMatchesIndexMappings(client, ".ml-state", ".ml-state-000001", true, Collections.emptySet()); + assertLegacyTemplateMatchesIndexMappings(client, ".ml-meta", ".ml-meta", true, Collections.emptySet(), true); + assertLegacyTemplateMatchesIndexMappings(client, ".ml-stats", ".ml-stats-000001", true, statsIndexException, false); + assertLegacyTemplateMatchesIndexMappings(client, ".ml-state", ".ml-state-000001", true, Collections.emptySet(), false); // Depending on the order Full Cluster restart tests are run there may not be an notifications index yet assertLegacyTemplateMatchesIndexMappings(client, - 
".ml-notifications-000001", ".ml-notifications-000001", true, Collections.emptySet()); + ".ml-notifications-000001", ".ml-notifications-000001", true, Collections.emptySet(), false); assertLegacyTemplateMatchesIndexMappings(client, - ".ml-inference-000003", ".ml-inference-000003", true, Collections.emptySet()); + ".ml-inference-000003", ".ml-inference-000003", true, Collections.emptySet(), true); // .ml-annotations-6 does not use a template // .ml-anomalies-shared uses a template but will have dynamically updated mappings as new jobs are opened } @@ -122,14 +122,16 @@ public static void assertMlMappingsMatchTemplates(RestClient client) throws IOEx * index does not cause an error * @param exceptions List of keys to ignore in the index mappings. * Each key is a '.' separated path. + * @param allowSystemIndexWarnings Whether deprecation warnings for system index access should be allowed/expected. * @throws IOException Yes */ @SuppressWarnings("unchecked") public static void assertLegacyTemplateMatchesIndexMappings(RestClient client, - String templateName, - String indexName, - boolean notAnErrorIfIndexDoesNotExist, - Set exceptions) throws IOException { + String templateName, + String indexName, + boolean notAnErrorIfIndexDoesNotExist, + Set exceptions, + boolean allowSystemIndexWarnings) throws IOException { Request getTemplate = new Request("GET", "_template/" + templateName); Response templateResponse = client.performRequest(getTemplate); @@ -141,6 +143,14 @@ public static void assertLegacyTemplateMatchesIndexMappings(RestClient client, assertNotNull(templateMappings); Request getIndexMapping = new Request("GET", indexName + "/_mapping"); + if (allowSystemIndexWarnings) { + final String systemIndexWarning = "this request accesses system indices: [" + indexName + "], but in a future major version, " + + "direct access to system indices will be prevented by default"; + getIndexMapping.setOptions(ESRestTestCase.expectVersionSpecificWarnings(v -> { + v.current(systemIndexWarning); + v.compatible(systemIndexWarning); + })); + } Response indexMappingResponse; try { indexMappingResponse = client.performRequest(getIndexMapping); @@ -239,7 +249,7 @@ public static void assertLegacyTemplateMatchesIndexMappings(RestClient client, public static void assertLegacyTemplateMatchesIndexMappings(RestClient client, String templateName, String indexName) throws IOException { - assertLegacyTemplateMatchesIndexMappings(client, templateName, indexName, false, Collections.emptySet()); + assertLegacyTemplateMatchesIndexMappings(client, templateName, indexName, false, Collections.emptySet(), false); } private static boolean areBooleanObjectsAndEqual(Object a, Object b) {