forked from elastic/elasticsearch
Robin13 patch 1 #3
Open

ivansun1010 wants to merge 10,000 commits into ivansun1010:test from elastic:robin13-patch-1
Conversation
This commit allows the JSON schema's documentation.url property to have a null value. This can be useful for cases where a feature is under development and does not have documentation published yet. This commit also adds a documentation.url for two ML resources.
* Swaps outdated index patterns for the default `logstash` index alias. Adds some related information about Logstash ILM defaults to the callout. * Swaps `*.raw` fields for `*.keyword` fields. The Logstash template uses `keyword` fields by default since 6.x. * Swaps instances of `ctx.payload.hits.total.value` with `ctx.payload.hits.total`
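For context, a minimal sketch of what a watch condition looks like after that last swap. The index name, query, and threshold are hypothetical, and this only mirrors the docs change; it is not taken from the commit itself:

```python
# Hypothetical watch body: the compare condition reads
# `ctx.payload.hits.total` directly (a number), not
# `ctx.payload.hits.total.value`.
watch_body = {
    "trigger": {"schedule": {"interval": "10m"}},
    "input": {
        "search": {
            "request": {
                "indices": ["logstash"],  # default Logstash index alias
                "body": {"query": {"match": {"response": 404}}},
            }
        }
    },
    "condition": {
        "compare": {"ctx.payload.hits.total": {"gt": 0}}  # was ...hits.total.value
    },
    "actions": {
        "log_hits": {"logging": {"text": "Found {{ctx.payload.hits.total}} hits"}}
    },
}
```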
Watcher adds watches to the trigger service on the postIndex action for the .watches index. This has the (intentional) side effect of also adding the watches to the stats. The tests rely on these stats for their assertions. The tests also start and stop Watcher between each test for a clean slate. When Watcher executes, it updates the .watches index, and that update goes through the postIndex method and ends up adding the watch to the trigger service (and stats). Functionally this is not a problem if Watcher is stopping or stopped, since Watcher is also paused and will not execute the watch. However, with specific timing, it can break the test assertions against the stats, which expect a clean slate. This commit ensures that the postIndex action only adds to the trigger service if the Watcher state is not stopping or stopped. When started back up, Watcher will re-read the .watches index. This commit also un-mutes the tests related to #53177 and #56534
* [DOCS] Extract the cron docs from Watcher docs and add to the API conventions. (#56313) * [DOCS] Promote cron expressions info from Watcher to a separate topic. * Fix table error * Fixed xref * Apply suggestions from code review Co-authored-by: James Rodewig <[email protected]> * Incorporated review feedback Co-authored-by: James Rodewig <[email protected]> * [DOCS] Clarify definition of max_size (#56561) Co-authored-by: James Rodewig <[email protected]>
#56663) Without the flag we run into the situation where a broken repository (broken by some old 6.x version of ES that is missing some snap-${uuid}.dat blobs) fails to run the SLM retention task, since it always errors out.
Today a user can create an index without setting the index.number_of_replicas setting, even though the index metadata requires that the setting has a value. We make this work at index creation time by explicitly setting index.number_of_replicas to a default value if one is not provided. However, if a user updates the number of replicas and then later wants to return to the default value, they are naturally inclined to set this setting to null, the agreed-upon way to return a setting to its default. Since the index metadata requires that this setting has a non-null value, we blow up when a user attempts to make this change. This is because we are not taking the same action when updating a setting on an index that we take when creating an index: namely, we are not explicitly setting index.number_of_replicas if the request does not carry a value for this setting, which is what happens when nulling the setting, and which we want to support. This commit addresses this by setting index.number_of_replicas to the default if the value for this setting is null when updating the settings for an index.
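A minimal sketch of the behaviour this enables, using Python `requests` against a hypothetical local cluster at `localhost:9200`; the index name is made up:

```python
import requests

ES = "http://localhost:9200"
index = "my-index"  # hypothetical index name

# Explicitly move the number of replicas away from the default...
requests.put(f"{ES}/{index}/_settings",
             json={"index.number_of_replicas": 2}).raise_for_status()

# ...then revert to the default by nulling the setting. With this change
# Elasticsearch re-applies the default instead of rejecting the request.
resp = requests.put(f"{ES}/{index}/_settings",
                    json={"index.number_of_replicas": None})
print(resp.status_code, resp.json())
```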
A backport brought back a symbol name change from a prior change that was never backported to this branch. This commit fixes this by using the symbol name already present in the source file.
Corrects the datatype for the `query` property of an enrich policy object. The `query` property is a query object, not a string.
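For illustration, a sketch of an enrich policy body in which `query` is a full query DSL object rather than a string; the policy, index, and field names here are hypothetical:

```python
# Hypothetical match enrich policy: note that "query" is a query object,
# not a query string such as "active:true".
enrich_policy = {
    "match": {
        "indices": "users",
        "match_field": "email",
        "enrich_fields": ["first_name", "last_name"],
        "query": {"term": {"active": True}},  # query object, not a string
    }
}
```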
A backport brought back a symbol name change from a prior change that was never backported to this branch. This commit fixes this by using the symbol name already present in this branch.
Bump 7.7 branch to version 7.7.1.
…56650) This optional parameter can only be a string. To test out a transient custom analysis chain, users are expected to use the 'tokenizer', 'filter', and 'char_filter' parameters.
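A sketch of testing a transient analysis chain with the `tokenizer`, `filter`, and `char_filter` parameters, using Python `requests` against a hypothetical local cluster:

```python
import requests

# Build a one-off analysis chain in the request itself; no index or
# persisted analyzer is involved.
resp = requests.get(
    "http://localhost:9200/_analyze",
    json={
        "tokenizer": "standard",
        "filter": ["lowercase", "asciifolding"],
        "char_filter": ["html_strip"],
        "text": "<p>Déjà vu, again?</p>",
    },
)
print(resp.json()["tokens"])
```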
This reverts commit 86d720d.
This is similar to a previous change that allowed removing the number of replicas settings (so setting it to its default) on open indices. This commit allows the same for closed indices. It is unfortunate that we have separate branches for handling open and closed indices here, but I do not see a clean way to merge these two together without making a rather unnatural method (note that they invoke different methods for doing the settings updates). For now, we leave this as-is even though it led to the miss here.
…pipelines (#56020) (#56127) * [ML] reduce InferenceProcessor.Factory log spam by not parsing pipelines (#56020) If there are ill-formed pipelines, or other pipelines are not ready to be parsed, `InferenceProcessor.Factory::accept(ClusterState)` logs warnings. This can be confusing and cause log spam, and it might lead folks to think there is an issue with the inference processor. Also, they would see logs for the inference processor even though they might not be using it, leading to more confusion. Additionally, pipelines might not be parseable in this method, as some processors require the new cluster state metadata before construction (e.g. `enrich` requires cluster metadata to be set before creating the processor). closes #55985 * fixing for backport Co-authored-by: Elastic Machine <[email protected]>
* [DOCS] Add info about ILM and unallocated shards. * Incorporated review feedback. * Update docs/reference/ilm/actions/ilm-allocate.asciidoc Co-authored-by: James Rodewig <[email protected]> * Apply suggestions from code review Co-authored-by: James Rodewig <[email protected]> * Fix xref Co-authored-by: James Rodewig <[email protected]> Co-authored-by: James Rodewig <[email protected]>
We previously rejected removing the number of replicas setting, which prevents users from reverting this setting to its default the natural way. To fix this, we put back the setting with the default value in the cases that the user is trying to remove it. Yet, we also need to do the work of updating the routing table and so on appropriately. This case was missed because when the setting is being removed, we were defaulting to -1 in this code path, which is treated as not being updated. Instead, we must treat the case when we are removing this setting as if the setting is being updated, too. This commit does that.
… (#56731) * [DOCS] Added info about automatic config for Beats & Logstash. * Update docs/reference/ilm/set-up-lifecycle-policy.asciidoc Co-authored-by: James Rodewig <[email protected]> * Update docs/reference/ilm/set-up-lifecycle-policy.asciidoc Co-authored-by: James Rodewig <[email protected]> * Update docs/reference/ilm/index.asciidoc * Updated note in GS tutorial Co-authored-by: James Rodewig <[email protected]> Co-authored-by: James Rodewig <[email protected]>
In normal operation native controllers are not expected to write anything to stdout or stderr. However, if due to an error or something unexpected with the environment a native controller does write something to stdout or stderr then it will block if nothing is reading that output. This change makes the stdout and stderr of native controllers reuse the same stdout and stderr as the Elasticsearch JVM (which are by default redirected to es.stdout.log and es.stderr.log) so that if something unexpected is written to native controller output then: 1. The native controller process does not block, waiting for something to read the output 2. We can see what the output was, making it easier to debug obscure environmental problems Backport of #56491
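This is not the native controller code itself, but a small Python sketch of the underlying pitfall: if a child process writes to a pipe that nobody drains, it eventually blocks, whereas inheriting the parent's stdout/stderr avoids that.

```python
import subprocess
import sys

# Variant 1: output goes to a pipe. If the parent never reads it and the
# child writes more than the pipe buffer holds, the child blocks.
risky = subprocess.Popen(
    [sys.executable, "-c", "print('x' * 10_000_000)"],
    stdout=subprocess.PIPE,
)

# Variant 2: the child inherits the parent's stdout/stderr (the default),
# so it can always write and its output lands wherever the parent's does.
safe = subprocess.Popen(
    [sys.executable, "-c", "print('hello from the child')"],
)
safe.wait()

# Drain and clean up the risky child so this sketch itself doesn't hang.
risky.communicate()
```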
This commit fixes our behavior regarding the responses we return in various cases for the use of token-related APIs. More concretely:
- In the Get Token API with the `refresh` grant, when an invalid (already deleted, malformed, unknown) refresh token is used in the body of the request, we respond with a `400` HTTP status code and an `error_description` header with the message "could not refresh the requested token". Previously we would erroneously return a `401` with a "token malformed" message.
- In the Invalidate Token API, when using an invalid (already deleted, malformed, unknown) access or refresh token, we respond with a `404` and a body that shows that no tokens were invalidated:
```
{
  "invalidated_tokens": 0,
  "previously_invalidated_tokens": 0,
  "error_count": 0
}
```
The previous behavior was to erroneously return a `400` or `401` (depending on the case).
- In the Invalidate Token API, when the tokens index doesn't exist or is closed, we return a `400` because we assume this is a user issue: either they tried to invalidate a token when there is no tokens index yet (i.e. no tokens have been created yet or the tokens index has been deleted), or the index is closed.
- In the Invalidate Token API, when the tokens index is unavailable, we return a `503` status code because we want to signal to the caller of the API that the token they tried to invalidate was not invalidated, that we can't be sure whether it is still valid or not, and that they should try the request again.
Backport of #54532
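A sketch of the invalidate-token call described above, using Python `requests` with a hypothetical local cluster and credentials; with an unknown token the response is now a `404` whose body reports zero invalidated tokens:

```python
import requests

resp = requests.delete(
    "http://localhost:9200/_security/oauth2/token",
    json={"token": "obviously-not-a-real-access-token"},  # hypothetical token
    auth=("elastic", "changeme"),                          # hypothetical credentials
)
print(resp.status_code)  # expected: 404
print(resp.json())       # e.g. {"invalidated_tokens": 0, "previously_invalidated_tokens": 0, "error_count": 0}
```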
If a task runs with a user, and it's canceled after we have sent the ban requests, then the unban request will be denied as it must not execute with a user. We need to wrap it with the current thread context. Backport of #55404
We document that the cluster state API is an internal representation which may change, but apparently not emphatically enough. This commit adds a `NOTE:` admonition to this paragraph.
Changes: * Condenses and relocates the `docvalue_fields` example to the 'Run a search' page. * Adds docs for the `docvalue_fields` request body parameter. * Updates several related xrefs. Co-authored-by: debadair <[email protected]>
Changes: * Rewrites description and adds Lucene link * Adds analyze example * Adds parameter definitions * Adds custom analyzer example
Fixes exponent off-by-ones in Painless documentation for int and long.
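The corrected bounds, checked in Python: a signed 32-bit `int` tops out at 2^31 - 1 and a signed 64-bit `long` at 2^63 - 1; the exponents are where the off-by-one is easy to make.

```python
INT_MAX = 2**31 - 1    # 2147483647
LONG_MAX = 2**63 - 1   # 9223372036854775807

assert INT_MAX == 2147483647
assert LONG_MAX == 9223372036854775807
```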
We were previously configuring BWC testing tasks by matching on task name prefix. This naive approach breaks down when you have versions like 1.0.1 and 1.0.10 since they both share a common prefix. This commit makes the pattern matching more specific so we won't inadvertently spin up the wrong cluster version.
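A minimal sketch of the pitfall, not the actual Gradle build logic (the task-name format below is made up): naive prefix matching conflates 1.0.1 and 1.0.10, while matching the full version does not.

```python
import re

task_names = ["bwcTest#1.0.1", "bwcTest#1.0.10"]  # hypothetical task names

# Naive prefix match: the "1.0.1" prefix also matches the 1.0.10 task.
naive = [t for t in task_names if t.startswith("bwcTest#1.0.1")]
assert naive == ["bwcTest#1.0.1", "bwcTest#1.0.10"]

# More specific match: require the version to end at the task-name boundary.
exact = [t for t in task_names if re.fullmatch(r"bwcTest#1\.0\.1", t)]
assert exact == ["bwcTest#1.0.1"]
```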
This commit clarifies that the `expand_wildcards` option (as well as other `IndicesOptions` parameters) can be used with the Create Snapshot API, but that they must be in the body of the request. Also clarifies the connection between `expand_wildcards` and hidden indices as it relates to snapshots.
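A sketch of what the docs change describes; the repository, snapshot, and index pattern names are hypothetical, and the exact parameter set depends on the Elasticsearch version. The point is that `expand_wildcards` goes in the request body of the Create Snapshot API rather than the query string:

```python
import requests

resp = requests.put(
    "http://localhost:9200/_snapshot/my_repository/snapshot_1",
    json={
        "indices": ".my-hidden-*",
        # IndicesOptions such as expand_wildcards are given in the body,
        # not as URL query parameters, for this API.
        "expand_wildcards": "all",
        "ignore_unavailable": True,
    },
)
print(resp.status_code, resp.json())
```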
- This configures the TestKit Gradle runner to run with debug enabled automatically when tests are executed in debug mode (e.g. from the IDE) - Allows step-by-step debugging of GradleIntegrationTestCase tests
…57963) Without this fix, users who try to use Metricbeat for Stack Monitoring today see the following error repeatedly in their Metricbeat log. Due to this error Metricbeat is unwilling to proceed further and, thus, no Stack Monitoring data is indexed into the Elasticsearch cluster. Co-authored-by: Shaunak Kashyap <[email protected]>
* [DOCS] Reformat release highlights as What's new. * [DOCS] Moved 7.7 highlights
Co-authored-by: Tim Vernum <[email protected]>
This change aims to fix our setup in CI so that we can run 7.7 in FIPS 140 mode. The major issue that we have in 7.x and did not have in master is that we can't use the diagnostic trust manager in FIPS mode in Java 8 with SunJSSE in FIPS-approved mode, as it explicitly disallows the wrapping of X509TrustManager. Previous attempts like #56427 and #52211 focused on disabling the setting in all of our tests when creating a Settings object, or on setting fips_mode.enabled accordingly (which implicitly disables the diagnostic trust manager). The attempts weren't future-proof though, as nothing would forbid someone from adding new tests without setting the necessary setting, and forcing this would be very inconvenient for any other case (see …). This change introduces a runtime check in SSLService that overrides the configuration value of xpack.security.ssl.diagnose.trust and disables the diagnostic trust manager when we are running in Java 8 and the SunJSSE provider is set in FIPS mode.
* [DOCS] Fixes problematic terminology (#58178) * Update docs/reference/snapshot-restore/register-repository.asciidoc Co-authored-by: James Rodewig <[email protected]> * [DOCS] Fixes terminology in the Painless docs (#58179)
There is sometimes confusion as it is possible to define multiple instances of some other realms (e.g. ldap), but only one file realm can be defined (https://github.com/elastic/elasticsearch/blob/ef48eb35cf30adf4db14086e8aabd07ef6fb113f/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/file/FileUserRolesStore.java#L83). As there are some situations where for ease of management users may want to define multiple file realms, it may be useful to explicitly note here that the file realm can have only one instance defined.
Co-authored-by: Lisa Cawley <[email protected]>
Co-authored-by: Lisa Cawley <[email protected]>