Merge branch 'main' into testWriteWithIterator
elasticmachine authored Oct 13, 2023
2 parents 074ccad + 2ce5392 commit a58dbef
Showing 23 changed files with 315 additions and 228 deletions.
9 changes: 9 additions & 0 deletions docs/changelog/100033.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
pr: 100033
summary: "[Behavioral Analytics] Analytics collections use Data Stream Lifecycle (DSL)\
\ instead of Index Lifecycle Management (ILM) for data retention management. Behavioral\
\ analytics has traditionally used ILM to manage data retention. Starting with 8.12.0,\
\ this will change. Analytics collections created prior to 8.12.0 will continue to use\
\ their existing ILM policies, but new analytics collections will be managed using DSL."
area: Application
type: feature
issues: [ ]
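To make the DSL-versus-ILM distinction in this changelog entry concrete: under Data Stream Lifecycle, retention is configured directly on the data stream rather than via an ILM policy attached to backing indices. A hedged sketch (the data stream name is a placeholder, not one specified by the changelog):

```
PUT _data_stream/my-analytics-collection-datastream/_lifecycle
{
  "data_retention": "30d"
}
```

Under ILM, the equivalent retention would instead live in a `delete` phase of a policy applied to the backing indices.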
7 changes: 7 additions & 0 deletions docs/reference/esql/functions/starts_with.asciidoc
@@ -1,5 +1,8 @@
[[esql-starts_with]]
=== `STARTS_WITH`
[.text-center]
image::esql/functions/signature/starts_with.svg[Embedded,opts=inline]

Returns a boolean that indicates whether a keyword string starts with another
string:

@@ -11,3 +14,7 @@ include::{esql-specs}/docs.csv-spec[tag=startsWith]
|===
include::{esql-specs}/docs.csv-spec[tag=startsWith-result]
|===

Supported types:

include::types/starts_with.asciidoc[]
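For readers skimming the diff, a minimal usage sketch of the function this file documents (index and field names are illustrative):

```
FROM employees
| KEEP last_name
| WHERE STARTS_WITH(last_name, "B")
```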
7 changes: 7 additions & 0 deletions docs/reference/esql/functions/trim.asciidoc
@@ -1,5 +1,8 @@
[[esql-trim]]
=== `TRIM`
[.text-center]
image::esql/functions/signature/trim.svg[Embedded,opts=inline]

Removes leading and trailing whitespaces from strings.

[source.merge.styled,esql]
@@ -10,3 +13,7 @@ include::{esql-specs}/string.csv-spec[tag=trim]
|===
include::{esql-specs}/string.csv-spec[tag=trim-result]
|===

Supported types:

include::types/trim.asciidoc[]
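Likewise, a minimal illustrative sketch of `TRIM`:

```
ROW message = "   some text  "
| EVAL trimmed = TRIM(message)
```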
8 changes: 4 additions & 4 deletions docs/reference/ilm/actions/ilm-rollover.asciidoc
@@ -129,10 +129,10 @@ opt in to rolling over empty indices, by adding a `"min_docs": 0` condition. Thi
disabled on a cluster-wide basis by setting `indices.lifecycle.rollover.only_if_has_documents` to
`false`.

NOTE: The rollover action implicitly always rolls over a data stream or alias if one or more shards contain
200000000 or more documents. Normally a shard will reach 50GB long before it reaches 200M documents,
but this isn't the case for space efficient data sets. Search performance will very likely suffer
if a shard contains more than 200M documents. This is the reason of the builtin limit.
IMPORTANT: The rollover action implicitly always rolls over a data stream or alias if one or more shards contain
200,000,000 or more documents. Normally a shard will reach 50GB long before it reaches 200M documents,
but this isn't the case for space-efficient data sets. Search performance will very likely suffer
if a shard contains more than 200M documents. This is the reason for the built-in limit.

[[ilm-rollover-ex]]
==== Example
12 changes: 12 additions & 0 deletions docs/reference/ilm/index-rollover.asciidoc
@@ -51,3 +51,15 @@ TIP: Rolling over to a new index based on size, document count, or age is prefer
to time-based rollovers. Rolling over at an arbitrary time often results in
many small indices, which can have a negative impact on performance and
resource usage.

IMPORTANT: Empty indices will not be rolled over, even if they have an associated `max_age` that
would otherwise result in a roll over occurring. A policy can override this behavior, and explicitly
opt in to rolling over empty indices, by adding a `"min_docs": 0` condition. This can also be
disabled on a cluster-wide basis by setting `indices.lifecycle.rollover.only_if_has_documents` to
`false`.

IMPORTANT: The rollover action implicitly always rolls over a data stream or alias if one or more shards contain
200,000,000 or more documents. Normally a shard will reach 50GB long before it reaches 200M documents,
but this isn't the case for space-efficient data sets. Search performance will very likely suffer
if a shard contains more than 200M documents. This is the reason for the built-in limit.
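The `"min_docs": 0` opt-in described in the first note above would look roughly like this inside a policy's rollover action (policy name and values are illustrative):

```
PUT _ilm/policy/allow-empty-rollover
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d",
            "min_docs": 0
          }
        }
      }
    }
  }
}
```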

5 changes: 5 additions & 0 deletions docs/reference/ilm/set-up-lifecycle-policy.asciidoc
@@ -68,6 +68,11 @@ PUT _ilm/policy/my_policy
<2> Delete the index 30 days after rollover
====

IMPORTANT: The rollover action implicitly always rolls over a data stream or alias if one or more shards contain
200,000,000 or more documents. Normally a shard will reach 50GB long before it reaches 200M documents,
but this isn't the case for space-efficient data sets. Search performance will very likely suffer
if a shard contains more than 200M documents. This is the reason for the built-in limit.

[discrete]
[[apply-policy-template]]
=== Apply lifecycle policy with an index template
16 changes: 8 additions & 8 deletions docs/reference/query-rules/apis/get-query-ruleset.asciidoc
@@ -52,9 +52,9 @@ PUT _query_rules/my-ruleset
"type": "pinned",
"criteria": [
{
"type": "exact",
"type": "contains",
"metadata": "query_string",
"values": [ "marvel" ]
"values": [ "pugs", "puggles" ]
}
],
"actions": {
@@ -69,9 +69,9 @@ PUT _query_rules/my-ruleset
"type": "pinned",
"criteria": [
{
"type": "exact",
"type": "fuzzy",
"metadata": "query_string",
"values": [ "dc" ]
"values": [ "rescue dogs" ]
}
],
"actions": {
@@ -117,9 +117,9 @@ A sample response:
"type": "pinned",
"criteria": [
{
"type": "exact",
"type": "contains",
"metadata": "query_string",
"values": [ "marvel" ]
"values": [ "pugs", "puggles" ]
}
],
"actions": {
@@ -134,9 +134,9 @@ A sample response:
"type": "pinned",
"criteria": [
{
"type": "exact",
"type": "fuzzy",
"metadata": "query_string",
"values": [ "dc" ]
"values": [ "rescue dogs" ]
}
],
"actions": {
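For completeness, the sample response shown in this file is what is returned by retrieving the ruleset:

```
GET _query_rules/my-ruleset
```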
72 changes: 36 additions & 36 deletions docs/reference/query-rules/apis/put-query-ruleset.asciidoc
@@ -22,29 +22,27 @@ Requires the `manage_search_query_rules` privilege.

[role="child_attributes"]
[[put-query-ruleset-request-body]]
(Required, object)
Contains parameters for a query ruleset:
(Required, object) Contains parameters for a query ruleset:

==== {api-request-body-title}

`rules`::
(Required, array of objects)
The specific rules included in this query ruleset.
(Required, array of objects) The specific rules included in this query ruleset.

Each rule must have the following information:

- `rule_id` (Required, string)
A unique identifier for this rule.
- `type` (Required, string)
The type of rule. At this time only `pinned` query rule types are allowed.
- `criteria` (Required, array of objects)
The criteria that must be met for the rule to be applied. If multiple criteria are specified for a rule, all criteria must be met for the rule to be applied.
- `actions` (Required, object)
The actions to take when the rule is matched. The format of this action depends on the rule type.
- `rule_id` (Required, string) A unique identifier for this rule.
- `type` (Required, string) The type of rule.
At this time only `pinned` query rule types are allowed.
- `criteria` (Required, array of objects) The criteria that must be met for the rule to be applied.
If multiple criteria are specified for a rule, all criteria must be met for the rule to be applied.
- `actions` (Required, object) The actions to take when the rule is matched.
The format of this action depends on the rule type.

Criteria must have the following information:

- `type` (Required, string)
The type of criteria. The following criteria types are supported:
- `type` (Required, string) The type of criteria.
The following criteria types are supported:
+
--
- `exact`
@@ -77,30 +75,32 @@ Only applicable for numerical values.
- `always`
Matches all queries, regardless of input.
--
- `metadata` (Optional, string)
The metadata field to match against. Required for all criteria types except `global`.
- `values` (Optional, array of strings)
The values to match against the metadata field. Only one value must match for the criteria to be met. Required for all criteria types except `global`.
- `metadata` (Optional, string) The metadata field to match against.
This metadata will be used to match against `match_criteria` sent in the <<query-dsl-rule-query>>.
Required for all criteria types except `global`.
- `values` (Optional, array of strings) The values to match against the metadata field.
Only one value must match for the criteria to be met.
Required for all criteria types except `global`.

Actions depend on the rule type.
For `pinned` rules, actions follow the format specified by the <<query-dsl-pinned-query,Pinned Query>>.
The following actions are allowed:

- `ids` (Optional, array of strings)
The The unique <<mapping-id-field, document IDs>> of the documents to pin.
Only one of `ids` or `docs` may be specified, and at least one must be specified.
- `docs` (Optional, array of objects)
The documents to pin. Only one of `ids` or `docs` may be specified, and at least one must be specified.
You can specify the following attributes for each document:
- `ids` (Optional, array of strings) The unique <<mapping-id-field, document IDs>> of the documents to pin.
Only one of `ids` or `docs` may be specified, and at least one must be specified.
- `docs` (Optional, array of objects) The documents to pin.
Only one of `ids` or `docs` may be specified, and at least one must be specified.
You can specify the following attributes for each document:
+
--
- `_index` (Required, string)
The index of the document to pin.
- `_id` (Required, string)
The unique <<mapping-id-field, document ID>>.
- `_index` (Required, string) The index of the document to pin.
- `_id` (Required, string) The unique <<mapping-id-field, document ID>>.
--

IMPORTANT: Due to limitations within <<query-dsl-pinned-query,Pinned queries>>, you can only pin documents using `ids` or `docs`, but cannot use both in single rule. It is advised to use one or the other in query rulesets, to avoid errors. Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
IMPORTANT: Due to limitations within <<query-dsl-pinned-query,Pinned queries>>, you can only pin documents using `ids` or `docs`, but cannot use both in a single rule.
It is advised to use one or the other in query rulesets to avoid errors.
Additionally, pinned queries have a maximum limit of 100 pinned hits.
If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
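For context on how the `metadata`/`values` criteria above are exercised: a ruleset is applied at search time via the rule query, which supplies `match_criteria` to compare against each rule's metadata. A hedged sketch (the index name is a placeholder, and exact syntax may differ between versions):

```
GET my-index/_search
{
  "query": {
    "rule_query": {
      "organic": {
        "match": { "description": "puggles" }
      },
      "ruleset_id": "my-ruleset",
      "match_criteria": {
        "user_query": "puggles"
      }
    }
  }
}
```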

[[put-query-ruleset-example]]
==== {api-examples-title}
@@ -109,8 +109,8 @@ The following example creates a new query ruleset called `my-ruleset`.

Two rules are associated with `my-ruleset`:

- `my-rule1` will pin documents with IDs `id1` and `id2` when `user.query` exactly matches `marvel` _or_ `dc` **and** `user.country` exactly matches `us`.
- `my-rule2` will pin documents from different, specified indices with IDs `id3` and `id4` when the `query_string` fuzzily matches `comic`.
- `my-rule1` will pin documents with IDs `id1` and `id2` when `user_query` contains `pugs` _or_ `puggles` **and** `user_country` exactly matches `us`.
- `my-rule2` will pin documents from different, specified indices with IDs `id3` and `id4` when the `query_string` fuzzily matches `rescue dogs`.

[source,console]
----
@@ -123,12 +123,12 @@ PUT _query_rules/my-ruleset
"criteria": [
{
"type": "contains",
"metadata": "user.query",
"values": [ "marvel", "dc" ]
"metadata": "user_query",
"values": [ "pugs", "puggles" ]
},
{
"type": "exact",
"metadata": "user.country",
"metadata": "user_country",
"values": [ "us" ]
}
],
@@ -145,8 +145,8 @@ PUT _query_rules/my-ruleset
"criteria": [
{
"type": "fuzzy",
"metadata": "query_string",
"values": [ "comic" ]
"metadata": "user_query",
"values": [ "rescue dogs" ]
}
],
"actions": {
10 changes: 10 additions & 0 deletions qa/mixed-cluster/build.gradle
@@ -41,6 +41,16 @@ excludeList.add('aggregations/filter/Standard queries get cached')
excludeList.add('aggregations/filter/Terms lookup gets cached')
excludeList.add('aggregations/filters_bucket/cache hits')

// These tests check setting validations in the desired_node API.
// Validation (and associated tests) are supposed to be skipped/have
// different behaviour for versions before and after 8.10 but mixed
// cluster tests may not respect that - see the comment above.
excludeList.add('cluster.desired_nodes/10_basic/Test settings are validated')
excludeList.add('cluster.desired_nodes/10_basic/Test unknown settings are forbidden in known versions')
excludeList.add('cluster.desired_nodes/10_basic/Test unknown settings are allowed in future versions')
excludeList.add('cluster.desired_nodes/10_basic/Test some settings can be overridden')
excludeList.add('cluster.desired_nodes/20_dry_run/Test validation works for dry run updates')

BuildParams.bwcVersions.withWireCompatible { bwcVersion, baseName ->

if (bwcVersion != VersionProperties.getElasticsearchVersion()) {
Original file line number Diff line number Diff line change
@@ -1,8 +1,8 @@
---
setup:
- skip:
version: "8.7.00 - 8.9.99"
reason: "Synthetic source shows up in the mapping in 8.10 and on, may trigger assert failures in mixed cluster tests"
version: " - 8.9.99"
reason: "position metric introduced in 8.8.0, synthetic source shows up in the mapping in 8.10 and on, may trigger assert failures in mixed cluster tests"

- do:
indices.create:
Original file line number Diff line number Diff line change
@@ -424,6 +424,10 @@ nested fields:

---
"Synthetic source":
- skip:
version: " - 8.9.99"
reason: Synthetic source shows up in the mapping in 8.10

- do:
indices.create:
index: tsdb-synthetic
Original file line number Diff line number Diff line change
@@ -16,6 +16,7 @@
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.disruption.NetworkDisruption;
import org.elasticsearch.test.junit.annotations.TestLogging;
import org.elasticsearch.test.transport.MockTransportService;

import java.util.Collection;
@@ -36,6 +37,10 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {
return List.of(BlockingClusterSettingTestPlugin.class, MockTransportService.TestPlugin.class);
}

@TestLogging(
reason = "https://github.com/elastic/elasticsearch/issues/98918",
value = "org.elasticsearch.action.admin.cluster.settings.TransportClusterUpdateSettingsAction:TRACE"
)
public void testClusterSettingsUpdateNotAcknowledged() throws Exception {
final var nodes = internalCluster().startMasterOnlyNodes(3);
final String masterNode = internalCluster().getMasterName();
@@ -52,26 +57,26 @@ public void testClusterSettingsUpdateNotAcknowledged() throws Exception {
);
internalCluster().setDisruptionScheme(networkDisruption);

logger.debug("--> updating cluster settings");
logger.info("--> updating cluster settings");
var future = client(masterNode).admin()
.cluster()
.prepareUpdateSettings()
.setPersistentSettings(Settings.builder().put(BlockingClusterSettingTestPlugin.TEST_BLOCKING_SETTING.getKey(), true).build())
.setMasterNodeTimeout(TimeValue.timeValueMillis(10L))
.execute();

logger.debug("--> waiting for cluster state update to be blocked");
BlockingClusterSettingTestPlugin.blockLatch.await();
logger.info("--> waiting for cluster state update to be blocked");
safeAwait(BlockingClusterSettingTestPlugin.blockLatch);

logger.debug("--> isolating master eligible node [{}] from other nodes", blockedNode);
logger.info("--> isolating master eligible node [{}] from other nodes", blockedNode);
networkDisruption.startDisrupting();

logger.debug("--> unblocking cluster state update");
logger.info("--> unblocking cluster state update");
BlockingClusterSettingTestPlugin.releaseLatch.countDown();

assertThat("--> cluster settings update should not be acknowledged", future.get().isAcknowledged(), equalTo(false));

logger.debug("--> stop network disruption");
logger.info("--> stop network disruption");
networkDisruption.stopDisrupting();
ensureStableCluster(3);
}
@@ -86,11 +91,13 @@ public static class BlockingClusterSettingTestPlugin extends Plugin {

public static final Setting<Boolean> TEST_BLOCKING_SETTING = Setting.boolSetting("cluster.test.blocking_setting", false, value -> {
if (blockOnce.compareAndSet(false, true)) {
logger.debug("--> setting validation is now blocking cluster state update");
logger.info("--> setting validation is now blocking cluster state update");
blockLatch.countDown();
logger.debug("--> setting validation is now waiting for release");
logger.info("--> setting validation is now waiting for release");
safeAwait(releaseLatch);
logger.debug("--> setting validation is done");
logger.info("--> setting validation is done");
} else {
logger.info("--> setting validation was blocked before");
}
}, Setting.Property.NodeScope, Setting.Property.Dynamic);

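The diff above swaps bare `latch.await()` calls for `safeAwait(...)`. The sketch below shows why: an unbounded `await()` can hang a test suite forever if the countdown never happens, while a bounded wait converts the hang into a test failure. This helper is a simplified stand-in for the Elasticsearch test framework's version, not its actual implementation.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SafeAwaitSketch {
    // Simplified stand-in for the test framework's safeAwait: bound the
    // wait, and turn a timeout or interruption into an AssertionError
    // instead of hanging the suite on a latch that never opens.
    static void safeAwait(CountDownLatch latch) {
        try {
            if (latch.await(10, TimeUnit.SECONDS) == false) {
                throw new AssertionError("latch was not released within the timeout");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new AssertionError("interrupted while waiting for latch", e);
        }
    }

    public static void main(String[] args) {
        CountDownLatch latch = new CountDownLatch(1);
        latch.countDown();   // released up front, so the wait returns immediately
        safeAwait(latch);
        System.out.println("released");
    }
}
```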
Original file line number Diff line number Diff line change
@@ -383,6 +383,7 @@ private void snapshot(
SnapshotIndexCommit snapshotIndexCommit = null;
try {
snapshotIndexCommit = new SnapshotIndexCommit(indexShard.acquireIndexCommitForSnapshot());
final var shardStateId = getShardStateId(indexShard, snapshotIndexCommit.indexCommit()); // not aborted so indexCommit() ok
snapshotStatus.addAbortListener(makeAbortListener(indexShard.shardId(), snapshot, snapshotIndexCommit));
snapshotStatus.ensureNotAborted();
repository.snapshotShard(
@@ -392,7 +393,7 @@
snapshot.getSnapshotId(),
indexId,
snapshotIndexCommit,
getShardStateId(indexShard, snapshotIndexCommit.indexCommit()),
shardStateId,
snapshotStatus,
version,
entryStartTime,
Original file line number Diff line number Diff line change
@@ -1364,7 +1364,6 @@ public void testCancelViaTasksAPI() throws Exception {
assertThat(json, matchesRegex(".*task (was)?\\s*cancelled.*"));
}

@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/99519")
public void testCancelViaAsyncSearchDelete() throws Exception {
Map<String, Object> testClusterInfo = setupTwoClusters();
String localIndex = (String) testClusterInfo.get("local.index");