Merge branch 'main' into better-docs
fang-xing-esql committed Apr 22, 2024
2 parents 114e009 + e8dc840 commit eddfa1a
Showing 371 changed files with 2,320 additions and 1,553 deletions.
5 changes: 5 additions & 0 deletions docs/changelog/107578.yaml
@@ -0,0 +1,5 @@
pr: 107578
summary: "ESQL: Allow reusing BUCKET grouping expressions in aggs"
area: ES|QL
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/107655.yaml
@@ -0,0 +1,6 @@
pr: 107655
summary: "Use #addWithoutBreaking when adding a negative number of bytes to the circuit\
\ breaker in `SequenceMatcher`"
area: EQL
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/107663.yaml
@@ -0,0 +1,5 @@
pr: 107663
summary: Optimize `GeoBounds` and `GeoCentroid` aggregations for single value fields
area: Geo
type: enhancement
issues: []
4 changes: 3 additions & 1 deletion docs/reference/cluster/nodes-hot-threads.asciidoc
@@ -56,7 +56,9 @@ include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=node-id]
troubleshooting, set this parameter to a large number (e.g.
`9999`) to get information about all the threads in the system.

include::{es-ref-dir}/rest-api/common-parms.asciidoc[tag=timeoutparms]
`timeout`::
(Optional, <<time-units, time units>>) Specifies how long to wait for a
response from each node. If omitted, waits forever.

`type`::
(Optional, string) The type to sample. Available options are `block`, `cpu`, and
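The parameters documented above (`threads`, `timeout`, `type`) are query-string options on the nodes hot threads endpoint. A minimal sketch in Python of how they combine into a request URL; the base URL, helper name, and defaults are illustrative assumptions, not part of the docs:

```python
from urllib.parse import urlencode

def hot_threads_url(base="http://localhost:9200", node_id=None,
                    threads=3, timeout=None, thread_type=None):
    """Build a nodes hot threads request URL from the documented parameters."""
    path = f"/_nodes/{node_id}/hot_threads" if node_id else "/_nodes/hot_threads"
    params = {"threads": threads}          # e.g. 9999 to sample all threads
    if timeout is not None:
        params["timeout"] = timeout        # e.g. "30s"; omitted -> waits forever
    if thread_type is not None:
        params["type"] = thread_type       # e.g. "block" or "cpu"
    return f"{base}{path}?{urlencode(params)}"

print(hot_threads_url(threads=9999, timeout="30s", thread_type="cpu"))
```

For troubleshooting, setting `threads=9999` as the text suggests simply widens the sample to every thread on each node.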
62 changes: 62 additions & 0 deletions docs/reference/cluster/nodes-stats.asciidoc
@@ -1853,6 +1853,68 @@ store.
(integer)
Total number of bytes available to this Java virtual machine on this file
store.

`low_watermark_free_space`::
(<<byte-units,byte value>>)
The effective low disk watermark for this data path on this node: when a node
has less free space than this value for at least one data path, its disk usage
has exceeded the low watermark. See <<disk-based-shard-allocation>> for more
information about disk watermarks and their effects on shard allocation.

`low_watermark_free_space_in_bytes`::
(integer)
The effective low disk watermark, in bytes, for this data path on this node:
when a node has less free space than this value for at least one data path, its
disk usage has exceeded the low watermark. See <<disk-based-shard-allocation>>
for more information about disk watermarks and their effects on shard
allocation.

`high_watermark_free_space`::
(<<byte-units,byte value>>)
The effective high disk watermark for this data path on this node: when a node
has less free space than this value for at least one data path, its disk usage
has exceeded the high watermark. See <<disk-based-shard-allocation>> for more
information about disk watermarks and their effects on shard allocation.

`high_watermark_free_space_in_bytes`::
(integer)
The effective high disk watermark, in bytes, for this data path on this node:
when a node has less free space than this value for at least one data path, its
disk usage has exceeded the high watermark. See <<disk-based-shard-allocation>>
for more information about disk watermarks and their effects on shard
allocation.

`flood_stage_free_space`::
(<<byte-units,byte value>>)
The effective flood stage disk watermark for this data path on this node: when
a node has less free space than this value for at least one data path, its disk
usage has exceeded the flood stage watermark. See
<<disk-based-shard-allocation>> for more information about disk watermarks and
their effects on shard allocation.

`flood_stage_free_space_in_bytes`::
(integer)
The effective flood stage disk watermark, in bytes, for this data path on this
node: when a node has less free space than this value for at least one data
path, its disk usage has exceeded the flood stage watermark. See
<<disk-based-shard-allocation>> for more information about disk watermarks and
their effects on shard allocation.

`frozen_flood_stage_free_space`::
(<<byte-units,byte value>>)
The effective flood stage disk watermark for this data path on a dedicated
frozen node: when a dedicated frozen node has less free space than this value
for at least one data path, its disk usage has exceeded the flood stage
watermark. See <<disk-based-shard-allocation>> for more information about disk
watermarks and their effects on shard allocation.

`frozen_flood_stage_free_space_in_bytes`::
(integer)
The effective flood stage disk watermark, in bytes, for this data path on a
dedicated frozen node: when a dedicated frozen node has less free space than
this value for at least one data path, its disk usage has exceeded the flood
stage watermark. See <<disk-based-shard-allocation>> for more information about
disk watermarks and their effects on shard allocation.
=======
`io_stats` (Linux only)::
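Each `*_free_space` field added above reports a free-space threshold: the corresponding watermark is exceeded once a data path has less free space than that value. A minimal sketch of that comparison logic in Python; the function and return labels are assumptions for illustration, not Elasticsearch code:

```python
def disk_watermark_status(free_bytes, low_free, high_free, flood_free):
    """Classify a data path from the effective free-space watermarks:
    a watermark is exceeded when free space drops below its threshold,
    and flood stage implies high, which implies low."""
    if free_bytes < flood_free:
        return "flood_stage_exceeded"
    if free_bytes < high_free:
        return "high_exceeded"
    if free_bytes < low_free:
        return "low_exceeded"
    return "ok"

# 5 GiB free, with 10/6/2 GiB low/high/flood thresholds -> high exceeded.
print(disk_watermark_status(5 * 2**30, 10 * 2**30, 6 * 2**30, 2 * 2**30))
```

Note the thresholds are expressed as *free space remaining*, so the comparisons use less-than: a node with 5 GiB free is below a 6 GiB high watermark even though it is above the 2 GiB flood stage.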
@@ -72,7 +72,7 @@ protected void masterOperation(
request.getStartTime(),
systemDataStreamDescriptor,
request.masterNodeTimeout(),
request.timeout(),
request.ackTimeout(),
true
);
metadataCreateDataStreamService.createDataStream(updateRequest, listener);
@@ -51,9 +51,7 @@ public class DataStreamsStatsTransportAction extends TransportBroadcastByNodeAct
DataStreamsStatsAction.Response,
DataStreamsStatsAction.DataStreamShardStats> {

private final ClusterService clusterService;
private final IndicesService indicesService;
private final IndexNameExpressionResolver indexNameExpressionResolver;

@Inject
public DataStreamsStatsTransportAction(
@@ -72,9 +70,7 @@ public DataStreamsStatsTransportAction(
DataStreamsStatsAction.Request::new,
transportService.getThreadPool().executor(ThreadPool.Names.MANAGEMENT)
);
this.clusterService = clusterService;
this.indicesService = indicesService;
this.indexNameExpressionResolver = indexNameExpressionResolver;
}

@Override
@@ -69,7 +69,7 @@ protected void masterOperation(
new MetadataMigrateToDataStreamService.MigrateToDataStreamClusterStateUpdateRequest(
request.getAliasName(),
request.masterNodeTimeout(),
request.timeout()
request.ackTimeout()
);
metadataMigrateToDataStreamService.migrateToDataStream(updateRequest, listener);
}
@@ -42,7 +42,7 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli
PutDataStreamLifecycleAction.Request putLifecycleRequest = PutDataStreamLifecycleAction.Request.parseRequest(parser);
putLifecycleRequest.indices(Strings.splitStringByCommaToArray(request.param("name")));
putLifecycleRequest.masterNodeTimeout(request.paramAsTime("master_timeout", putLifecycleRequest.masterNodeTimeout()));
putLifecycleRequest.timeout(request.paramAsTime("timeout", putLifecycleRequest.timeout()));
putLifecycleRequest.ackTimeout(request.paramAsTime("timeout", putLifecycleRequest.ackTimeout()));
putLifecycleRequest.indicesOptions(IndicesOptions.fromRequest(request, putLifecycleRequest.indicesOptions()));
return channel -> client.execute(
PutDataStreamLifecycleAction.INSTANCE,
@@ -44,7 +44,7 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli
throw new IllegalArgumentException("no data stream actions specified, at least one must be specified");
}
modifyDsRequest.masterNodeTimeout(request.paramAsTime("master_timeout", modifyDsRequest.masterNodeTimeout()));
modifyDsRequest.timeout(request.paramAsTime("timeout", modifyDsRequest.timeout()));
modifyDsRequest.ackTimeout(request.paramAsTime("timeout", modifyDsRequest.ackTimeout()));
return channel -> client.execute(ModifyDataStreamsAction.INSTANCE, modifyDsRequest, new RestToXContentListener<>(channel));
}

@@ -31,7 +31,6 @@

public class GeoIpStatsTransportAction extends TransportNodesAction<Request, Response, NodeRequest, NodeResponse> {

private final TransportService transportService;
private final DatabaseNodeService registry;
private final GeoIpDownloaderTaskExecutor geoIpDownloaderTaskExecutor;

@@ -52,7 +51,6 @@ public GeoIpStatsTransportAction(
NodeRequest::new,
threadPool.executor(ThreadPool.Names.MANAGEMENT)
);
this.transportService = transportService;
this.registry = registry;
this.geoIpDownloaderTaskExecutor = geoIpDownloaderTaskExecutor;
}
@@ -9,7 +9,7 @@
package org.elasticsearch.http;

import org.apache.http.client.methods.HttpGet;
import org.elasticsearch.action.admin.cluster.stats.ClusterStatsAction;
import org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Cancellable;
import org.elasticsearch.client.Request;
@@ -104,7 +104,7 @@ public void testClusterStateRestCancellation() throws Exception {
logger.info("--> sending cluster state request");
final Cancellable cancellable = getRestClient().performRequestAsync(clusterStatsRequest, wrapAsRestResponseListener(future));

awaitTaskWithPrefix(ClusterStatsAction.NAME);
awaitTaskWithPrefix(TransportClusterStatsAction.TYPE.name());

logger.info("--> waiting for at least one task to hit a block");
assertBusy(() -> assertTrue(statsBlocks.stream().anyMatch(Semaphore::hasQueuedThreads)));
@@ -113,12 +113,12 @@ public void testClusterStateRestCancellation() throws Exception {
cancellable.cancel();
expectThrows(CancellationException.class, future::actionGet);

assertAllCancellableTasksAreCancelled(ClusterStatsAction.NAME);
assertAllCancellableTasksAreCancelled(TransportClusterStatsAction.TYPE.name());
} finally {
Releasables.close(releasables);
}

assertAllTasksHaveFinished(ClusterStatsAction.NAME);
assertAllTasksHaveFinished(TransportClusterStatsAction.TYPE.name());
}

public static class StatsBlockingPlugin extends Plugin implements EnginePlugin {
@@ -9,7 +9,7 @@
package org.elasticsearch.http.snapshots;

import org.apache.http.client.methods.HttpGet;
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsAction;
import org.elasticsearch.action.admin.cluster.snapshots.get.TransportGetSnapshotsAction;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Cancellable;
import org.elasticsearch.client.Request;
@@ -48,13 +48,13 @@ public void testGetSnapshotsCancellation() throws Exception {
final Cancellable cancellable = getRestClient().performRequestAsync(request, wrapAsRestResponseListener(future));

assertThat(future.isDone(), equalTo(false));
awaitTaskWithPrefix(GetSnapshotsAction.NAME);
awaitTaskWithPrefix(TransportGetSnapshotsAction.TYPE.name());
assertBusy(() -> assertTrue(repository.blocked()), 30L, TimeUnit.SECONDS);
cancellable.cancel();
assertAllCancellableTasksAreCancelled(GetSnapshotsAction.NAME);
assertAllCancellableTasksAreCancelled(TransportGetSnapshotsAction.TYPE.name());
repository.unblock();
expectThrows(CancellationException.class, future::actionGet);

assertAllTasksHaveFinished(GetSnapshotsAction.NAME);
assertAllTasksHaveFinished(TransportGetSnapshotsAction.TYPE.name());
}
}
@@ -9,7 +9,7 @@
package org.elasticsearch.http.snapshots;

import org.apache.http.client.methods.HttpGet;
import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusAction;
import org.elasticsearch.action.admin.cluster.snapshots.status.TransportSnapshotsStatusAction;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Cancellable;
import org.elasticsearch.client.Request;
@@ -54,13 +54,13 @@ public void testSnapshotStatusCancellation() throws Exception {
final Cancellable cancellable = getRestClient().performRequestAsync(request, wrapAsRestResponseListener(future));

assertFalse(future.isDone());
awaitTaskWithPrefix(SnapshotsStatusAction.NAME);
awaitTaskWithPrefix(TransportSnapshotsStatusAction.TYPE.name());
assertBusy(() -> assertTrue(repository.blocked()), 30L, TimeUnit.SECONDS);
cancellable.cancel();
assertAllCancellableTasksAreCancelled(SnapshotsStatusAction.NAME);
assertAllCancellableTasksAreCancelled(TransportSnapshotsStatusAction.TYPE.name());
repository.unblock();
expectThrows(CancellationException.class, future::actionGet);

assertAllTasksHaveFinished(SnapshotsStatusAction.NAME);
assertAllTasksHaveFinished(TransportSnapshotsStatusAction.TYPE.name());
}
}
@@ -8,9 +8,9 @@

package org.elasticsearch.cluster.coordination;

import org.elasticsearch.action.admin.cluster.stats.ClusterStatsAction;
import org.elasticsearch.action.admin.cluster.stats.ClusterStatsRequest;
import org.elasticsearch.action.admin.cluster.stats.ClusterStatsResponse;
import org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.cluster.metadata.Metadata;
import org.elasticsearch.cluster.service.ClusterService;
@@ -41,7 +41,7 @@ private static void assertClusterUuid(boolean expectCommitted, String expectedVa
assertEquals(expectedValue, metadata.clusterUUID());

final ClusterStatsResponse response = PlainActionFuture.get(
fut -> client(nodeName).execute(ClusterStatsAction.INSTANCE, new ClusterStatsRequest(), fut),
fut -> client(nodeName).execute(TransportClusterStatsAction.TYPE, new ClusterStatsRequest(), fut),
10,
TimeUnit.SECONDS
);
@@ -15,6 +15,7 @@
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.engine.Engine;
@@ -40,7 +41,7 @@ public void testThatNonDynamicSettingChangesTakeEffect() throws Exception {
MetadataUpdateSettingsService metadataUpdateSettingsService = internalCluster().getCurrentMasterNodeInstance(
MetadataUpdateSettingsService.class
);
UpdateSettingsClusterStateUpdateRequest request = new UpdateSettingsClusterStateUpdateRequest();
UpdateSettingsClusterStateUpdateRequest request = new UpdateSettingsClusterStateUpdateRequest().ackTimeout(TimeValue.ZERO);
List<Index> indices = new ArrayList<>();
for (IndicesService indicesService : internalCluster().getInstances(IndicesService.class)) {
for (IndexService indexService : indicesService) {
@@ -108,7 +109,7 @@ public void testThatNonDynamicSettingChangesDoNotUnncessesarilyCauseReopens() th
MetadataUpdateSettingsService metadataUpdateSettingsService = internalCluster().getCurrentMasterNodeInstance(
MetadataUpdateSettingsService.class
);
UpdateSettingsClusterStateUpdateRequest request = new UpdateSettingsClusterStateUpdateRequest();
UpdateSettingsClusterStateUpdateRequest request = new UpdateSettingsClusterStateUpdateRequest().ackTimeout(TimeValue.ZERO);
List<Index> indices = new ArrayList<>();
for (IndicesService indicesService : internalCluster().getInstances(IndicesService.class)) {
for (IndexService indexService : indicesService) {
@@ -10,9 +10,9 @@

import org.elasticsearch.action.ActionFuture;
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.get.shard.GetShardSnapshotAction;
import org.elasticsearch.action.admin.cluster.snapshots.get.shard.GetShardSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.get.shard.GetShardSnapshotResponse;
import org.elasticsearch.action.admin.cluster.snapshots.get.shard.TransportGetShardSnapshotAction;
import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.cluster.metadata.IndexMetadata;
@@ -346,7 +346,7 @@ private PlainActionFuture<GetShardSnapshotResponse> getLatestSnapshotForShardFut
request = GetShardSnapshotRequest.latestSnapshotInRepositories(shardId, repositories);
}

client().execute(GetShardSnapshotAction.INSTANCE, request, future);
client().execute(TransportGetShardSnapshotAction.TYPE, request, future);
return future;
}
}
@@ -177,6 +177,7 @@ static TransportVersion def(int id) {
public static final TransportVersion GEOIP_CACHE_STATS = def(8_636_00_0);
public static final TransportVersion WATERMARK_THRESHOLDS_STATS = def(8_637_00_0);
public static final TransportVersion ENRICH_CACHE_ADDITIONAL_STATS = def(8_638_00_0);
public static final TransportVersion ML_INFERENCE_RATE_LIMIT_SETTINGS_ADDED = def(8_639_00_0);

/*
* STOP! READ THIS FIRST! No, really,
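The new `ML_INFERENCE_RATE_LIMIT_SETTINGS_ADDED` constant follows the file's pattern of strictly increasing numeric ids passed to `def(int id)`. A minimal sketch in Python checking that ordering; the id values are copied from the diff, and treating them as plain integers (Java and Python both allow `_` digit separators) is an assumption about the scheme:

```python
# Transport version ids from the diff, in declaration order.
GEOIP_CACHE_STATS = 8_636_00_0
WATERMARK_THRESHOLDS_STATS = 8_637_00_0
ENRICH_CACHE_ADDITIONAL_STATS = 8_638_00_0
ML_INFERENCE_RATE_LIMIT_SETTINGS_ADDED = 8_639_00_0  # added by this merge

ids = [GEOIP_CACHE_STATS, WATERMARK_THRESHOLDS_STATS,
       ENRICH_CACHE_ADDITIONAL_STATS, ML_INFERENCE_RATE_LIMIT_SETTINGS_ADDED]

# Each new constant must be strictly greater than the previous one,
# so nodes can compare transport versions numerically.
assert all(a < b for a, b in zip(ids, ids[1:]))
```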