Commit 6e77a5e

Merge branch 'elastic:main' into kql-nested-field-support

afoucret authored Nov 14, 2024
2 parents 7cedcf0 + 8cd4a26

Showing 50 changed files with 981 additions and 566 deletions.
(changed file; path not shown)
@@ -10,7 +10,6 @@
package org.elasticsearch.benchmark.index.mapper;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Accountable;
import org.elasticsearch.TransportVersion;
import org.elasticsearch.cluster.ClusterModule;
import org.elasticsearch.cluster.metadata.IndexMetadata;
@@ -28,7 +27,6 @@
import org.elasticsearch.index.mapper.MapperRegistry;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.ProvidedIdFieldMapper;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.similarity.SimilarityService;
import org.elasticsearch.indices.IndicesModule;
import org.elasticsearch.script.Script;
@@ -56,13 +54,7 @@ public static MapperService create(String mappings) {
MapperRegistry mapperRegistry = new IndicesModule(Collections.emptyList()).getMapperRegistry();

SimilarityService similarityService = new SimilarityService(indexSettings, null, Map.of());
BitsetFilterCache bitsetFilterCache = new BitsetFilterCache(indexSettings, new BitsetFilterCache.Listener() {
@Override
public void onCache(ShardId shardId, Accountable accountable) {}

@Override
public void onRemoval(ShardId shardId, Accountable accountable) {}
});
BitsetFilterCache bitsetFilterCache = new BitsetFilterCache(indexSettings, BitsetFilterCache.Listener.NOOP);
MapperService mapperService = new MapperService(
() -> TransportVersion.current(),
indexSettings,
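For reference, the `Listener.NOOP` constant that the new line uses is presumably just a named no-op implementation of the same two callbacks the deleted anonymous class overrode, which is also why the `Accountable` and `ShardId` imports above became unused. A minimal sketch of such a constant, inferred from the removed code rather than taken from the Elasticsearch source:

----
import org.apache.lucene.util.Accountable;
import org.elasticsearch.index.shard.ShardId;

// Sketch only: a Listener interface with a shared no-op instance, matching
// the callback signatures of the anonymous class this commit removes.
interface Listener {
    void onCache(ShardId shardId, Accountable accountable);

    void onRemoval(ShardId shardId, Accountable accountable);

    Listener NOOP = new Listener() {
        @Override
        public void onCache(ShardId shardId, Accountable accountable) {}

        @Override
        public void onRemoval(ShardId shardId, Accountable accountable) {}
    };
}
----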
6 changes: 6 additions & 0 deletions docs/changelog/115814.yaml
@@ -0,0 +1,6 @@
pr: 115814
summary: "[ES|QL] Implicit casting string literal to intervals"
area: ES|QL
type: enhancement
issues:
- 115352
40 changes: 26 additions & 14 deletions docs/reference/esql/implicit-casting.asciidoc
@@ -5,7 +5,7 @@
<titleabbrev>Implicit casting</titleabbrev>
++++

Often users will input `datetime`, `ip`, `version`, or geospatial objects as simple strings in their queries for use in predicates, functions, or expressions. {esql} provides <<esql-type-conversion-functions, type conversion functions>> to explicitly convert these strings into the desired data types.
Often users will input `date`, `ip`, `version`, `date_period` or `time_duration` as simple strings in their queries for use in predicates, functions, or expressions. {esql} provides <<esql-type-conversion-functions, type conversion functions>> to explicitly convert these strings into the desired data types.

Without implicit casting, users must explicitly call these `to_X` functions in their queries whenever string literals don't match the target data types they are assigned or compared to. Here is an example of using `to_datetime` to explicitly perform a data type conversion.

@@ -18,7 +18,7 @@ FROM employees
| LIMIT 1
----

Implicit casting improves usability by automatically converting string literals to the target data type. This is most useful when the target data type is `datetime`, `ip`, `version`, or a geospatial type. It is natural to specify these as strings in queries.

Implicit casting improves usability by automatically converting string literals to the target data type. This is most useful when the target data type is `date`, `ip`, `version`, `date_period`, or `time_duration`. It is natural to specify these as strings in queries.

The first query can be written without calling the `to_datetime` function, as follows:
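
The query itself is collapsed in this view; a hypothetical example in the same spirit, relying on the string literal being implicitly cast to a date (illustrative, not the collapsed original):

----
FROM employees
| WHERE birth_date < "1985-01-01T00:00:00Z"
| LIMIT 1
----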

@@ -38,16 +38,28 @@ The following table details which {esql} operations support implicit casting for

[%header.monospaced.styled,format=dsv,separator=|]
|===
||ScalarFunction|BinaryComparison|ArithmeticOperation|InListPredicate|AggregateFunction
|DATETIME|Y|Y|Y|Y|N
|DOUBLE|Y|N|N|N|N
|LONG|Y|N|N|N|N
|INTEGER|Y|N|N|N|N
|IP|Y|Y|Y|Y|N
|VERSION|Y|Y|Y|Y|N
|GEO_POINT|Y|N|N|N|N
|GEO_SHAPE|Y|N|N|N|N
|CARTESIAN_POINT|Y|N|N|N|N
|CARTESIAN_SHAPE|Y|N|N|N|N
|BOOLEAN|Y|Y|Y|Y|N
||ScalarFunction*|Operator*|<<esql-group-functions, GroupingFunction>>|<<esql-agg-functions, AggregateFunction>>
|DATE|Y|Y|Y|N
|IP|Y|Y|Y|N
|VERSION|Y|Y|Y|N
|BOOLEAN|Y|Y|Y|N
|DATE_PERIOD/TIME_DURATION|Y|N|Y|N
|===

ScalarFunction* includes:

<<esql-conditional-functions-and-expressions, Conditional Functions and Expressions>>

<<esql-date-time-functions, Date and Time Functions>>

<<esql-ip-functions, IP Functions>>


Operator* includes:

<<esql-binary-operators, Binary Operators>>

<<esql-unary-operators, Unary Operator>>

<<esql-in-operator, IN>>
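
Combined with the table above, this means an interval argument to a scalar or grouping function can be written as a plain string. A hypothetical example, assuming `BUCKET` accepts a `date_period` span expressed as a string literal (illustrative, not taken from the shipped docs):

----
FROM employees
| STATS hires = COUNT(*) BY week = BUCKET(hire_date, "1 week")
| SORT week
----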

(changed file; path not shown)
@@ -112,7 +112,7 @@ public void testThrottleResponsesAreCountedInMetrics() throws IOException {
blobContainer.blobExists(purpose, blobName);

// Correct metrics are recorded
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.GET_BLOB_PROPERTIES, repository).expectMetrics()
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.GET_BLOB_PROPERTIES).expectMetrics()
.withRequests(numThrottles + 1)
.withThrottles(numThrottles)
.withExceptions(numThrottles)
@@ -137,7 +137,7 @@ public void testRangeNotSatisfiedAreCountedInMetrics() throws IOException {
assertThrows(RequestedRangeNotSatisfiedException.class, () -> blobContainer.readBlob(purpose, blobName));

// Correct metrics are recorded
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.GET_BLOB, repository).expectMetrics()
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.GET_BLOB).expectMetrics()
.withRequests(1)
.withThrottles(0)
.withExceptions(1)
@@ -170,7 +170,7 @@ public void testErrorResponsesAreCountedInMetrics() throws IOException {
blobContainer.blobExists(purpose, blobName);

// Correct metrics are recorded
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.GET_BLOB_PROPERTIES, repository).expectMetrics()
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.GET_BLOB_PROPERTIES).expectMetrics()
.withRequests(numErrors + 1)
.withThrottles(throttles.get())
.withExceptions(numErrors)
@@ -191,7 +191,7 @@ public void testRequestFailuresAreCountedInMetrics() {
assertThrows(IOException.class, () -> blobContainer.listBlobs(purpose));

// Correct metrics are recorded
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.LIST_BLOBS, repository).expectMetrics()
metricsAsserter(dataNodeName, purpose, AzureBlobStore.Operation.LIST_BLOBS).expectMetrics()
.withRequests(4)
.withThrottles(0)
.withExceptions(4)
@@ -322,20 +322,14 @@ private void clearMetrics(String discoveryNode) {
.forEach(TestTelemetryPlugin::resetMeter);
}

private MetricsAsserter metricsAsserter(
String dataNodeName,
OperationPurpose operationPurpose,
AzureBlobStore.Operation operation,
String repository
) {
return new MetricsAsserter(dataNodeName, operationPurpose, operation, repository);
private MetricsAsserter metricsAsserter(String dataNodeName, OperationPurpose operationPurpose, AzureBlobStore.Operation operation) {
return new MetricsAsserter(dataNodeName, operationPurpose, operation);
}

private class MetricsAsserter {
private final String dataNodeName;
private final OperationPurpose purpose;
private final AzureBlobStore.Operation operation;
private final String repository;

enum Result {
Success,
@@ -361,11 +355,10 @@ List<Measurement> getMeasurements(TestTelemetryPlugin testTelemetryPlugin, String name)
abstract List<Measurement> getMeasurements(TestTelemetryPlugin testTelemetryPlugin, String name);
}

private MetricsAsserter(String dataNodeName, OperationPurpose purpose, AzureBlobStore.Operation operation, String repository) {
private MetricsAsserter(String dataNodeName, OperationPurpose purpose, AzureBlobStore.Operation operation) {
this.dataNodeName = dataNodeName;
this.purpose = purpose;
this.operation = operation;
this.repository = repository;
}

private class Expectations {
@@ -458,7 +451,6 @@ private void assertMatchingMetricRecorded(MetricType metricType, String metricName
.filter(
m -> m.attributes().get("operation").equals(operation.getKey())
&& m.attributes().get("purpose").equals(purpose.getKey())
&& m.attributes().get("repo_name").equals(repository)
&& m.attributes().get("repo_type").equals("azure")
)
.findFirst()
Expand All @@ -470,8 +462,6 @@ private void assertMatchingMetricRecorded(MetricType metricType, String metricNa
+ operation.getKey()
+ " and purpose="
+ purpose.getKey()
+ " and repo_name="
+ repository
+ " in "
+ measurements
)
(changed file; path not shown)
@@ -402,10 +402,7 @@ public void testMetrics() throws Exception {
)
);
metrics.forEach(metric -> {
assertThat(
metric.attributes(),
allOf(hasEntry("repo_type", AzureRepository.TYPE), hasKey("repo_name"), hasKey("operation"), hasKey("purpose"))
);
assertThat(metric.attributes(), allOf(hasEntry("repo_type", AzureRepository.TYPE), hasKey("operation"), hasKey("purpose")));
final AzureBlobStore.Operation operation = AzureBlobStore.Operation.fromKey((String) metric.attributes().get("operation"));
final AzureBlobStore.StatsKey statsKey = new AzureBlobStore.StatsKey(
operation,
5 changes: 5 additions & 0 deletions modules/repository-s3/build.gradle
@@ -12,6 +12,7 @@ import org.elasticsearch.gradle.internal.test.InternalClusterTestPlugin
*/
apply plugin: 'elasticsearch.internal-yaml-rest-test'
apply plugin: 'elasticsearch.internal-cluster-test'
apply plugin: 'elasticsearch.internal-java-rest-test'

esplugin {
description 'The S3 repository plugin adds S3 repositories'
@@ -48,6 +49,10 @@ dependencies {
yamlRestTestImplementation project(':test:fixtures:minio-fixture')
internalClusterTestImplementation project(':test:fixtures:minio-fixture')

javaRestTestImplementation project(":test:framework")
javaRestTestImplementation project(':test:fixtures:s3-fixture')
javaRestTestImplementation project(':modules:repository-s3')

yamlRestTestRuntimeOnly "org.slf4j:slf4j-simple:${versions.slf4j}"
internalClusterTestRuntimeOnly "org.slf4j:slf4j-simple:${versions.slf4j}"
}
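The `elasticsearch.internal-java-rest-test` plugin applied above conventionally registers a `javaRestTest` task, so the new suite added below should presumably run with:

----
./gradlew :modules:repository-s3:javaRestTest
----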
(changed file; path not shown)
@@ -300,10 +300,7 @@ public void testMetrics() throws Exception {
)
);
metrics.forEach(metric -> {
assertThat(
metric.attributes(),
allOf(hasEntry("repo_type", S3Repository.TYPE), hasKey("repo_name"), hasKey("operation"), hasKey("purpose"))
);
assertThat(metric.attributes(), allOf(hasEntry("repo_type", S3Repository.TYPE), hasKey("operation"), hasKey("purpose")));
final S3BlobStore.Operation operation = S3BlobStore.Operation.parse((String) metric.attributes().get("operation"));
final S3BlobStore.StatsKey statsKey = new S3BlobStore.StatsKey(
operation,
(changed file; path not shown)
@@ -0,0 +1,97 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/

package org.elasticsearch.repositories.s3;

import fixture.s3.S3HttpFixture;

import org.elasticsearch.client.Request;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.cluster.ElasticsearchCluster;
import org.elasticsearch.test.cluster.MutableSettingsProvider;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.junit.ClassRule;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;

import java.io.IOException;

import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.equalTo;

public class RepositoryS3RestIT extends ESRestTestCase {

private static final String BUCKET = "RepositoryS3JavaRestTest-bucket";
private static final String BASE_PATH = "RepositoryS3JavaRestTest-base-path";

public static final S3HttpFixture s3Fixture = new S3HttpFixture(true, BUCKET, BASE_PATH, "ignored");

private static final MutableSettingsProvider keystoreSettings = new MutableSettingsProvider();

public static ElasticsearchCluster cluster = ElasticsearchCluster.local()
.module("repository-s3")
.keystore(keystoreSettings)
.setting("s3.client.default.endpoint", s3Fixture::getAddress)
.build();

@ClassRule
public static TestRule ruleChain = RuleChain.outerRule(s3Fixture).around(cluster);

@Override
protected String getTestRestCluster() {
return cluster.getHttpAddresses();
}

public void testReloadCredentialsFromKeystore() throws IOException {
// Register repository (?verify=false because we don't have access to the blob store yet)
final var repositoryName = randomIdentifier();
registerRepository(
repositoryName,
S3Repository.TYPE,
false,
Settings.builder().put("bucket", BUCKET).put("base_path", BASE_PATH).build()
);
final var verifyRequest = new Request("POST", "/_snapshot/" + repositoryName + "/_verify");

// Set up initial credentials
final var accessKey1 = randomIdentifier();
s3Fixture.setAccessKey(accessKey1);
keystoreSettings.put("s3.client.default.access_key", accessKey1);
keystoreSettings.put("s3.client.default.secret_key", randomIdentifier());
cluster.updateStoredSecureSettings();
assertOK(client().performRequest(new Request("POST", "/_nodes/reload_secure_settings")));

// Check access using initial credentials
assertOK(client().performRequest(verifyRequest));

// Rotate credentials in blob store
final var accessKey2 = randomValueOtherThan(accessKey1, ESTestCase::randomIdentifier);
s3Fixture.setAccessKey(accessKey2);

// Ensure that the initial credentials are now invalid
final var accessDeniedException2 = expectThrows(ResponseException.class, () -> client().performRequest(verifyRequest));
assertThat(accessDeniedException2.getResponse().getStatusLine().getStatusCode(), equalTo(500));
assertThat(
accessDeniedException2.getMessage(),
allOf(containsString("Bad access key"), containsString("Status Code: 403"), containsString("Error Code: AccessDenied"))
);

// Set up refreshed credentials
keystoreSettings.put("s3.client.default.access_key", accessKey2);
cluster.updateStoredSecureSettings();
assertOK(client().performRequest(new Request("POST", "/_nodes/reload_secure_settings")));

// Check access using refreshed credentials
assertOK(client().performRequest(verifyRequest));
}

}
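
Stripped of the Java client plumbing, the test above drives standard snapshot and secure-settings APIs; the underlying REST calls look roughly like this (the repository name is illustrative):

----
# Register the repository without verification (no valid credentials yet)
PUT /_snapshot/my_repo?verify=false
{ "type": "s3", "settings": { "bucket": "...", "base_path": "..." } }

# Pick up rotated keystore credentials on every node
POST /_nodes/reload_secure_settings

# Verify repository access with the current credentials
POST /_snapshot/my_repo/_verify
----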
(changed file; path not shown)
@@ -327,8 +327,6 @@ private Map<String, Object> metricAttributes(String action) {
return Map.of(
"repo_type",
S3Repository.TYPE,
"repo_name",
blobStore.getRepositoryMetadata().name(),
"operation",
Operation.GET_OBJECT.getKey(),
"purpose",
(changed file; path not shown)
@@ -1106,7 +1106,7 @@ private List<Measurement> getRetryHistogramMeasurements() {
}

private Map<String, Object> metricAttributes(String action) {
return Map.of("repo_type", "s3", "repo_name", "repository", "operation", "GetObject", "purpose", "Indices", "action", action);
return Map.of("repo_type", "s3", "operation", "GetObject", "purpose", "Indices", "action", action);
}

/**
17 changes: 9 additions & 8 deletions muted-tests.yml
@@ -150,12 +150,6 @@ tests:
- class: org.elasticsearch.xpack.restart.CoreFullClusterRestartIT
  method: testSnapshotRestore {cluster=UPGRADED}
  issue: https://github.com/elastic/elasticsearch/issues/111799
- class: org.elasticsearch.xpack.restart.CoreFullClusterRestartIT
  method: testSnapshotRestore {cluster=OLD}
  issue: https://github.com/elastic/elasticsearch/issues/111774
- class: org.elasticsearch.upgrades.FullClusterRestartIT
  method: testSnapshotRestore {cluster=OLD}
  issue: https://github.com/elastic/elasticsearch/issues/111777
- class: org.elasticsearch.xpack.ml.integration.DatafeedJobsRestIT
  method: testLookbackWithIndicesOptions
  issue: https://github.com/elastic/elasticsearch/issues/116127
@@ -236,13 +230,20 @@ tests:
- class: org.elasticsearch.snapshots.SnapshotShutdownIT
  method: testRestartNodeDuringSnapshot
  issue: https://github.com/elastic/elasticsearch/issues/116730
- class: org.elasticsearch.xpack.inference.InferenceRestIT
  issue: https://github.com/elastic/elasticsearch/issues/116740
- class: org.elasticsearch.action.search.SearchRequestTests
  method: testSerializationConstants
  issue: https://github.com/elastic/elasticsearch/issues/116752
- class: org.elasticsearch.xpack.security.authc.ldap.ActiveDirectoryGroupsResolverTests
  issue: https://github.com/elastic/elasticsearch/issues/116182
- class: org.elasticsearch.http.netty4.Netty4IncrementalRequestHandlingIT
  method: testServerCloseConnectionMidStream
  issue: https://github.com/elastic/elasticsearch/issues/116774
- class: org.elasticsearch.xpack.test.rest.XPackRestIT
  method: test {p0=snapshot/20_operator_privileges_disabled/Operator only settings can be set and restored by non-operator user when operator privileges is disabled}
  issue: https://github.com/elastic/elasticsearch/issues/116775
- class: org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT
  method: test {p0=synonyms/90_synonyms_reloading_for_synset/Reload analyzers for specific synonym set}
  issue: https://github.com/elastic/elasticsearch/issues/116777

# Examples:
#
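#  An illustrative entry (hypothetical class and issue number, mirroring the keys used above):
#
#  - class: org.elasticsearch.example.ExampleIT
#    method: testExample
#    issue: https://github.com/elastic/elasticsearch/issues/12345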
(remaining changed files not shown)