Split searchable snapshot into multiple repo operations #116918

Merged: elasticsearchmachine merged 10 commits into elastic:main from DaveCTurner:2024/11/17/mutable-searchable-snapshot-repository on Nov 18, 2024. The changes shown below are from the first commit (cef1b54).

Commits (10):
cef1b54 Split searchable snapshot into multiple repo operations (DaveCTurner)
a1b19cf Update docs/changelog/116918.yaml (DaveCTurner)
333511a Comment (DaveCTurner)
8a31dcf before writing (DaveCTurner)
e326a48 Merge branch 'main' into 2024/11/17/mutable-searchable-snapshot-repos… (DaveCTurner)
f12848a Skip tests in FIPS JVMs (DaveCTurner)
3a8e7eb Just assign thread names directly (DaveCTurner)
df424fa Inline lastKnownState (DaveCTurner)
a547e2c Debug logs (DaveCTurner)
4473e75 Merge branch 'main' into 2024/11/17/mutable-searchable-snapshot-repos… (DaveCTurner)
.../elasticsearch/xpack/searchablesnapshots/s3/S3SearchableSnapshotsCredentialsReloadIT.java (275 additions, 0 deletions)

@@ -0,0 +1,275 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

package org.elasticsearch.xpack.searchablesnapshots.s3;

import fixture.s3.S3HttpFixture;
import io.netty.handler.codec.http.HttpMethod;

import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.entity.ContentType;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.WarningsHandler;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.cluster.ElasticsearchCluster;
import org.elasticsearch.test.cluster.MutableSettingsProvider;
import org.elasticsearch.test.cluster.local.distribution.DistributionType;
import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.test.rest.ObjectPath;
import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentType;
import org.junit.ClassRule;
import org.junit.rules.RuleChain;
import org.junit.rules.TestRule;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.function.UnaryOperator;

import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.Matchers.allOf;

public class S3SearchableSnapshotsCredentialsReloadIT extends ESRestTestCase {

    private static final String BUCKET = "S3SearchableSnapshotsCredentialsReloadIT-bucket";
    private static final String BASE_PATH = "S3SearchableSnapshotsCredentialsReloadIT-base-path";

    public static final S3HttpFixture s3Fixture = new S3HttpFixture(true, BUCKET, BASE_PATH, "ignored");

    private static final MutableSettingsProvider keystoreSettings = new MutableSettingsProvider();

    public static ElasticsearchCluster cluster = ElasticsearchCluster.local()
        .distribution(DistributionType.DEFAULT)
        .setting("xpack.license.self_generated.type", "trial")
        .keystore(keystoreSettings)
        .setting("xpack.searchable.snapshot.shared_cache.size", "4kB")
        .setting("xpack.searchable.snapshot.shared_cache.region_size", "4kB")
        .setting("xpack.searchable_snapshots.cache_fetch_async_thread_pool.keep_alive", "0ms")
        .setting("xpack.security.enabled", "false")
        .systemProperty("es.allow_insecure_settings", "true")
        .build();

    @ClassRule
    public static TestRule ruleChain = RuleChain.outerRule(s3Fixture).around(cluster);

    @Override
    protected String getTestRestCluster() {
        return cluster.getHttpAddresses();
    }

    public void testReloadCredentialsFromKeystore() throws IOException {
        // Review comment: May want to skip it when in fips.
        final TestHarness testHarness = new TestHarness();
        testHarness.putRepository();

        // Set up initial credentials
        final String accessKey1 = randomIdentifier();
        s3Fixture.setAccessKey(accessKey1);
        keystoreSettings.put("s3.client.default.access_key", accessKey1);
        keystoreSettings.put("s3.client.default.secret_key", randomIdentifier());
        cluster.updateStoredSecureSettings();
        assertOK(client().performRequest(new Request("POST", "/_nodes/reload_secure_settings")));

        testHarness.createFrozenSearchableSnapshotIndex();

        // Verify searchable snapshot functionality
        testHarness.ensureSearchSuccess();

        // Rotate credentials in blob store
        logger.info("--> rotate credentials");
        final String accessKey2 = randomValueOtherThan(accessKey1, ESTestCase::randomIdentifier);
        s3Fixture.setAccessKey(accessKey2);

        // Ensure searchable snapshot now does not work due to invalid credentials
        logger.info("--> expect failure");
        testHarness.ensureSearchFailure();

        // Set up refreshed credentials
        logger.info("--> update keystore contents");
        keystoreSettings.put("s3.client.default.access_key", accessKey2);
        cluster.updateStoredSecureSettings();
        assertOK(client().performRequest(new Request("POST", "/_nodes/reload_secure_settings")));

        // Check access using refreshed credentials
        logger.info("--> expect success");
        testHarness.ensureSearchSuccess();
    }

    public void testReloadCredentialsFromAlternativeClient() throws IOException {
        final TestHarness testHarness = new TestHarness();
        testHarness.putRepository();

        // Set up credentials
        final String accessKey1 = randomIdentifier();
        final String accessKey2 = randomValueOtherThan(accessKey1, ESTestCase::randomIdentifier);
        final String alternativeClient = randomValueOtherThan("default", ESTestCase::randomIdentifier);

        s3Fixture.setAccessKey(accessKey1);
        keystoreSettings.put("s3.client.default.access_key", accessKey1);
        keystoreSettings.put("s3.client.default.secret_key", randomIdentifier());
        keystoreSettings.put("s3.client." + alternativeClient + ".access_key", accessKey2);
        keystoreSettings.put("s3.client." + alternativeClient + ".secret_key", randomIdentifier());
        cluster.updateStoredSecureSettings();
        assertOK(client().performRequest(new Request("POST", "/_nodes/reload_secure_settings")));

        testHarness.createFrozenSearchableSnapshotIndex();

        // Verify searchable snapshot functionality
        testHarness.ensureSearchSuccess();

        // Rotate credentials in blob store
        logger.info("--> rotate credentials");
        s3Fixture.setAccessKey(accessKey2);

        // Ensure searchable snapshot now does not work due to invalid credentials
        logger.info("--> expect failure");
        testHarness.ensureSearchFailure();

        // Adjust repository to use new client
        logger.info("--> update repository metadata");
        testHarness.putRepository(b -> b.put("client", alternativeClient));

        // Check access using refreshed credentials
        logger.info("--> expect success");
        testHarness.ensureSearchSuccess();
    }
    public void testReloadCredentialsFromMetadata() throws IOException {
        final TestHarness testHarness = new TestHarness();
        testHarness.warningsHandler = WarningsHandler.PERMISSIVE;

        // Set up credentials
        final String accessKey1 = randomIdentifier();
        final String accessKey2 = randomValueOtherThan(accessKey1, ESTestCase::randomIdentifier);

        testHarness.putRepository(b -> b.put("access_key", accessKey1).put("secret_key", randomIdentifier()));
        s3Fixture.setAccessKey(accessKey1);

        testHarness.createFrozenSearchableSnapshotIndex();

        // Verify searchable snapshot functionality
        testHarness.ensureSearchSuccess();

        // Rotate credentials in blob store
        logger.info("--> rotate credentials");
        s3Fixture.setAccessKey(accessKey2);

        // Ensure searchable snapshot now does not work due to invalid credentials
        logger.info("--> expect failure");
        testHarness.ensureSearchFailure();

        // Adjust repository to use new credentials
        logger.info("--> update repository metadata");
        testHarness.putRepository(b -> b.put("access_key", accessKey2).put("secret_key", randomIdentifier()));

        // Check access using refreshed credentials
        logger.info("--> expect success");
        testHarness.ensureSearchSuccess();
    }
    private class TestHarness {
        private final String mountedIndexName = randomIdentifier();
        private final String repositoryName = randomIdentifier();

        @Nullable // to use the default
        WarningsHandler warningsHandler;

        void putRepository() throws IOException {
            putRepository(UnaryOperator.identity());
        }

        void putRepository(UnaryOperator<Settings.Builder> settingsOperator) throws IOException {
            // Register repository
            final Request request = newXContentRequest(
                HttpMethod.PUT,
                "/_snapshot/" + repositoryName,
                (b, p) -> b.field("type", "s3")
                    .startObject("settings")
                    .value(
                        settingsOperator.apply(
                            Settings.builder().put("bucket", BUCKET).put("base_path", BASE_PATH).put("endpoint", s3Fixture.getAddress())
                        ).build()
                    )
                    .endObject()
            );
            request.addParameter("verify", "false"); // because we don't have access to the blob store yet
            request.setOptions(RequestOptions.DEFAULT.toBuilder().setWarningsHandler(warningsHandler));
            assertOK(client().performRequest(request));
        }

        void createFrozenSearchableSnapshotIndex() throws IOException {
            // Create an index, large enough that its data is not all captured in the file headers
            final String indexName = randomValueOtherThan(mountedIndexName, ESTestCase::randomIdentifier);
            createIndex(indexName, indexSettings(1, 0).build());
            try (var bodyStream = new ByteArrayOutputStream()) {
                for (int i = 0; i < 1024; i++) {
                    try (XContentBuilder bodyLineBuilder = new XContentBuilder(XContentType.JSON.xContent(), bodyStream)) {
                        bodyLineBuilder.startObject().startObject("index").endObject().endObject();
                    }
                    bodyStream.write(0x0a);
                    try (XContentBuilder bodyLineBuilder = new XContentBuilder(XContentType.JSON.xContent(), bodyStream)) {
                        bodyLineBuilder.startObject().field("foo", "bar").endObject();
                    }
                    bodyStream.write(0x0a);
                }
                bodyStream.flush();
                final Request request = new Request("PUT", indexName + "/_bulk");
                request.setEntity(new ByteArrayEntity(bodyStream.toByteArray(), ContentType.APPLICATION_JSON));
                client().performRequest(request);
            }

            // Take a snapshot and delete the original index
            final String snapshotName = randomIdentifier();
            final Request createSnapshotRequest = new Request(HttpPut.METHOD_NAME, "_snapshot/" + repositoryName + '/' + snapshotName);
            createSnapshotRequest.addParameter("wait_for_completion", "true");
            createSnapshotRequest.setOptions(RequestOptions.DEFAULT.toBuilder().setWarningsHandler(warningsHandler));
            assertOK(client().performRequest(createSnapshotRequest));

            deleteIndex(indexName);

            // Mount the snapshotted index as a searchable snapshot
            final Request mountRequest = newXContentRequest(
                HttpMethod.POST,
                "/_snapshot/" + repositoryName + "/" + snapshotName + "/_mount",
                (b, p) -> b.field("index", indexName).field("renamed_index", mountedIndexName)
            );
            mountRequest.addParameter("wait_for_completion", "true");
            mountRequest.addParameter("storage", "shared_cache");
            assertOK(client().performRequest(mountRequest));
            ensureGreen(mountedIndexName);
        }

        void ensureSearchSuccess() throws IOException {
            final Request searchRequest = new Request("GET", mountedIndexName + "/_search");
            searchRequest.addParameter("size", "10000");
            assertEquals(
                "bar",
                ObjectPath.createFromResponse(assertOK(client().performRequest(searchRequest))).evaluate("hits.hits.0._source.foo")
            );
        }

        void ensureSearchFailure() throws IOException {
            assertOK(client().performRequest(new Request("POST", "/_searchable_snapshots/cache/clear")));
            final Request searchRequest = new Request("GET", mountedIndexName + "/_search");
            searchRequest.addParameter("size", "10000");
            assertThat(
                expectThrows(ResponseException.class, () -> client().performRequest(searchRequest)).getMessage(),
                allOf(
                    containsString("Bad access key"),
                    containsString("Status Code: 403"),
                    containsString("Error Code: AccessDenied"),
                    containsString("failed to read data from cache")
                )
            );
        }
    }

}
@@ -548,6 +548,12 @@ public Map<String, RecoveryStateFactory> getRecoveryStateFactories() {
    public static final String CACHE_PREWARMING_THREAD_POOL_NAME = "searchable_snapshots_cache_prewarming";
    public static final String CACHE_PREWARMING_THREAD_POOL_SETTING = "xpack.searchable_snapshots.cache_prewarming_thread_pool";

    static {
        // these thread names must be aligned with those in :server
        assert CACHE_FETCH_ASYNC_THREAD_POOL_NAME.equals(BlobStoreRepository.SEARCHABLE_SNAPSHOTS_CACHE_FETCH_ASYNC_THREAD_NAME);
        assert CACHE_PREWARMING_THREAD_POOL_NAME.equals(BlobStoreRepository.SEARCHABLE_SNAPSHOTS_CACHE_PREWARMING_THREAD_NAME);
    }

    // Review comment: Nit: We can also actively assign them to be equal? eg:

    public static ScalingExecutorBuilder[] executorBuilders(Settings settings) {
        final int processors = EsExecutors.allocatedProcessors(settings);
        // searchable snapshots cache thread pools should always reject tasks once they are shutting down, otherwise some threads might be
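The reviewer's nit suggests assigning the plugin's thread-pool name constants directly from the server-side constants rather than duplicating the strings and asserting equality in a static block (the commit "Just assign thread names directly" appears to take this approach). A minimal sketch of the idea, using stand-in classes since the real constants live in the Elasticsearch codebase:

```java
public class ThreadPoolNames {
    // Hypothetical stand-in for org.elasticsearch.repositories.blobstore.BlobStoreRepository,
    // which in the real codebase defines the server-side thread names.
    static class BlobStoreRepositoryNames {
        static final String SEARCHABLE_SNAPSHOTS_CACHE_FETCH_ASYNC_THREAD_NAME = "searchable_snapshots_cache_fetch_async";
        static final String SEARCHABLE_SNAPSHOTS_CACHE_PREWARMING_THREAD_NAME = "searchable_snapshots_cache_prewarming";
    }

    // Assigning directly makes the alignment a compile-time fact rather than a
    // runtime assertion: the two names cannot drift apart.
    public static final String CACHE_FETCH_ASYNC_THREAD_POOL_NAME =
        BlobStoreRepositoryNames.SEARCHABLE_SNAPSHOTS_CACHE_FETCH_ASYNC_THREAD_NAME;
    public static final String CACHE_PREWARMING_THREAD_POOL_NAME =
        BlobStoreRepositoryNames.SEARCHABLE_SNAPSHOTS_CACHE_PREWARMING_THREAD_NAME;
}
```

The trade-off: direct assignment removes the duplicated literal entirely, whereas the assert-based approach keeps the plugin self-describing but only checks alignment when assertions are enabled.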
Comment: Here we copy the UUID from the previous repository instance rather than using _na_. The next time we load the RepositoryData we update the metadata if needed: see elasticsearch/server/src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java, lines 2465 to 2475 at cef1b54. We could conceivably be stricter here, see #109936, but it doesn't seem necessary today. Instead note that in RepositorySupplier we'll notice the change in UUID and look for a different repository with matching UUID before eventually throwing a RepositoryMissingException.
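The comment describes carrying the repository UUID over from the previous instance instead of resetting it to the _na_ placeholder when re-registering. A minimal sketch of that selection rule, with hypothetical names (the real logic lives in BlobStoreRepository and RepositoriesService):

```java
public class RepositoryUuidCarryOver {
    // Placeholder UUID used before the real repository UUID is known; in
    // Elasticsearch the missing-UUID marker is the string "_na_".
    static final String MISSING_UUID = "_na_";

    // When the new metadata carries no real UUID yet, keep the UUID from the
    // previous instance so consumers (e.g. RepositorySupplier) can still match
    // the repository by UUID until the RepositoryData is next loaded.
    static String chooseUuid(String previousUuid, String newUuid) {
        if (MISSING_UUID.equals(newUuid) && previousUuid != null && !MISSING_UUID.equals(previousUuid)) {
            return previousUuid;
        }
        return newUuid;
    }
}
```

This is only a sketch of the carry-over decision; the PR's actual change also relies on the next RepositoryData load to reconcile the stored metadata.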
Reply: I understand this is an optimisation for when the UUID does not change, which should be most of the cases. When the UUID does change, it is not clear to me why a cached repositoryData would not be loaded, further delaying the UUID consistency update. The change somehow feels like it does not belong here, but I may be too paranoid.
Reply: If we're updating settings that fundamentally change the underlying repository then org.elasticsearch.repositories.RepositoriesService#applyClusterState will create a brand-new Repository instance to replace the existing one (i.e. org.elasticsearch.repositories.RepositoriesService#canUpdateInPlace will return false) and this new instance will have no cached RepositoryData.

It's kind of an optimization but also kind of vital for the behaviour here. If we don't do this then we can't see that the new Repository instance is the one we should use for searchable snapshot operations in future (at least not without blocking some thread somewhere while waiting for the new UUID to be loaded).
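As a rough illustration of the canUpdateInPlace distinction described here: an update that changes fundamental repository settings forces a brand-new Repository instance (dropping any cached RepositoryData), while other updates can be applied in place. The rule below is a hypothetical stand-in with invented setting names, not the actual Elasticsearch implementation:

```java
import java.util.Map;
import java.util.Objects;

public class RepositoryUpdatePolicy {
    // Hypothetical rule: treat bucket/base_path/endpoint as "fundamental"
    // settings whose change means the repository now points somewhere else,
    // so the old instance (and its cached RepositoryData) must be discarded.
    static boolean canUpdateInPlace(Map<String, String> oldSettings, Map<String, String> newSettings) {
        for (String key : new String[] { "bucket", "base_path", "endpoint" }) {
            if (!Objects.equals(oldSettings.get(key), newSettings.get(key))) {
                return false; // fundamental change: recreate the Repository instance
            }
        }
        return true; // other settings can be swapped without losing cached state
    }
}
```

The real decision is made per repository implementation; the point is only that a recreated instance starts with no cached RepositoryData, which is what makes the UUID carry-over matter.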
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks! This is an important information that I originally missed.
I see the point now. I guess that means searchable snapshot actions do not always load repo data as the first step? If the repo UUID did change, does that mean it would take a while before searchable snapshot related code realise it? Would that lead to any issue?
Reply: Searchable snapshot actions essentially never load the RepositoryData. They already know how to find the shard data within the blob store (from the index.store.snapshot.index_uuid and index.store.snapshot.snapshot_uuid settings in the index metadata, and the shard ID). If the repo switches out from underneath them then they'll get exceptions indicating that the blobs they need are no longer found.