
Add CoolDown Period to S3 Repository #51074

Merged: 10 commits merged into elastic:master from original-brownbear:s3-cooldown on Jan 20, 2020

Conversation

original-brownbear (Member)

WIP: still missing tests, but I would just like to confirm we agree on the approach taken here first.
The 7.6.0 label is intentional here; this one is important for 7.6.
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)

return new ActionListener<>() {
    @Override
    public void onResponse(T response) {
        final Scheduler.Cancellable existing = finalizationFuture.getAndSet(
Member Author

@ywelsch I went with doing it this way instead of just keeping track of a timestamp and then failing a new snapshot if it's started too close to the last timestamp. I'm afraid having random failures from concurrent snapshot exceptions, when no running snapshot is visible to the APIs, could mess with Cloud orchestration (not necessarily breaking it, but causing an unreasonable number of _status requests).
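
(For context, the shape of the change under discussion, expanding on the diff excerpt above: a paraphrased sketch, not the exact final code. coolDown, finalizationFuture, wrappedListener and ThreadPool.Names.SNAPSHOT follow the names visible in the diff; the helper name delayedListener is assumed.)

private <T> ActionListener<T> delayedListener(ActionListener<T> listener) {
    // clear the stored Cancellable once the listener completes, so that doClose()
    // can tell whether a cooldown wait was still pending
    final ActionListener<T> wrappedListener = ActionListener.runBefore(listener,
        () -> finalizationFuture.getAndSet(null));
    return new ActionListener<>() {
        @Override
        public void onResponse(T response) {
            // hold the response back for the full cooldown period, keeping the
            // snapshot or delete visibly "in progress" instead of failing any
            // operation that starts too soon after the previous one
            final Scheduler.Cancellable existing = finalizationFuture.getAndSet(
                threadPool.schedule(() -> wrappedListener.onResponse(response), coolDown, ThreadPool.Names.SNAPSHOT));
            assert existing == null : "only one operation may be in flight during the cooldown";
        }

        @Override
        public void onFailure(Exception e) {
            // failures are delayed the same way, so the repository stays busy either way
            final Scheduler.Cancellable existing = finalizationFuture.getAndSet(
                threadPool.schedule(() -> wrappedListener.onFailure(e), coolDown, ThreadPool.Names.SNAPSHOT));
            assert existing == null;
        }
    };
}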

Contributor

I had to think a bit about this, and consulted @DaveCTurner as well. We both agree that this is the right path forward (simpler to explain to users and simpler for existing orchestration tools).
In short, this artificially extends the duration of the snapshot, i.e., taking or deleting a snapshot takes 3 minutes longer. Can we add a log message that details why we are doing this (and that we are in a repo with legacy snapshots)? Let's also document this somewhere (with the setting). This gives users the choice e.g. to move to a different repo.

Member Author

> Let's also document this somewhere (with the setting)

Should we really document this? It seems to me that on AWS S3, not having the cooldown is a risk in 100% of cases. If we document it, the very users this functionality is intended to protect might opt to turn it off to "speed things up"?
Maybe we should just document the waiting, but not the setting?

Contributor

I think we must document how users can safely speed it up (i.e. by moving to a new repo or deleting all their legacy snapshots). I'm ok with not documenting the setting itself - we already have form for leaving dangerous settings undocumented (see MergePolicyConfig for instance). Let's add this reasoning to its Javadoc along with explicit instructions not to adjust it and instead to move to a new repo or delete all the legacy snapshots, to deal with the inevitable user who comes across it in the source code.

Can we also mention {@link Version#V_7_6_0} in the Javadoc so we get a reminder to remove this in v9?
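
(For reference, the undocumented-but-javadoc'd setting being discussed would end up looking roughly like this; the Javadoc wording is paraphrased and the exact final text may differ, but the 3-minute default and the {@link Version#V_7_6_0} reference follow the thread above.)

/**
 * Artificial delay to introduce after snapshot finalization or delete for as long as the repository is still using
 * the backwards-compatible snapshot format from before {@link org.elasticsearch.Version#V_7_6_0}. The delay is
 * necessary so that the eventually consistent nature of AWS S3 does not randomly result in repository corruption
 * when doing repository operations in rapid succession on a repository in the old metadata format.
 * This setting should not be adjusted in production. Instead, remaining in the old metadata format should be
 * avoided by moving to a new repository or deleting all snapshots created before 7.6.0 from the repository.
 */
static final Setting<TimeValue> COOLDOWN_PERIOD = Setting.timeSetting(
    "cooldown_period",
    new TimeValue(3, TimeUnit.MINUTES),
    new TimeValue(0, TimeUnit.MILLISECONDS),
    Setting.Property.Dynamic);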

Member Author

Alright, added docs to the setting, a link to 7.6, an explanatory log message and a test in f9047d7 :)

@original-brownbear (Member Author) commented Jan 16, 2020

@ywelsch could you take a look here and see if this approach is agreeable to you? I'm on PTO today and tomorrow, but it's no problem to find the time to code up a BwC test for this to verify that it works fine (it worked fine in manual testing, though).

@DaveCTurner (Contributor)

Should we raise this as a blocker for 7.6.0?

@original-brownbear (Member Author)

@DaveCTurner yes, I think we have to. I'll deal with this tomorrow at the very latest.

@@ -92,6 +111,41 @@ protected Settings nodeSettings(int nodeOrdinal) {
.build();
}

public void testEnforcedCooldownPeriod() throws IOException {
Member Author

This is admittedly quite a hacky test, and it takes 2x 5s of hard waits to verify the behaviour.

We could create a cleaner test by adding some BwC test infrastructure to the S3 plugin tests, but I'm not sure it's worth the complexity. Also, running a real REST test to verify the timing here makes the test even more prone to run into random CI slowness and fail in the last step, which verifies that no waiting happens once we move to a repo without any old-version snapshots => this seemed like the least bad option to me.
We are using an O(5s) hard timeout in some other repo IO-timeout tests, and so far that hasn't failed us due to CI running into a longer pause, so I'm hopeful this will be stable.

Contributor

Can we have a SnapshotResiliencyTests setup with a mock repo that is eventually consistent (actions only become visible after X seconds) and then becomes consistent, and use that to verify all is going well? That would be a stronger test than this one, which just verifies that some sleep is in place somewhere.

Member Author

Not trivial but doable => On it :)

Member Author

Argh, never mind ... then we'd have to move the cooldown logic to BlobStoreRepository. We can't use the mock repository together with the S3 plugin, and we don't really have any mock infrastructure left for S3, so either we test this on BlobStoreRepository or it requires some infrastructure that combines the mock S3 REST API infra with the snapshot resiliency test infrastructure.

Think this is worth it, given that this is a stop-gap solution?

Contributor

bummer :/

@ywelsch (Contributor) left a comment

I've left some more comments and ideas for tests.

public void onResponse(T response) {
    logCooldownInfo();
    final Scheduler.Cancellable existing = finalizationFuture.getAndSet(
        threadPool.schedule(() -> wrappedListener.onResponse(response), coolDown, ThreadPool.Names.SNAPSHOT));
Contributor

What if we are rejected from the snapshot threadpool? Let's force it onto the threadpool (use AbstractRunnable), and notify the listener as well in AbstractRunnable.onFailure.

Member Author

> What if we are rejected from the snapshot threadpool?

I think that's impossible unless the snapshot pool is shutting down (in which case it's kind of irrelevant what we do anyway, I guess). No other action will go onto the snapshot pool until this listener is resolved: the snapshot or delete in progress in the CS will prevent anything else from running, plus we specifically made it so that no step in the snapshot operations runs on the SNAPSHOT pool before we have checked the CS, to avoid any deadlocks here.

Contributor

I wouldn't bet my life on that (think e.g. about master failover). Let's make this safe.
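
(Roughly what this suggestion amounts to: a sketch of the scheduled task rewritten as an AbstractRunnable. Its default onRejection implementation delegates to onFailure, so a task rejected by a shutting-down SNAPSHOT pool still completes the listener rather than dropping it.)

final Scheduler.Cancellable existing = finalizationFuture.getAndSet(
    threadPool.schedule(new AbstractRunnable() {
        @Override
        protected void doRun() {
            // normal path: deliver the delayed response once the cooldown has elapsed
            wrappedListener.onResponse(response);
        }

        @Override
        public void onFailure(Exception e) {
            // also reached on rejection from the threadpool, since
            // AbstractRunnable#onRejection delegates here by default
            wrappedListener.onFailure(e);
        }
    }, coolDown, ThreadPool.Names.SNAPSHOT));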

protected void doClose() {
    final Scheduler.Cancellable cancellable = finalizationFuture.getAndSet(null);
    if (cancellable != null) {
        logger.warn("Repository closed during cooldown period");
Contributor

Do we need to log this at warn level?

@original-brownbear (Member Author) commented Jan 17, 2020

I figured this should be somewhat visible; it's not great if this happens, because the next master will not start waiting again. Waiting on the new master may be something we want to add, but I figured it might not be worth the extra complication: a master failover won't be instant (the current master must have worked fine to set the safe and pending generations equal before getting to the wait), so that "wait" might be good enough?

... retracted, see below

Member Author

Actually, the situation here is better than I described above ... if we run into this, we always re-run the last step of the delete or snapshot operation on the next master (and it will fail there, because the repository generation has already moved), which will trigger another wait period. So this isn't a bad spot at all :) => moving this to DEBUG.

// measure how long deleting the fake old-format snapshot takes ...
final long beforeFastDelete = repository.threadPool().relativeTimeInNanos();
client().admin().cluster().prepareDeleteSnapshot(repoName, fakeOldSnapshot.getName()).get();
// ... and assert that the delete returned without waiting out the cooldown period
assertThat(repository.threadPool().relativeTimeInNanos() - beforeFastDelete, lessThan(TEST_COOLDOWN_PERIOD.getNanos()));
Contributor

I wonder if there are situations where ThreadPool.schedule will return before the time value specified. This would then fail here.

Member Author

> I wonder if there are situations where ThreadPool.schedule will return before the time value specified.

I figured that's impossible since I turned off the timestamp cache? I think otherwise the underlying primitives in ThreadPoolExecutor are accurate (at least on Linux).

> This would then fail here.

And rightfully so?

=> that said :) ... let me see about the suggested test via the resiliency tests

@original-brownbear (Member Author)

@ywelsch better tests turned out to be really complex to set up; let me know if you still want them :)

@original-brownbear (Member Author)

All points addressed in 9f1c00d I think @ywelsch

@ywelsch (Contributor) left a comment

LGTM

@original-brownbear (Member Author)

Thanks Yannick + David!

@original-brownbear merged commit f429359 into elastic:master on Jan 20, 2020
@original-brownbear deleted the s3-cooldown branch on January 20, 2020 at 09:32
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Jan 20, 2020
Add cool down period after snapshot finalization and delete to prevent eventually consistent AWS S3 from corrupting shard level metadata as long as the repository is using the old format metadata on the shard level.
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this pull request Jan 20, 2020
Add cool down period after snapshot finalization and delete to prevent eventually consistent AWS S3 from corrupting shard level metadata as long as the repository is using the old format metadata on the shard level.
original-brownbear added a commit that referenced this pull request Jan 20, 2020
Add cool down period after snapshot finalization and delete to prevent eventually consistent AWS S3 from corrupting shard level metadata as long as the repository is using the old format metadata on the shard level.
original-brownbear added a commit that referenced this pull request Jan 20, 2020
Add cool down period after snapshot finalization and delete to prevent eventually consistent AWS S3 from corrupting shard level metadata as long as the repository is using the old format metadata on the shard level.
SivagurunathanV pushed a commit to SivagurunathanV/elasticsearch that referenced this pull request Jan 23, 2020
Add cool down period after snapshot finalization and delete to prevent eventually consistent AWS S3 from corrupting shard level metadata as long as the repository is using the old format metadata on the shard level.
@mfussenegger mentioned this pull request Mar 24, 2020