

org.elasticsearch.snapshots.SnapshotResiliencyTests.testConcurrentSnapshotDeleteAndDeleteIndex failure #61208

Closed
przemekwitek opened this issue Aug 17, 2020 · 1 comment · Fixed by #61228
Assignees
Labels
:Distributed Coordination/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs Team:Distributed Meta label for distributed team (obsolete) >test-failure Triaged test failures from CI

Comments

@przemekwitek (Contributor)

Build scan:
https://gradle-enterprise.elastic.co/s/vpvjriaf3z6r2

Repro line:

REPRODUCE WITH: ./gradlew ':server:test' --tests "org.elasticsearch.snapshots.SnapshotResiliencyTests.testConcurrentSnapshotDeleteAndDeleteIndex" \
  -Dtests.seed=A43FDB00C4693097 \
  -Dtests.security.manager=true \
  -Dtests.locale=pt-BR \
  -Dtests.timezone=America/Argentina/San_Luis \
  -Druntime.java=11


Reproduces locally?:
No

Applicable branches:
master

Failure history:
The last failure was over a month ago, so this is probably irrelevant.

Failure excerpt:

java.lang.AssertionError: expected:<SUCCESS> but was:<PARTIAL>
	at __randomizedtesting.SeedInfo.seed([A43FDB00C4693097:F1742EA707B4B2DA]:0)
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:834)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:144)
	at org.elasticsearch.snapshots.SnapshotResiliencyTests.testConcurrentSnapshotDeleteAndDeleteIndex(SnapshotResiliencyTests.java:779)
@przemekwitek przemekwitek added :Distributed Coordination/Snapshot/Restore Anything directly related to the `_snapshot/*` APIs >test-failure Triaged test failures from CI labels Aug 17, 2020
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (:Distributed/Snapshot/Restore)

@elasticmachine elasticmachine added the Team:Distributed Meta label for distributed team (obsolete) label Aug 17, 2020
@original-brownbear original-brownbear self-assigned this Aug 17, 2020
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this issue Aug 17, 2020
There is a corner case here in which, during a partial snapshot, the index is
deleted right between starting the snapshot in the CS and the data node getting to work
on it, causing the data node to fail that shard snapshot and making the snapshot `PARTIAL`.

Closes elastic#61208
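The race described in the commit message can be modeled in a few lines. The sketch below is purely illustrative (the class, method, and index names are hypothetical, not actual Elasticsearch code): a snapshot records its shard work in the cluster state, the index deletion lands before the data node starts, and the resulting failed shard snapshot downgrades the snapshot from SUCCESS to PARTIAL.

```java
import java.util.*;

// Hypothetical minimal model of the race: the snapshot is started in the
// cluster state (CS), but an index is deleted before the data node begins
// work, so that shard snapshot fails and the snapshot ends PARTIAL.
public class SnapshotRaceSketch {
    enum State { SUCCESS, PARTIAL }

    static State runSnapshot(Set<String> liveIndices, List<String> toSnapshot,
                             String deletedBetweenCsAndDataNode) {
        // Step 1: the CS update records which shard snapshots should run.
        List<String> queued = new ArrayList<>(toSnapshot);
        // Step 2: concurrently, an index deletion lands before the data node starts.
        liveIndices.remove(deletedBetweenCsAndDataNode);
        // Step 3: the data node works the queue; a missing index fails its shard snapshot.
        boolean anyFailed = false;
        for (String index : queued) {
            if (!liveIndices.contains(index)) {
                anyFailed = true;
            }
        }
        return anyFailed ? State.PARTIAL : State.SUCCESS;
    }

    public static void main(String[] args) {
        Set<String> live = new HashSet<>(Arrays.asList("idx-1", "idx-2"));
        // "idx-2" is deleted in the window between CS update and data-node work.
        State result = runSnapshot(live, Arrays.asList("idx-1", "idx-2"), "idx-2");
        System.out.println(result); // PARTIAL
    }
}
```

The fix referenced by #61228 addresses this window so the test no longer observes a `PARTIAL` result; the sketch only shows why the assertion at SnapshotResiliencyTests.java:779 could see `PARTIAL` instead of `SUCCESS`.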
original-brownbear added a commit that referenced this issue Aug 18, 2020 (same commit message; Closes #61208)
original-brownbear added a commit to original-brownbear/elasticsearch that referenced this issue Aug 18, 2020 (same commit message; Closes elastic#61208)
original-brownbear added a commit that referenced this issue Aug 18, 2020 (same commit message; Closes #61208)
javanna pushed a commit that referenced this issue Aug 24, 2020 (same commit message; Closes #61208)
Projects
None yet

3 participants