Increase ensureGreen timeout for testReplicaCorruption #47136

Merged 1 commit into elastic:master from fix-corrupted-replica on Sep 25, 2019

Conversation

@dnhatn (Member) commented Sep 25, 2019

We can have a large number of shard copies in this test. For example, the two recent failures had 24 and 27 copies respectively, and all replicas have to copy segment files because their stores are corrupted. Our CI needs more than 30 seconds to start all these copies.

Note that in both recent failures, the cluster became green just after the cluster health call timed out.

Closes #41899
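
For context, a minimal sketch of the kind of change described here, assuming the test extends ESIntegTestCase and uses its ensureGreen(TimeValue, String...) overload. The class name and the one-minute timeout are illustrative assumptions, not taken from the actual commit:

```java
// Illustrative sketch only, not the actual diff from this PR.
// Assumes an ESIntegTestCase-based integration test such as CorruptedFileIT.
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.test.ESIntegTestCase;

public class ReplicaCorruptionTimeoutSketch extends ESIntegTestCase {

    public void testReplicaCorruption() throws Exception {
        // ... create the index, corrupt the replica stores, restart nodes ...

        // The default ensureGreen() waits 30 seconds. With 24-27 shard copies that
        // all have to re-copy segment files, CI can need longer than that, so pass
        // an explicit, larger timeout (the one-minute value here is an assumption).
        ensureGreen(TimeValue.timeValueMinutes(1));
    }
}
```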

@dnhatn added the >test-failure, :Distributed Indexing/Distributed, v8.0.0, v7.5.0, v6.8.4, v7.4.1, and v7.3.3 labels on Sep 25, 2019
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed

@ywelsch (Contributor) left a comment

LGTM

@dnhatn (Member, Author) commented Sep 25, 2019

Thanks @ywelsch.

@dnhatn dnhatn merged commit 7ceff60 into elastic:master Sep 25, 2019
@dnhatn dnhatn deleted the fix-corrupted-replica branch September 25, 2019 20:52
dnhatn added a commit that referenced this pull request Sep 25, 2019
dnhatn added a commit that referenced this pull request Sep 25, 2019
dnhatn added a commit that referenced this pull request Sep 25, 2019
dnhatn added a commit that referenced this pull request Sep 26, 2019
@colings86 added the v7.4.0 label and removed the v7.4.1 label on Sep 27, 2019
Labels
:Distributed Indexing/Distributed, >test-failure, v6.8.4, v7.3.3, v7.4.0, v7.5.0, v8.0.0-alpha1
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[CI] CorruptedFileIT.testReplicaCorruption failure
5 participants