ClusterDisruptionIT#testSendingShardFailure fails on CI #32431

Closed
javanna opened this issue Jul 27, 2018 · 3 comments
Labels
:Distributed Coordination/Allocation All issues relating to the decision making around placing a shard (both master logic & on the nodes) >test-failure Triaged test failures from CI

Comments

@javanna
Member

javanna commented Jul 27, 2018

This does not reproduce for me. I can see that the whole suite has had trace logging enabled for more than a year, which is a signal that it was failing in the past, but I didn't find this specific failure reported in our repo.

https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+multijob-unix-compatibility/os=sles/1206

REPRODUCE WITH: ./gradlew :server:integTest -Dtests.seed=E77D94CFECE59A17 -Dtests.class=org.elasticsearch.discovery.ClusterDisruptionIT -Dtests.method="testSendingShardFailure" -Dtests.security.manager=true -Dtests.locale=sq -Dtests.timezone=PST8PDT

Even with all the trace logging I have not identified the exact cause of the failure. I did notice quite a few "Connection reset by peer" errors in the logs, but they may actually be expected. Somebody more familiar with this test will likely be able to make more sense of it.
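As a side note, suite-level trace logging in these integration tests is usually switched on with the test framework's `@TestLogging` annotation on the class; the snippet below only illustrates that mechanism, the logger names and class are examples and not taken from this suite.

```java
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.junit.annotations.TestLogging;

// Illustration only: how suite-wide trace logging is typically enabled for an
// integration test class. The logger names and the class itself are examples,
// not the actual annotation on ClusterDisruptionIT.
@TestLogging("org.elasticsearch.discovery:TRACE,org.elasticsearch.cluster.service:TRACE")
public class SomeDisruptionIT extends ESIntegTestCase {
    // test methods ...
}
```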

@elasticmachine
Collaborator

Pinging @elastic/es-distributed

@javanna javanna added >test-failure Triaged test failures from CI :Distributed Coordination/Allocation All issues relating to the decision making around placing a shard (both master logic & on the nodes) labels Jul 27, 2018
@javanna
Member Author

javanna commented Jul 27, 2018

@ywelsch kindly pointed out to me that the failure is

08:42:18    > Throwable #1: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=3973, name=elasticsearch[node_t2][write][T#1], state=RUNNABLE, group=TGRP-ClusterDisruptionIT]
08:42:18    > 	at __randomizedtesting.SeedInfo.seed([E77D94CFECE59A17:4127A25150EDF080]:0)
08:42:18    > Caused by: java.lang.AssertionError: shard term already update.  op term [2], shardTerm [3]
08:42:18    > 	at __randomizedtesting.SeedInfo.seed([E77D94CFECE59A17]:0)
08:42:18    > 	at org.elasticsearch.index.shard.IndexShard.lambda$acquireReplicaOperationPermit$9(IndexShard.java:2269)
08:42:18    > 	at org.elasticsearch.index.shard.IndexShardOperationPermits.doBlockOperations(IndexShardOperationPermits.java:177)
08:42:18    > 	at org.elasticsearch.index.shard.IndexShardOperationPermits.blockOperations(IndexShardOperationPermits.java:114)
08:42:18    > 	at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationPermit(IndexShard.java:2268)
08:42:18    > 	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:633)
08:42:18    > 	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
08:42:18    > 	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:510)
08:42:18    > 	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:490)
08:42:18    > 	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66)
08:42:18    > 	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1605)
08:42:18    > 	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723)
08:42:18    > 	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
08:42:18    > 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
08:42:18    > 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
08:42:18    > 	at java.lang.Thread.run(Thread.java:748)

This is the same failure as #32304 and #32118 although it is triggered in a different test.
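For context, the tripped assertion guards the replica permit path: an incoming replica operation carries the primary term it was sent under, and the shard only bumps its own term later, under an operation block. A simplified sketch of that invariant follows (illustrative names, not the actual IndexShard code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified sketch of the invariant behind the tripped assertion; this is not
// the actual IndexShard implementation and the names are illustrative.
final class TermBumpSketch {
    private volatile long shardTerm = 1;   // term the shard currently operates under
    private final ExecutorService blockExecutor = Executors.newSingleThreadExecutor();

    void acquireReplicaOperationPermit(long opTerm) {
        if (opTerm > shardTerm) {
            // The actual term bump is deferred until an "operation block" runs.
            blockExecutor.execute(() -> {
                // Race: a concurrent promotion/reset may already have advanced
                // shardTerm past opTerm by the time this block executes, which is
                // what the CI failure reports (op term [2], shardTerm [3]).
                assert opTerm > shardTerm
                    : "op term [" + opTerm + "], shardTerm [" + shardTerm + "]";
                shardTerm = opTerm;
            });
        }
    }
}
```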

ywelsch added a commit that referenced this issue Aug 3, 2018
We've recently seen a number of test failures that tripped an assertion in IndexShard (see issues
linked below), leading to the discovery of a race between resetting a replica when it learns about a
higher term and when the same replica is promoted to primary. This commit fixes the race by
distinguishing between a cluster state primary term (called pendingPrimaryTerm) and a shard-level
operation term. The former is set during the cluster state update or when a replica learns about a
new primary. The latter is only incremented under the operation block, which can happen in a
delayed fashion. It also solves the issue where a replica that's still adjusting to the new term
receives a cluster state update that promotes it to primary, which can happen in the situation of
multiple nodes being shut down in short succession. In that case, the cluster state update thread
would call `asyncBlockOperations` in `updateShardState`, which in turn would throw an exception
as blocking permits is not allowed while an ongoing block is in place, subsequently failing the shard.
This commit therefore extends the IndexShardOperationPermits to allow it to queue multiple blocks
(which will all take precedence over operations acquiring permits). Finally, it also moves the primary
activation of the replication tracker under the operation block, so that the actual transition to
primary only happens under the operation block.

Relates to #32431, #32304 and #32118
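A rough sketch of the approach the commit message describes, with simplified names (the two terms loosely mirror pendingPrimaryTerm and the shard-level operation term, and the block queue is reduced to a plain single-threaded executor); this is an assumption-laden illustration, not the actual Elasticsearch implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough sketch only: the cluster-state term (pendingPrimaryTerm) is advanced
// eagerly, while the term operations run under (operationPrimaryTerm) is only
// advanced inside a queued block. Queuing blocks instead of rejecting them
// means a promotion that arrives while a term bump is still pending no longer
// fails the shard.
final class ShardTermsSketch {
    private volatile long pendingPrimaryTerm;    // set on cluster-state update / new primary
    private volatile long operationPrimaryTerm;  // only advanced under an operation block
    private final ExecutorService blockQueue = Executors.newSingleThreadExecutor(); // blocks run in order

    void onNewTermFromClusterState(long newTerm) {
        if (newTerm > pendingPrimaryTerm) {
            pendingPrimaryTerm = newTerm;        // eager, on the cluster-state thread
            blockQueue.execute(() -> {
                // delayed, under the block; operations that acquired permits before
                // this ran still saw the old operationPrimaryTerm
                operationPrimaryTerm = Math.max(operationPrimaryTerm, newTerm);
            });
        }
    }

    void onPromotionToPrimary(long newTerm) {
        // A second block is simply queued behind any pending one instead of
        // throwing because a block is already in place.
        blockQueue.execute(() -> operationPrimaryTerm = Math.max(operationPrimaryTerm, newTerm));
    }
}
```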
ywelsch added a commit that referenced this issue Aug 3, 2018 (same commit message as above)
ywelsch added a commit that referenced this issue Aug 3, 2018 (same commit message as above)
@ywelsch
Contributor

ywelsch commented Aug 3, 2018

Closed by #32442. If this still occurs, please reopen.
