
[TEST] org.elasticsearch.upgrades.RecoveryIT fails intermittently on 6.x #26769

Closed · colings86 opened this issue Sep 25, 2017 · 2 comments
Labels: >test-failure (Triaged test failures from CI)

colings86 (Contributor) commented:

Build URL: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+periodic/112/console

Reproduce command (does not seem to reproduce locally):

gradle :qa:rolling-upgrade:v5.6.2-SNAPSHOT#mixedClusterTestRunner -Dtests.seed=665141B0D990DC23 -Dtests.class=org.elasticsearch.upgrades.RecoveryIT -Dtests.method="testHistoryUUIDIsGenerated" -Dtests.security.manager=true -Dtests.locale=ar-QA -Dtests.timezone=Europe/Andorra

Failure stack trace:

06:46:49 ERROR   30.0s | RecoveryIT.testHistoryUUIDIsGenerated <<< FAILURES!
06:46:49    > Throwable #1: org.elasticsearch.client.ResponseException: method [GET], host [http://[::1]:38533], URI [_cluster/health?wait_for_no_relocating_shards=true&wait_for_status=green], status line [HTTP/1.1 408 Request Timeout]
06:46:49    > {"cluster_name":"rolling-upgrade","status":"yellow","timed_out":true,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":16,"active_shards":21,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":95.45454545454545}
06:46:49    > 	at __randomizedtesting.SeedInfo.seed([665141B0D990DC23:4EF37EC7341D4925]:0)
06:46:49    > 	at org.elasticsearch.client.RestClient$1.completed(RestClient.java:355)
06:46:49    > 	at org.elasticsearch.client.RestClient$1.completed(RestClient.java:344)
06:46:50    > 	at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
06:46:50    > 	at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
06:46:50    > 	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
06:46:50    > 	at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
06:46:50    > 	at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
06:46:50    > 	at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
06:46:50    > 	at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
06:46:50    > 	at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
06:46:50    > 	at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
06:46:50    > 	at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
06:46:50    > 	at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
06:46:50    > 	at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
06:46:50    > 	at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
06:46:50    > 	at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
06:46:50    > 	at java.lang.Thread.run(Thread.java:748)

It's possible, given that this doesn't seem to reproduce locally, that this was a transient compatibility failure between 6.x and 5.6.2-SNAPSHOT, but I'm opening this issue so that can be determined.
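For reference, the failure comes from the cluster-health wait that the rolling-upgrade REST tests issue through the low-level Java REST client: the cluster stayed yellow with one unassigned shard (active_shards_percent ~95%), so the wait_for_status=green request came back as 408. Below is a minimal sketch of an equivalent standalone call, assuming the 6.x-era performRequest(method, endpoint, params) API and a hypothetical localhost:9200 endpoint (the CI run used an ephemeral port on ::1); it is an illustration, not the test code itself.

import java.util.HashMap;
import java.util.Map;

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

public class ClusterHealthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; the CI failure above hit an ephemeral port on [::1].
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Map<String, String> params = new HashMap<>();
            params.put("wait_for_status", "green");
            params.put("wait_for_no_relocating_shards", "true");
            try {
                // Blocks until the cluster is green (or the server-side wait times out).
                Response response = client.performRequest("GET", "/_cluster/health", params);
                System.out.println(EntityUtils.toString(response.getEntity()));
            } catch (ResponseException e) {
                // A 408 here means the health wait timed out before the cluster reached
                // green, e.g. while a shard is still unassigned, as in the failure above.
                System.out.println(e.getResponse().getStatusLine());
                System.out.println(EntityUtils.toString(e.getResponse().getEntity()));
            }
        }
    }
}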

colings86 added the >test-failure (Triaged test failures from CI) label on Sep 25, 2017
spinscale (Contributor) commented:

The same test also failed on the master branch today, with a rolling upgrade on 6.1.0:

https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+bwc-tests/399/console

dnhatn self-assigned this on Nov 28, 2017
dnhatn (Member) commented on Dec 22, 2017:

I think this is fixed by #27580. /cc @bleskes

dnhatn closed this as completed on Dec 22, 2017.