[CI] rolling-upgrade-multi-cluster tests failing to start node #91517
Labels:
- :Distributed Indexing/Distributed (a catch-all label for anything in the Distributed Area)
- Team:Distributed (Obsolete) (meta label for the distributed team; replaced by Distributed Indexing/Coordination)
- >test-failure (triaged test failures from CI)

I've seen three builds fail now with the same error, and this started happening on Nov 7, so it's likely we've introduced something here:

https://gradle-enterprise.elastic.co/scans/failures?failures.failureClassification=all_failures&failures.failureMessage=Execution%20failed%20for%20task%20%27:x-pack:qa:rolling-upgrade-multi-cluster:v8.6.0%23follower%23oneThirdUpgradedTest%27.%0A%3E%20%60cluster%7B:x-pack:qa:rolling-upgrade-multi-cluster:v8.6.0-follower%7D%60%20failed%20to%20wait%20for%20cluster%20health%20yellow%20after%2040%20SECONDS%0A%20%20%20%20IO%20error%20while%20waiting%20cluster%0A%20%20%20%20%20%20503%20Service%20Unavailable&search.relativeStartTime=P28D&search.timeZoneId=America/Los_Angeles#

It seems the follower cluster is having issues coming back up after the upgrade.
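To reproduce locally, the failing Gradle task can be invoked directly; a minimal sketch, assuming a standard checkout and that the task name in the scan URL maps directly to a Gradle task path (the curl target port is also an assumption, since the test fixture assigns its own ports):

```sh
# Run the failing follower-cluster upgrade test (task name taken from the
# Gradle Enterprise scan URL above).
./gradlew ":x-pack:qa:rolling-upgrade-multi-cluster:v8.6.0#follower#oneThirdUpgradedTest" --info

# The 40-second timeout in the failure message corresponds to a cluster-health
# wait like this one; a 503 response means the follower cluster never came up.
# Port 9200 is illustrative only.
curl -s "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=40s"
```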
Comments
mark-vieira added the :Distributed Indexing/Distributed and >test-failure labels on Nov 10, 2022
Pinging @elastic/es-distributed (Team:Distributed)

elasticsearchmachine added the Team:Distributed (Obsolete) label on Nov 10, 2022
I was looking into the logs and found the following:

[…]

and

[…]

I suspect the above error derailed the …
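For anyone digging further, the per-node server logs from the failed fixture are usually left on disk after the run; a sketch, assuming the testclusters Gradle plugin's usual output layout (the exact directory is an assumption):

```sh
# Locate the follower cluster's server logs from the failed run; the path
# layout under build/testclusters is an assumption, not confirmed from this issue.
find x-pack/qa/rolling-upgrade-multi-cluster/build/testclusters -name "*.log"
```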