[CI] S3BlobContainerRetriesTests testReadRetriesAfterMeaningfulProgress failing #115583

Closed
elasticsearchmachine opened this issue Oct 24, 2024 · 5 comments · Fixed by #115613
Labels
  • :Distributed Coordination/Snapshot/Restore (Anything directly related to the `_snapshot/*` APIs)
  • low-risk (An open issue or test failure that is a low risk to future releases)
  • Team:Distributed Coordination (Meta label for Distributed Coordination team)
  • >test-failure (Triaged test failures from CI)

Comments


elasticsearchmachine commented Oct 24, 2024

Build Scans:

Reproduction Line:

./gradlew ':modules:repository-s3:test' --tests "org.elasticsearch.repositories.s3.S3BlobContainerRetriesTests.testReadRetriesAfterMeaningfulProgress" -Dtests.seed=6060EB63436251FB -Dtests.locale=ug-Arab-CN -Dtests.timezone=Africa/Freetown -Druntime.java=23

Applicable branches:
8.15

Reproduces locally?:
N/A

Failure History:
See dashboard

Failure Message:

com.amazonaws.SdkClientException: Unable to execute HTTP request: The target server failed to respond
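
Going by the test name, testReadRetriesAfterMeaningfulProgress appears to check that a read which made meaningful progress before the connection dropped is resumed from the last good offset rather than failing outright with the SdkClientException above. The sketch below illustrates that general resume-after-partial-read idea only; it is not the Elasticsearch test, it does not use the AWS SDK, and every name in it is hypothetical.

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Arrays;
    import java.util.Random;

    public class ResumableReadSketch {

        // Fake "server": serves the blob from the requested offset, but while dropMidStream
        // is true it drops the connection after failAfter bytes, mimicking
        // "Unable to execute HTTP request: The target server failed to respond".
        static InputStream openAt(byte[] blob, int offset, boolean dropMidStream, int failAfter) {
            return new ByteArrayInputStream(Arrays.copyOfRange(blob, offset, blob.length)) {
                int served = 0;

                @Override
                public int read(byte[] b, int off, int len) {
                    if (dropMidStream && served >= failAfter) {
                        throw new RuntimeException("connection dropped mid-stream");
                    }
                    int n = super.read(b, off, len);
                    if (n > 0) {
                        served += n;
                    }
                    return n;
                }
            };
        }

        public static void main(String[] args) throws IOException {
            byte[] blob = new byte[1024];
            new Random(42).nextBytes(blob);

            byte[] out = new byte[blob.length];
            int offset = 0;
            int attempts = 0;
            boolean firstAttemptDrops = true;

            while (offset < blob.length) {
                attempts++;
                try (InputStream in = openAt(blob, offset, firstAttemptDrops, 512)) {
                    byte[] buf = new byte[128];
                    int n;
                    while ((n = in.read(buf, 0, buf.length)) != -1) {
                        System.arraycopy(buf, 0, out, offset, n);
                        offset += n; // progress made so far survives the failure ...
                    }
                } catch (RuntimeException e) {
                    // ... so the next attempt resumes from `offset` instead of starting over.
                    firstAttemptDrops = false;
                }
            }
            System.out.println("read " + offset + " bytes in " + attempts + " attempts");
        }
    }

In the real test the source is presumably a mock S3 HTTP endpoint driven through the repository-s3 blob container rather than an in-memory stream, but the observable property is similar: a dropped response after partial progress should lead to a resumed read instead of an immediate failure.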

Issue Reasons:

  • [8.15] 3 consecutive failures in step openjdk23_checkpart1_java-matrix
  • [8.15] 9 failures in test testReadRetriesAfterMeaningfulProgress (1.9% fail rate in 478 executions)
  • [8.15] 9 failures in step openjdk23_checkpart1_java-matrix (52.9% fail rate in 17 executions)
  • [8.15] 9 failures in pipeline elasticsearch-periodic (52.9% fail rate in 17 executions)

Note:
This issue was created using new test triage automation. Please report issues or feedback to es-delivery.

elasticsearchmachine added the :Distributed Coordination/Snapshot/Restore and >test-failure labels on Oct 24, 2024
elasticsearchmachine added a commit that referenced this issue Oct 24, 2024
@elasticsearchmachine

This has been muted on branch 8.16

Mute Reasons:

  • [8.16] 2 failures in test testReadRetriesAfterMeaningfulProgress (1.3% fail rate in 149 executions)

Build Scans:
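
For context, muting means the test is skipped on that branch until a fix lands; the commit referenced above records the mute. As an illustration only (the actual mute for this issue was made by the triage automation, not by hand), the long-standing manual way to mute a test in the Elasticsearch test suite is an AwaitsFix annotation pointing at the tracking issue. A minimal, hypothetical example:

    import org.apache.lucene.tests.util.LuceneTestCase;
    import org.elasticsearch.test.ESTestCase;

    // Hypothetical test class, shown only to illustrate the annotation-based mute mechanism.
    public class MyFlakyTests extends ESTestCase {

        // The test framework skips this method until the linked issue is resolved.
        @LuceneTestCase.AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/115583")
        public void testSomethingFlaky() {
            // test body unchanged while muted
        }
    }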

elasticsearchmachine added the Team:Distributed (Obsolete) and needs:risk labels on Oct 24, 2024
@elasticsearchmachine

Pinging @elastic/es-distributed (Team:Distributed)

ywangd added the low-risk label and removed the needs:risk label on Oct 25, 2024

ywangd commented Oct 25, 2024

This just needs a backport of #115177.

ywangd linked a pull request on Oct 25, 2024 that will close this issue
repantis added the Team:Distributed Coordination label and removed the Team:Distributed (Obsolete) label on Nov 5, 2024
@elasticsearchmachine

Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)


ywangd commented Nov 8, 2024

Closing this since the fix has been backported to 8.16 in #115613.
