
roachtest: restore/tpce/8TB/aws/nodes=10/cpus=8 failed #110764

Closed
cockroach-teamcity opened this issue Sep 16, 2023 · 7 comments
Assignees: pav-kv
Labels: A-kv-replication, branch-release-23.1, C-bug, C-test-failure, O-roachtest, O-robot, T-disaster-recovery
Milestone: 23.1

cockroach-teamcity (Member) commented Sep 16, 2023

roachtest.restore/tpce/8TB/aws/nodes=10/cpus=8 failed with artifacts on release-23.1 @ 12a0fdf76785787a3a7e83198f1adfd7184ea910:

(monitor.go:153).Wait: monitor failure: unexpected node event: n3: cockroach process died (exit code 137)
test artifacts and logs in: /artifacts/restore/tpce/8TB/aws/nodes=10/cpus=8/run_1
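
Exit code 137 is 128 + 9, i.e. the process was killed by SIGKILL; on Linux that typically points at the kernel OOM killer, which matches the analysis further down this thread. A minimal Go sketch of decoding such an exit status (illustrative only, not roachtest's actual monitor code):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Exit codes above 128 conventionally encode "killed by signal
	// (code - 128)". 137 - 128 = 9, i.e. SIGKILL.
	const exitCode = 137
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("process killed by signal %d (%s)\n", int(sig), sig)
	}
}
```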

Parameters: ROACHTEST_arch=amd64, ROACHTEST_cloud=aws, ROACHTEST_cpu=8, ROACHTEST_encrypted=false, ROACHTEST_fs=ext4, ROACHTEST_localSSD=false, ROACHTEST_ssd=0

Help

See: roachtest README

See: How To Investigate (internal)

/cc @cockroachdb/disaster-recovery

This test on roachdash | Improve this report!

Jira issue: CRDB-31597

cockroach-teamcity added labels branch-release-23.1, C-test-failure, O-roachtest, O-robot, release-blocker, T-disaster-recovery on Sep 16, 2023
cockroach-teamcity added this to the 23.1 milestone on Sep 16, 2023
rhu713 removed the release-blocker label on Sep 18, 2023
rhu713 (Contributor) commented Sep 18, 2023

[Screenshot attached: 2023-09-18, 6:18 PM]
Looks like an OOM from MaybeInlineSideloadedRaftCommand, like #73376.
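
Since the diagnosis hinges on how sideloaded payloads get inlined, here is a simplified, hypothetical Go sketch of that failure mode. The types, names, and sizes below are illustrative, not CockroachDB's actual logstore API: the point is that a batch of raft entries bounded only by entry count can materialize an unbounded number of bytes once each sideloaded payload (e.g. an AddSSTable) is read into memory.

```go
package main

import "fmt"

// entry mimics a raft log entry whose payload may live in a sideloaded
// file on disk rather than in the entry itself.
type entry struct {
	sideloaded bool
	fileID     int
	payload    []byte // empty until inlined
}

// readSideloadedFile stands in for reading a payload from disk.
// Kept at 1 MiB here; real sideloaded SSTables can be tens of MiB each.
func readSideloadedFile(id int) []byte {
	return make([]byte, 1<<20)
}

// maybeInlineSideloaded inlines every sideloaded payload in the batch.
// The hazard: the batch is bounded by entry *count*, not total *bytes*,
// so a long run of large payloads is all held in memory at once.
func maybeInlineSideloaded(ents []entry) {
	for i := range ents {
		if ents[i].sideloaded && len(ents[i].payload) == 0 {
			ents[i].payload = readSideloadedFile(ents[i].fileID)
		}
	}
}

func main() {
	// A follower catching up may fetch a large batch of entries in one
	// pass; at production payload sizes this can add up to gigabytes.
	batch := make([]entry, 100)
	for i := range batch {
		batch[i] = entry{sideloaded: true, fileID: i}
	}
	maybeInlineSideloaded(batch)
	total := 0
	for i := range batch {
		total += len(batch[i].payload)
	}
	fmt.Printf("materialized %d MiB in one batch\n", total>>20)
}
```

One plausible mitigation (an assumption here, not a claim about the actual fix) is to cap such batches by cumulative payload bytes rather than entry count; see #73376 for the authoritative discussion.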

erikgrinaker (Contributor) commented

@pavelkalinnikov I thought we mitigated this by increasing the AWS disk bandwidth and memory to correspond better to GCP nodes? Did we not cover this test?

pav-kv (Collaborator) commented Sep 20, 2023

We did cover this test in #109221 (backport #109278). However, this failure is still possible until we properly fix #73376.

pav-kv self-assigned this and unassigned rhu713 on Sep 20, 2023
pav-kv added labels C-bug, A-kv-replication on Sep 20, 2023
blathers-crl (bot) commented Sep 20, 2023

cc @cockroachdb/replication

rhu713 (Contributor) commented Sep 20, 2023

@pavelkalinnikov do you think we can close this issue as a duplicate of #73376, or does KV want to take a closer look at this failure?

pav-kv (Collaborator) commented Sep 21, 2023

@rhu713 Nah, I think it looks exactly like a duplicate. See #106496 (comment) too (and #106496 (comment), which predicted what we're seeing). We can close this, or maybe better, keep it open so that new failures get added here instead of spawning new issues.

pav-kv (Collaborator) commented Sep 25, 2023

We are closing this because it's a known issue (#73376), and fixing it is on our mid-term roadmap. At the moment the likelihood of this test failing with an OOM is low, but please let us (@cockroachdb/replication) know if it gets out of control.

#111140 might help reduce the chances of these failures further, once there is better parity in hardware provisioning between tests on AWS and GCE.
