
roachtest: disk-stalled/wal-failover/among-stores failed #136355

Open
cockroach-teamcity opened this issue Nov 28, 2024 · 4 comments
Assignees: jbowens

Labels:
  • A-storage: Relating to our storage engine (Pebble) on-disk storage.
  • B-runtime-assertions-enabled
  • branch-release-24.2: Used to mark GA and release blockers, technical advisories, and bugs for 24.2
  • C-test-failure: Broken test (automatically or manually discovered).
  • O-roachtest
  • O-robot: Originated from a bot.
  • T-storage: Storage Team

Comments


cockroach-teamcity commented Nov 28, 2024

Note: This build has runtime assertions enabled. If the same failure was hit in a run without assertions enabled, there should be a similar failure without this message. If there isn't one, then this failure is likely due to an assertion violation or (assertion) timeout.

roachtest.disk-stalled/wal-failover/among-stores failed with artifacts on release-24.2 @ fdb6c4cd2758a1b915e5815bce30dde3616b7ff9:

(disk_stall.go:154).runDiskStalledWALFailover: process exited unexpectedly
(cluster.go:2398).Run: context canceled
test artifacts and logs in: /artifacts/disk-stalled/wal-failover/among-stores/run_1

Parameters:

  • arch=amd64
  • cloud=gce
  • coverageBuild=false
  • cpu=16
  • encrypted=true
  • fs=ext4
  • localSSD=true
  • runtimeAssertionsBuild=true
  • ssd=2
Help

See: roachtest README

See: How To Investigate (internal)

See: Grafana

Same failure on other branches

/cc @cockroachdb/storage

This test on roachdash

Jira issue: CRDB-44989

@cockroach-teamcity cockroach-teamcity added B-runtime-assertions-enabled branch-release-24.2 Used to mark GA and release blockers, technical advisories, and bugs for 24.2 C-test-failure Broken test (automatically or manually discovered). O-roachtest O-robot Originated from a bot. release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. T-storage Storage Team labels Nov 28, 2024
@blathers-crl blathers-crl bot added the A-storage Relating to our storage engine (Pebble) on-disk storage. label Nov 28, 2024

itsbilal commented Dec 5, 2024

F241128 10:28:51.424654 1042534 storage/pebble.go:1614 ⋮ [n1,s1,pebble] 2432  disk stall detected: disk slowness detected: syncdata on file 006843.log has been ongoing for 60.7s

Same issue as #135983: we spend more than the expected 30s before running the resume after the stall begins:

2024/11/28 10:27:49 cluster.go:2418: running cmd `sudo dmsetup suspend --nofl...` on nodes [:1]
2024/11/28 10:27:49 cluster.go:2420: details in run_102749.755490429_n1_sudo-dmsetup-suspend.log
2024/11/28 10:28:51 cluster.go:2418: running cmd `sudo dmsetup resume data1` on nodes [:1]
2024/11/28 10:28:51 cluster.go:2420: details in run_102851.487656511_n1_sudo-dmsetup-resume-.log

@itsbilal itsbilal assigned jbowens and unassigned itsbilal Dec 5, 2024
@itsbilal itsbilal removed the release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. label Dec 5, 2024
cockroach-teamcity (Member, Author) commented:

roachtest.disk-stalled/wal-failover/among-stores failed with artifacts on release-24.2 @ b953d44261553c714c996b500b3afc120d06cb5a:

(disk_stall.go:154).runDiskStalledWALFailover: process exited unexpectedly
(cluster.go:2398).Run: context canceled
test artifacts and logs in: /artifacts/disk-stalled/wal-failover/among-stores/run_1

Parameters:

  • arch=amd64
  • cloud=gce
  • coverageBuild=false
  • cpu=16
  • encrypted=false
  • fs=ext4
  • localSSD=true
  • runtimeAssertionsBuild=false
  • ssd=2
Help

See: roachtest README

See: How To Investigate (internal)

See: Grafana

Same failure on other branches

This test on roachdash


jbowens commented Dec 12, 2024

From the latest:

2024/12/12 11:09:41 cluster.go:2418: running cmd `sudo dmsetup suspend --nofl...` on nodes [:1]
2024/12/12 11:09:41 cluster.go:2420: details in run_110941.345492352_n1_sudo-dmsetup-suspend.log
2024/12/12 11:10:51 cluster.go:2418: running cmd `sudo dmsetup resume data1` on nodes [:1]
2024/12/12 11:10:51 cluster.go:2420: details in run_111051.362137662_n1_sudo-dmsetup-resume-.log

I don't understand how we're waiting 1m10s to unstall the drive.


jbowens commented Dec 12, 2024

2024/12/12 10:50:27 disk_stall.go:120: test status: pausing 9m13.00163258s before next simulated disk stall on n1
2024/12/12 10:51:54 disk_stall.go:120: test status: pausing 7m45.927980773s before next simulated disk stall on n1
2024/12/12 10:52:55 disk_stall.go:120: test status: pausing 6m45.017319747s before next simulated disk stall on n1
2024/12/12 10:53:55 disk_stall.go:120: test status: pausing 5m45.013785091s before next simulated disk stall on n1
2024/12/12 10:54:56 disk_stall.go:120: test status: pausing 4m44.007206021s before next simulated disk stall on n1
2024/12/12 10:55:57 disk_stall.go:120: test status: pausing 3m43.004393167s before next simulated disk stall on n1
2024/12/12 10:57:08 disk_stall.go:120: test status: pausing 2m32.101525933s before next simulated disk stall on n1
2024/12/12 10:58:08 disk_stall.go:120: test status: pausing 1m32.017502862s before next simulated disk stall on n1
2024/12/12 10:59:08 disk_stall.go:120: test status: pausing 32.009286837s before next simulated disk stall on n1
2024/12/12 10:59:41 cluster.go:2418: running cmd `sudo dmsetup suspend --nofl...` on nodes [:1]
2024/12/12 10:59:41 cluster.go:2420: details in run_105941.339794742_n1_sudo-dmsetup-suspend.log
2024/12/12 11:00:11 cluster.go:2418: running cmd `sudo dmsetup resume data1` on nodes [:1]
2024/12/12 11:00:11 cluster.go:2420: details in run_110011.357222994_n1_sudo-dmsetup-resume-.log
2024/12/12 11:00:11 disk_stall.go:120: test status: pausing 9m58.994510517s before next simulated disk stall on n1
2024/12/12 11:01:12 disk_stall.go:120: test status: pausing 8m28.992463719s before next simulated disk stall on n1
2024/12/12 11:02:26 disk_stall.go:120: test status: pausing 7m14.723282699s before next simulated disk stall on n1
2024/12/12 11:03:27 disk_stall.go:120: test status: pausing 6m13.996563342s before next simulated disk stall on n1
2024/12/12 11:05:09 disk_stall.go:120: test status: pausing 4m31.769474073s before next simulated disk stall on n1
2024/12/12 11:06:10 disk_stall.go:120: test status: pausing 3m30.989524721s before next simulated disk stall on n1
2024/12/12 11:08:08 disk_stall.go:120: test status: pausing 1m32.935923396s before next simulated disk stall on n1
2024/12/12 11:09:09 disk_stall.go:120: test status: pausing 31.985847956s before next simulated disk stall on n1

Even between the log statements there are sometimes pauses of more than a minute. I'm very confused about where these pauses are coming from; they seem far too long to be scheduling latency.

Projects: Status: Incoming
Development: no branches or pull requests
3 participants