
roachtest: c2c/BulkOps/full failed #115415

Closed
cockroach-teamcity opened this issue Dec 1, 2023 · 4 comments
Assignees: msbutler
Labels:
- branch-release-23.2: Used to mark GA and release blockers, technical advisories, and bugs for 23.2
- C-test-failure: Broken test (automatically or manually discovered).
- O-roachtest
- O-robot: Originated from a bot.
- P-2: Issues/test failures with a fix SLA of 3 months
- T-disaster-recovery
Milestone: 23.2

Comments

cockroach-teamcity (Member) commented Dec 1, 2023

roachtest.c2c/BulkOps/full failed with artifacts on release-23.2 @ 6ea625b1713d2d6720164e260d186e7a6837ea7c:

(assertions.go:333).Fail: 
	Error Trace:	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:749
	            				github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:978
	            				github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:1268
	            				main/pkg/cmd/roachtest/monitor.go:119
	            				golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:75
	            				src/runtime/asm_amd64.s:1598
	Error:      	Received unexpected error:
	            	expected job status succeeded, but got running
	            	(1) attached stack trace
	            	  -- stack trace:
	            	  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*replicationDriver).stopReplicationStream.func1
	            	  | 	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:745
	            	  | github.com/cockroachdb/cockroach/pkg/util/retry.ForDuration
	            	  | 	github.com/cockroachdb/cockroach/pkg/util/retry/retry.go:213
	            	  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*replicationDriver).stopReplicationStream
	            	  | 	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:727
	            	  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*replicationDriver).main
	            	  | 	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:978
	            	  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerClusterToCluster.func1.1
	            	  | 	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cluster_to_cluster.go:1268
	            	  | main.(*monitorImpl).Go.func1
	            	  | 	main/pkg/cmd/roachtest/monitor.go:119
	            	  | golang.org/x/sync/errgroup.(*Group).Go.func1
	            	  | 	golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:75
	            	  | runtime.goexit
	            	  | 	src/runtime/asm_amd64.s:1598
	            	Wraps: (2) expected job status succeeded, but got running
	            	Error types: (1) *withstack.withStack (2) *errutil.leafError
	Test:       	c2c/BulkOps/full
(require.go:1360).NoError: FailNow called
(monitor.go:153).Wait: monitor failure: monitor user task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/c2c/BulkOps/full/run_1

Parameters: ROACHTEST_arch=amd64 , ROACHTEST_cloud=gce , ROACHTEST_cpu=8 , ROACHTEST_encrypted=false , ROACHTEST_fs=ext4 , ROACHTEST_localSSD=false , ROACHTEST_metamorphicBuild=false , ROACHTEST_ssd=0

Help

See: roachtest README

See: How To Investigate (internal)

See: Grafana

Same failure on other branches

/cc @cockroachdb/disaster-recovery

This test on roachdash | Improve this report!

Jira issue: CRDB-34020
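
Per the stack trace, the failure comes from a retry loop in `(*replicationDriver).stopReplicationStream` that polls the replication job's status until it is `succeeded`; here the retry budget ran out while the job was still `running`. Below is a minimal Go sketch of that polling pattern, not the test's exact code: it assumes `retry.ForDuration(timeout, fn)` retries `fn` until it returns nil or the duration elapses, and the query shape and the `pollJobSucceeded` helper name are illustrative.

```go
package c2csketch

import (
	gosql "database/sql"
	"time"

	"github.com/cockroachdb/cockroach/pkg/util/retry"
	"github.com/cockroachdb/errors"
)

// pollJobSucceeded sketches the retry loop implied by the stack trace above:
// keep checking the job's status until it is 'succeeded', returning an error
// (and eventually exhausting the retry budget) while it is still 'running'.
func pollJobSucceeded(db *gosql.DB, jobID int64, timeout time.Duration) error {
	return retry.ForDuration(timeout, func() error {
		var status string
		if err := db.QueryRow(
			`SELECT status FROM [SHOW JOBS] WHERE job_id = $1`, jobID,
		).Scan(&status); err != nil {
			return err
		}
		if status != "succeeded" {
			// This mirrors the error text in the report above.
			return errors.Newf("expected job status succeeded, but got %s", status)
		}
		return nil
	})
}
```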

@cockroach-teamcity added the branch-release-23.2, C-test-failure, O-roachtest, O-robot, release-blocker, and T-disaster-recovery labels on Dec 1, 2023
@cockroach-teamcity added this to the 23.2 milestone on Dec 1, 2023
@msbutler added the P-2 label and removed the release-blocker label on Dec 1, 2023
@msbutler self-assigned this on Dec 1, 2023
msbutler (Collaborator) commented Dec 1, 2023

One mystery here is why replanning took so long to trigger: it started 20 minutes after a node had clearly fallen far behind, instead of the expected 10 minutes:
[image: Grafana graph showing replanning triggering roughly 20 minutes after the node fell behind]

msbutler (Collaborator) commented Dec 1, 2023

Ah, it takes 20 minutes because we now double-check that a node is lagging behind, and we only check once every `stream_replication.replan_flow_frequency`, which is currently set to 10 minutes in the roachtest. Since a replan only triggers on the second consecutive check that finds a lagging node, it takes two intervals (2 × 10 minutes = 20 minutes) to fire. Given that we double-check for lagging nodes, we probably ought to run the check at double the frequency of this cluster setting.
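
To make the mechanics concrete, here is a minimal Go sketch of that double-check scheme under the proposed fix of ticking at half the configured frequency. This is not the actual CockroachDB implementation; the package name and the probe callbacks (`laggingNodeDetected`, `hwmAdvanced`, `triggerReplan`) are hypothetical stand-ins.

```go
package replancheck

import (
	"context"
	"time"
)

// runLaggingNodeChecker sketches the double-check scheme: a replan triggers
// only after two consecutive ticks observe a lagging node with no high-water
// mark (hwm) advancement in between. Ticking at half of replanFrequency
// therefore makes a replan fire after roughly one full replanFrequency
// interval (2 ticks x replanFrequency/2).
func runLaggingNodeChecker(
	ctx context.Context,
	replanFrequency time.Duration, // stream_replication.replan_flow_frequency
	laggingNodeDetected func() bool, // hypothetical probe
	hwmAdvanced func() bool, // hypothetical probe
	triggerReplan func(),
) {
	ticker := time.NewTicker(replanFrequency / 2) // check twice per interval
	defer ticker.Stop()

	consecutiveLagTicks := 0
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if hwmAdvanced() || !laggingNodeDetected() {
				// Either progress was made or nothing is lagging: reset.
				consecutiveLagTicks = 0
				continue
			}
			consecutiveLagTicks++
			if consecutiveLagTicks >= 2 {
				triggerReplan()
				consecutiveLagTicks = 0
			}
		}
	}
}
```

With `replanFrequency` set to 10 minutes, the checker ticks every 5 minutes, so two consecutive lag observations complete in roughly 10 minutes rather than the 20 observed above.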

msbutler added a commit to msbutler/cockroach that referenced this issue Dec 1, 2023
PR cockroachdb#115000 refactored the lagging node checker to only trigger a replanning
event if the checker detects a lagging node 2 times in a row without hwm
(high-water mark) advancement, but kept the frequency at which the checker runs.
This implies that it takes twice as long for the checker to trigger a replanning
event, relative to the `stream_replication.replan_flow_frequency` setting. This
patch doubles the frequency at which the checker runs, so that a replanning event
triggers after `stream_replication.replan_flow_frequency` time has elapsed.

Informs cockroachdb#115415

Release note: none

cockroach-teamcity (Member, Author) commented

roachtest.c2c/BulkOps/full failed with artifacts on release-23.2 @ db3bccef11456826ea231771b147e513a3a44c3b:

(same error, stack trace, parameters, and help links as in the first report above)

craig bot pushed a commit that referenced this issue Dec 4, 2023
115459: streamingccl: double the frequency we check for lagging nodes r=stevendanna a=msbutler

(PR description: same as the commit message above)

Informs #115415

Release note: none

Co-authored-by: Michael Butler <[email protected]>
blathers-crl bot pushed a commit that referenced this issue Dec 4, 2023
(backport: same commit message as above)
msbutler (Collaborator) commented Dec 6, 2023

Closing now that #115543 has merged.
