
roachtest: version/mixed/nodes=5 failed #72029

Closed
cockroach-teamcity opened this issue Oct 27, 2021 · 7 comments · Fixed by #72239
Labels
branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-roachtest O-robot Originated from a bot. release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked.

Comments

@cockroach-teamcity
Member

roachtest.version/mixed/nodes=5 failed with artifacts on master @ 965b99b1a05eb893ee908d852680824a0bd4ffe3:

The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/version/mixed/nodes=5/run_1
	monitor.go:128,version.go:211,version.go:224,test_runner.go:777: monitor failure: unexpected node event: 3: dead (exit status 7)
		(1) attached stack trace
		  -- stack trace:
		  | main.(*monitorImpl).WaitE
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/monitor.go:116
		  | main.(*monitorImpl).Wait
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/monitor.go:124
		  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerVersion.func1
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/version.go:211
		  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerVersion.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/version.go:224
		  | main.(*testRunner).runTest.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:777
		  | runtime.goexit
		  | 	/usr/local/go/src/runtime/asm_amd64.s:1371
		Wraps: (2) monitor failure
		Wraps: (3) unexpected node event: 3: dead (exit status 7)
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString

	cluster.go:1300,context.go:91,cluster.go:1288,test_runner.go:866: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-3641256-1635315289-38-n6cpu4 --oneshot --ignore-empty-nodes: exit status 1 3: dead (exit status 7)
		6: skipped
		5: 17957
		4: 18653
		2: 20110
		1: 21508
		Error: UNCLASSIFIED_PROBLEM: 3: dead (exit status 7)
		(1) UNCLASSIFIED_PROBLEM
		Wraps: (2) attached stack trace
		  -- stack trace:
		  | main.glob..func14
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1175
		  | main.wrap.func1
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:281
		  | github.com/spf13/cobra.(*Command).execute
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:856
		  | github.com/spf13/cobra.(*Command).ExecuteC
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:960
		  | github.com/spf13/cobra.(*Command).Execute
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:897
		  | main.main
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:2104
		  | runtime.main
		  | 	/usr/local/go/src/runtime/proc.go:225
		  | runtime.goexit
		  | 	/usr/local/go/src/runtime/asm_amd64.s:1371
		Wraps: (3) 3: dead (exit status 7)
		Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError
Help

See: roachtest README | How To Investigate (internal)

/cc @cockroachdb/kv-triage

This test on roachdash | Improve this report!

@cockroach-teamcity cockroach-teamcity added branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-roachtest O-robot Originated from a bot. release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. labels Oct 27, 2021
@AlexTalks
Contributor

Node 3 died while node 2 was being upgraded (cluster version 21.1-1164):

upgrader: 09:44:23 version.go:132: cluster version is 21.1-1164
...
09:54:44 version.go:136: test worker status: upgrading 2
upgrader: 09:54:44 version.go:137: upgrading 2
upgrader: 09:54:44 version.go:124: stopping node 2
...
09:54:50 cluster.go:681: test status: uploading file
...
09:55:01 cluster.go:681: test status: starting nodes :2
09:55:01 cluster.go:388: > /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod start --encrypt=false teamcity-3641256-1635315289-38-n6cpu4:2
teamcity-3641256-1635315289-38-n6cpu4: starting nodes
09:55:04 version.go:79: test worker status: sleeping
2: 20110
3: dead (exit status 7) 

It looks like what happened here is that node 3 is receiving a learner replica of r44, but when it attempts to apply the snapshot, the Raft group leader is on the already-upgraded node 1, and we end up with a discrepancy in the replica state:

F211027 09:56:08.882953 612333 kv/kvserver/replica.go:1216 ⋮ [n3,s3,r44/7:‹/Table/53{-/1/-1765…}›] 354  on-disk and in-memory state diverged: [UsingAppliedStateKey: true != false]

This seems to be related to @irfansharif's recently merged #70464, which removed using_applied_state_key from the ReplicaState proto.
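For illustration, here's a rough sketch of the shape of that divergence check (hypothetical names and a plain Go struct standing in for the real kvserverpb.ReplicaState proto, not the actual code at replica.go:1216): the replica compares what it reloads from disk against its in-memory copy and fatals on any mismatch.

package main

import "fmt"

// replicaState stands in for the protobuf-generated kvserverpb.ReplicaState
// as seen by the 21.2 (old-binary) node, which still declares the field.
type replicaState struct {
	UsingAppliedStateKey bool
}

// assertStateMachineConsistent mimics the on-disk vs. in-memory assertion.
func assertStateMachineConsistent(onDisk, inMem replicaState) error {
	if onDisk.UsingAppliedStateKey != inMem.UsingAppliedStateKey {
		return fmt.Errorf("on-disk and in-memory state diverged: [UsingAppliedStateKey: %t != %t]",
			onDisk.UsingAppliedStateKey, inMem.UsingAppliedStateKey)
	}
	return nil
}

func main() {
	onDisk := replicaState{UsingAppliedStateKey: true} // written by earlier 21.2 operations on n3
	inMem := replicaState{}                            // populated from a snapshot sent by a 22.1 node that no longer sets the field
	fmt.Println(assertStateMachineConsistent(onDisk, inMem)) // non-nil: n3 fatals
}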

@irfansharif any thoughts on how to handle this discrepancy?

@cockroach-teamcity
Member Author

roachtest.version/mixed/nodes=5 failed with artifacts on master @ 40bd241158c8dd4a68d277fa732b91f56e411b38:

The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/version/mixed/nodes=5/run_1
	monitor.go:128,version.go:211,version.go:224,test_runner.go:777: monitor failure: unexpected node event: 4: dead (exit status 7)
		(1) attached stack trace
		  -- stack trace:
		  | main.(*monitorImpl).WaitE
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/monitor.go:116
		  | main.(*monitorImpl).Wait
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/monitor.go:124
		  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerVersion.func1
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/version.go:211
		  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerVersion.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/version.go:224
		  | main.(*testRunner).runTest.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:777
		  | runtime.goexit
		  | 	/usr/local/go/src/runtime/asm_amd64.s:1371
		Wraps: (2) monitor failure
		Wraps: (3) unexpected node event: 4: dead (exit status 7)
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString

	cluster.go:1300,context.go:91,cluster.go:1288,test_runner.go:866: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-3653448-1635488645-39-n6cpu4 --oneshot --ignore-empty-nodes: exit status 1 4: dead (exit status 7)
		3: 21588
		2: 21401
		1: 23053
		6: skipped
		5: 18463
		Error: UNCLASSIFIED_PROBLEM: 4: dead (exit status 7)
		(1) UNCLASSIFIED_PROBLEM
		Wraps: (2) attached stack trace
		  -- stack trace:
		  | main.glob..func14
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1175
		  | main.wrap.func1
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:281
		  | github.com/spf13/cobra.(*Command).execute
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:856
		  | github.com/spf13/cobra.(*Command).ExecuteC
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:960
		  | github.com/spf13/cobra.(*Command).Execute
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:897
		  | main.main
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:2104
		  | runtime.main
		  | 	/usr/local/go/src/runtime/proc.go:225
		  | runtime.goexit
		  | 	/usr/local/go/src/runtime/asm_amd64.s:1371
		Wraps: (3) 4: dead (exit status 7)
		Error types: (1) errors.Unclassified (2) *withstack.withStack (3) *errutil.leafError
Help

See: roachtest README | How To Investigate (internal)

/cc @cockroachdb/kv-triage

This test on roachdash | Improve this report!

@tbg
Member

tbg commented Oct 29, 2021

Nice sleuthing, Alex!

@irfansharif I think something must have gone wrong here, could you look into this?

master (at the time of writing) runs the mixed-version test with this predecessor version:

verMap := map[string]string{
	"22.1": "21.2.0-beta.2",

In the crash, we see that the active cluster version was 21.1-1164, which (the snippet below is from 21.2-beta.2) is older than the 21.2 final version:

{
	Key:     MarkerDataKeysRegistry,
	Version: roachpb.Version{Major: 21, Minor: 1, Internal: 1164},
},
{
	Key:     PebbleSetWithDelete,
	Version: roachpb.Version{Major: 21, Minor: 1, Internal: 1166},
},
{
	Key:     TenantUsageSingleConsumptionColumn,
	Version: roachpb.Version{Major: 21, Minor: 1, Internal: 1168},
},
{
	Key:     SQLStatsTables,
	Version: roachpb.Version{Major: 21, Minor: 1, Internal: 1170},
},
{
	Key:     SQLStatsCompactionScheduledJob,
	Version: roachpb.Version{Major: 21, Minor: 1, Internal: 1172},
},
{
	// V21_2 is CockroachDB v21.2. It's used for all v21.2.x patch releases.
	Key:     V21_2,
	Version: roachpb.Version{Major: 21, Minor: 2},
},
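For reference, here's a simplified sketch of the version ordering (a hypothetical stand-in, not the real roachpb.Version code): versions compare component by component, so any 21.1-<internal> version, including 21.1-1164, sorts before the final 21.2 version.

package main

import "fmt"

// version is a simplified stand-in for roachpb.Version.
type version struct {
	Major, Minor, Patch, Internal int32
}

// less orders versions component by component.
func less(a, b version) bool {
	if a.Major != b.Major {
		return a.Major < b.Major
	}
	if a.Minor != b.Minor {
		return a.Minor < b.Minor
	}
	if a.Patch != b.Patch {
		return a.Patch < b.Patch
	}
	return a.Internal < b.Internal
}

func main() {
	active := version{Major: 21, Minor: 1, Internal: 1164} // MarkerDataKeysRegistry
	final := version{Major: 21, Minor: 2}                  // V21_2
	fmt.Println(less(active, final))                       // true: 21.1-1164 predates 21.2
}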

Ok, so we probably created that fixture in one of the earlier 21.2 candidates, before we had minted the final cluster version. However, none of this should matter, because the migration that phased out the legacy applied state is way older:

{
	Key:     TruncatedAndRangeAppliedStateMigration,
	Version: roachpb.Version{Major: 20, Minor: 2, Internal: 22},
},
{
	Key:     PostTruncatedAndRangeAppliedStateMigration,
	Version: roachpb.Version{Major: 20, Minor: 2, Internal: 24},
},

The fixture should certainly not contain any replicas that aren't using the range applied state.

Is the problem here that we needed to "migrate out of the assertion"? 21.2 nodes have the assertion, so they require the field to always be set to "true". But 22.1 nodes have forgotten about the field and will never set it. Isn't that a problem?

@tbg
Member

tbg commented Oct 29, 2021

Concretely, in the instance Alex looked at, if n3 is receiving a snapshot, that snapshot will certainly be using the applied state. The stateloader (on n3, the old binary) will thus hit line 79 here:

if as, err := rsl.LoadRangeAppliedState(ctx, reader); err != nil {
	return kvserverpb.ReplicaState{}, err
} else if as != nil {
	s.UsingAppliedStateKey = true

But the in-memory state is populated here (again, on the old-binary n3):

// Update the rest of the Raft state. Changes to r.mu.state.Desc must be
// managed by r.setDescRaftMuLocked and changes to r.mu.state.Lease must be handled
// by r.leasePostApply, but we called those above, so now it's safe to
// wholesale replace r.mu.state.
r.mu.state = s

where s is the state that comes in through the snapshot.

On the sender side (new binary), the UsingAppliedStateKey field is no longer populated. So here we are, setting up n3 with an in-memory false and "on-disk" true.

The lesson here is that we shouldn't use s.State but instead load everything from the snapshot to prevent problems like this. However, at this point we have to bring the RangeAppliedStateKey back in 22.1 and hard-code it to true whenever we send it to followers. @irfansharif would you do the honors?
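A minimal sketch of that short-term approach (hypothetical names, not the actual kvserver change): keep the field in the proto and have the sender unconditionally stamp it true before shipping replica state to a follower, so an old-binary receiver never sees a spurious false.

package main

import "fmt"

// replicaState again stands in for kvserverpb.ReplicaState with the
// using_applied_state_key field resurrected.
type replicaState struct {
	UsingAppliedStateKey bool
}

// outgoingSnapshotState is a hypothetical helper: every range has long since
// migrated to the RangeAppliedState key (the 20.2 migration above), so it is
// always safe to hard-code the field to true on the sender.
func outgoingSnapshotState(s replicaState) replicaState {
	s.UsingAppliedStateKey = true
	return s
}

func main() {
	sent := outgoingSnapshotState(replicaState{})
	fmt.Println(sent.UsingAppliedStateKey) // true, matching the receiver's on-disk state
}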

I'll file an issue for the long-term fix.

@tbg
Member

tbg commented Oct 29, 2021

#72222

@irfansharif
Contributor

However, at this point we have to bring the RangeAppliedStateKey back in 22.1 and hard-code it to true whenever we send it to followers

Just sanity checking, you mean the bool using_applied_state_key = 11; proto field, right?

@irfansharif
Contributor

Also, thanks for all the legwork! Nice to have friends in Europe; I can wake up to root-caused issues I introduced the evening before.

craig bot pushed a commit that referenced this issue Oct 29, 2021
72132: coldata: fix BenchmarkAppend so that it could be run with multiple count r=yuzefovich a=yuzefovich

Previously, we could get an index out of bounds because we'd allocate
larger `Bytes.data` slice than `int32` offset can support.

Release note: None

72239: kvserver: resurrect using_applied_state_key in ReplicaState r=irfansharif a=irfansharif

Fixes #72029.

`using_applied_state_key` was previously used to check whether the Range
had upgraded to begin using the `RangeAppliedState` key. When set to true,
replicas receiving the state (through a `ReplicatedEvalResult`) knew to
begin using the new key. In #58088 (in 21.1) we introduced a migration
to iterate through all ranges in the system and have them start using
the `RangeAppliedState` key. In 21.2, this field was always set to true --
21.2 nodes were already using the `RangeAppliedState` key, or receiving
messages from 21.1 nodes that were also using the `RangeAppliedState` key.
In 21.2 (and in 21.1 for that matter) we didn't need to trigger the "if
set to true in an incoming message, start using the `RangeAppliedState`
key" code path.

When looking to get rid of this field in 22.1 (#70464), we observed that
there was an unintentional read of this field in 21.2 nodes (see #72029 and
#72222); the saga is as follows:
 - Removing this field in 22.1 meant it was set as false when received at
   21.2 nodes.
 - This should've been fine! We weren't using this field to trigger any
   upgrade code paths (like it was originally intended for).
 - It turns out that in 21.2 we were using the `ReplicaState` from the
   incoming snapshot to update our in-memory replica state.
 - Because the proto field was being phased out, there was now a divergence
   between the on-disk state (field set to true, from earlier 21.2
   operations) and the in-memory state (field set to false, because sent
   from a version that attempted to get rid of this field).

Removing proto fields from the replica state is not possible until we stop
using the protobuf copy of the replica state when applying a snapshot
(#72222). Once that's done, we should be able to stop sending the replica
state as part of the snapshot in the subsequent release.

Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: irfan sharif <[email protected]>
@craig craig bot closed this as completed in 95e5b69 Oct 29, 2021