
roachtest: tpcc/mixed-headroom/n5cpu16 failed #53535

Closed
cockroach-teamcity opened this issue Aug 27, 2020 · 11 comments · Fixed by #54199

@cockroach-teamcity
Member

(roachtest).tpcc/mixed-headroom/n5cpu16 failed on provisional_202008261913_v20.2.0-beta.1@eaa939ce6548a54a23970814ff00f30ad87680ac:

		  | _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
		  |  2393.0s        0           23.0           19.0     54.5     75.5    121.6    121.6 delivery
		  |  2393.0s        0          199.3          189.3     33.6     92.3    130.0    151.0 newOrder
		  |  2393.0s        0           19.0           19.1      7.1     11.0     13.6     13.6 orderStatus
		  |  2393.0s        0          199.3          190.1     19.9     71.3    121.6    142.6 payment
		  |  2393.0s        0           18.0           19.0     24.1     60.8     65.0     65.0 stockLevel
		  |  2394.0s        0           20.0           19.0     52.4     71.3     92.3     92.3 delivery
		  |  2394.0s        0          171.7          189.3     32.5     50.3     83.9     92.3 newOrder
		  |  2394.0s        0           15.0           19.1      7.9      9.4     16.3     16.3 orderStatus
		  |  2394.0s        0          183.7          190.1     17.8     29.4     37.7     88.1 payment
		  |  2394.0s        0           20.0           19.0     24.1     56.6     71.3     71.3 stockLevel
		  |  2395.0s        0           35.0           19.0     58.7     92.3    134.2    134.2 delivery
		  |  2395.0s        0          203.3          189.3     33.6     62.9     92.3    113.2 newOrder
		  |  2395.0s        0           15.0           19.0      7.3      8.9     13.1     13.1 orderStatus
		  |  2395.0s        0          176.2          190.1     21.0     39.8     46.1     48.2 payment
		  |  2395.0s        0           19.0           19.0     30.4     67.1     96.5     96.5 stockLevel
		Wraps: (4) exit status 20
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError

	cluster.go:2613,tpcc.go:187,tpcc.go:286,test_runner.go:754: monitor failure: monitor task failed: t.Fatal() was called
		(1) attached stack trace
		  -- stack trace:
		  | main.(*monitor).WaitE
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2601
		  | main.(*monitor).Wait
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2609
		  | main.runTPCC
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:187
		  | main.registerTPCC.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:286
		  | main.(*testRunner).runTest.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:754
		Wraps: (2) monitor failure
		Wraps: (3) attached stack trace
		  -- stack trace:
		  | main.(*monitor).wait.func2
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2657
		Wraps: (4) monitor task failed
		Wraps: (5) attached stack trace
		  -- stack trace:
		  | main.init
		  | 	/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2571
		  | runtime.doInit
		  | 	/usr/local/go/src/runtime/proc.go:5228
		  | runtime.main
		  | 	/usr/local/go/src/runtime/proc.go:190
		  | runtime.goexit
		  | 	/usr/local/go/src/runtime/asm_amd64.s:1357
		Wraps: (6) t.Fatal() was called
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError


Artifacts: /tpcc/mixed-headroom/n5cpu16
Related:

See this test on roachdash
powered by pkg/cmd/internal/issues

@cockroach-teamcity cockroach-teamcity added branch-provisional_202008261913_v20.2.0-beta.1 C-test-failure Broken test (automatically or manually discovered). O-roachtest O-robot Originated from a bot. release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. labels Aug 27, 2020
@cockroach-teamcity cockroach-teamcity added this to the 20.2 milestone Aug 27, 2020
@tbg
Member

tbg commented Aug 28, 2020

error in newOrder: missing stock row

This sounds concerning, and it's blocking the v20.2 beta. Can you dig into this on Monday, @andreimatei?

@knz knz removed the release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. label Aug 31, 2020
@andreimatei
Contributor

I've run 20 iterations and hit multiple problems:

  1. 2x error in newOrder: missing stock row
  2. 1x check failed: 3.3.2.7: 141 rows returned, expected zero
  3. one restore that is seemingly stuck, with 3 AdminSplit requests retrying endlessly on the server side because the split logic keeps behaving as if the range descriptor is constantly changing, even though it is not. I think this might have something to do with the mixed-version cluster: in 20.2 we removed a field (GenerationComparable) from the range descriptor proto, and the code in the split txn uses the generated desc.Equal(other) method to decide whether two descriptors are the same, but I don't quite see the problem yet.

@andreimatei andreimatei added the release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. label Aug 31, 2020
@nvanbenschoten
Member

@andreimatei I think there's a chance that this is due to the same underlying issue as #53540. For that issue, we're narrowing in on an optimization to Pebble as being the culprit. I just kicked off a series of tests with and without that optimization on tpcc/mixed-headroom/n5cpu16 to see if I can reproduce.

The other thing this reminds me of is some of the issues we've had with the propagation of read-within-uncertainty-interval errors. Given that this test is using a mixed-version cluster, I wonder if we're hitting something like that.

@nvanbenschoten
Member

It's looking like 95a13a8 is the issue here as well. Over the course of 12 runs, I saw no failures without that commit, but with it I saw 2x missing stock row errors and one check failed: 3.3.2.7: 36 rows returned, expected zero error. cc @sumeerbhola.

@nvanbenschoten
Member

My batch of runs on master (including #54032) all passed, except for one stuck AdminSplit (being investigated by @andreimatei elsewhere). So that's more evidence that 95a13a8 was the issue. cc @petermattis.

By the way, @andreimatei, the stuck split looked like:

I200909 07:09:31.223765 274338228 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.223857 274338230 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003066] (zone config)
I200909 07:09:31.224033 274338230 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.224089 274338232 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003067] (zone config)
I200909 07:09:31.224288 274338232 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.224349 274338234 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003068] (zone config)
I200909 07:09:31.224525 274338234 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.224590 274338236 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003069] (zone config)
I200909 07:09:31.224780 274338236 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.224862 274338238 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003070] (zone config)
I200909 07:09:31.225046 274338238 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.225111 274338240 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003071] (zone config)
I200909 07:09:31.225287 274338240 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.225359 274338242 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003072] (zone config)
I200909 07:09:31.225531 274338242 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.225627 274338244 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003073] (zone config)
I200909 07:09:31.225804 274338244 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.225876 274338246 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003074] (zone config)
I200909 07:09:31.226058 274338246 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.226133 274338248 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003075] (zone config)
I200909 07:09:31.226321 274338248 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.226378 274338250 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003076] (zone config)
I200909 07:09:31.226555 274338250 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.226634 274338252 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003077] (zone config)
I200909 07:09:31.226818 274338252 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.226866 274338254 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003078] (zone config)
I200909 07:09:31.227038 274338254 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying
I200909 07:09:31.227084 274338256 kv/kvserver/replica_command.go:397  [n1,split,s1,r79/1:/Table/5{6-8}] initiating a split of this range at key /Table/57 [r137003079] (zone config)
I200909 07:09:31.227279 274338256 kv/kvserver/split_queue.go:149  [n1,split,s1,r79/1:/Table/5{6-8}] split saw concurrent descriptor modification; maybe retrying

Just a shot in the dark, but I wonder if this is related to #50408. Or maybe a recent change to the RangeDescriptor proto? Does that ring a bell?

@andreimatei
Contributor

Just a shot in the dark, but I wonder if this is related to #50408. Or maybe a recent change to the RangeDescriptor proto? Does that ring a bell?

Yeah, that was my initial thinking too, since we did remove a field from the proto, but it didn't seem to add up because the request was looping on a new-version node, not an old-version one. I'm getting back to this now.

@andreimatei
Contributor

Focusing on the stuck splits now:

AdminSplit requests sometimes retry endlessly in this loop on 20.1 nodes because, on every pass through the loop, the code thinks the descriptor has changed. It thinks so because the GenerationComparable field appears to differ: the version read from the database has the field set, while the version from r.Desc() doesn't.
We removed that field in 20.2 (so descriptors written by 20.2 nodes appear to 20.1 nodes to not have it set). I'm not sure yet how that causes what we're seeing here.
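To make the failure mode concrete, here is a minimal, self-contained Go sketch. It is not the actual CockroachDB code: rangeDesc, equal, and the retry loop below are simplified stand-ins for roachpb.RangeDescriptor, its generated Equal() method, and the split retry logic. It shows how a field that is set in the copy read from the database but unset in the in-memory copy makes a field-by-field comparison fail on every attempt, so the split keeps "seeing" a concurrent descriptor modification:

    package main

    import "fmt"

    // rangeDesc is a simplified stand-in for roachpb.RangeDescriptor.
    type rangeDesc struct {
        StartKey, EndKey     string
        Generation           int64
        GenerationComparable *bool // the field removed in 20.2
    }

    // equal mimics a generated proto Equal(): every field is compared,
    // including GenerationComparable.
    func equal(a, b rangeDesc) bool {
        if a.StartKey != b.StartKey || a.EndKey != b.EndKey || a.Generation != b.Generation {
            return false
        }
        if (a.GenerationComparable == nil) != (b.GenerationComparable == nil) {
            return false
        }
        return a.GenerationComparable == nil || *a.GenerationComparable == *b.GenerationComparable
    }

    func main() {
        t := true
        inMemory := rangeDesc{StartKey: "/Table/56", EndKey: "/Table/58", Generation: 7}
        fromDB := inMemory
        fromDB.GenerationComparable = &t // the bytes in the DB still carry the field

        for attempt := 1; attempt <= 3; attempt++ {
            if !equal(inMemory, fromDB) {
                // Corresponds to "split saw concurrent descriptor modification; maybe retrying".
                fmt.Printf("attempt %d: descriptor appears modified, retrying\n", attempt)
                continue
            }
            fmt.Println("descriptors match, performing split")
        }
        // The loop never succeeds: the phantom field keeps the two copies unequal.
    }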

@tbg
Member

tbg commented Sep 10, 2020

If a descriptor on the replicas actually has it set, but a 20.2 node is the one loading the descriptor, it will lose that bit, meaning that any attempt at a CPut using the marshalled repr of that proto will fail. This is a problem we've already solved in other places, essentially by holding on to the actual KV bytes. But we're not doing that here yet; note how we're using oldDesc as the base of the CPut in this method:

oldDesc *roachpb.RangeDescriptor,
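As an aside for readers, the pitfall being described can be sketched in a few lines of Go. The toy kv store, cput, and the string "encodings" below are stand-ins invented for illustration, not the real storage API; the point is only that an expected value produced by re-marshalling an in-memory proto can diverge from the bytes actually stored, while reusing the exact bytes previously read cannot:

    package main

    import (
        "bytes"
        "errors"
        "fmt"
    )

    // kv is a toy key-value store supporting a conditional put.
    type kv map[string][]byte

    var errConditionFailed = errors.New("condition failed")

    // cput writes newVal only if the stored bytes exactly match expected.
    func (s kv) cput(key string, expected, newVal []byte) error {
        if !bytes.Equal(s[key], expected) {
            return errConditionFailed
        }
        s[key] = newVal
        return nil
    }

    func main() {
        store := kv{}
        // The bytes originally written include a trailing field that newer code drops.
        storedBytes := []byte("desc{gen:7,generation_comparable:true}")
        store["/Meta2/Table/58"] = storedBytes

        // Re-marshalling the in-memory descriptor loses the dropped field,
        // so the expected value no longer matches the stored bytes.
        remarshalled := []byte("desc{gen:7}")
        fmt.Println("expected = re-marshalled proto:",
            store.cput("/Meta2/Table/58", remarshalled, []byte("desc{gen:8}")))

        // Holding on to the exact KV bytes read earlier sidesteps the problem.
        fmt.Println("expected = original KV bytes:  ",
            store.cput("/Meta2/Table/58", storedBytes, []byte("desc{gen:8}")))
    }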

@tbg
Member

tbg commented Sep 10, 2020

Given where we're at in the cycle, I say: bring back that field, make it clear that it is unused, and mention that we can't remove it until all code paths, and in particular split/unsplit, have moved off the pattern of marshalling a RangeDescriptor to bytes to get the expected value of a CPut.

@andreimatei
Contributor

We do not re-marshal the proto as the base of the CPut (and we didn't in 20.1 either). The code re-reads the bytes from the DB for that, here:

_, dbDescValue, err := conditionalGetDescValueFromDB(ctx, txn, oldDesc.StartKey, checkDescsEqualXXX(ctx, oldDesc))
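For readers following along, here is a rough Go sketch of that pattern. The names conditionalGetDescFromDB, checkDescsEqual, and conditionFailedErr below are simplified stand-ins with invented signatures, not the real helpers quoted above: the current value is re-read from the DB, decoded, and handed to a check that compares it against the expected descriptor, and a ConditionFailedError-style error carrying the actual value is returned when they differ:

    package main

    import (
        "errors"
        "fmt"
    )

    type rangeDesc struct {
        StartKey   string
        Generation int64
    }

    // conditionFailedErr mimics roachpb.ConditionFailedError: it carries the
    // value that was actually found.
    type conditionFailedErr struct{ actual rangeDesc }

    func (e *conditionFailedErr) Error() string {
        return fmt.Sprintf("unexpected value: %+v", e.actual)
    }

    // conditionalGetDescFromDB re-reads the descriptor under the given key and
    // runs a caller-supplied check against it, returning the freshly read value
    // (to be used later as the CPut base) only if the check passes.
    func conditionalGetDescFromDB(
        read func(key string) rangeDesc,
        key string,
        check func(actual rangeDesc) error,
    ) (rangeDesc, error) {
        actual := read(key)
        if err := check(actual); err != nil {
            return rangeDesc{}, err
        }
        return actual, nil
    }

    // checkDescsEqual returns a check that fails with a ConditionFailedError-style
    // error when the descriptor in the DB differs from the expected one.
    func checkDescsEqual(expected rangeDesc) func(rangeDesc) error {
        return func(actual rangeDesc) error {
            if actual != expected {
                return &conditionFailedErr{actual: actual}
            }
            return nil
        }
    }

    func main() {
        db := map[string]rangeDesc{"/Table/56": {StartKey: "/Table/56", Generation: 8}}
        read := func(key string) rangeDesc { return db[key] }

        expected := rangeDesc{StartKey: "/Table/56", Generation: 7} // stale expectation
        _, err := conditionalGetDescFromDB(read, "/Table/56", checkDescsEqual(expected))
        var cfe *conditionFailedErr
        fmt.Println(errors.As(err, &cfe), err)
    }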

@tbg
Member

tbg commented Sep 10, 2020

Oops, right, I stopped reading too early. OK, where does the ConditionFailedError originate? Is it

return nil, nil, &roachpb.ConditionFailedError{ActualValue: existingDescKV.Value}

or from the actual CPut? Either way, I'm also confused. Do you have more of your custom logging here?

andreimatei added a commit to andreimatei/cockroach that referenced this issue Sep 10, 2020
We dropped this field recently, but unfortunately that wasn't safe for
mixed-version clusters. The rub is that 20.1 nodes need to roundtrip the
proto through 20.2 nodes in a fairly subtle way. When it comes back to
the 20.1 node, the descriptor needs to compare Equal() to the original.
We configure our protos to not preserve unrecognized fields, so removing
the field breaks this round-tripping.

Specifically, the scenario which broke is the following:
1. A 20.1 node processes an AdminSplit, and performs the transaction
   writing the new descriptors. The descriptors have the
   GenerationComparable field set.
2. The lease changes while the respective txn is running. The lease
   moves to a 20.2 node.
3. The 20.2 node evaluates the EndTxn, and computes the split trigger
   info that's going to be replicated. The EndTxn has split info in it
   containing the field set, but the field is dropped when converting
   that into the proposed SplitTrigger (since the 20.2 node unmarshals and
   re-marshals the descriptors).
4. All the 20.1 replicas of the ranges involved now apply the respective
   trigger via Raft, and their in-memory state doesn't have the field
   set. This doesn't match the bytes written in the database, which have
   the field.
5. The discrepancy between the in-memory state and the db state is a
   problem, as it causes the 20.1 node to spin if it tries to perform
   subsequent merge/split operations. The reason is that the code
   performing these operations short-circuits itself if it detects that
   the descriptor has changed while the operation was running. This
   detection is performed via the generated Equal() method, and it
   mis-fires because of the phantom field. That detection happens here:
   https://github.com/cockroachdb/cockroach/blob/79c01d28da9c379f67bb41beef3d85ad3bee1da1/pkg/kv/kvserver/replica_command.go#L1957

This patch takes precautions so that we can remove the field again in
21.1: I'm merging this in 21.1, I'll backport it to 20.2, and then I'll
come back to 21.1 and remove the field.

Fixes cockroachdb#53535

Release note: None
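The round-trip described in the commit message can be reproduced in miniature. The sketch below uses encoding/json and two hand-written struct versions purely as a stand-in for the proto encoding (which, as the message notes, is configured not to preserve unrecognized fields): a value that passes through the narrower 20.2-style type comes back to the 20.1-style type with the field silently cleared, so it no longer compares equal to the original:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // descV201 mirrors the 20.1 schema: it still has the field.
    type descV201 struct {
        Generation           int64 `json:"generation"`
        GenerationComparable bool  `json:"generation_comparable,omitempty"`
    }

    // descV202 mirrors the 20.2 schema: the field was removed, and (like the
    // proto config described above) unknown fields are dropped on decode.
    type descV202 struct {
        Generation int64 `json:"generation"`
    }

    func main() {
        orig := descV201{Generation: 7, GenerationComparable: true}

        // 20.1 node marshals the descriptor and sends it to a 20.2 node.
        wire1, _ := json.Marshal(orig)

        // 20.2 node unmarshals and re-marshals it (as in the split trigger path).
        var on202 descV202
        _ = json.Unmarshal(wire1, &on202)
        wire2, _ := json.Marshal(on202)

        // Back on a 20.1 node: the field is gone, so the round-tripped value no
        // longer compares equal to the original.
        var back descV201
        _ = json.Unmarshal(wire2, &back)
        fmt.Printf("original:      %+v\n", orig)
        fmt.Printf("round-tripped: %+v\n", back)
        fmt.Println("equal after round-trip:", orig == back) // false
    }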
andreimatei added a commit to andreimatei/cockroach that referenced this issue Sep 14, 2020
We dropped this field recently, but unfortunately that wasn't safe for
mixed-version clusters. The rub is that 20.1 nodes need to roundtrip the
proto through 20.2 nodes in a fairly subtle way. When it comes back to
the 20.1 node, the descriptor needs to compare Equal() to the original.
We configure our protos to not preserve unrecognized fields, so removing
the field breaks this round-tripping.

Specifically, the scenario which broke is the following:
1. A 20.1 node processes an AdminSplit, and performs the transaction
   writing the new descriptors. The descriptors have the
   GenerationComparable field set.
2. The lease changes while the respective txn is running. The lease
   moves to a 20.2 node.
3. The 20.2 node evaluates the EndTxn, and computes the split trigger
   info that's going to be replicated. The EndTxn has split info in it
   containing the field set, but the field is dropped when converting
   that into the proposed SplitTrigger (since the 20.2 node unmarshals and
   re-marshals the descriptors).
4. All the 20.1 replicas of the ranges involved now apply the respective
   trigger via Raft, and their in-memory state doesn't have the field
   set. This doesn't match the bytes written in the database, which have
   the field.
5. The discrepancy between the in-memory state and the db state is a
   problem, as it causes the 20.1 node to spin if it tries to perform
   subsequent merge/split operations. The reason is that the code
   performing these operations short-circuits itself if it detects that
   the descriptor has changed while the operation was running. This
   detection is performed via the generated Equal() method, and it
   mis-fires because of the phantom field. That detection happens here:
   https://github.com/cockroachdb/cockroach/blob/79c01d28da9c379f67bb41beef3d85ad3bee1da1/pkg/kv/kvserver/replica_command.go#L1957

This patch takes precautions so that we can remove the field again in
21.1: I'm merging this in 21.1, I'll backport it to 20.2, and then I'll
come back to 21.1 and remove the field. Namely, the patch changes
RangeDesc.Equal() to ignore that field (and the method is no longer
generated).

Fixes cockroachdb#53535

Release note: None
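A minimal Go sketch of the approach described in that last paragraph (illustrative only, not the actual patch; rangeDesc is a simplified stand-in for roachpb.RangeDescriptor): the equality method is hand-written instead of generated and deliberately skips the deprecated field, so a copy that lost the field still compares equal to the stored version and the split logic stops spinning:

    package main

    import "fmt"

    // rangeDesc is a simplified stand-in for roachpb.RangeDescriptor.
    type rangeDesc struct {
        StartKey, EndKey     string
        Generation           int64
        GenerationComparable *bool // kept only for mixed-version compatibility
    }

    // Equal is hand-written instead of generated: it deliberately ignores
    // GenerationComparable so that descriptors which round-tripped through a
    // node that drops the field still compare equal to the stored version.
    func (r rangeDesc) Equal(o rangeDesc) bool {
        return r.StartKey == o.StartKey &&
            r.EndKey == o.EndKey &&
            r.Generation == o.Generation
    }

    func main() {
        t := true
        stored := rangeDesc{StartKey: "/Table/56", EndKey: "/Table/58", Generation: 7, GenerationComparable: &t}
        roundTripped := stored
        roundTripped.GenerationComparable = nil // field dropped by a 20.2 node

        fmt.Println(stored.Equal(roundTripped)) // true: the split no longer spins
    }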
andreimatei added a commit to andreimatei/cockroach that referenced this issue Sep 16, 2020
craig bot pushed a commit that referenced this issue Sep 16, 2020
54199: kvserver: reintroduce RangeDesc.GenerationComparable r=andreimatei a=andreimatei

Co-authored-by: Andrei Matei <[email protected]>
@craig craig bot closed this as completed in a578719 Sep 16, 2020
arulajmani pushed a commit that referenced this issue Sep 18, 2020