storage: TestReplicateQueueRebalance failed under stress #38156

Closed
cockroach-teamcity opened this issue Jun 14, 2019 · 3 comments
Assignees: darinpp
Labels: branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

Comments

@cockroach-teamcity
Member

SHA: https://github.com/cockroachdb/cockroach/commits/91f2f85c13d1465875326cc1c0ecacdf1874a291

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestReplicateQueueRebalance PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1337745&tab=buildLog

I190614 08:25:40.234766 115750 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n5: failed to grpc dial n5 at 127.0.0.1:34487: context canceled
I190614 08:25:40.236497 115751 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n2: failed to grpc dial n2 at 127.0.0.1:37855: context canceled
I190614 08:25:40.239026 115735 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n1: failed to grpc dial n1 at 127.0.0.1:43479: context canceled
I190614 08:25:40.240145 115739 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n1: failed to grpc dial n1 at 127.0.0.1:43479: context canceled
I190614 08:25:40.240287 115734 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n3: failed to grpc dial n3 at 127.0.0.1:36321: context canceled
I190614 08:25:40.241103 115738 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n2: failed to grpc dial n2 at 127.0.0.1:37855: context canceled
I190614 08:25:40.242243 115740 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n4: failed to grpc dial n4 at 127.0.0.1:41115: context canceled
I190614 08:25:40.243097 115741 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n5: failed to grpc dial n5 at 127.0.0.1:34487: context canceled
I190614 08:25:40.241240 115707 util/stop/stopper.go:548  quiescing; tasks left:
1      [async] transport racer
1      [async] closedts-rangefeed-subscriber
I190614 08:25:40.244290 115707 util/stop/stopper.go:548  quiescing; tasks left:
1      [async] transport racer
I190614 08:25:40.293988 115757 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n4: failed to grpc dial n4 at 127.0.0.1:41115: context canceled
I190614 08:25:40.296327 115755 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n2: failed to grpc dial n2 at 127.0.0.1:37855: context canceled
I190614 08:25:40.298203 115756 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n3: failed to grpc dial n3 at 127.0.0.1:36321: context canceled
I190614 08:25:40.384646 115784 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n1: failed to grpc dial n1 at 127.0.0.1:43479: context canceled
I190614 08:25:40.396568 115787 rpc/nodedialer/nodedialer.go:95  [ct-client] unable to connect to n2: failed to grpc dial n2 at 127.0.0.1:37855: context canceled
W190614 08:25:40.483703 112156 storage/store.go:1654  [n2,s2,r1/3:/{Min-System/}] could not gossip first range descriptor: node unavailable; try another peer
W190614 08:25:40.528396 112156 storage/store.go:1654  [n2,s2,r1/3:/{Min-System/}] could not gossip first range descriptor: node unavailable; try another peer
W190614 08:25:40.635514 112156 storage/store.go:1654  [n2,s2,r1/3:/{Min-System/}] could not gossip first range descriptor: node unavailable; try another peer
W190614 08:25:40.841916 112156 storage/store.go:1654  [n2,s2,r1/3:/{Min-System/}] could not gossip first range descriptor: node unavailable; try another peer
I190614 08:25:41.018602 111683 kv/transport_race.go:91  transport race promotion: ran 113 iterations on up to 2958 requests
W190614 08:25:41.071362 112459 storage/store.go:1654  [n3,s3,r1/2:/{Min-System/}] could not gossip first range descriptor: node unavailable; try another peer
--- FAIL: TestReplicateQueueRebalance (62.82s)
	soon.go:49: condition failed to evaluate within 45s: not balanced: [13 13 22 14 13]
		goroutine 111452 [running]:
		runtime/debug.Stack(0xa7a358200, 0xc4217731d0, 0x3c484a0)
			/usr/local/go/src/runtime/debug/stack.go:24 +0xb5
		github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x3ca61e0, 0xc4206c0000, 0xc4203d8ce0)
			/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:50 +0x172
		github.com/cockroachdb/cockroach/pkg/storage_test.TestReplicateQueueRebalance(0xc4206c0000)
			/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue_test.go:100 +0x717
		testing.tRunner(0xc4206c0000, 0x3505b68)
			/usr/local/go/src/testing/testing.go:777 +0x16e
		created by testing.(*T).Run
			/usr/local/go/src/testing/testing.go:824 +0x565
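
For context, the failure above is testutils.SucceedsSoon giving up after its 45-second deadline while waiting for the five stores' replica counts to even out. A minimal sketch of the shape of that check, assuming a hypothetical countReplicas helper and minReplicas threshold (the real test computes its own values and predicate):

package storage_test

import (
	"fmt"
	"testing"

	"github.com/cockroachdb/cockroach/pkg/testutils"
)

// assertBalanced is a hedged sketch, not the test's actual code: poll the
// per-store replica counts until every store holds at least minReplicas, or
// give up when SucceedsSoon hits its deadline (the "condition failed to
// evaluate within 45s" path seen above).
func assertBalanced(t *testing.T, countReplicas func() []int, minReplicas int) {
	testutils.SucceedsSoon(t, func() error {
		counts := countReplicas() // e.g. [13 13 22 14 13] in this failure
		for _, c := range counts {
			if c < minReplicas {
				return fmt.Errorf("not balanced: %v", counts)
			}
		}
		return nil
	})
}

The counts in the failure ([13 13 22 14 13]) show one store still holding noticeably more replicas than the rest when the deadline hit.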

cockroach-teamcity added C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels Jun 14, 2019
tbg assigned darinpp and unassigned petermattis Jun 17, 2019
@andreimatei
Contributor

This test is also routinely extremely slow: #38550

@cockroach-teamcity
Member Author

(storage).TestReplicateQueueRebalance failed on master@dba32b33f6eda3af1eef4f8636fb0de592f4cd86:

Fatal error:

F200115 22:59:17.131527 1110933 kv/txn_interceptor_span_refresher.go:149  [n1,replicate,s1,r44/1:/Table/5{5-6},txn=55dc1354] unexpected batch read timestamp: 1579129156.400563836,0. Expected refreshed timestamp: 1579129156.562737774,1. ba: [txn: 55dc1354], EndTxn(commit:false) [/Min] tsflex:false

Stack:

goroutine 1110933 [running]:
github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0x9c10d01, 0x0, 0x0, 0xc0083264c0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/get_stacks.go:25 +0xc6
github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(0x9245d00, 0xc000000004, 0x8a1d41a, 0x24, 0x95, 0xc00067a8c0, 0xdf)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:211 +0xbe3
github.com/cockroachdb/cockroach/pkg/util/log.addStructured(0x6989f40, 0xc001fb84b0, 0xc000000004, 0x2, 0x5ca0609, 0x4d, 0xc007c13880, 0x3, 0x3)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/structured.go:66 +0x291
github.com/cockroachdb/cockroach/pkg/util/log.logDepth(0x6989f40, 0xc001fb84b0, 0x1, 0xc000000004, 0x5ca0609, 0x4d, 0xc007c13880, 0x3, 0x3)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:44 +0x9a
github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(...)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:155
github.com/cockroachdb/cockroach/pkg/kv.(*txnSpanRefresher).SendLocked(0xc000d3a7c0, 0x6989f40, 0xc001fb84b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_span_refresher.go:149 +0xb2d
github.com/cockroachdb/cockroach/pkg/kv.(*txnPipeliner).SendLocked(0xc000d3a710, 0x6989f40, 0xc001fb84b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_pipeliner.go:208 +0x1ea
github.com/cockroachdb/cockroach/pkg/kv.(*txnSeqNumAllocator).SendLocked(0xc000d3a6f0, 0x6989f40, 0xc001fb84b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_seq_num_allocator.go:104 +0x2c3
github.com/cockroachdb/cockroach/pkg/kv.(*txnHeartbeater).SendLocked(0xc000d3a650, 0x6989f40, 0xc001fb84b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_heartbeater.go:168 +0xff
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).Send(0xc000d3a480, 0x6989f40, 0xc001fb84b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:482 +0x6b7
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).sendUsingSender(0xc00625cf80, 0x6989f00, 0xc006e534a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:754 +0x174
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Send(0xc00731b8c0, 0x6989f00, 0xc006e534a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:862 +0x1d2
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).rollback(0xc00731b8c0, 0x6989f00, 0xc006e534a0, 0xc00731b8f8)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:660 +0x5fc
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).CleanupOnError(0xc00731b8c0, 0x6989f00, 0xc006e534a0, 0x690b1c0, 0xc0047e4bb0)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:554 +0x9e
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).Txn(0xc00625cf80, 0x6989f00, 0xc006e534a0, 0xc009a38ec0, 0x4, 0x4)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:720 +0x21d
github.com/cockroachdb/cockroach/pkg/storage.execChangeReplicasTxn(0x6989f00, 0xc006e534a0, 0xc003f5a700, 0xc0031b6a80, 0x5bd454a, 0x9, 0xc000238f00, 0x92, 0xc005353f10, 0x1, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1688 +0x2fa
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).atomicReplicationChange(0xc000e16400, 0x6989f00, 0xc006e534a0, 0xc0031b6a80, 0x2, 0x5bd454a, 0x9, 0xc000238f00, 0x92, 0xc00575b880, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1343 +0x99b
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).changeReplicasImpl(0xc000e16400, 0x6989f00, 0xc006e534a0, 0xc005c8f760, 0x2, 0x5bd454a, 0x9, 0xc000238f00, 0x92, 0xc00575b880, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1055 +0x604
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).ChangeReplicas(0xc000e16400, 0x6989f00, 0xc006e534a0, 0xc005c8f760, 0x2, 0x5bd454a, 0x9, 0xc000238f00, 0x92, 0xc00575b880, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:975 +0x298
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).changeReplicas(0xc00056be30, 0x6989f00, 0xc006e534a0, 0xc000e16400, 0xc00575b880, 0x2, 0x2, 0xc005c8f760, 0x2, 0x5bd454a, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:1000 +0xee
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).considerRebalance(0xc00056be30, 0x6989f00, 0xc006e534a0, 0xc000e16400, 0xc0019f8240, 0x3, 0x4, 0xc006e919a8, 0x0, 0x1, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:886 +0x949
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).processOneChange(0xc00056be30, 0x6989f00, 0xc006e534a0, 0xc000e16400, 0xc0088f79a8, 0x0, 0x3fc3333333333333, 0x0, 0x2faf080)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:409 +0x12ff
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).process(0xc00056be30, 0x6989f00, 0xc006e534a0, 0xc000e16400, 0xc0069e4b90, 0x5bdd544, 0xd)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:267 +0x22a
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processReplica.func1(0x6989f00, 0xc006e534a0, 0xdf8475800, 0x6989f00)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:898 +0x24e
github.com/cockroachdb/cockroach/pkg/util/contextutil.RunWithTimeout(0x6989f00, 0xc006e534a0, 0xc006fa8570, 0x22, 0xdf8475800, 0xc002cc3d98, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/contextutil/context.go:135 +0xde
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processReplica(0xc001f3e420, 0x6989f40, 0xc0075244b0, 0x69ff600, 0xc000e16400, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:857 +0x34b
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1.2(0x6989f40, 0xc005c46690)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:785 +0x116
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc002f8c5a0, 0x6989f40, 0xc005c46690, 0xc006fa8390, 0x2d, 0x0, 0x0, 0xc005c466c0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:322 +0x163
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:317 +0x14b
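
The fatal error above is an internal consistency assertion in the span refresher interceptor: the batch carried a read timestamp that did not match the timestamp the interceptor believed the transaction had been refreshed to. A heavily simplified sketch of that invariant; names beyond those visible in the message and stack are assumptions, and this is not the actual CockroachDB code:

package kv // sketch only

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/roachpb"
	"github.com/cockroachdb/cockroach/pkg/util/hlc"
	"github.com/cockroachdb/cockroach/pkg/util/log"
)

// assertRefreshedTimestamp mimics the shape of the check that fired at
// txn_interceptor_span_refresher.go:149: a mismatch between the batch's read
// timestamp and the interceptor's tracked refreshed timestamp is treated as
// an assertion failure.
func assertRefreshedTimestamp(
	ctx context.Context, ba *roachpb.BatchRequest, refreshedTimestamp hlc.Timestamp,
) {
	if ba.Txn != nil && ba.Txn.ReadTimestamp != refreshedTimestamp {
		log.Fatalf(ctx,
			"unexpected batch read timestamp: %s. Expected refreshed timestamp: %s. ba: %s",
			ba.Txn.ReadTimestamp, refreshedTimestamp, ba)
	}
}
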
Log preceding fatal error

W200115 22:59:17.051759 521175 storage/raft_transport.go:637  [n5] while processing outgoing Raft queue to node 4: rpc error: code = Canceled desc = grpc: the client connection is closing:
W200115 22:59:17.052987 385871 storage/raft_transport.go:637  [n5] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = grpc: the client connection is closing:
I200115 22:59:17.053368 1118997 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n5: failed to connect to n5 at 127.0.0.1:43613: context canceled
I200115 22:59:17.058944 380142 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (split): node unavailable; try another peer
E200115 22:59:17.065389 1118544 storage/queue.go:1035  [n1,merge,s1,r40/1:/Table/5{1-2}] failed to send RPC: sending to all 3 replicas failed; last error: <nil> failed to connect to n3 at 127.0.0.1:43783: context canceled
W200115 22:59:17.066698 384827 storage/raft_transport.go:637  [n1] while processing outgoing Raft queue to node 5: rpc error: code = Unavailable desc = transport is closing:
I200115 22:59:17.069522 379975 storage/queue.go:524  [n5,s5,r3/3:/System/{NodeLive…-tsd}] rate limited in MaybeAdd (raftlog): node unavailable; try another peer
E200115 22:59:17.073090 1118796 storage/queue.go:1035  [n2,replicaGC,s2,r44/4:/Table/5{5-6}] failed to send RPC: sending to all 3 replicas failed; last error: <nil> failed to connect to n3 at 127.0.0.1:43783: context canceled
I200115 22:59:17.076124 1110933 rpc/nodedialer/nodedialer.go:160  [n1,replicate,s1,r44/1:/Table/5{5-6},txn=55dc1354] unable to connect to n3: failed to connect to n3 at 127.0.0.1:43783: context canceled
I200115 22:59:17.077921 370220 storage/queue.go:524  [n1,s1,r3/1:/System/{NodeLive…-tsd}] rate limited in MaybeAdd (raftlog): node unavailable; try another peer
W200115 22:59:17.080369 374030 ts/db.go:194  [n2,ts-poll] error writing time series data: failed to send RPC: sending to all 3 replicas failed; last error: <nil> failed to connect to n4 at 127.0.0.1:45123: context canceled
I200115 22:59:17.084829 370267 storage/queue.go:524  [n1,s1] rate limited in MaybeAdd (replicaGC): node unavailable; try another peer
W200115 22:59:17.085798 375194 ts/db.go:194  [n3,ts-poll] error writing time series data: node unavailable; try another peer
I200115 22:59:17.095742 376929 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (replicaGC): node unavailable; try another peer
I200115 22:59:17.095889 1118772 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n3: failed to connect to n3 at 127.0.0.1:43783: context canceled
I200115 22:59:17.097744 1118770 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n2: failed to connect to n2 at 127.0.0.1:38285: context canceled
I200115 22:59:17.098666 1118866 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n4: failed to connect to n4 at 127.0.0.1:45123: context canceled
I200115 22:59:17.099639 1119013 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n1: failed to connect to n1 at 127.0.0.1:44429: context canceled
I200115 22:59:17.100047 1118867 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n5: failed to connect to n5 at 127.0.0.1:43613: context canceled
I200115 22:59:17.102087 1118865 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n2: failed to connect to n2 at 127.0.0.1:38285: context canceled
I200115 22:59:17.103397 1118771 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n5: failed to connect to n5 at 127.0.0.1:43613: context canceled
I200115 22:59:17.104490 375173 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
W200115 22:59:17.106335 370180 ts/db.go:194  [n1,ts-poll] error writing time series data: failed to send RPC: sending to all 3 replicas failed; last error: <nil> failed to connect to n4 at 127.0.0.1:45123: context canceled
I200115 22:59:17.106770 380149 storage/node_liveness.go:802  [n5,liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (error=rpc error: code = Unavailable desc = transport is closing [exhausted])
W200115 22:59:17.107382 1112466 internal/client/txn.go:558  [n3,replicate,s3,r42/3:/Table/5{3-4}] failure aborting transaction: node unavailable; try another peer; abort caused by: log-range-event: failed to send RPC: sending to all 5 replicas failed; last error: (err: node unavailable; try another peer) <nil>
I200115 22:59:17.108918 380142 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (replicaGC): node unavailable; try another peer
E200115 22:59:17.109208 1119002 vendor/google.golang.org/grpc/pickfirst.go:61  pickfirstBalancer: failed to NewSubConn: rpc error: code = Canceled desc = grpc: the client connection is closing
E200115 22:59:17.109618 1119003 vendor/google.golang.org/grpc/pickfirst.go:61  pickfirstBalancer: failed to NewSubConn: rpc error: code = Canceled desc = grpc: the client connection is closing
I200115 22:59:17.112621 370267 storage/queue.go:524  [n1,s1] rate limited in MaybeAdd (raftsnapshot): node unavailable; try another peer
I200115 22:59:17.113334 1110933 rpc/nodedialer/nodedialer.go:160  [n1,replicate,s1,r44/1:/Table/5{5-6},txn=55dc1354] unable to connect to n5: failed to connect to n5 at 127.0.0.1:43613: context canceled
E200115 22:59:17.113988 1119017 vendor/google.golang.org/grpc/pickfirst.go:61  pickfirstBalancer: failed to NewSubConn: rpc error: code = Canceled desc = grpc: the client connection is closing
I200115 22:59:17.115695 376929 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (raftsnapshot): node unavailable; try another peer
I200115 22:59:17.116765 375173 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (timeSeriesMaintenance): node unavailable; try another peer
W200115 22:59:17.119206 380149 internal/client/txn.go:558  [n5,liveness-hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
I200115 22:59:17.123581 1112466 storage/replica_command.go:1068  [n3,replicate,s3,r42/3:/Table/5{3-4}] could not promote [n2,s2] to voter, rolling back: change replicas of r42 failed: log-range-event: failed to send RPC: sending to all 5 replicas failed; last error: (err: node unavailable; try another peer) <nil>
I200115 22:59:17.123722 1118976 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n3: failed to connect to n3 at 127.0.0.1:43783: context canceled
I200115 22:59:17.124468 380142 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (raftsnapshot): node unavailable; try another peer
I200115 22:59:17.125652 1118974 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n1: failed to connect to n1 at 127.0.0.1:44429: context canceled
I200115 22:59:17.125807 1118977 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n2: failed to connect to n2 at 127.0.0.1:38285: context canceled
I200115 22:59:17.125961 370267 storage/queue.go:524  [n1,s1] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
E200115 22:59:17.126770 1119062 vendor/google.golang.org/grpc/pickfirst.go:61  pickfirstBalancer: failed to NewSubConn: rpc error: code = Canceled desc = grpc: the client connection is closing
I200115 22:59:17.126874 376929 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
I200115 22:59:17.127099 1118975 rpc/nodedialer/nodedialer.go:160  [ct-client] unable to connect to n4: failed to connect to n4 at 127.0.0.1:45123: context canceled
W200115 22:59:17.127607 380149 storage/node_liveness.go:469  [n5,liveness-hb] failed node liveness heartbeat: node unavailable; try another peer
I200115 22:59:17.128588 1110933 rpc/nodedialer/nodedialer.go:160  [n1,replicate,s1,r44/1:/Table/5{5-6},txn=55dc1354] unable to connect to n4: failed to connect to n4 at 127.0.0.1:45123: context canceled
I200115 22:59:17.128874 1112466 storage/replica_command.go:1392  [n3,replicate,s3,r42/3:/Table/5{3-4}] failed to rollback learner n2,s2, abandoning it for the replicate queue: change replicas of r42 failed: fetching current range descriptor value: node unavailable; try another peer
I200115 22:59:17.128954 375173 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (split): node unavailable; try another peer
I200115 22:59:17.129179 380142 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
I200115 22:59:17.129763 370267 storage/queue.go:524  [n1,s1] rate limited in MaybeAdd (timeSeriesMaintenance): node unavailable; try another peer
I200115 22:59:17.130146 376929 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (timeSeriesMaintenance): node unavailable; try another peer

Repro

Parameters:

  • GOFLAGS=-json
make stressrace TESTS=TestReplicateQueueRebalance PKG=./pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1

powered by pkg/cmd/internal/issues

@cockroach-teamcity
Member Author

(storage).TestReplicateQueueRebalance failed on master@1a6c7b9c97a121c2a35c45e8b008dff7262614e7:

Fatal error:

F200116 21:54:35.411031 1159825 kv/txn_interceptor_span_refresher.go:149  [n2,replicate,s2,r41/2:/Table/5{2-3},txn=3faf8b47] unexpected batch read timestamp: 1579211674.802674101,0. Expected refreshed timestamp: 1579211675.134172458,1. ba: [txn: 3faf8b47], EndTxn(commit:false) [/Min] tsflex:false

Stack:

goroutine 1159825 [running]:
github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0x9c13d01, 0x0, 0x0, 0xc00050b000)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/get_stacks.go:25 +0xc6
github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(0x9248d40, 0xc000000004, 0x8a2056a, 0x24, 0x95, 0xc0032b6460, 0xdf)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:211 +0xbe3
github.com/cockroachdb/cockroach/pkg/util/log.addStructured(0x698c6c0, 0xc003ed7620, 0xc000000004, 0x2, 0x5ca2510, 0x4d, 0xc00775b768, 0x3, 0x3)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/structured.go:66 +0x291
github.com/cockroachdb/cockroach/pkg/util/log.logDepth(0x698c6c0, 0xc003ed7620, 0x1, 0xc000000004, 0x5ca2510, 0x4d, 0xc00775b768, 0x3, 0x3)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:44 +0x9a
github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(...)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:155
github.com/cockroachdb/cockroach/pkg/kv.(*txnSpanRefresher).SendLocked(0xc000a0cc40, 0x698c6c0, 0xc003ed7620, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_span_refresher.go:149 +0xb2d
github.com/cockroachdb/cockroach/pkg/kv.(*txnPipeliner).SendLocked(0xc000a0cb90, 0x698c6c0, 0xc003ed7620, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_pipeliner.go:208 +0x1ea
github.com/cockroachdb/cockroach/pkg/kv.(*txnSeqNumAllocator).SendLocked(0xc000a0cb70, 0x698c6c0, 0xc003ed7620, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_seq_num_allocator.go:104 +0x2c3
github.com/cockroachdb/cockroach/pkg/kv.(*txnHeartbeater).SendLocked(0xc000a0cad0, 0x698c6c0, 0xc003ed7620, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_interceptor_heartbeater.go:168 +0xff
github.com/cockroachdb/cockroach/pkg/kv.(*TxnCoordSender).Send(0xc000a0c900, 0x698c6c0, 0xc003ed7620, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/kv/txn_coord_sender.go:482 +0x6b7
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).sendUsingSender(0xc00137b780, 0x698c680, 0xc007a9b920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:754 +0x174
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).Send(0xc0057c13b0, 0x698c680, 0xc007a9b920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:862 +0x1d2
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).rollback(0xc0057c13b0, 0x698c680, 0xc007a9b920, 0xc0057c13e8)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:660 +0x5fc
github.com/cockroachdb/cockroach/pkg/internal/client.(*Txn).CleanupOnError(0xc0057c13b0, 0x698c680, 0xc007a9b920, 0x690d780, 0x9c140b8)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/txn.go:554 +0x9e
github.com/cockroachdb/cockroach/pkg/internal/client.(*DB).Txn(0xc00137b780, 0x698c680, 0xc007a9b920, 0xc00775cda8, 0x4, 0x4)
	/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:720 +0x21d
github.com/cockroachdb/cockroach/pkg/storage.execChangeReplicasTxn(0x698c680, 0xc007a9b920, 0xc000445c00, 0xc00004fb90, 0x5bfef0e, 0x19, 0x0, 0x0, 0xc00775ced4, 0x1, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1688 +0x2fa
github.com/cockroachdb/cockroach/pkg/storage.maybeLeaveAtomicChangeReplicasAndRemoveLearners(0x698c680, 0xc007a9b920, 0xc000445c00, 0xc00004fb90, 0x5bd640a, 0x9, 0xc001911a40)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1134 +0x3f0
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).atomicReplicationChange(0xc001664000, 0x698c680, 0xc007a9b920, 0xc002b6aee0, 0x2, 0x5bd640a, 0x9, 0xc001911a40, 0x92, 0xc00686caac, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1354 +0xa8d
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).changeReplicasImpl(0xc001664000, 0x698c680, 0xc007a9b920, 0xc003e668c0, 0x2, 0x5bd640a, 0x9, 0xc001911a40, 0x92, 0xc00686caac, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:1055 +0x604
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).ChangeReplicas(0xc001664000, 0x698c680, 0xc007a9b920, 0xc001a9cc60, 0x2, 0x5bd640a, 0x9, 0xc001911a40, 0x92, 0xc00686caa0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_command.go:975 +0x298
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).changeReplicas(0xc00471d110, 0x698c680, 0xc007a9b920, 0xc001664000, 0xc00686caa0, 0x2, 0x2, 0xc001a9cc60, 0x2, 0x5bd640a, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:1000 +0xee
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).considerRebalance(0xc00471d110, 0x698c680, 0xc007a9b920, 0xc001664000, 0xc00766dd40, 0x3, 0x4, 0xc0048b19a8, 0x0, 0x1, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:886 +0x949
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).processOneChange(0xc00471d110, 0x698c680, 0xc007a9b920, 0xc001664000, 0xc004ee99a8, 0x0, 0x3fc3333333333333, 0x0, 0x2faf080)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:409 +0x12ff
github.com/cockroachdb/cockroach/pkg/storage.(*replicateQueue).process(0xc00471d110, 0x698c680, 0xc007a9b920, 0xc001664000, 0xc000bb5310, 0x5bdf404, 0xd)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replicate_queue.go:267 +0x22a
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processReplica.func1(0x698c680, 0xc007a9b920, 0xdf8475800, 0x698c680)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:898 +0x24e
github.com/cockroachdb/cockroach/pkg/util/contextutil.RunWithTimeout(0x698c680, 0xc007a9b920, 0xc0038a5410, 0x22, 0xdf8475800, 0xc007fbdd98, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/contextutil/context.go:135 +0xde
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processReplica(0xc00315eb00, 0x698c6c0, 0xc003b3ac30, 0x6a01de0, 0xc001664000, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:857 +0x34b
github.com/cockroachdb/cockroach/pkg/storage.(*baseQueue).processLoop.func1.2(0x698c6c0, 0xc005f84a50)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/queue.go:785 +0x116
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc002706a00, 0x698c6c0, 0xc005f84a50, 0xc0038a5170, 0x2d, 0x0, 0x0, 0xc005f84a80)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:322 +0x163
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:317 +0x14b
Log preceding fatal error

I200116 21:54:33.827944 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (replicate): throttled on async limiting semaphore
I200116 21:54:33.847815 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (replicate): throttled on async limiting semaphore
I200116 21:54:33.859487 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (gc): throttled on async limiting semaphore
I200116 21:54:33.863727 374858 gossip/gossip.go:567  [n1] gossip status (ok, 5 nodes)
gossip client (0/3 cur/max conns)
gossip server (3/3 cur/max conns, infos 3451/851 sent/received, bytes 735043B/210022B sent/received)
  2: 127.0.0.1:39685 (2m58s)
  3: 127.0.0.1:37393 (2m58s)
  4: 127.0.0.1:33117 (2m57s)
I200116 21:54:33.874327 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (gc): throttled on async limiting semaphore
I200116 21:54:33.898493 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (merge): throttled on async limiting semaphore
I200116 21:54:33.935958 374862 server/status/runtime.go:498  [n1] runtime stats: 2.3 GiB RSS, 1941 goroutines, 124 MiB/41 MiB/184 MiB GO alloc/idle/total, 250 MiB/312 MiB CGO alloc/total, 3861.6 CGO/sec, 632.1/61.3 %(u/s)time, 0.0 %gc (10x), 1.7 MiB/1.7 MiB (r/w)net
I200116 21:54:33.992606 1159825 storage/replica_raft.go:248  [n2,s2,r41/2:/Table/5{2-3}] proposing ENTER_JOINT(r3 l3) REMOVE_REPLICA[(n5,s5):3VOTER_DEMOTING]: after=[(n3,s3):1 (n2,s2):2 (n5,s5):3VOTER_DEMOTING (n1,s1):4] next=5
I200116 21:54:34.000054 1159314 storage/replica_command.go:1706  [n3,replicate,s3,r29/4:/Table/5{0-1}] change replicas (add [] remove [(n2,s2):2LEARNER]): existing descriptor r29:/Table/5{0-1} [(n3,s3):4, (n2,s2):2LEARNER, (n4,s4):6, (n1,s1):7, next=8, gen=25, sticky=9223372036.854775807,2147483647]
I200116 21:54:34.016156 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (merge): throttled on async limiting semaphore
I200116 21:54:34.076988 378714 storage/queue.go:524  [n2,s2] rate limited in MaybeAdd (merge): throttled on async limiting semaphore
I200116 21:54:34.102128 378714 storage/queue.go:524  [n2,s2] rate limited in MaybeAdd (gc): throttled on async limiting semaphore
I200116 21:54:34.224171 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (replicate): throttled on async limiting semaphore
I200116 21:54:34.231734 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (gc): throttled on async limiting semaphore
I200116 21:54:34.270384 1159825 storage/replica_command.go:1706  [n2,replicate,s2,r41/2:/Table/5{2-3}] change replicas (add [] remove []): existing descriptor r41:/Table/5{2-3} [(n3,s3):1, (n2,s2):2, (n5,s5):3VOTER_DEMOTING, (n1,s1):4, next=5, gen=21, sticky=9223372036.854775807,2147483647]
I200116 21:54:34.396726 378714 storage/queue.go:524  [n2,s2] rate limited in MaybeAdd (replicate): throttled on async limiting semaphore
I200116 21:54:34.715208 1159825 storage/replica_raft.go:248  [n2,s2,r41/2:/Table/5{2-3}] proposing LEAVE_JOINT: after=[(n3,s3):1 (n2,s2):2 (n5,s5):3LEARNER (n1,s1):4] next=5
I200116 21:54:34.807077 1159825 storage/replica_command.go:1706  [n2,replicate,s2,r41/2:/Table/5{2-3}] change replicas (add [] remove [(n5,s5):3LEARNER]): existing descriptor r41:/Table/5{2-3} [(n3,s3):1, (n2,s2):2, (n5,s5):3LEARNER, (n1,s1):4, next=5, gen=22, sticky=9223372036.854775807,2147483647]
I200116 21:54:34.910005 1159314 storage/replica_raft.go:248  [n3,s3,r29/4:/Table/5{0-1}] proposing SIMPLE(r2) REMOVE_REPLICA[(n2,s2):2LEARNER]: after=[(n3,s3):4 (n1,s1):7 (n4,s4):6] next=8
I200116 21:54:34.991899 378775 storage/store_remove_replica.go:129  [n2,s2,r29/2:/Table/5{0-1}] removing replica r29/2
I200116 21:54:35.371482 374627 util/stop/stopper.go:539  quiescing
I200116 21:54:35.379045 1178711 util/stop/stopper.go:539  quiescing
I200116 21:54:35.380484 1178712 util/stop/stopper.go:539  quiescing
I200116 21:54:35.382347 1178710 util/stop/stopper.go:539  quiescing
I200116 21:54:35.382917 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (split): node unavailable; try another peer
I200116 21:54:35.383426 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (replicaGC): node unavailable; try another peer
I200116 21:54:35.383659 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (raftlog): node unavailable; try another peer
I200116 21:54:35.383872 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (raftsnapshot): node unavailable; try another peer
I200116 21:54:35.384032 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
I200116 21:54:35.384196 379943 storage/queue.go:524  [n3,s3] rate limited in MaybeAdd (timeSeriesMaintenance): node unavailable; try another peer
I200116 21:54:35.386081 1178713 util/stop/stopper.go:539  quiescing
I200116 21:54:35.387521 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (replicaGC): node unavailable; try another peer
I200116 21:54:35.387927 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (raftlog): node unavailable; try another peer
I200116 21:54:35.388297 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (raftsnapshot): node unavailable; try another peer
I200116 21:54:35.388516 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
I200116 21:54:35.388687 381977 storage/queue.go:524  [n4,s4] rate limited in MaybeAdd (timeSeriesMaintenance): node unavailable; try another peer
I200116 21:54:35.390783 374785 storage/queue.go:524  [n1,s1] rate limited in MaybeAdd (split): node unavailable; try another peer
I200116 21:54:35.391141 1178714 util/stop/stopper.go:539  quiescing
W200116 21:54:35.392656 380123 storage/raft_transport.go:637  [n1] while processing outgoing Raft queue to node 2: rpc error: code = Unavailable desc = transport is closing:
I200116 21:54:35.393634 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (split): node unavailable; try another peer
I200116 21:54:35.394300 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (replicaGC): node unavailable; try another peer
I200116 21:54:35.394596 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (raftlog): node unavailable; try another peer
I200116 21:54:35.394761 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (raftsnapshot): node unavailable; try another peer
I200116 21:54:35.394890 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (consistencyChecker): node unavailable; try another peer
I200116 21:54:35.395093 384678 storage/queue.go:524  [n5,s5] rate limited in MaybeAdd (timeSeriesMaintenance): node unavailable; try another peer

Repro

Parameters:

  • GOFLAGS=-json
make stressrace TESTS=TestReplicateQueueRebalance PKG=./pkg/storage TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1

powered by pkg/cmd/internal/issues

tbg added the branch-master Failures and bugs on the master branch. label Jan 22, 2020
andreimatei added a commit to andreimatei/cockroach that referenced this issue Jan 27, 2020
Before this patch, the refresher interceptor was erroneously asserting
its tracking of the refreshed timestamp is in sync with the
TxnCoordSender. It may, in fact, not be in sync in edge cases where a
refresh succeeded but the TxnCoordSender doesn't hear about that
success.

Touches cockroachdb#38156
Touches cockroachdb#41941
Touches cockroachdb#43707

Release note: None
andreimatei added a commit to andreimatei/cockroach that referenced this issue Jan 29, 2020
Before this patch, the refresher interceptor was erroneously asserting
its tracking of the refreshed timestamp is in sync with the
TxnCoordSender. It may, in fact, not be in sync in edge cases where a
refresh succeeded but the TxnCoordSender doesn't hear about that
success.

Touches cockroachdb#38156
Touches cockroachdb#41941
Touches cockroachdb#43707

Release note: None
craig bot pushed a commit that referenced this issue Jan 30, 2020
44407: storage: improve the migration away from txn.DeprecatedOrigTimestamp r=andreimatei a=andreimatei

19.2 doesn't generally set txn.ReadTimestamp. Instead, it sets
txn.DeprecatedOrigTimestamp. Before this patch, all code dealing with
txn.ReadTimestamp had to deal with the possibility of it not being set.
This is fragile; I recently forgot to deal with it in a patch.
This patch sets txn.ReadTimestamp to txn.DeprecatedOrigTimestamp when it
wasn't set, thereby relieving most other code of that worry.

This comes at the cost of an extra txn clone for requests coming from
19.2 nodes.

Release note: None
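
A minimal sketch of the migration described above, assuming a hypothetical helper name and a plain struct copy where the real patch presumably clones the proto properly:

package roachpbmigrate // hypothetical package for this sketch

import (
	"github.com/cockroachdb/cockroach/pkg/roachpb"
	"github.com/cockroachdb/cockroach/pkg/util/hlc"
)

// maybeBackfillReadTimestamp fills in txn.ReadTimestamp from
// txn.DeprecatedOrigTimestamp when a request from a 19.2 node left it unset,
// so downstream code can assume ReadTimestamp is always populated.
func maybeBackfillReadTimestamp(txn *roachpb.Transaction) *roachpb.Transaction {
	if txn == nil || txn.ReadTimestamp != (hlc.Timestamp{}) {
		return txn
	}
	// Copy before mutating; this is the extra clone the description above
	// mentions for requests coming from 19.2 nodes.
	clone := *txn
	clone.ReadTimestamp = clone.DeprecatedOrigTimestamp
	return &clone
}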

44428: storage: fix handling of refreshed timestamp r=andreimatei a=andreimatei

Before this patch, the refresher interceptor was erroneously asserting
its tracking of the refreshed timestamp is in sync with the
TxnCoordSender. It may, in fact, not be in sync in edge cases where a
refresh succeeded but the TxnCoordSender doesn't hear about that
success.

Touches #38156
Touches #41941
Touches #43707

Release note: None
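
A hedged sketch of the direction of that fix, not the merged change (helper name and exact handling are assumptions): rather than fatally asserting that its own bookkeeping matches the batch, the interceptor can treat the txn carried by the batch as the source of truth for the read timestamp.

package kv // sketch only

import (
	"github.com/cockroachdb/cockroach/pkg/roachpb"
	"github.com/cockroachdb/cockroach/pkg/util/hlc"
)

// trackRefreshedTimestamp resyncs the interceptor's tracked refreshed
// timestamp with the txn carried by the batch. The mismatch in the fatal
// errors above (batch at a lower timestamp than the tracking) is the
// "refresh succeeded but the TxnCoordSender doesn't hear about it" edge case
// from the commit message.
func trackRefreshedTimestamp(tracked hlc.Timestamp, ba *roachpb.BatchRequest) hlc.Timestamp {
	if ba.Txn == nil {
		return tracked
	}
	if ba.Txn.ReadTimestamp != tracked {
		// Previously this mismatch hit log.Fatalf (see the stacks above); here
		// the local tracking is simply resynced to the txn's view.
		return ba.Txn.ReadTimestamp
	}
	return tracked
}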

44503: roachpb: fix txn.Update() commutativity r=andreimatei a=andreimatei

Updates to the WriteTooOld field were not commutative. This patch fixes
that, by clarifying that the transaction with the higher ReadTimestamp
gets to dictate the WriteTooOld value.
I'm not sure what consequences this used to have, besides allowing for
the confusing case where the server would receive a request with the
WriteTooOld flag set, but with the ReadTimestamp==WriteTimestamp.  A
future commit introduces a sanity assertion that all the requests with
the WTO flag have a bumped WriteTimestamp.

Release note: None
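
A hedged sketch of the commutative merge rule described above, with the function name and placement being assumptions (the real logic lives in roachpb.Transaction.Update):

package txnmergesketch // hypothetical package for this sketch

import "github.com/cockroachdb/cockroach/pkg/roachpb"

// mergeWriteTooOld applies the rule that the copy with the higher
// ReadTimestamp dictates WriteTooOld; on equal read timestamps the flag is
// sticky (OR-ed).
func mergeWriteTooOld(dst, src *roachpb.Transaction) {
	switch {
	case dst.ReadTimestamp.Less(src.ReadTimestamp):
		dst.WriteTooOld = src.WriteTooOld
	case src.ReadTimestamp.Less(dst.ReadTimestamp):
		// dst has the higher ReadTimestamp; keep its WriteTooOld as-is.
	default:
		dst.WriteTooOld = dst.WriteTooOld || src.WriteTooOld
	}
}

With a rule of this shape, merging a into b and b into a agree on WriteTooOld, which is the commutativity the patch is after.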

Co-authored-by: Andrei Matei <[email protected]>
tbg closed this as completed Mar 11, 2020