sql: TestTxnUserRestart failed under stress #33503

Closed
cockroach-teamcity opened this issue Jan 4, 2019 · 7 comments
Labels: C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.
Milestone: 2.1

SHA: https://github.com/cockroachdb/cockroach/commits/431b1846249fd2d110706ad221504706014e8b70

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestTxnUserRestart PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'
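
# For example, a higher-concurrency variant using the stressrace target and
# the '-p' flag mentioned above might look like this (the value 8 is
# illustrative, not taken from the original report):
make stressrace TESTS=TestTxnUserRestart PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-p 8 -stderr=false -maxtime 20m -timeout 10m'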

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1080582&tab=buildLog

I190104 14:38:55.367276 163928 storage/replica_command.go:348  [n1,split,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
I190104 14:38:55.383766 163975 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:54848,user=root,intExec=lease-insert] statement filter running on: INSERT INTO system.public.lease("descID", version, "nodeID", expiration) VALUES (53, 1, 1, '2019-01-04 14:43:07.914426+00:00'), with err=<nil>
I190104 14:38:55.410333 163941 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:54848,user=root] statement filter running on: INSERT INTO t.public.test(k, v) VALUES (1, 'boulanger'), with err=HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=963396ec key=/Table/53/1/1/0 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1546612735.333484838,1 orig=1546612735.331375820,0 max=1546612735.333484838,0 wto=false rop=false seq=2
I190104 14:38:55.413363 163415 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
1      [async] txnHeartbeat: aborting txn
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
I190104 14:38:55.416098 163415 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
I190104 14:38:55.419204 163415 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I190104 14:38:55.421053 163415 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] transport racer
1      [async] storage.split: processing replica
I190104 14:38:55.422013 164008 sql/txn_restart_test.go:350  [n1,split,s1,r23/1:/{Table/52-Max},intExec=log-range-event] statement filter running on: INSERT INTO system.public.rangelog("timestamp", "rangeID", "storeID", "eventType", "otherRangeID", info) VALUES ($1, $2, $3, $4, $5, $6), with err=result is ambiguous (server shutdown)
I190104 14:38:55.422723 163415 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] transport racer
1      [async] storage.split: processing replica
W190104 14:38:55.424798 163928 internal/client/txn.go:532  [n1,split,s1,r23/1:/{Table/52-Max}] failure aborting transaction: node unavailable; try another peer; abort caused by: log-range-event: result is ambiguous (server shutdown)
E190104 14:38:55.429864 163928 storage/queue.go:846  [n1,split,s1,r23/1:/{Table/52-Max}] unable to split [n1,s1,r23/1:/{Table/52-Max}] at key "/Table/53": split at key /Table/53 failed: log-range-event: result is ambiguous (server shutdown)
I190104 14:38:55.443888 163415 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] transport racer
I190104 14:38:55.561121 163653 kv/transport_race.go:91  transport race promotion: ran 33 iterations on up to 858 requests
W190104 14:38:55.568656 163941 internal/client/txn.go:532  [n1,client=127.0.0.1:54848,user=root] failure aborting transaction: node unavailable; try another peer; abort caused by: connExecutor closing
    --- FAIL: TestTxnUserRestart/err=RETRY_POSSIBLE_REPLAY,stgy=0 (4.05s)
    	txn_restart_test.go:896: unexpected error: pq: restart transaction: HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=963396ec key=/Table/53/1/1/0 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1546612735.333484838,1 orig=1546612735.331375820,0 max=1546612735.333484838,0 wto=false rop=false seq=2
    	txn_restart_test.go:434: /usr/local/go/src/runtime/asm_amd64.s:573 statement "INSERT INTO t.public.test(k, v) VALUES (0, 'sentinel')" error: 1 additional aborts expected

@cockroach-teamcity cockroach-teamcity added this to the 2.1 milestone Jan 4, 2019
@cockroach-teamcity cockroach-teamcity added C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels Jan 4, 2019

tbg commented Jan 7, 2019

@andreimatei could you give an initial analysis of this?

cockroach-teamcity (Member Author) commented:

SHA: https://github.com/cockroachdb/cockroach/commits/8179cd9efec890f1ba063488c7a502a96b8241dc

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestTxnUserRestart PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1120630&tab=buildLog

I190201 14:18:21.590555 163917 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:49396,user=root] statement filter running on: DELETE FROM t.public.test WHERE true, with err=HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=96f33e4e key=/Table/53/1 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1549030701.472039273,1 orig=1549030701.462643639,0 max=1549030701.472039273,0 wto=false rop=false seq=3
I190201 14:18:21.593250 163414 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
2      [async] kv.TxnCoordSender: heartbeat loop
1      [async] txnHeartbeat: aborting txn
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I190201 14:18:21.594497 163414 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
1      [async] txnHeartbeat: aborting txn
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
I190201 14:18:21.595093 163414 util/stop/stopper.go:537  quiescing; tasks left:
2      node.Node: batch
1      [async] txnHeartbeat: aborting txn
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I190201 14:18:21.597948 164007 sql/txn_restart_test.go:350  [n1,split,s1,r23/1:/{Table/52-Max},intExec=log-range-event] statement filter running on: INSERT INTO system.public.rangelog("timestamp", "rangeID", "storeID", "eventType", "otherRangeID", info) VALUES ($1, $2, $3, $4, $5, $6), with err=kv/txn_interceptor_heartbeat.go:405: node already quiescing
I190201 14:18:21.598533 163414 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] txnHeartbeat: aborting txn
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I190201 14:18:21.600608 163414 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I190201 14:18:21.601043 163414 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] transport racer
1      [async] storage.split: processing replica
W190201 14:18:21.601227 163865 internal/client/txn.go:532  [n1,split,s1,r23/1:/{Table/52-Max}] failure aborting transaction: node unavailable; try another peer; abort caused by: log-range-event: kv/txn_interceptor_heartbeat.go:405: node already quiescing
E190201 14:18:21.601871 163865 storage/queue.go:845  [n1,split,s1,r23/1:/{Table/52-Max}] unable to split [n1,s1,r23/1:/{Table/52-Max}] at key "/Table/53": split at key /Table/53 failed: log-range-event: kv/txn_interceptor_heartbeat.go:405: node already quiescing
I190201 14:18:21.603722 163414 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] transport racer
I190201 14:18:21.701722 163435 kv/transport_race.go:91  transport race promotion: ran 69 iterations on up to 856 requests
W190201 14:18:21.715745 163917 internal/client/txn.go:532  [n1,client=127.0.0.1:49396,user=root] failure aborting transaction: node unavailable; try another peer; abort caused by: connExecutor closing
    --- FAIL: TestTxnUserRestart/err=RETRY_POSSIBLE_REPLAY,stgy=1 (4.54s)
    	txn_restart_test.go:904: pq: restart transaction: HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=96f33e4e key=/Table/53/1 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1549030701.472039273,1 orig=1549030701.462643639,0 max=1549030701.472039273,0 wto=false rop=false seq=3
    	txn_restart_test.go:434: /usr/local/go/src/runtime/asm_amd64.s:573 statement "INSERT INTO t.public.test(k, v) VALUES (0, 'sentinel')" error: 1 additional aborts expected


tbg commented Feb 13, 2019

friendly ping, @andreimatei

cockroach-teamcity (Member Author) commented:

SHA: https://github.com/cockroachdb/cockroach/commits/bf87ee9d6d5d75cb0ce3bc814fc28f9d16b8ce9d

Parameters:

TAGS=
GOFLAGS=

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestTxnUserRestart PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1301069&tab=buildLog

I190522 05:26:32.807957 177499 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:34590,user=root] statement filter running on: CREATE TABLE t.public.test (k INT PRIMARY KEY, v STRING), with err=<nil>
I190522 05:26:32.811642 177499 sql/event_log.go:126  [n1,client=127.0.0.1:34590,user=root] Event: "create_table", target: 53, info: {TableName:t.public.test Statement:CREATE TABLE t.public.test (k INT PRIMARY KEY, v STRING) User:root}
I190522 05:26:32.813010 177499 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:34590,user=root] statement filter running on: SAVEPOINT cockroach_restart, with err=<nil>
I190522 05:26:32.813181 177499 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:34590,user=root] statement filter running on: SET TRANSACTION PRIORITY LOW, with err=<nil>
I190522 05:26:32.819301 177499 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:34590,user=root] statement filter running on: SELECT 1, with err=<nil>
I190522 05:26:32.821186 177170 storage/replica_proposal.go:211  [n1,s1,r23/1:/{Table/52-Max}] new range lease repl=(n1,s1):1 seq=3 start=1558502792.812727384,0 epo=1 pro=1558502792.813646076,0 following repl=(n1,s1):1 seq=2 start=1558502792.414574099,0 exp=1558502801.415341380,0 pro=1558502792.415359920,0
I190522 05:26:32.821437 177559 storage/replica_command.go:348  [n1,split,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
I190522 05:26:32.823547 177532 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:34590,user=root,intExec=lease-insert] statement filter running on: INSERT INTO system.public.lease("descID", version, "nodeID", expiration) VALUES (53, 1, 1, '2019-05-22 05:31:33.998635+00:00'), with err=<nil>
I190522 05:26:32.824840 177499 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:34590,user=root] statement filter running on: DELETE FROM t.public.test WHERE true, with err=HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=3aecbfc5 key=/Table/53/1 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1558502792.812727384,1 orig=1558502792.812480703,0 max=1558502792.812727384,0 wto=false rop=false seq=3
I190522 05:26:32.828492 177589 sql/txn_restart_test.go:350  [n1,split,s1,r23/1:/{Table/52-Max},intExec=log-range-event] statement filter running on: INSERT INTO system.public.rangelog("timestamp", "rangeID", "storeID", "eventType", "otherRangeID", info) VALUES ($1, $2, $3, $4, $5, $6), with err=<nil>
I190522 05:26:32.833047 176940 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] storage.split: processing replica
1      [async] storage.Store: gossip on capacity change
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
I190522 05:26:32.833119 176940 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] storage.split: processing replica
1      [async] storage.Store: gossip on capacity change
1      [async] closedts-rangefeed-subscriber
I190522 05:26:32.834335 176940 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I190522 05:26:32.842769 176940 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] storage.split: processing replica
W190522 05:26:32.843506 177559 storage/replica.go:3338  [n1,s1,r23/1:/Table/5{2-3}] during async intent resolution: node unavailable; try another peer
    --- FAIL: TestTxnUserRestart/err=RETRY_POSSIBLE_REPLAY,stgy=1 (0.48s)
    	txn_restart_test.go:904: pq: restart transaction: HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=3aecbfc5 key=/Table/53/1 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1558502792.812727384,1 orig=1558502792.812480703,0 max=1558502792.812727384,0 wto=false rop=false seq=3
    	txn_restart_test.go:434: /usr/local/go/src/runtime/asm_amd64.s:573 statement "INSERT INTO t.public.test(k, v) VALUES (0, 'sentinel')" error: 1 additional aborts expected

andreimatei (Contributor) commented:

I can't repro this for the life of me. I've tried stressing and stress-racing, and I've tried stressing the whole package too.

What the failures seem to say is that a statement encountered a wiped write timestamp cache, which generally happens after lease transfers. Why a lease transfer happened in this single-node cluster, I don't know. My best guess is that it was the thing we fixed when we started bootstrapping clusters with multiple ranges instead of just one: before that, there was a disruptive transition from an expiration-based lease to an epoch-based lease for user ranges. Except that disruption used to happen a few seconds into the test run, whereas at least the last failure here happened quickly (the subtest ran for only 0.48s).

All these failures are on the 2.1 branch. I'm going to ignore this for now and close it once we're no longer backporting to 2.1.
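
For context, the retries this test exercises follow CockroachDB's standard client-side retry protocol around SAVEPOINT cockroach_restart. A minimal Go sketch of that loop, assuming the lib/pq driver (this is not the test's actual helper; the function names are illustrative):

package crdbretry

import (
	"database/sql"

	"github.com/lib/pq"
)

// RunWithRetry drives the cockroach_restart protocol: run fn inside a
// transaction, and on a retryable error (SQLSTATE 40001, which lib/pq
// surfaces as "pq: restart transaction: ...") roll back to the savepoint
// and run fn again.
func RunWithRetry(db *sql.DB, fn func(*sql.Tx) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if _, err := tx.Exec("SAVEPOINT cockroach_restart"); err != nil {
		tx.Rollback()
		return err
	}
	for {
		err := fn(tx)
		if err == nil {
			// RELEASE is the protocol's commit point; it can itself
			// come back retryable.
			if _, err = tx.Exec("RELEASE SAVEPOINT cockroach_restart"); err == nil {
				return tx.Commit()
			}
		}
		if !isRetryable(err) {
			tx.Rollback()
			return err
		}
		// Retryable: restart the transaction and run fn once more.
		if _, err := tx.Exec("ROLLBACK TO SAVEPOINT cockroach_restart"); err != nil {
			tx.Rollback()
			return err
		}
	}
}

// isRetryable reports whether err carries SQLSTATE 40001, the code
// CockroachDB uses for "restart transaction" errors.
func isRetryable(err error) bool {
	pqErr, ok := err.(*pq.Error)
	return ok && pqErr.Code == "40001"
}

In a passing run, the injected TransactionAbortedError surfaces as a 40001 and a loop like the one above (or the test's equivalent) absorbs it; in the failures here, the "pq: restart transaction" error instead reached the client while the test still expected one more abort.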

cockroach-teamcity (Member Author) commented:

SHA: https://github.com/cockroachdb/cockroach/commits/7dec3577461a1e53ea70582de62bbd96bf512b73

Parameters:

TAGS=
GOFLAGS=-race

To repro, try:

# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestTxnUserRestart PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'

Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1614544&tab=buildLog

I191127 12:15:09.063968 164930 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:39170,user=root] statement filter running on: SELECT 1, with err=<nil>
I191127 12:15:09.106994 164946 storage/replica_command.go:348  [n1,split,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
I191127 12:15:09.112711 164487 storage/replica_proposal.go:211  [n1,s1,r23/1:/{Table/52-Max}] new range lease repl=(n1,s1):1 seq=3 start=1574856909.066151849,0 epo=1 pro=1574856909.068551850,0 following repl=(n1,s1):1 seq=2 start=1574856905.494276145,0 exp=1574856914.511820052,0 pro=1574856905.511921959,0
I191127 12:15:09.143203 164939 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:39170,user=root,intExec=lease-insert] statement filter running on: INSERT INTO system.public.lease("descID", version, "nodeID", expiration) VALUES (53, 1, 1, '2019-11-27 12:20:47.312428+00:00'), with err=<nil>
I191127 12:15:09.161540 164930 sql/txn_restart_test.go:350  [n1,client=127.0.0.1:39170,user=root] statement filter running on: DELETE FROM t.public.test WHERE true, with err=HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=4fd3dbc9 key=/Table/53/1 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1574856909.066151849,1 orig=1574856909.053471822,0 max=1574856909.066151849,0 wto=false rop=false seq=3
I191127 12:15:09.164262 164286 util/stop/stopper.go:548  quiescing; tasks left:
1      node.Node: batch
1      [async] txnHeartbeat: aborting txn
1      [async] transport racer
1      [async] storage.split: processing replica
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
I191127 12:15:09.164272 164478 kv/transport_race.go:91  transport race promotion: ran 27 iterations on up to 915 requests
I191127 12:15:09.168146 164286 util/stop/stopper.go:548  quiescing; tasks left:
1      node.Node: batch
1      [async] txnHeartbeat: aborting txn
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I191127 12:15:09.176644 164286 util/stop/stopper.go:548  quiescing; tasks left:
1      node.Node: batch
1      [async] storage.split: processing replica
1      [async] closedts-rangefeed-subscriber
I191127 12:15:09.180407 164286 util/stop/stopper.go:548  quiescing; tasks left:
1      node.Node: batch
1      [async] storage.split: processing replica
I191127 12:15:09.183412 164286 util/stop/stopper.go:548  quiescing; tasks left:
1      [async] storage.split: processing replica
I191127 12:15:09.184026 164887 sql/txn_restart_test.go:350  [n1,split,s1,r23/1:/{Table/52-Max},intExec=log-range-event] statement filter running on: INSERT INTO system.public.rangelog("timestamp", "rangeID", "storeID", "eventType", "otherRangeID", info) VALUES ($1, $2, $3, $4, $5, $6), with err=<nil>
W191127 12:15:09.186805 164946 internal/client/txn.go:532  [n1,split,s1,r23/1:/{Table/52-Max}] failure aborting transaction: node unavailable; try another peer; abort caused by: kv/txn_interceptor_heartbeat.go:405: node already quiescing
E191127 12:15:09.187438 164946 storage/queue.go:845  [n1,split,s1,r23/1:/{Table/52-Max}] unable to split [n1,s1,r23/1:/{Table/52-Max}] at key "/Table/53": split at key /Table/53 failed: kv/txn_interceptor_heartbeat.go:405: node already quiescing
W191127 12:15:09.192705 164930 internal/client/txn.go:532  [n1,client=127.0.0.1:39170,user=root] failure aborting transaction: node unavailable; try another peer; abort caused by: connExecutor closing
    --- FAIL: TestTxnUserRestart/err=TransactionAbortedError\(ABORT_REASON_ABORTED_RECORD_FOUND\),stgy=1 (4.02s)
    	txn_restart_test.go:904: pq: restart transaction: HandledRetryableTxnError: TransactionAbortedError(ABORT_REASON_TIMESTAMP_CACHE_REJECTED_POSSIBLE_REPLAY): "sql txn" id=4fd3dbc9 key=/Table/53/1 rw=true pri=0.00000000 iso=SERIALIZABLE stat=ABORTED epo=0 ts=1574856909.066151849,1 orig=1574856909.053471822,0 max=1574856909.066151849,0 wto=false rop=false seq=3
    	txn_restart_test.go:434: /usr/local/go/src/runtime/asm_amd64.s:573 statement "INSERT INTO t.public.test(k, v) VALUES (0, 'sentinel')" error: 1 additional aborts expected

@tbg tbg added the branch-master Failures and bugs on the master branch. label Jan 22, 2020
@knz knz removed the branch-master Failures and bugs on the master branch. label Mar 6, 2020

knz commented Mar 6, 2020

Now that #45566 has been addressed, I am removing the branch-master label.
I am pretty sure this is not relevant any more; I would recommend closing this issue.
