
teamcity: failed test: TestDropIndexWithZoneConfigCCL #31678

Closed
cockroach-teamcity opened this issue Oct 21, 2018 · 0 comments
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.
Milestone
2.2

Comments

@cockroach-teamcity
Member

The following tests appear to have failed on master (testrace): TestDropIndexWithZoneConfigCCL

You may want to check for open issues.

#978156:

TestDropIndexWithZoneConfigCCL
...log.go:126  [n1,client=127.0.0.1:49890,user=root] Event: "set_zone_config", target: 53, info: {Target:t.kv@primary Config: Options: User:root}
I181021 09:04:07.236813 786 storage/replica_command.go:300  [n1,split,s1,r22/1:/{Table/53/1-Max}] initiating a split of this range at key /Table/53/2 [r23]
I181021 09:04:07.398343 599 sql/event_log.go:126  [n1,client=127.0.0.1:49890,user=root] Event: "set_zone_config", target: 53, info: {Target:t.kv@i Config:gc: {ttlseconds: 1} Options: User:root}
I181021 09:04:07.412181 741 storage/replica_command.go:300  [n1,split,s1,r23/1:/{Table/53/2-Max}] initiating a split of this range at key /Table/53/3 [r24]
I181021 09:04:07.451894 769 storage/replica_command.go:300  [n1,split,s1,r24/1:/{Table/53/3-Max}] initiating a split of this range at key /Table/53/4 [r25]
I181021 09:04:07.540919 599 sql/event_log.go:126  [n1,client=127.0.0.1:49890,user=root] Event: "set_zone_config", target: 53, info: {Target:t.kv.p2 Config:gc: {ttlseconds: 1} Options: User:root}
I181021 09:04:07.550891 807 storage/replica_command.go:300  [n1,split,s1,r24/1:/Table/53/{3-4}] initiating a split of this range at key /Table/53/3/2 [r26]
I181021 09:04:07.582036 599 sql/lease.go:345  [n1,client=127.0.0.1:49890,user=root] publish: descID=53 (kv) version=8 mtime=2018-10-21 09:04:07.54189151 +0000 UTC
I181021 09:04:07.592865 657 storage/replica_command.go:300  [n1,split,s1,r26/1:/Table/53/{3/2-4}] initiating a split of this range at key /Table/53/3/3 [r27]
I181021 09:04:07.644884 599 sql/event_log.go:126  [n1,client=127.0.0.1:49890,user=root] Event: "drop_index", target: 53, info: {TableName:t.public.kv IndexName:i Statement:DROP INDEX t.public.kv@i User:root MutationID:3 CascadeDroppedViews:[]}
I181021 09:04:07.694218 812 sql/lease.go:877  new lease: 53("kv") ver=8:1540112945.550088829,0, refcount=0
I181021 09:04:07.776949 599 sql/lease.go:345  [n1,client=127.0.0.1:49890,user=root,scExec] publish: descID=53 (kv) version=9 mtime=2018-10-21 09:04:07.775894162 +0000 UTC
I181021 09:04:07.809634 852 sql/lease.go:877  new lease: 53("kv") ver=9:1540112918.080928968,0, refcount=0
I181021 09:04:07.817096 599 sql/lease.go:315  publish (1 leases): desc=[{kv 53 8}]
I181021 09:04:07.854840 599 sql/backfill.go:132  [n1,client=127.0.0.1:49890,user=root,scExec] Running backfill for "kv", v=9, m=3
I181021 09:04:07.875949 599 sql/lease.go:345  [n1,client=127.0.0.1:49890,user=root,scExec] publish: descID=53 (kv) version=10 mtime=2018-10-21 09:04:07.868268247 +0000 UTC
I181021 09:04:07.975329 599 sql/event_log.go:126  [n1,client=127.0.0.1:49890,user=root,scExec] Event: "finish_schema_change", target: 53, info: {MutationID:3}
I181021 09:04:08.018651 857 sql/lease.go:877  new lease: 53("kv") ver=10:1540112933.419412470,0, refcount=0
I181021 09:04:08.042729 599 sql/lease.go:315  publish (1 leases): desc=[{kv 53 9}]
I181021 09:04:08.944519 51 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] transport racer
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
I181021 09:04:08.945098 324 kv/transport_race.go:113  transport race promotion: ran 20 iterations on up to 2051 requests
I181021 09:04:08.945326 51 util/stop/stopper.go:537  quiescing; tasks left:
1      node.Node: batch
1      [async] kv.TxnCoordSender: heartbeat loop
1      [async] closedts-rangefeed-subscriber
W181021 09:04:08.947065 118 internal/client/txn.go:532  [n1,scExec] failure aborting transaction: node unavailable; try another peer; abort caused by: kv/txn_interceptor_heartbeat.go:405: node already quiescing
W181021 09:04:08.947471 118 sql/schema_changer.go:757  [n1,scExec] node unavailable; try another peer
W181021 09:04:08.947621 118 sql/schema_changer.go:1522  [n1] Error executing schema change: kv/txn_interceptor_heartbeat.go:405: node already quiescing
I181021 09:04:08.949558 51 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] closedts-rangefeed-subscriber



TestDropIndexWithZoneConfigCCL
...oot,scExec] Event: "finish_schema_change", target: 53, info: {MutationID:2}
I181021 09:05:43.934654 587 sql/event_log.go:126  [n1,client=127.0.0.1:43650,user=root] Event: "set_zone_config", target: 53, info: {Target:t.kv@primary Config: Options: User:root}
I181021 09:05:43.935946 520 storage/replica_command.go:300  [n1,split,s1,r21/1:/{Table/53-Max}] initiating a split of this range at key /Table/53/1 [r22]
I181021 09:05:43.940784 772 storage/replica_command.go:300  [n1,split,s1,r22/1:/{Table/53/1-Max}] initiating a split of this range at key /Table/53/2 [r23]
I181021 09:05:43.941669 587 sql/event_log.go:126  [n1,client=127.0.0.1:43650,user=root] Event: "set_zone_config", target: 53, info: {Target:t.kv@i Config:gc: {ttlseconds: 1} Options: User:root}
I181021 09:05:43.946175 792 storage/replica_command.go:300  [n1,split,s1,r23/1:/{Table/53/2-Max}] initiating a split of this range at key /Table/53/3 [r24]
I181021 09:05:43.950366 587 sql/event_log.go:126  [n1,client=127.0.0.1:43650,user=root] Event: "set_zone_config", target: 53, info: {Target:t.kv.p2 Config:gc: {ttlseconds: 1} Options: User:root}
I181021 09:05:43.953748 587 sql/lease.go:345  [n1,client=127.0.0.1:43650,user=root] publish: descID=53 (kv) version=8 mtime=2018-10-21 09:05:43.950650091 +0000 UTC
I181021 09:05:43.953751 727 storage/replica_command.go:300  [n1,split,s1,r24/1:/{Table/53/3-Max}] initiating a split of this range at key /Table/53/3/2 [r25]
I181021 09:05:43.959069 737 storage/replica_command.go:300  [n1,split,s1,r25/1:/{Table/53/3/2-Max}] initiating a split of this range at key /Table/53/3/3 [r26]
I181021 09:05:43.965772 587 sql/event_log.go:126  [n1,client=127.0.0.1:43650,user=root] Event: "drop_index", target: 53, info: {TableName:t.public.kv IndexName:i Statement:DROP INDEX t.public.kv@i User:root MutationID:3 CascadeDroppedViews:[]}
I181021 09:05:43.969696 769 storage/replica_command.go:300  [n1,split,s1,r26/1:/{Table/53/3/3-Max}] initiating a split of this range at key /Table/53/4 [r27]
I181021 09:05:43.977449 765 sql/lease.go:877  new lease: 53("kv") ver=8:1540113060.293186899,0, refcount=0
I181021 09:05:44.009719 587 sql/lease.go:345  [n1,client=127.0.0.1:43650,user=root,scExec] publish: descID=53 (kv) version=9 mtime=2018-10-21 09:05:44.009574047 +0000 UTC
I181021 09:05:44.014061 815 sql/lease.go:877  new lease: 53("kv") ver=9:1540113071.562652268,0, refcount=0
I181021 09:05:44.019925 587 sql/backfill.go:132  [n1,client=127.0.0.1:43650,user=root,scExec] Running backfill for "kv", v=9, m=3
I181021 09:05:44.022276 587 sql/lease.go:345  [n1,client=127.0.0.1:43650,user=root,scExec] publish: descID=53 (kv) version=10 mtime=2018-10-21 09:05:44.022123018 +0000 UTC
I181021 09:05:44.031979 587 sql/event_log.go:126  [n1,client=127.0.0.1:43650,user=root,scExec] Event: "finish_schema_change", target: 53, info: {MutationID:3}
I181021 09:05:44.034514 587 sql/lease.go:315  publish (1 leases): desc=[{kv 53 9}]
I181021 09:05:44.040260 855 sql/lease.go:877  new lease: 53("kv") ver=10:1540113024.443665630,0, refcount=0
I181021 09:05:44.957520 85 gossip/gossip.go:1510  [n1] node has connected to cluster via gossip
I181021 09:05:44.957723 85 storage/stores.go:261  [n1] wrote 0 node addresses to persistent storage
I181021 09:05:45.048871 266 sql/lease.go:345  [n1,scExec] publish: descID=53 (kv) version=11 mtime=2018-10-21 09:05:45.048718965 +0000 UTC
I181021 09:05:45.049923 688 storage/replica_command.go:432  [n1,merge,s1,r25/1:/Table/53/3/{2-3}] initiating a merge of r26:/Table/53/{3/3-4} [(n1,s1):1, next=2, gen=1] into this range
I181021 09:05:45.095447 266 sql/lease.go:315  publish (1 leases): desc=[{kv 53 10}]
I181021 09:05:45.097409 221 storage/store.go:2580  [n1,s1,r25/1:/Table/53/3/{2-3}] removing replica r26/1
I181021 09:05:45.101033 998 sql/lease.go:877  new lease: 53("kv") ver=11:1540113014.088685589,0, refcount=0
I181021 09:05:45.234851 29 util/stop/stopper.go:537  quiescing; tasks left:
1      [async] closedts-rangefeed-subscriber




Please assign, take a look and update the issue accordingly.

cockroach-teamcity added this to the 2.2 milestone Oct 21, 2018
cockroach-teamcity added the C-test-failure and O-robot labels Oct 21, 2018
benesch added a commit to benesch/cockroach that referenced this issue Oct 22, 2018
A particularly adversarial goroutine schedule can cause this test to
observe the moment in time where the data is dropped but the zone config
is not. Deflake by retrying the check for the dropped zone config until
it succeeds (or times out).

Fix cockroachdb#31678.

Release note: None
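
The retry approach described in that commit can be sketched as follows. This is a minimal, self-contained illustration rather than the actual test code: checkZoneConfigDropped is a hypothetical stand-in for the assertion against system.zones, and the real test would more likely lean on the repo's existing testutils.SucceedsSoon helper.

```go
package main

import (
	"fmt"
	"time"
)

// checkZoneConfigDropped stands in for the real assertion that the dropped
// index's zone config is gone. In the flaky schedule it can transiently fail
// right after the data is dropped but before the zone config is removed.
func checkZoneConfigDropped() error {
	return nil // placeholder: a real check would query system.zones
}

// succeedsSoon retries fn with exponential backoff until it returns nil or
// the timeout elapses, mirroring "retry the check until it succeeds (or
// times out)".
func succeedsSoon(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for wait := 5 * time.Millisecond; ; wait *= 2 {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition never satisfied: %v", err)
		}
		time.Sleep(wait)
	}
}

func main() {
	if err := succeedsSoon(45*time.Second, checkZoneConfigDropped); err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("zone config confirmed dropped")
}
```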
craig bot pushed a commit that referenced this issue Oct 22, 2018
31556: importccl: re-enable job control tests r=mjibson a=mjibson

I tracked down the problem. It was that after the CANCEL JOB was issued,
sometimes the second stage of the IMPORT (the shuffle) would have started,
and sometimes it wouldn't have. If it did not start, then RunJob would
block forever trying to send on the allowResponse chan. Fix this by
adding a draining goroutine after the first block.

Closes #24623
Closes #24658

Release note: None
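
The draining pattern mentioned in that fix can be illustrated in isolation. This is only a sketch: the channel direction and names are invented here and do not reproduce the actual import test wiring. The point is that once a dedicated goroutine keeps draining the channel after the first synchronization point, the other side can no longer block forever, whether or not the second stage runs.

```go
package main

import "fmt"

func main() {
	allowResponse := make(chan struct{})
	done := make(chan struct{})

	// The "job": its first send is always expected, but whether the second
	// stage runs (and sends again) depends on timing.
	go func() {
		allowResponse <- struct{}{} // first block: handled deterministically
		allowResponse <- struct{}{} // second stage: may or may not happen in the real test
		close(done)
	}()

	// Handle the first block as the test intends.
	<-allowResponse

	// Draining goroutine, started after the first block: soak up any further
	// sends so neither side can deadlock.
	go func() {
		for range allowResponse {
		}
	}()

	<-done
	close(allowResponse) // lets the drainer exit
	fmt.Println("job completed without blocking on allowResponse")
}
```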

31627: storage: remove spurious call to maybeInlineSideloadedRaftCommand r=nvanbenschoten,benesch a=tschottdorf

Entries are "thinned" only when passed to `r.append()` (i.e. written to
disk) and they are always returned "fat" from `Entries()` (i.e. Raft's way
to get entries from disk). Consequently Raft never sees thin entries and
won't ask us to commit them.

Touches #31618.

Release note: None
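
The invariant described there can be shown with a toy model. None of this is CockroachDB code; the names are invented purely to illustrate why Raft can never observe a thin entry: thinning happens only on the write path, and the read path always re-inlines.

```go
package main

import "fmt"

type entry struct {
	index   uint64
	payload string // "" means thin: the payload lives in a sideloaded file
}

var sideloaded = map[uint64]string{} // stand-in for sideloaded files on disk

// appendEntry is the only place thinning happens (cf. r.append()).
func appendEntry(log map[uint64]entry, e entry) {
	sideloaded[e.index] = e.payload
	e.payload = "" // thin before writing to the log
	log[e.index] = e
}

// getEntry always returns a fat entry (cf. Entries()).
func getEntry(log map[uint64]entry, idx uint64) entry {
	e := log[idx]
	if e.payload == "" {
		e.payload = sideloaded[idx] // re-inline
	}
	return e
}

func main() {
	log := map[uint64]entry{}
	appendEntry(log, entry{index: 7, payload: "sideloaded SSTable data"})
	fmt.Printf("%+v\n", getEntry(log, 7)) // callers above storage only ever see fat entries
}
```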

31695: bitarray: don't allow FromEncodingParts to return invalid bit array r=knz a=benesch

It is invalid for a bit array's lastBitsUsed field to be greater than
64. The FromEncodingParts function, however, would happily construct
an invalid bit array if given a too-large lastBitsUsed value. Teach
FromEncodingParts to return an error instead.

This presented as a panic when attempting to pretty-print a key with a
bitarray whose lastBitsUsed encoded value was 65. Such a key can be
created when calling PrefixEnd on a key with a bitarray whose
lastBitsUsed value is 64. By returning an error instead, the
pretty-printing code will try again after calling UndoPrefixEnd and be
able to print the key.

Fix #31115.

Release note: None
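
The validation being described can be sketched with a simplified stand-in type; the real pkg/util/bitarray representation and signatures differ, so treat this as an illustration of the bounds check only.

```go
package main

import "fmt"

// bitArray is a simplified stand-in for the real type.
type bitArray struct {
	words        []uint64
	lastBitsUsed uint8 // bits used in the final word; must be <= 64
}

// fromEncodingParts mirrors the idea of the fix: reject a lastBitsUsed value
// greater than 64 instead of constructing an invalid bit array that panics
// later (e.g. while pretty-printing a key produced by PrefixEnd).
func fromEncodingParts(words []uint64, lastBitsUsed uint64) (bitArray, error) {
	if lastBitsUsed > 64 {
		return bitArray{}, fmt.Errorf(
			"FromEncodingParts: lastBitsUsed must not exceed 64, got %d", lastBitsUsed)
	}
	return bitArray{words: words, lastBitsUsed: uint8(lastBitsUsed)}, nil
}

func main() {
	// A PrefixEnd'd key can carry lastBitsUsed == 65; the caller now gets an
	// error and can fall back (e.g. via UndoPrefixEnd) instead of panicking.
	if _, err := fromEncodingParts([]uint64{0x1}, 65); err != nil {
		fmt.Println("got expected error:", err)
	}
}
```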

31697: partitionccl: deflake TestDropIndexWithZoneConfigCCL r=danhhz,eriktrinh a=benesch

A particularly adversarial goroutine schedule can cause this test to
observe the moment in time where the data is dropped but the zone config
is not. Deflake by retrying the check for the dropped zone config until
it succeeds (or times out).

Fix #31678.

Release note: None

Co-authored-by: Matt Jibson <[email protected]>
Co-authored-by: Tobias Schottdorf <[email protected]>
Co-authored-by: Nikhil Benesch <[email protected]>
craig bot closed this as completed in #31697 Oct 22, 2018