TestStoreRangeRebalance failed under stress #10315

Closed
cockroach-teamcity opened this issue Oct 30, 2016 · 1 comment
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

Comments

@cockroach-teamcity
Member

SHA: https://github.com/cockroachdb/cockroach/commits/7207111aa3a43df0552509365fdec741a53f873f

Stress build found a failed test:

I161030 07:28:12.489382 22019 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161030 07:28:12.490933 22019 gossip/gossip.go:237  [n?] initial resolvers: []
W161030 07:28:12.491336 22019 gossip/gossip.go:1055  [n?] no resolvers found; use --join to specify a connected node
I161030 07:28:12.491678 22019 base/node_id.go:62  NodeID set to 1
I161030 07:28:12.520754 22019 storage/store.go:1152  [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161030 07:28:12.521076 22019 gossip/gossip.go:280  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35852" > attrs:<> locality:<> 
I161030 07:28:12.530706 21996 storage/replica_proposal.go:292  [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900ms following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=1970-01-01 00:00:00 +0000 UTC]
I161030 07:28:12.539566 22019 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161030 07:28:12.540986 22019 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:35852]
W161030 07:28:12.541392 22019 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161030 07:28:12.568136 22019 base/node_id.go:62  NodeID set to 2
I161030 07:28:12.593473 22019 storage/store.go:1152  [n2,s2]: failed initial metrics computation: [n2,s2]: system config not yet available
I161030 07:28:12.593791 22019 gossip/gossip.go:280  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:50800" > attrs:<> locality:<> 
I161030 07:28:12.604195 22019 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161030 07:28:12.605649 22019 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:35852]
W161030 07:28:12.606114 22019 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161030 07:28:12.606414 22019 base/node_id.go:62  NodeID set to 3
I161030 07:28:12.617763 22104 gossip/client.go:126  [n2] node 2: started gossip client to 127.0.0.1:35852
I161030 07:28:12.623195 22019 storage/store.go:1152  [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available
I161030 07:28:12.625825 22019 gossip/gossip.go:280  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:55672" > attrs:<> locality:<> 
I161030 07:28:12.635039 22019 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161030 07:28:12.645622 22120 gossip/client.go:126  [n3] node 3: started gossip client to 127.0.0.1:35852
I161030 07:28:12.654126 22019 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:35852]
W161030 07:28:12.654550 22019 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161030 07:28:12.654872 22019 base/node_id.go:62  NodeID set to 4
I161030 07:28:12.684423 22019 storage/store.go:1152  [n4,s4]: failed initial metrics computation: [n4,s4]: system config not yet available
I161030 07:28:12.684745 22019 gossip/gossip.go:280  [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:38996" > attrs:<> locality:<> 
I161030 07:28:12.687104 22263 gossip/client.go:126  [n4] node 4: started gossip client to 127.0.0.1:35852
I161030 07:28:12.689078 22019 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161030 07:28:12.694306 22019 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:35852]
W161030 07:28:12.694709 22019 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161030 07:28:12.695024 22019 base/node_id.go:62  NodeID set to 5
I161030 07:28:12.726357 22019 storage/store.go:1152  [n5,s5]: failed initial metrics computation: [n5,s5]: system config not yet available
I161030 07:28:12.726673 22019 gossip/gossip.go:280  [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:35804" > attrs:<> locality:<> 
I161030 07:28:12.736742 22019 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161030 07:28:12.738192 22019 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:35852]
W161030 07:28:12.738592 22019 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161030 07:28:12.738895 22019 base/node_id.go:62  NodeID set to 6
I161030 07:28:12.742547 22340 gossip/client.go:126  [n5] node 5: started gossip client to 127.0.0.1:35852
I161030 07:28:12.789913 22298 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:50800})
I161030 07:28:12.804167 22019 storage/store.go:1152  [n6,s6]: failed initial metrics computation: [n6,s6]: system config not yet available
I161030 07:28:12.804552 22019 gossip/gossip.go:280  [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:37547" > attrs:<> locality:<> 
I161030 07:28:12.807517 22419 gossip/client.go:126  [n6] node 6: started gossip client to 127.0.0.1:35852
I161030 07:28:12.814596 22298 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:55672})
I161030 07:28:12.823706 22340 gossip/client.go:131  [n5] node 5: closing client to node 1 (127.0.0.1:35852): received forward from node 1 to 2 (127.0.0.1:50800)
I161030 07:28:12.827486 21903 gossip/client.go:126  [n5] node 5: started gossip client to 127.0.0.1:50800
I161030 07:28:12.877306 22378 gossip/server.go:263  [n1] refusing gossip from node 6 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:38996})
I161030 07:28:12.880241 22419 gossip/client.go:131  [n6] node 6: closing client to node 1 (127.0.0.1:35852): received forward from node 1 to 4 (127.0.0.1:38996)
I161030 07:28:12.880807 22184 gossip/client.go:126  [n6] node 6: started gossip client to 127.0.0.1:38996
I161030 07:28:12.886882 22298 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 3 ({tcp 127.0.0.1:55672})
I161030 07:28:12.898677 22378 gossip/server.go:263  [n1] refusing gossip from node 6 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:38996})
I161030 07:28:12.956277 22019 storage/replica_command.go:2363  [s1,r1/1:/M{in-ax}] initiating a split of this range at key "split" [r2]
E161030 07:28:13.104702 22069 storage/queue.go:569  [replicate] (purgatory) on [n1,s1,r1/1:{/Min-"split"}]: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161030 07:28:13.115543 22019 storage/replica_raftstorage.go:446  [s1,r1/1:{/Min-"split"}] generated snapshot d47845bc for range 1 at index 30 in 93.8µs.
I161030 07:28:13.121052 22019 storage/store.go:3061  streamed snapshot: kv pairs: 32, log entries: 20
E161030 07:28:13.121889 22069 storage/queue.go:569  [replicate] (purgatory) on [n1,s1,r2/1:{"split"-/Max}]: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161030 07:28:13.151827 22572 storage/replica_raftstorage.go:577  [s2] [n2,s2,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 30 (id=d47845bc, encoded size=16, 1 rocksdb batches, 20 log entries)
I161030 07:28:13.165905 22572 storage/replica_raftstorage.go:580  [s2] [n2,s2,r1/?:{/Min-"split"}]: with replicaID [?], applied preemptive snapshot in 0.014s
I161030 07:28:13.211422 22019 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:1 start_key:"" end_key:"split" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 
I161030 07:28:13.254247 22626 storage/replica.go:1860  [s1,r1/1:{/Min-"split"}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161030 07:28:13.287039 22019 storage/replica_raftstorage.go:446  [s1,r1/1:{/Min-"split"}] generated snapshot 5c48d1ae for range 1 at index 39 in 106.1µs.
I161030 07:28:13.290814 22019 storage/store.go:3061  streamed snapshot: kv pairs: 42, log entries: 29
I161030 07:28:13.294410 22640 storage/raft_transport.go:423  raft transport stream to node 1 established
I161030 07:28:13.303946 22511 storage/replica_raftstorage.go:577  [s3] [n3,s3,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 39 (id=5c48d1ae, encoded size=16, 1 rocksdb batches, 29 log entries)
I161030 07:28:13.309558 22511 storage/replica_raftstorage.go:580  [s3] [n3,s3,r1/?:{/Min-"split"}]: with replicaID [?], applied preemptive snapshot in 0.005s
I161030 07:28:13.333946 22019 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:1 start_key:"" end_key:"split" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 
I161030 07:28:13.613685 22619 storage/replica.go:1860  [s1,r1/1:{/Min-"split"}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161030 07:28:13.848161 22019 storage/replica_raftstorage.go:446  [s1,r2/1:{"split"-/Max}] generated snapshot 273859a7 for range 2 at index 11 in 113.999µs.
I161030 07:28:13.851984 22019 storage/store.go:3061  streamed snapshot: kv pairs: 28, log entries: 1
I161030 07:28:13.895247 22759 storage/replica_raftstorage.go:577  [s2] [n2,s2,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 11 (id=273859a7, encoded size=16, 1 rocksdb batches, 1 log entries)
I161030 07:28:13.896830 22759 storage/replica_raftstorage.go:580  [s2] [n2,s2,r2/?:{"split"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161030 07:28:13.904355 22019 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 
I161030 07:28:13.942148 22760 storage/raft_transport.go:423  raft transport stream to node 1 established
I161030 07:28:14.128113 22850 storage/replica.go:1860  [s1,r2/1:{"split"-/Max}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
W161030 07:28:14.229966 22050 raft/raft.go:696  [s1,r2/1:{"split"-/Max}] 1 stepped down to follower since quorum is not active
I161030 07:28:14.273533 22019 storage/replica_raftstorage.go:446  [s1,r2/1:{"split"-/Max}] generated snapshot ccbe0ddc for range 2 at index 13 in 100.7µs.
I161030 07:28:14.332865 22019 storage/store.go:3061  streamed snapshot: kv pairs: 30, log entries: 3
I161030 07:28:14.348064 22831 storage/replica_raftstorage.go:577  [s4] [n4,s4,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 13 (id=ccbe0ddc, encoded size=16, 1 rocksdb batches, 3 log entries)
I161030 07:28:14.360677 22831 storage/replica_raftstorage.go:580  [s4] [n4,s4,r2/?:{"split"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.012s
I161030 07:28:14.405461 22019 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 
I161030 07:28:15.887308 22752 storage/replica.go:1860  [s1,r2/1:{"split"-/Max}] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I161030 07:28:16.116718 22019 storage/replica_raftstorage.go:446  [s1,r2/1:{"split"-/Max}] generated snapshot c9049689 for range 2 at index 19 in 105.399µs.
I161030 07:28:16.160078 22019 storage/store.go:3061  streamed snapshot: kv pairs: 31, log entries: 9
I161030 07:28:16.203389 22677 storage/replica_raftstorage.go:577  [s5] [n5,s5,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 19 (id=c9049689, encoded size=16, 1 rocksdb batches, 9 log entries)
I161030 07:28:16.206121 22677 storage/replica_raftstorage.go:580  [s5] [n5,s5,r2/?:{"split"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.002s
I161030 07:28:16.210806 22019 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4 
I161030 07:28:16.255597 22991 storage/raft_transport.go:423  raft transport stream to node 1 established
I161030 07:28:17.074583 23194 storage/replica.go:1860  [s1,r2/1:{"split"-/Max}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:4} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:5 StoreID:5 ReplicaID:4}]
I161030 07:28:17.243997 23200 storage/raft_transport.go:423  raft transport stream to node 1 established
I161030 07:28:17.258532 22019 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:5 store_id:5 replica_id:4 > next_replica_id:5 
I161030 07:28:17.323735 23137 storage/raft_transport.go:423  raft transport stream to node 2 established
I161030 07:28:17.899467 23270 storage/raft_transport.go:423  raft transport stream to node 3 established
I161030 07:28:18.742567 23337 storage/replica.go:1860  [s1,r2/1:{"split"-/Max}] proposing REMOVE_REPLICA {NodeID:1 StoreID:1 ReplicaID:1} for range 2: [{NodeID:5 StoreID:5 ReplicaID:4} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I161030 07:28:18.967087 23199 storage/store.go:2922  [s1] [n1,s1,r2/1:{"split"-/Max}]: added to replica GC queue (peer suggestion)
I161030 07:28:19.204024 23399 storage/raft_transport.go:423  raft transport stream to node 4 established
I161030 07:28:19.204367 23400 storage/raft_transport.go:423  raft transport stream to node 5 established
I161030 07:28:19.215066 23428 storage/raft_transport.go:423  raft transport stream to node 2 established
I161030 07:28:19.232089 23343 storage/raft_transport.go:423  raft transport stream to node 2 established
I161030 07:28:19.260273 22090 storage/replica_proposal.go:292  [s2,r2/2:{"split"-/Max}] new range lease replica {2 2 2} 1970-01-01 00:00:00.9 +0000 UTC 1.8s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900ms [physicalTime=1970-01-01 00:00:01.8 +0000 UTC]
W161030 07:28:19.437374 21992 raft/raft.go:696  [s1,r2/1:{"split"-/Max}] 1 stepped down to follower since quorum is not active
W161030 07:28:20.469658 23523 storage/replica.go:1233  [s3,r1/3:{/Min-"split"}] context canceled while in command queue: ResolveIntent [/Meta2/Max,/Min)
I161030 07:28:20.591605 22316 storage/replica_proposal.go:292  [s5,r2/4:{"split"-/Max}] new range lease replica {5 5 4} 1970-01-01 00:00:02.7 +0000 UTC 10.8s following replica {2 2 2} 1970-01-01 00:00:00.9 +0000 UTC 1.8s [physicalTime=1970-01-01 00:00:14.4 +0000 UTC]
I161030 07:28:20.779524 22089 storage/replica_proposal.go:341  [s2,r2/2:{"split"-/Max}] range [n2,s2,r2/2:{"split"-/Max}]: transferring raft leadership to replica ID 4
I161030 07:28:20.910251 23415 storage/raft_transport.go:423  raft transport stream to node 4 established
W161030 07:28:20.914425 23470 storage/store.go:745  storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica for 137.117874ms (>100ms):
goroutine 23470 [running]:
runtime/debug.Stack(0x1bbe4c8, 0x1bbe508, 0x43)
    /usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x82c40b2)
    /go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc42c32b0a8)
    /go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica(0xc42c32ae00, 0xc42da8fe80, 0xc, 0x14, 0x0, 0x0, 0x0, 0xc42c32b180)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1445 +0x1f6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).LookupReplica(0xc42d2b4960, 0xc42da8fe80, 0xc, 0x14, 0xc42da8fe80, 0xd, 0x14, 0x0, 0x0, 0x0, ...)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:209 +0x1a6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc42d2b4960, 0x7f0d3ea5cbb0, 0xc421f65180, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0xc42d2eb380, ...)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:152 +0x131
github.com/cockroachdb/cockroach/pkg/storage_test.(*multiTestContextKVTransport).SendNext.func1(0x7f0d3ea5cbb0, 0xc421f65180)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/client_test.go:477 +0x136
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc42abb3680, 0x204844f, 0x16, 0x1fb, 0x0, 0x0, 0xc423950140, 0x7f0d3ea5cbb0, 0xc421f65180)
    /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:264 +0xdf
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
    /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:265 +0x234
I161030 07:28:21.031310 23418 storage/raft_transport.go:423  raft transport stream to node 5 established
I161030 07:28:21.062434 22324 storage/replica_proposal.go:341  [s5,r2/4:{"split"-/Max}] range [n5,s5,r2/4:{"split"-/Max}]: transferring raft leadership to replica ID 2
I161030 07:28:21.097349 22324 storage/replica_proposal.go:341  [s5,r2/4:{"split"-/Max}] range [n5,s5,r2/4:{"split"-/Max}]: transferring raft leadership to replica ID 2
I161030 07:28:21.138269 22047 storage/replica_proposal.go:292  [s2,r2/2:{"split"-/Max}] new range lease replica {2 2 2} 1970-01-01 00:00:15.3 +0000 UTC 5.4s following replica {5 5 4} 1970-01-01 00:00:02.7 +0000 UTC 12.6s [physicalTime=1970-01-01 00:00:19.8 +0000 UTC]
W161030 07:28:21.234306 23570 storage/replica.go:1233  [s3,r1/3:{/Min-"split"}] context canceled while in command queue: ResolveIntent [/Meta2/Max,/Min)
I161030 07:28:21.653370 22087 storage/replica_proposal.go:341  [s2,r2/2:{"split"-/Max}] range [n2,s2,r2/2:{"split"-/Max}]: transferring raft leadership to replica ID 3
I161030 07:28:21.800052 22234 storage/replica_proposal.go:292  [s4,r2/3:{"split"-/Max}] new range lease replica {4 4 3} 1970-01-01 00:00:20.7 +0000 UTC 5.4s following replica {2 2 2} 1970-01-01 00:00:15.3 +0000 UTC 5.4s [physicalTime=1970-01-01 00:00:25.2 +0000 UTC]
I161030 07:28:21.873690 22320 storage/replica.go:1908  [s5,r2/4:{"split"-/Max}] not quiescing: 1 pending commands
I161030 07:28:22.501330 22085 storage/replica_proposal.go:292  [s2,r1/2:{/Min-"split"}] new range lease replica {2 2 2} 1970-01-01 00:00:20.7 +0000 UTC 7.2s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 20.7s [physicalTime=1970-01-01 00:00:41.4 +0000 UTC]
W161030 07:28:22.980190 23345 storage/replica.go:1233  [s1,r1/1:{/Min-"split"}] context canceled while in command queue: ResolveIntent [/Meta2/Max,/Min)
I161030 07:28:23.142836 22145 storage/replica_proposal.go:292  [s3,r1/3:{/Min-"split"}] new range lease replica {3 3 3} 1970-01-01 00:00:42.3 +0000 UTC 25.2s following replica {2 2 2} 1970-01-01 00:00:20.7 +0000 UTC 21.6s [physicalTime=1970-01-01 00:01:12 +0000 UTC]
I161030 07:28:23.337330 22048 storage/replica_proposal.go:341  [s2,r1/2:{/Min-"split"}] range [n2,s2,r1/2:{/Min-"split"}]: transferring raft leadership to replica ID 1
I161030 07:28:23.366770 22048 storage/replica_proposal.go:341  [s2,r1/2:{/Min-"split"}] range [n2,s2,r1/2:{/Min-"split"}]: transferring raft leadership to replica ID 1
I161030 07:28:23.391968 21999 storage/replica_proposal.go:292  [s1,r1/1:{/Min-"split"}] new range lease replica {1 1 1} 1970-01-01 00:01:07.5 +0000 UTC 5.4s following replica {3 3 3} 1970-01-01 00:00:42.3 +0000 UTC 25.2s [physicalTime=1970-01-01 00:01:12 +0000 UTC]
E161030 07:28:23.656792 21976 storage/node_liveness.go:141  [hb] failed liveness heartbeat: replay txn
E161030 07:28:23.712516 22077 storage/node_liveness.go:141  [hb] failed liveness heartbeat: replay txn
I161030 07:28:23.931064 22569 storage/queue.go:620  [replicate] purgatory is now empty
I161030 07:28:24.278840 22019 storage/client_test.go:420  gossip network initialized
I161030 07:28:24.299156 22019 storage/replica_raftstorage.go:446  [replicate,s1,r1/1:{/Min-"split"}] generated snapshot 8311d131 for range 1 at index 219 in 149.999µs.
I161030 07:28:24.455985 22019 storage/store.go:3061  [replicate,s1,r1/1:{/Min-"split"}] streamed snapshot: kv pairs: 150, log entries: 209
I161030 07:28:24.491570 23718 storage/replica_raftstorage.go:577  [s6] [n6,s6,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 219 (id=8311d131, encoded size=16, 1 rocksdb batches, 209 log entries)
I161030 07:28:24.668039 23718 storage/replica_raftstorage.go:580  [s6] [n6,s6,r1/?:{/Min-"split"}]: with replicaID [?], applied preemptive snapshot in 0.176s
I161030 07:28:24.681461 22019 storage/replica_command.go:3240  [replicate,s1,r1/1:{/Min-"split"}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"split" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > next_replica_id:4 
I161030 07:28:25.073591 23902 storage/replica.go:1860  [s1,r1/1:{/Min-"split"}] proposing ADD_REPLICA {NodeID:6 StoreID:6 ReplicaID:4} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:6 StoreID:6 ReplicaID:4}]
I161030 07:28:25.508368 23958 storage/raft_transport.go:423  raft transport stream to node 1 established
I161030 07:28:25.536359 22019 storage/replica_command.go:3240  [replicate,s1,r1/1:{/Min-"split"}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"split" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:6 store_id:6 replica_id:4 > next_replica_id:5 
I161030 07:28:25.746077 22052 storage/replica_raftstorage.go:446  [s1,r1/1:{/Min-"split"}] generated snapshot d7a80b51 for range 1 at index 247 in 1.758389ms.
I161030 07:28:25.874023 23983 storage/store.go:3061  [s1,r1/1:{/Min-"split"}] streamed snapshot: kv pairs: 174, log entries: 11
I161030 07:28:25.897669 24021 storage/replica_raftstorage.go:577  [s6,r1/?:{/Min-"split"}] [n6,s6,r1/?:{/Min-"split"}]: with replicaID 4, applying Raft snapshot at index 247 (id=d7a80b51, encoded size=16, 1 rocksdb batches, 11 log entries)
I161030 07:28:25.902351 24021 storage/replica_raftstorage.go:580  [s6,r1/4:{/Min-"split"}] [n6,s6,r1/4:{/Min-"split"}]: with replicaID 4, applied Raft snapshot in 0.004s
I161030 07:28:26.092385 22052 storage/replica_raftstorage.go:446  [s1,r1/1:{/Min-"split"}] generated snapshot 5534c818 for range 1 at index 252 in 160.999µs.
I161030 07:28:26.208651 23944 storage/store.go:3061  [s1,r1/1:{/Min-"split"}] streamed snapshot: kv pairs: 177, log entries: 16
I161030 07:28:26.209972 23999 storage/replica_raftstorage.go:577  [s6,r1/4:{/Min-"split"}] [n6,s6,r1/4:{/Min-"split"}]: with replicaID 4, applying Raft snapshot at index 252 (id=5534c818, encoded size=16, 1 rocksdb batches, 16 log entries)
I161030 07:28:26.249778 23999 storage/replica_raftstorage.go:580  [s6,r1/4:{/Min-"split"}] [n6,s6,r1/4:{/Min-"split"}]: with replicaID 4, applied Raft snapshot in 0.039s
I161030 07:28:26.375657 23948 storage/replica.go:1860  [s1,r1/1:{/Min-"split"}] proposing REMOVE_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:6 StoreID:6 ReplicaID:4} {NodeID:3 StoreID:3 ReplicaID:3}]
W161030 07:28:26.590283 24071 storage/store.go:745  storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica for 101.877804ms (>100ms):
goroutine 24071 [running]:
runtime/debug.Stack(0x1bbe4c8, 0x1bbe508, 0x43)
    /usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x612882c)
    /go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc42c32b0a8)
    /go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).LookupReplica(0xc42c32ae00, 0xc42a93d8c0, 0xc, 0x14, 0x0, 0x0, 0x0, 0xc42c32b180)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1445 +0x1f6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).LookupReplica(0xc42d2b4960, 0xc42a93d8c0, 0xc, 0x14, 0xc42a93d8c0, 0xd, 0x14, 0x0, 0x0, 0x0, ...)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:209 +0x1a6
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc42d2b4960, 0x7f0d3ea5cbb0, 0xc42884f180, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0xc4223b0f00, ...)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:152 +0x131
github.com/cockroachdb/cockroach/pkg/storage_test.(*multiTestContextKVTransport).SendNext.func1(0x7f0d3ea5cbb0, 0xc42884f180)
    /go/src/github.com/cockroachdb/cockroach/pkg/storage/client_test.go:477 +0x136
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc42abb3680, 0x204844f, 0x16, 0x1fb, 0x0, 0x0, 0xc42a93db00, 0x7f0d3ea5cbb0, 0xc42884f180)
    /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:264 +0xdf
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
    /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:265 +0x234
I161030 07:28:26.828624 22658 storage/store.go:2922  [s2] [n2,s2,r1/2:{/Min-"split"}]: added to replica GC queue (peer suggestion)
I161030 07:28:27.476487 24217 util/stop/stopper.go:455  quiescing; tasks left:
5      storage/client_test.go:507
I161030 07:28:27.491184 24217 util/stop/stopper.go:455  quiescing; tasks left:
4      storage/client_test.go:507
E161030 07:28:27.498568 22077 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161030 07:28:27.521842 24217 util/stop/stopper.go:455  quiescing; tasks left:
3      storage/client_test.go:507
I161030 07:28:27.523249 24217 util/stop/stopper.go:455  quiescing; tasks left:
2      storage/client_test.go:507
E161030 07:28:27.523793 21976 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161030 07:28:27.536962 22377 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
I161030 07:28:27.548788 24217 util/stop/stopper.go:455  quiescing; tasks left:
1      storage/client_test.go:507
I161030 07:28:27.549021 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
E161030 07:28:27.549457 22377 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161030 07:28:27.549760 22292 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161030 07:28:27.552492 22292 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161030 07:28:27.555908 22016 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161030 07:28:27.557763 21976 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161030 07:28:27.558831 22200 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161030 07:28:27.559724 22200 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161030 07:28:27.565675 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
E161030 07:28:27.606059 22016 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161030 07:28:27.606832 22377 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
W161030 07:28:27.609629 22820 storage/store.go:2926  [s3] got error from range 0, replica {1 1 0}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:0}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:0}
W161030 07:28:27.609829 22820 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
E161030 07:28:27.615689 22077 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161030 07:28:27.619236 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
W161030 07:28:27.619501 22820 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.619706 22820 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.619895 22820 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.620274 22820 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.621669 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.621866 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.622050 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.622231 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.622412 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.623177 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161030 07:28:27.623379 23973 storage/store.go:2926  [s6] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:6 StoreID:6 ReplicaID:4}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
I161030 07:28:27.624638 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.626641 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.628670 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.630637 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.631913 22338 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:35804->127.0.0.1:43830: use of closed network connection
I161030 07:28:27.632184 22195 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:38996->127.0.0.1:48551: use of closed network connection
I161030 07:28:27.632486 22373 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:37547->127.0.0.1:42868: read: connection reset by peer
I161030 07:28:27.632958 22005 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:50800->127.0.0.1:35806: use of closed network connection
I161030 07:28:27.633255 21988 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:35852->127.0.0.1:52008: use of closed network connection
I161030 07:28:27.633457 22114 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:55672->127.0.0.1:52498: use of closed network connection
W161030 07:28:27.633977 23400 storage/raft_transport.go:428  raft transport stream to node 5 failed: rpc error: code = 13 desc = transport is closing
I161030 07:28:27.634501 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.634868 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.635213 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.635532 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.635930 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161030 07:28:27.636287 24216 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
    client_raft_test.go:1937: expected 0 normal snapshots, but found 2
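
The failing assertion appears to expect that the rebalance above is served entirely by preemptive snapshots; the two "applying Raft snapshot" entries on s6 (at indices 247 and 252) would be the two "normal" snapshots that trip it. A minimal sketch of that style of check, using illustrative names rather than the actual client_raft_test.go code:

```go
// Hypothetical sketch of the failing check; snapshotCounts and
// verifyNoNormalSnapshots are illustrative names, not CockroachDB's test code.
package main

import "fmt"

// snapshotCounts tallies the two snapshot kinds visible in the log above:
// preemptive snapshots streamed ahead of a replica change, and "normal"
// snapshots that Raft itself sends when a follower has fallen behind.
type snapshotCounts struct {
	preemptive int
	normal     int
}

// verifyNoNormalSnapshots mirrors the failure message: after rebalancing,
// every snapshot should have been preemptive.
func verifyNoNormalSnapshots(c snapshotCounts) error {
	if c.normal != 0 {
		return fmt.Errorf("expected 0 normal snapshots, but found %d", c.normal)
	}
	return nil
}

func main() {
	// Counts read off the log above: six preemptive snapshots were applied
	// (on s2, s3, s4, s5, s6), plus two Raft snapshots on s6.
	if err := verifyNoNormalSnapshots(snapshotCounts{preemptive: 6, normal: 2}); err != nil {
		fmt.Println(err)
	}
}
```
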
cockroach-teamcity added the O-robot (Originated from a bot.) and C-test-failure (Broken test (automatically or manually discovered).) labels on Oct 30, 2016
@tamird
Contributor

tamird commented Oct 31, 2016

#10156.

tamird closed this as completed on Oct 31, 2016