
: TestStoreRangeRebalance failed under stress #10497

Closed
cockroach-teamcity opened this issue Nov 7, 2016 · 2 comments
Assignees: petermattis
Labels: C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

Comments

@cockroach-teamcity (Member)

SHA: https://github.com/cockroachdb/cockroach/commits/278e8b16fe22ff1e9e20359fe306cf8daac70d51

Stress build found a failed test:

I161107 08:42:45.551226 22323 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161107 08:42:45.552760 22323 gossip/gossip.go:237  [n?] initial resolvers: []
W161107 08:42:45.554049 22323 gossip/gossip.go:1055  [n?] no resolvers found; use --join to specify a connected node
I161107 08:42:45.554493 22323 base/node_id.go:62  NodeID set to 1
I161107 08:42:45.581772 22323 storage/store.go:1174  [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161107 08:42:45.592642 22323 gossip/gossip.go:280  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38958" > attrs:<> locality:<> 
I161107 08:42:45.593508 22342 storage/replica_proposal.go:334  [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900.000124ms following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=1970-01-01 00:00:00.000000123 +0000 UTC]
I161107 08:42:45.610591 22323 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161107 08:42:45.630384 22323 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:38958]
W161107 08:42:45.630857 22323 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161107 08:42:45.631221 22323 base/node_id.go:62  NodeID set to 2
I161107 08:42:45.654482 22323 storage/store.go:1174  [n2,s2]: failed initial metrics computation: [n2,s2]: system config not yet available
I161107 08:42:45.654851 22323 gossip/gossip.go:280  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:48481" > attrs:<> locality:<> 
I161107 08:42:45.656255 22434 gossip/client.go:125  [n2] started gossip client to 127.0.0.1:38958
I161107 08:42:45.660808 22323 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161107 08:42:45.662345 22323 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:38958]
W161107 08:42:45.677036 22323 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161107 08:42:45.677359 22323 base/node_id.go:62  NodeID set to 3
I161107 08:42:45.707745 22323 storage/store.go:1174  [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available
I161107 08:42:45.710768 22244 gossip/client.go:125  [n3] started gossip client to 127.0.0.1:38958
I161107 08:42:45.711546 22323 gossip/gossip.go:280  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:55397" > attrs:<> locality:<> 
I161107 08:42:45.722899 22323 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161107 08:42:45.759511 22323 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:38958]
W161107 08:42:45.760016 22323 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161107 08:42:45.760391 22323 base/node_id.go:62  NodeID set to 4
I161107 08:42:45.776728 22323 storage/store.go:1174  [n4,s4]: failed initial metrics computation: [n4,s4]: system config not yet available
I161107 08:42:45.777055 22323 gossip/gossip.go:280  [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:53487" > attrs:<> locality:<> 
I161107 08:42:45.779339 22323 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161107 08:42:45.780773 22323 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:38958]
W161107 08:42:45.781176 22323 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161107 08:42:45.782030 22581 gossip/client.go:125  [n4] started gossip client to 127.0.0.1:38958
I161107 08:42:45.793791 22323 base/node_id.go:62  NodeID set to 5
I161107 08:42:45.812576 22323 storage/store.go:1174  [n5,s5]: failed initial metrics computation: [n5,s5]: system config not yet available
I161107 08:42:45.815652 22323 gossip/gossip.go:280  [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:54643" > attrs:<> locality:<> 
I161107 08:42:45.816416 22323 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161107 08:42:45.817825 22525 gossip/client.go:125  [n5] started gossip client to 127.0.0.1:38958
I161107 08:42:45.820077 22323 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:38958]
W161107 08:42:45.820481 22323 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161107 08:42:45.820815 22323 base/node_id.go:62  NodeID set to 6
I161107 08:42:45.840022 22558 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:48481})
I161107 08:42:45.856516 22323 storage/store.go:1174  [n6,s6]: failed initial metrics computation: [n6,s6]: system config not yet available
I161107 08:42:45.856844 22323 gossip/gossip.go:280  [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:51037" > attrs:<> locality:<> 
I161107 08:42:45.861330 22525 gossip/client.go:130  [n5] closing client to node 1 (127.0.0.1:38958): received forward from node 1 to 2 (127.0.0.1:48481)
I161107 08:42:45.861894 22671 gossip/client.go:125  [n5] started gossip client to 127.0.0.1:48481
I161107 08:42:45.884816 22753 gossip/server.go:263  [n1] refusing gossip from node 6 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:53487})
I161107 08:42:45.888861 22750 gossip/client.go:125  [n6] started gossip client to 127.0.0.1:38958
I161107 08:42:45.890039 22558 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:48481})
I161107 08:42:45.892581 22750 gossip/client.go:130  [n6] closing client to node 1 (127.0.0.1:38958): received forward from node 1 to 4 (127.0.0.1:53487)
I161107 08:42:45.893279 22775 gossip/client.go:125  [n6] started gossip client to 127.0.0.1:53487
I161107 08:42:45.901595 22753 gossip/server.go:263  [n1] refusing gossip from node 6 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:48481})
I161107 08:42:45.989546 22323 storage/replica_command.go:2360  [s1,r1/1:/M{in-ax}] initiating a split of this range at key "split" [r2]
E161107 08:42:46.113178 22372 storage/queue.go:579  [replicate] (purgatory) on [n1,s1,r1/1:{/Min-"split"}]: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
E161107 08:42:46.117972 22372 storage/queue.go:579  [replicate] (purgatory) on [n1,s1,r2/1:{"split"-/Max}]: 0 of 0 stores with an attribute matching []; likely not enough nodes in cluster
I161107 08:42:46.124125 22323 storage/replica_raftstorage.go:446  [s1,r1/1:{/Min-"split"}] generated snapshot a89019b5 for range 1 at index 29 in 154.792µs.
I161107 08:42:46.183603 22323 storage/store.go:3107  streamed snapshot: kv pairs: 31, log entries: 19
I161107 08:42:46.185734 22815 storage/replica_raftstorage.go:592  [s2] [n2,s2,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 29 (id=a89019b5, encoded size=16, 1 rocksdb batches, 19 log entries)
I161107 08:42:46.189739 22815 storage/replica_raftstorage.go:595  [s2] [n2,s2,r1/?:{/Min-"split"}]: with replicaID [?], applied preemptive snapshot in 0.004s
I161107 08:42:46.200228 22323 storage/replica_command.go:3242  change replicas: read existing descriptor range_id:1 start_key:"" end_key:"split" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 
I161107 08:42:46.260413 22848 storage/replica.go:2026  [s1,r1/1:{/Min-"split"}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161107 08:42:46.298505 22323 storage/replica_raftstorage.go:446  [s1,r1/1:{/Min-"split"}] generated snapshot 15e1e0eb for range 1 at index 38 in 96.895µs.
I161107 08:42:46.304285 22323 storage/store.go:3107  streamed snapshot: kv pairs: 41, log entries: 28
I161107 08:42:46.306140 22921 storage/replica_raftstorage.go:592  [s3] [n3,s3,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 38 (id=15e1e0eb, encoded size=16, 1 rocksdb batches, 28 log entries)
I161107 08:42:46.311791 22921 storage/replica_raftstorage.go:595  [s3] [n3,s3,r1/?:{/Min-"split"}]: with replicaID [?], applied preemptive snapshot in 0.005s
I161107 08:42:46.345212 22323 storage/replica_command.go:3242  change replicas: read existing descriptor range_id:1 start_key:"" end_key:"split" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 
I161107 08:42:46.364937 22951 storage/raft_transport.go:432  raft transport stream to node 1 established
I161107 08:42:46.499769 22980 storage/replica.go:2026  [s1,r1/1:{/Min-"split"}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161107 08:42:46.636151 22323 storage/replica_raftstorage.go:446  [s1,r2/1:{"split"-/Max}] generated snapshot fcb31e5a for range 2 at index 11 in 105.694µs.
I161107 08:42:46.741577 22323 storage/store.go:3107  streamed snapshot: kv pairs: 28, log entries: 1
I161107 08:42:46.755015 22964 storage/replica_raftstorage.go:592  [s2] [n2,s2,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 11 (id=fcb31e5a, encoded size=16, 1 rocksdb batches, 1 log entries)
I161107 08:42:46.756563 22964 storage/replica_raftstorage.go:595  [s2] [n2,s2,r2/?:{"split"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161107 08:42:46.776958 22936 storage/raft_transport.go:432  raft transport stream to node 1 established
I161107 08:42:46.841732 22323 storage/replica_command.go:3242  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 
I161107 08:42:47.174499 23070 storage/replica.go:2026  [s1,r2/1:{"split"-/Max}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
W161107 08:42:47.254531 22346 raft/raft.go:794  [s1,r2/1:{"split"-/Max}] 1 stepped down to follower since quorum is not active
I161107 08:42:47.260483 22323 storage/replica_raftstorage.go:446  [s1,r2/1:{"split"-/Max}] generated snapshot 39149fe2 for range 2 at index 13 in 101.195µs.
I161107 08:42:47.334008 22323 storage/store.go:3107  streamed snapshot: kv pairs: 30, log entries: 3
I161107 08:42:47.367463 23158 storage/replica_raftstorage.go:592  [s4] [n4,s4,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 13 (id=39149fe2, encoded size=16, 1 rocksdb batches, 3 log entries)
I161107 08:42:47.429556 23158 storage/replica_raftstorage.go:595  [s4] [n4,s4,r2/?:{"split"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.061s
I161107 08:42:47.476312 22323 storage/replica_command.go:3242  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 
I161107 08:42:48.310374 23274 storage/replica.go:2026  [s1,r2/1:{"split"-/Max}] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I161107 08:42:48.401429 22323 storage/replica_raftstorage.go:446  [s1,r2/1:{"split"-/Max}] generated snapshot 7b5e629d for range 2 at index 17 in 127.794µs.
I161107 08:42:48.539443 23280 storage/raft_transport.go:432  raft transport stream to node 1 established
I161107 08:42:48.541582 22323 storage/store.go:3107  streamed snapshot: kv pairs: 31, log entries: 7
I161107 08:42:48.560227 22687 storage/replica_raftstorage.go:592  [s5] [n5,s5,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 17 (id=7b5e629d, encoded size=16, 1 rocksdb batches, 7 log entries)
I161107 08:42:48.562578 22687 storage/replica_raftstorage.go:595  [s5] [n5,s5,r2/?:{"split"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.002s
I161107 08:42:48.785167 22323 storage/replica_command.go:3242  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4 
I161107 08:42:49.208855 23254 storage/replica.go:2026  [s1,r2/1:{"split"-/Max}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:4} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:5 StoreID:5 ReplicaID:4}]
I161107 08:42:49.521747 22323 storage/replica_command.go:3242  change replicas: read existing descriptor range_id:2 start_key:"split" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:5 store_id:5 replica_id:4 > next_replica_id:5 
I161107 08:42:49.626351 23437 storage/raft_transport.go:432  raft transport stream to node 1 established
I161107 08:42:49.922797 23415 storage/raft_transport.go:432  raft transport stream to node 3 established
I161107 08:42:49.940576 23418 storage/raft_transport.go:432  raft transport stream to node 2 established
I161107 08:42:50.672949 23562 storage/replica.go:2026  [s1,r2/1:{"split"-/Max}] proposing REMOVE_REPLICA {NodeID:1 StoreID:1 ReplicaID:1} for range 2: [{NodeID:5 StoreID:5 ReplicaID:4} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I161107 08:42:50.719122 22356 storage/replica.go:2073  [s1,r1/1:{/Min-"split"}] not quiescing: 6 pending commands
I161107 08:42:50.750491 22350 storage/replica.go:2073  [s1,r1/1:{/Min-"split"}] not quiescing: 6 pending commands
I161107 08:42:50.918575 22689 storage/store.go:2964  [s1] [n1,s1,r2/1:{"split"-/Max}]: added to replica GC queue (peer suggestion)
I161107 08:42:51.098881 23542 storage/raft_transport.go:432  raft transport stream to node 5 established
I161107 08:42:51.102447 23537 storage/raft_transport.go:432  raft transport stream to node 2 established
I161107 08:42:51.104862 23541 storage/raft_transport.go:432  raft transport stream to node 4 established
I161107 08:42:51.110484 23546 storage/raft_transport.go:432  raft transport stream to node 2 established
W161107 08:42:51.314944 22347 raft/raft.go:794  [s1,r2/1:{"split"-/Max}] 1 stepped down to follower since quorum is not active
I161107 08:42:51.400286 22419 storage/replica_proposal.go:334  [s2,r2/2:{"split"-/Max}] new range lease replica {2 2 2} 1970-01-01 00:00:00.900000124 +0000 UTC 3.600000004s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900.000124ms [physicalTime=1970-01-01 00:00:09.000000133 +0000 UTC]
I161107 08:42:51.737105 22422 storage/replica_proposal.go:334  [s2,r1/2:{/Min-"split"}] new range lease replica {2 2 2} 1970-01-01 00:00:00.900000124 +0000 UTC 3.600000004s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900.000124ms [physicalTime=1970-01-01 00:00:10.800000135 +0000 UTC]
I161107 08:42:52.129658 22419 storage/replica_proposal.go:383  [s2,r2/2:{"split"-/Max}] range [n2,s2,r2/2:{"split"-/Max}]: transferring raft leadership to replica ID 4
I161107 08:42:52.169247 22612 storage/replica_proposal.go:334  [s5,r2/4:{"split"-/Max}] new range lease replica {5 5 4} 1970-01-01 00:00:09.900000134 +0000 UTC 1.800000002s following replica {2 2 2} 1970-01-01 00:00:00.900000124 +0000 UTC 9.00000001s [physicalTime=1970-01-01 00:00:10.800000135 +0000 UTC]
I161107 08:42:52.339565 22419 storage/replica_proposal.go:383  [s2,r2/2:{"split"-/Max}] range [n2,s2,r2/2:{"split"-/Max}]: transferring raft leadership to replica ID 4
I161107 08:42:52.552897 23638 storage/raft_transport.go:432  raft transport stream to node 4 established
I161107 08:42:52.594124 23639 storage/raft_transport.go:432  raft transport stream to node 5 established
I161107 08:42:53.184987 22323 storage/client_test.go:414  gossip network initialized
I161107 08:42:53.584808 23738 util/stop/stopper.go:468  quiescing; tasks left:
1      storage/intent_resolver.go:317
I161107 08:42:53.585052 23737 util/stop/stopper.go:468  quiescing; tasks left:
9      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/queue.go:624
1      storage/intent_resolver.go:383
W161107 08:42:53.585399 23624 storage/replica.go:1774  [s1,r1/1:{/Min-"split"}] shutdown cancellation of command RequestLease [/Min,/Min)
I161107 08:42:53.605731 23737 util/stop/stopper.go:468  quiescing; tasks left:
8      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/queue.go:624
1      storage/intent_resolver.go:383
E161107 08:42:53.606036 22866 storage/queue.go:568  [replicate] on [n1,s1,r1/1:{/Min-"split"}]: [n1,s1,r1/1:{/Min-"split"}]: could not obtain lease: result is ambiguous
I161107 08:42:53.606342 22866 storage/queue.go:630  [replicate] purgatory is now empty
I161107 08:42:53.606567 23737 util/stop/stopper.go:468  quiescing; tasks left:
8      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/intent_resolver.go:383
E161107 08:42:53.607047 22318 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
I161107 08:42:53.618247 23737 util/stop/stopper.go:468  quiescing; tasks left:
7      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/intent_resolver.go:383
I161107 08:42:53.620304 23739 util/stop/stopper.go:468  quiescing; tasks left:
1      storage/intent_resolver.go:317
I161107 08:42:53.621652 23741 util/stop/stopper.go:468  quiescing; tasks left:
3      storage/client_test.go:501
1      storage/replica_range_lease.go:150
E161107 08:42:53.622851 22554 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
W161107 08:42:53.625248 23765 storage/replica.go:1774  [s5,r2/4:{"split"-/Max}] shutdown cancellation of command RequestLease ["split",/Min)
I161107 08:42:53.640177 23741 util/stop/stopper.go:468  quiescing; tasks left:
3      storage/client_test.go:501
I161107 08:42:53.640845 23741 util/stop/stopper.go:468  quiescing; tasks left:
2      storage/client_test.go:501
I161107 08:42:53.652366 23737 util/stop/stopper.go:468  quiescing; tasks left:
6      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/intent_resolver.go:383
I161107 08:42:53.652842 23737 util/stop/stopper.go:468  quiescing; tasks left:
5      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/intent_resolver.go:383
E161107 08:42:53.653868 22554 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
I161107 08:42:53.658412 23737 util/stop/stopper.go:468  quiescing; tasks left:
4      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/intent_resolver.go:383
E161107 08:42:53.676697 22318 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161107 08:42:53.701149 23737 util/stop/stopper.go:468  quiescing; tasks left:
3      storage/client_test.go:501
1      storage/replica_range_lease.go:150
1      storage/intent_resolver.go:383
E161107 08:42:53.701880 22528 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
W161107 08:42:53.702532 23540 storage/intent_resolver.go:380  could not GC completed transaction anchored at /Local/Range/"split"/RangeDescriptor: result is ambiguous
I161107 08:42:53.702806 23737 util/stop/stopper.go:468  quiescing; tasks left:
3      storage/client_test.go:501
1      storage/replica_range_lease.go:150
W161107 08:42:53.704988 23549 storage/intent_resolver.go:310  [n3,s3,r1/3:{/Min-"split"}]: failed to resolve intents: node unavailable; try another peer
I161107 08:42:53.705189 23741 util/stop/stopper.go:468  quiescing; tasks left:
1      storage/client_test.go:501
E161107 08:42:53.706833 22286 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
E161107 08:42:53.707682 22286 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161107 08:42:53.712678 22416 storage/replica_proposal.go:383  [s2,r2/2:{"split"-/Max}] range [n2,s2,r2/2:{"split"-/Max}]: transferring raft leadership to replica ID 4
I161107 08:42:53.771318 23737 util/stop/stopper.go:468  quiescing; tasks left:
3      storage/client_test.go:501
E161107 08:42:53.787586 22528 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.792427 22318 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.792718 22474 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
W161107 08:42:53.813102 23588 storage/intent_resolver.go:310  [n2,s2,r1/2:{/Min-"split"}]: failed to resolve intents: result is ambiguous
I161107 08:42:53.813372 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
E161107 08:42:53.854154 22135 storage/node_liveness.go:141  [hb] failed liveness heartbeat: result is ambiguous
E161107 08:42:53.855046 22135 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.864848 22286 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.864981 22528 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.865105 22474 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.893537 22554 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161107 08:42:53.915916 22554 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161107 08:42:53.916053 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
E161107 08:42:53.925218 22528 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
W161107 08:42:53.927394 22953 storage/raft_transport.go:472  no handler found for store 2 in response range_id:0 from_replica:<node_id:1 store_id:1 replica_id:0 > to_replica:<node_id:2 store_id:2 replica_id:0 > union:<error:<message:"storage/raft_transport.go:258: unable to accept Raft message from {NodeID:2 StoreID:2 ReplicaID:0}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:0}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > > 
I161107 08:42:53.928153 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
W161107 08:42:53.930092 23619 storage/store.go:2968  [s4] got error from range 2, replica {2 2 2}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:4 StoreID:4 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161107 08:42:53.930277 23603 storage/store.go:2968  [s5] got error from range 2, replica {2 2 2}: storage/raft_transport.go:258: unable to accept Raft message from {NodeID:5 StoreID:5 ReplicaID:4}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
I161107 08:42:53.930808 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.932733 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.934649 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.936469 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
W161107 08:42:53.938122 22965 storage/raft_transport.go:437  raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
I161107 08:42:53.938515 22668 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:51037->127.0.0.1:54601: use of closed network connection
I161107 08:42:53.938842 22522 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:54643->127.0.0.1:49955: use of closed network connection
I161107 08:42:53.939153 22552 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:53487->127.0.0.1:49599: use of closed network connection
I161107 08:42:53.939462 22439 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:55397->127.0.0.1:49960: use of closed network connection
I161107 08:42:53.939844 22332 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:38958->127.0.0.1:40393: use of closed network connection
I161107 08:42:53.940139 22388 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:48481->127.0.0.1:45571: use of closed network connection
W161107 08:42:53.940385 23542 storage/raft_transport.go:437  raft transport stream to node 5 failed: rpc error: code = 13 desc = transport is closing
W161107 08:42:53.940760 23418 storage/raft_transport.go:437  raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W161107 08:42:53.941003 23638 storage/raft_transport.go:437  raft transport stream to node 4 failed: rpc error: code = 13 desc = transport is closing
W161107 08:42:53.941286 22948 storage/raft_transport.go:437  raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
I161107 08:42:53.941832 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.942195 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.942582 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.942921 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.943284 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161107 08:42:53.943680 23706 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
	client_raft_test.go:2093: range 1: replica {2 2 2} not lease holder; node_id:2 store_id:2 replica_id:2  is
cockroach-teamcity added the O-robot Originated from a bot. and C-test-failure Broken test (automatically or manually discovered). labels Nov 7, 2016
petermattis self-assigned this Nov 7, 2016
@petermattis (Collaborator)

Likely related to #10193.

petermattis added a commit to petermattis/cockroach that referenced this issue Nov 7, 2016
Rather than the somewhat complicated rebalancing scenario, use a simple
scenario that we perform up-replication of range 1 from 1 to 3 nodes. We
check that this up-replication is performed using preemptive
snapshots. The more complicated scenario was very fragile, frequently
being broken by innocuous changes.

Fixes cockroachdb#10497
Fixes cockroachdb#10193
Fixes cockroachdb#10156
Fixes cockroachdb#9395
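
The shape of the simplified test this commit describes is easy to picture: add replicas of range 1 one store at a time, where each addition first streams and applies a preemptive snapshot on the target store and only then commits the ChangeReplicas (ADD_REPLICA) transaction, then assert both the final replica count and the number of preemptive snapshots. The sketch below illustrates that shape with hypothetical harness types (testCluster, startTestCluster, upReplicate); it is not CockroachDB's actual multiTestContext test API.

```go
package storage_test

import "testing"

// testCluster is a hypothetical stand-in for the real test harness; it only
// tracks the two quantities the simplified test cares about.
type testCluster struct {
	replicas            int // replicas of range 1 currently present
	preemptiveSnapshots int // preemptive snapshots applied so far
}

func startTestCluster(t *testing.T, nodes int) *testCluster {
	return &testCluster{replicas: 1} // range 1 starts on store 1 only
}

func (c *testCluster) Stop() {}

// upReplicate mimics the two-step protocol visible in the log above: stream
// and apply a preemptive snapshot on the target store, then commit the
// ChangeReplicas (ADD_REPLICA) transaction that adds it to the range.
func (c *testCluster) upReplicate(t *testing.T, storeID int) {
	c.preemptiveSnapshots++ // snapshot applied before the replica is added
	c.replicas++            // ADD_REPLICA commits afterwards
}

func TestUpReplicationUsesPreemptiveSnapshots(t *testing.T) {
	c := startTestCluster(t, 3)
	defer c.Stop()

	// Up-replicate range 1 from store 1 to stores 2 and 3.
	c.upReplicate(t, 2)
	c.upReplicate(t, 3)

	if c.replicas != 3 {
		t.Fatalf("expected range 1 on 3 stores, got %d", c.replicas)
	}
	if c.preemptiveSnapshots != 2 {
		t.Fatalf("expected 2 preemptive snapshots, got %d", c.preemptiveSnapshots)
	}
}
```

The key property being checked is ordering: the snapshot is streamed and applied on the target store before ADD_REPLICA commits, which matches the "applying preemptive snapshot" lines followed by "proposing ADD_REPLICA" in the log above.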
@petermattis (Collaborator)

Duplicate of #10193.
