
TestStoreRangeDownReplicate failed under stress #10324

Closed
cockroach-teamcity opened this issue Oct 31, 2016 · 1 comment
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

Comments

@cockroach-teamcity
Member

SHA: https://github.com/cockroachdb/cockroach/commits/f4a8abc6abd9d15c995d5f749dbacf593e6a7900

Stress build found a failed test:

I161031 07:26:33.365637 14342 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161031 07:26:33.367157 14342 gossip/gossip.go:237  [n?] initial resolvers: []
W161031 07:26:33.367566 14342 gossip/gossip.go:1055  [n?] no resolvers found; use --join to specify a connected node
I161031 07:26:33.367881 14342 base/node_id.go:62  NodeID set to 1
I161031 07:26:33.385089 14342 storage/store.go:1152  [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161031 07:26:33.385415 14342 gossip/gossip.go:280  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:50684" > attrs:<> locality:<> 
I161031 07:26:33.395114 14407 storage/replica_proposal.go:292  [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900ms following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=1970-01-01 00:00:00 +0000 UTC]
I161031 07:26:33.417835 14342 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161031 07:26:33.419334 14342 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:50684]
W161031 07:26:33.419759 14342 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161031 07:26:33.420098 14342 base/node_id.go:62  NodeID set to 2
I161031 07:26:33.451784 14342 storage/store.go:1152  [n2,s2]: failed initial metrics computation: [n2,s2]: system config not yet available
I161031 07:26:33.456223 14425 gossip/client.go:126  [n2] node 2: started gossip client to 127.0.0.1:50684
I161031 07:26:33.457148 14342 gossip/gossip.go:280  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36658" > attrs:<> locality:<> 
I161031 07:26:33.458933 14342 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161031 07:26:33.462128 14342 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:50684]
W161031 07:26:33.462892 14342 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161031 07:26:33.466825 14342 base/node_id.go:62  NodeID set to 3
I161031 07:26:33.486324 14342 storage/store.go:1152  [n3,s3]: failed initial metrics computation: [n3,s3]: system config not yet available
I161031 07:26:33.486645 14342 gossip/gossip.go:280  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33938" > attrs:<> locality:<> 
I161031 07:26:33.496375 14342 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161031 07:26:33.497938 14342 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:50684]
W161031 07:26:33.498415 14342 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161031 07:26:33.498773 14342 base/node_id.go:62  NodeID set to 4
I161031 07:26:33.525088 14548 gossip/client.go:126  [n3] node 3: started gossip client to 127.0.0.1:50684
I161031 07:26:33.527276 14342 storage/store.go:1152  [n4,s4]: failed initial metrics computation: [n4,s4]: system config not yet available
I161031 07:26:33.527600 14342 gossip/gossip.go:280  [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:38769" > attrs:<> locality:<> 
I161031 07:26:33.530107 14342 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161031 07:26:33.532383 14342 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:50684]
W161031 07:26:33.532819 14342 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161031 07:26:33.533139 14342 base/node_id.go:62  NodeID set to 5
I161031 07:26:33.538400 13978 gossip/client.go:126  [n4] node 4: started gossip client to 127.0.0.1:50684
I161031 07:26:33.609907 14620 gossip/client.go:126  [n5] node 5: started gossip client to 127.0.0.1:50684
I161031 07:26:33.627029 14342 storage/store.go:1152  [n5,s5]: failed initial metrics computation: [n5,s5]: system config not yet available
I161031 07:26:33.627420 14342 gossip/gossip.go:280  [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:41542" > attrs:<> locality:<> 
I161031 07:26:33.640435 14695 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:38769})
I161031 07:26:33.640853 14695 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:36658})
I161031 07:26:33.656409 14695 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:36658})
I161031 07:26:33.669430 14620 gossip/client.go:131  [n5] node 5: closing client to node 1 (127.0.0.1:50684): received forward from node 1 to 4 (127.0.0.1:38769)
I161031 07:26:33.677201 14745 gossip/client.go:126  [n5] node 5: started gossip client to 127.0.0.1:38769
I161031 07:26:33.696058 14695 gossip/server.go:263  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:36658})
I161031 07:26:33.790577 14342 storage/client_test.go:420  gossip network initialized
I161031 07:26:33.811983 14342 storage/replica_raftstorage.go:446  [s1,r1/1:/M{in-ax}] generated snapshot cb7f4dcf for range 1 at index 22 in 101.503µs.
I161031 07:26:33.819574 14342 storage/store.go:3061  streamed snapshot: kv pairs: 41, log entries: 12
I161031 07:26:33.824050 14822 storage/replica_raftstorage.go:577  [s2] [n2,s2,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 22 (id=cb7f4dcf, encoded size=16, 1 rocksdb batches, 12 log entries)
I161031 07:26:33.832063 14822 storage/replica_raftstorage.go:580  [s2] [n2,s2,r1/?:/M{in-ax}]: with replicaID [?], applied preemptive snapshot in 0.008s
I161031 07:26:33.840595 14342 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2 
I161031 07:26:33.880015 14840 storage/replica.go:1860  [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161031 07:26:33.900944 14342 storage/replica_raftstorage.go:446  [s1,r1/1:/M{in-ax}] generated snapshot aaa8d666 for range 1 at index 27 in 100.703µs.
W161031 07:26:33.905627 14399 raft/raft.go:696  [s1,r1/1:/M{in-ax}] 1 stepped down to follower since quorum is not active
I161031 07:26:33.917339 14342 storage/store.go:3061  streamed snapshot: kv pairs: 47, log entries: 17
I161031 07:26:33.917554 14586 storage/replica_raftstorage.go:577  [s3] [n3,s3,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 27 (id=aaa8d666, encoded size=16, 1 rocksdb batches, 17 log entries)
I161031 07:26:33.922381 14586 storage/replica_raftstorage.go:580  [s3] [n3,s3,r1/?:/M{in-ax}]: with replicaID [?], applied preemptive snapshot in 0.004s
I161031 07:26:33.940011 14342 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3 
I161031 07:26:34.190080 14899 storage/raft_transport.go:423  raft transport stream to node 1 established
I161031 07:26:34.306507 14885 storage/replica.go:1860  [s1,r1/1:/M{in-ax}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161031 07:26:34.528560 14949 storage/raft_transport.go:423  raft transport stream to node 1 established
I161031 07:26:34.634691 14342 storage/replica_command.go:2363  initiating a split of this range at key "m" [r2]
I161031 07:26:35.382170 14342 storage/replica_raftstorage.go:446  [s1,r2/1:{"m"-/Max}] generated snapshot 148ba74b for range 2 at index 10 in 126.404µs.
I161031 07:26:35.400737 14342 storage/store.go:3061  streamed snapshot: kv pairs: 28, log entries: 0
I161031 07:26:35.425086 15045 storage/replica_raftstorage.go:577  [s4] [n4,s4,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 10 (id=148ba74b, encoded size=16, 1 rocksdb batches, 0 log entries)
I161031 07:26:35.479407 15045 storage/replica_raftstorage.go:580  [s4] [n4,s4,r2/?:{"m"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161031 07:26:35.565081 14342 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > next_replica_id:4 
W161031 07:26:36.178978 15067 storage/stores.go:218  range not contained in one range: [/Meta2/Max,"m\x00"), but have [/Min,"m")
I161031 07:26:36.644674 15188 storage/replica.go:1860  [s1,r2/1:{"m"-/Max}] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:4} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4}]
W161031 07:26:36.784405 15187 storage/intent_resolver.go:314  [n3,s3,r1/3:{/Min-"m"}]: failed to push during intent resolution: failed to push "change-replica" id=7fd63e29 key=/Local/Range/"m"/RangeDescriptor rw=true pri=0.00104033 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000000,733 orig=0.000000000,733 max=0.000000000,733 wto=false rop=false
I161031 07:26:36.864641 14342 storage/replica_raftstorage.go:446  [s1,r2/1:{"m"-/Max}] generated snapshot 3c28d225 for range 2 at index 15 in 104.904µs.
I161031 07:26:36.922315 15165 storage/raft_transport.go:423  raft transport stream to node 1 established
I161031 07:26:36.948683 14342 storage/store.go:3061  streamed snapshot: kv pairs: 30, log entries: 5
I161031 07:26:36.950918 15161 storage/replica_raftstorage.go:577  [s5] [n5,s5,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 15 (id=3c28d225, encoded size=16, 1 rocksdb batches, 5 log entries)
I161031 07:26:36.960043 15161 storage/replica_raftstorage.go:580  [s5] [n5,s5,r2/?:{"m"-/Max}]: with replicaID [?], applied preemptive snapshot in 0.009s
I161031 07:26:36.984211 14342 storage/replica_command.go:3240  change replicas: read existing descriptor range_id:2 start_key:"m" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:3 store_id:3 replica_id:3 > replicas:<node_id:4 store_id:4 replica_id:4 > next_replica_id:5 
I161031 07:26:37.230473 15234 storage/raft_transport.go:423  raft transport stream to node 3 established
I161031 07:26:37.984209 15230 storage/replica.go:1860  [s1,r2/1:{"m"-/Max}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:5} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3} {NodeID:4 StoreID:4 ReplicaID:4} {NodeID:5 StoreID:5 ReplicaID:5}]
I161031 07:26:38.521200 15210 storage/raft_transport.go:423  raft transport stream to node 1 established
I161031 07:26:38.716443 15296 storage/raft_transport.go:423  raft transport stream to node 2 established
I161031 07:26:40.543579 15426 storage/raft_transport.go:423  raft transport stream to node 4 established
I161031 07:26:40.548767 15345 storage/raft_transport.go:423  raft transport stream to node 3 established
I161031 07:26:40.619417 15344 storage/raft_transport.go:423  raft transport stream to node 2 established
I161031 07:26:40.648455 14467 storage/replica_proposal.go:292  [s2,r1/2:{/Min-"m"}] new range lease replica {2 2 2} 1970-01-01 00:00:00.9 +0000 UTC 1.8s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 900ms [physicalTime=1970-01-01 00:00:07.2 +0000 UTC]
I161031 07:26:41.187319 14395 storage/replica_proposal.go:292  [s1,r1/1:{/Min-"m"}] new range lease replica {1 1 1} 1970-01-01 00:00:02.7 +0000 UTC 1.8s following replica {2 2 2} 1970-01-01 00:00:00.9 +0000 UTC 1.8s [physicalTime=1970-01-01 00:00:12.6 +0000 UTC]
I161031 07:26:41.217423 15369 storage/raft_transport.go:423  raft transport stream to node 5 established
I161031 07:26:41.226085 15356 storage/raft_transport.go:423  raft transport stream to node 5 established
I161031 07:26:41.237625 15355 storage/raft_transport.go:423  raft transport stream to node 4 established
I161031 07:26:41.265546 15459 storage/raft_transport.go:423  raft transport stream to node 3 established
I161031 07:26:41.572533 14467 storage/replica_proposal.go:292  [s2,r1/2:{/Min-"m"}] new range lease replica {2 2 2} 1970-01-01 00:00:04.5 +0000 UTC 7.2s following replica {1 1 1} 1970-01-01 00:00:02.7 +0000 UTC 1.8s [physicalTime=1970-01-01 00:00:12.6 +0000 UTC]
I161031 07:26:41.691904 14467 storage/replica_proposal.go:341  [s2,r1/2:{/Min-"m"}] range [n2,s2,r1/2:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161031 07:26:41.726082 14541 storage/replica_proposal.go:292  [s3,r2/3:{"m"-/Max}] new range lease replica {3 3 3} 1970-01-01 00:00:08.1 +0000 UTC 5.4s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 8.1s [physicalTime=1970-01-01 00:00:12.6 +0000 UTC]
I161031 07:26:41.781285 14405 storage/replica_proposal.go:341  [s1,r2/1:{"m"-/Max}] range [n1,s1,r2/1:{"m"-/Max}]: transferring raft leadership to replica ID 3
I161031 07:26:41.840119 14471 storage/replica_proposal.go:341  [s2,r1/2:{/Min-"m"}] range [n2,s2,r1/2:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161031 07:26:42.012564 14401 storage/replica_proposal.go:292  [s1,r1/1:{/Min-"m"}] new range lease replica {1 1 1} 1970-01-01 00:00:11.7 +0000 UTC 1.8s following replica {2 2 2} 1970-01-01 00:00:04.5 +0000 UTC 7.2s [physicalTime=1970-01-01 00:00:12.6 +0000 UTC]
E161031 07:26:42.586150 14497 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:42.629228 14626 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
I161031 07:26:43.051149 15524 storage/raft_transport.go:423  raft transport stream to node 5 established
E161031 07:26:43.202006 14739 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:43.211524 14441 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
I161031 07:26:43.218102 15523 storage/raft_transport.go:423  raft transport stream to node 2 established
W161031 07:26:43.282073 15278 storage/store.go:745  storeMu: mutex held by github.com/cockroachdb/cockroach/pkg/storage.(*Store).HandleRaftUncoalescedRequest for 151.961638ms (>100ms):
goroutine 15278 [running]:
runtime/debug.Stack(0x1bc2240, 0x1bc2280, 0x52)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x79
github.com/cockroachdb/cockroach/pkg/util/syncutil.ThresholdLogger.func1(0x90ec026)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:65 +0xe1
github.com/cockroachdb/cockroach/pkg/util/syncutil.(*TimedMutex).Unlock(0xc42742bea8)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/syncutil/timedmutex.go:92 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*Store).HandleRaftUncoalescedRequest(0xc42742bc00, 0x7f36e0946a90, 0xc422db4480, 0xc426dd2480, 0x7f36e092b768, 0xc420ddefe0, 0xd7a5c1)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:2647 +0x20e
github.com/cockroachdb/cockroach/pkg/storage.(*Store).HandleRaftRequest(0xc42742bc00, 0x7f36e0946a90, 0xc422db4480, 0xc426dd2480, 0x7f36e092b768, 0xc420ddefe0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:2613 +0x243
github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).handleRaftRequest(0xc4252f8de0, 0x7f36e0946a90, 0xc422db4480, 0xc426dd2480, 0x7f36e092b768, 0xc420ddefe0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:252 +0xfb
github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch.func1.1.1(0x235cc20, 0xc420ddefe0, 0xc4252f8de0, 0xc424403f88, 0xc424403f90)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:306 +0x173
github.com/cockroachdb/cockroach/pkg/storage.(*RaftTransport).RaftMessageBatch.func1.1()
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/raft_transport.go:314 +0x45
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc427fb6120, 0xc422faf020)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
I161031 07:26:43.632494 15511 storage/raft_transport.go:423  raft transport stream to node 4 established
I161031 07:26:43.732997 14478 storage/replica_proposal.go:292  [s2,r1/2:{/Min-"m"}] new range lease replica {2 2 2} 1970-01-01 00:00:24.3 +0000 UTC 1.8s following replica {1 1 1} 1970-01-01 00:00:11.7 +0000 UTC 12.6s [physicalTime=1970-01-01 00:00:27 +0000 UTC]
I161031 07:26:45.243567 14595 storage/replica_proposal.go:341  [s4,r2/4:{"m"-/Max}] range [n4,s4,r2/4:{"m"-/Max}]: transferring raft leadership to replica ID 3
I161031 07:26:45.906345 14400 storage/replica_proposal.go:292  [s1,r1/1:{/Min-"m"}] new range lease replica {1 1 1} 1970-01-01 00:00:26.1 +0000 UTC 9s following replica {2 2 2} 1970-01-01 00:00:24.3 +0000 UTC 1.8s [physicalTime=1970-01-01 00:00:45 +0000 UTC]
I161031 07:26:46.932403 14539 storage/replica_proposal.go:341  [s3,r1/3:{/Min-"m"}] range [n3,s3,r1/3:{/Min-"m"}]: transferring raft leadership to replica ID 1
I161031 07:26:47.118826 14534 storage/replica_proposal.go:341  [s3,r1/3:{/Min-"m"}] range [n3,s3,r1/3:{/Min-"m"}]: transferring raft leadership to replica ID 1
E161031 07:26:47.618235 14374 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
I161031 07:26:47.729853 14399 storage/replica_proposal.go:341  [s1,r1/1:{/Min-"m"}] range [n1,s1,r1/1:{/Min-"m"}]: transferring raft leadership to replica ID 3
I161031 07:26:47.794149 14540 storage/replica_proposal.go:292  [s3,r1/3:{/Min-"m"}] new range lease replica {3 3 3} 1970-01-01 00:00:45.9 +0000 UTC 1.8s following replica {1 1 1} 1970-01-01 00:00:26.1 +0000 UTC 19.8s [physicalTime=1970-01-01 00:00:52.2 +0000 UTC]
E161031 07:26:48.529734 14626 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:48.596015 14497 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
I161031 07:26:48.955296 15578 util/stop/stopper.go:455  quiescing; tasks left:
2      storage/client_test.go:507
1      storage/replica_range_lease.go:150
I161031 07:26:48.959325 15578 util/stop/stopper.go:455  quiescing; tasks left:
2      storage/client_test.go:507
I161031 07:26:48.959915 15577 util/stop/stopper.go:455  quiescing; tasks left:
3      storage/client_test.go:507
1      storage/replica_range_lease.go:150
I161031 07:26:48.966871 15578 util/stop/stopper.go:455  quiescing; tasks left:
1      storage/client_test.go:507
I161031 07:26:48.970058 15576 util/stop/stopper.go:455  quiescing; tasks left:
4      storage/client_test.go:507
1      storage/replica_range_lease.go:150
E161031 07:26:48.970802 14626 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:48.971722 14626 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161031 07:26:48.981381 14497 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:48.982300 14497 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161031 07:26:48.983544 15577 util/stop/stopper.go:455  quiescing; tasks left:
3      storage/client_test.go:507
I161031 07:26:48.984567 15577 util/stop/stopper.go:455  quiescing; tasks left:
2      storage/client_test.go:507
I161031 07:26:48.984956 15577 util/stop/stopper.go:455  quiescing; tasks left:
1      storage/client_test.go:507
E161031 07:26:48.986633 14739 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:48.987580 14739 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161031 07:26:48.990301 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
E161031 07:26:49.005035 14374 storage/node_liveness.go:141  [hb] failed liveness heartbeat: failed to send RPC: sending to all 3 replicas failed; last error: node unavailable; try another peer
E161031 07:26:49.005967 14374 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
E161031 07:26:49.014872 14441 storage/node_liveness.go:141  [hb] failed liveness heartbeat: transaction commit result is ambiguous
E161031 07:26:49.015811 14441 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
I161031 07:26:49.016057 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
E161031 07:26:49.021363 14626 storage/node_liveness.go:141  [hb] failed liveness heartbeat: node unavailable; try another peer
W161031 07:26:49.026705 14932 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161031 07:26:49.030072 15277 storage/store.go:2926  [s3] got error from range 1, replica {2 2 2}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161031 07:26:49.030310 14932 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161031 07:26:49.030937 15277 storage/store.go:2926  [s3] got error from range 1, replica {2 2 2}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161031 07:26:49.031183 15277 storage/store.go:2926  [s3] got error from range 1, replica {2 2 2}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
W161031 07:26:49.045734 14932 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161031 07:26:49.045940 14932 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161031 07:26:49.046139 14932 storage/store.go:2926  [s3] got error from range 1, replica {1 1 1}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}
W161031 07:26:49.046604 15277 storage/store.go:2926  [s3] got error from range 1, replica {2 2 2}: storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}
I161031 07:26:49.047153 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161031 07:26:49.050630 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161031 07:26:49.052774 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
W161031 07:26:49.054393 15277 storage/raft_transport.go:463  no handler found for store 3 in response range_id:1 from_replica:<node_id:2 store_id:2 replica_id:2 > to_replica:<node_id:3 store_id:3 replica_id:3 > union:<error:<message:"storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > > 
W161031 07:26:49.054600 15277 storage/raft_transport.go:463  no handler found for store 3 in response range_id:1 from_replica:<node_id:2 store_id:2 replica_id:2 > to_replica:<node_id:3 store_id:3 replica_id:3 > union:<error:<message:"storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:2}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > > 
W161031 07:26:49.054849 15277 storage/raft_transport.go:463  no handler found for store 3 in response range_id:0 from_replica:<node_id:2 store_id:2 replica_id:0 > to_replica:<node_id:3 store_id:3 replica_id:0 > union:<error:<message:"storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:0}: no handler registered for {NodeID:2 StoreID:2 ReplicaID:0}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > > 
W161031 07:26:49.055515 14932 storage/raft_transport.go:463  no handler found for store 3 in response range_id:0 from_replica:<node_id:1 store_id:1 replica_id:0 > to_replica:<node_id:3 store_id:3 replica_id:0 > union:<error:<message:"storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:0}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:0}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > > 
W161031 07:26:49.055728 14932 storage/raft_transport.go:463  no handler found for store 3 in response range_id:1 from_replica:<node_id:1 store_id:1 replica_id:1 > to_replica:<node_id:3 store_id:3 replica_id:3 > union:<error:<message:"storage/raft_transport.go:249: unable to accept Raft message from {NodeID:3 StoreID:3 ReplicaID:3}: no handler registered for {NodeID:1 StoreID:1 ReplicaID:1}" transaction_restart:NONE origin_node:0 now:<wall_time:0 logical:0 > > > 
I161031 07:26:49.056185 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
W161031 07:26:49.057020 14878 storage/raft_transport.go:428  raft transport stream to node 2 failed: EOF
W161031 07:26:49.059051 14899 storage/raft_transport.go:428  raft transport stream to node 1 failed: EOF
W161031 07:26:49.059206 15165 storage/raft_transport.go:428  raft transport stream to node 1 failed: EOF
I161031 07:26:49.059359 14503 http2_client.go:1053  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161031 07:26:49.059646 14505 /go/src/google.golang.org/grpc/clientconn.go:667  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:33938: operation was canceled"; Reconnecting to {"127.0.0.1:33938" <nil>}
I161031 07:26:49.059985 14505 /go/src/google.golang.org/grpc/clientconn.go:767  grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
I161031 07:26:49.060137 14423 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:36658->127.0.0.1:35652: use of closed network connection
I161031 07:26:49.060320 14618 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:41542->127.0.0.1:42701: use of closed network connection
I161031 07:26:49.060483 14363 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:50684->127.0.0.1:35712: use of closed network connection
W161031 07:26:49.060642 15113 storage/raft_transport.go:428  raft transport stream to node 4 failed: EOF
I161031 07:26:49.060800 14351 http2_client.go:1053  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161031 07:26:49.061086 14353 /go/src/google.golang.org/grpc/clientconn.go:667  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:50684: operation was canceled"; Reconnecting to {"127.0.0.1:50684" <nil>}
I161031 07:26:49.061283 14353 /go/src/google.golang.org/grpc/clientconn.go:767  grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
I161031 07:26:49.061440 14376 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:33938->127.0.0.1:37428: use of closed network connection
W161031 07:26:49.061615 15523 storage/raft_transport.go:428  raft transport stream to node 2 failed: EOF
W161031 07:26:49.061782 15459 storage/raft_transport.go:428  raft transport stream to node 3 failed: EOF
W161031 07:26:49.061948 14949 storage/raft_transport.go:428  raft transport stream to node 1 failed: EOF
I161031 07:26:49.062115 14378 http2_client.go:1053  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161031 07:26:49.062419 14420 http2_client.go:1053  transport: http2Client.notifyError got notified that the client transport was broken EOF.
I161031 07:26:49.062705 14422 /go/src/google.golang.org/grpc/clientconn.go:667  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:36658: operation was canceled"; Reconnecting to {"127.0.0.1:36658" <nil>}
I161031 07:26:49.062905 14380 /go/src/google.golang.org/grpc/clientconn.go:667  grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:38769: getsockopt: connection refused"; Reconnecting to {"127.0.0.1:38769" <nil>}
I161031 07:26:49.063016 12965 http2_server.go:276  transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:38769->127.0.0.1:41306: use of closed network connection
I161031 07:26:49.063662 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161031 07:26:49.064042 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161031 07:26:49.065421 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161031 07:26:49.065786 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
I161031 07:26:49.066256 15575 util/stop/stopper.go:383  stop has been called, stopping or quiescing all running tasks
	client_raft_test.go:1010: Failed to achieve proper replication within 10 seconds
@cockroach-teamcity cockroach-teamcity added O-robot Originated from a bot. C-test-failure Broken test (automatically or manually discovered). labels Oct 31, 2016
@tamird
Contributor

tamird commented Oct 31, 2016

#10306.

@tamird tamird closed this as completed Oct 31, 2016