sql: BenchmarkInsertDistinct100Multinode_Cockroach fails instantly #10551

Closed
jordanlewis opened this issue Nov 8, 2016 · 1 comment
@jordanlewis (Member)
This benchmark fails instantly on my machine when run with `make bench PKG=./pkg/sql TESTS=BenchmarkInsertDistinct100Multinode`:

go test  -tags '' -run - -bench "BenchmarkInsertDistinct100Multinode" -timeout 5m ./pkg/sql
I161108 15:44:02.769399 1 rand.go:76  Random seed: -5045255980897232548
I161108 15:44:02.772414 35 gossip/gossip.go:237  [n?] initial resolvers: []
W161108 15:44:02.772456 35 gossip/gossip.go:1055  [n?] no resolvers found; use --join to specify a connected node
W161108 15:44:02.775737 35 server/status/runtime.go:116  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161108 15:44:02.776193 35 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161108 15:44:02.777535 35 server/config.go:443  1 storage engine initialized
I161108 15:44:02.777944 35 server/node.go:421  [n?] store [n0,s0] not bootstrapped
I161108 15:44:02.779862 63 storage/replica_proposal.go:328  [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 410732h44m11.779036123s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-08 20:44:02.779811043 +0000 UTC]
I161108 15:44:02.780302 35 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:02.780364 35 server/node.go:350  [n?] **** cluster {91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc} has been created
I161108 15:44:02.780380 35 server/node.go:351  [n?] **** add additional nodes by specifying --join=127.0.0.1:60936
I161108 15:44:02.780575 35 base/node_id.go:62  [n1] NodeID set to 1
I161108 15:44:02.780799 35 storage/store.go:1183  [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161108 15:44:02.780831 35 server/node.go:434  [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:0 LeaseCount:0}
I161108 15:44:02.780884 35 server/node.go:319  [n1] node ID 1 initialized
I161108 15:44:02.781007 35 gossip/gossip.go:280  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:60936" > attrs:<> locality:<>
I161108 15:44:02.781139 35 storage/stores.go:296  [n1] read 0 node addresses from persistent storage
I161108 15:44:02.781260 35 server/node.go:564  [n1] connecting to gossip network to verify cluster ID...
I161108 15:44:02.781295 35 server/node.go:585  [n1] node connected via gossip and verified as part of cluster {"91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc"}
I161108 15:44:02.781342 35 server/node.go:369  [n1] node=1: started with [[]=] engine(s) and attributes []
I161108 15:44:02.781426 35 server/server.go:630  [n1] starting https server at 127.0.0.1:60937
I161108 15:44:02.781508 35 server/server.go:631  [n1] starting grpc/postgres server at 127.0.0.1:60936
I161108 15:44:02.781528 35 server/server.go:632  [n1] advertising CockroachDB node at 127.0.0.1:60936
I161108 15:44:02.782963 115 storage/split_queue.go:103  [n1,split,s1,r1/1:/M{in-ax}] splitting at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I161108 15:44:02.784274 115 storage/replica_command.go:2360  [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/11 [r2]
I161108 15:44:02.786376 47 sql/event_log.go:95  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:60936} Attrs: Locality:} ClusterID:{UUID:91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc} StartedAt:1478637842781300563}
E161108 15:44:02.788132 116 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r1/1:/{Min-Table/11}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:02.788975 115 storage/replica_command.go:2360  [n1,split,s1,r1/1:/{Min-Table/11}] initiating a split of this range at key /Table/12 [r3]
E161108 15:44:02.789329 122 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r1/1:/{Min-Table/11}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161108 15:44:02.792393 116 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r2/1:/Table/1{1-2}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:02.792591 115 storage/replica_command.go:2360  [n1,split,s1,r1/1:/{Min-Table/11}] initiating a split of this range at key /Table/13 [r4]
E161108 15:44:02.794856 116 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r3/1:/Table/1{2-3}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:02.795025 115 storage/replica_command.go:2360  [n1,split,s1,r1/1:/{Min-Table/11}] initiating a split of this range at key /Table/14 [r5]
E161108 15:44:02.798573 116 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r4/1:/Table/1{3-4}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161108 15:44:02.798658 116 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r5/1:/{Table/14-Max}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:02.844167 35 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:60936]
W161108 15:44:02.844202 35 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
W161108 15:44:02.847740 35 server/status/runtime.go:116  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161108 15:44:02.848172 35 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161108 15:44:02.849604 35 server/config.go:443  1 storage engine initialized
I161108 15:44:02.850041 35 server/node.go:421  [n?] store [n0,s0] not bootstrapped
I161108 15:44:02.850060 35 storage/stores.go:296  [n?] read 0 node addresses from persistent storage
I161108 15:44:02.850078 35 server/node.go:564  [n?] connecting to gossip network to verify cluster ID...
I161108 15:44:02.857570 30 gossip/client.go:125  [n?] started gossip client to 127.0.0.1:60936
I161108 15:44:02.857693 225 gossip/server.go:285  [n1] received gossip from unknown node
I161108 15:44:02.857916 35 server/node.go:585  [n?] node connected via gossip and verified as part of cluster {"91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc"}
I161108 15:44:02.858011 243 storage/stores.go:312  [n?] wrote 1 node addresses to persistent storage
I161108 15:44:02.858728 35 kv/dist_sender.go:330  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161108 15:44:02.859190 35 server/node.go:312  [n?] new node allocated ID 2
I161108 15:44:02.859205 35 base/node_id.go:62  [n2] NodeID set to 2
I161108 15:44:02.859238 35 gossip/gossip.go:280  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:60939" > attrs:<> locality:<>
I161108 15:44:02.859291 35 server/node.go:369  [n2] node=2: started with [[]=] engine(s) and attributes []
I161108 15:44:02.859318 35 server/server.go:630  [n2] starting https server at 127.0.0.1:60940
I161108 15:44:02.859345 35 server/server.go:631  [n2] starting grpc/postgres server at 127.0.0.1:60939
I161108 15:44:02.859366 35 server/server.go:632  [n2] advertising CockroachDB node at 127.0.0.1:60939
I161108 15:44:02.859895 124 storage/stores.go:312  [n1] wrote 1 node addresses to persistent storage
I161108 15:44:02.862914 256 server/node.go:545  [n2] bootstrapped store [n2,s2]
I161108 15:44:02.863358 122 storage/replica_raftstorage.go:446  [n1,replicate,s1,r2/1:/Table/1{1-2}] generated snapshot b07bdb55 for range 2 at index 16 in 86.71µs.
I161108 15:44:02.864060 35 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:60936]
W161108 15:44:02.864084 35 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161108 15:44:02.864767 259 sql/event_log.go:95  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:60939} Attrs: Locality:} ClusterID:{UUID:91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc} StartedAt:1478637842859275757}
W161108 15:44:02.868003 35 server/status/runtime.go:116  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161108 15:44:02.868265 35 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161108 15:44:02.869239 35 server/config.go:443  1 storage engine initialized
I161108 15:44:02.869530 35 server/node.go:421  [n?] store [n0,s0] not bootstrapped
I161108 15:44:02.869551 35 storage/stores.go:296  [n?] read 0 node addresses from persistent storage
I161108 15:44:02.869571 35 server/node.go:564  [n?] connecting to gossip network to verify cluster ID...
I161108 15:44:02.876453 122 storage/store.go:3119  [n1,replicate,s1,r2/1:/Table/1{1-2}] streamed snapshot: kv pairs: 10, log entries: 6
I161108 15:44:02.876744 231 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 16 (id=b07bdb55, encoded size=16, 1 rocksdb batches, 6 log entries)
I161108 15:44:02.877086 231 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r2/?:/Table/1{1-2}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:02.877543 122 storage/replica_command.go:3244  [n1,replicate,s1,r2/1:/Table/1{1-2}] change replicas: read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:02.879262 122 storage/replica.go:2048  [n1,s1,r2/1:/Table/1{1-2}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:02.879298 283 gossip/client.go:125  [n?] started gossip client to 127.0.0.1:60936
I161108 15:44:02.879363 166 gossip/server.go:285  [n1] received gossip from unknown node
I161108 15:44:02.879712 35 server/node.go:585  [n?] node connected via gossip and verified as part of cluster {"91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc"}
I161108 15:44:02.879777 167 storage/stores.go:312  [n?] wrote 1 node addresses to persistent storage
I161108 15:44:02.879907 168 storage/stores.go:312  [n?] wrote 2 node addresses to persistent storage
I161108 15:44:02.880032 122 storage/replica_raftstorage.go:446  [n1,replicate,s1,r3/1:/Table/1{2-3}] generated snapshot 2223cf6a for range 3 at index 18 in 59.906µs.
I161108 15:44:02.880482 122 storage/store.go:3119  [n1,replicate,s1,r3/1:/Table/1{2-3}] streamed snapshot: kv pairs: 20, log entries: 8
I161108 15:44:02.880795 35 kv/dist_sender.go:330  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161108 15:44:02.880901 236 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r3/?:{-}]: with replicaID [?], applying preemptive snapshot at index 18 (id=2223cf6a, encoded size=16, 1 rocksdb batches, 8 log entries)
I161108 15:44:02.881241 402 storage/raft_transport.go:432  [n2] raft transport stream to node 1 established
I161108 15:44:02.881380 236 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r3/?:/Table/1{2-3}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:02.881533 35 server/node.go:312  [n?] new node allocated ID 3
I161108 15:44:02.881554 35 base/node_id.go:62  [n3] NodeID set to 3
I161108 15:44:02.881639 35 gossip/gossip.go:280  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:60944" > attrs:<> locality:<>
I161108 15:44:02.881724 35 server/node.go:369  [n3] node=3: started with [[]=] engine(s) and attributes []
I161108 15:44:02.881755 35 server/server.go:630  [n3] starting https server at 127.0.0.1:60945
I161108 15:44:02.881768 35 server/server.go:631  [n3] starting grpc/postgres server at 127.0.0.1:60944
I161108 15:44:02.881780 35 server/server.go:632  [n3] advertising CockroachDB node at 127.0.0.1:60944
I161108 15:44:02.881966 122 storage/replica_command.go:3244  [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:02.883784 406 storage/stores.go:312  [n1] wrote 2 node addresses to persistent storage
I161108 15:44:02.884254 407 storage/stores.go:312  [n2] wrote 2 node addresses to persistent storage
I161108 15:44:02.886027 122 storage/replica.go:2048  [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 3: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:02.886224 189 server/node.go:545  [n3] bootstrapped store [n3,s3]
I161108 15:44:02.886794 122 storage/replica_command.go:3244  [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:02.891340 192 sql/event_log.go:95  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:60944} Attrs: Locality:} ClusterID:{UUID:91a7e489-6f10-4e2e-ae2d-7c76e2d07dcc} StartedAt:1478637842881707625}
I161108 15:44:02.891933 122 storage/replica.go:2048  [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 3: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:02.892629 122 storage/replica_raftstorage.go:446  [n1,replicate,s1,r4/1:/Table/1{3-4}] generated snapshot a716747e for range 4 at index 20 in 48.362µs.
I161108 15:44:02.893234 122 storage/store.go:3119  [n1,replicate,s1,r4/1:/Table/1{3-4}] streamed snapshot: kv pairs: 59, log entries: 10
I161108 15:44:02.893498 458 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r4/?:{-}]: with replicaID [?], applying preemptive snapshot at index 20 (id=a716747e, encoded size=16, 1 rocksdb batches, 10 log entries)
I161108 15:44:02.893962 458 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r4/?:/Table/1{3-4}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:02.894294 122 storage/replica_command.go:3244  [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:02.895915 122 storage/replica.go:2048  [n1,s1,r4/1:/Table/1{3-4}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 4: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:02.896324 122 storage/replica_raftstorage.go:446  [n1,replicate,s1,r5/1:/{Table/14-Max}] generated snapshot 1fdd3ee8 for range 5 at index 11 in 27.735µs.
I161108 15:44:02.896710 122 storage/store.go:3119  [n1,replicate,s1,r5/1:/{Table/14-Max}] streamed snapshot: kv pairs: 9, log entries: 1
I161108 15:44:02.896977 366 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r5/?:{-}]: with replicaID [?], applying preemptive snapshot at index 11 (id=1fdd3ee8, encoded size=16, 1 rocksdb batches, 1 log entries)
I161108 15:44:02.897223 366 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r5/?:/{Table/14-Max}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:02.897633 122 storage/replica_command.go:3244  [n1,replicate,s1,r5/1:/{Table/14-Max}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:02.899385 122 storage/replica.go:2048  [n1,s1,r5/1:/{Table/14-Max}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 5: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:02.899848 122 storage/replica_raftstorage.go:446  [n1,replicate,s1,r1/1:/{Min-Table/11}] generated snapshot 9c776090 for range 1 at index 52 in 31.858µs.
I161108 15:44:02.905466 122 storage/store.go:3119  [n1,replicate,s1,r1/1:/{Min-Table/11}] streamed snapshot: kv pairs: 784, log entries: 42
I161108 15:44:02.905753 514 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 52 (id=9c776090, encoded size=16, 1 rocksdb batches, 42 log entries)
I161108 15:44:02.907927 514 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r1/?:/{Min-Table/11}]: with replicaID [?], applied preemptive snapshot in 0.002s
I161108 15:44:02.908887 122 storage/replica_command.go:3244  [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:02.911034 122 storage/replica.go:2048  [n1,s1,r1/1:/{Min-Table/11}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:02.911637 122 storage/queue.go:630  [n1,replicate] purgatory is now empty
I161108 15:44:02.911747 116 storage/replica_raftstorage.go:446  [n1,replicate,s1,r2/1:/Table/1{1-2}] generated snapshot 30f45123 for range 2 at index 19 in 40.013µs.
I161108 15:44:02.924114 116 storage/store.go:3119  [n1,replicate,s1,r2/1:/Table/1{1-2}] streamed snapshot: kv pairs: 11, log entries: 9
I161108 15:44:02.924325 351 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 19 (id=30f45123, encoded size=16, 1 rocksdb batches, 9 log entries)
I161108 15:44:02.924553 351 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r2/?:/Table/1{1-2}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:02.924874 116 storage/replica_command.go:3244  [n1,replicate,s1,r2/1:/Table/1{1-2}] change replicas: read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:02.927186 116 storage/replica.go:2048  [n1,s1,r2/1:/Table/1{1-2}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:02.928014 116 storage/replica_raftstorage.go:446  [n1,replicate,s1,r3/1:/Table/1{2-3}] generated snapshot 9ef4d0c2 for range 3 at index 25 in 65µs.
I161108 15:44:02.928538 116 storage/store.go:3119  [n1,replicate,s1,r3/1:/Table/1{2-3}] streamed snapshot: kv pairs: 26, log entries: 15
I161108 15:44:02.928870 516 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r3/?:{-}]: with replicaID [?], applying preemptive snapshot at index 25 (id=9ef4d0c2, encoded size=16, 1 rocksdb batches, 15 log entries)
I161108 15:44:02.929202 493 storage/raft_transport.go:432  [n3] raft transport stream to node 1 established
I161108 15:44:02.929330 516 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r3/?:/Table/1{2-3}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:02.929649 116 storage/replica_command.go:3244  [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:02.932024 116 storage/replica.go:2048  [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 3: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:02.933628 116 storage/replica_raftstorage.go:446  [n1,replicate,s1,r1/1:/{Min-Table/11}] generated snapshot ae8874aa for range 1 at index 59 in 715µs.
I161108 15:44:02.939207 116 storage/store.go:3119  [n1,replicate,s1,r1/1:/{Min-Table/11}] streamed snapshot: kv pairs: 789, log entries: 49
I161108 15:44:02.939581 497 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 59 (id=ae8874aa, encoded size=16, 1 rocksdb batches, 49 log entries)
I161108 15:44:02.944451 497 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r1/?:/{Min-Table/11}]: with replicaID [?], applied preemptive snapshot in 0.005s
I161108 15:44:02.944814 116 storage/replica_command.go:3244  [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:02.946652 116 storage/replica.go:2048  [n1,s1,r1/1:/{Min-Table/11}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:02.947543 116 storage/replica_raftstorage.go:446  [n1,replicate,s1,r5/1:/{Table/14-Max}] generated snapshot 4a27ee4e for range 5 at index 14 in 45.055µs.
I161108 15:44:02.948097 116 storage/store.go:3119  [n1,replicate,s1,r5/1:/{Table/14-Max}] streamed snapshot: kv pairs: 10, log entries: 4
I161108 15:44:02.948513 535 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r5/?:{-}]: with replicaID [?], applying preemptive snapshot at index 14 (id=4a27ee4e, encoded size=16, 1 rocksdb batches, 4 log entries)
I161108 15:44:02.949171 535 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r5/?:/{Table/14-Max}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161108 15:44:02.949637 116 storage/replica_command.go:3244  [n1,replicate,s1,r5/1:/{Table/14-Max}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:02.953266 116 storage/replica.go:2048  [n1,s1,r5/1:/{Table/14-Max}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 5: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:02.954378 116 storage/replica_raftstorage.go:446  [n1,replicate,s1,r4/1:/Table/1{3-4}] generated snapshot 5eaff6b8 for range 4 at index 36 in 47.73µs.
I161108 15:44:02.955407 116 storage/store.go:3119  [n1,replicate,s1,r4/1:/Table/1{3-4}] streamed snapshot: kv pairs: 85, log entries: 26
I161108 15:44:02.955771 538 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r4/?:{-}]: with replicaID [?], applying preemptive snapshot at index 36 (id=5eaff6b8, encoded size=16, 1 rocksdb batches, 26 log entries)
I161108 15:44:02.956609 538 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r4/?:/Table/1{3-4}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161108 15:44:02.957294 116 storage/replica_command.go:3244  [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:02.959406 116 storage/replica.go:2048  [n1,s1,r4/1:/Table/1{3-4}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 4: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:03.175398 547 sql/event_log.go:95  [n1,client=127.0.0.1:60949,user=root] Event: "create_database", target: 50, info: {DatabaseName:bench Statement:CREATE DATABASE bench User:root}
I161108 15:44:03.175497 115 storage/split_queue.go:103  [n1,split,s1,r5/1:/{Table/14-Max}] splitting at keys [/Table/50/0]
I161108 15:44:03.175718 115 storage/replica_command.go:2360  [n1,split,s1,r5/1:/{Table/14-Max}] initiating a split of this range at key /Table/50 [r6]
I161108 15:44:03.180674 547 sql/event_log.go:95  [n1,client=127.0.0.1:60949,user=root] Event: "create_table", target: 51, info: {TableName:bench.insert_distinct Statement:CREATE TABLE bench.insert_distinct (articleID INT, userID INT, uniqueID INT DEFAULT unique_rowid(), PRIMARY KEY (articleID, userID, uniqueID)) User:root}
I161108 15:44:03.180793 115 storage/split_queue.go:103  [n1,split,s1,r6/1:/{Table/50-Max}] splitting at keys [/Table/51/0]
I161108 15:44:03.181194 115 storage/replica_command.go:2360  [n1,split,s1,r6/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r7]
I161108 15:44:03.181710 35 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:03.181939 571 util/stop/stopper.go:468  quiescing; tasks left:
5      server/node.go:855
2      storage/intent_resolver.go:383
1      storage/queue.go:477
1      kv/dist_sender.go:813
W161108 15:44:03.182120 402 storage/raft_transport.go:437  [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W161108 15:44:03.182256 329 storage/raft_transport.go:437  [n1] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W161108 15:44:03.182302 432 storage/raft_transport.go:437  [n1] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W161108 15:44:03.182403 115 storage/replica.go:1796  [n1,split,s1,r6/1:/{Table/50-Max}] shutdown cancellation of command [txn: a7c1076e], BeginTransaction [/Local/Range/"\xba"/RangeDescriptor,/Min), ConditionalPut [/Local/Range/"\xba"/RangeDescriptor,/Min)
I161108 15:44:03.182435 571 util/stop/stopper.go:468  quiescing; tasks left:
4      server/node.go:855
2      storage/intent_resolver.go:383
1      storage/queue.go:477
W161108 15:44:03.182487 493 storage/raft_transport.go:437  [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W161108 15:44:03.182541 582 storage/intent_resolver.go:380  could not GC completed transaction anchored at /Local/Range/"\x96"/RangeDescriptor: node unavailable; try another peer
W161108 15:44:03.182802 595 storage/replica.go:1796  [n1,s1,r3/1:/Table/1{2-3}] shutdown cancellation of command ResolveIntent [/Table/12/1/2016-11-08T20:44:03.176314Z/"\x14\x85-H\xc2 /Z\x00"/0,/Min), ResolveIntent [/Table/12/1/2016-11-08T20:44:03.176314Z/"\x14\x85-H\xc2 /Z\x00"/2/1,/Min), ResolveIntent [/Table/12/1/2016-11-08T20:44:03.176314Z/"\x14\x85-H\xc2 /Z\x00"/3/1,/Min), ResolveIntent [/Table/12/1/2016-11-08T20:44:03.176314Z/"\x14\x85-H\xc2 /Z\x00"/4/1,/Min), ResolveIntent [/Table/12/1/2016-11-08T20:44:03.176314Z/"\x14\x85-H\xc2 /Z\x00"/5/1,/Min)
E161108 15:44:03.182824 115 storage/queue.go:568  [n1,split] on [n1,s1,r6/1:/{Table/50-Max}]: unable to split [n1,s1,r6/1:/{Table/50-Max}] at key "/Table/51/0": storage/replica_command.go:2440: split at key /Table/51 failed: result is ambiguous
I161108 15:44:03.183405 571 util/stop/stopper.go:468  quiescing; tasks left:
2      storage/intent_resolver.go:383
1      storage/queue.go:477
1      server/node.go:855
I161108 15:44:03.183459 571 util/stop/stopper.go:468  quiescing; tasks left:
1      storage/queue.go:477
1      storage/intent_resolver.go:383
W161108 15:44:03.183484 595 storage/intent_resolver.go:337  [n1,s1,r1/1:/{Min-Table/11}]: failed to resolve intents: result is ambiguous
I161108 15:44:03.183495 571 util/stop/stopper.go:468  quiescing; tasks left:
1      storage/intent_resolver.go:383
I161108 15:44:03.183511 35 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:03.184294 35 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:03.184919 35 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
BenchmarkInsertDistinct100Multinode_Cockroach-8         I161108 15:44:03.188070 583 gossip/gossip.go:237  [n?] initial resolvers: []
W161108 15:44:03.188092 583 gossip/gossip.go:1055  [n?] no resolvers found; use --join to specify a connected node
W161108 15:44:03.189702 583 server/status/runtime.go:116  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161108 15:44:03.189997 583 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161108 15:44:03.190684 583 server/config.go:443  1 storage engine initialized
I161108 15:44:03.190911 583 server/node.go:421  [n?] store [n0,s0] not bootstrapped
I161108 15:44:03.191810 724 storage/replica_proposal.go:328  [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 410732h44m12.19148304s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-08 20:44:03.191786972 +0000 UTC]
I161108 15:44:03.192170 583 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:03.192233 583 server/node.go:350  [n?] **** cluster {12fc0050-24f4-4a84-adc3-c9eae95ddfdf} has been created
I161108 15:44:03.192245 583 server/node.go:351  [n?] **** add additional nodes by specifying --join=127.0.0.1:60950
I161108 15:44:03.192424 583 base/node_id.go:62  [n1] NodeID set to 1
I161108 15:44:03.192653 583 storage/store.go:1183  [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161108 15:44:03.192673 583 server/node.go:434  [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:0 LeaseCount:0}
I161108 15:44:03.192688 583 server/node.go:319  [n1] node ID 1 initialized
I161108 15:44:03.192712 583 gossip/gossip.go:280  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:60950" > attrs:<> locality:<>
I161108 15:44:03.192786 583 storage/stores.go:296  [n1] read 0 node addresses from persistent storage
I161108 15:44:03.192822 583 server/node.go:564  [n1] connecting to gossip network to verify cluster ID...
I161108 15:44:03.192837 583 server/node.go:585  [n1] node connected via gossip and verified as part of cluster {"12fc0050-24f4-4a84-adc3-c9eae95ddfdf"}
I161108 15:44:03.192856 583 server/node.go:369  [n1] node=1: started with [[]=] engine(s) and attributes []
I161108 15:44:03.192880 583 server/server.go:630  [n1] starting https server at 127.0.0.1:60951
I161108 15:44:03.192892 583 server/server.go:631  [n1] starting grpc/postgres server at 127.0.0.1:60950
I161108 15:44:03.193029 583 server/server.go:632  [n1] advertising CockroachDB node at 127.0.0.1:60950
I161108 15:44:03.193581 541 storage/split_queue.go:103  [n1,split,s1,r1/1:/M{in-ax}] splitting at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I161108 15:44:03.194223 541 storage/replica_command.go:2360  [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/11 [r2]
E161108 15:44:03.199229 542 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r1/1:/{Min-Table/11}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161108 15:44:03.199333 446 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r1/1:/{Min-Table/11}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:03.199414 541 storage/replica_command.go:2360  [n1,split,s1,r1/1:/{Min-Table/11}] initiating a split of this range at key /Table/12 [r3]
E161108 15:44:03.203460 542 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r2/1:/Table/1{1-2}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:03.203655 541 storage/replica_command.go:2360  [n1,split,s1,r1/1:/{Min-Table/11}] initiating a split of this range at key /Table/13 [r4]
E161108 15:44:03.206122 542 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r3/1:/Table/1{2-3}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:03.206313 541 storage/replica_command.go:2360  [n1,split,s1,r1/1:/{Min-Table/11}] initiating a split of this range at key /Table/14 [r5]
E161108 15:44:03.208914 542 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r4/1:/Table/1{3-4}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161108 15:44:03.208987 542 storage/queue.go:579  [n1,replicate] (purgatory) on [n1,s1,r5/1:/{Table/14-Max}]: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161108 15:44:03.252087 583 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:60950]
I161108 15:44:03.252111 774 sql/event_log.go:95  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:60950} Attrs: Locality:} ClusterID:{UUID:12fc0050-24f4-4a84-adc3-c9eae95ddfdf} StartedAt:1478637843192841607}
W161108 15:44:03.252125 583 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
W161108 15:44:03.254368 583 server/status/runtime.go:116  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161108 15:44:03.254697 583 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161108 15:44:03.255519 583 server/config.go:443  1 storage engine initialized
I161108 15:44:03.255755 583 server/node.go:421  [n?] store [n0,s0] not bootstrapped
I161108 15:44:03.255789 583 storage/stores.go:296  [n?] read 0 node addresses from persistent storage
I161108 15:44:03.255805 583 server/node.go:564  [n?] connecting to gossip network to verify cluster ID...
I161108 15:44:03.263769 697 gossip/client.go:125  [n?] started gossip client to 127.0.0.1:60950
I161108 15:44:03.263898 700 gossip/server.go:285  [n1] received gossip from unknown node
I161108 15:44:03.264143 583 server/node.go:585  [n?] node connected via gossip and verified as part of cluster {"12fc0050-24f4-4a84-adc3-c9eae95ddfdf"}
I161108 15:44:03.264207 870 storage/stores.go:312  [n?] wrote 1 node addresses to persistent storage
I161108 15:44:03.264912 583 kv/dist_sender.go:330  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161108 15:44:03.265520 583 server/node.go:312  [n?] new node allocated ID 2
I161108 15:44:03.265532 583 base/node_id.go:62  [n2] NodeID set to 2
I161108 15:44:03.265556 583 gossip/gossip.go:280  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:60953" > attrs:<> locality:<>
I161108 15:44:03.265598 583 server/node.go:369  [n2] node=2: started with [[]=] engine(s) and attributes []
I161108 15:44:03.265612 583 server/server.go:630  [n2] starting https server at 127.0.0.1:60954
I161108 15:44:03.265619 583 server/server.go:631  [n2] starting grpc/postgres server at 127.0.0.1:60953
I161108 15:44:03.265626 583 server/server.go:632  [n2] advertising CockroachDB node at 127.0.0.1:60953
I161108 15:44:03.265952 884 storage/stores.go:312  [n1] wrote 1 node addresses to persistent storage
I161108 15:44:03.268203 899 server/node.go:545  [n2] bootstrapped store [n2,s2]
I161108 15:44:03.268577 446 storage/replica_raftstorage.go:446  [n1,replicate,s1,r1/1:/{Min-Table/11}] generated snapshot e6152625 for range 1 at index 34 in 74.523µs.
I161108 15:44:03.270194 583 gossip/gossip.go:237  [n?] initial resolvers: [127.0.0.1:60950]
W161108 15:44:03.270217 583 gossip/gossip.go:1057  [n?] no incoming or outgoing connections
I161108 15:44:03.270457 902 sql/event_log.go:95  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:60953} Attrs: Locality:} ClusterID:{UUID:12fc0050-24f4-4a84-adc3-c9eae95ddfdf} StartedAt:1478637843265588681}
W161108 15:44:03.272036 583 server/status/runtime.go:116  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161108 15:44:03.272211 583 storage/engine/rocksdb.go:340  opening in memory rocksdb instance
I161108 15:44:03.273045 583 server/config.go:443  1 storage engine initialized
I161108 15:44:03.273236 583 server/node.go:421  [n?] store [n0,s0] not bootstrapped
I161108 15:44:03.273248 583 storage/stores.go:296  [n?] read 0 node addresses from persistent storage
I161108 15:44:03.273259 583 server/node.go:564  [n?] connecting to gossip network to verify cluster ID...
I161108 15:44:03.280822 446 storage/store.go:3119  [n1,replicate,s1,r1/1:/{Min-Table/11}] streamed snapshot: kv pairs: 397, log entries: 24
I161108 15:44:03.281130 984 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 34 (id=e6152625, encoded size=16, 1 rocksdb batches, 24 log entries)
I161108 15:44:03.281557 978 gossip/client.go:125  [n?] started gossip client to 127.0.0.1:60950
I161108 15:44:03.281714 950 gossip/server.go:285  [n1] received gossip from unknown node
I161108 15:44:03.282007 583 server/node.go:585  [n?] node connected via gossip and verified as part of cluster {"12fc0050-24f4-4a84-adc3-c9eae95ddfdf"}
I161108 15:44:03.282116 1013 storage/stores.go:312  [n?] wrote 1 node addresses to persistent storage
I161108 15:44:03.282146 984 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r1/?:/{Min-Table/11}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161108 15:44:03.282157 1013 storage/stores.go:312  [n?] wrote 2 node addresses to persistent storage
I161108 15:44:03.282497 446 storage/replica_command.go:3244  [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:03.283035 583 kv/dist_sender.go:330  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161108 15:44:03.283522 583 server/node.go:312  [n?] new node allocated ID 3
I161108 15:44:03.283541 583 base/node_id.go:62  [n3] NodeID set to 3
I161108 15:44:03.283589 583 gossip/gossip.go:280  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:60958" > attrs:<> locality:<>
I161108 15:44:03.283633 583 server/node.go:369  [n3] node=3: started with [[]=] engine(s) and attributes []
I161108 15:44:03.283657 583 server/server.go:630  [n3] starting https server at 127.0.0.1:60959
I161108 15:44:03.286229 865 storage/stores.go:312  [n1] wrote 2 node addresses to persistent storage
I161108 15:44:03.286245 814 server/node.go:545  [n3] bootstrapped store [n3,s3]
I161108 15:44:03.286273 583 server/server.go:631  [n3] starting grpc/postgres server at 127.0.0.1:60958
I161108 15:44:03.286297 583 server/server.go:632  [n3] advertising CockroachDB node at 127.0.0.1:60958
I161108 15:44:03.286492 1078 storage/stores.go:312  [n2] wrote 2 node addresses to persistent storage
I161108 15:44:03.287301 817 sql/event_log.go:95  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:60958} Attrs: Locality:} ClusterID:{UUID:12fc0050-24f4-4a84-adc3-c9eae95ddfdf} StartedAt:1478637843283620127}
I161108 15:44:03.287807 446 storage/replica.go:2048  [n1,s1,r1/1:/{Min-Table/11}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:03.288461 446 storage/replica_raftstorage.go:446  [n1,replicate,s1,r2/1:/Table/1{1-2}] generated snapshot 43796ca4 for range 2 at index 16 in 46.698µs.
I161108 15:44:03.289479 1110 storage/raft_transport.go:432  [n2] raft transport stream to node 1 established
I161108 15:44:03.298826 446 storage/store.go:3119  [n1,replicate,s1,r2/1:/Table/1{1-2}] streamed snapshot: kv pairs: 10, log entries: 6
I161108 15:44:03.299009 961 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 16 (id=43796ca4, encoded size=16, 1 rocksdb batches, 6 log entries)
I161108 15:44:03.299208 961 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r2/?:/Table/1{1-2}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.299441 446 storage/replica_command.go:3244  [n1,replicate,s1,r2/1:/Table/1{1-2}] change replicas: read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:03.302070 446 storage/replica.go:2048  [n1,s1,r2/1:/Table/1{1-2}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I161108 15:44:03.302529 446 storage/replica_raftstorage.go:446  [n1,replicate,s1,r3/1:/Table/1{2-3}] generated snapshot 1d536fa7 for range 3 at index 22 in 28.321µs.
I161108 15:44:03.302912 446 storage/store.go:3119  [n1,replicate,s1,r3/1:/Table/1{2-3}] streamed snapshot: kv pairs: 25, log entries: 12
I161108 15:44:03.303104 1157 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r3/?:{-}]: with replicaID [?], applying preemptive snapshot at index 22 (id=1d536fa7, encoded size=16, 1 rocksdb batches, 12 log entries)
I161108 15:44:03.303326 1157 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r3/?:/Table/1{2-3}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.303446 1171 storage/raft_transport.go:432  [n3] raft transport stream to node 1 established
I161108 15:44:03.303596 446 storage/replica_command.go:3244  [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:03.305285 446 storage/replica.go:2048  [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 3: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:03.305721 446 storage/replica_raftstorage.go:446  [n1,replicate,s1,r4/1:/Table/1{3-4}] generated snapshot b7290ce1 for range 4 at index 21 in 29.505µs.
I161108 15:44:03.306256 446 storage/store.go:3119  [n1,replicate,s1,r4/1:/Table/1{3-4}] streamed snapshot: kv pairs: 54, log entries: 11
I161108 15:44:03.306501 1175 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r4/?:{-}]: with replicaID [?], applying preemptive snapshot at index 21 (id=b7290ce1, encoded size=16, 1 rocksdb batches, 11 log entries)
I161108 15:44:03.306940 1175 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r4/?:/Table/1{3-4}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.307204 446 storage/replica_command.go:3244  [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:03.309517 446 storage/replica.go:2048  [n1,s1,r4/1:/Table/1{3-4}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 4: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:03.310218 446 storage/replica_raftstorage.go:446  [n1,replicate,s1,r5/1:/{Table/14-Max}] generated snapshot 71486669 for range 5 at index 11 in 40.093µs.
I161108 15:44:03.310709 446 storage/store.go:3119  [n1,replicate,s1,r5/1:/{Table/14-Max}] streamed snapshot: kv pairs: 9, log entries: 1
I161108 15:44:03.311006 1176 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r5/?:{-}]: with replicaID [?], applying preemptive snapshot at index 11 (id=71486669, encoded size=16, 1 rocksdb batches, 1 log entries)
I161108 15:44:03.311337 1176 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r5/?:/{Table/14-Max}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.311788 446 storage/replica_command.go:3244  [n1,replicate,s1,r5/1:/{Table/14-Max}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161108 15:44:03.314182 446 storage/replica.go:2048  [n1,s1,r5/1:/{Table/14-Max}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2} for range 5: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161108 15:44:03.314593 446 storage/queue.go:630  [n1,replicate] purgatory is now empty
I161108 15:44:03.314859 542 storage/replica_raftstorage.go:446  [n1,replicate,s1,r1/1:/{Min-Table/11}] generated snapshot 0c8bc044 for range 1 at index 52 in 97.602µs.
I161108 15:44:03.321846 542 storage/store.go:3119  [n1,replicate,s1,r1/1:/{Min-Table/11}] streamed snapshot: kv pairs: 787, log entries: 42
I161108 15:44:03.322601 1159 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r1/?:{-}]: with replicaID [?], applying preemptive snapshot at index 52 (id=0c8bc044, encoded size=16, 1 rocksdb batches, 42 log entries)
I161108 15:44:03.325052 1159 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r1/?:/{Min-Table/11}]: with replicaID [?], applied preemptive snapshot in 0.002s
I161108 15:44:03.325379 542 storage/replica_command.go:3244  [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:03.326963 542 storage/replica.go:2048  [n1,s1,r1/1:/{Min-Table/11}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 1: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:03.327730 542 storage/replica_raftstorage.go:446  [n1,replicate,s1,r2/1:/Table/1{1-2}] generated snapshot 4a514cb4 for range 2 at index 19 in 48.792µs.
I161108 15:44:03.328120 542 storage/store.go:3119  [n1,replicate,s1,r2/1:/Table/1{1-2}] streamed snapshot: kv pairs: 11, log entries: 9
I161108 15:44:03.328396 1177 storage/replica_raftstorage.go:592  [n2,s2] [n2,s2,r2/?:{-}]: with replicaID [?], applying preemptive snapshot at index 19 (id=4a514cb4, encoded size=16, 1 rocksdb batches, 9 log entries)
I161108 15:44:03.328718 1177 storage/replica_raftstorage.go:595  [n2,s2] [n2,s2,r2/?:/Table/1{1-2}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.328980 542 storage/replica_command.go:3244  [n1,replicate,s1,r2/1:/Table/1{1-2}] change replicas: read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I161108 15:44:03.330992 542 storage/replica.go:2048  [n1,s1,r2/1:/Table/1{1-2}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3} for range 2: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I161108 15:44:03.331691 542 storage/replica_raftstorage.go:446  [n1,replicate,s1,r5/1:/{Table/14-Max}] generated snapshot ca5a868d for range 5 at index 14 in 29.55µs.
I161108 15:44:03.332099 542 storage/store.go:3119  [n1,replicate,s1,r5/1:/{Table/14-Max}] streamed snapshot: kv pairs: 10, log entries: 4
I161108 15:44:03.332397 1149 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r5/?:{-}]: with replicaID [?], applying preemptive snapshot at index 14 (id=ca5a868d, encoded size=16, 1 rocksdb batches, 4 log entries)
I161108 15:44:03.332618 1149 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r5/?:/{Table/14-Max}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.332894 542 storage/replica_command.go:3244  [n1,replicate,s1,r5/1:/{Table/14-Max}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:03.336132 542 storage/replica.go:2048  [n1,s1,r5/1:/{Table/14-Max}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 5: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:03.336869 542 storage/replica_raftstorage.go:446  [n1,replicate,s1,r4/1:/Table/1{3-4}] generated snapshot 5ecc23f7 for range 4 at index 33 in 30.41µs.
I161108 15:44:03.337394 542 storage/store.go:3119  [n1,replicate,s1,r4/1:/Table/1{3-4}] streamed snapshot: kv pairs: 80, log entries: 23
I161108 15:44:03.337600 1118 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r4/?:{-}]: with replicaID [?], applying preemptive snapshot at index 33 (id=5ecc23f7, encoded size=16, 1 rocksdb batches, 23 log entries)
I161108 15:44:03.337897 1118 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r4/?:/Table/1{3-4}]: with replicaID [?], applied preemptive snapshot in 0.000s
I161108 15:44:03.338146 542 storage/replica_command.go:3244  [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:03.340456 542 storage/replica.go:2048  [n1,s1,r4/1:/Table/1{3-4}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 4: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:03.341438 542 storage/replica_raftstorage.go:446  [n1,replicate,s1,r3/1:/Table/1{2-3}] generated snapshot f606b180 for range 3 at index 25 in 84.72µs.
I161108 15:44:03.342299 542 storage/store.go:3119  [n1,replicate,s1,r3/1:/Table/1{2-3}] streamed snapshot: kv pairs: 26, log entries: 15
I161108 15:44:03.342782 1083 storage/replica_raftstorage.go:592  [n3,s3] [n3,s3,r3/?:{-}]: with replicaID [?], applying preemptive snapshot at index 25 (id=f606b180, encoded size=16, 1 rocksdb batches, 15 log entries)
I161108 15:44:03.343431 1083 storage/replica_raftstorage.go:595  [n3,s3] [n3,s3,r3/?:/Table/1{2-3}]: with replicaID [?], applied preemptive snapshot in 0.001s
I161108 15:44:03.343761 542 storage/replica_command.go:3244  [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161108 15:44:03.346231 542 storage/replica.go:2048  [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3} for range 3: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161108 15:44:03.459449 1152 sql/event_log.go:95  [n1,client=127.0.0.1:60963,user=root] Event: "create_database", target: 50, info: {DatabaseName:bench Statement:CREATE DATABASE bench User:root}
I161108 15:44:03.459536 541 storage/split_queue.go:103  [n1,split,s1,r5/1:/{Table/14-Max}] splitting at keys [/Table/50/0]
I161108 15:44:03.459713 541 storage/replica_command.go:2360  [n1,split,s1,r5/1:/{Table/14-Max}] initiating a split of this range at key /Table/50 [r6]
I161108 15:44:03.465363 1152 sql/event_log.go:95  [n1,client=127.0.0.1:60963,user=root] Event: "create_table", target: 51, info: {TableName:bench.insert_distinct Statement:CREATE TABLE bench.insert_distinct (articleID INT, userID INT, uniqueID INT DEFAULT unique_rowid(), PRIMARY KEY (articleID, userID, uniqueID)) User:root}
I161108 15:44:03.465474 541 storage/split_queue.go:103  [n1,split,s1,r6/1:/{Table/50-Max}] splitting at keys [/Table/51/0]
I161108 15:44:03.465925 541 storage/replica_command.go:2360  [n1,split,s1,r6/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r7]
I161108 15:44:03.782118 583 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
W161108 15:44:03.782231 1171 storage/raft_transport.go:437  [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I161108 15:44:03.782273 1239 util/stop/stopper.go:468  quiescing; tasks left:
2      storage/queue.go:477
2      server/node.go:855
W161108 15:44:03.782336 1040 storage/raft_transport.go:437  [n1] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W161108 15:44:03.782345 1098 storage/raft_transport.go:437  [n1] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W161108 15:44:03.782608 544 storage/replica.go:1796  [n1,raftlog,s1,r7/1:/{Table/51-Max}] shutdown cancellation of command TruncateLog [/Table/51,/Min)
W161108 15:44:03.782622 1110 storage/raft_transport.go:437  [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
E161108 15:44:03.782647 544 storage/queue.go:568  [n1,raftlog] on [n1,s1,r7/1:/{Table/51-Max}]: result is ambiguous
I161108 15:44:03.782667 1239 util/stop/stopper.go:468  quiescing; tasks left:
1      storage/queue.go:477
E161108 15:44:03.782685 545 storage/queue.go:568  [n1,timeSeriesMaintenance] on [n1,s1,r1/1:/{Min-Table/11}]: node unavailable; try another peer
I161108 15:44:03.782703 583 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:03.783476 583 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
I161108 15:44:03.785715 583 util/stop/stopper.go:396  stop has been called, stopping or quiescing all running tasks
--- FAIL: BenchmarkInsertDistinct100Multinode_Cockroach-8
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
        bench_test.go:1221: pq: replay txn
FAIL
exit status 1
FAIL    github.com/cockroachdb/cockroach/pkg/sql        1.058s
@jordanlewis
@andreimatei suggests assigning to @spencerkimball who might know more about these TransactionReplayErrors. Spencer, feel free to unassign if you don't have time for this.

jordanlewis added a commit to jordanlewis/cockroach that referenced this issue Nov 8, 2016
...until cockroachdb#10551 is resolved. It'd be nice to get the nightly benchmark
runs working again.
spencerkimball added a commit that referenced this issue Nov 11, 2016
Previously, in the event that we reproposed a command or else discovered
on Raft application that our index was out of order, we'd both retry the
command as well as update the timestamp cache. This caused replay txn
errors in some cases.

Fixes #10551
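The commit message above describes the faulty interaction: on repropose, the command was both retried and used to update the timestamp cache, so the retry itself tripped the replay check. The toy Go sketch below models that mechanism; it is not CockroachDB code, and `tsCache`/`apply` are made-up names for illustration only.

```go
package main

import "fmt"

// tsCache records, per key, the highest timestamp already processed,
// standing in for the real timestamp cache.
type tsCache map[string]int64

// apply simulates evaluating a write at ts against the cache.
// A write at or below the cached timestamp is rejected as a replay.
func (c tsCache) apply(key string, ts int64) error {
	if prev, ok := c[key]; ok && ts <= prev {
		return fmt.Errorf("replay txn: key %q already written at ts %d", key, prev)
	}
	c[key] = ts // cache updated even though the command may be retried
	return nil
}

func main() {
	cache := tsCache{}

	// First proposal applies and updates the cache.
	if err := cache.apply("a", 10); err != nil {
		fmt.Println("first proposal:", err)
	}

	// Because the cache was already updated, the retried command at the
	// same timestamp is indistinguishable from a replay and is rejected.
	if err := cache.apply("a", 10); err != nil {
		fmt.Println("retry:", err)
	}
}
```

Under this model, the fix described in the commit amounts to not performing both actions at once: either retry the command without having updated the cache, or treat the cache hit as authoritative and do not retry.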
spencerkimball added two further commits with the same message that referenced this issue Nov 11, 2016