
teamcity: failed test: TestImportCSVStmt #34568

Closed
cockroach-teamcity opened this issue Feb 5, 2019 · 3 comments · Fixed by #34589
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

@cockroach-teamcity (Member)

The following tests appear to have failed on master (testrace):

- TestImportCSVStmt
- TestImportCSVStmt/schema-in-query-transform-only
- TestImportCSVStmt/schema-in-file-explicit-gzip
- TestImportCSVStmt/schema-in-file-auto-gzip
- TestImportCSVStmt/schema-in-file-implicit-gzip
- TestImportCSVStmt/schema-in-file-no-decompress
- TestImportCSVStmt/schema-in-file-auto-decompress
- TestImportCSVStmt/schema-in-file-sstsize
- TestImportCSVStmt/empty-file
- TestImportCSVStmt/schema-in-query-opts
- TestImportCSVStmt/empty-with-files

You may want to check for open issues.

#1125636:

TestImportCSVStmt
...:23:28.229080 8805 storage/replica_raft.go:372  [n1,s1,r20/1:/{Table/24-Max}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I190205 07:23:28.252383 8805 storage/store_snapshot.go:762  [n1,replicate,s1,r2/1:/System/NodeLiveness{-Max}] sending preemptive snapshot 7d70cdf8 at applied index 23
I190205 07:23:28.283280 8805 storage/store_snapshot.go:805  [n1,replicate,s1,r2/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 13, log entries: 13, rate-limit: 8.0 MiB/sec, 0.04s
I190205 07:23:28.287815 10002 storage/replica_raftstorage.go:805  [n3,s3,r2/?:{-}] applying preemptive snapshot at index 23 (id=7d70cdf8, encoded size=2788, 1 rocksdb batches, 13 log entries)
I190205 07:23:28.292176 10002 storage/replica_raftstorage.go:811  [n3,s3,r2/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I190205 07:23:28.295241 8805 storage/replica_command.go:798  [n1,replicate,s1,r2/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:/System/NodeLiveness{-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I190205 07:23:28.365988 8805 storage/replica_raft.go:372  [n1,s1,r2/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I190205 07:23:28.402830 8805 storage/store_snapshot.go:762  [n1,replicate,s1,r14/1:/Table/1{8-9}] sending preemptive snapshot 22ee957a at applied index 19
I190205 07:23:28.403871 8805 storage/store_snapshot.go:805  [n1,replicate,s1,r14/1:/Table/1{8-9}] streamed snapshot to (n3,s3):?: kv pairs: 8, log entries: 9, rate-limit: 8.0 MiB/sec, 0.01s
I190205 07:23:28.411282 10004 storage/replica_raftstorage.go:805  [n3,s3,r14/?:{-}] applying preemptive snapshot at index 19 (id=22ee957a, encoded size=1641, 1 rocksdb batches, 9 log entries)
I190205 07:23:28.413881 10004 storage/replica_raftstorage.go:811  [n3,s3,r14/?:/Table/1{8-9}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I190205 07:23:28.416901 8805 storage/replica_command.go:798  [n1,replicate,s1,r14/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r14:/Table/1{8-9} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I190205 07:23:28.563435 8805 storage/replica_raft.go:372  [n1,s1,r14/1:/Table/1{8-9}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I190205 07:23:28.640627 8805 storage/store_snapshot.go:762  [n1,replicate,s1,r12/1:/Table/1{6-7}] sending preemptive snapshot e3b3b505 at applied index 18
I190205 07:23:28.641608 8805 storage/store_snapshot.go:805  [n1,replicate,s1,r12/1:/Table/1{6-7}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 8, rate-limit: 8.0 MiB/sec, 0.04s
I190205 07:23:28.644627 10020 storage/replica_raftstorage.go:805  [n2,s2,r12/?:{-}] applying preemptive snapshot at index 18 (id=e3b3b505, encoded size=1514, 1 rocksdb batches, 8 log entries)
I190205 07:23:28.647062 10020 storage/replica_raftstorage.go:811  [n2,s2,r12/?:/Table/1{6-7}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I190205 07:23:28.653646 8805 storage/replica_command.go:798  [n1,replicate,s1,r12/1:/Table/1{6-7}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r12:/Table/1{6-7} [(n1,s1):1, (n3,s3):2, next=3, gen=0]
I190205 07:23:28.811667 8805 storage/replica_raft.go:372  [n1,s1,r12/1:/Table/1{6-7}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I190205 07:23:29.275406 10174 sql/event_log.go:135  [n1,client=127.0.0.1:48380,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.import.batch_size Value:10KB User:root}
I190205 07:23:29.430267 9005 server/status/runtime.go:464  [n1] runtime stats: 586 MiB RSS, 627 goroutines, 42 MiB/71 MiB/136 MiB GO alloc/idle/total, 81 MiB/121 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (109x), 4.1 MiB/4.1 MiB (r/w)net



TestImportCSVStmt/empty-file
... /Table/59/2/"Z"/1794 that contains live data
I190205 07:25:53.798633 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/59/2/"Z"/1794 - /Table/59/3/3956/!NULL that contains live data
I190205 07:25:53.812234 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/59/3/3956/!NULL - /Table/61 that contains live data
I190205 07:25:53.822522 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/1/783 - /Table/59/1/1538 that contains live data
I190205 07:25:53.822896 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/1/1538 - /Table/61 that contains live data
I190205 07:25:53.823235 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/1/1923 - /Table/59/1/2678 that contains live data
I190205 07:25:53.823507 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/1/2678 - /Table/59/1/3433 that contains live data
I190205 07:25:53.823833 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/1/3433 - /Table/59/1/3677 that contains live data
I190205 07:25:53.824071 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/1/3677 - /Table/59/2/"G"/4426 that contains live data
I190205 07:25:53.824401 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/NULL/853 - /Table/59/2/NULL/4189 that contains live data
I190205 07:25:53.824665 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/NULL/4189 - /Table/59/2/"D"/3176 that contains live data
I190205 07:25:53.825121 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/"D"/3176 - /Table/59/2/"G"/4426 that contains live data
I190205 07:25:53.825378 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/"G"/4426 - /Table/59/2/"L"/2118 that contains live data
I190205 07:25:53.825760 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/"L"/2118 - /Table/59/2/"Z"/1794 that contains live data
I190205 07:25:53.826115 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/"P"/4566 - /Table/59/2/"Z"/1794 that contains live data
I190205 07:25:53.826359 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/"U"/3686 - /Table/59/2/"Z"/1794 that contains live data
I190205 07:25:53.826594 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/2/"Z"/1794 - /Table/59/3/3956/!NULL that contains live data
I190205 07:25:53.826765 9283 storage/compactor/compactor.go:329  [n2,s2,compactor] purging suggested compaction for range /Table/59/3/3956/!NULL - /Table/61 that contains live data
I190205 07:25:54.236131 24730 ccl/importccl/read_import_proc.go:75  [n1,import-distsql] could not fetch file size; falling back to per-file progress: <nil>
I190205 07:25:54.498275 24739 ccl/importccl/read_import_proc.go:75  [n1,import-distsql] could not fetch file size; falling back to per-file progress: <nil>
        import_stmt_test.go:1180: job 6 did not match:
            Description: "IMPORT TABLE csv6.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///empty.csv')" != "IMPORT TABLE csv4.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4') WITH sstsize = '10K'"



TestImportCSVStmt/schema-in-file-auto-decompress
...n2,merge,s2,r240/3:/Table/6{6/3/4564…-7}] initiating a merge of r250:/Table/6{7-8/1/741} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=2] into this range (lhs+rhs has (size=12 KiB+19 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=31 KiB, qps=0.00))
I190205 07:26:41.781742 9289 server/status/runtime.go:464  [n2] runtime stats: 1.2 GiB RSS, 677 goroutines, 77 MiB/32 MiB/137 MiB GO alloc/idle/total, 191 MiB/232 MiB CGO alloc/total, 8902.0 CGO/sec, 141.6/13.3 %(u/s)time, 1.7 %gc (9x), 3.5 MiB/3.5 MiB (r/w)net
I190205 07:26:42.344264 29330 storage/replica_command.go:383  [n1,merge,s1,r239/1:/Table/66/2/"{D"/1018-G"/3387}] initiating a merge of r241:/Table/66/2/"{G"/3387-K"/817} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=18 KiB+18 KiB qps=3.52+0.00 --> 3.52qps) below threshold (size=37 KiB, qps=3.52))
I190205 07:26:42.381380 9547 storage/store.go:2669  [n3,s3,r232/2:/Table/66/{1/3514-2/"D"/1…}] removing replica r239/2
I190205 07:26:42.399165 9269 storage/store.go:2669  [n2,s2,r232/3:/Table/66/{1/3514-2/"D"/1…}] removing replica r239/3
I190205 07:26:42.412795 8950 storage/store.go:2669  [n1,s1,r232/1:/Table/66/{1/3514-2/"D"/1…}] removing replica r239/1
I190205 07:26:42.576685 29452 storage/replica_command.go:383  [n3,merge,s3,r232/2:/Table/66/{1/3514-2/"G"/3…}] initiating a merge of r241:/Table/66/2/"{G"/3387-K"/817} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=74 KiB+18 KiB qps=5.72+0.00 --> 5.72qps) below threshold (size=93 KiB, qps=5.72))
I190205 07:26:42.694756 9594 server/status/runtime.go:464  [n3] runtime stats: 1.2 GiB RSS, 676 goroutines, 61 MiB/46 MiB/137 MiB GO alloc/idle/total, 191 MiB/232 MiB CGO alloc/total, 9497.9 CGO/sec, 141.2/13.4 %(u/s)time, 1.8 %gc (9x), 3.4 MiB/3.4 MiB (r/w)net
I190205 07:26:43.103575 9208 storage/store.go:2669  [n2,s2,r240/3:/Table/6{6/3/4564…-7}] removing replica r250/3
I190205 07:26:43.107592 9536 storage/store.go:2669  [n3,s3,r240/2:/Table/6{6/3/4564…-7}] removing replica r250/2
I190205 07:26:43.172798 8978 storage/store.go:2669  [n1,s1,r240/1:/Table/6{6/3/4564…-7}] removing replica r250/1
E190205 07:26:43.359605 29483 storage/queue.go:826  [n1,replicaGC,s1,r250/1:/Table/6{7-8/1/741}] r250 was not found on s1
I190205 07:26:43.529164 29531 storage/replica_command.go:383  [n1,merge,s1,r241/1:/Table/66/2/"{G"/3387-K"/817}] initiating a merge of r242:/Table/66/2/"{K"/817-N"/3264} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=18 KiB+18 KiB qps=4.20+0.00 --> 4.20qps) below threshold (size=37 KiB, qps=4.20))
I190205 07:26:43.551284 8960 storage/store.go:2669  [n1,s1,r232/1:/Table/66/{1/3514-2/"G"/3…}] removing replica r241/1
I190205 07:26:43.554306 9511 storage/store.go:2669  [n3,s3,r232/2:/Table/66/{1/3514-2/"G"/3…}] removing replica r241/2
I190205 07:26:43.570055 9258 storage/store.go:2669  [n2,s2,r232/3:/Table/66/{1/3514-2/"G"/3…}] removing replica r241/3
I190205 07:26:43.718253 29518 storage/replica_command.go:383  [n3,merge,s3,r232/2:/Table/66/{1/3514-2/"K"/8…}] initiating a merge of r242:/Table/66/2/"{K"/817-N"/3264} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=93 KiB+18 KiB qps=7.51+0.00 --> 7.51qps) below threshold (size=111 KiB, qps=7.51))
        import_stmt_test.go:1180: job 8 did not match:
            Description: "IMPORT TABLE csv8.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4') WITH decompress = 'auto'" != "IMPORT TABLE \"\".\"\".t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0-opts', 'nodelocal:///csv/data-1-opts', 'nodelocal:///csv/data-2-opts', 'nodelocal:///csv/data-3-opts', 'nodelocal:///csv/data-4-opts') WITH comment = '#', delimiter = '|', \"nullif\" = '', skip = '2', transform = 'nodelocal:///5'"



TestImportCSVStmt/schema-in-query-transform-only
...146/1
I190205 07:25:47.478228 9222 storage/store.go:2669  [n2,s2,r141/3:/Table/59/{1/1538-2/"Z"/1…}] removing replica r146/3
I190205 07:25:47.519690 9501 storage/store.go:2669  [n3,s3,r141/2:/Table/59/{1/1538-2/"Z"/1…}] removing replica r146/2
I190205 07:25:47.615734 24032 storage/replica_command.go:383  [n1,merge,s1,r141/1:/Table/59/{1/1538-3/3956/…}] initiating a merge of r163:/Table/{59/3/3956/!NULL-61} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=5] into this range (lhs+rhs has (size=326 KiB+28 KiB qps=12.28+0.00 --> 12.28qps) below threshold (size=354 KiB, qps=12.28))
I190205 07:25:47.907661 8947 storage/store.go:2669  [n1,s1,r141/1:/Table/59/{1/1538-3/3956/…}] removing replica r163/1
I190205 07:25:47.936124 9539 storage/store.go:2669  [n3,s3,r141/2:/Table/59/{1/1538-3/3956/…}] removing replica r163/2
I190205 07:25:47.940859 9225 storage/store.go:2669  [n2,s2,r141/3:/Table/59/{1/1538-3/3956/…}] removing replica r163/3
I190205 07:25:48.135667 24123 storage/replica_command.go:383  [n3,merge,s3,r166/2:/Table/59{-/1/783}] initiating a merge of r117:/Table/59/1/{783-1538} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=20 KiB+19 KiB qps=0.00+1.74 --> 1.74qps) below threshold (size=39 KiB, qps=1.74))
I190205 07:25:48.317441 23549 sql/event_log.go:135  [n1] Event: "create_statistics", target: 59, info: {StatisticName:__auto__ Statement:CREATE STATISTICS __auto__ FROM [59] AS OF SYSTEM TIME '-30s'}
I190205 07:25:48.805706 9496 storage/store.go:2669  [n3,s3,r166/2:/Table/59{-/1/783}] removing replica r117/2
I190205 07:25:48.822713 9248 storage/store.go:2669  [n2,s2,r166/3:/Table/59{-/1/783}] removing replica r117/3
I190205 07:25:48.837608 8974 storage/store.go:2669  [n1,s1,r166/1:/Table/59{-/1/783}] removing replica r117/1
I190205 07:25:49.461858 24361 storage/replica_command.go:383  [n3,merge,s3,r166/2:/Table/59{-/1/1538}] initiating a merge of r141:/Table/{59/1/1538-61} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=9] into this range (lhs+rhs has (size=39 KiB+354 KiB qps=4.81+8.14 --> 12.95qps) below threshold (size=393 KiB, qps=12.95))
I190205 07:25:49.736438 9005 server/status/runtime.go:464  [n1] runtime stats: 1.1 GiB RSS, 758 goroutines, 76 MiB/34 MiB/137 MiB GO alloc/idle/total, 201 MiB/239 MiB CGO alloc/total, 2006.3 CGO/sec, 133.1/12.0 %(u/s)time, 0.7 %gc (8x), 3.5 MiB/3.5 MiB (r/w)net
I190205 07:25:50.913532 9497 storage/store.go:2669  [n3,s3,r166/2:/Table/59{-/1/1538}] removing replica r141/2
I190205 07:25:50.928649 8864 storage/store.go:2669  [n1,s1,r166/1:/Table/59{-/1/1538}] removing replica r141/1
I190205 07:25:51.013051 9266 storage/store.go:2669  [n2,s2,r166/3:/Table/59{-/1/1538}] removing replica r141/3
I190205 07:25:51.734600 9289 server/status/runtime.go:464  [n2] runtime stats: 1.1 GiB RSS, 698 goroutines, 57 MiB/51 MiB/137 MiB GO alloc/idle/total, 205 MiB/247 MiB CGO alloc/total, 1850.2 CGO/sec, 138.2/10.9 %(u/s)time, 0.9 %gc (10x), 3.5 MiB/3.5 MiB (r/w)net
I190205 07:25:52.527913 9594 server/status/runtime.go:464  [n3] runtime stats: 1.1 GiB RSS, 679 goroutines, 58 MiB/50 MiB/137 MiB GO alloc/idle/total, 205 MiB/248 MiB CGO alloc/total, 2221.8 CGO/sec, 139.9/10.7 %(u/s)time, 0.9 %gc (10x), 3.4 MiB/3.4 MiB (r/w)net
        import_stmt_test.go:1180: job 5 did not match:
            Description: "IMPORT TABLE \"\".\"\".t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0-opts', 'nodelocal:///csv/data-1-opts', 'nodelocal:///csv/data-2-opts', 'nodelocal:///csv/data-3-opts', 'nodelocal:///csv/data-4-opts') WITH comment = '#', delimiter = '|', \"nullif\" = '', skip = '2', transform = 'nodelocal:///5'" != "IMPORT TABLE csv3.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0-opts', 'nodelocal:///csv/data-1-opts', 'nodelocal:///csv/data-2-opts', 'nodelocal:///csv/data-3-opts', 'nodelocal:///csv/data-4-opts') WITH comment = '#', delimiter = '|', \"nullif\" = '', skip = '2'"



TestImportCSVStmt/schema-in-file-no-decompress
... range /Table/68/3/2821/"N"/PrefixEnd - /Table/68/3/3488/"E"/PrefixEnd that contains live data
I190205 07:27:25.241433 9576 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/68/3/4080/"Y" - /Table/68/3/4746/"O"/PrefixEnd that contains live data
I190205 07:27:25.241828 9576 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/68/3/4746/"O"/PrefixEnd - /Table/70/1/741 that contains live data
I190205 07:27:25.242125 9576 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/69 - /Table/70/1/741 that contains live data
I190205 07:27:25.245186 9500 storage/store.go:2669  [n3,s3,r277/2:/Table/68{-/1/4316}] removing replica r199/2
I190205 07:27:25.258507 30527 storage/queue.go:912  [n3,merge] purgatory is now empty
I190205 07:27:25.269123 9170 storage/store.go:2669  [n2,s2,r277/3:/Table/68{-/1/4316}] removing replica r199/3
I190205 07:27:25.292225 8970 storage/store.go:2669  [n1,s1,r277/1:/Table/68{-/1/4316}] removing replica r199/1
I190205 07:27:25.442409 33232 storage/replica_command.go:383  [n3,merge,s3,r277/2:/Table/68{-/1/4654}] initiating a merge of r271:/Table/68/{1/4654-2/"B"/3954} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=0] into this range (lhs+rhs has (size=122 KiB+19 KiB qps=3.75+0.00 --> 3.75qps) below threshold (size=141 KiB, qps=3.75))
I190205 07:27:25.716515 9267 storage/store.go:2669  [n2,s2,r274/3:/Table/68/3/{44/"S"…-730/"C…}] removing replica r275/3
I190205 07:27:25.722491 8942 storage/store.go:2669  [n1,s1,r274/1:/Table/68/3/{44/"S"…-730/"C…}] removing replica r275/1
I190205 07:27:25.739511 9541 storage/store.go:2669  [n3,s3,r274/2:/Table/68/3/{44/"S"…-730/"C…}] removing replica r275/2
I190205 07:27:25.901624 33276 storage/replica_command.go:383  [n2,merge,s2,r276/3:/Table/68/3/{821/"P"-3488/"…}] initiating a merge of r257:/Table/{68/3/3488/"E"/PrefixEnd-70/1/741} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=3] into this range (lhs+rhs has (size=73 KiB+60 KiB qps=2.85+2.55 --> 5.39qps) below threshold (size=134 KiB, qps=5.39))
I190205 07:27:26.068599 9523 storage/store.go:2669  [n3,s3,r277/2:/Table/68{-/1/4654}] removing replica r271/2
I190205 07:27:26.082540 9249 storage/store.go:2669  [n2,s2,r277/3:/Table/68{-/1/4654}] removing replica r271/3
I190205 07:27:26.131293 8945 storage/store.go:2669  [n1,s1,r277/1:/Table/68{-/1/4654}] removing replica r271/1
I190205 07:27:26.270002 33372 storage/replica_command.go:383  [n3,merge,s3,r277/2:/Table/68{-/2/"B"/3…}] initiating a merge of r192:/Table/68/2/"{B"/3954-F"/1332} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=141 KiB+18 KiB qps=3.75+0.00 --> 3.75qps) below threshold (size=159 KiB, qps=3.75))
I190205 07:27:26.463255 9241 storage/store.go:2669  [n2,s2,r276/3:/Table/68/3/{821/"P"-3488/"…}] removing replica r257/3
I190205 07:27:26.469833 8944 storage/store.go:2669  [n1,s1,r276/1:/Table/68/3/{821/"P"-3488/"…}] removing replica r257/1
I190205 07:27:26.502121 33276 storage/queue.go:912  [n2,merge] purgatory is now empty
I190205 07:27:26.503424 9561 storage/store.go:2669  [n3,s3,r276/2:/Table/68/3/{821/"P"-3488/"…}] removing replica r257/2
I190205 07:27:26.539561 33512 storage/replica_command.go:383  [n2,merge,s2,r274/3:/Table/68/3/{44/"S"…-821/"P"}] initiating a merge of r276:/Table/{68/3/821/"P"-70/1/741} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=3] into this range (lhs+rhs has (size=21 KiB+134 KiB qps=4.00+2.85 --> 6.85qps) below threshold (size=154 KiB, qps=6.85))
        import_stmt_test.go:1180: job 9 did not match:
            Description: "IMPORT TABLE csv9.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4') WITH decompress = 'none'" != "CREATE STATISTICS __auto__ FROM [59] AS OF SYSTEM TIME '-30s'"
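The mismatch above is characteristic of the whole failure: the actual description is a `CREATE STATISTICS __auto__ ...` job where the test expected an IMPORT job, and in the other subtests each expected IMPORT description is paired with the description of a *different* IMPORT. This is consistent with the test looking up jobs by ordinal position while an extra, asynchronously created job (such as an auto-statistics job) shifts every later ordinal. A minimal, hypothetical Go sketch of that failure mode — `verifyJobDescription` is not the real test helper, just an illustration of an ordinal-based lookup like the one asserted at `import_stmt_test.go:1180`:

```go
package main

import "fmt"

// verifyJobDescription models a test that expects the n-th recorded job's
// description to match the n-th statement it ran. If an unrelated job is
// interleaved into the jobs table, every comparison after it is shifted
// and fails with a "did not match" error like the ones in this issue.
func verifyJobDescription(jobs []string, n int, want string) error {
	if n >= len(jobs) {
		return fmt.Errorf("job %d not found", n)
	}
	if got := jobs[n]; got != want {
		return fmt.Errorf("job %d did not match:\nDescription: %q != %q", n, want, got)
	}
	return nil
}

func main() {
	// Jobs as actually recorded: an auto CREATE STATISTICS job lands
	// between the two IMPORTs, shifting the second IMPORT's ordinal.
	jobs := []string{
		"IMPORT TABLE csv8.public.t ... WITH decompress = 'auto'",
		"CREATE STATISTICS __auto__ FROM [59] AS OF SYSTEM TIME '-30s'",
		"IMPORT TABLE csv9.public.t ... WITH decompress = 'none'",
	}
	// The test expects job 1 to be the second IMPORT, but finds the
	// statistics job instead.
	fmt.Println(verifyJobDescription(jobs, 1, "IMPORT TABLE csv9.public.t ... WITH decompress = 'none'"))
}
```

Under this reading, the fix is to look jobs up by something stable (e.g. job ID returned at creation) rather than by ordinal, or to exclude automatic jobs from the comparison.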



TestImportCSVStmt/schema-in-file-auto-gzip
...e (lhs+rhs has (size=45 KiB+19 KiB qps=8.99+0.00 --> 8.99qps) below threshold (size=64 KiB, qps=8.99))
I190205 07:28:19.120727 38663 storage/replica_command.go:244  [n3,s3,r381/2:/Table/74/3/{1337/"…-3708/"…}] initiating a split of this range at key /Table/74/3/2004/"C"/PrefixEnd [r382] (manual)
I190205 07:28:19.333578 36842 storage/queue.go:912  [n1,merge] purgatory is now empty
I190205 07:28:19.387572 38669 storage/replica_command.go:383  [n1,merge,s1,r324/1:/Table/72/1/3{160-875}] initiating a merge of r329:/Table/72/1/{3875-4590} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=19 KiB+19 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=38 KiB, qps=0.00))
I190205 07:28:19.418480 9001 gossip/gossip.go:557  [n1] gossip status (ok, 3 nodes)
gossip client (0/3 cur/max conns)
gossip server (2/3 cur/max conns, infos 4023/2038 sent/received, bytes 1384900B/560568B sent/received)
  2: 127.0.0.1:40801 (4m58s)
  3: 127.0.0.1:45665 (4m57s)
gossip connectivity
  n3 [sentinel];
  n2 -> n1; n3 -> n1;
I190205 07:28:19.443416 9211 storage/store.go:2669  [n2,s2,r313/3:/Table/72/1/{1456-3160}] removing replica r324/3
I190205 07:28:19.447269 9541 storage/store.go:2669  [n3,s3,r313/2:/Table/72/1/{1456-3160}] removing replica r324/2
I190205 07:28:19.495824 8965 storage/store.go:2669  [n1,s1,r313/1:/Table/72/1/{1456-3160}] removing replica r324/1
I190205 07:28:19.720332 38702 storage/replica_command.go:244  [n3,s3,r382/2:/Table/74/3/{2004/"…-3708/"…}] initiating a split of this range at key /Table/74/3/2671/"T"/PrefixEnd [r383] (manual)
I190205 07:28:19.965335 9005 server/status/runtime.go:464  [n1] runtime stats: 1.4 GiB RSS, 670 goroutines, 45 MiB/58 MiB/137 MiB GO alloc/idle/total, 277 MiB/317 MiB CGO alloc/total, 8181.6 CGO/sec, 146.2/11.3 %(u/s)time, 1.7 %gc (9x), 3.1 MiB/3.1 MiB (r/w)net
I190205 07:28:20.149429 38789 storage/replica_command.go:383  [n2,merge,s2,r313/3:/Table/72/1/{1456-3875}] initiating a merge of r329:/Table/72/1/{3875-4590} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=64 KiB+19 KiB qps=8.99+0.00 --> 8.99qps) below threshold (size=83 KiB, qps=8.99))
I190205 07:28:20.166502 38749 storage/replica_command.go:244  [n3,s3,r383/2:/Table/74/3/{2671/"…-3708/"…}] initiating a split of this range at key /Table/74/3/3042/"A" [r384] (manual)
I190205 07:28:20.480415 38870 storage/replica_command.go:383  [n1,merge,s1,r329/1:/Table/72/1/{3875-4590}] initiating a merge of r332:/Table/72/{1/4590-2/"B"/2420} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=19 KiB+19 KiB qps=4.69+0.00 --> 4.69qps) below threshold (size=38 KiB, qps=4.69))
I190205 07:28:20.529719 9538 storage/store.go:2669  [n3,s3,r313/2:/Table/72/1/{1456-3875}] removing replica r329/2
I190205 07:28:20.530222 9256 storage/store.go:2669  [n2,s2,r313/3:/Table/72/1/{1456-3875}] removing replica r329/3
I190205 07:28:20.539784 8938 storage/store.go:2669  [n1,s1,r313/1:/Table/72/1/{1456-3875}] removing replica r329/1
I190205 07:28:21.025609 38906 storage/replica_command.go:383  [n2,merge,s2,r313/3:/Table/72/1/{1456-4590}] initiating a merge of r332:/Table/72/{1/4590-2/"B"/2420} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=83 KiB+19 KiB qps=9.00+0.00 --> 9.00qps) below threshold (size=101 KiB, qps=9.00))
        import_stmt_test.go:1180: job 11 did not match:
            Description: "IMPORT TABLE csv11.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:////csv/data-0.gz', 'nodelocal:////csv/data-1.gz', 'nodelocal:////csv/data-2.gz', 'nodelocal:////csv/data-3.gz', 'nodelocal:////csv/data-4.gz') WITH decompress = 'auto'" != "IMPORT TABLE csv7.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///empty.csv', 'nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4')"



TestImportCSVStmt/schema-in-query-opts
... removing replica r121/2
I190205 07:25:04.009890 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/55/1/741 - /Table/55/1/2171 that contains live data
I190205 07:25:04.028646 9172 storage/store.go:2669  [n2,s2,r101/3:/Table/57/1/{741-2886}] removing replica r121/3
I190205 07:25:04.036976 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/55/1/3638 - /Table/55/3/456/"O"/PrefixEnd that contains live data
I190205 07:25:04.039842 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/55/3/456/"O"/PrefixEnd - /Table/57 that contains live data
I190205 07:25:04.043775 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/55/3/4089/"H"/PrefixEnd - /Table/55/3/4756/"Y"/PrefixEnd that contains live data
I190205 07:25:04.054069 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/55/3/4756/"Y"/PrefixEnd - /Table/57 that contains live data
I190205 07:25:04.054582 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/1/1456 - /Table/57/1/2171 that contains live data
I190205 07:25:04.054804 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/1/2171 - /Table/57/1/2886 that contains live data
I190205 07:25:04.054986 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/1/2886 - /Table/57/1/3601 that contains live data
I190205 07:25:04.055498 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/3/1261/"N"/PrefixEnd - /Table/57/3/1712/"W" that contains live data
I190205 07:25:04.055986 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/3/1712/"W" - /Table/57/3/2378/"M"/PrefixEnd that contains live data
I190205 07:25:04.056565 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/3/2378/"M"/PrefixEnd - /Table/57/3/3045/"D"/PrefixEnd that contains live data
I190205 07:25:04.057072 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/3/3045/"D"/PrefixEnd - /Table/57/3/3712/"U"/PrefixEnd that contains live data
I190205 07:25:04.057491 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/3/3712/"U"/PrefixEnd - /Table/57/3/4345/"D" that contains live data
I190205 07:25:04.057849 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/57/3/4345/"D" - /Table/58 that contains live data
I190205 07:25:04.631808 19592 storage/replica_command.go:383  [n1,merge,s1,r101/1:/Table/57/1/{741-3601}] initiating a merge of r124:/Table/57/1/{3601-4316} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=75 KiB+19 KiB qps=6.31+0.00 --> 6.31qps) below threshold (size=94 KiB, qps=6.31))
I190205 07:25:04.975821 9547 storage/store.go:2669  [n3,s3,r101/2:/Table/57/1/{741-3601}] removing replica r124/2
I190205 07:25:04.981573 9252 storage/store.go:2669  [n2,s2,r101/3:/Table/57/1/{741-3601}] removing replica r124/3
I190205 07:25:04.992512 8936 storage/store.go:2669  [n1,s1,r101/1:/Table/57/1/{741-3601}] removing replica r124/1
        import_stmt_test.go:1180: job 3 did not match:
            Description: "IMPORT TABLE csv3.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0-opts', 'nodelocal:///csv/data-1-opts', 'nodelocal:///csv/data-2-opts', 'nodelocal:///csv/data-3-opts', 'nodelocal:///csv/data-4-opts') WITH comment = '#', delimiter = '|', \"nullif\" = '', skip = '2'" != "CREATE STATISTICS __auto__ FROM [55] AS OF SYSTEM TIME '-30s'"



TestImportCSVStmt/schema-in-file-explicit-gzip
.../3701 - /Table/68/3/44/"S"/PrefixEnd that contains live data
I190205 07:27:40.317844 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/68/3/730/"C"/PrefixEnd - /Table/68/3/821/"P" that contains live data
I190205 07:27:40.318131 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/68/3/821/"P" - /Table/70/1/741 that contains live data
I190205 07:27:40.331151 8999 storage/compactor/compactor.go:329  [n1,s1,compactor] purging suggested compaction for range /Table/68/3/3488/"E"/PrefixEnd - /Table/70/1/741 that contains live data
I190205 07:27:40.647597 34745 storage/replica_command.go:244  [n1,s1,r335/1:/Table/72/2/"{B"/2420-I"/9}] initiating a split of this range at key /Table/72/2/"E"/2682 [r338] (manual)
I190205 07:27:41.411939 34769 storage/replica_command.go:244  [n1,s1,r337/1:/Table/72/{2/"O"/4…-3/776/"…}] initiating a split of this range at key /Table/72/2/"S"/2411 [r339] (manual)
I190205 07:27:41.476577 34562 storage/replica_command.go:244  [n1,s1,r336/1:/{Table/72/3/2…-Max}] initiating a split of this range at key /Table/72/3/2777/"V"/PrefixEnd [r340] (manual)
I190205 07:27:41.865402 9289 server/status/runtime.go:464  [n2] runtime stats: 1.4 GiB RSS, 686 goroutines, 80 MiB/31 MiB/137 MiB GO alloc/idle/total, 294 MiB/337 MiB CGO alloc/total, 5925.9 CGO/sec, 143.8/11.2 %(u/s)time, 2.1 %gc (9x), 3.3 MiB/3.3 MiB (r/w)net
I190205 07:27:42.837258 9594 server/status/runtime.go:464  [n3] runtime stats: 1.4 GiB RSS, 679 goroutines, 88 MiB/21 MiB/137 MiB GO alloc/idle/total, 294 MiB/337 MiB CGO alloc/total, 6404.2 CGO/sec, 142.9/11.2 %(u/s)time, 2.2 %gc (9x), 3.3 MiB/3.3 MiB (r/w)net
I190205 07:27:42.844476 34562 storage/replica_command.go:244  [n1,s1,r340/1:/{Table/72/3/2…-Max}] initiating a split of this range at key /Table/72/3/3444/"M"/PrefixEnd [r341] (manual)
I190205 07:27:42.894198 34893 storage/replica_command.go:244  [n1,s1,r339/1:/Table/72/{2/"S"/2…-3/776/"…}] initiating a split of this range at key /Table/72/2/"V"/4858 [r342] (manual)
I190205 07:27:44.103236 34970 storage/replica_command.go:244  [n1,s1,r342/1:/Table/72/{2/"V"/4…-3/776/"…}] initiating a split of this range at key /Table/72/2/"Z"/2340 [r345] (manual); delayed split for 0.2s to avoid Raft snapshot
I190205 07:27:44.227727 34562 storage/replica_command.go:244  [n1,s1,r341/1:/{Table/72/3/3…-Max}] initiating a split of this range at key /Table/72/3/4111/"D"/PrefixEnd [r344] (manual); delayed split for 0.6s to avoid Raft snapshot
I190205 07:27:44.321275 34898 storage/replica_command.go:244  [n1,s1,r341/1:/{Table/72/3/3…-Max}] initiating a split of this range at key /Table/73 [r343] (manual); delayed split for 0.8s to avoid Raft snapshot
I190205 07:27:44.609926 35049 storage/replica_command.go:244  [n1,s1,r345/1:/Table/72/{2/"Z"/2…-3/776/"…}] initiating a split of this range at key /Table/72/3/98/"U" [r346] (manual)
I190205 07:27:44.877920 35007 storage/replica_command.go:244  [n1,s1,r344/1:/{Table/72/3/4…-Max}] initiating a split of this range at key /Table/73 [r348] (manual)
I190205 07:27:45.022307 34562 storage/replica_command.go:244  [n1,s1,r344/1:/{Table/72/3/4…-Max}] initiating a split of this range at key /Table/72/3/4336/"U" [r347] (manual); delayed split for 0.2s to avoid Raft snapshot
I190205 07:27:45.208533 34562 storage/replica_command.go:244  [n1,s1,r344/1:/Table/7{2/3/4111…-3}] initiating a split of this range at key /Table/72/3/4336/"U" [r349] (manual)
        import_stmt_test.go:1180: job 10 did not match:
            Description: "IMPORT TABLE csv10.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:////csv/data-0.gz', 'nodelocal:////csv/data-1.gz', 'nodelocal:////csv/data-2.gz', 'nodelocal:////csv/data-3.gz', 'nodelocal:////csv/data-4.gz') WITH decompress = 'gzip'" != "IMPORT TABLE csv6.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///empty.csv')"



TestImportCSVStmt/schema-in-file-implicit-gzip
... 143.6/11.2 %(u/s)time, 0.8 %gc (7x), 3.5 MiB/3.5 MiB (r/w)net
I190205 07:28:52.018300 41870 storage/replica_command.go:383  [n1,merge,s1,r353/1:/Table/74/{1/741-2/"L"/2…}] initiating a merge of r365:/Table/74/{2/"L"/2430-3/1337/"L"/PrefixEnd} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=5] into this range (lhs+rhs has (size=173 KiB+112 KiB qps=4.27+5.83 --> 10.11qps) below threshold (size=285 KiB, qps=10.11))
I190205 07:28:52.497373 41905 storage/replica_command.go:244  [n1,s1,r418/1:/Table/76/3/{1821/"…-3131/"…}] initiating a split of this range at key /Table/76/3/2465/"V" [r419] (manual)
I190205 07:28:52.728869 9535 storage/store.go:2669  [n3,s3,r381/2:/Table/74/3/{1337/"…-2004/"…}] removing replica r382/2
I190205 07:28:52.734817 9227 storage/store.go:2669  [n2,s2,r381/3:/Table/74/3/{1337/"…-2004/"…}] removing replica r382/3
I190205 07:28:52.745982 8975 storage/store.go:2669  [n1,s1,r381/1:/Table/74/3/{1337/"…-2004/"…}] removing replica r382/1
I190205 07:28:52.857177 42025 storage/replica_command.go:383  [n3,merge,s3,r369/2:/Table/74/3/{3708/"…-4375/"…}] initiating a merge of r370:/Table/7{4/3/4375/"H"/PrefixEnd-5} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=18 KiB+17 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=35 KiB, qps=0.00))
I190205 07:28:52.990575 9594 server/status/runtime.go:464  [n3] runtime stats: 1.5 GiB RSS, 675 goroutines, 69 MiB/36 MiB/138 MiB GO alloc/idle/total, 304 MiB/343 MiB CGO alloc/total, 8847.7 CGO/sec, 144.2/11.7 %(u/s)time, 0.9 %gc (8x), 3.4 MiB/3.4 MiB (r/w)net
I190205 07:28:53.411020 8959 storage/store.go:2669  [n1,s1,r353/1:/Table/74/{1/741-2/"L"/2…}] removing replica r365/1
I190205 07:28:53.429854 9516 storage/store.go:2669  [n3,s3,r353/2:/Table/74/{1/741-2/"L"/2…}] removing replica r365/2
I190205 07:28:53.490608 9263 storage/store.go:2669  [n2,s2,r353/3:/Table/74/{1/741-2/"L"/2…}] removing replica r365/3
I190205 07:28:53.634141 42094 storage/replica_command.go:383  [n1,merge,s1,r370/1:/Table/7{4/3/4375…-5}] initiating a merge of r371:/Table/7{5-6/1/741} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=2] into this range (lhs+rhs has (size=17 KiB+19 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=36 KiB, qps=0.00))
E190205 07:28:53.642526 42096 storage/queue.go:826  [n2,replicaGC,s2,r365/3:/Table/74/{2/"L"/2…-3/1337/…}] r365 was not found on s2
I190205 07:28:53.660908 9519 storage/store.go:2669  [n3,s3,r369/2:/Table/74/3/{3708/"…-4375/"…}] removing replica r370/2
I190205 07:28:53.714226 8939 storage/store.go:2669  [n1,s1,r369/1:/Table/74/3/{3708/"…-4375/"…}] removing replica r370/1
I190205 07:28:53.723113 9250 storage/store.go:2669  [n2,s2,r369/3:/Table/74/3/{3708/"…-4375/"…}] removing replica r370/3
I190205 07:28:53.891291 42136 storage/replica_command.go:383  [n3,merge,s3,r369/2:/Table/7{4/3/3708…-5}] initiating a merge of r371:/Table/7{5-6/1/741} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=2] into this range (lhs+rhs has (size=35 KiB+19 KiB qps=4.99+0.00 --> 4.99qps) below threshold (size=54 KiB, qps=4.99))
I190205 07:28:54.307394 9263 storage/store.go:2669  [n2,s2,r369/3:/Table/7{4/3/3708…-5}] removing replica r371/3
I190205 07:28:54.309872 9498 storage/store.go:2669  [n3,s3,r369/2:/Table/7{4/3/3708…-5}] removing replica r371/2
I190205 07:28:54.324721 8985 storage/store.go:2669  [n1,s1,r369/1:/Table/7{4/3/3708…-5}] removing replica r371/1
        import_stmt_test.go:1180: job 12 did not match:
            Description: "IMPORT TABLE csv12.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:////csv/data-0.gz', 'nodelocal:////csv/data-1.gz', 'nodelocal:////csv/data-2.gz', 'nodelocal:////csv/data-3.gz', 'nodelocal:////csv/data-4.gz')" != "IMPORT TABLE csv8.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4') WITH decompress = 'auto'"



TestImportCSVStmt/schema-in-file-sstsize
...190205 07:25:38.357769 9576 storage/compactor/compactor.go:329  [n3,s3,compactor] purging suggested compaction for range /Table/60 - /Table/61/1/741 that contains live data
I190205 07:25:38.367138 23048 storage/replica_command.go:383  [n1,merge,s1,r120/1:/Table/59/{1/4432-2/NULL/…}] initiating a merge of r143:/Table/59/2/NULL/{853-4189} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=20 KiB+20 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=40 KiB, qps=0.00))
I190205 07:25:38.407316 23113 storage/replica_command.go:383  [n3,merge,s3,r142/2:/Table/59/{1/3677-2/NULL/…}] initiating a merge of r143:/Table/59/2/NULL/{853-4189} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=39 KiB+20 KiB qps=2.92+0.00 --> 2.92qps) below threshold (size=59 KiB, qps=2.92))
I190205 07:25:38.544632 23087 storage/queue.go:912  [n1,merge] purgatory is now empty
I190205 07:25:38.573725 23089 storage/replica_command.go:244  [n1,s1,r138/1:/{Table/61/3/4…-Max}] initiating a split of this range at key /Table/62 [r139] (manual)
I190205 07:25:38.752197 23175 storage/replica_command.go:383  [n1,merge,s1,r141/1:/Table/59/1/1{538-923}] initiating a merge of r144:/Table/59/1/{1923-2678} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=0] into this range (lhs+rhs has (size=9.9 KiB+19 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=29 KiB, qps=0.00))
I190205 07:25:38.776188 9546 storage/store.go:2669  [n3,s3,r142/2:/Table/59/{1/3677-2/NULL/…}] removing replica r143/2
I190205 07:25:38.780547 8969 storage/store.go:2669  [n1,s1,r142/1:/Table/59/{1/3677-2/NULL/…}] removing replica r143/1
I190205 07:25:38.805973 9261 storage/store.go:2669  [n2,s2,r142/3:/Table/59/{1/3677-2/NULL/…}] removing replica r143/3
I190205 07:25:39.100915 9521 storage/store.go:2669  [n3,s3,r141/2:/Table/59/1/1{538-923}] removing replica r144/2
I190205 07:25:39.103774 8973 storage/store.go:2669  [n1,s1,r141/1:/Table/59/1/1{538-923}] removing replica r144/1
I190205 07:25:39.132206 9217 storage/store.go:2669  [n2,s2,r141/3:/Table/59/1/1{538-923}] removing replica r144/3
I190205 07:25:39.249435 23186 storage/replica_command.go:383  [n1,merge,s1,r141/1:/Table/59/1/{1538-2678}] initiating a merge of r113:/Table/59/1/{2678-3433} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=1] into this range (lhs+rhs has (size=29 KiB+19 KiB qps=0.00+0.00 --> 0.00qps) below threshold (size=49 KiB, qps=0.00))
I190205 07:25:39.251506 23254 storage/replica_command.go:383  [n3,merge,s3,r142/2:/Table/59/{1/3677-2/NULL/…}] initiating a merge of r145:/Table/59/2/{NULL/4189-"D"/3176} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=2] into this range (lhs+rhs has (size=59 KiB+19 KiB qps=10.96+0.00 --> 10.96qps) below threshold (size=78 KiB, qps=10.96))
I190205 07:25:39.685735 9005 server/status/runtime.go:464  [n1] runtime stats: 1.0 GiB RSS, 667 goroutines, 44 MiB/64 MiB/137 MiB GO alloc/idle/total, 180 MiB/216 MiB CGO alloc/total, 3455.3 CGO/sec, 134.5/13.1 %(u/s)time, 0.9 %gc (10x), 4.1 MiB/4.1 MiB (r/w)net
I190205 07:25:39.799037 9558 storage/store.go:2669  [n3,s3,r141/2:/Table/59/1/{1538-2678}] removing replica r113/2
I190205 07:25:39.801319 8946 storage/store.go:2669  [n1,s1,r141/1:/Table/59/1/{1538-2678}] removing replica r113/1
I190205 07:25:39.828885 9260 storage/store.go:2669  [n2,s2,r141/3:/Table/59/1/{1538-2678}] removing replica r113/3
I190205 07:25:39.860942 22739 sql/event_log.go:135  [n1] Event: "create_statistics", target: 57, info: {StatisticName:__auto__ Statement:CREATE STATISTICS __auto__ FROM [57] AS OF SYSTEM TIME '-30s'}
        import_stmt_test.go:1180: job 4 did not match:
            Description: "IMPORT TABLE csv4.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4') WITH sstsize = '10K'" != "CREATE STATISTICS __auto__ FROM [53] AS OF SYSTEM TIME '-30s'"



TestImportCSVStmt/empty-with-files
...27359 storage/replica_command.go:244  [n1,s1,r245/1:/Table/66/{2/"U"/3…-3/2018/…}] initiating a split of this range at key /Table/66/2/"Y"/675 [r246] (manual)
I190205 07:26:19.971449 27361 storage/replica_command.go:383  [n1,merge,s1,r136/1:/Table/61/{1/1214-3/2564/…}] initiating a merge of r219:/Table/6{1/3/2564/"Q"-4} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=5] into this range (lhs+rhs has (size=306 KiB+67 KiB qps=5.00+0.00 --> 5.00qps) below threshold (size=372 KiB, qps=5.00))
I190205 07:26:20.274483 27510 storage/replica_command.go:244  [n1,s1,r246/1:/Table/66/{2/"Y"/6…-3/2018/…}] initiating a split of this range at key /Table/66/3/333/"V"/PrefixEnd [r247] (manual)
I190205 07:26:20.333907 8946 storage/store.go:2669  [n1,s1,r136/1:/Table/61/{1/1214-3/2564/…}] removing replica r219/1
I190205 07:26:20.345679 9237 storage/store.go:2669  [n2,s2,r136/3:/Table/61/{1/1214-3/2564/…}] removing replica r219/3
I190205 07:26:20.381508 9549 storage/store.go:2669  [n3,s3,r136/2:/Table/61/{1/1214-3/2564/…}] removing replica r219/2
I190205 07:26:20.935442 27605 storage/replica_command.go:244  [n1,s1,r247/1:/Table/66/3/{333/"V…-2018/"…}] initiating a split of this range at key /Table/66/3/1000/"M"/PrefixEnd [r248] (manual)
I190205 07:26:21.441679 27521 storage/replica_command.go:244  [n1,s1,r248/1:/Table/66/3/{1000/"…-2018/"…}] initiating a split of this range at key /Table/66/3/1352/"A" [r249] (manual)
I190205 07:26:21.514293 22137 storage/queue.go:912  [n2,merge] purgatory is now empty
I190205 07:26:21.516878 9285 gossip/gossip.go:557  [n2] gossip status (ok, 3 nodes)
gossip client (1/3 cur/max conns)
  1: 127.0.0.1:39043 (3m0s: infos 655/1203 sent/received, bytes 171734B/429166B sent/received)
gossip server (0/3 cur/max conns, infos 655/1203 sent/received, bytes 171734B/429166B sent/received)
I190205 07:26:21.653412 27614 storage/replica_command.go:383  [n1,merge,s1,r135/1:/Table/61/1/{741-1214}] initiating a merge of r136:/Table/6{1/1/1214-4} [(n1,s1):1, (n3,s3):2, (n2,s2):3, next=4, gen=12] into this range (lhs+rhs has (size=12 KiB+372 KiB qps=0.00+4.17 --> 4.17qps) below threshold (size=385 KiB, qps=4.17))
I190205 07:26:21.758728 9289 server/status/runtime.go:464  [n2] runtime stats: 1.2 GiB RSS, 673 goroutines, 46 MiB/60 MiB/137 MiB GO alloc/idle/total, 190 MiB/232 MiB CGO alloc/total, 9348.7 CGO/sec, 135.3/12.7 %(u/s)time, 1.5 %gc (9x), 3.7 MiB/3.7 MiB (r/w)net
I190205 07:26:22.150253 22585 storage/queue.go:912  [n3,merge] purgatory is now empty
I190205 07:26:22.214950 9578 gossip/gossip.go:557  [n3] gossip status (ok, 3 nodes)
gossip client (1/3 cur/max conns)
  1: 127.0.0.1:39043 (3m0s: infos 600/1272 sent/received, bytes 173257B/439883B sent/received)
gossip server (0/3 cur/max conns, infos 600/1272 sent/received, bytes 173257B/439883B sent/received)
I190205 07:26:22.243216 8860 storage/store.go:2669  [n1,s1,r135/1:/Table/61/1/{741-1214}] removing replica r136/1
I190205 07:26:22.295405 9527 storage/store.go:2669  [n3,s3,r135/2:/Table/61/1/{741-1214}] removing replica r136/2
I190205 07:26:22.305182 9265 storage/store.go:2669  [n2,s2,r135/3:/Table/61/1/{741-1214}] removing replica r136/3
I190205 07:26:22.590901 27703 storage/replica_command.go:244  [n1,s1,r240/1:/{Table/66/3/4…-Max}] initiating a split of this range at key /Table/67 [r250] (manual)
I190205 07:26:22.634158 9594 server/status/runtime.go:464  [n3] runtime stats: 1.2 GiB RSS, 666 goroutines, 71 MiB/36 MiB/137 MiB GO alloc/idle/total, 189 MiB/232 MiB CGO alloc/total, 9173.3 CGO/sec, 134.1/12.9 %(u/s)time, 1.5 %gc (9x), 3.7 MiB/3.7 MiB (r/w)net
        import_stmt_test.go:1180: job 7 did not match:
            Description: "IMPORT TABLE csv7.public.t (a INT8 PRIMARY KEY, b STRING, INDEX (b), INDEX (a, b)) CSV DATA ('nodelocal:///empty.csv', 'nodelocal:///csv/data-0', 'nodelocal:///csv/data-1', 'nodelocal:///csv/data-2', 'nodelocal:///csv/data-3', 'nodelocal:///csv/data-4')" != "CREATE STATISTICS __auto__ FROM [57] AS OF SYSTEM TIME '-30s'"




Please assign, take a look and update the issue accordingly.

cockroach-teamcity added this to the 2.2 milestone Feb 5, 2019
cockroach-teamcity added C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels Feb 5, 2019
tbg (Member) commented Feb 5, 2019

Cc @rebeccataft

danhhz added a commit to danhhz/cockroach that referenced this issue Feb 5, 2019
It's breaking the master build. See cockroachdb#34568.

Release note: None
rytaft (Collaborator) commented Feb 5, 2019

oops sorry didn't see this -- my GitHub handle is @rytaft -- will take a look

tbg (Member) commented Feb 5, 2019

No worries, I realized the handle was wrong but was on my phone (GH mobile doesn't auto-complete the handle) and ran out of guesses. Meant to return and fix, but forgot. Thanks for taking a look.

rytaft added a commit to rytaft/cockroach that referenced this issue Feb 5, 2019
TestImportCSVStmt tests that IMPORT jobs appear in a certain order
in the system.jobs table. Automatic statistics were causing this
test to be flaky since CreateStats jobs were present in the jobs
table as well, in an unpredictable order. This commit disables
automatic stats for this test.

Fixes cockroachdb#34568

Release note: None
rytaft added a commit to rytaft/cockroach that referenced this issue Feb 5, 2019
TestImportCSVStmt tests that IMPORT jobs appear in a certain order
in the system.jobs table. Automatic statistics were causing this
test to be flaky since CreateStats jobs were present in the jobs
table as well, in an unpredictable order. This commit fixes the problem
by only selecting IMPORT jobs from the jobs table.

Fixes cockroachdb#34568

Release note: None
rytaft added a commit to rytaft/cockroach that referenced this issue Feb 6, 2019
TestImportCSVStmt tests that IMPORT jobs appear in a certain order
in the system.jobs table. Automatic statistics were causing this
test to be flaky since CreateStats jobs were present in the jobs
table as well, in an unpredictable order. This commit fixes the problem
by only selecting IMPORT jobs from the jobs table.

Fixes cockroachdb#34568

Release note: None
craig bot pushed a commit that referenced this issue Feb 11, 2019
34296: storage: improve message on slow Raft proposal r=petermattis a=tbg

Touches #33007.

Release note: None

34589: importccl: fix flaky test TestImportCSVStmt r=rytaft a=rytaft

`TestImportCSVStmt` tests that `IMPORT` jobs appear in a certain order
in the `system.jobs` table. Automatic statistics were causing this
test to be flaky since `CreateStats` jobs were present in the jobs
table as well, in an unpredictable order. This commit fixes the problem
by only selecting `IMPORT` jobs from the jobs table.

Fixes #34568

Release note: None
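The fix described above can be sketched as below. The `job` struct and `onlyImports` helper are illustrative stand-ins for the actual test code in `import_stmt_test.go`, which queries `system.jobs` directly; the point is simply that ordering assertions only see IMPORT jobs once interleaved CreateStats jobs are filtered out:

```go
package main

import "fmt"

// job mirrors the bits of a system.jobs row the test compares.
// This struct and the filter below are illustrative, not the real API.
type job struct {
	Type        string // e.g. "IMPORT", "CREATE STATS"
	Description string
}

// onlyImports drops interleaved jobs (such as automatic CreateStats
// jobs) so that ordering assertions see IMPORT jobs alone.
func onlyImports(jobs []job) []job {
	var out []job
	for _, j := range jobs {
		if j.Type == "IMPORT" {
			out = append(out, j)
		}
	}
	return out
}

func main() {
	jobs := []job{
		{"IMPORT", "IMPORT TABLE csv10.public.t ..."},
		{"CREATE STATS", "CREATE STATISTICS __auto__ FROM [53] ..."},
		{"IMPORT", "IMPORT TABLE csv12.public.t ..."},
	}
	fmt.Println(len(onlyImports(jobs))) // 2
}
```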

34660: storage: make RaftTruncatedState unreplicated r=bdarnell a=tbg

This isn't 100% polished yet, but generally ready for review.

----

See #34287.

Today, Raft (or preemptive) snapshots include the past Raft log, that is,
log entries which are already reflected in the state of the snapshot.
Fundamentally, this is because we have historically used a replicated
TruncatedState.

TruncatedState essentially tells us what the first index in the log is
(though it also includes a Term). If the TruncatedState cannot diverge
across replicas, we *must* send the whole log in snapshots, as the first
log index must match what the TruncatedState claims it is.

The Raft log is typically, but not necessarily, small. Log truncations are
driven by a queue and use a complex decision process. That process can be
faulty, and even when it isn't, the queue can be held up. Besides, even when
the Raft log contains only very few entries, those entries may be quite
large (see SSTable ingestion during RESTORE).

All this motivates that we don't want to (be forced to) send the Raft log
as part of snapshots, and in turn we need the TruncatedState to be
unreplicated.

This change migrates the TruncatedState into unreplicated keyspace. It does
not yet allow snapshots to avoid sending the past Raft log, but that is a
relatively straightforward follow-up change.

VersionUnreplicatedRaftTruncatedState, when active, moves the truncated
state into unreplicated keyspace on log truncations.

The migration works as follows:

1. At any log position, the replicas of a Range either use the new
(unreplicated) key or the old one, and exactly one of the two exists.

2. When a log truncation evaluates under the new cluster version, it
initiates the migration by deleting the old key. Under the old cluster
version, it behaves like today, updating the replicated truncated state.

3. The deletion signals new code downstream of Raft and triggers a write to
the new, unreplicated, key (atomic with the deletion of the old key).

4. Future log truncations don't write any replicated data any more, but
(like before) send along the TruncatedState which is written downstream of
Raft atomically with the deletion of the log entries. This actually uses
the same code as 3. What's new is that the truncated state needs to be
verified before replacing a previous one. If replicas disagree about their
truncated state, it's possible for replica X at FirstIndex=100 to apply a
truncated state update that sets FirstIndex to, say, 50 (proposed by a
replica with a "longer" historical log). In that case, the truncated state
update must be ignored (this is straightforward downstream-of-Raft code).

5. When a split trigger evaluates, it seeds the RHS with the legacy key iff
the LHS uses the legacy key, and the unreplicated key otherwise. This makes
sure that the invariant that all replicas agree on the state of the
migration is upheld.

6. When a snapshot is applied, the receiver is told whether the snapshot
contains a legacy key. If not, it writes the truncated state (which is part
of the snapshot metadata) in its unreplicated version. Otherwise it doesn't
have to do anything (the range will migrate later).

The following diagram visualizes the above. Note that it abuses sequence
diagrams to get a nice layout; the vertical lines belonging to NewState and
OldState don't imply any particular ordering of operations.

```
┌────────┐                            ┌────────┐
│OldState│                            │NewState│
└───┬────┘                            └───┬────┘
   │                        Bootstrap under old version
   │ <─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
   │                                     │
   │                                     │     Bootstrap under new version
   │                                     │ <─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
   │                                     │
   │─ ─ ┐
   │    | Log truncation under old version
   │< ─ ┘
   │                                     │
   │─ ─ ┐                                │
   │    | Snapshot                       │
   │< ─ ┘                                │
   │                                     │
   │                                     │─ ─ ┐
   │                                     │    | Snapshot
   │                                     │< ─ ┘
   │                                     │
   │   Log truncation under new version  │
   │ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─>│
   │                                     │
   │                                     │─ ─ ┐
   │                                     │    | Log truncation under new version
   │                                     │< ─ ┘
   │                                     │
   │                                     │─ ─ ┐
   │                                     │    | Log truncation under old version
   │                                     │< ─ ┘ (necessarily running new binary)
```
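The verification in step 4 above can be sketched as below. The type and function names are illustrative stand-ins, not the actual CockroachDB code: downstream of Raft, a proposed truncated state only replaces the current one if it moves the first log index forward, so a stale proposal (from a replica with a "longer" historical log) is ignored:

```go
package main

import "fmt"

// truncatedState is a minimal stand-in for the real
// roachpb.RaftTruncatedState (Index and Term only).
type truncatedState struct {
	Index, Term uint64
}

// maybeApply mirrors step 4: accept the proposed truncated state only
// if it advances FirstIndex; otherwise keep the current one.
func maybeApply(cur, proposed truncatedState) (truncatedState, bool) {
	if proposed.Index <= cur.Index {
		return cur, false // ignore: would move FirstIndex backwards
	}
	return proposed, true
}

func main() {
	cur := truncatedState{Index: 100, Term: 5}
	// A proposal that would set FirstIndex to 50 must be ignored.
	next, applied := maybeApply(cur, truncatedState{Index: 50, Term: 5})
	fmt.Println(applied, next.Index) // false 100
	// A proposal that advances FirstIndex is applied.
	next, applied = maybeApply(cur, truncatedState{Index: 120, Term: 6})
	fmt.Println(applied, next.Index) // true 120
}
```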

Release note: None

34762: distsqlplan: fix error in union planning r=jordanlewis a=jordanlewis

Previously, if 2 inputs to a UNION ALL had identical post processing
except for renders, further post processing on top of that union all
could invalidate the plan and cause errors or crashes.

Fixes #34437.

Release note (bug fix): fix a planning crash during UNION ALL operations
that had projections, filters or renders directly on top of the UNION
ALL in some cases.

34767: sql: reuse already allocated memory for the cache in a row container r=yuzefovich a=yuzefovich

Previously, we would always allocate new memory for every row that
we put in the cache of DiskBackedIndexedRowContainer and simply
discard the memory underlying the row that we remove from the cache.
Now, we're reusing that memory.

Release note: None
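The idea in this commit can be sketched as below. The `row` type and `copyInto` helper are illustrative, not the real `DiskBackedIndexedRowContainer` API: when a cached row is evicted, its backing array is reused for the incoming row whenever it has sufficient capacity, rather than being discarded and reallocated:

```go
package main

import "fmt"

// row is a stand-in for an encoded datum row: a slice whose backing
// array we would like to reuse instead of reallocating.
type row []int64

// copyInto copies src into dst, reusing dst's backing array when it
// is large enough and allocating only when it is not.
func copyInto(dst, src row) row {
	if cap(dst) < len(src) {
		dst = make(row, len(src)) // only allocate when we must
	}
	dst = dst[:len(src)]
	copy(dst, src)
	return dst
}

func main() {
	evicted := row{1, 2, 3, 4}
	incoming := row{7, 8}
	reused := copyInto(evicted, incoming)
	// Same backing array, no new allocation for the shorter row.
	fmt.Println(reused, cap(reused)) // [7 8] 4
}
```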

34779: opt: add stats to tpch xform test r=justinj a=justinj

Since we have stats by default now, this should be the default testing
mechanism. I left in tpch-no-stats since we also have that for tpcc,
just for safety.

Release note: None

Co-authored-by: Tobias Schottdorf <[email protected]>
Co-authored-by: Rebecca Taft <[email protected]>
Co-authored-by: Jordan Lewis <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Justin Jaffray <[email protected]>
craig bot closed this as completed in #34589 Feb 11, 2019