ccl/schemachangerccl: TestBackupMixedVersionElements_base_drop_schema failed #109613

Closed
cockroach-teamcity opened this issue Aug 28, 2023 · 6 comments
Labels
branch-release-23.1 Used to mark GA and release blockers, technical advisories, and bugs for 23.1 C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions)
Milestone

Comments

@cockroach-teamcity
Member

cockroach-teamcity commented Aug 28, 2023

ccl/schemachangerccl.TestBackupMixedVersionElements_base_drop_schema failed with artifacts on release-23.1 @ b58a5373fd9a78c1a90e9aa2b35735ddae24ed76:

I230828 17:44:01.559287 507710 3@pebble/event.go:701  [n3,s3,pebble] 35  [JOB 10] sstable deleted 000011
I230828 17:44:01.559308 507906 3@pebble/event.go:701  [n3,s3,pebble] 36  [JOB 11] sstable deleted 000004
I230828 17:44:01.559330 507908 3@pebble/event.go:701  [n3,s3,pebble] 37  [JOB 12] sstable deleted 000006
I230828 17:44:01.559352 507910 3@pebble/event.go:701  [n3,s3,pebble] 38  [JOB 13] sstable deleted 000007
I230828 17:44:01.570078 505157 13@kv/kvserver/replica_raft.go:381  [T1,n1,s1,r6/1:/Table/{0-3}] 39  proposing SIMPLE(v2) [(n3,s3):2]: after=[(n1,s1):1 (n3,s3):2] next=3
I230828 17:44:01.573818 505157 13@kv/kvserver/replicate_queue.go:1283  [T1,n1,replicate,s1,r51/1:/Table/5{1-2}] 40  adding voter n2,s2: [1*:12]
I230828 17:44:01.574459 505157 13@kv/kvserver/replica_command.go:2352  [T1,n1,replicate,s1,r51/1:/Table/5{1-2}] 41  change replicas (add [(n2,s2):2LEARNER] remove []): existing descriptor r51:/Table/5{1-2} [(n1,s1):1, next=2, gen=0]
I230828 17:44:01.582383 505157 13@kv/kvserver/replica_raft.go:381  [T1,n1,s1,r51/1:/Table/5{1-2}] 42  proposing SIMPLE(l2) [(n2,s2):2LEARNER]: after=[(n1,s1):1 (n2,s2):2LEARNER] next=3
I230828 17:44:01.606071 507923 2@rpc/context.go:2581  [T1,n1,rnode=3,raddr=127.0.0.1:39323,class=system,rpc] 43  connection is now ready
I230828 17:44:01.611189 507970 13@kv/kvserver/store_snapshot.go:1579  [T1,n1,s1,r51/1:/Table/5{1-2}] 44  streamed INITIAL snapshot 4f76aeff at applied index 15 to (n2,s2):2LEARNER with 696 B in 0.00s @ 3.0 MiB/s: kvs=9 rangeKVs=0, rate-limit: 32 MiB/s, queued: 0.00s
I230828 17:44:01.612139 507972 13@kv/kvserver/replica_raftstorage.go:480  [T1,n2,s2,r51/2:{-}] 45  applying INITIAL snapshot 4f76aeff from (n1,s1):1 at applied index 15
I230828 17:44:01.612598 507972 3@pebble/event.go:697  [n2,s2,pebble] 46  [JOB 5] ingesting: sstable created 000004
I230828 17:44:01.612647 507972 3@pebble/event.go:697  [n2,s2,pebble] 47  [JOB 5] ingesting: sstable created 000009
I230828 17:44:01.612666 507972 3@pebble/event.go:697  [n2,s2,pebble] 48  [JOB 5] ingesting: sstable created 000005
I230828 17:44:01.612681 507972 3@pebble/event.go:697  [n2,s2,pebble] 49  [JOB 5] ingesting: sstable created 000006
I230828 17:44:01.612698 507972 3@pebble/event.go:697  [n2,s2,pebble] 50  [JOB 5] ingesting: sstable created 000007
I230828 17:44:01.612711 507972 3@pebble/event.go:697  [n2,s2,pebble] 51  [JOB 5] ingesting: sstable created 000008
I230828 17:44:01.612776 507972 3@pebble/event.go:717  [n2,s2,pebble] 52  [JOB 6] WAL created 000010
I230828 17:44:01.612977 508002 3@pebble/event.go:677  [n2,s2,pebble] 53  [JOB 7] flushing 1 memtable to L0
I230828 17:44:01.613046 508002 3@pebble/event.go:697  [n2,s2,pebble] 54  [JOB 7] flushing: sstable created 000011
I230828 17:44:01.613235 508002 3@pebble/event.go:681  [n2,s2,pebble] 55  [JOB 7] flushed 1 memtable to L0 [000011] (1.4 K), in 0.0s (0.0s total), output rate 6.9 M/s
I230828 17:44:01.613331 508005 3@pebble/event.go:665  [n2,s2,pebble] 56  [JOB 8] compacting(move) L0 [000011] (1.4 K) + L6 [] (0 B)
I230828 17:44:01.613479 508005 3@pebble/event.go:669  [n2,s2,pebble] 57  [JOB 8] compacted(move) L0 [000011] (1.4 K) + L6 [] (0 B) -> L6 [000011] (1.4 K), in 0.0s (0.0s total), output rate 16 M/s
I230828 17:44:01.613678 507972 3@pebble/event.go:705  [n2,s2,pebble] 58  [JOB 5] ingested L6:000004 (1.4 K), L0:000009 (1.1 K), L0:000005 (1.5 K), L6:000006 (1.2 K), L6:000007 (1.1 K), L6:000008 (1.1 K)
I230828 17:44:01.614018 508007 3@pebble/event.go:665  [n2,s2,pebble] 59  [JOB 9] compacting(default) L0 [000009 000005] (2.6 K) + L6 [000011] (1.4 K)
I230828 17:44:01.614146 508007 3@pebble/event.go:697  [n2,s2,pebble] 60  [JOB 9] compacting: sstable created 000012
I230828 17:44:01.614729 507972 kv/kvserver/replica_raftstorage.go:491  [T1,n2,s2,r51/2:/Table/5{1-2}] 61  applied INITIAL snapshot 4f76aeff from (n1,s1):1 at applied index 15 (total=3ms data=753 B ingestion=6@1ms)
I230828 17:44:01.614768 508007 3@pebble/event.go:669  [n2,s2,pebble] 62  [JOB 9] compacted(default) L0 [000009 000005] (2.6 K) + L6 [000011] (1.4 K) -> L6 [000012] (1.6 K), in 0.0s (0.0s total), output rate 2.3 M/s
I230828 17:44:01.614906 508011 3@pebble/event.go:665  [n2,s2,pebble] 63  [JOB 11] compacting(elision-only) L6 [000004] (1.4 K) + L6 [] (0 B)
I230828 17:44:01.615056 508011 3@pebble/event.go:697  [n2,s2,pebble] 64  [JOB 11] compacting: sstable created 000013
I230828 17:44:01.615267 508011 3@pebble/event.go:669  [n2,s2,pebble] 65  [JOB 11] compacted(elision-only) L6 [000004] (1.4 K) + L6 [] (0 B) -> L6 [000013] (1.1 K), in 0.0s (0.0s total), output rate 4.0 M/s
I230828 17:44:01.615444 508020 3@pebble/event.go:665  [n2,s2,pebble] 66  [JOB 12] compacting(elision-only) L6 [000006] (1.2 K) + L6 [] (0 B)
I230828 17:44:01.615542 508019 3@pebble/event.go:701  [n2,s2,pebble] 67  [JOB 11] sstable deleted 000004
I230828 17:44:01.615673 508010 3@pebble/event.go:701  [n2,s2,pebble] 68  [JOB 9] sstable deleted 000005
I230828 17:44:01.615687 508020 3@pebble/event.go:669  [n2,s2,pebble] 69  [JOB 12] compacted(elision-only) L6 [000006] (1.2 K) + L6 [] (0 B) -> L6 [] (0 B), in 0.0s (0.0s total), output rate 0 B/s
I230828 17:44:01.615901 508022 3@pebble/event.go:665  [n2,s2,pebble] 70  [JOB 13] compacting(elision-only) L6 [000007] (1.1 K) + L6 [] (0 B)
I230828 17:44:01.616141 508022 3@pebble/event.go:669  [n2,s2,pebble] 71  [JOB 13] compacted(elision-only) L6 [000007] (1.1 K) + L6 [] (0 B) -> L6 [] (0 B), in 0.0s (0.0s total), output rate 0 B/s
I230828 17:44:01.616296 508024 3@pebble/event.go:665  [n2,s2,pebble] 72  [JOB 14] compacting(elision-only) L6 [000008] (1.1 K) + L6 [] (0 B)
I230828 17:44:01.616466 508021 3@pebble/event.go:701  [n2,s2,pebble] 73  [JOB 12] sstable deleted 000006
I230828 17:44:01.617137 508010 3@pebble/event.go:701  [n2,s2,pebble] 74  [JOB 9] sstable deleted 000009
I230828 17:44:01.617259 508024 3@pebble/event.go:669  [n2,s2,pebble] 75  [JOB 14] compacted(elision-only) L6 [000008] (1.1 K) + L6 [] (0 B) -> L6 [] (0 B), in 0.0s (0.0s total), output rate 0 B/s
I230828 17:44:01.617676 507866 3@pebble/event.go:701  [n2,s2,pebble] 77  [JOB 14] sstable deleted 000008
I230828 17:44:01.617453 505157 13@kv/kvserver/replica_command.go:2352  [T1,n1,replicate,s1,r51/1:/Table/5{1-2}] 76  change replicas (add [(n2,s2):2] remove []): existing descriptor r51:/Table/5{1-2} [(n1,s1):1, (n2,s2):2LEARNER, next=3, gen=1]
I230828 17:44:01.618151 508023 3@pebble/event.go:701  [n2,s2,pebble] 78  [JOB 13] sstable deleted 000007
I230828 17:44:01.618239 508010 3@pebble/event.go:701  [n2,s2,pebble] 79  [JOB 9] sstable deleted 000011
--- FAIL: TestBackupMixedVersionElements_base_drop_schema (9.28s)
=== RUN   TestBackupMixedVersionElements_base_drop_schema/backup/restore_stage_1_of_1
    testcluster.go:429: pq: n3 required, but unavailable
    --- FAIL: TestBackupMixedVersionElements_base_drop_schema/backup/restore_stage_1_of_1 (0.80s)
I230828 17:44:01.621901 505157 13@kv/kvserver/replica_raft.go:381  [T1,n1,s1,r51/1:/Table/5{1-2}] 80  proposing SIMPLE(v2) [(n2,s2):2]: after=[(n1,s1):1 (n2,s2):2] next=3
Help

See also: How To Investigate a Go Test Failure (internal)

/cc @cockroachdb/sql-foundations

This test on roachdash | Improve this report!

Jira issue: CRDB-31019

@cockroach-teamcity cockroach-teamcity added branch-release-23.1 Used to mark GA and release blockers, technical advisories, and bugs for 23.1 C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) labels Aug 28, 2023
@cockroach-teamcity cockroach-teamcity added this to the 23.1 milestone Aug 28, 2023
@rafiss
Collaborator

rafiss commented Aug 28, 2023

CRDB logs:

W230828 17:44:01.344507 504040 2@gossip/gossip.go:1406 ⋮ [T1,n3] 1974  no incoming or outgoing connections
I230828 17:44:01.344542 504040 gossip/gossip.go:368 ⋮ [T1,n3] 1975  NodeDescriptor set to ‹node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39323" > attrs:<> locality:<tiers:<key:"region" value:"us-east3" > > ServerVersion:<major_val:23 minor_val:1 patch:0 internal:0 > build_tag:"v23.1.9-dev" started_at:1693244641344538298 cluster_name:"" sql_address:<network_field:"tcp" address_field:"127.0.0.1:39681" > http_address:<network_field:"tcp" address_field:"127.0.0.1:35831" >›
I230828 17:44:01.344896 504040 kv/kvserver/rebalance_objective.go:274 ⋮ [T1,n3,rebalance-objective] 1976  version doesn't support cpu objective, reverting to qps balance objective
I230828 17:44:01.344947 506750 1@server/server.go:1669 ⋮ [T1,n3] 1977  connecting to gossip network to verify cluster ID ‹"229c7157-5c3e-4bc3-8a29-a2e882257f28"›
I230828 17:44:01.345603 504040 server/node.go:533 ⋮ [T1,n3] 1978  initialized store s3
I230828 17:44:01.345633 504040 kv/kvserver/stores.go:264 ⋮ [T1,n3] 1979  read 0 node addresses from persistent storage
I230828 17:44:01.345671 504040 server/node.go:627 ⋮ [T1,n3] 1980  started with engine type ‹2›
I230828 17:44:01.345678 504040 server/node.go:629 ⋮ [T1,n3] 1981  started with attributes []
I230828 17:44:01.345767 504040 1@server/server.go:1834 ⋮ [T1,n3] 1982  starting https server at ‹127.0.0.1:35831› (use: ‹127.0.0.1:35831›)
I230828 17:44:01.345779 504040 1@server/server.go:1839 ⋮ [T1,n3] 1983  starting postgres server at ‹127.0.0.1:39681› (use: ‹127.0.0.1:39681›)
I230828 17:44:01.345786 504040 1@server/server.go:1842 ⋮ [T1,n3] 1984  starting grpc server at ‹127.0.0.1:39323›
I230828 17:44:01.345794 504040 1@server/server.go:1843 ⋮ [T1,n3] 1985  advertising CockroachDB node at ‹127.0.0.1:39323›
I230828 17:44:01.391984 506828 gossip/client.go:124 ⋮ [T1,n3] 1986  started gossip client to n0 (‹127.0.0.1:35909›)
I230828 17:44:01.392378 506750 1@server/server.go:1672 ⋮ [T1,n3] 1987  node connected via gossip
I230828 17:44:01.392481 504670 kv/kvserver/stores.go:283 ⋮ [T1,n3] 1988  wrote 1 node addresses to persistent storage
I230828 17:44:01.392537 504670 kv/kvserver/stores.go:283 ⋮ [T1,n3] 1989  wrote 2 node addresses to persistent storage
I230828 17:44:01.392778 504168 kv/kvserver/stores.go:283 ⋮ [T1,n1] 1990  wrote 2 node addresses to persistent storage
I230828 17:44:01.393046 504419 kv/kvserver/stores.go:283 ⋮ [T1,n2] 1991  wrote 2 node addresses to persistent storage
I230828 17:44:01.434063 507174 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335 ⋮ [T1,n3] 1992  spanconfig-subscriber: established range feed cache
I230828 17:44:01.434258 504040 1@util/log/event_log.go:32 ⋮ [T1,n3] 1993 ={"Timestamp":1693244641434256530,"EventType":"node_join","NodeID":3,"StartedAt":1693244641344538298,"LastUp":1693244641344538298}
I230828 17:44:01.434335 507177 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335 ⋮ [T1,n3] 1994  settings-watcher: established range feed cache
I230828 17:44:01.435812 504040 sql/sqlliveness/slinstance/slinstance.go:434 ⋮ [T1,n3] 1995  starting SQL liveness instance
I230828 17:44:01.437406 507184 sql/sqlliveness/slstorage/slstorage.go:540 ⋮ [T1,n3] 1996  inserted sqlliveness session 010180b0e95994e0124b4cbcd3c6b4c1414441
I230828 17:44:01.437429 507184 sql/sqlliveness/slinstance/slinstance.go:258 ⋮ [T1,n3] 1997  created new SQL liveness session 010180b0e95994e0124b4cbcd3c6b4c1414441
I230828 17:44:01.437446 504040 sql/sqlinstance/instancestorage/instancestorage.go:342 ⋮ [T1,n3] 1998  assigning instance id to rpc addr ‹127.0.0.1:39323› and sql addr ‹127.0.0.1:39681›
I230828 17:44:01.438484 504040 server/server_sql.go:1521 ⋮ [T1,n3] 1999  bound sqlinstance: Instance{RegionPrefix: gA==, InstanceID: 3, SQLAddr: ‹127.0.0.1:39681›, RPCAddr: ‹127.0.0.1:39323›, SessionID: 010180b0e95994e0124b4cbcd3c6b4c1414441, Locality: ‹region=us-east3›, BinaryVersion: 23.1}
I230828 17:44:01.438593 507258 sql/sqlstats/persistedsqlstats/provider.go:170 ⋮ [T1,n3] 2000  starting sql-stats-worker with initial delay: 8m55.857467654s
I230828 17:44:01.438621 507257 sql/temporary_schema.go:486 ⋮ [T1,n3] 2001  skipping temporary object cleanup run as it is not the leaseholder
I230828 17:44:01.438635 507257 sql/temporary_schema.go:487 ⋮ [T1,n3] 2002  completed temporary object cleanup job
I230828 17:44:01.438642 507257 sql/temporary_schema.go:639 ⋮ [T1,n3] 2003  temporary object cleaner next scheduled to run at 2023-08-28 18:14:01.438598665 +0000 UTC m=+1993.813672988
I230828 17:44:01.439008 504040 upgrade/upgrademanager/manager.go:170 ⋮ [T1,n3] 2004  running permanent upgrades up to version: 22.2
I230828 17:44:01.439171 507249 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335 ⋮ [T1,n3] 2005  system-config-cache: established range feed cache
I230828 17:44:01.445182 504040 upgrade/upgrademanager/manager.go:238 ⋮ [T1,n3] 2006  the last permanent upgrade (v22.1-42) does not appear to have completed; attempting to run all upgrades
I230828 17:44:01.445573 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2007  running permanent upgrade for version 0.0-2
I230828 17:44:01.448661 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2008  running permanent upgrade for version 0.0-4
I230828 17:44:01.450570 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2009  running permanent upgrade for version 0.0-6
I230828 17:44:01.458099 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2010  running permanent upgrade for version 0.0-8
I230828 17:44:01.460347 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2011  running permanent upgrade for version 0.0-10
I230828 17:44:01.462167 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2012  running permanent upgrade for version 0.0-12
I230828 17:44:01.463943 504040 upgrade/upgrademanager/manager.go:283 ⋮ [T1,n3] 2013  running permanent upgrade for version 22.1-42
I230828 17:44:01.465375 504040 server/server_sql.go:1629 ⋮ [T1,n3] 2014  done ensuring all necessary startup migrations have run
I230828 17:44:01.465921 507532 server/auto_upgrade.go:43 ⋮ [T1,n3] 2015  auto upgrade disabled by testing
I230828 17:44:01.465988 507528 jobs/job_scheduler.go:407 ⋮ [T1,n3] 2016  waiting 2m0s before scheduled jobs daemon start
I230828 17:44:01.468457 507620 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335 ⋮ [T1,n3] 2017  tenant-settings-watcher: established range feed cache
I230828 17:44:01.470814 507578 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335 ⋮ [T1,n3] 2018  tenant-capability-watcher: established range feed cache
I230828 17:44:01.471419 507578 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149 ⋮ [T1,n3] 2019  received results of a full table scan for tenant capabilities
I230828 17:44:01.480353 507530 sql/syntheticprivilegecache/cache.go:199 ⋮ [T1,n3] 2020  warmed privileges for virtual tables in 14.330389ms
I230828 17:44:01.491034 504040 1@server/server_sql.go:1747 ⋮ [T1,n3] 2021  serving sql connections
I230828 17:44:01.499378 507687 upgrade/upgrademanager/manager.go:397 ⋮ [T1,n1,client=127.0.0.1:45604,hostssl,user=root,migration-mgr] 2022  migrating cluster from 22.2 to 22.2-48 (stepping through [22.2-2 22.2-4 22.2-6 22.2-8 22.2-10 22.2-12 22.2-14 22.2-16 22.2-18 22.2-20 22.2-22 22.2-24 22.2-26 22.2-28 22.2-30 22.2-32 22.2-34 22.2-36 22.2-38 22.2-40 22.2-42 22.2-44 22.2-46 22.2-48])
I230828 17:44:01.499434 507687 upgrade/upgrademanager/manager.go:657 ⋮ [T1,n1,client=127.0.0.1:45604,hostssl,user=root,migration-mgr] 2023  executing operation validate-cluster-version=22.2-48
W230828 17:44:01.499589 507687 upgrade/upgrademanager/manager.go:370 ⋮ [T1,n1,client=127.0.0.1:45604,hostssl,user=root,migration-mgr] 2024  error encountered during version upgrade: n3 required, but unavailable

schemachangerccltest.log

The problem here doesn't seem related to the schema changes.

I have two questions:

@aliher1911 perhaps you might be a good person to answer these. I will assign this to KV so you can take a look, but please reassign it back to us if KV is the wrong place.

@rafiss rafiss added T-kv KV Team and removed T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) labels Aug 29, 2023
@aliher1911
Contributor

First point:
I don't think this error is supposed to be retried, at least not as part of the work that was done. The PR you referenced only targets replica unavailability caused by replica circuit breakers (errors that can temporarily persist after a replica is restored).

Maybe it should be extended to cover node availability as well.
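
To illustrate what that extension could look like, here is a minimal sketch of a retry predicate. The function name isRetryableStartupError and the string matching are assumptions for illustration only, not the actual testcluster retry logic:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // isRetryableStartupError is a hypothetical predicate: it treats the
    // replica-circuit-breaker errors that are already retried as retryable,
    // and additionally (the suggested extension) errors reporting that a
    // node is required but unavailable.
    func isRetryableStartupError(err error) bool {
        if err == nil {
            return false
        }
        msg := err.Error()
        switch {
        case strings.Contains(msg, "replica unavailable"): // circuit-breaker case
            return true
        case strings.Contains(msg, "required, but unavailable"): // node-availability case
            return true
        default:
            return false
        }
    }

    func main() {
        err := errors.New("pq: n3 required, but unavailable")
        fmt.Println(isRetryableStartupError(err)) // true under the extended predicate
    }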

@aliher1911
Contributor

Second point: this looks like an issue on the KV side. We need to investigate further to find out what's causing it.

@arulajmani
Collaborator

Why was n3 unavailable? It looks like it started up correctly.

Looking at the code, it's because n3's liveness record is expired. Looking further back, it seems n3 started quiescing:

I230828 17:44:00.644967 474035 testutils/testcluster/testcluster.go:149 ⋮ [-] 1723  TestCluster quiescing nodes
W230828 17:44:00.650324 504101 2@rpc/nodedialer/nodedialer.go:196 ⋮ [T1,n1,intExec=‹clear-job-claim›] 1752  unable to connect to n3: failed to connect to n3 at ‹127.0.0.1:32955›: grpc: ‹refusing to dial; node is quiescing› [code 7/PermissionDenied]

So it makes sense why n3 stopped heartbeating its liveness.
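
To make that reasoning concrete, here is a minimal, self-contained sketch of the liveness-expiry idea. The type and expiration rule are simplified stand-ins, not the actual CockroachDB liveness code:

    package main

    import (
        "fmt"
        "time"
    )

    // livenessRecord is a simplified stand-in for a node liveness record.
    type livenessRecord struct {
        lastHeartbeat time.Time
        livenessTTL   time.Duration
    }

    // isLive: a node counts as available only while its liveness record has
    // not expired, i.e. a heartbeat was written within the TTL.
    func (l livenessRecord) isLive(now time.Time) bool {
        return now.Before(l.lastHeartbeat.Add(l.livenessTTL))
    }

    func main() {
        now := time.Now()
        // n3 quiesced and stopped heartbeating, so its record expired and the
        // upgrade's availability check reports "n3 required, but unavailable".
        n3 := livenessRecord{lastHeartbeat: now.Add(-10 * time.Second), livenessTTL: 9 * time.Second}
        fmt.Println("n3 live:", n3.isLive(now)) // false
    }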

The error encountered during version upgrade is a red herring here. @rafiss, I'm going to put this back on your plate.

@arulajmani arulajmani removed the T-kv KV Team label Aug 31, 2023
@rafiss
Collaborator

rafiss commented Aug 31, 2023

Hm ok, thanks for checking.

I see that the failing line is here:

// Now that we have started all the servers on the bootstrap version, let us
// run the migrations up to the overridden BinaryVersion.
if v := tc.Servers[0].BinaryVersionOverride(); v != (roachpb.Version{}) {
    if _, err := tc.Conns[0].Exec(`SET CLUSTER SETTING version = $1`, v.String()); err != nil {
        t.Fatal(err)
    }
}

That is inside the Start function for TestCluster. But the "TestCluster quiescing nodes" message should only appear if the TestCluster is stopped:

// stopServers stops the stoppers for each individual server in the cluster.
// This method ensures that servers that were previously stopped explicitly are
// not double-stopped.
func (tc *TestCluster) stopServers(ctx context.Context) {
    tc.mu.Lock()
    defer tc.mu.Unlock()
    // Quiesce the servers in parallel to avoid deadlocks. If we stop servers
    // serially when we lose quorum (2 out of 3 servers have stopped) the last
    // server may never finish due to waiting for a Raft command that can't
    // commit due to the lack of quorum.
    log.Infof(ctx, "TestCluster quiescing nodes")

Even more confusingly, that closer function is not added until after the point when t.Fatal was called:

tc.stopper.AddCloser(stop.CloserFn(func() { tc.stopServers(context.TODO()) }))
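
To spell out why that ordering is confusing, here is a minimal toy sketch of closer registration and invocation. The stopper type here is illustrative only, not the real stop.Stopper implementation; the point is that a closer registered after the failure point cannot be what emitted the quiescing message:

    package main

    import "fmt"

    // stopper is a toy with the same AddCloser/Stop shape as the real stopper.
    type stopper struct {
        closers []func()
    }

    func (s *stopper) AddCloser(f func()) { s.closers = append(s.closers, f) }

    func (s *stopper) Stop() {
        // Closers only run here, when the stopper is stopped.
        for i := len(s.closers) - 1; i >= 0; i-- {
            s.closers[i]()
        }
    }

    func main() {
        s := &stopper{}
        fmt.Println("start: no closers registered yet")
        // If the test fataled at this point, this closer could not have
        // triggered stopServers, since it has not been registered yet --
        // which is the puzzling part of the log ordering described above.
        s.AddCloser(func() { fmt.Println("TestCluster quiescing nodes (via closer)") })
        s.Stop()
    }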

@rafiss rafiss added the T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) label Sep 1, 2023
@rafiss
Collaborator

rafiss commented Sep 8, 2023

I haven't been able to reproduce this, and it is no longer failing. I don't have any other way to investigate why the node was quiescing at this point in the test, so I'll close this.
