systemschema: Added tests that ensure consistent initial system tables #77970
Conversation
Force-pushed 750e780 to 33473e1
Force-pushed 4a47fc3 to 26ff0bc
Force-pushed fd69ab9 to 41110ba
78585: kv: ignore unevaluated Get requests after optimistic evaluation r=nvanbenschoten a=nvanbenschoten

Related to but distinct from #78584. This commit adds logic to avoid checking for optimistic evaluation conflicts for Get requests which were not evaluated due to a batch-level byte or key limit. This parallels existing logic that does the same for Scan and ReverseScan requests. This was likely missed in the past because we only recently began using Get requests in more places. The effect of this change is that limited batches of point reads will conflict less with writes, reducing read/write contention. Batches of this form could be issued by a statement that looks like: `SELECT * FROM t WHERE id IN (1, 3, 5) LIMIT 2`.

78618: ccl/sqlproxyccl: add connection tracker component r=JeffSwenson a=jaylim-crl

#### ccl/sqlproxyccl: add connection tracker component

This commit adds a connection tracker component to the balancer, which will be used to track all instances of forwarders for each tenant. The forwarders can then be used for connection migration.

Release note: None

#### ccl/sqlproxyccl: make forwarder implement balancer.ConnectionHandle

This commit makes the forwarder struct implement the balancer.ConnectionHandle interface. This then allows the forwarder to be stored within the connection tracker component.

Release note: None

#### ccl/sqlproxyccl: register forwarders with the connection tracker

This commit registers all forwarders with the connection tracker. At the same time, we remove the afterForward testing hook since it is no longer needed.

Release note: None

78633: roachtest: exit 11 after cluster provisioning failures r=srosenberg a=tbg

`roachtest` exits nonzero if it was unable to run any tests due to cloud flakiness (resource exhaustion etc.). Previously the "nonzero" was 1, which is the catch-all exit code. Now it uses 11, which can signal this specific failure mode and could be special-cased in our CI scripts if we so desire in the future.

Release note: None

78679: migrations: Fix migration discrepancy on a system table. r=Xiang-Gu a=Xiang-Gu

Previously, the migration code for the system table `statement_diagnostics_requests` resulted in discrepancies when compared to the table definition after bootstrapping. The discrepancies are summarized in #77980. This PR fixes this to ensure the table has exactly the same definition after a cluster upgrade as it does after bootstrapping.

Release note: None
Fixes: #77980
Sister Issue: #78302
Related PR that discovered this discrepancy: #77970

Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Jay <[email protected]>
Co-authored-by: Tobias Grieger <[email protected]>
Co-authored-by: Xiang Gu <[email protected]>
Force-pushed 0b35581 to 89bccb3
Do we need to check in this testdata? It seems like it's going to be brittle when we add new tables. Can we just boot the different version, pull what we expect and then do the upgrade, see what we have, and then compare them?
Reviewed 3 of 9 files at r1, 1 of 4 files at r2, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner)
Do we need to check in this testdata? It seems like it's going to be brittle when we add new tables. Can we just boot the different version, pull what we expect and then do the upgrade, see what we have, and then compare them?
Can you elaborate further? We currently have two testdata files in this PR: one for the bootstrapping test and the other for the upgrade test. Are you suggesting that, for the upgrade test, we remove its testdata file, boot a cluster on an old version, issue a `SHOW CREATE` query to retrieve the current system table definitions `d1`, upgrade the cluster, and then issue the query again to retrieve the table definitions `d2` after migration? But then what would we compare against? We need to somehow compare `d2` against the expected output, which is currently the testdata file.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner)
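The comparison step under discussion boils down to diffing two sets of `SHOW CREATE` results. A minimal Go sketch, where `diffSchemas` and the map-of-statements representation are hypothetical illustrations rather than the actual test code:

```go
package main

import (
	"fmt"
	"sort"
)

// diffSchemas compares two maps of table name -> CREATE TABLE statement and
// returns the names of tables whose definitions differ or that exist on only
// one side. This mirrors the comparison of upgraded definitions (d2) against
// an expected set of definitions.
func diffSchemas(expected, actual map[string]string) []string {
	var diffs []string
	for name, want := range expected {
		if got, ok := actual[name]; !ok || got != want {
			diffs = append(diffs, name)
		}
	}
	for name := range actual {
		if _, ok := expected[name]; !ok {
			diffs = append(diffs, name)
		}
	}
	sort.Strings(diffs)
	return diffs
}

func main() {
	expected := map[string]string{
		"system.users": "CREATE TABLE users (...)",
	}
	actual := map[string]string{
		"system.users": "CREATE TABLE users (...)",
		"system.extra": "CREATE TABLE extra (...)",
	}
	fmt.Println(diffSchemas(expected, actual)) // prints [system.extra]
}
```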
The bootstrap one is good.
The one I'm talking about is `validate_system_schema_after_version_upgrade_expected_output`. For that one, could we start by booting the cluster at the new version, run the query, then wipe that cluster, boot it at the predecessor version, upgrade, run the query again, and compare?
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner)
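The self-comparing flow suggested above can be sketched roughly as follows. The `Cluster` interface and `fakeCluster` below are invented stand-ins for the real roachtest cluster API, included only to make the control flow visible:

```go
package main

import "fmt"

// Cluster is a hypothetical stand-in for the roachtest cluster API; the real
// test drives an actual CockroachDB cluster.
type Cluster interface {
	Boot(version string)
	Upgrade(version string)
	Wipe()
	ShowCreateAllTables() string
}

// schemaAfterUpgradeMatchesBootstrap sketches the reviewer's suggestion:
// capture the schema from a fresh cluster at the current version, then wipe,
// boot the predecessor version, upgrade, and compare. No checked-in testdata
// file is needed because the fresh cluster supplies the expected output.
func schemaAfterUpgradeMatchesBootstrap(c Cluster, pred, cur string) bool {
	c.Boot(cur)
	fresh := c.ShowCreateAllTables() // expected schema, from bootstrap
	c.Wipe()
	c.Boot(pred)
	c.Upgrade(cur)
	upgraded := c.ShowCreateAllTables() // schema after the upgrade path
	return fresh == upgraded
}

// fakeCluster simulates a cluster whose upgrade path yields the same schema
// as bootstrapping directly at the target version.
type fakeCluster struct{ version string }

func (f *fakeCluster) Boot(v string)    { f.version = v }
func (f *fakeCluster) Upgrade(v string) { f.version = v }
func (f *fakeCluster) Wipe()            { f.version = "" }
func (f *fakeCluster) ShowCreateAllTables() string {
	return "schema@" + f.version
}

func main() {
	fmt.Println(schemaAfterUpgradeMatchesBootstrap(&fakeCluster{}, "v21.2", "v22.1")) // prints true
}
```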
Force-pushed ce07b45 to 633a224
Previously, there were no tests checking the validity/sanity of the system tables after bootstrapping/upgrading (recall that those table definitions are hard-coded). This PR adds tests to help us gain more confidence that nothing about the system tables is broken after bootstrapping/upgrading. It simply runs `USE system; SHOW CREATE ALL TABLES;` in those two situations (bootstrapping and upgrading) and compares the output with an expected, predefined output. Namely, the following two packages are modified:
1. pkg/sql/systemschema_test/: a newly added package containing a data-driven test that spins up a test cluster and validates that the system tables are initialized as expected.
2. pkg/cmd/roachtest/tests/: added a roachtest that spins up a cluster at an older binary release's version, upgrades it, and ensures the system tables are consistent.
Fixes: cockroachdb#72462
Release note: None
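Comparing raw `SHOW CREATE ALL TABLES` output against a predefined expected output is sensitive to whitespace and blank lines. A minimal sketch of that comparison, assuming a hypothetical `normalize` helper (not the actual test code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize collapses insignificant whitespace so that SHOW CREATE ALL TABLES
// output can be compared against an expected file without spurious formatting
// diffs: it trims each line and drops empty lines.
func normalize(s string) string {
	var out []string
	for _, line := range strings.Split(s, "\n") {
		line = strings.TrimSpace(line)
		if line != "" {
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	got := "CREATE TABLE system.users (\n   username STRING\n)\n"
	want := "CREATE TABLE system.users (\nusername STRING\n)"
	fmt.Println(normalize(got) == normalize(want)) // prints true
}
```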
Force-pushed 633a224 to e374a48
Yeah, that's a good idea. I've modified the roachtest part to do this.
RFAL
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner)
Reviewed 2 of 9 files at r1, 1 of 4 files at r2, 2 of 2 files at r3, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @Xiang-Gu)
bors r+
Build succeeded.