schemachanger: Support dropping column with referenced constraints #97579
Conversation
Force-pushed from 2744dee to 6c7f70d.
This is ready for a look!
Is there a way we can gate the validation on the activation of a cluster version, but also check it in the precondition checking for the upgrade? Ideally, if there were a cluster where this validation fails, we'd fail to finalize the upgrade to 23.1, but we wouldn't immediately break existing workloads.
@ajwerner That'd be hard without introducing a new cluster version: we can introduce a new cluster version, say, v22.2.50. The newly added validation will then be gated on v22.2.49, and the precondition check will be associated with the upgrade step to v22.2.50. This way, existing workloads (on cluster version v22.2 or prior) will not see the newly added validation logic, and once the cluster is on v22.2.49 and upgrading to v22.2.50, the precondition check will be able to exercise it. The downside is that we can no longer run the precondition check at the very beginning of the upgrade to v23.1 (ideally, we want to run it when upgrading from v22.2.0 to v22.2.1). If the precondition check fails, the user's cluster version will get stuck at v22.2.49 until the descriptor corruption is manually repaired. What do you think?
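To make the scheme concrete, here is a minimal, self-contained sketch of the two-version gating described above. Every identifier in it (`clusterVersion`, `validateDescriptors`, `finalizeUpgrade`, the version values) is hypothetical and not CockroachDB's actual API; it only models the mechanics under discussion.

```go
package main

import "fmt"

// All identifiers below are hypothetical and do not reflect CockroachDB's
// actual API; this only models the two-version gating scheme from the
// comment above.

// clusterVersion models a cluster version with an internal component.
type clusterVersion struct{ major, minor, internal int }

// atLeast reports whether v is at or beyond o.
func (v clusterVersion) atLeast(o clusterVersion) bool {
	if v.major != o.major {
		return v.major > o.major
	}
	if v.minor != o.minor {
		return v.minor > o.minor
	}
	return v.internal >= o.internal
}

var (
	validationGate   = clusterVersion{22, 2, 49} // new validation activates here
	preconditionStep = clusterVersion{22, 2, 50} // upgrade step that runs the check
)

// validateDescriptors is the strengthened validation, gated so clusters
// below the gate version never see it and existing workloads keep running.
func validateDescriptors(active clusterVersion, corrupt bool) error {
	if !active.atLeast(validationGate) {
		return nil
	}
	if corrupt {
		return fmt.Errorf("FK exists but referenced columns have no unique constraint")
	}
	return nil
}

// finalizeUpgrade runs the validation as a precondition of the next step;
// on failure the cluster stays at the gate version until repaired.
func finalizeUpgrade(active clusterVersion, corrupt bool) (clusterVersion, error) {
	if err := validateDescriptors(active, corrupt); err != nil {
		return active, fmt.Errorf("cannot finalize %+v: %v", preconditionStep, err)
	}
	return preconditionStep, nil
}

func main() {
	v, err := finalizeUpgrade(validationGate, true)
	fmt.Println(v, err) // stuck at {22 2 49} until the corruption is repaired
}
```

The design point is that the gated validation and the upgrade precondition are the same check, so a cluster that would fail it is held back at the gate version rather than broken mid-upgrade.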
Previously, when a unique without index (UWI) constraint was used to serve an inbound FK and we dropped the UWI constraint, the FK was not dropped. This caused a corrupt state where an FK constraint existed but the referenced table did not ensure uniqueness on the referenced columns. This commit fixes the issue in both the legacy and the declarative schema changer.
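To illustrate the invariant this commit restores, here is a toy, self-contained model. All names (`fk`, `table`, `dropUWI`, the CASCADE-style flag) are hypothetical, and whether the real fix rejects the drop or cascades it is the PR's behavior, not something this sketch asserts; the sketch only shows that a dependent inbound FK can no longer be silently orphaned.

```go
package main

import "fmt"

// Toy model only; names are hypothetical and the real fix lives in the
// schema changers. The point: dropping a UWI constraint must account for
// inbound FKs that depend on it instead of silently orphaning them.

// fk is an inbound foreign key and the columns it references.
type fk struct {
	name           string
	referencedCols []string
}

// table tracks the columns covered by its UWI constraint plus inbound FKs.
type table struct {
	uwiCols    []string
	inboundFKs []fk
}

func sameCols(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// dropUWI drops the UWI constraint. Without cascade it is rejected when a
// dependent FK exists; with cascade the FK is dropped too. (Pre-fix, the
// drop succeeded and left the FK behind with no uniqueness guarantee.)
func (t *table) dropUWI(cascade bool) error {
	var kept []fk
	for _, f := range t.inboundFKs {
		if sameCols(f.referencedCols, t.uwiCols) {
			if !cascade {
				return fmt.Errorf("constraint is referenced by FK %q; use CASCADE", f.name)
			}
			continue // dropped together with the UWI constraint
		}
		kept = append(kept, f)
	}
	t.inboundFKs = kept
	t.uwiCols = nil
	return nil
}

func main() {
	t := &table{uwiCols: []string{"i"}, inboundFKs: []fk{{"child_fk", []string{"i"}}}}
	fmt.Println(t.dropUWI(false)) // rejected: a dependent FK exists
	fmt.Println(t.dropUWI(true))  // drops the dependent FK as well
}
```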
Previously, we fell back to the legacy schema changer when dropping a column that is referenced in a constraint. This commit enables the declarative schema changer to handle such drops.
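And a similar hypothetical sketch for this commit (`constraint`, `table`, `dropColumn` are illustrative names, not the schema changer's actual types): dropping a column takes the constraints that mention it along in the same change instead of bailing out to the legacy schema changer.

```go
package main

import "fmt"

// Toy sketch with hypothetical names: dropping a column also drops every
// constraint that references it, within a single schema change.

// constraint names a constraint and the columns it references.
type constraint struct {
	name string
	cols []string
}

// table is a bag of columns and constraints over them.
type table struct {
	cols        []string
	constraints []constraint
}

func contains(xs []string, x string) bool {
	for _, v := range xs {
		if v == x {
			return true
		}
	}
	return false
}

// dropColumn removes col and any constraint that references it.
func (t *table) dropColumn(col string) {
	var keptCols []string
	for _, c := range t.cols {
		if c != col {
			keptCols = append(keptCols, c)
		}
	}
	t.cols = keptCols
	var keptCons []constraint
	for _, c := range t.constraints {
		if !contains(c.cols, col) {
			keptCons = append(keptCons, c)
		}
	}
	t.constraints = keptCons
}

func main() {
	t := &table{
		cols:        []string{"i", "j"},
		constraints: []constraint{{name: "check_i_gt_j", cols: []string{"i", "j"}}},
	}
	t.dropColumn("j")
	fmt.Println(t.cols, t.constraints) // [i] []: the CHECK went with the column
}
```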
Force-pushed from 6c7f70d to f84bd02.
I've removed the commit that strengthens validation; let's move that discussion over to #97738 so I can merge this one first.
CI failed on a flaky test. Merging... bors r+
Build failed (retrying...)
bors r+
Already running a review
Build succeeded
This PR enables dropping columns with referenced constraints in the declarative schema changer. As a prerequisite, we also added support for dropping a UWI constraint when there is a dependent FK constraint, in both the legacy and the declarative schema changer (commit 2).
Commit 2 Fixes: #96787, Fixes: #97538
Commit 3 Fixes: #96727
Epic: None