release-23.1: sql: PartitionSpan should only use healthy nodes in mixed-process mode #113171
Conversation
Thanks for opening a backport. Please check the backport criteria before merging:
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied within.
Add a brief release justification to the body of your PR to justify this backport. Some other things to consider:
Force-pushed from 63f73a3 to 2f92222.
Previously, when running in mixed-process mode, the DistSQLPlanner's PartitionSpans method would assume that it could directly assign a given span to the SQLInstanceID matching the NodeID of whatever replica the current replica oracle returned, without regard to whether that SQL instance was available. This differs from the system tenant code paths, which proactively check node health, and from the non-mixed-process MT code paths, which use an eventually consistent view of healthy nodes. As a result, operations that use PartitionSpans, such as BACKUP, could fail when a node was down.

Here, we make the mixed-process case work more like the separate-process case, in which we only use nodes returned by the instance reader. That list should eventually exclude any down nodes. An alternative (or perhaps an addition) would be to allow MT planning to do direct status checks, similar to how they are done for the system tenant.

Finally, this also adds another error to our list of non-permanent errors: if we fail to find a SQL instance, we don't treat that as permanent.

Fixes cockroachdb#111319

Release note (bug fix): When using a private preview of physical cluster replication, in some circumstances the source cluster would be unable to take backups when a source cluster node was unavailable.
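To make the described change concrete, here is a minimal, self-contained sketch of the idea in Go. This is not the actual CockroachDB implementation; `partitionSpans`, the `live` map, and the gateway fallback are illustrative assumptions standing in for the DistSQLPlanner, the instance reader's view of healthy instances, and the planner's real fallback logic:

```go
package main

import "fmt"

type SQLInstanceID int

type Span struct{ Key, EndKey string }

// partitionSpans assigns each span to a live SQL instance. The
// oracle's preferred instance is used only if the (eventually
// consistent) live set reports it as available; otherwise the
// span falls back to the gateway instead of failing later.
func partitionSpans(
	spans []Span,
	oracleChoice map[Span]SQLInstanceID,
	live map[SQLInstanceID]bool,
	gateway SQLInstanceID,
) map[SQLInstanceID][]Span {
	out := make(map[SQLInstanceID][]Span)
	for _, sp := range spans {
		id := oracleChoice[sp]
		if !live[id] {
			id = gateway
		}
		out[id] = append(out[id], sp)
	}
	return out
}

func main() {
	spans := []Span{{"a", "b"}, {"b", "c"}}
	oracle := map[Span]SQLInstanceID{spans[0]: 1, spans[1]: 2}
	live := map[SQLInstanceID]bool{1: true} // instance 2 is down
	// Span {b, c} is reassigned to the gateway (instance 1).
	fmt.Println(partitionSpans(spans, oracle, live, 1))
}
```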
Force-pushed from 2f92222 to 33d0eed.
tenantRunner := sqlutils.MakeSQLRunner(tenantDB)
var jobID jobspb.JobID
// Start a detached BACKUP so a job ID is returned immediately,
// then wait for that job to succeed.
tenantRunner.QueryRow(t, "BACKUP INTO 'nodelocal://1/worker-failure' WITH detached").Scan(&jobID)
jobutils.WaitForJobToSucceed(t, tenantRunner, jobID)
@stevendanna I changed the end of this test a little bit because I saw a flake of the form. I don't think it changes the essence of the test, but if that computed timeout was an assertion in itself, let me know and I'll try to add it back in.
Seems like a good change.
Because we are not backporting #105451, the backport wasn't entirely clean, but the main difference is the signature of
Reviewed 5 of 5 files at r1, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @stevendanna)
Thanks
Backport:
Please see individual PRs for details.
/cc @cockroachdb/release
Release justification: fix for a bug that allowed unhealthy nodes to be picked for execution in mixed-process mode.
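As a closing illustration of the non-permanent error change mentioned in the commit message, here is a minimal sketch. The `errInstanceNotFound` sentinel and `isPermanent` classifier are hypothetical; CockroachDB's actual error types and job-retry machinery differ:

```go
package main

import (
	"errors"
	"fmt"
)

// errInstanceNotFound is a hypothetical sentinel for "no SQL
// instance found for this ID"; the real error differs.
var errInstanceNotFound = errors.New("sql instance not found")

// isPermanent reports whether a job error should stop retries.
// An instance lookup failure is treated as transient: the
// instance list is eventually consistent, and the instance may
// simply be restarting.
func isPermanent(err error) bool {
	return !errors.Is(err, errInstanceNotFound)
}

func main() {
	err := fmt.Errorf("planning backup: %w", errInstanceNotFound)
	fmt.Println("permanent:", isPermanent(err)) // permanent: false
}
```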