backupccl,multi-tenant: BACKUP fails when node is down when using shared process multi-tenancy #111319
Labels
A-disaster-recovery
branch-master: Failures and bugs on the master branch.
C-bug: Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.
release-blocker: Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked.
T-disaster-recovery
stevendanna added the C-bug, branch-master, and release-blocker labels on Sep 26, 2023
stevendanna added a commit to stevendanna/cockroach that referenced this issue on Sep 27, 2023
Previously, when running in mixed-process mode, the DistSQLPlanner's PartitionSpans method would assume that it could directly assign a given span to the SQLInstanceID that matches the NodeID of whatever replica the current replica oracle returned, without regard to whether the SQL instance was available. This differs from the system-tenant code paths, which proactively check node health, and from the non-mixed-process MT code paths, which use an eventually consistent view of healthy nodes. As a result, processes that use PartitionSpans, such as BACKUP, could fail when a node was down.

Here, we make the mixed-process case work more like the separate-process case, in which we only use nodes returned by the instance reader. This list should eventually exclude any down nodes. An alternative (or perhaps an addition) would be to allow MT planning to do direct status checks, more similar to how they are done for the system tenant.

When reading this code, I also noted that we don't do DistSQL version compatibility checks like we do in the SystemTenant case. I am not sure of the impact of that.

Finally, this also adds another error to our list of non-permanent errors: if we fail to find a SQL instance, we don't treat that as permanent.

Fixes cockroachdb#111319

Release note (bug fix): When using a private preview of physical cluster replication, in some circumstances the source cluster would be unable to take backups when a source cluster node was unavailable.
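As a rough sketch of the shape of this fix, the following self-contained Go program plans work only across instance IDs reported by an instance reader, so a down node simply never receives spans. All names here (instanceID, partitionSpans, the round-robin placement) are simplified stand-ins for illustration, not CockroachDB's actual planner API.

```go
package main

import "fmt"

// instanceID stands in for base.SQLInstanceID. The slice passed to
// partitionSpans plays the role of the rows returned by the instance
// reader, which should eventually exclude any down nodes.
type instanceID int

// partitionSpans assigns each span to one of the live instances,
// round-robin. This is a simplified stand-in for the planner's
// PartitionSpans; the real code places spans based on range replicas
// and locality, but the key point is the same: only instances the
// reader reports are candidates.
func partitionSpans(spans []string, live []instanceID) map[instanceID][]string {
	parts := make(map[instanceID][]string)
	if len(live) == 0 {
		return parts // nothing to plan on; the caller would retry
	}
	for i, sp := range spans {
		id := live[i%len(live)]
		parts[id] = append(parts[id], sp)
	}
	return parts
}

func main() {
	// Node 2 is down, so the reader no longer reports an instance for
	// it; its spans are planned on the remaining instances instead.
	live := []instanceID{1, 3}
	fmt.Println(partitionSpans([]string{"a-c", "c-f", "f-k"}, live))
}
```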
cc @cockroachdb/disaster-recovery
craig bot pushed a commit that referenced this issue on Oct 4, 2023
111337: sql: PartitionSpan should only use healthy nodes in mixed-process mode r=yuzefovich a=stevendanna

Previously, when running in mixed-process mode, the DistSQLPlanner's PartitionSpans method would assume that it could directly assign a given span to the SQLInstanceID that matches the NodeID of whatever replica the current replica oracle returned, without regard to whether the SQL instance was available. This differs from the system-tenant code paths, which proactively check node health, and from the non-mixed-process MT code paths, which use an eventually consistent view of healthy nodes. As a result, processes that use PartitionSpans, such as BACKUP, could fail when a node was down.

Here, we make the mixed-process case work more like the separate-process case, in which we only use nodes returned by the instance reader. This list should eventually exclude any down nodes. An alternative (or perhaps an addition) would be to allow MT planning to do direct status checks, more similar to how they are done for the system tenant.

When reading this code, I also noted that we don't do DistSQL version compatibility checks like we do in the SystemTenant case. I am not sure of the impact of that.

Finally, this also adds another error to our list of non-permanent errors: if we fail to find a SQL instance, we don't treat that as permanent.

Fixes #111319

Release note (bug fix): When using a private preview of physical cluster replication, in some circumstances the source cluster would be unable to take backups when a source cluster node was unavailable.

111675: backupccl: deflake TestShowBackup r=stevendanna a=msbutler

This patch simplifies how TestShowBackup parses the stringified timestamp: it removes the manual splitting of date and time and parses the timestamp in one call.

Fixes: #111015

Release note: none

Co-authored-by: Steven Danna <[email protected]>
Co-authored-by: Michael Butler <[email protected]>
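For the TestShowBackup change, the one-call parsing idea can be sketched as follows; the layout string and sample value are assumptions for illustration, not necessarily the format the test handles.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical timestamp string; the exact layout TestShowBackup
	// deals with may differ. The point is that date and time parse in
	// a single time.Parse call, with no manual string splitting.
	const layout = "2006/01/02 15:04:05.000000"
	ts, err := time.Parse(layout, "2023/10/04 12:34:56.000001")
	if err != nil {
		panic(err)
	}
	fmt.Println(ts.UTC())
}
```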
adityamaru pushed a commit to adityamaru/cockroach that referenced this issue on Oct 26, 2023
adityamaru pushed a commit to adityamaru/cockroach that referenced this issue on Oct 27, 2023
Describe the problem
A user reported being unable to run BACKUP inside a secondary tenant using mixed-process mode when one of the nodes in the cluster was down.
Using cockroach demo, this is trivially reproducible. After shutting down a node, BACKUPs often fail, even well after the SQL instance table should have been cleaned up, with errors such as:
This is my slightly informed speculation about what is going on. BACKUP uses PartitionSpans to distribute work.
PartitionSpans iterates over the given spans of work, assigning portions of each span to different instance IDs. It does this via a SpanResolver. The SpanResolver uses an Oracle that is responsible for choosing a replica of a given range. The chosen replica may not be healthy.
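A minimal sketch of that pipeline, using illustrative stand-in types rather than CockroachDB's real SpanResolver and Oracle interfaces:

```go
package main

import "fmt"

// rangeDesc and replicaOracle are illustrative stand-ins, not
// CockroachDB's actual types.
type rangeDesc struct {
	span     string
	replicas []int // IDs of nodes holding a replica of this range
}

// replicaOracle picks one replica for a range. Like the real oracle,
// it knows nothing about whether the chosen node is actually up.
func replicaOracle(r rangeDesc) int {
	return r.replicas[0] // e.g. always prefer the first replica
}

func main() {
	ranges := []rangeDesc{
		{span: "a-c", replicas: []int{1, 2, 3}},
		{span: "c-f", replicas: []int{2, 3, 1}},
	}
	// A PartitionSpans-style loop: each range's span is assigned to
	// whatever node the oracle returns, healthy or not.
	parts := make(map[int][]string)
	for _, r := range ranges {
		node := replicaOracle(r)
		parts[node] = append(parts[node], r.span)
	}
	fmt.Println(parts) // node 2 gets "c-f" even if node 2 is down
}
```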
When run in mixed-process mode, PartitionSpans uses an instance resolver that simply returns the NodeID as a SQLInstanceID without doing any sort of health check:
cockroach/pkg/sql/distsql_physical_planner.go, line 1339 at 96c0868
cockroach/pkg/sql/distsql_physical_planner.go, lines 1397 to 1399 at 96c0868
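In simplified form, and assuming stand-in types rather than the actual code at those lines, the mixed-process resolver behaves like this:

```go
package main

import "fmt"

type nodeID int
type sqlInstanceID int

// mixedProcessResolver reconstructs, in simplified form, what the
// cited lines do: the node ID is converted directly to an instance
// ID, with no liveness or health check in between.
func mixedProcessResolver(n nodeID) sqlInstanceID {
	return sqlInstanceID(n)
}

func main() {
	// Even if node 4's process is gone, planning still targets SQL
	// instance 4, and a flow that needs it (such as BACKUP) fails.
	fmt.Println(mixedProcessResolver(4))
}
```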
Note that this is different from other code paths, in which we do explicitly check the health of a node before using it:
cockroach/pkg/sql/distsql_physical_planner.go, lines 1379 to 1386 at 96c0868
If I modify the code to do such a health check, the problem goes away.
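A sketch of that experiment, with a hypothetical checkInstanceHealth standing in for the planner's real node-health check:

```go
package main

import (
	"errors"
	"fmt"
)

type nodeID int
type sqlInstanceID int

// checkInstanceHealth is a hypothetical stand-in for the planner's
// node-health check; the real system-tenant path consults connection
// health and draining state rather than a map.
func checkInstanceHealth(n nodeID, down map[nodeID]bool) error {
	if down[n] {
		return errors.New("instance unavailable")
	}
	return nil
}

// resolveWithHealthCheck converts a node ID to an instance ID only
// after the health check passes, falling back to a known-good
// instance (here, the gateway) otherwise.
func resolveWithHealthCheck(n, gateway nodeID, down map[nodeID]bool) sqlInstanceID {
	if err := checkInstanceHealth(n, down); err != nil {
		return sqlInstanceID(gateway)
	}
	return sqlInstanceID(n)
}

func main() {
	down := map[nodeID]bool{2: true}
	fmt.Println(resolveWithHealthCheck(2, 1, down)) // node 2 is down: falls back to 1
	fmt.Println(resolveWithHealthCheck(3, 1, down)) // node 3 is healthy: uses 3
}
```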
Jira issue: CRDB-31854