
feat(barrier): support database failure isolation (part 2, local) #19579

Merged · 6 commits merged into main on Dec 14, 2024

Conversation

@wenym1 wenym1 (Contributor) commented Nov 26, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Follows #19664. This is the local barrier manager part of database failure isolation.

Previously, every database in ManagedBarrierState was in a Running status, represented as a DatabaseManagedBarrierState, because any failure triggered a wait for global recovery. To support database failure isolation, the status is now modeled by the following enum, which adds Suspended and Resetting alongside Running:

pub(crate) enum DatabaseStatus {
    Running(DatabaseManagedBarrierState),
    Suspended(SuspendedDatabaseState),
    Resetting(ResettingDatabaseState),
}

The lifecycle of a DatabaseStatus is as follows (a simplified sketch of the transitions appears after the list):

  1. Created when the control stream is reset, or when an add_partial_graph request is handled. The initial status is Running.
  2. On actor failure, a ReportDatabaseFailureResponse is sent to the meta global barrier manager, and the status enters Suspended.
  3. Upon receiving a ResetDatabaseRequest from the meta global barrier manager, the status enters Resetting, which spawns a task to clear all actors and the hummock uploader state of the state tables. Note that the status may enter Resetting directly from Running when the database reset is triggered by a failure reported from other CNs.
  4. After the task finishes, the DatabaseStatus is removed from ManagedBarrierState; it will be recreated by a subsequent add_partial_graph request from the meta global barrier manager.
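
A minimal sketch of these transitions is shown below. This is an illustration only, not the code in this PR: the method names (create_database, suspend_database, start_reset, finish_reset), the u32 database-id key, and the stub state structs are assumptions made for readability.

use std::collections::HashMap;

// Stub states standing in for the real structs in the PR.
struct DatabaseManagedBarrierState;
struct SuspendedDatabaseState;
struct ResettingDatabaseState {
    // Handle of the spawned task that drops the actors and clears the
    // hummock uploader state of the state tables.
    reset_task: tokio::task::JoinHandle<()>,
}

enum DatabaseStatus {
    Running(DatabaseManagedBarrierState),
    Suspended(SuspendedDatabaseState),
    Resetting(ResettingDatabaseState),
}

struct ManagedBarrierState {
    // Keyed by a (hypothetical) numeric database id.
    databases: HashMap<u32, DatabaseStatus>,
}

impl ManagedBarrierState {
    // Step 1: created as Running when the control stream is reset or when an
    // add_partial_graph request is handled.
    fn create_database(&mut self, database_id: u32) {
        self.databases.insert(
            database_id,
            DatabaseStatus::Running(DatabaseManagedBarrierState),
        );
    }

    // Step 2: on actor failure, a ReportDatabaseFailureResponse is sent to the
    // meta global barrier manager (omitted here) and the database is suspended.
    fn suspend_database(&mut self, database_id: u32) {
        if let Some(status) = self.databases.get_mut(&database_id) {
            *status = DatabaseStatus::Suspended(SuspendedDatabaseState);
        }
    }

    // Step 3: on ResetDatabaseRequest, spawn a task that clears the actors and
    // uploader state; this may also be reached directly from Running.
    fn start_reset(&mut self, database_id: u32) {
        if let Some(status) = self.databases.get_mut(&database_id) {
            let reset_task = tokio::spawn(async move {
                // drop all actors and clear hummock uploader state here
            });
            *status = DatabaseStatus::Resetting(ResettingDatabaseState { reset_task });
        }
    }

    // Step 4: once the reset task finishes, remove the entry; a later
    // add_partial_graph request recreates it as Running.
    fn finish_reset(&mut self, database_id: u32) {
        self.databases.remove(&database_id);
    }
}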

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features. See Sqlsmith: Sql feature generation #7934.)
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.


@wenym1 wenym1 force-pushed the yiming/isolation-database-actor-failure branch from b5d838a to 9defad3 Compare November 27, 2024 10:14
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation branch from f71fccc to 75dbaad Compare November 27, 2024 10:17
Base automatically changed from yiming/isolation-database-actor-failure to main November 29, 2024 09:15
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation branch 2 times, most recently from 264210d to 8d9bbb9 Compare November 29, 2024 09:59
@wenym1 wenym1 changed the base branch from main to yiming/extract-inject-initial-barrier November 29, 2024 09:59
Base automatically changed from yiming/extract-inject-initial-barrier to main December 2, 2024 08:11
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation branch 2 times, most recently from 8984fee to 29d69ee Compare December 3, 2024 06:10
@wenym1 wenym1 marked this pull request as ready for review December 3, 2024 06:35
@wenym1 wenym1 changed the base branch from main to yiming/database-failure-isolation-meta-part December 4, 2024 02:37
@wenym1 wenym1 changed the title feat(barrier): support database failure isolation feat(barrier): support database failure isolation (part 2, local) Dec 4, 2024
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation branch from a2d7db4 to d28b483 Compare December 4, 2024 06:51
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation branch 3 times, most recently from 60d379b to 379521e Compare December 4, 2024 09:07
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation-meta-part branch from 7baa0d1 to 07de9ce Compare December 11, 2024 05:57
Base automatically changed from yiming/database-failure-isolation-meta-part to main December 11, 2024 08:15
@graphite-app graphite-app bot requested a review from a team December 11, 2024 08:15
@wenym1 wenym1 force-pushed the yiming/database-failure-isolation branch from 379521e to a1fd984 Compare December 11, 2024 09:27
@hzxa21 hzxa21 (Collaborator) left a comment

The logic LGTM. I think we need some tests that trigger a per-DB failure and make sure other DBs are unaffected. Simulation tests are preferred.

// TODO: we may report this only as a database failure instead of resetting the stream
// once the HummockUploader supports partial recovery. Currently the HummockUploader
// enters the `Err` state and stops working until a global recovery clears the uploader.
self.control_stream_handle.reset_stream_with_err(Status::internal(format!(
    "failed to complete epoch: {} {} {:?} {:?}",
    database_id,
    partial_graph_id.0,
    barrier.epoch,
    err.as_report()
)));
Collaborator commented:

We skip try_find_root_actor_failure here. Is it intentional?

@wenym1 wenym1 (Contributor Author) replied:

Yes, because this error is most likely caused by a hummock sync error and is not related to actor failure.

@wenym1 wenym1 added this pull request to the merge queue Dec 14, 2024
Merged via the queue into main with commit a520c1d Dec 14, 2024
34 of 36 checks passed
@wenym1 wenym1 deleted the yiming/database-failure-isolation branch December 14, 2024 06:43