
storage: Refactor disaggregated read flow (#7530) #7582

Conversation

ti-chi-bot
Member

This is an automated cherry-pick of #7530

What problem does this PR solve?

Issue Number: ref #6827, close #7576

Problem Summary:

What is changed and how it works?

Reworked the disaggregated read workflow. The new workflow is mainly based on multiple MPMCQueues flowing into each other (a.k.a. ThreadedWorker, see below), instead of a shared task map plus condition variables.

ThreadedWorker: concurrently takes tasks from a SrcQueue, works on them, and pushes the results to a ResultQueue. The source task and the result do not need to be the same type.

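The ThreadedWorker idea can be sketched roughly as follows. This is a minimal Python illustration of the pattern, not the actual TiFlash C++ API; all names here are hypothetical.

```python
import queue
import threading

def threaded_worker(src, work, concurrency=4):
    """Pull tasks from src, apply work(task), push results to the returned queue.

    Minimal sketch of the ThreadedWorker pattern: the source task and the
    result do not need to be the same type. None marks the end of the source.
    """
    result = queue.Queue()

    def run():
        while True:
            task = src.get()
            if task is None:          # None marks the end of the source queue
                src.put(None)         # let sibling worker threads exit too
                break
            result.put(work(task))

    for _ in range(concurrency):
        threading.Thread(target=run, daemon=True).start()
    return result

# Usage: square integers concurrently; the input is int, the output is str.
src = queue.Queue()
for i in range(5):
    src.put(i)
src.put(None)
out = threaded_worker(src, lambda n: f"task-{n * n}")
results = sorted(out.get() for _ in range(5))
print(results)  # ['task-0', 'task-1', 'task-16', 'task-4', 'task-9']
```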

ThreadedWorkers can be chained:

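Chaining is then just wiring the result queue of one worker into the source queue of the next. Again, a hedged Python sketch of the idea, not the real implementation:

```python
import queue
import threading

def threaded_worker(src, work, concurrency=2):
    # Compact sketch: take from src, apply work, push to result (None = finished).
    result = queue.Queue()
    alive = [concurrency]
    lock = threading.Lock()

    def run():
        while (task := src.get()) is not None:
            result.put(work(task))
        src.put(None)                 # wake sibling worker threads
        with lock:
            alive[0] -= 1
            if alive[0] == 0:
                result.put(None)      # all workers done: finish downstream

    for _ in range(concurrency):
        threading.Thread(target=run, daemon=True).start()
    return result

# Chain: src -> stage A (scale) -> stage B (format).
src = queue.Queue()
for i in (3, 1, 2):
    src.put(i)
src.put(None)
stage_a = threaded_worker(src, lambda n: n * 10)      # int -> int
stage_b = threaded_worker(stage_a, lambda n: str(n))  # int -> str
out = []
while (item := stage_b.get()) is not None:
    out.append(item)
print(sorted(out))  # ['10', '20', '30']
```

Note how stage B needs no knowledge of stage A: each worker only sees its own source and result queues, which is what makes inserting an extra stage cheap.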

ThreadedWorker propagates the FINISH state to the result channel only after the source channel is finished and all in-flight tasks are processed. ThreadedWorker itself never produces a FINISH state on its own:


When errors occur, ThreadedWorker propagates the CANCEL state to both the result channel and the source channel, so that all chained ThreadedWorkers can be cancelled:

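The FINISH/CANCEL semantics can be sketched with two sentinels: FINISH flows downstream only once the source is finished and every worker thread has drained its tasks, while CANCEL (raised on error) is pushed both downstream and back upstream so that every chained worker unwinds. An illustrative Python sketch, not the TiFlash code:

```python
import queue
import threading

FINISH, CANCEL = object(), object()

def threaded_worker(src, work, concurrency=2):
    result = queue.Queue()
    alive = [concurrency]
    lock = threading.Lock()

    def run():
        while True:
            task = src.get()
            if task is FINISH or task is CANCEL:
                src.put(task)           # let sibling threads see the sentinel too
                break
            try:
                result.put(work(task))
            except Exception:
                src.put(CANCEL)         # cancel upstream producers...
                result.put(CANCEL)      # ...and downstream consumers
                break
        with lock:
            alive[0] -= 1
            # Only the last exiting worker forwards FINISH, and only if the
            # source actually finished; the worker never originates FINISH.
            if alive[0] == 0 and task is FINISH:
                result.put(FINISH)

    for _ in range(concurrency):
        threading.Thread(target=run, daemon=True).start()
    return result

# Demo (concurrency=1 for a deterministic order): n == 0 raises ZeroDivisionError.
src = queue.Queue()
for n in (1, 2, 0, 3):
    src.put(n)
src.put(FINISH)
out = threaded_worker(src, lambda n: 10 // n, concurrency=1)
items = []
while (item := out.get()) is not FINISH and item is not CANCEL:
    items.append(item)
print(items, item is CANCEL)  # [10, 5] True
```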

New read flow: after EstablishDisaggTask, every segment task must go through the following steps in order to become "Ready for read":

  1. Try to fetch the related pages and keep a CacheGuard.
  2. Build an InputStream (in this step, S3 files are downloaded).

Currently there are only two steps; adding another step later (for example, preparing the delta index) would be easy.
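The two-step read flow maps naturally onto two chained workers, and adding a step would just mean inserting one more worker into the chain. A hypothetical Python sketch under that assumption; the stage functions and field names are illustrative, not the actual TiFlash types:

```python
import queue
import threading

def threaded_worker(src, work, concurrency=2):
    # Compact sketch: None marks the end of the source queue.
    result = queue.Queue()
    alive = [concurrency]
    lock = threading.Lock()

    def run():
        while (task := src.get()) is not None:
            result.put(work(task))
        src.put(None)
        with lock:
            alive[0] -= 1
            if alive[0] == 0:
                result.put(None)

    for _ in range(concurrency):
        threading.Thread(target=run, daemon=True).start()
    return result

# Hypothetical stages of the disaggregated read flow.
def fetch_pages(seg):
    # Step 1: fetch the related pages; the real flow would keep a CacheGuard here.
    return {**seg, "pages_fetched": True}

def build_input_stream(seg):
    # Step 2: build an InputStream; the real flow downloads S3 files here.
    return {**seg, "stream": f"stream-of-{seg['id']}"}

tasks = queue.Queue()
for i in range(3):
    tasks.put({"id": i})
tasks.put(None)

# Segment tasks flow through both stages; results are "Ready for read".
ready = threaded_worker(threaded_worker(tasks, fetch_pages), build_input_stream)
segments = []
while (seg := ready.get()) is not None:
    segments.append(seg)
print(sorted(s["stream"] for s in segments))  # ['stream-of-0', 'stream-of-1', 'stream-of-2']
```

A third stage such as "prepare delta index" would be one more `threaded_worker(...)` wrapped around the chain, with no change to the existing stages.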

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)

After testing, I discovered that this PR still cannot resolve the issue that a disaggregated read may freeze when the cache capacity is low (e.g. 32 MB). The likely reason: when MPP tasks are distributed to multiple TiFlash nodes, each MPP task may get stuck waiting for available space. These stuck tasks cannot proceed because the available space is already occupied by ReadSegmentTasks in the queue, and these ReadSegmentTasks in turn cannot be scheduled because the active MPP reads are not yet finished.

Considering that this deadlock seems hard to resolve, we may need to rework (simplify) the local page cache. For example, throwing an error seems better than simply deadlocking.

  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

None

@ti-chi-bot
Contributor

ti-chi-bot bot commented Jun 1, 2023

[REVIEW NOTIFICATION]

This pull request has not been approved.

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

@ti-chi-bot ti-chi-bot bot added do-not-merge/cherry-pick-not-approved release-note-none Denotes a PR that doesn't merit a release note. labels Jun 1, 2023
@ti-chi-bot ti-chi-bot added release-note-none Denotes a PR that doesn't merit a release note. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. type/cherry-pick-for-release-7.1 This PR is cherry-picked to release-7.1 from a source PR. labels Jun 1, 2023
@ti-chi-bot ti-chi-bot added the cherry-pick-approved Cherry pick PR approved by release team. label Jun 14, 2023
@ti-chi-bot ti-chi-bot removed the cherry-pick-approved Cherry pick PR approved by release team. label Jul 4, 2023
@ti-chi-bot
Contributor

ti-chi-bot bot commented Jul 4, 2023

This cherry pick PR is for a release branch and has not yet been approved by release team.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick, it must first be approved by the collaborators.

AFTER it has been approved by collaborators, please ping the release team in a comment to request a cherry pick review.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ti-chi-bot ti-chi-bot closed this Jul 4, 2023