storage: Refactor disaggregated read flow (#7530) #7582
This is an automated cherry-pick of #7530
What problem does this PR solve?
Issue Number: ref #6827, close #7576
Problem Summary:
What is changed and how it works?
Reworked the workflow of disaggregated read. The new workflow is mainly based on multiple MPMCQueues flowing into each other (a.k.a. ThreadedWorker, see below), instead of a shared task map plus condition variables.
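As a rough illustration (not the actual TiFlash MPMCQueue implementation; the `Channel` and `Status` names below are made up for the sketch), such a queue can be modeled as a channel with explicit FINISHED / CANCELLED states:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

enum class Status { OK, FINISHED, CANCELLED };

// A tiny MPMC channel with explicit FINISH (no more data) and CANCEL (error) states.
template <typename T>
struct Channel
{
    Status push(T v)
    {
        std::unique_lock lock(mu);
        if (state != Status::OK)
            return state; // already finished or cancelled; the value is dropped
        queue.push_back(std::move(v));
        cv.notify_one();
        return Status::OK;
    }

    // Blocks until an item is available or the channel is finished / cancelled.
    Status pop(T & out)
    {
        std::unique_lock lock(mu);
        cv.wait(lock, [&] { return !queue.empty() || state != Status::OK; });
        if (state == Status::CANCELLED)
            return Status::CANCELLED;
        if (queue.empty())
            return Status::FINISHED; // finished and fully drained
        out = std::move(queue.front());
        queue.pop_front();
        return Status::OK;
    }

    void finish() { close(Status::FINISHED); }
    void cancel() { close(Status::CANCELLED); }

private:
    void close(Status s)
    {
        std::unique_lock lock(mu);
        if (state == Status::OK)
            state = s;
        cv.notify_all();
    }

    std::mutex mu;
    std::condition_variable cv;
    std::deque<T> queue;
    Status state = Status::OK;
};
```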
ThreadedWorker: concurrently takes tasks from a SrcQueue, processes them, and pushes the results to the ResultQueue. The source task and the result do not need to be of the same type.
ThreadedWorker can be chained: the result queue of one worker serves as the source queue of the next.
ThreadedWorker populates the FINISH state to the result channel only when the source channel is finished and all of its tasks have been processed; ThreadedWorker itself never originates a FINISH state.
ThreadedWorker populates the CANCEL state (when there are errors) to both the result channel and the source channel, so that all chained ThreadedWorkers are cancelled as well.
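Continuing the illustration above, a simplified ThreadedWorker over that `Channel`; the names and details are made up for the sketch and differ from the actual implementation:

```cpp
#include <atomic>
#include <functional>
#include <memory>
#include <thread>
#include <vector>

// Concurrently pops Src tasks from the source channel, processes them, and pushes
// Result values to the result channel; Src and Result may be different types, so
// the result channel of one worker can serve as the source channel of the next.
template <typename Src, typename Result>
class ThreadedWorker
{
public:
    ThreadedWorker(std::shared_ptr<Channel<Src>> src_, std::shared_ptr<Channel<Result>> result_,
                   std::function<Result(Src)> work_, size_t concurrency)
        : src(src_), result(result_), work(std::move(work_)), active(concurrency)
    {
        for (size_t i = 0; i < concurrency; ++i)
            threads.emplace_back(&ThreadedWorker::loop, this);
    }

    ~ThreadedWorker()
    {
        for (auto & t : threads)
            t.join();
    }

private:
    void loop()
    {
        Src task;
        while (true)
        {
            auto s = src->pop(task);
            if (s == Status::CANCELLED)
            {
                // Upstream was cancelled: propagate the CANCEL state downstream.
                result->cancel();
                return;
            }
            if (s == Status::FINISHED)
            {
                // Propagate FINISH only after the source is drained and the last
                // worker thread has processed its final task; the worker never
                // originates a FINISH state by itself.
                if (active.fetch_sub(1) == 1)
                    result->finish();
                return;
            }
            try
            {
                result->push(work(std::move(task)));
            }
            catch (...)
            {
                // On error, cancel both directions so that chained workers,
                // upstream and downstream, are all cancelled.
                src->cancel();
                result->cancel();
                return;
            }
        }
    }

    std::shared_ptr<Channel<Src>> src;
    std::shared_ptr<Channel<Result>> result;
    std::function<Result(Src)> work;
    std::atomic<size_t> active;
    std::vector<std::thread> threads;
};
```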
New read flow: after EstablishDisaggTask, every segment task must work through a series of steps before it is "ready for read".
Currently there are only two steps, and adding another step later, for example preparing the delta index, would be straightforward (see the sketch below).
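As an illustration of how the steps chain together, here is a hypothetical wiring of the read flow on top of the sketch above. `SegmentReadTask`, `fetchPages`, and `prepareStreams` are made-up names for the example, not the identifiers used in the code base:

```cpp
#include <iostream>

// Hypothetical per-segment data; in reality this would carry segment metadata,
// remote page ids, snapshots, and so on.
struct SegmentReadTask { };
using SegmentReadTaskPtr = std::shared_ptr<SegmentReadTask>;

// Hypothetical step functions. Each one performs its step and hands the task on.
SegmentReadTaskPtr fetchPages(SegmentReadTaskPtr task) { /* fetch pages into the local cache */ return task; }
SegmentReadTaskPtr prepareStreams(SegmentReadTaskPtr task) { /* build the read streams */ return task; }

int main()
{
    auto unprocessed = std::make_shared<Channel<SegmentReadTaskPtr>>(); // filled after EstablishDisaggTask
    auto fetched = std::make_shared<Channel<SegmentReadTaskPtr>>();
    auto ready = std::make_shared<Channel<SegmentReadTaskPtr>>();

    // Two chained workers: a segment task is "ready for read" only after it has
    // flowed through both steps. Adding a third step (e.g. preparing the delta
    // index) would just be one more Channel and one more ThreadedWorker.
    ThreadedWorker<SegmentReadTaskPtr, SegmentReadTaskPtr> step1(unprocessed, fetched, fetchPages, /*concurrency=*/4);
    ThreadedWorker<SegmentReadTaskPtr, SegmentReadTaskPtr> step2(fetched, ready, prepareStreams, /*concurrency=*/4);

    for (int i = 0; i < 8; ++i)
        unprocessed->push(std::make_shared<SegmentReadTask>());
    unprocessed->finish(); // no more segment tasks will arrive

    SegmentReadTaskPtr task;
    size_t ready_count = 0;
    while (ready->pop(task) == Status::OK)
        ++ready_count; // in the real flow, the read streams consume these
    std::cout << ready_count << " segment tasks are ready for read\n";
}
```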
Check List
Tests
After testing, I discovered that this PR still cannot resolve the issue that disaggregated reads may freeze when the cache capacity is low (e.g. 32MB). The likely reason is: when MPP tasks are distributed to multiple TiFlash nodes, each MPP task may get stuck waiting for available space. These stuck tasks cannot proceed, because the available space is already occupied by ReadSegmentTasks in the queue; meanwhile, these ReadSegmentTasks cannot be scheduled, because the active MPP reads are not yet finished.
Considering that this deadlock seems hard to resolve, we may need some rework (simplification) of the local page cache. For example, throwing an error seems better than simply deadlocking.
Side effects
Documentation
Release note