backup: split request spans to be range sized #114268
Conversation
It looks like your PR touches production code but doesn't add or edit any test code. Did you consider adding tests to your PR? 🦉 Hoot! I am Blathers, a bot for CockroachDB. My owner is dev-inf.
Doing this here, before we put the spans into the queue, works. However this may be less ideal for performance: the backup workers pull spans off of the queue one at a time. Instead we might want to put this logic inside the worker loop.
Okay, I added a second commit to this that changes the queue channel from containing individual spans to slices of spans, or chunks. The workers that pull work from the queue gain an extra for loop to process each span in the chunk they pull from the queue, and the queue feeder that puts the (now range-sized, after the first commit) spans on the queue batches them up into chunks of between 1 and 100 spans, aiming for at least 4 chunks per worker. I think this is RFAL now.
Thanks for the detailed PR description. Do we also want to mention why we prefer this to setting ReturnOnRangeBoundary on the request header?
I left some comments but nothing blocking, just idle thoughts when reading the code.
Thinking about what test might be useful here.
Backup processors are assigned spans -- produced by the SQL planning function PartitionSpans -- which they must back up by reading the contents of each span using some number of paginated ExportRequests and then writing that content to the assigned destination. Typically each export request sent by a backup processor is expected to be served by approximately one range: the processor sends the request for the whole span it is trying to export, DistSender sends it to the first range it overlaps, that range reads until it hits the pagination limit, then DistSender returns its result and the processor repeats, starting the span from the resume key. Since each request does a range's worth of work, the backup processor can assume that, if things are normal and healthy in the cluster, the request should return its result within a short amount of time: often a second or less, or perhaps a few seconds if it had to wait in queues. As such, the backup processor imposes a 5 minute timeout on these requests, as a single request not returning within this duration indicates something is not normal and healthy in the cluster, and the backup cannot expect to make progress until that is resolved.

However, this logic does not hold if a single request, subject to this timeout, ends up doing substantially more work. That can happen if the request has a span larger than a single range _and_ the ranges in that span are empty and/or don't contain data matching the predicate of the request. In such cases, the request is sent to one range, which processes it, but since it returns zero results the pagination limit is not hit, and the request continues on to another range, and another, until it either reaches the end of the requested span or finally finds results that hit the pagination limit.

If neither of these happens, the request can end up hitting the timeout -- a limit that should never be hit by a single range's worth of work -- because it is in fact doing many ranges' worth of work. This change pre-splits the spans that we need to export into sub-spans that we will send requests to, so that each sub-span is the size of one range. It is OK if the actual ranges below these requests end up splitting or merging, as the splitting has simply ensured that each request corresponds to "a range's worth of work," which it should, since it was a single range at splitting time. By doing this, we should be able to assume that all requests are expected to complete, if the cluster is healthy, within the 5min timeout.

Release note: none. Epic: none.
Having a single worker handle sequential spans when backing up allows that worker to append the results to its output, producing output that is also sequential and, importantly, non-overlapping with other workers' output, which will allow reducing the metadata required to track the unique output spans. Release note: none. Epic: none.
LGTM except for the offline comment to try to write a test for splitSpans
```go
start, end   hlc.Timestamp
attempts     int
lastTried    time.Time
finishesSpec bool
```
nit: can we add a comment above this new field? It's not immediately obvious how it influences progress reporting.
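The requested comment might look something like the following sketch. The field names come from the diff above; the struct name `spanEntry` and the `Timestamp` stub (standing in for `hlc.Timestamp` so the snippet compiles standalone) are illustrative, and the comment wording is a guess based on the later fix that plumbs finishesSpec through to the resume span for the CompletedSpans metric.

```go
package main

import (
	"fmt"
	"time"
)

// Timestamp stands in for hlc.Timestamp in this standalone sketch.
type Timestamp struct{ WallTime int64 }

type spanEntry struct {
	start, end Timestamp
	attempts   int
	lastTried  time.Time
	// finishesSpec indicates this is the final sub-span of an assigned spec
	// span; when it completes, the processor can report the whole spec span
	// as done, feeding the CompletedSpans plumbing that the coordinator uses
	// to update FractionCompleted.
	finishesSpec bool
}

func main() {
	e := spanEntry{finishesSpec: true}
	fmt.Println(e.finishesSpec)
}
```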
I'll get this in a followup instead of eating a CI run for a comment.
TFTRs! I briefly toyed with testing splitSpans but decided to punt to a follow-up change or refactor that splits out all of the span generation.

bors r+
Build succeeded.
Encountered an error creating backports. Some common things that can go wrong:

You might need to create your backport manually using the backport tool.

error creating merge commit from dcad601 to blathers/backport-release-23.1-114268: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

You may need to manually resolve merge conflicts with the backport tool. Backport to branch 23.1.x failed. See errors above. 🦉 Hoot! I am Blathers, a bot for CockroachDB. My owner is dev-inf.
PR cockroachdb#114268 broke the plumbing for the CompletedSpans metric, which allows the backup coordinator to update the FractionCompleted metric. This patch fixes the bug by passing the finishesSpec field to the resume span. Informs cockroachdb#120161. Release note: none
120204: backuppcl: pass finishesSpec field to resume span r=dt a=msbutler

PR #114268 broke the plumbing for the CompletedSpans metric which allows the backup coordinator to update the FractionCompleted metric. This patch fixes this bug by passing the finishesSpec field to the resume span. Informs #120161. Release note: none

120251: roachprod: search ssh directory for keys r=srosenberg a=ajwerner

Before this change, roachprod required that an id_rsa.pub file exist in the user's $HOME/.ssh directory. These days folks use other types of keys, like ed25519; the GitHub docs in fact explicitly tell you to use ed25519 [1]. This patch now searches for keys the same way that the openssh client does. It searches based on this list:

* id_rsa
* id_ecdsa
* id_ecdsa_sk
* id_ed25519
* id_ed25519_sk
* id_dsa

[1]: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent

Epic: None Release note: None

Co-authored-by: Michael Butler <[email protected]> Co-authored-by: Andrew Werner <[email protected]>
We do this rather than setting ReturnOnRangeBoundary on the request header both because ReturnOnRangeBoundary is not yet in use and production tested, and because breaking the work up into range-sized chunks allows distributing it more evenly between the worker goroutines in the processor.

Release note: none.
Epic: none.