
backup: split request spans to be range sized #114268

Merged: 2 commits into cockroachdb:master from backup-spans on Nov 22, 2023

Conversation

@dt dt commented Nov 10, 2023

Backup processors are assigned spans -- produced by the SQL planning function PartitionSpans -- which they must back up by reading the content of each span using some number of paginated ExportRequests and then writing that content to the assigned destination.

Typically each export request sent by a backup processor is expected to be served by approximately one range: the processor sends the request for the whole span it is trying to export, distsender sends it to the first range it overlaps, that range reads until it hits the pagination limit, then distsender returns its result and the processor repeats this, resuming the span from the resume key.
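For illustration only, the resume-key loop described above can be sketched roughly as follows in Go; the send callback and its signature are simplified stand-ins for issuing one paginated ExportRequest via the KV API, not the actual implementation.

```go
package backupsketch

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/roachpb"
)

// exportSpan sketches the resume-key loop: send one paginated request for the
// remaining span, and if a resume key comes back, continue from it. The send
// callback is a hypothetical stand-in, not the real ExportRequest plumbing.
func exportSpan(
	ctx context.Context,
	span roachpb.Span,
	send func(ctx context.Context, sp roachpb.Span) (resumeKey roachpb.Key, err error),
) error {
	cur := span
	for {
		// Each iteration is expected to be roughly one range's worth of work:
		// the first overlapping range reads until it hits the pagination limit.
		resume, err := send(ctx, cur)
		if err != nil {
			return err
		}
		if len(resume) == 0 {
			// No resume key: the end of the requested span was reached.
			return nil
		}
		// Issue the next request starting at the resume key.
		cur.Key = resume
	}
}
```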

Since each request does a range's worth of work, the backup processor can assume that, if the cluster is normal and healthy, each request should return its result within a short amount of time -- often a second or less, or perhaps a few seconds if it had to wait in queues. As such, the backup processor imposes a 5 minute timeout on these requests: a single request not returning within this duration indicates something is not normal and healthy in the cluster, and the backup cannot expect to make progress until that is resolved.
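A minimal sketch of that per-request timeout, reusing the hypothetical send callback from the sketch above (and assuming the same imports plus the time package); the processor's actual code may differ.

```go
// sendWithTimeout wraps a single paginated request in the 5 minute timeout
// described above. If one range's worth of work does not finish in time, the
// context error surfaces and the request is treated as stuck.
func sendWithTimeout(
	ctx context.Context,
	span roachpb.Span,
	send func(context.Context, roachpb.Span) (roachpb.Key, error),
) (roachpb.Key, error) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
	defer cancel()
	return send(ctx, span)
}
```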

However, this logic does not hold if a single request, subject to this timeout, ends up doing substantially more work. This can happen if the request has a span larger than a single range and the ranges in that span are empty and/or don't contain data matching the predicate of the request. In such cases, the request is sent to one range, which processes it, but since it returns zero results the pagination limit is not hit, so the request continues on to another range, and another, and so on, until it either reaches the end of the requested span or finally finds results that hit the pagination limit. If neither of these happens, it can end up hitting the timeout -- a limit that should never be reached by a single range's worth of work -- because it is in fact doing many ranges' worth of work.

This change pre-splits the spans that we need to export into subspans that we will send requests to, so that each subspan is the size of one range. It is OK if the actual ranges underneath these requests later split or merge, as this pre-splitting has simply ensured that each request corresponds to "a range's worth of work", which it should, since it was a single range at the time of the split.

By doing this, we can assume that, if the cluster is healthy, every request should complete within the 5min timeout.
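The pre-splitting could look roughly like the sketch below, where the boundaries callback is an assumed stand-in for consulting range metadata (for example via a range iterator); the PR's actual splitSpans helper may differ.

```go
// splitSpan sketches splitting an assigned span on range boundaries so that
// each subspan corresponds to one range at the time of splitting. The
// boundaries callback is hypothetical: it returns the range start keys that
// fall strictly inside the span, in order.
func splitSpan(
	span roachpb.Span,
	boundaries func(roachpb.Span) []roachpb.Key,
) []roachpb.Span {
	var out []roachpb.Span
	cur := span.Key
	for _, k := range boundaries(span) {
		out = append(out, roachpb.Span{Key: cur, EndKey: k})
		cur = k
	}
	// The final subspan runs from the last boundary (or the original start key
	// if there were no boundaries) to the end of the assigned span.
	return append(out, roachpb.Span{Key: cur, EndKey: span.EndKey})
}
```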

We do this rather than setting ReturnOnRangeBoundary both because ReturnOnRangeBoundary is not yet in use or production tested, and because breaking the work up into range-sized chunks allows distributing it more evenly between the worker goroutines in the processor.

Release note: none.
Epic: none.


blathers-crl bot commented Nov 10, 2023

It looks like your PR touches production code but doesn't add or edit any test code. Did you consider adding tests to your PR?

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

@cockroach-teamcity

This change is Reviewable


dt commented Nov 10, 2023

Doing this here, before we put the spans into todo, keeps it pretty isolated and easy: we can put the full spans into todo, or we can split them up and put the split spans into todo, but the logic for what we do with a span is left untouched.

However, this may be less ideal for performance: the backup workers pull spans off of todo, so putting one big span into todo as several little spans will mean that the subspans of this big span get picked up by different workers. This in turn means that they get flushed to separate SSTs, so instead of having one nice long contiguous SST for the assigned span, we'd end up with N SSTs -- one per worker -- each covering different, disjoint bits of the span. This is fine as far as correctness goes, but it would make the metadata larger, since we'd miss the optimization where a long contiguous run of a span in a single SST can be represented as a single manifest entry whose endKey is simply extended as long as we keep appending.

Instead, we might want to put this logic inside the for span <- todo loop inside each worker, so that each worker chunks up the assigned span it pulled off of todo when it goes to process it, adding another layer of for-loop there over the rdi. That change, however, will probably look bigger, as it will move the majority of the worker -- and thus the majority of the backup processor -- one layer of for-loop deeper. I'm going to try and see what that version looks like, but wanted to throw this one out in the meantime and see what CI thinks.
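For illustration, the manifest optimization mentioned above might look roughly like this, with fileEntry as a made-up stand-in for the real backup manifest file entry (assumes the bytes package in addition to the earlier imports).

```go
// fileEntry is a hypothetical stand-in for a backup manifest file entry.
type fileEntry struct {
	sst  string
	span roachpb.Span
}

// recordExportedSpan sketches the optimization: when a worker keeps appending
// contiguous spans to the same SST, the last entry's endKey is simply
// extended instead of adding a new manifest entry.
func recordExportedSpan(entries []fileEntry, sst string, sp roachpb.Span) []fileEntry {
	if n := len(entries); n > 0 &&
		entries[n-1].sst == sst &&
		bytes.Equal(entries[n-1].span.EndKey, sp.Key) {
		entries[n-1].span.EndKey = sp.EndKey
		return entries
	}
	return append(entries, fileEntry{sst: sst, span: sp})
}
```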

pkg/cloud/gcp/gcs_storage.go (outdated review thread)
@dt dt marked this pull request as ready for review November 20, 2023 13:38
@dt dt requested a review from a team as a code owner November 20, 2023 13:38

dt commented Nov 20, 2023

Okay, I added a second commit to this that changes the queue channel from containing spans to slices of spans, or chunks. The workers that pull work from the queue gain an extra layer of for-loop to process each span in the chunk they pull from the queue, and the queue feeder that puts the (now range-sized, after the first commit) spans on the queue now batches them up into chunks of between 1 and 100 spans (aiming for at least 4 per worker).
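A rough sketch of that shape, with the chunk sizing taken from the description above (1 to 100 spans per chunk, aiming for at least four chunks per worker); the names and structure are illustrative, not the actual processor code.

```go
// feedChunks batches the range-sized spans into chunks and puts them on the
// todo channel. Assumes numWorkers > 0.
func feedChunks(spans []roachpb.Span, numWorkers int, todo chan<- []roachpb.Span) {
	chunkSize := len(spans) / (numWorkers * 4) // aim for at least 4 chunks per worker
	if chunkSize < 1 {
		chunkSize = 1
	}
	if chunkSize > 100 {
		chunkSize = 100
	}
	for len(spans) > 0 {
		n := chunkSize
		if n > len(spans) {
			n = len(spans)
		}
		todo <- spans[:n]
		spans = spans[n:]
	}
	close(todo)
}

// worker pulls chunks off todo and gains the extra layer of looping described
// above: it processes each range-sized span in the chunk in order, so that
// sequential spans stay on one worker and its output remains contiguous.
func worker(
	ctx context.Context,
	todo <-chan []roachpb.Span,
	export func(context.Context, roachpb.Span) error,
) error {
	for chunk := range todo {
		for _, sp := range chunk {
			if err := export(ctx, sp); err != nil {
				return err
			}
		}
	}
	return nil
}
```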

I think this is RFAL now.


@stevendanna stevendanna left a comment


Thanks for the detailed PR description. Do we also want to mention why we prefer this to setting ReturnOnRangeBoundary on the request header?

I left some comments but nothing blocking, just idle thoughts when reading the code.

Thinking about what test might be useful here.

pkg/ccl/backupccl/backup_processor.go (outdated review thread)
pkg/ccl/backupccl/backup_processor.go (review thread)
pkg/ccl/backupccl/backup_processor.go (outdated review thread)
pkg/ccl/backupccl/backup_processor.go (outdated review thread)
dt added 2 commits November 21, 2023 22:43
Having a single worker handle sequential spans when backing up allows that worker to append the results to its output, producing output that is also sequential and, importantly, non-overlapping with other workers' output, which will allow reducing the metadata required to track the unique output spans.

Release note: none.
Epic: none.

@adityamaru adityamaru left a comment


LGTM except for the offline comment to try to write a test for splitSpans

start, end hlc.Timestamp
attempts int
lastTried time.Time
finishesSpec bool
Contributor


nit: can we add a comment above this new field? It's not immediately obvious how it influences progress reporting.

Member Author


I'll get this in a follow-up instead of eating a CI run just for a comment.


dt commented Nov 22, 2023

TFTRs!

I briefly toyed with testing splitSpans but decided to punt to a follow-up change or refactor that splits all of the todo generation into its own phase of the proc, which we could test without running backups.

bors r+
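As an aside, a unit test for a pure splitting helper along these lines could look roughly like the hypothetical sketch below, which exercises the splitSpan sketch from earlier without running a backup (the keys, boundaries, and the testing and bytes imports are assumptions).

```go
func TestSplitSpanSketch(t *testing.T) {
	span := roachpb.Span{Key: roachpb.Key("a"), EndKey: roachpb.Key("z")}
	// Pretend the range metadata reports boundaries at "f" and "m".
	boundaries := func(roachpb.Span) []roachpb.Key {
		return []roachpb.Key{roachpb.Key("f"), roachpb.Key("m")}
	}
	got := splitSpan(span, boundaries)
	want := []roachpb.Span{
		{Key: roachpb.Key("a"), EndKey: roachpb.Key("f")},
		{Key: roachpb.Key("f"), EndKey: roachpb.Key("m")},
		{Key: roachpb.Key("m"), EndKey: roachpb.Key("z")},
	}
	if len(got) != len(want) {
		t.Fatalf("expected %d subspans, got %d", len(want), len(got))
	}
	for i := range want {
		if !bytes.Equal(got[i].Key, want[i].Key) || !bytes.Equal(got[i].EndKey, want[i].EndKey) {
			t.Fatalf("subspan %d: expected %s, got %s", i, want[i], got[i])
		}
	}
}
```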

@dt dt added the backport-23.1.x and backport-23.2.x labels Nov 22, 2023

craig bot commented Nov 22, 2023

Build succeeded.

@craig craig bot merged commit d8fc38b into cockroachdb:master Nov 22, 2023
8 checks passed

blathers-crl bot commented Nov 22, 2023

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error creating merge commit from dcad601 to blathers/backport-release-23.1-114268: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

you may need to manually resolve merge conflicts with the backport tool.

Backport to branch 23.1.x failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

@dt dt deleted the backup-spans branch November 26, 2023 00:10
@dt dt linked an issue Nov 29, 2023 that may be closed by this pull request
msbutler added a commit to msbutler/cockroach that referenced this pull request Mar 11, 2024
PR cockroachdb#114268 broke the plumbing for the CompletedSpans metric which allows the
backup coordinator to update the FractionCompleted metric. This patch fixes
this bug by passing the finishesSpec field to the resume span.

Informs cockroachdb#120161

Release note: none
msbutler added a commit to msbutler/cockroach that referenced this pull request Mar 11, 2024
PR cockroachdb#114268 broke the plumbing for the CompletedSpans metric which allows the
backup coordinator to update the FractionCompleted metric. This patch fixes
this bug by passing the finishesSpec field to the resume span.

Informs cockroachdb#120161

Release note: none
msbutler added a commit to msbutler/cockroach that referenced this pull request Mar 11, 2024
PR cockroachdb#114268 broke the plumbing for the CompletedSpans metric which allows the
backup coordinator to update the FractionCompleted metric. This patch fixes
this bug by passing the finishesSpec field to the resume span.

Informs cockroachdb#120161

Release note: none
craig bot pushed a commit that referenced this pull request Mar 12, 2024
120204: backuppcl: pass finishesSpec field to resume span r=dt a=msbutler

PR #114268 broke the plumbing for the CompletedSpans metric which allows the backup coordinator to update the FractionCompleted metric. This patch fixes this bug by passing the finishesSpec field to the resume span.

Informs #120161

Release note: none

120251: roachprod: search ssh directory for keys r=srosenberg a=ajwerner

Before this change, roachprod required that an id_rsa.pub file exist in the user's $HOME/.ssh directory. These days folks use other types of keys, like ed25519; in fact, the GitHub docs now explicitly tell you to use ed25519 [1]. This patch searches for keys the same way that the openssh client does, based on this list:

 * id_rsa
 * id_ecdsa
 * id_ecdsa_sk
 * id_ed25519
 * id_ed25519_sk
 * id_dsa

[1]: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent

Epic: None

Release note: None

Co-authored-by: Michael Butler <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
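For illustration, the key lookup described in the roachprod change above might look roughly like the following sketch; the function name and structure are assumptions, not the actual roachprod code (assumes the fmt, os, and path/filepath packages).

```go
// findPubKey searches an .ssh directory for the first public key present,
// using the same preference order listed above.
func findPubKey(sshDir string) (string, error) {
	candidates := []string{
		"id_rsa", "id_ecdsa", "id_ecdsa_sk", "id_ed25519", "id_ed25519_sk", "id_dsa",
	}
	for _, name := range candidates {
		p := filepath.Join(sshDir, name+".pub")
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("no ssh public key found in %s", sshDir)
}
```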
msbutler added a commit to msbutler/cockroach that referenced this pull request Mar 12, 2024
PR cockroachdb#114268 broke the plumbing for the CompletedSpans metric which allows the
backup coordinator to update the FractionCompleted metric. This patch fixes
this bug by passing the finishesSpec field to the resume span.

Informs cockroachdb#120161

Release note: none
msbutler added a commit to msbutler/cockroach that referenced this pull request Mar 13, 2024
PR cockroachdb#114268 broke the plumbing for the CompletedSpans metric which allows the
backup coordinator to update the FractionCompleted metric. This patch fixes
this bug by passing the finishesSpec field to the resume span.

Informs cockroachdb#120161

Release note: none
Labels
backport-23.1.x (Flags PRs that need to be backported to 23.1), backport-23.2.x (Flags PRs that need to be backported to 23.2)

Projects
None yet

Development
Successfully merging this pull request may close these issues: backupccl: ensure each ExportRequest returns on a range boundary

4 participants