backupccl: remove stitching queue file count ceiling
This change removes maxSinkQueueFiles, the ceiling on the number
of files that could be buffered in the queue when merging
SSTs during a backup. The efficacy of, and need for, this cap
are not evident, and we already have a byte limit on how
large the queue can grow. Dropping it reduces the number of
variables that need to be tuned to achieve optimal file
merging behaviour.

Release note: None
adityamaru committed Dec 13, 2021
1 parent 441809e commit 6716f75
Showing 1 changed file with 1 addition and 5 deletions.
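
The byte limit mentioned in the commit message is the smallFileBuffer setting read in the diff below. As a rough sketch of how such a byte-size cluster setting is registered in CockroachDB (the setting name, description, and default here are assumptions for illustration; the real registration lives in an elided portion of backup_processor.go):

// Hypothetical sketch: the name, description, and default value below
// are assumptions, not taken from this diff.
var smallFileBuffer = settings.RegisterByteSizeSetting(
	"bulkio.backup.merge_file_buffer_size",
	"size limit used when buffering backup files before merging them",
	16<<20, // assumed 16 MiB default
)

With the file-count ceiling gone, this single byte-size setting is the only knob bounding the queue.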
6 changes: 1 addition & 5 deletions pkg/ccl/backupccl/backup_processor.go
@@ -91,10 +91,6 @@ var (
 	)
 )
 
-// maxSinkQueueFiles is how many replies we'll queue up before flushing to allow
-// some re-ordering, unless we hit smallFileBuffer size first.
-const maxSinkQueueFiles = 24
-
 const backupProcessorName = "backupDataProcessor"
 
 // TODO(pbardea): It would be nice if we could add some DistSQL processor tests
@@ -608,7 +604,7 @@ func (s *sstSink) push(ctx context.Context, resp returnedSST) error {
 	s.queue = append(s.queue, resp)
 	s.queueSize += len(resp.sst)
 
-	if len(s.queue) >= maxSinkQueueFiles || s.queueSize >= int(smallFileBuffer.Get(s.conf.settings)) {
+	if s.queueSize >= int(smallFileBuffer.Get(s.conf.settings)) {
 		sort.Slice(s.queue, func(i, j int) bool { return s.queue[i].f.Span.Key.Compare(s.queue[j].f.Span.Key) < 0 })
 
 		// Drain the first half.
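
The diff is truncated just after the sort. For intuition, here is a self-contained sketch of the post-change flush behaviour using simplified stand-ins for the real types; everything below the visible "// Drain the first half." comment, and all type fields, are assumptions for illustration, not the production code:

package main

import (
	"fmt"
	"sort"
)

// returnedSST stands in for the real response type; spanKey models
// f.Span.Key from the diff.
type returnedSST struct {
	spanKey string
	sst     []byte
}

// sstSink stands in for the real sink; byteLimit models
// smallFileBuffer.Get(s.conf.settings).
type sstSink struct {
	queue     []returnedSST
	queueSize int
	byteLimit int
}

// push mirrors the post-change condition: only the total buffered byte
// size, not the file count, triggers a flush.
func (s *sstSink) push(resp returnedSST) {
	s.queue = append(s.queue, resp)
	s.queueSize += len(resp.sst)

	if s.queueSize >= s.byteLimit {
		sort.Slice(s.queue, func(i, j int) bool {
			return s.queue[i].spanKey < s.queue[j].spanKey
		})
		// Drain the first (key-ordered) half so later, out-of-order
		// responses can still be stitched with what remains buffered.
		drain := len(s.queue) / 2
		if drain < 1 {
			drain = 1
		}
		for _, r := range s.queue[:drain] {
			s.queueSize -= len(r.sst)
			fmt.Printf("flushing %s (%d bytes)\n", r.spanKey, len(r.sst))
		}
		s.queue = append(s.queue[:0], s.queue[drain:]...)
	}
}

func main() {
	s := &sstSink{byteLimit: 64}
	for i := 0; i < 10; i++ {
		s.push(returnedSST{spanKey: fmt.Sprintf("k%02d", i), sst: make([]byte, 16)})
	}
}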