kvserver: use response data in the load-based splitter
We investigated why running YCSB Workload E results in a single hot
range and observed that range queries of the form
SELECT * FROM table WHERE pkey >= A LIMIT B produce request spans that
all share the same end key - effectively [A, range_end) - rather than
end keys that reflect the specified LIMIT. Because the majority of
request spans share the same end key, the load-based splitter cannot
find a split key that balances requests between the left and right
sides without too many "contained" requests (requests whose spans
straddle the candidate split key). The proposed solution is to use the
response span rather than the request span, since the response span
more accurately reflects the keys the request actually iterated over.
We combine the request span with the response's resume span to derive
the key span the request truly iterated over. Using response data (the
resume span) rather than just the request span in the load-based
splitter (experimentally) allows it to find a split key under range
query workloads (YCSB Workload E, KV workload with spans).
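
To illustrate the derivation, here is a minimal, self-contained Go
sketch (not the actual collectSpansRead implementation; Span and
iteratedSpan are simplified stand-ins for the real types and helpers):

    package main

    import "fmt"

    // Span is a simplified stand-in for roachpb.Span: a half-open key
    // interval [Key, EndKey).
    type Span struct {
        Key, EndKey []byte
    }

    // iteratedSpan is a hypothetical helper illustrating the idea for a
    // forward scan: if the response carries a resume span, iteration
    // stopped at the resume span's start key, so only [Key, resume.Key)
    // was actually read; otherwise the entire request span was read.
    func iteratedSpan(req Span, resume *Span) Span {
        if resume == nil || len(resume.Key) == 0 {
            return req
        }
        return Span{Key: req.Key, EndKey: resume.Key}
    }

    func main() {
        // SELECT * FROM table WHERE pkey >= A LIMIT B sends a scan over
        // [A, range_end), but the LIMIT makes iteration stop early, say
        // at key C; the resume span then covers the unread remainder
        // [C, range_end).
        req := Span{Key: []byte("A"), EndKey: []byte("range_end")}
        resume := &Span{Key: []byte("C"), EndKey: []byte("range_end")}
        got := iteratedSpan(req, resume)
        fmt.Printf("iterated span: [%s, %s)\n", got.Key, got.EndKey)
    }

Feeding [A, C) rather than [A, range_end) into the splitter gives each
limited scan a distinct end key, so candidate split keys no longer have
every request counted as contained.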

Release note (ops change): The load-based splitter now uses response
data, rather than just the request span, to obtain more accurate
information about the keys each request actually iterated over. This
enables it to find a suitable split key under heavy range query
workloads.
KaiSun314 authored and kvoli committed Jun 7, 2023
1 parent e405460 commit 1fb7942
Showing 1 changed file with 7 additions and 6 deletions.
13 changes: 7 additions & 6 deletions pkg/kv/kvserver/replica_send.go
@@ -401,6 +401,12 @@ func (r *Replica) executeBatchWithConcurrencyRetries(
 	var requestEvalKind concurrency.RequestEvalKind
 	var g *concurrency.Guard
 	defer func() {
+		// Handle load-based splitting, if necessary.
+		if pErr == nil {
+			spansRead, _, _ := r.collectSpansRead(ba, br)
+			r.recordBatchForLoadBasedSplitting(ctx, ba, spansRead)
+		}
+
 		// NB: wrapped to delay g evaluation to its value when returning.
 		if g != nil {
 			r.concMgr.FinishReq(g)
@@ -412,7 +418,7 @@ func (r *Replica) executeBatchWithConcurrencyRetries(
 		// commands and wait even if the circuit breaker is tripped.
 		pp = poison.Policy_Wait
 	}
-	for first := true; ; first = false {
+	for {
 		// Exit loop if context has been canceled or timed out.
 		if err := ctx.Err(); err != nil {
 			return nil, nil, roachpb.NewError(errors.Wrap(err, "aborted during Replica.Send"))
@@ -432,11 +438,6 @@ func (r *Replica) executeBatchWithConcurrencyRetries(
 			}
 		}
 
-		// Handle load-based splitting, if necessary.
-		if first {
-			r.recordBatchForLoadBasedSplitting(ctx, ba, latchSpans)
-		}
-
 		// Acquire latches to prevent overlapping requests from executing until
 		// this request completes. After latching, wait on any conflicting locks
 		// to ensure that the request has full isolation during evaluation. This
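
The placement of the new call is worth noting (an inference from the
diff, not stated in the commit message): recording now happens in the
deferred function rather than before latching because the resume spans
consumed by collectSpansRead live on the batch response br, which only
exists once evaluation has completed; the removed call site only had
the request's latch spans available.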
