release-21.1: sql: default to batch size 1 in allocator #62603
Merged
Backport 1/1 commits from #62534.
/cc @cockroachdb/release
In #62282, the estimated row count was passed into the scan batch
allocator to avoid growing the batch from 1. However, this also changed
the default batch size from 1 to 1024 when no row count estimate was
available, giving significant overhead when fetching small result sets.
On `kv95/enc=false/nodes=1/cpu=32` this reduced performance from 66304 ops/s
to 58862 ops/s (median of 5 runs), since these are single-row reads without
estimates.
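For context, here is a minimal sketch of the doubling growth the allocator falls back to when it starts from a batch of 1; the function name, element type, and the 1024 cap are illustrative assumptions, not the allocator's actual code:

```go
package main

import "fmt"

// maxBatchSize caps batch growth; 1024 matches the default batch size
// mentioned above. All names here are illustrative only.
const maxBatchSize = 1024

// growBatch doubles the batch's capacity until it can hold `needed` rows,
// stopping at maxBatchSize. Starting from a capacity of 1, a scan that
// returns many rows pays for several re-allocations and copies, which is
// what passing the row count estimate in (#62282) was meant to avoid.
func growBatch(batch []int64, needed int) []int64 {
	for cap(batch) < needed && cap(batch) < maxBatchSize {
		newCap := 2 * cap(batch)
		if newCap == 0 {
			newCap = 1
		}
		if newCap > maxBatchSize {
			newCap = maxBatchSize
		}
		grown := make([]int64, len(batch), newCap)
		copy(grown, batch)
		batch = grown
	}
	return batch
}

func main() {
	b := make([]int64, 0, 1) // start from a batch of 1
	b = growBatch(b, 100)
	fmt.Println(cap(b)) // 128: doubled 1 -> 2 -> 4 -> ... -> 128
}
```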
This patch reverts the default batch size to 1 when no row count
estimate is available. This fully fixes the `kv95` performance regression.
YCSB/E takes a small hit, going from 1895 ops/s to 1786
ops/s, but this only seems to happen because it takes a while for the
statistics to update: sometime within the first minute of the test
(after the 1-minute ramp-up period), throughput abruptly changes from
~700 ops/s to ~1800 ops/s, so using a 2-minute ramp-up period in
roachtest would mostly eliminate any difference.
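The sizing policy this patch restores can be sketched as follows; `initialBatchSize` and the 1024 cap are hypothetical names used for illustration, not the allocator's actual API:

```go
package main

import "fmt"

// maxBatchSize bounds the initial allocation; 1024 matches the default
// batch size discussed above. Names are illustrative only.
const maxBatchSize = 1024

// initialBatchSize picks the starting capacity for a scan batch: use the
// optimizer's row count estimate when one is available, otherwise start
// at 1 and rely on dynamic growth.
func initialBatchSize(estimatedRowCount uint64) int {
	switch {
	case estimatedRowCount == 0:
		// No estimate: start at 1 so single-row reads (e.g. kv95) do not
		// pay for a 1024-row batch they will never fill.
		return 1
	case estimatedRowCount > maxBatchSize:
		return maxBatchSize
	default:
		return int(estimatedRowCount)
	}
}

func main() {
	fmt.Println(initialBatchSize(0))    // 1: no estimate available
	fmt.Println(initialBatchSize(10))   // 10: sized to the estimate
	fmt.Println(initialBatchSize(5000)) // 1024: capped at the maximum
}
```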
Resolves #62524.
Release note: None