Merge #58014
58014: roachtest/tpcc: don't scatter on each tpccbench search iteration r=nvanbenschoten a=nvanbenschoten

Fixes #48255.
Fixes #53443.
Fixes #54258.
Fixes #54570.
Fixes #55599.
Fixes #55688.
Fixes #55817.
Fixes #55939.
Fixes #56996.
Fixes #57062.
Fixes #57864.

This needs to be backported to `release-20.1` and `release-20.2`

In a comment on #55688, we saw that the failures to create load generators in tpccbench were due to long-running SCATTER operations. These operations weren't stuck, but they were very slow due to the amount of data being moved and the 2MiB/s limit on snapshots. In hindsight, this should have been expected, as scatter has the potential to rebalance data and was being run on datasets on the order of 100s of GBs or even TBs in size.
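
For a rough sense of scale (my numbers, not measurements from the test, and treating the limit as a single serial stream while ignoring per-store parallelism), moving just 100 GiB of replica data at that rate takes

$$
\frac{100\ \text{GiB}}{2\ \text{MiB/s}} = \frac{102{,}400\ \text{MiB}}{2\ \text{MiB/s}} = 51{,}200\ \text{s} \approx 14\ \text{hours},
$$

which is easily long enough to make the load-generator setup step look stuck.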

But this alone did not explain why we used to see this issue infrequently and only recently began seeing it regularly. We determined that the most likely reason it has recently gotten worse is #56942. That PR fixed a race condition in tpcc's `scatterRanges` function which often resulted in 9 scatters of the `warehouse` table instead of 1 scatter of each table in the database. So before that PR, we were often (but not always, due to the racy nature of the bug) avoiding the scatter on all but the dataset's smallest table. After that PR, we were always scattering all 9 tables in the dataset, leading to much larger rebalancing.
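
For illustration, here is a minimal, self-contained sketch of the kind of loop-variable capture race described above. It is an assumption about the shape of the bug, not the actual `scatterRanges` code: the real function issues `ALTER TABLE ... SCATTER` statements, while this toy only records which table each goroutine ends up touching.

```go
// Hedged illustration only: an assumed reconstruction of the kind of
// loop-variable capture race that #56942 fixed in tpcc's scatterRanges.
package main

import (
	"fmt"
	"sync"
)

func main() {
	// tpcc's nine tables; `warehouse` sorts last, which matches the symptom
	// of 9 scatters of `warehouse` instead of 1 scatter per table.
	tables := []string{
		"customer", "district", "history", "item", "new_order",
		"order", "order_line", "stock", "warehouse",
	}

	var (
		mu        sync.Mutex
		scattered = make(map[string]int)
		wg        sync.WaitGroup
	)
	for _, table := range tables {
		wg.Add(1)
		// Bug: with pre-Go-1.22 loop semantics the closure captures the loop
		// variable itself, so goroutines that start after the loop has moved
		// on all read its latest value (and the unsynchronized read is itself
		// a data race). The fix is to rebind per iteration (`table := table`)
		// or to pass the value as an argument to the goroutine.
		go func() {
			defer wg.Done()
			mu.Lock()
			scattered[table]++ // stand-in for "ALTER TABLE <table> SCATTER"
			mu.Unlock()
		}()
	}
	wg.Wait()

	// Often prints something like map[warehouse:9] rather than one entry
	// per table.
	fmt.Println(scattered)
}
```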

To address these issues, this commit removes the per-iteration scattering in tpccbench. Scattering on each search iteration was a misguided decision. It wasn't needed because we already scatter once during dataset import (with a higher `kv.snapshot_rebalance.max_rate`). It was also disruptive, as it had the potential to slow down the test significantly and cause issues like the one we are fixing here.
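
As a side note on the import-time scatter, below is a minimal sketch of the idea of raising `kv.snapshot_rebalance.max_rate` around the one-time import/scatter and then restoring the 2MiB limit discussed above. This is not the roachtest code; the 32MiB value, the connection string, and the use of `lib/pq` are assumptions for illustration.

```go
// Hedged sketch, not the actual roachtest: raise the snapshot rebalance rate
// while the fixture is imported and scattered once, then restore it.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Let rebalancing snapshots move quickly while the fixture is imported
	// and scattered once up front.
	if _, err := db.Exec(
		`SET CLUSTER SETTING kv.snapshot_rebalance.max_rate = '32MiB'`); err != nil {
		log.Fatal(err)
	}

	// ... run `./workload fixtures import tpcc --warehouses=N` here ...

	// Drop back to the 2MiB limit once the data is in place, so steady-state
	// benchmarking isn't perturbed by fast snapshots.
	if _, err := db.Exec(
		`SET CLUSTER SETTING kv.snapshot_rebalance.max_rate = '2MiB'`); err != nil {
		log.Fatal(err)
	}
}
```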

With this change, I've seen `tpccbench/nodes=6/cpu=16/multi-az` go from failing
6 out of 10 times to succeeding 10 out of 10 times. This change appears to have
no impact on performance.

Co-authored-by: Nathan VanBenschoten <[email protected]>
craig[bot] and nvanbenschoten committed Dec 17, 2020
2 parents bdc14ea + 9dc433d commit a5d50b6
Showing 1 changed file with 7 additions and 14 deletions.
pkg/cmd/roachtest/tpcc.go
@@ -613,13 +613,13 @@ func loadTPCCBench(
 	var rebalanceWait time.Duration
 	switch b.LoadConfig {
 	case singleLoadgen:
-		loadArgs = `--scatter --checks=false`
+		loadArgs = `--checks=false`
 		rebalanceWait = time.Duration(b.LoadWarehouses/250) * time.Minute
 	case singlePartitionedLoadgen:
-		loadArgs = fmt.Sprintf(`--scatter --checks=false --partitions=%d`, b.partitions())
+		loadArgs = fmt.Sprintf(`--checks=false --partitions=%d`, b.partitions())
 		rebalanceWait = time.Duration(b.LoadWarehouses/125) * time.Minute
 	case multiLoadgen:
-		loadArgs = fmt.Sprintf(`--scatter --checks=false --partitions=%d --zones="%s"`,
+		loadArgs = fmt.Sprintf(`--checks=false --partitions=%d --zones="%s"`,
 			b.partitions(), strings.Join(b.Distribution.zones(), ","))
 		rebalanceWait = time.Duration(b.LoadWarehouses/50) * time.Minute
 	default:
@@ -646,15 +646,9 @@ func loadTPCCBench(
 	// Split and scatter the tables. Ramp up to the expected load in the desired
 	// distribution. This should allow for load-based rebalancing to help
 	// distribute load. Optionally pass some load configuration-specific flags.
-	method := ""
-	if b.Chaos {
-		// For chaos tests, we don't want to use the default method because it
-		// involves preparing statements on all connections (see #51785).
-		method = "--method=simple"
-	}
 	cmd = fmt.Sprintf("./workload run tpcc --warehouses=%d --workers=%d --max-rate=%d "+
-		"--wait=false --duration=%s --scatter --tolerate-errors %s {pgurl%s}",
-		b.LoadWarehouses, b.LoadWarehouses, b.LoadWarehouses/2, rebalanceWait, method, roachNodes)
+		"--wait=false --duration=%s --scatter --tolerate-errors {pgurl%s}",
+		b.LoadWarehouses, b.LoadWarehouses, b.LoadWarehouses/2, rebalanceWait, roachNodes)
 	if out, err := c.RunWithBuffer(ctx, c.l, loadNode, cmd); err != nil {
 		return errors.Wrapf(err, "failed with output %q", string(out))
 	}
@@ -814,10 +808,9 @@ func runTPCCBench(ctx context.Context, t *test, c *cluster, b tpccBenchSpec) {
 		t.Status(fmt.Sprintf("running benchmark, warehouses=%d", warehouses))
 		histogramsPath := fmt.Sprintf("%s/warehouses=%d/stats.json", perfArtifactsDir, activeWarehouses)
 		cmd := fmt.Sprintf("./workload run tpcc --warehouses=%d --active-warehouses=%d "+
-			"--tolerate-errors --scatter --ramp=%s --duration=%s%s {pgurl%s} "+
-			"--histograms=%s",
+			"--tolerate-errors --ramp=%s --duration=%s%s --histograms=%s {pgurl%s}",
 			b.LoadWarehouses, activeWarehouses, rampDur,
-			loadDur, extraFlags, sqlGateways, histogramsPath)
+			loadDur, extraFlags, histogramsPath, sqlGateways)
 		err := c.RunE(ctx, group.loadNodes, cmd)
 		loadDone <- timeutil.Now()
 		if err != nil {
