roachtest: use m2d.4xlarge for 8tb restore test
We see that on 2xlarge this test likely runs into its EBS bandwidth
limits. The easiest way to avoid that is to switch to a beefier machine,
which doubles the bandwidth limits.

We should also survive being bandwidth-limited, but currently don't do
so reliably; this is tracked in cockroachdb#73376.

Epic: CRDB-25503
Release note: None
tbg committed Mar 16, 2023
1 parent 7f9da8f commit 68812d5
Showing 1 changed file with 6 additions and 1 deletion.
7 changes: 6 additions & 1 deletion pkg/cmd/roachtest/tests/restore.go
@@ -532,7 +532,12 @@ func registerRestore(r registry.Registry) {
 		},
 		{
 			// The nightly 8TB Restore test.
-			hardware: makeHardwareSpecs(hardwareSpecs{nodes: 10, volumeSize: 2000}),
+			//
+			// NB: we use 16 CPUs to get better EBS bandwidth on AWS. With 8 CPUs, we get
+			// a c5d.xlarge at 287.50 MB/s base throughput, with 16 CPUs we get 593.75 MB/s.
+			//
+			// See: https://github.com/cockroachdb/cockroach/issues/97019#issuecomment-1452144587
+			hardware: makeHardwareSpecs(hardwareSpecs{cpus: 16, nodes: 10, volumeSize: 2000}),
 			backup: makeBackupSpecs(backupSpecs{
 				version: "v22.2.1",
 				workload: tpceRestore{customers: 500000}}),
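The arithmetic behind the change can be sketched in Go. The throughput figures are the EBS base-throughput numbers quoted in the diff comment (287.50 MB/s at 8 CPUs, 593.75 MB/s at 16 CPUs); the mapping from vCPU count to a throughput figure is purely illustrative and is not roachtest's actual machine-selection logic.

```go
package main

import "fmt"

// ebsBaseThroughputMBps maps a vCPU count to the EBS base throughput
// (MB/s) quoted in the commit's code comment. Hypothetical lookup table
// for illustration only.
var ebsBaseThroughputMBps = map[int]float64{
	8:  287.50, // smaller instance class
	16: 593.75, // larger instance class
}

func main() {
	ratio := ebsBaseThroughputMBps[16] / ebsBaseThroughputMBps[8]
	fmt.Printf("8 vCPUs: %.2f MB/s\n", ebsBaseThroughputMBps[8])
	fmt.Printf("16 vCPUs: %.2f MB/s\n", ebsBaseThroughputMBps[16])
	fmt.Printf("speedup: %.2fx\n", ratio)
}
```

The ratio works out to just over 2x, which is why the commit message describes the beefier machine as doubling the bandwidth limits.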
