
kvflowcontrol: token return can take a long time with diskBandwidthLimit set #137017

Open
andrewbaptist opened this issue Dec 9, 2024 · 2 comments
Labels
A-replication-admission-control-v2 (Related to introduction of replication AC v2), branch-master (Failures and bugs on the master branch), C-bug (Code not up to spec/doc, specs & docs deemed correct; solution expected to change code/behavior), T-kv (KV Team)

Comments

andrewbaptist (Collaborator) commented Dec 9, 2024

Describe the problem

A cluster with diskBandwidthLimit enabled can take tens of minutes to return elastic tokens on an idle system.

To Reproduce

Run any of the perturbation tests with the disk bandwidth limit set to 350 MiB and observe that token return can take 10+ minutes.

e.g.

PERTURBATION_OVERRIDE=acMode=diskBandwidthLimit=350MiB roachtest run perturbation/dev/addNode -l
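For context, the 350 MiB limit presumably corresponds to the `kvadmission.store.provisioned_bandwidth` cluster setting; a minimal Go sketch of applying it by hand to a running cluster (the connection string is illustrative, and the mapping from the test override to this setting is an assumption):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // CockroachDB speaks the Postgres wire protocol.
)

func main() {
	// Illustrative connection string; adjust host, port, and certs for your cluster.
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Cap the provisioned disk bandwidth so elastic work is paced against it.
	if _, err := db.Exec("SET CLUSTER SETTING kvadmission.store.provisioned_bandwidth = '350MiB'"); err != nil {
		log.Fatal(err)
	}
}
```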

Expected behavior
During the final phase of the test there is no longer any IO, and the CPU and disks sit idle (<5% utilization). The tokens would be expected to return much sooner.

It's not clear whether the problem lies with the disk bandwidth limit itself or with the RACv2 handling of it.

This issue is intended to track the workaround of waiting longer for token returns when this setting is enabled.
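A minimal sketch of the workaround's shape (waitForTokenReturn and its metric source are hypothetical stand-ins, not the actual roachtest helpers): poll the outstanding-token count with a deadline that is extended when the disk bandwidth limit is enabled.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForTokenReturn polls remainingTokens until it reaches zero or the
// deadline expires. With a disk bandwidth limit set, token return has been
// observed to take ~30m even on an idle cluster, so the deadline is extended.
func waitForTokenReturn(remainingTokens func() int64, diskBandwidthLimitSet bool) error {
	timeout := 10 * time.Minute
	if diskBandwidthLimitSet {
		timeout = 40 * time.Minute // workaround: wait longer for slow token return
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if remainingTokens() == 0 {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return errors.New("elastic tokens were not returned before the deadline")
}

func main() {
	// Toy usage: the metric reads zero immediately, so this returns nil.
	err := waitForTokenReturn(func() int64 { return 0 }, true)
	fmt.Println(err)
}
```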

Jira issue: CRDB-45358

@andrewbaptist andrewbaptist added C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. T-kv KV Team A-replication-admission-control-v2 Related to introduction of replication AC v2 labels Dec 9, 2024
blathers-crl bot commented Dec 9, 2024

Hi @andrewbaptist, please add branch-* labels to identify which branch(es) this C-bug affects.

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

@andrewbaptist andrewbaptist added the branch-master Failures and bugs on the master branch. label Dec 9, 2024
andrewbaptist added a commit to andrewbaptist/cockroach that referenced this issue Dec 9, 2024
Previously the test would wait 10m for tokens to be returned. Without
the disk bandwidth limit set, they typically return almost immediately
but with a limit they can take ~30m to return in some cases even after
the workload is stopped and the system is idle. This change fixes some
of the perturbation/metamorphic/* tests that are hitting this slow token
return.

Epic: none
Fixes: cockroachdb#136982
Fixes: cockroachdb#136553
Informs: cockroachdb#137017

Release note: None
andrewbaptist added a commit to andrewbaptist/cockroach that referenced this issue Dec 10, 2024
Previously the perturbation/* tests would wait 10m for tokens to be
returned. Without the disk bandwidth limit set, they typically return
almost immediately but with a limit they can take ~30m to return in some
cases even after the workload is stopped and the system is idle. This
change fixes some of the perturbation/metamorphic/* tests that are
hitting this slow token return. Additionally this change reduces the
token wait time for the test
admission-control/elastic-workload/mixed-version to 1 minute as this
test doesn't typically wait more than a few seconds for token return.

Epic: none
Fixes: cockroachdb#136982
Fixes: cockroachdb#136553
Informs: cockroachdb#137017

Release note: None
andrewbaptist (Collaborator, Author) commented
For the latest run, n12 was restarted at 15:51:47 and all SQL traffic stopped at 15:56:51.

Here is a graph of a test run where it sits for 10m without returning tokens:

Logs:
cockroach.1.log
cockroach.12.log

craig bot pushed a commit that referenced this issue Dec 10, 2024
136258: kvserver: add TestFlowControlSendQueueRangeSplitMerge test  r=sumeerbhola a=kvoli

Add a new rac2 flow control integration test,
`TestFlowControlSendQueueRangeSplitMerge`.

This test takes the following steps:

```sql
-- We will exhaust the tokens across all streams while admission is blocked on
-- n3, using a single 4 MiB (deduction, the write itself is small) write. Then,
-- we will write a 1 MiB put to the range, split it, write a 1 MiB put to the
-- LHS range, merge the ranges, and write a 1 MiB put to the merged range. We
-- expect that at each stage where a send queue develops n1->s3, the send queue
-- will be flushed by the range merge and range split operations.
```

Note that the RHS is not written to post-split, pre-merge. See the
relevant comments; this will be resolved via #136649, or some variation
of it, which enforces timely replication on subsume requests.

Part of: #132614
Release note: None

136648: rpc: reuse gRPC streams across unary BatchRequest RPCs r=tbg a=nvanbenschoten

Closes #136572.

This commit introduces pooling of gRPC streams that are used to send requests and receive corresponding responses in a manner that mimics unary RPC invocation. Pooling these streams allows for reuse of gRPC resources across calls, as opposed to native unary RPCs, which create a new stream and throw it away for each request (see grpc.invoke).

The new pooling mechanism is used for the Internal/Batch RPC method, which is the dominant RPC method used to communicate between the KV client and KV server. A new Internal/BatchStream RPC method is introduced to allow a client to send and receive BatchRequest/BatchResponse pairs over a long-lived, pooled stream. A pool of these streams is then maintained alongside each gRPC connection. The pool grows and shrinks dynamically based on demand.
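The core of the pooling mechanism can be sketched as follows (a minimal illustration, not the actual rpc package code; the stream type is a stand-in for an Internal/BatchStream client stream): keep idle streams in a free list, hand one out per unary-style call, and return it afterwards so the next call skips stream setup.

```go
package main

import (
	"fmt"
	"sync"
)

// stream stands in for an established gRPC client stream; in the real change
// this would be a long-lived Internal/BatchStream stream.
type stream struct{ id int }

// streamPool is a minimal dynamic pool: Get reuses an idle stream if one is
// available and "dials" a new one otherwise; Put returns it for reuse.
type streamPool struct {
	mu   sync.Mutex
	idle []*stream
	next int
}

func (p *streamPool) Get() *stream {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.idle); n > 0 {
		s := p.idle[n-1]
		p.idle = p.idle[:n-1]
		return s
	}
	p.next++
	return &stream{id: p.next} // stand-in for opening a new stream
}

func (p *streamPool) Put(s *stream) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.idle = append(p.idle, s)
}

func main() {
	var p streamPool
	s := p.Get()              // first call pays the stream-setup cost
	p.Put(s)                  // returned to the pool...
	fmt.Println(p.Get() == s) // ...and reused: prints true
}
```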

The change demonstrates a large performance improvement in both microbenchmarks and full system benchmarks, which reveals just how expensive the gRPC stream setup on each unary RPC is.

### Microbenchmarks:
```
name                                            old time/op    new time/op    delta
Sysbench/KV/1node_remote/oltp_point_select-10     45.9µs ± 1%    28.8µs ± 2%  -37.31%  (p=0.000 n=9+8)
Sysbench/KV/1node_remote/oltp_read_only-10         958µs ± 6%     709µs ± 1%  -26.00%  (p=0.000 n=9+9)
Sysbench/SQL/1node_remote/oltp_read_only-10       3.65ms ± 6%    2.81ms ± 8%  -23.06%  (p=0.000 n=8+9)
Sysbench/KV/1node_remote/oltp_read_write-10       1.77ms ± 5%    1.38ms ± 1%  -22.09%  (p=0.000 n=10+8)
Sysbench/KV/1node_remote/oltp_write_only-10        688µs ± 4%     557µs ± 1%  -19.11%  (p=0.000 n=9+9)
Sysbench/SQL/1node_remote/oltp_point_select-10     181µs ± 8%     159µs ± 2%  -12.10%  (p=0.000 n=8+9)
Sysbench/SQL/1node_remote/oltp_write_only-10      2.16ms ± 4%    1.92ms ± 3%  -11.08%  (p=0.000 n=9+9)
Sysbench/SQL/1node_remote/oltp_read_write-10      5.89ms ± 2%    5.36ms ± 1%   -8.89%  (p=0.000 n=9+9)

name                                            old alloc/op   new alloc/op   delta
Sysbench/KV/1node_remote/oltp_point_select-10     16.3kB ± 0%     6.4kB ± 0%  -60.70%  (p=0.000 n=8+10)
Sysbench/KV/1node_remote/oltp_write_only-10        359kB ± 1%     256kB ± 1%  -28.92%  (p=0.000 n=10+10)
Sysbench/SQL/1node_remote/oltp_write_only-10       748kB ± 0%     548kB ± 1%  -26.78%  (p=0.000 n=8+10)
Sysbench/SQL/1node_remote/oltp_point_select-10    40.9kB ± 0%    30.8kB ± 0%  -24.74%  (p=0.000 n=9+10)
Sysbench/KV/1node_remote/oltp_read_write-10       1.11MB ± 1%    0.88MB ± 1%  -21.17%  (p=0.000 n=9+10)
Sysbench/SQL/1node_remote/oltp_read_write-10      2.00MB ± 0%    1.65MB ± 0%  -17.60%  (p=0.000 n=9+10)
Sysbench/KV/1node_remote/oltp_read_only-10         790kB ± 0%     655kB ± 0%  -17.11%  (p=0.000 n=9+9)
Sysbench/SQL/1node_remote/oltp_read_only-10       1.33MB ± 0%    1.19MB ± 0%  -10.97%  (p=0.000 n=10+9)

name                                            old allocs/op  new allocs/op  delta
Sysbench/KV/1node_remote/oltp_point_select-10        210 ± 0%        61 ± 0%  -70.95%  (p=0.000 n=10+10)
Sysbench/KV/1node_remote/oltp_read_only-10         3.98k ± 0%     1.88k ± 0%  -52.68%  (p=0.019 n=6+8)
Sysbench/KV/1node_remote/oltp_read_write-10        7.10k ± 0%     3.47k ± 0%  -51.07%  (p=0.000 n=10+9)
Sysbench/KV/1node_remote/oltp_write_only-10        3.10k ± 0%     1.58k ± 0%  -48.89%  (p=0.000 n=10+9)
Sysbench/SQL/1node_remote/oltp_write_only-10       6.73k ± 0%     3.82k ± 0%  -43.30%  (p=0.000 n=10+10)
Sysbench/SQL/1node_remote/oltp_read_write-10       14.4k ± 0%      9.2k ± 0%  -36.29%  (p=0.000 n=9+10)
Sysbench/SQL/1node_remote/oltp_point_select-10       429 ± 0%       277 ± 0%  -35.46%  (p=0.000 n=9+10)
Sysbench/SQL/1node_remote/oltp_read_only-10        7.52k ± 0%     5.37k ± 0%  -28.60%  (p=0.000 n=10+10)
```

### Roachtests:
```
name                                            old queries/s  new queries/s  delta
sysbench/oltp_read_write/nodes=3/cpu=8/conc=64     17.6k ± 7%     19.2k ± 2%  +9.22%  (p=0.008 n=5+5)

name                                            old avg_ms/op  new avg_ms/op  delta
sysbench/oltp_read_write/nodes=3/cpu=8/conc=64      72.9 ± 7%      66.6 ± 2%  -8.57%  (p=0.008 n=5+5)

name                                            old p95_ms/op  new p95_ms/op  delta
sysbench/oltp_read_write/nodes=3/cpu=8/conc=64       116 ± 8%       106 ± 3%  -9.02%  (p=0.016 n=5+5)
```

### Manual tests:
Running in a similar configuration to `sysbench/oltp_read_write/nodes=3/cpu=8/conc=64`, but with benchmarking-related cluster settings (applied to both before and after) to reduce variance.
```
-- Before
Mean: 19771.03
Median: 19714.22
Standard Deviation: 282.96
Coefficient of variation: 0.0143

-- After
Mean: 21908.23
Median: 21923.03
Standard Deviation: 200.88
Coefficient of variation: 0.0091
```
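For reference, the coefficient of variation is just the standard deviation divided by the mean; a small Go sketch that defines it and checks the reported figures:

```go
package main

import (
	"fmt"
	"math"
)

// cv returns the coefficient of variation (stddev / mean) of xs,
// using the sample standard deviation.
func cv(xs []float64) float64 {
	var sum float64
	for _, x := range xs {
		sum += x
	}
	mean := sum / float64(len(xs))
	var ss float64
	for _, x := range xs {
		ss += (x - mean) * (x - mean)
	}
	return math.Sqrt(ss/float64(len(xs)-1)) / mean
}

func main() {
	// Check the reported figures directly from mean and stddev:
	fmt.Printf("before: %.4f\n", 282.96/19771.03) // ≈ 0.0143
	fmt.Printf("after:  %.5f\n", 200.88/21908.23) // ≈ 0.00917, reported as 0.0091
}
```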

----

Release note (performance improvement): gRPC streams are now pooled across unary intra-cluster RPCs, allowing for reuse of gRPC resources to reduce the cost of remote key-value layer access. This pooling can be disabled using the `rpc.batch_stream_pool.enabled` cluster setting.

137019: roachtest: increase the token return time with disk bandwidth limit r=kvoli a=andrewbaptist

Previously the test would wait 10m for tokens to be returned. Without the disk bandwidth limit set, they typically return almost immediately but with a limit they can take ~30m to return in some cases even after the workload is stopped and the system is idle. This change fixes some of the perturbation/metamorphic/* tests that are hitting this slow token return.

Epic: none
Fixes: #136982
Fixes: #136553
Informs: #137017

Release note: None

137044: kvserver: deflake TestConsistencyQueueRecomputeStats r=miraradeva a=miraradeva

The test manually adds voters and expects a leaseholder to be established before forcing a stats re-computation (which runs on the leaseholder). With leader leases, it might take an extra election timeout for the leader lease to be established after adding the new voters, so the test flaked if the re-computation ran (and failed) before the leaseholder was ready.

This commit makes the test retry the re-computation until a leaseholder is established.
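The retry pattern, as a hedged sketch (retrySoon is a hypothetical stand-in for the testutils retry helpers the test presumably uses):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retrySoon polls fn every interval until it returns nil or the timeout
// elapses, mirroring the "retry until a leaseholder exists" fix.
func retrySoon(timeout, interval time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition never met: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	// Stand-in for "force stats re-computation; fails until a leaseholder is ready".
	err := retrySoon(time.Second, 10*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("no leaseholder yet")
		}
		return nil
	})
	fmt.Println(err, attempts) // <nil> 3
}
```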

Fixes: #136596

Release note: None

137059: catalog/lease: deflake TestDescriptorRefreshOnRetry r=rafiss a=rafiss

The test was flaky since the background thread to refresh leases could run and cause the acquisition counts to be off.

fixes #137033
Release note: None

137099: kvcoord: deflake TestDistSenderReplicaStall r=miraradeva a=miraradeva

The test runs with expiration leases, but when fortification is enabled the lease doesn't move off of the stalled replica, because the deadlocked leader doesn't step down while it's receiving store liveness support.

This commit ensures fortification is off when expiration leases are used for the test.

Fixes: #136564

Release note: None

137118: crosscluster/logical: update udf test to expect at-least-once r=dt a=dt

We don't provide exactly-once delivery, so we don't want to test for it.

Release note: none.
Epic: none.

Co-authored-by: Austen McClernon <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Tobias Grieger <[email protected]>
Co-authored-by: Andrew Baptist <[email protected]>
Co-authored-by: Mira Radeva <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
Co-authored-by: David Taylor <[email protected]>