kvserver: "auto create stats" job should use lower priority for IO #82508
The spikes in read throughput correlate strongly with periods during which the auto create stats job is running. Note that we consume more read bandwidth, and because the device is maxed out, we start "stealing" throughput from writes. This is a look at the same time period that you posted. Zooming out further, we see the same thing, though this time we have much more throughput to consume (we bumped up to 250 MB/s). That said, we still see increased reads stealing some write throughput (note the dips in the green line on the write throughput chart at the bottom when the read throughput increases):
What's left to do in this issue? Downgrade the admission priority level of requests originating from the "auto create stats" job?
This seems like a small, targeted change with minimal risk. Should we try to get it in for v22.2?
We should lower the priority and ensure that it is subject to elastic CPU AC. We don't currently have a way to share read bandwidth in AC.
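To make the proposal concrete, here is a minimal Go sketch of what "lower the priority" means in admission-control terms. It does not use the actual CockroachDB packages; the type names, constant names, and numeric values below mirror the shape of the admission priority and request header machinery and are assumptions for illustration only.

```go
package main

import "fmt"

// WorkPriority mirrors the shape of an admission priority: an int8 where
// lower values queue behind higher ones when the store is overloaded.
type WorkPriority int8

const (
	// BulkNormalPri stands in for a below-user priority that background
	// work such as the auto create stats scan could use; the numeric
	// values here are illustrative, not the real constants.
	BulkNormalPri WorkPriority = -30
	NormalPri     WorkPriority = 0
)

// AdmissionHeader mirrors the per-request metadata that store-side
// admission queues use to order work.
type AdmissionHeader struct {
	Priority   int32
	CreateTime int64 // FIFO tiebreak within a priority level
}

// statsScanHeader builds the header that the stats job's table scans
// would carry after the proposed downgrade from normal priority.
func statsScanHeader(nowNanos int64) AdmissionHeader {
	return AdmissionHeader{
		Priority:   int32(BulkNormalPri),
		CreateTime: nowNanos,
	}
}

func main() {
	fmt.Printf("%+v\n", statsScanHeader(1))
}
```

The point of the downgrade is only ordering under overload: with spare capacity the stats scans run as before, but when the disk or elastic CPU is saturated they wait behind foreground traffic instead of stealing write throughput from it.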
117988: roachtest: admission/follower-overload test improvements r=sumeerbhola a=aadityasondhi

roachtest: fix zone config syntax in ac follower overload test (c80491e). Informs #82508. Release note: None

roachtest: add bandwidth overload test in admission/follower-overload (d0dc296). This aims to simulate read bandwidth-induced overload by running a large kv0 workload on a 3-node cluster, while all the leases are owned by n1 and n2. Informs #82508. Release note: None

Co-authored-by: Aaditya Sondhi <[email protected]>
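For readers unfamiliar with how a test keeps "all the leases owned by n1 and n2", here is a hedged Go sketch of the general approach: set a zone config whose lease preferences exclude the third node. The connection string, database name, and the locality tiers (node=1, node=2) are placeholders; the actual roachtest uses its own cluster setup and zone-config syntax.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres-wire driver; CockroachDB speaks pgwire
)

func main() {
	// Placeholder connection string for a local insecure cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Prefer leaseholders on the localities assumed to correspond to n1
	// and n2; replicas still live on all three nodes, so n3 remains a
	// pure follower that only applies raft log entries.
	if _, err := db.Exec(`ALTER RANGE default CONFIGURE ZONE USING
		num_replicas = 3,
		lease_preferences = '[[+node=1], [+node=2]]'`); err != nil {
		log.Fatal(err)
	}
}
```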
Update here for posterity: we ran a few internal experiments for this, and the reason for overload is saturating the provisioned disk bandwidth. Until we enable disk bandwidth AC (#86857), making changes here will not actually subject this work to admission control for disk bandwidth. The full internal discussion is available here.
#81516 adds the admission/follower-overload/presplit-control roachtest. In this roachtest, a three-node cluster is set up so that two nodes have all leases for a kv0 workload. At the time of writing, kv0 runs with 4 MB/s of goodput (400 rate limit * 10k per write). On AWS (where this run took place), on a default EBS volume with a throughput limit of 125 MB/s and 3000 IOPS (aggregate read+write), this is right at the limit. As a result, n1 and n2 get into mild IO overload territory. It was observed that the nodes with leases consistently read more data from disk (green and orange are n1 and n2).
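A quick back-of-the-envelope sketch in Go of why 4 MB/s of goodput can land "right at the limit" of a 125 MB/s volume: with three replicas on three nodes, each node applies roughly the full write stream to its own LSM, and compactions then rewrite (and re-read) that data repeatedly. The write-amplification factor below is an illustrative assumption, not a measurement from this cluster.

```go
package main

import "fmt"

func main() {
	const (
		writesPerSec  = 400.0    // kv0 rate limit from the test
		bytesPerWrite = 10_000.0 // ~10 KB payload per write
		writeAmp      = 15.0     // assumed LSM write amplification (compactions)
		diskLimitMBps = 125.0    // default EBS gp3 throughput limit
	)
	goodputMBps := writesPerSec * bytesPerWrite / 1e6 // ≈ 4 MB/s
	// Disk writes ≈ goodput × write amplification; compactions also read a
	// comparable volume, and the 125 MB/s limit is aggregate read+write.
	diskWriteMBps := goodputMBps * writeAmp
	fmt.Printf("goodput ≈ %.1f MB/s, est. disk writes ≈ %.0f MB/s of %.0f MB/s limit\n",
		goodputMBps, diskWriteMBps, diskLimitMBps)
}
```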
read MB/s:
Zooming out, we see this pattern:
No splits are occurring at the time. However, the bumps match up well with these bumps in the raft log:
The raft log queue processes replicas at a fixed rate throughout these spikes, so it's unclear if it is now simply contending with read activity or if it is itself the cause of read activity.
Overlaying rate(rocksdb_compacted_bytes_read[$__rate_interval]) onto the bytes read shows that compactions are not the driver of the spiky reads on n1 and n2. Quite the opposite: whenever these spikes occur, compactions can't read as quickly as they would like to.

Jira issue: CRDB-16492
Epic CRDB-37479