storage: diagnose performance issues with tpc-c when not using partitioning #26059
After some initial investigation today, there's one problem so big that it pretty much obscures whatever problems may exist beneath it -- bad replica balance. For the most part, the replicas for any given table mostly fall on the same 3 nodes. The worst offender here is the … table. This could be fixed in a number of ways, such as:
The last one should get the quickest results and we've talked about wanting it anyway, so I'll work on that tomorrow and see where it leaves things. I do have a couple questions about oddities I've noticed, though:
Finally, for future reference (cc @piyush-singh), things that would have been nice to have for the sake of debugging the performance issue:
I ended up manually constructing most of this data with the raft debug endpoint plus a few Python scripts, as well as …
And to be clear, the point of the first paragraph above is that leaseholder rebalancing can't fix the problem by itself if all the warehouse range replicas are on the same 3 nodes.
I'd expect r1 to be mostly idle in the steady state. @nvanbenschoten, @tschottdorf, any theories?
This is not expected. Each warehouse should receive approximately the same qps. No idea what is going on, but it is worth pulling on this thread. It might be indicative of a bug in the load generator.
Heh, I came up with a similar list when debugging tpcc performance issues. I believe there are issues filed for many of these. @piyush-singh, please double-check.
I can't confirm it right now since I tore the cluster down when I was done last night, but I think it could conceivably be due to errors from that node. I recall there being a fairly large (albeit spiky) amount of "Replica Errors" in the UI's graph of RPC errors. It's possible that those were mostly coming from n11's replicas, and that retries were then bumping up the count of batches sent to n11's replicas.
What does "scatters replicas in addition to ranges" mean?
That's surprising. This is probably due to overly-broad cache invalidation when the DistSender gets a NotLeaseHolder error (or a few other things). It sounds like we might be invalidating both meta1 and meta2 when we should only be invalidating meta2.
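To make the meta1/meta2 distinction concrete, here's a minimal Go sketch (illustrative only -- the cache type and `evictForKey` helper are invented, not the actual DistSender range descriptor cache API): on a stale-descriptor error for a data key, only the meta2 entry that produced the descriptor should be dropped, while the meta1 entry that addresses the meta2 range stays valid.

```go
package main

import "fmt"

// Hypothetical two-level addressing cache: meta1 entries locate meta2 ranges,
// and meta2 entries locate user-data ranges. This is not CockroachDB's real API.
type rangeCache struct {
	meta1 map[string]string // key prefix -> meta2 range descriptor
	meta2 map[string]string // key prefix -> data range descriptor
}

// evictForKey models the desired behavior on a NotLeaseHolder/stale-descriptor
// error for a data key: drop only the meta2 entry that produced the stale
// descriptor.
func (c *rangeCache) evictForKey(dataKey string) {
	delete(c.meta2, dataKey)
	// Deliberately do NOT touch c.meta1: the meta2 range itself hasn't moved,
	// so its meta1 addressing entry is still valid. Evicting it too would force
	// the next request to start its lookup from meta1 (range 1), which is the
	// extra r1 traffic being discussed above.
}

func main() {
	c := &rangeCache{
		meta1: map[string]string{"/Table/53": "meta2-range-desc"},
		meta2: map[string]string{"/Table/53": "data-range-desc"},
	}
	c.evictForKey("/Table/53")
	fmt.Println("meta1 entries:", len(c.meta1), "meta2 entries:", len(c.meta2))
}
```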
We will be showing the location of replicas in a debug page shortly (#24855), and @vilterp and I have discussed using this layout to similarly show leaseholder counts. @petermattis mentioned similar requests for information about ranges in #23379 (nodes to which they were replicated, size, load). I'll make a note to think through how best to show ranges in the UI for our last milestone since we'll be making some layout changes anyways. Thanks for the write-up and detailed requests, @a-robinson.
The trace is pretty suspicious for this issue. As you mentioned, we see a large number of QueryTxn requests in the trace. A lot of these look something like:
Note that they're addressed to the pusher's transaction key (see cockroach/pkg/storage/txnwait/queue.go, lines 682 to 684 in f8a7889), which ends up being the first range in the keyspace when `pusher.TxnMeta.Key == nil`. For what it's worth, @spencerkimball has been dealing with issues like this lately with some of his contentionQueue changes.
@vilterp's replica matrix (#24855) would address at least the first two of these needs. Might be worth trying it out here.
Sorry, I guess I can't type late at night. I meant "scatters replicas in addition to leases".
Yes, we would be seeing spurious …
26238: storage: avoid querying pusher where transaction record can't exist r=spencerkimball a=spencerkimball

This change checks whether a txn has a non-nil key before querying it if it's waiting on an extant transaction which owns a conflicting intent. Previously, the code would query the pusher's txn, even if the key was nil, which would just send a spurious `QueryTxn` request to the first range in the keyspace.

See #26059

Release note: None

Co-authored-by: Spencer Kimball <[email protected]>
26245: backport-2.0: storage: avoid querying pusher where transaction record can't exist r=tschottdorf a=tschottdorf

Backport 1/1 commits from #26238. /cc @cockroachdb/release

---

This change checks whether a txn has a non-nil key before querying it if it's waiting on an extant transaction which owns a conflicting intent. Previously, the code would query the pusher's txn, even if the key was nil, which would just send a spurious `QueryTxn` request to the first range in the keyspace.

See #26059

Release note: None

Co-authored-by: Spencer Kimball <[email protected]>
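A minimal Go sketch of the shape of that fix, for illustration (the `txnMeta` type and `maybeQueryPusherTxn` helper are invented here and are not the actual txnwait/queue.go code):

```go
package main

import "fmt"

// txnMeta stands in for the pusher's transaction metadata. In the real code
// the anchor key is where the transaction record lives (or would live).
type txnMeta struct {
	ID  string
	Key []byte // nil means no transaction record can exist yet
}

// maybeQueryPusherTxn mirrors the shape of the fix: skip the QueryTxn RPC
// entirely when the pusher has no anchor key, since the request would only
// be routed to the first range in the keyspace and return nothing useful.
func maybeQueryPusherTxn(pusher txnMeta, send func(key []byte)) bool {
	if pusher.Key == nil {
		return false // nothing to query; avoids the spurious RPC to r1
	}
	send(pusher.Key)
	return true
}

func main() {
	sent := maybeQueryPusherTxn(txnMeta{ID: "a"}, func(key []byte) {
		fmt.Printf("QueryTxn -> range containing %q\n", key)
	})
	fmt.Println("query sent:", sent)
}
```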
Before scattering replicas, running tpc-c 5k on 15 nodes:
After scattering replicas (and leases), running tpc-c 5k on 15 nodes:
So if our goal is to have a usable workaround, scattering replicas basically gets us there. And if we want to remove the manual aspect of it, it looks like it'll be more impactful to work on stats-based replica rebalancing than lease rebalancing.
@a-robinson Nice improvement. The p99 and pMax latencies look higher than what I remember for tpcc-5k. Would be interesting to see how this compares to tpcc-5k with partitioning.
Running with …
Of course, the sample sizes here aren't exactly large, so maybe the difference is just noise. And to follow up on my questions from the other day:
The hottest ranges in the cluster at this point are all from the …
Hey @a-robinson, we can discuss this in a separate issue, but how did you generate the hot ranges table above? Would love to get it into an endpoint so we can show it in the UI.
The hot ranges table came from hottest_ranges.py (https://gist.github.com/a-robinson/54fbaa6628ae9f1ad9c6185ecd28edb9) run on the output of the raft debug endpoint. Disclaimer: the qps numbers are averages over the last 30 minutes, not instantaneous measurements. Also, that raft debug endpoint is very slow (>1 minute) on a large cluster.
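For reference, here's a rough Go sketch of the kind of post-processing such a script does: take per-range stats, sort by QPS, and print the hottest ranges. The input format (a flat JSON array with rangeId/table/qps fields read from a local file) is a made-up stand-in, not the actual raft debug endpoint schema or the gist's script.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"sort"
)

type rangeStats struct {
	RangeID int     `json:"rangeId"`
	Table   string  `json:"table"`
	QPS     float64 `json:"qps"` // averaged over a trailing window, not instantaneous
}

func main() {
	data, err := os.ReadFile("ranges.json") // hypothetical dump of per-range stats
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var ranges []rangeStats
	if err := json.Unmarshal(data, &ranges); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Sort hottest first and print the top 10.
	sort.Slice(ranges, func(i, j int) bool { return ranges[i].QPS > ranges[j].QPS })
	for i, r := range ranges {
		if i == 10 {
			break
		}
		fmt.Printf("r%-6d %-20s %8.1f qps\n", r.RangeID, r.Table, r.QPS)
	}
}
```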
No, they're all represented there except for one outlier at the very end of the table that's responsible for fewer warehouses. Although that's ignoring the bunch of ranges in the table that are responsible for effectively none of the keyspace (e.g. …).
We don't manually split the …
Using partitioning gets basically the same results as with replica scattering:
It looks to me like the tpc-c 5k fixture I'm …
If this is reproducible, can you file a separate issue about this? Seems like something for the bulkio folks to investigate.
As outlined in recent comments on cockroachdb#26059, we need to bring back some form of stats-based rebalancing in order to perform well on TPC-C without manual partitioning and replica placement.

This commit contains a prototype that demonstrates the effectiveness of changing our approach to making rebalancing decisions from making them in the replicate queue, which operates on arbitrarily ordered replicas of the ranges on a store, to making them at a higher level. This prototype makes them at a cluster level by running the logic on only one node, but my real proposal is to make them at the store level.

This change in abstraction reflects what a human would do if asked to even out the load on a cluster given perfect information about everything happening in the cluster:

1. First, determine which stores have the most load on them (or are overfull -- but for the prototype I only considered the one dimension that affects TPC-C the most).
2. Decide whether the most loaded stores are so overloaded that action needs to be taken.
3. Examine the hottest replicas on the store (maybe not the absolute hottest in practice, since moving that one could disrupt user traffic, but in the prototype this seems to work fine) and attempt to move them to under-utilized stores. If this can be done simply by transferring leases to under-utilized stores, then do so. If moving leases isn't enough, then also rebalance replicas from the hottest store to under-utilized stores.
4. Repeat periodically to handle changes in load or cluster membership.

In a real version of this code, the plan is roughly:

1. Each store will independently run its own control loop like this that is only responsible for moving leases/replicas off itself, not off other stores. This avoids needing a centralized coordinator, and will avoid the need to use the raft debug endpoint as long as we start gossiping per-store QPS info, since the store already has details about the replicas on itself.
2. The existing replicate queue will stop making decisions motivated by balance. It will switch to only making decisions based on constraints/diversity/lease preferences, which is still needed since the new store-level logic will only check for store-level balance, not that all replicas' constraints are properly met.
3. The new code will have to avoid violating constraints/diversity/lease preferences.
4. The new code should consider range count, disk fullness, and maybe writes per second as well.
5. In order to avoid making decisions based on bad data, I'd like to extend lease transfers to pass along QPS data to the new leaseholder and preemptive snapshots to pass along WPS data to the new replica. This may not be strictly necessary, as shown by the success of this prototype, but should make for more reliable decision making.

I tested this out on TPC-C 5k on 15 nodes and am able to consistently get 94% efficiency, which is the max I've seen using a build of the workload generator that erroneously includes the ramp-up period in its final stats. The first run with this code only got 85% because it took a couple minutes to make all the lease transfers it wanted, but then all subsequent runs got the peak efficiency while making negligibly few lease transfers.

Note that I didn't even have to implement replica rebalancing to get these results, which oddly contradicts my previous claims. However, I believe that's because I did the initial split/scatter using a binary containing cockroachdb#26438, so the replicas were already better scattered than by default. I ran TPC-C on that build without these changes a couple times, though, and didn't get better than 65% efficiency, so the scatter wasn't the cause of the good results here.

Touches cockroachdb#26059, cockroachdb#17979

Release note: None
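To make the four-step control loop above concrete, here is a minimal, self-contained Go sketch of one pass of such a store-level rebalancer. Everything in it (the storeStats/replicaStats types, the 15% overload threshold, and the transferLease/rebalanceReplica callbacks) is invented for illustration; it is not the prototype's actual code.

```go
package main

import (
	"fmt"
	"sort"
)

type replicaStats struct {
	RangeID int
	QPS     float64
}

type storeStats struct {
	StoreID  int
	QPS      float64
	Replicas []replicaStats // hottest replicas on this store, with per-replica QPS
}

// rebalanceOnce runs one pass of the loop: measure how far this store is above
// the cluster mean, then shed its hottest ranges onto the coldest stores,
// preferring lease transfers and falling back to replica rebalancing.
func rebalanceOnce(local storeStats, others []storeStats,
	transferLease func(rangeID, target int), rebalanceReplica func(rangeID, target int)) {

	// 1. How loaded is this store relative to everyone else?
	total := local.QPS
	for _, s := range others {
		total += s.QPS
	}
	mean := total / float64(len(others)+1)

	// 2. Only act if we're meaningfully overloaded (threshold is arbitrary here).
	const overloadFraction = 1.15
	if local.QPS < mean*overloadFraction {
		return
	}
	excess := local.QPS - mean

	// 3. Walk our hottest replicas and move their load to under-utilized stores.
	sort.Slice(local.Replicas, func(i, j int) bool { return local.Replicas[i].QPS > local.Replicas[j].QPS })
	sort.Slice(others, func(i, j int) bool { return others[i].QPS < others[j].QPS })
	for i, r := range local.Replicas {
		if excess <= 0 || i >= len(others) {
			break
		}
		target := others[i] // coldest stores first
		if target.QPS+r.QPS < mean {
			// Cheap option: the target can absorb the load, so just move the lease.
			transferLease(r.RangeID, target.StoreID)
		} else {
			// Lease transfers alone aren't enough; move the replica too.
			rebalanceReplica(r.RangeID, target.StoreID)
		}
		excess -= r.QPS
	}
	// 4. In the real design this would run periodically on every store.
}

func main() {
	local := storeStats{StoreID: 1, QPS: 3000, Replicas: []replicaStats{{RangeID: 42, QPS: 900}, {RangeID: 7, QPS: 400}}}
	others := []storeStats{{StoreID: 2, QPS: 1000}, {StoreID: 3, QPS: 1100}}
	rebalanceOnce(local, others,
		func(r, t int) { fmt.Printf("transfer lease of r%d to s%d\n", r, t) },
		func(r, t int) { fmt.Printf("rebalance replica of r%d to s%d\n", r, t) })
}
```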
[prototype] storage: Extend new allocator to also move range replicas

With this update, TPC-C 10k on 30 nodes went from overloaded to running at peak efficiency over the course of about 4 hours (the manual partitioning approach takes many hours to move all the replicas as well, for a point of comparison). This is without having to run the replica scatter from cockroachdb#26438. Doing a 5 minute run to get a result that doesn't include all the rebalancing time shows:

_elapsed_______tpmC____efc__avg(ms)__p50(ms)__p90(ms)__p95(ms)__p99(ms)_pMax(ms)
  290.9s     124799.1  97.0%    548.6    486.5    872.4   1140.9   2281.7  10200.5

I think it may have a small bug in it still, since at one point early on one of the replicas from the warehouse table on the node doing the relocating thought that it had 16-17k QPS, which wasn't true by any other metric in the system. Restarting the node fixed it, though. I'm not too concerned about the bug, since I assume I just made a code mistake, not that anything about the approach fundamentally leads to a random SQL table replica getting tens of thousands of QPS.

Range 1 is also back to getting a ton of QPS (~3k) even though I raised the range cache size from 1M to 50M. Looking at slow query traces shows a lot of range lookups, way more than I'd expect given that ranges weren't moving around at the time of the traces.

Release note: None