kvserver: leases thrash on ycsb/b #93540

Closed
kvoli opened this issue Dec 13, 2022 · 2 comments · Fixed by #93555
Labels
A-kv Anything in KV that doesn't belong in a more specific category. C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.

Comments


kvoli commented Dec 13, 2022

Describe the problem

Load-based lease rebalancing is causing leases to thrash when running ycsb/b.

This is causing a perf regression of 15-20%.

To Reproduce

roachtest run 'ycsb/B/nodes=3/cpu=32$'

Expected behavior

Leases don't thrash.

Jira issue: CRDB-22387

kvoli added the C-bug (Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior.) and A-kv (Anything in KV that doesn't belong in a more specific category.) labels Dec 13, 2022
kvoli self-assigned this Dec 13, 2022
kvoli changed the title from "kvserver: leases thrash on ycsb" to "kvserver: leases thrash on ycsb/b" Dec 13, 2022

kvoli commented Dec 13, 2022

This was introduced in #91633; however, it was an issue before, just not to the same extent.

The issue is that gossip is triggered on capacity changes to the lease count. When there are few enough ranges, this triggers on every lease transfer.
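
To make that concrete, here is a minimal, hypothetical sketch of a fractional-change trigger (the names and the exact 1% figure are illustrative assumptions, not the real kvserver code). With only a few dozen leases on a store, a single transfer always crosses the threshold, so every transfer re-gossips the store's capacity:

```go
// Hypothetical sketch, not the actual kvserver implementation: a store
// re-gossips its capacity early whenever its lease count drifts from the
// last gossiped value by more than some fraction.
package main

import (
	"fmt"
	"math"
)

// Assumed 1% trigger, per the description above.
const leaseCountChangeFraction = 0.01

func shouldGossipEarly(lastGossiped, current int) bool {
	delta := math.Abs(float64(current - lastGossiped))
	return delta > leaseCountChangeFraction*float64(lastGossiped)
}

func main() {
	// ~30 leases on the store: moving one lease is a ~3% change, so every
	// transfer triggers an early gossip.
	fmt.Println(shouldGossipEarly(30, 29)) // true
	// ~3000 leases: one transfer is well under 1%, so gossip waits for the
	// periodic interval instead.
	fmt.Println(shouldGossipEarly(3000, 2999)) // false
}
```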

The store_rebalancer also locally updates its own store descriptor and the target's after a lease transfer, adjusting both by the QPS of the range.

These two mechanisms race: gossip arrives with stale values immediately following the transfer, and soon after the store_pool is also updated locally. This leaves the store_pool in an end state that is inconsistent w.r.t. the actual load on the stores.
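
Below is a minimal sketch of that interleaving, assuming a simplified storepool keyed by store ID (the map, the QPS values, and the ordering are illustrative, not the real storepool code):

```go
// Hypothetical sketch of the race described above, not the actual storepool
// code. The rebalancer's local post-transfer adjustment and an in-flight
// gossip carrying pre-transfer load can apply in either order; when the
// stale gossip lands last, the storepool "forgets" the transfer.
package main

import "fmt"

type storePool map[string]float64 // storeID -> QPS as seen by the allocator

func main() {
	pool := storePool{"s1": 30599, "s2": 7568}
	rangeQPS := 699.0

	// Gossip value captured before the transfer took effect on the leaseholder.
	staleGossipS1 := pool["s1"]

	// The store rebalancer transfers the lease and locally moves the range's
	// QPS from the source store to the target.
	pool["s1"] -= rangeQPS
	pool["s2"] += rangeQPS
	fmt.Printf("after local update: %v\n", pool)

	// The stale gossip arrives and overwrites the local adjustment.
	pool["s1"] = staleGossipS1
	fmt.Printf("after stale gossip: %v\n", pool)
	// s1 is back to its pre-transfer QPS while s2 also carries the moved
	// load, so the rebalancer keeps seeing s1 as overloaded and transfers
	// again, matching the repeated decisions in the log below.
}
```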

With additional logging this is shown below:

I221213 21:22:29.324567 546 13@kv/kvserver/allocator/storepool/store_pool.go:662 ⋮ [n1] 3378  storepool update after lease transfer from 1 qps=30599.417506 to 2 qps=7568.621903
I221213 21:22:29.324635 546 13@kv/kvserver/store_rebalancer.go:656 ⋮ [n1,s1,store-rebalancer] 3379  considering lease transfer for r124 with 698.99 qps
I221213 21:22:29.324713 546 13@kv/kvserver/store_rebalancer.go:711 ⋮ [n1,s1,store-rebalancer] 3380  transferring lease for r124 (qps=698.99) to store s2 (qps=7568.62) from local store s1 (qps=30599.42)
I221213 21:22:29.324736 546 13@kv/kvserver/replicate_queue.go:2036 ⋮ [n1,s1,store-rebalancer] 3381  transferring lease to s2
I221213 21:22:29.324930 143 13@kv/kvserver/allocator/storepool/store_pool.go:542 ⋮ [n1] 3382  received gossip info from 2, qps: 6834.766041
I221213 21:22:29.325462 143 13@kv/kvserver/allocator/storepool/store_pool.go:542 ⋮ [n1] 3383  received gossip info from 2, qps: 6834.766041
I221213 21:22:29.327252 143 13@kv/kvserver/allocator/storepool/store_pool.go:542 ⋮ [n1] 3384  received gossip info from 1, qps: 31333.273369
I221213 21:22:29.327284 546 13@kv/kvserver/allocator/storepool/store_pool.go:662 ⋮ [n1] 3385  storepool update after lease transfer from 1 qps=30634.286033 to 2 qps=7533.753376
I221213 21:22:29.327316 546 13@kv/kvserver/store_rebalancer.go:656 ⋮ [n1,s1,store-rebalancer] 3386  considering lease transfer for r115 with 647.61 qps
I221213 21:22:29.327373 546 13@kv/kvserver/store_rebalancer.go:711 ⋮ [n1,s1,store-rebalancer] 3387  transferring lease for r115 (qps=647.61) to store s2 (qps=7533.75) from local store s1 (qps=30634.29)

This issue previously existed in the store pool; however, the store rebalancer was unaffected because it kept a local copy of its own QPS and max threshold when considering rebalancing. It would still pick suboptimal targets due to the store pool being inconsistent.

The robust resolution to this class of inconsistency issues in the state used in allocation decisions is #93532.

A shorter-term solution is to increase the capacity-change gossip countdowns to a more reasonable number than 1%.
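
As a sketch of the intended effect (the 1% figure is from the description above; the 5% value here is purely illustrative and not necessarily what #93555 chose): with a larger threshold, a single lease transfer no longer triggers an early gossip, so only the local storepool adjustment applies and the state stays consistent until the next periodic gossip.

```go
// Hypothetical sketch of raising the capacity-change gossip threshold; the
// names and values are assumptions, not the real cluster settings.
package main

import (
	"fmt"
	"math"
)

func crossesThreshold(last, cur, fraction float64) bool {
	return math.Abs(cur-last) > fraction*last
}

func main() {
	lastGossipedLeases, curLeases := 50.0, 49.0

	// Old behavior (assumed ~1% trigger): every transfer re-gossips,
	// racing the rebalancer's local storepool update.
	fmt.Println(crossesThreshold(lastGossipedLeases, curLeases, 0.01)) // true

	// Raised threshold (illustrative 5%): the single-lease delta stays below
	// the trigger, so no stale gossip is sent and the local update stands.
	fmt.Println(crossesThreshold(lastGossipedLeases, curLeases, 0.05)) // false
}
```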


kvoli commented Dec 13, 2022

I have a patch, #93555, which resolves this issue.


craig bot pushed a commit that referenced this issue Dec 14, 2022
93555: kvserver: gossip less aggressively on capacity +/- r=shralex a=kvoli

Gossip occurs periodically and on capacity changes, i.e. when the lease count, range count, queries per second, or writes per second has changed since the last gossiped value by more than some threshold.

This, however, causes issues with the store pool state when there are frequent capacity changes due to rebalancing: the storepool state becomes inconsistent when gossip and local updates race. This induces thrashing in highly loaded clusters.

This patch reduces the likelihood of storepool races by increasing the change threshold that a capacity delta must exceed in order to trigger re-gossiping earlier than the default interval (10s).

resolves #93540

Release note: None

Co-authored-by: Austen McClernon <[email protected]>
craig bot closed this as completed in d0c809f Dec 14, 2022
blathers-crl bot pushed a commit that referenced this issue Dec 14, 2022