sqlstats: increase default value for deleted rows #97642
Conversation
It looks like your PR touches production code but doesn't add or edit any test code. Did you consider adding tests to your PR? 🦉 Hoot! I am Blathers, a bot for CockroachDB. My owner is dev-inf.
During the sql stats compaction job, we limit the number of rows being deleted per transaction. We used a default value of 1024, but we have increasingly been seeing customers needing to raise this value to allow the job to keep up with the large amount of data being flushed.

We have been recommending a value of 20k, so to be more conservative with the default (plus the changes in cockroachdb#97123 that won't let tables get into a state with so many rows), this commit changes the value to 10k.

Fixes cockroachdb#97528

Release note (sql change): Increase the default value of `sql.stats.cleanup.rows_to_delete_per_txn` to 10k, to increase the efficiency of the cleanup job for sql statistics.
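For reference, a minimal sketch of what the new default could look like in the settings registration. The variable name and the exact `RegisterIntSetting` signature here are assumptions (they vary across CockroachDB releases); only the setting name and the 10k default come from this PR:

```go
package sqlstats

import "github.com/cockroachdb/cockroach/pkg/settings"

// rowsToDeletePerTxn caps how many rows each delete transaction of the
// sql stats compaction job may remove. Name and signature are
// illustrative, not the literal code in this PR.
var rowsToDeletePerTxn = settings.RegisterIntSetting(
	settings.TenantWritable,
	"sql.stats.cleanup.rows_to_delete_per_txn",
	"number of rows the compaction job deletes per transaction",
	10000, // raised from the previous default of 1024
)
```

Operators who still need a larger batch can override the default at runtime with `SET CLUSTER SETTING sql.stats.cleanup.rows_to_delete_per_txn = 20000;`.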
Force-pushed from 1318e4e to ce9f3b6
Just curious, how did we decide on 10k? Did we test this value under a workload?
Reviewed 1 of 1 files at r1, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @maryliag)
-- commits
line 17 at r1:
Should we create an issue to make this dynamic in the future? Have a min and max value, then have the actual number change based on the time it takes to delete the rows, or some other metric where it would be better to have a smaller number.
> how did we decide on 10k?
During escalations we did testing with heavy workloads and have been recommending that users change it to 20k instead (as I mentioned in the PR description). Since the majority of users won't have such a heavy workload, I chose a slightly more conservative value of half our recommendation, so 10k, which is already 10x better than the current default value.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @j82w)
Previously, j82w (Jake) wrote…
> Should we create an issue to make this dynamic in the future? Have a min and max value, then have the actual number change based on the time it takes to delete the rows, or some other metric where it would be better to have a smaller number.
issue created: #97712
Code quote:
`sql.stats.cleanup.rows_to_delete_per_txn`
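For context, here's a hypothetical Go sketch of the dynamic limit idea now tracked in #97712. Every name and threshold below is invented for illustration and does not exist in CockroachDB: the batch size is clamped between a min and a max, shrinking when deletes run slow and growing when they run fast.

```go
package main

import (
	"fmt"
	"time"
)

const (
	minRowsPerTxn = 1024        // floor: the old default
	maxRowsPerTxn = 20000       // ceiling: the escalation recommendation
	targetTxnTime = time.Second // assumed latency budget per delete txn
)

// nextBatchSize picks the row limit for the next delete transaction
// based on how long the previous one took, clamped to [min, max].
func nextBatchSize(cur int64, lastDelete time.Duration) int64 {
	switch {
	case lastDelete > targetTxnTime:
		cur /= 2 // deletes are slow: back off
	case lastDelete < targetTxnTime/2:
		cur *= 2 // deletes are fast: ramp up
	}
	if cur < minRowsPerTxn {
		cur = minRowsPerTxn
	}
	if cur > maxRowsPerTxn {
		cur = maxRowsPerTxn
	}
	return cur
}

func main() {
	fmt.Println(nextBatchSize(10000, 3*time.Second))        // 5000: slow delete, back off
	fmt.Println(nextBatchSize(10000, 100*time.Millisecond)) // 20000: fast delete, clamped at max
}
```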
bors r+

Build succeeded: