sqlstats: increase default value for deleted rows
During the sql stats compaction job, we limit the number of
rows deleted per transaction. We used a default value
of 1024, but we have increasingly seen customers needing
to raise this value to allow the job to keep up with the
large amount of data being flushed.
We have been recommending a value of 20k, so, being more
conservative with the default (plus the changes in cockroachdb#97123
that won't let tables get into a state with so many rows),
this commit changes the value to 10k.

Fixes cockroachdb#97528

Release note (sql change): Increase the default value of
`sql.stats.cleanup.rows_to_delete_per_txn` to 10k, to improve
the efficiency of the cleanup job for sql statistics.
maryliag committed Feb 24, 2023
1 parent 4eb5451 commit ce9f3b6
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion pkg/sql/sqlstats/persistedsqlstats/cluster_settings.go
@@ -126,6 +126,6 @@ var CompactionJobRowsToDeletePerTxn = settings.RegisterIntSetting(
 	settings.TenantWritable,
 	"sql.stats.cleanup.rows_to_delete_per_txn",
 	"number of rows the compaction job deletes from system table per iteration",
-	1024,
+	10000,
 	settings.NonNegativeInt,
 )
