release-23.1: sql: make transaction_rows_read_err prevent large scans #104364
Backport 1/1 commits from #104290.
/cc @cockroachdb/release
Prior to this commit, setting `transaction_rows_read_err` to a non-zero value would cause a transaction to fail as soon as a statement caused the total number of rows read to exceed `transaction_rows_read_err`. However, it was possible for each statement to read many more than `transaction_rows_read_err` rows. This commit adds logic so that a single scan never reads more than `transaction_rows_read_err+1` rows if `transaction_rows_read_err` is set.

Informs #70473
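As a rough illustration of the intended behavior (a hypothetical session, not output from this PR's tests; the exact error wording may differ):

```sql
-- Cap the total number of rows a transaction may read before erroring.
SET transaction_rows_read_err = 1000;

-- Before this change, a single large scan could read far more than
-- 1000 rows before the transaction failed. With this change, the scan
-- itself is limited to transaction_rows_read_err+1 rows, so the
-- statement errors promptly instead of completing a large scan first.
SELECT * FROM t;  -- fails once more than 1000 rows are read
```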
Release note (performance improvement): If `transaction_rows_read_err` is set to a non-zero value, we now ensure that any single scan never reads more than `transaction_rows_read_err+1` rows. This prevents transactions that would error due to the `transaction_rows_read_err` setting from causing a large performance overhead due to large scans.

Release justification: low-risk fix to reduce the likelihood of OOMs for customers