sql: add sql.mutations.max_row_size.err guardrail (large row errors)
Addresses: cockroachdb#67400

Add sql.mutations.max_row_size.err, a new cluster setting similar to sql.mutations.max_row_size.log, which limits the size of rows written to the database. Statements trying to write a row larger than this will fail with an error. (Internal queries will not fail with an error, but will log a LargeRowInternal event to the SQL_INTERNAL_PERF channel.) We're reusing eventpb.CommonLargeRowDetails as the error type, out of convenience.

Release note (ops change): A new cluster setting, sql.mutations.max_row_size.err, was added, which limits the size of rows written to the database (or individual column families, if multiple column families are in use). Statements trying to write a row larger than this will fail with a code 54000 (program_limit_exceeded) error. (Internal queries writing a row larger than this will not fail, but will log a LargeRowInternal event to the SQL_INTERNAL_PERF channel.) This limit is enforced for INSERT, UPSERT, and UPDATE statements. CREATE TABLE AS, CREATE INDEX, ALTER TABLE, ALTER INDEX, IMPORT, and RESTORE will not fail with an error, but will log LargeRowInternal events to the SQL_INTERNAL_PERF channel. SELECT, DELETE, TRUNCATE, and DROP are not affected by this limit.

**Note that existing rows violating the limit *cannot* be updated, unless the update shrinks the size of the row below the limit, but *can* be selected, deleted, altered, backed up, and restored.** For this reason we recommend using the accompanying setting sql.mutations.max_row_size.log in conjunction with SELECT pg_column_size() queries to detect and fix any existing large rows before lowering sql.mutations.max_row_size.err.

Release justification: Low risk, high benefit change to existing functionality. This causes statements adding large rows to fail with an error. Default is 512 MiB, which was the maximum KV size in 20.2 as of cockroachdb#61818 and also the default range_max_bytes in 21.1, meaning rows larger than this were not possible in 20.2 and are not going to perform well in 21.1.
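To make that recommendation concrete, here is a minimal sketch of the pre-flight check an operator might run before lowering the error threshold. The table and column names (maxrow, i, s) are borrowed from the test file below, and the 16KiB threshold is only an example value; pg_column_size and the two cluster settings are the ones named in this commit.

-- Log (but do not reject) rows larger than the prospective limit.
SET CLUSTER SETTING sql.mutations.max_row_size.log = '16KiB';

-- Find existing rows whose encoded column value already exceeds the
-- prospective limit, so they can be shrunk or removed first.
SELECT i, pg_column_size(s) AS size_bytes
  FROM maxrow
 WHERE pg_column_size(s) > 16 * 1024
 ORDER BY size_bytes DESC;

-- Only once no offending rows remain:
SET CLUSTER SETTING sql.mutations.max_row_size.err = '16KiB';

Note that pg_column_size measures a single column's encoded size, while the guardrail applies to the whole row (or column family), so this check is a lower bound on the row size the setting will enforce.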
Showing 11 changed files with 361 additions and 71 deletions.
@@ -0,0 +1,50 @@
new-server name=m1
----

exec-sql
CREATE DATABASE orig;
USE orig;
CREATE TABLE maxrow (i INT PRIMARY KEY, s STRING);
INSERT INTO maxrow VALUES (1, repeat('x', 20000));
----

query-sql
SELECT i, pg_column_size(s) FROM maxrow ORDER BY i;
----
1 20004

exec-sql
SET CLUSTER SETTING sql.mutations.max_row_size.err = '16KiB';
----

query-sql
INSERT INTO maxrow VALUES (2, repeat('x', 20000))
----
pq: row larger than max row size: table 55 family 0 primary key /Table/55/1/2/0 size 20013

exec-sql
BACKUP maxrow TO 'nodelocal://1/maxrow';
CREATE DATABASE d2;
RESTORE maxrow FROM 'nodelocal://1/maxrow' WITH into_db='d2';
----

query-sql
SELECT i, pg_column_size(s) FROM d2.maxrow ORDER BY i;
----
1 20004

query-sql
INSERT INTO d2.maxrow VALUES (2, repeat('y', 20000));
----
pq: row larger than max row size: table 57 family 0 primary key /Table/57/1/2/0 size 20013

exec-sql
SET CLUSTER SETTING sql.mutations.max_row_size.err = default;
INSERT INTO d2.maxrow VALUES (2, repeat('y', 20000));
----

query-sql
SELECT i, pg_column_size(s) FROM d2.maxrow ORDER BY i;
----
1 20004
2 20004