sql: fix pagination in UPSERT #51608
Merged
Conversation
This commit removes a couple of duplicated "init" calls as well as some unused parameters around upsert code. Release note: None
`optTableUpserter` was incorrectly overriding the `curBatchSize` method by returning `insertRows.Len`, but that container is not actually used anywhere. As a result, `curBatchSize` was always considered 0, and we didn't perform pagination on UPSERTs. The bug was introduced in #33339 (in 19.1.0). Release note (bug fix): Previously, CockroachDB could hit a "command is too large" error when performing an UPSERT operation with many values. Internally, we attempt to perform such an operation by splitting it into "batches", but the batching mechanism was broken.
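The shape of the bug can be pictured with a minimal Go sketch (the type and method names below are illustrative, not CockroachDB's actual internals): a writer decides when to flush its in-flight KV batch by consulting `curBatchSize`, so if that method reads an always-empty container and reports 0, the batch is never flushed and grows without bound.

```go
package main

import "fmt"

// upserter is a hypothetical stand-in for a paginating table writer.
type upserter struct {
	pending      int // rows accumulated in the current KV batch
	maxBatchSize int
	flushes      int
}

// curBatchSize reports how many rows are in the in-flight batch (correct).
func (u *upserter) curBatchSize() int { return u.pending }

// buggyCurBatchSize mimics the bug: it reads an always-empty container.
func (u *upserter) buggyCurBatchSize() int { return 0 }

// addRow buffers one row and flushes whenever the reported batch size
// reaches the limit.
func (u *upserter) addRow(size func() int) {
	u.pending++
	if size() >= u.maxBatchSize {
		u.flushes++ // send the KV batch to the storage layer
		u.pending = 0
	}
}

func main() {
	fixed := &upserter{maxBatchSize: 3}
	broken := &upserter{maxBatchSize: 3}
	for i := 0; i < 10; i++ {
		fixed.addRow(fixed.curBatchSize)
		broken.addRow(broken.buggyCurBatchSize)
	}
	// With the fix, 10 rows split into batches; with the bug, no flush
	// ever happens and a single oversized batch is sent at the end.
	fmt.Println(fixed.flushes, broken.flushes)
}
```

With a size callback that always returns 0, all rows end up in one batch, which is exactly what trips the "command is too large" limit on the real system.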
RaduBerinde approved these changes on Jul 20, 2020
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten and @RaduBerinde)
TFTR! bors r+
Build succeeded
yuzefovich added a commit to yuzefovich/cockroach that referenced this pull request on Sep 16, 2020
In cockroachdb#51608 we fixed a bug with pagination of UPSERTs (now it is possible to have multiple batches when performing an UPSERT of over 10k rows), and it exposed another bug in how we handle an UPSERT with a RETURNING clause: we were clearing the row container too early, which would result in an index out of bounds crash. This is now fixed. Release note (bug fix): Starting from v20.2.0-alpha.3, CockroachDB would crash when performing an UPSERT with a RETURNING clause on more than 10k rows; this is now fixed.
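The crash mode described here can be sketched in a few lines of Go (purely illustrative names, assuming a slice-backed buffer rather than CockroachDB's actual row container): if the buffer is cleared between batches while a later consumer still indexes into it by absolute row position, the read panics with an index-out-of-range error.

```go
package main

import "fmt"

// runUpsert simulates buffering rows for a RETURNING clause while writing
// them in batches. With clearTooEarly set, the buffer is reset after each
// batch even though the consumer below still expects every row.
func runUpsert(rows, batchSize int, clearTooEarly bool) (returned int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("crash: %v", r)
		}
	}()
	buf := make([][]int, 0, rows)
	for i := 0; i < rows; i++ {
		buf = append(buf, []int{i})
		if clearTooEarly && len(buf)%batchSize == 0 {
			buf = buf[:0] // bug: rows are discarded before they are returned
		}
	}
	// Read the buffered rows back to serve the RETURNING clause.
	for i := 0; i < rows; i++ {
		_ = buf[i]
		returned++
	}
	return returned, nil
}

func main() {
	n, err := runUpsert(10, 3, false)
	fmt.Println(n, err) // all rows returned, no error
	n, err = runUpsert(10, 3, true)
	fmt.Println(n, err) // index out of range once the buffer was cleared
}
```

The fix in #54478 amounts to keeping the returned rows alive until the consumer is done with them, rather than clearing the container at batch boundaries.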
yuzefovich added a commit to yuzefovich/cockroach that referenced this pull request on Sep 16, 2020
In cockroachdb#51608 we fixed a bug with pagination of UPSERTs (now it is possible to have multiple batches when performing an UPSERT of over 10k rows), and it exposed another bug in how we handle an UPSERT with a RETURNING clause: we were clearing the row container too early, which would result in an index out of bounds crash. This is now fixed. Release note (bug fix): Starting from v20.2.0-alpha.3, CockroachDB would crash when performing an UPSERT with a RETURNING clause on more than 10k rows; this is now fixed.
craig bot pushed a commit that referenced this pull request on Sep 16, 2020
54420: delegate: implement SHOW GRANTS ON SCHEMA r=arulajmani a=otan

Resolves #53570

Release note (sql change): Implement `SHOW GRANTS ON SCHEMA <schema_list>`.

54478: sql: fix large UPSERTs with RETURNING r=yuzefovich a=yuzefovich

In #51608 we fixed a bug with pagination of UPSERTs (now it is possible to have multiple batches when performing an UPSERT of over 10k rows), and it exposed another bug in how we handle an UPSERT with a RETURNING clause: we were clearing the row container too early, which would result in an index out of bounds crash. This is now fixed.

Fixes: #54465.

Release note (bug fix): Starting from v20.2.0-alpha.3, CockroachDB would crash when performing an UPSERT with a RETURNING clause on more than 10k rows; this is now fixed.

Co-authored-by: Oliver Tan <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
yuzefovich added a commit to yuzefovich/cockroach that referenced this pull request on Sep 17, 2020
In cockroachdb#51608 we fixed a bug with pagination of UPSERTs (now it is possible to have multiple batches when performing an UPSERT of over 10k rows), and it exposed another bug in how we handle an UPSERT with a RETURNING clause: we were clearing the row container too early, which would result in an index out of bounds crash. This is now fixed. Release note (bug fix): Starting from v20.2.0-alpha.3, CockroachDB would crash when performing an UPSERT with a RETURNING clause on more than 10k rows; this is now fixed.
sql: minor cleanup around upsert
This commit removes a couple of duplicated "init" calls as well as some
unused parameters around upsert code.
Release note: None
sql: fix pagination in UPSERT

`optTableUpserter` was incorrectly overriding the `curBatchSize` method by
returning `insertRows.Len`, but that container is not actually used
anywhere. As a result, `curBatchSize` was always considered 0, and we
didn't perform pagination on UPSERTs. The bug was introduced in
#33339 (in 19.1.0).

Fixes: #51391.

Release note (bug fix): Previously, CockroachDB could hit a "command is
too large" error when performing an UPSERT operation with many values.
Internally, we attempt to perform such an operation by splitting it into
"batches", but the batching mechanism was broken.