release-22.1.0: colexec: fix sort chunks with disk spilling in very rare circumstances #80715
Conversation
Thanks for opening a backport. Please check the backport criteria before merging. If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied. Add a brief release justification to the body of your PR to justify this backport.
(and also add to #release-backports)
Reviewed 7 of 7 files at r1, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @cucaroach and @msirek)
Reviewable status: complete! 2 of 0 LGTMs obtained (waiting on @cucaroach)
Backport 1/1 commits from #80679 on behalf of @yuzefovich.
/cc @cockroachdb/release
This commit fixes a long-standing but very rare bug which could result
in some rows being dropped when sort chunks ("segmented sort") spills
to disk.
The root cause is that a deselector operator is placed on top of the
input to the sort chunks op (because its "chunker" spooler assumes no
selection vector on batches), and that deselector uses the same
allocator as the sort chunks. If the allocator's budget is used up, then
an error is thrown, and it is caught by the disk-spilling infrastructure
that is wrapping this whole `sort chunks -> chunker -> deselector` chain; the error is then suppressed, and spilling to disk occurs.
However, crucially, it was always assumed that the error occurred in `chunker`, so only that component knows how to properly perform the fallover. If the error occurs in the deselector, the deselector might end up losing a single input batch.
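To make the hazard concrete, here is a minimal, self-contained Go sketch. This is not CockroachDB's actual `colmem`/`colexec` API; `budgetAllocator`, `deselector`, and the spilling stand-in below are all hypothetical. It shows how a budget error raised *after* the deselector has consumed a batch leaves that batch unrecoverable by the component that catches the error:

```go
package main

import (
	"errors"
	"fmt"
)

// errBudgetExceeded stands in for the "memory budget exceeded" error.
var errBudgetExceeded = errors.New("memory budget exceeded")

// budgetAllocator panics once its budget would be exceeded, mimicking
// how an allocator signals the disk-spilling infrastructure.
type budgetAllocator struct {
	used, budget int
}

func (a *budgetAllocator) alloc(n int) {
	if a.used+n > a.budget {
		panic(errBudgetExceeded)
	}
	a.used += n
}

// deselector pulls a batch from its input, then allocates space for a
// dense copy. Note the ordering: the input batch is consumed BEFORE the
// allocation that might panic.
type deselector struct {
	alloc *budgetAllocator
	input func() []int
}

func (d *deselector) next() []int {
	batch := d.input()        // the batch is consumed from the input here...
	d.alloc.alloc(len(batch)) // ...but this may panic, losing the batch
	out := make([]int, len(batch))
	copy(out, batch)
	return out
}

func main() {
	shared := &budgetAllocator{budget: 2} // shared with "sort chunks"
	batches := [][]int{{1, 2, 3}, {4, 5}}
	i := 0
	d := &deselector{alloc: shared, input: func() []int {
		b := batches[i]
		i++
		return b
	}}
	// A stand-in for the disk-spilling wrapper: it catches the budget
	// error and would fall back to disk, but only the chunker's buffered
	// state survives -- the batch the deselector already read is gone.
	func() {
		defer func() {
			if r := recover(); r != nil {
				fmt.Printf("caught %v; batch %v was already consumed\n", r, batches[0])
			}
		}()
		d.next()
	}()
}
```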
We worked around this by making a fake allocation in the deselector
before reading the input batch. However, if the stars align, and the
error occurs after reading the input batch in the deselector, that
input batch will be lost, and we might get incorrect results.
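Under the same hypothetical types as the sketch above, the old workaround looked roughly like this; it narrows the window but does not close it:

```go
// nextWithFakeAlloc sketches the old workaround: make a small "fake"
// allocation before reading the input batch so that, if the budget is
// already exhausted, the error fires while the batch is still safely in
// the input. The allocation for the dense copy afterwards can still
// exceed the budget, which is exactly the rare case that lost a batch.
func (d *deselector) nextWithFakeAlloc() []int {
	d.alloc.alloc(1)          // fake allocation: fail early if the budget is gone
	batch := d.input()        // the batch is consumed here
	d.alloc.alloc(len(batch)) // "stars align": this can still panic
	out := make([]int, len(batch))
	copy(out, batch)
	return out
}
```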
For the bug to occur a couple of conditions need to be met:
1. The "memory budget exceeded" error must occur for the sort chunks operation. It is far more likely that it will occur in the "chunker" because that component can buffer an arbitrarily large number of tuples and because we did make that fake allocation.
2. The input operator to the chain must be producing batches with selection vectors on top - if this is not the case, then the deselector is a noop. An example of such an input is a table reader with a filter on top (see the sketch after this list).
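For condition 2, a selection vector is a layer of indirection over a batch: filtered-out rows stay in the column data, and only a list of surviving indices changes. A hedged Go sketch (a hypothetical `batch` layout, not the actual `coldata.Batch`) of what the deselector densifies:

```go
// batch is a hypothetical simplification of a columnar batch: col holds
// all values, including rows a filter has already rejected, and sel
// lists the indices that are still live (nil means "all rows live").
type batch struct {
	col []int64
	sel []int
}

// deselect materializes the dense form of b, which is what the
// deselector produces so the chunker never sees a selection vector.
func deselect(b batch) []int64 {
	if b.sel == nil {
		return b.col
	}
	out := make([]int64, len(b.sel))
	for i, idx := range b.sel {
		out[i] = b.col[idx] // copy only the rows the filter kept
	}
	return out
}
```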
The fix is quite simple - use a separate allocator for the deselector
that has an unlimited budget. This allows us to still properly track the
memory usage of an extra batch created in the deselector without it
running into these difficulties with disk spilling. This also makes it
so that if a "memory budget exceeded" error does occur in the deselector
(which is possible if `--max-sql-memory` has been used up), it will not be caught by the disk-spilling infrastructure and will be propagated to the user - which is the expected and desired behavior in such a scenario.
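In terms of the hypothetical types from the first sketch, the fix amounts to something like the following (again an illustration, not the actual change to `colexec`):

```go
// Requires "math" in the imports.
// newDeselectorWithOwnAllocator wires up the fix: the deselector gets an
// allocator with an effectively unlimited budget, so its memory usage is
// still tracked (via used) but a budget error can no longer originate
// inside it. Any "memory budget exceeded" error caught by the spilling
// wrapper is then guaranteed to come from the chunker, which knows how
// to fall back to disk safely.
func newDeselectorWithOwnAllocator(input func() []int) *deselector {
	unlimited := &budgetAllocator{budget: math.MaxInt}
	return &deselector{alloc: unlimited, input: input}
}
```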
There is no explicit regression test for this since our existing unit
tests already exercise this scenario once the fake allocation in the
deselector is removed.
Fixes: #80645.
Release note (bug fix): Previously, in very rare circumstances, CockroachDB could incorrectly evaluate queries with an ORDER BY clause when the prefix of the ordering was already provided by the index ordering of the scanned table.
Release justification: low risk bug fix.