release-21.2: colexechash: fix an internal error with distinct mode #74872

Merged: 1 commit merged into cockroachdb:release-21.2 from backport21.2-74825 on Jan 18, 2022

Conversation

yuzefovich (Member)

Backport 1/5 commits from #74825.

/cc @cockroachdb/release


colexechash: fix an internal error with distinct mode

This commit fixes a bug in the hash table when it is used by the
unordered distinct operator in the mode where NULLs are treated as
distinct from each other. This is the case when UPSERT or
INSERT ... ON CONFLICT queries have to perform an `upsert-distinct-on`
operation.

The problem was that we were updating some internal state (the
`GroupID` slice, which tracks the current duplicate candidate for each
row being probed) in more cases than necessary. The code path in
question is used for two purposes:

- first, when we're removing duplicates from within the batch itself,
without looking at the state of the hash table at all. In this case we
do want the update mentioned above;
- second, when the batch contains only rows that are unique within it,
and we want to remove duplicates by comparing against the hash table.
In this case we do not want the update.

The bug is fixed by refactoring the code to not update that internal
state at all; instead, we now rely on the `distinct` flag for each row
to tell us that the row is distinct within the batch, and we then
correctly populate the `HeadID` value for it (which was the ultimate
goal all along; previously we used the `GroupID` value as an
intermediary).
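
To make the shape of that change concrete, here is a minimal Go sketch; it is not the actual colexechash code, and the `probeState` type and the `markDistinct*` functions are hypothetical stand-ins for the real probing state and loop:

```go
// Hypothetical, simplified sketch; this is not the real colexechash code.
package main

import "fmt"

// probeState loosely mirrors the per-batch probing state described above:
// GroupID tracks the current duplicate candidate for each probed row, HeadID
// records the chosen representative row, and distinct marks rows that are
// unique within the batch.
type probeState struct {
	GroupID  []uint64
	HeadID   []uint64
	distinct []bool
}

// markDistinctBuggy sketches the pre-fix shape: GroupID is updated even on
// the code path that only compares the batch against the hash table, and
// HeadID is then derived from that intermediary value.
func markDistinctBuggy(ps *probeState, toCheck []int) {
	for _, row := range toCheck {
		ps.GroupID[row] = uint64(row) + 1 // unnecessary state update on this path
		if ps.distinct[row] {
			ps.HeadID[row] = ps.GroupID[row]
		}
	}
}

// markDistinctFixed sketches the post-fix shape: the distinct flag alone
// drives HeadID, and GroupID is left untouched here.
func markDistinctFixed(ps *probeState, toCheck []int) {
	for _, row := range toCheck {
		if ps.distinct[row] {
			ps.HeadID[row] = uint64(row) + 1
		}
	}
}

func main() {
	ps := &probeState{
		GroupID:  make([]uint64, 3),
		HeadID:   make([]uint64, 3),
		distinct: []bool{true, false, true},
	}
	markDistinctFixed(ps, []int{0, 1, 2})
	fmt.Println(ps.HeadID) // [1 0 3]
}
```

The only point of the sketch is that, after the fix, `HeadID` is driven directly by the `distinct` flag instead of going through a `GroupID` write that the hash-table-comparison path does not want.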

This mistake would not produce incorrect results (because the
`distinct` flag is still set correctly); it could only result in an
internal error due to an index going out of bounds. In particular, for
the error to occur, the last row in the vectorized batch must have a
NULL value in any column (except for the last one) used for the
distinctness check.
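
Purely as an illustration of that failure mode (the indexing below is hypothetical and not taken from the real code), a slice sized to the batch plus an unnecessary update that assumes a "next" slot always exists panics only when the affected row happens to be the last one in the batch:

```go
// Hypothetical illustration of the failure mode only; the real colexechash
// indexing is different.
package main

import "fmt"

func main() {
	const batchSize = 4
	groupID := make([]uint64, batchSize)

	// Imagine a buggy update that advances the "current duplicate candidate"
	// past the probed row, even when no next candidate exists.
	update := func(row int) {
		groupID[row+1] = uint64(row) // out of bounds when row == batchSize-1
	}

	defer func() {
		if r := recover(); r != nil {
			// e.g. "runtime error: index out of range [4] with length 4"
			fmt.Println("internal error:", r)
		}
	}()

	update(0)             // fine for any non-final row
	update(batchSize - 1) // the last row in the batch triggers the panic
}
```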

Fixes: #74795.

Release note (bug fix): Previously, CockroachDB could encounter an
internal error when performing UPSERT or INSERT ... ON CONFLICT queries
in some cases when the new rows contained NULL values (either NULLs
explicitly specified or NULLs resulting from omitted columns).

blathers-crl (bot) commented on Jan 14, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.

If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria below are satisfied:

  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn't know & care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

cockroach-teamcity (Member) commented:

This change is Reviewable

rharding6373 (Collaborator) left a comment:

:lgtm:

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @mgartner)

yuzefovich merged commit 99ba4e6 into cockroachdb:release-21.2 on Jan 18, 2022
yuzefovich deleted the backport21.2-74825 branch on Jan 18, 2022 at 17:26