
sql: mark planNodeToRowSource as streaming intelligently #63903

Merged: 1 commit into cockroachdb:master from streaming-proc on Apr 21, 2021

Conversation

@yuzefovich (Member)

Previously, out of an abundance of caution (and some laziness) we marked
all `planNodeToRowSource` processors as having a "streaming" nature. This
marker influences whether we wrap the processor with a streaming or a
buffering columnarizer in the vectorized flow. However, streaming is
unnecessary in most cases and kills some of the benefits of the vectorized
model. The only special planNode is `hookFnNode`, which must be streaming;
all others are safe to have buffering around them. This commit implements
that idea, which required adding another method to the `Processor`
interface.

Release note: None

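To make the mechanism concrete, here is a minimal Go sketch of the decision described above. The trimmed-down `Processor` interface, the `MustBeStreaming` method, and the `rowSourceWrapper`/columnarizer plumbing are illustrative stand-ins under assumed names, not the exact code in the tree:

```go
package main

import "fmt"

// Processor is a trimmed-down stand-in for the execution-engine processor
// interface; the real interface has many more methods. MustBeStreaming is
// the kind of per-processor hook the commit message describes: each
// processor reports whether it must be wrapped in a streaming (rather than
// buffering) columnarizer.
type Processor interface {
	Name() string
	MustBeStreaming() bool
}

// rowSourceWrapper stands in for planNodeToRowSource. Only the node backed
// by a hook function forces streaming behavior; everything else tolerates
// buffering.
type rowSourceWrapper struct {
	name         string
	isHookFnNode bool
}

func (w *rowSourceWrapper) Name() string          { return w.name }
func (w *rowSourceWrapper) MustBeStreaming() bool { return w.isHookFnNode }

// wrapWithColumnarizer mimics the decision made when setting up a
// vectorized flow: a streaming columnarizer emits a batch per row, while a
// buffering one accumulates rows into full batches first.
func wrapWithColumnarizer(p Processor) string {
	if p.MustBeStreaming() {
		return p.Name() + " -> streaming columnarizer"
	}
	return p.Name() + " -> buffering columnarizer"
}

func main() {
	fmt.Println(wrapWithColumnarizer(&rowSourceWrapper{name: "hookFnNode", isHookFnNode: true}))
	fmt.Println(wrapWithColumnarizer(&rowSourceWrapper{name: "scanBufferNode"}))
}
```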
@yuzefovich requested review from rytaft, michae2, a team, and pbardea, and removed the request for a team on April 20, 2021 03:26
@cockroach-teamcity (Member):

This change is Reviewable

@yuzefovich (Member, Author) commented Apr 20, 2021:

An example of before and after (note how "local scan buffer" always outputs a batch with a single tuple in the former case, slowing down the execution tree below it). I think this change matters most when we have subqueries.
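To illustrate the "single tuple per batch" point, here is a toy Go sketch (not the actual columnarizer code; the 1024-row batch size is an assumed typical value) counting how many batches downstream operators would see for the same input:

```go
package main

import "fmt"

// bufferingColumnarizer is a toy model of the buffering behavior: it
// accumulates incoming rows and emits a batch only once it is full (or on
// flush). A streaming columnarizer, by contrast, emits after every row,
// so downstream operators pay their fixed per-batch overhead once per row.
type bufferingColumnarizer struct {
	batchSize int
	buffered  int
	emitted   int
}

func (c *bufferingColumnarizer) push() {
	c.buffered++
	if c.buffered == c.batchSize {
		c.emitted++
		c.buffered = 0
	}
}

func (c *bufferingColumnarizer) flush() {
	if c.buffered > 0 {
		c.emitted++
		c.buffered = 0
	}
}

func main() {
	const rows = 100000

	buffering := &bufferingColumnarizer{batchSize: 1024}
	streaming := &bufferingColumnarizer{batchSize: 1} // streaming == one batch per row
	for i := 0; i < rows; i++ {
		buffering.push()
		streaming.push()
	}
	buffering.flush()
	streaming.flush()

	fmt.Println("buffering columnarizer batches:", buffering.emitted) // 98
	fmt.Println("streaming columnarizer batches:", streaming.emitted) // 100000
}
```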

@yuzefovich removed the request for review from pbardea on April 20, 2021 04:57
@michae2 (Collaborator) left a comment:

:lgtm:

Reviewed 10 of 10 files at r1.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @rytaft)

@yuzefovich (Member, Author):

TFTR!

bors r+

craig bot pushed a commit that referenced this pull request Apr 20, 2021
63238: roachtest: update libpq blocklist to ignore TestCopyInBinaryError r=rafiss a=RichardJCai

roachtest: update libpq blocklist to ignore TestCopyInBinaryError

TestCopyInBinary's behaviour in the test was incorrect, since we were not receiving an expected error (`pq: only text format supported for COPY`).
Furthermore, the test would sporadically panic, causing the following tests to fail.

Release note: None

Resolves #57855 

63244: logictest: compare floating point values approximately on s390x r=ajwerner a=jonathan-albrecht-ibm

### Overview
On s390x, floating point calculations in the standard math package and in some c-deps can produce results that differ from the values calculated on amd64. This patch adds a function to compare logictest floating point and decimal values within a small relative margin on s390x. The existing behavior on all other platforms remains the same.

On s390x, there are three main reasons that floating point calculations sometimes give different results:
* the Go compiler generates the s390x "fused multiply and add" (FMA) instruction where possible,
* the Go math package uses s390x-optimized versions of some functions,
* some C libraries (e.g. libgeos and libproj) also have platform-specific floating point differences.

### Proposal
The motivation for this work is to spare users building CRDB on s390x from having to diagnose tests that fail because of platform-dependent floating point differences.

This PR proposes one possible approach to dealing with platform-dependent floating point differences. Since development, testing, and CI are done on amd64, it keeps the current logic for determining float equality on that platform exactly the same. On s390x, it considers values of decimal and float column types (R and F) in query tests to be equal if they are within a tolerance. See the new pkg/testutils/floatcmp package for the implementation of the approximate-equality logic, and the changes in logictest.go for how it is applied only on s390x.

There are probably other approaches I haven't thought of that would also work. I'd like to use this proposal to start a conversation on how all tests in CRDB that currently fail due to expected floating point differences could eventually be made to pass.

Of course, platforms other than s390x may also have differences, but I haven't looked at any others. The changes should be easily extendable to other platforms if needed.
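For readers curious what "within a small relative margin" can look like, here is a generic Go sketch of tolerance-based comparison. It is not the actual pkg/testutils/floatcmp implementation, and the tolerance values are made up for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// closeEnough reports whether two floating point results agree within a
// small relative tolerance, falling back to an absolute tolerance near
// zero. This is a generic sketch of approximate equality, not the real
// floatcmp package.
func closeEnough(expected, actual, relTol, absTol float64) bool {
	diff := math.Abs(expected - actual)
	if diff <= absTol {
		return true
	}
	largest := math.Max(math.Abs(expected), math.Abs(actual))
	return diff <= relTol*largest
}

func main() {
	// An FMA-enabled platform may round a*b+c once instead of twice,
	// producing a last-bit difference that exact comparison rejects.
	fmt.Println(closeEnough(0.1234567890123456, 0.12345678901234561, 1e-15, 1e-20)) // true
	fmt.Println(closeEnough(1.0, 1.1, 1e-15, 1e-20))                                // false
}
```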

### Future Work
The changes in this PR allow the following tests to pass on s390x:
* TestLogic/fakedist-disk/builtin_function/extra_float_digits_3
* TestLogic/fakedist-metadata/builtin_function/extra_float_digits_3
* TestLogic/fakedist-vec-off/builtin_function/extra_float_digits_3
* TestLogic/fakedist/builtin_function/extra_float_digits_3
* TestLogic/local-spec-planning/builtin_function/extra_float_digits_3
* TestLogic/local-vec-off/builtin_function/extra_float_digits_3
* TestLogic/local/builtin_function/extra_float_digits_3

There are about 70 more tests that currently fail due to platform floating point differences on s390x; many of them are tests of geospatial functions. Assuming we can come up with a good approach, I'd like to continue working on fixes to be submitted in future PRs.

Release note: None

63802: colbuilder: optimize IS DISTINCT FROM NULL when null is casted r=yuzefovich a=yuzefovich

We have an optimized operator for the `Is{Not}DistinctFrom` operation,
which we can currently plan only if the right side is a constant NULL. In
some cases the optimizer might create a cast expression on the right in
order to propagate the type of the null, and previously we would fall back
to the default comparison operator in that scenario. This is suboptimal,
and this commit fixes the issue by special-casing the scenario of casting
NULL to some type.

Fixes: #63792.

Release note: None
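A hedged sketch of the special case described above, using a toy expression tree and a hypothetical `rightIsConstNull` helper rather than CockroachDB's real expression types: the point is simply that a NULL wrapped in a type-propagating cast should still count as a constant NULL on the right-hand side.

```go
package main

import "fmt"

// A toy expression tree standing in for the planner's scalar expressions.
// These types are illustrative only, not the actual expression nodes.
type expr interface{ isExpr() }

type nullConst struct{}

type castExpr struct {
	input  expr
	toType string
}

type colRef struct{ name string }

func (nullConst) isExpr() {}
func (castExpr) isExpr()  {}
func (colRef) isExpr()    {}

// rightIsConstNull reports whether the right operand of IS [NOT] DISTINCT
// FROM is a constant NULL, possibly wrapped in a cast that only propagates
// a type. Before this change only the bare-NULL form would be planned with
// the optimized operator; the cast form fell back to the default comparison.
func rightIsConstNull(right expr) bool {
	switch e := right.(type) {
	case nullConst:
		return true
	case castExpr:
		_, ok := e.input.(nullConst)
		return ok
	default:
		return false
	}
}

func main() {
	fmt.Println(rightIsConstNull(nullConst{}))                                 // x IS DISTINCT FROM NULL      -> true
	fmt.Println(rightIsConstNull(castExpr{input: nullConst{}, toType: "INT"})) // x IS DISTINCT FROM NULL::INT -> true
	fmt.Println(rightIsConstNull(colRef{name: "y"}))                           // x IS DISTINCT FROM y         -> false
}
```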

63903: sql: mark planNodeToRowSource as streaming intelligently r=yuzefovich a=yuzefovich

Previously, out of an abundance of caution (and some laziness) we marked
all `planNodeToRowSource` processors as having a "streaming" nature. This
marker influences whether we wrap the processor with a streaming or a
buffering columnarizer in the vectorized flow. However, streaming is
unnecessary in most cases and kills some of the benefits of the vectorized
model. The only special planNode is `hookFnNode`, which must be streaming;
all others are safe to have buffering around them. This commit implements
that idea, which required adding another method to the `Processor`
interface.

Release note: None

Co-authored-by: richardjcai <[email protected]>
Co-authored-by: Jonathan Albrecht <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
@craig (bot) commented Apr 20, 2021:

Build failed (retrying...):

@craig (bot) commented Apr 20, 2021:

Build failed (retrying...):

@craig (bot) commented Apr 21, 2021:

Build succeeded:

@craig bot merged commit b32bbb5 into cockroachdb:master on Apr 21, 2021
@yuzefovich deleted the streaming-proc branch on April 21, 2021 00:10