
Infer the count of maximum distinct values from min/max #3837

Merged: 2 commits merged into apache:master from isidentical:gh-3813-distinct-calc on Oct 15, 2022

Conversation

isidentical (Contributor)

Which issue does this PR close?

Part of #3813.

Rationale for this change

This came up during the initial join cardinality computation PR (link): the logic only produced an estimate when the distinct count was available directly in the statistics. That works when the distinct count has already been computed (e.g. statistics propagated from aggregates), but statistics that originate from initial user input rarely carry a distinct_count. For example, there is no way to store a distinct count when exporting a Parquet file from pandas; neither of the official backends (pyarrow, fastparquet) even supports it in their write APIs. However, min/max values are nearly universal at this point, so we can use them to compute the maximum possible distinct count, which is what the selectivity estimate actually needs.

What changes are included in this PR?

This PR adds a fallback that infers the maximum distinct count when the actual distinct count is not available in the statistics. It currently only applies to numeric values, and more specifically to integers: for an integer column, the min/max range bounds the number of distinct values. We could technically compute a range for timestamps or floats as well, but the resulting bound would amount to counting every representable value within the precision boundaries, which is far too loose to be useful in practice (open for discussion). A sketch of the idea follows below.
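
A minimal sketch of the inference, assuming i64 min/max statistics; the function name and signature are illustrative assumptions, not the PR's actual API. For an integer column whose statistics report min and max but no distinct count, the number of distinct values can be at most max - min + 1.

```rust
// Hypothetical helper illustrating the fallback described above.
fn max_distinct_from_min_max(min: i64, max: i64) -> Option<u64> {
    if max < min {
        // Inconsistent statistics: no usable bound.
        return None;
    }
    // Both endpoints are attainable values, hence the +1. Widen to i128 to
    // avoid overflow when the range spans most of the i64 domain.
    (max as i128 - min as i128 + 1).try_into().ok()
}

fn main() {
    // A column with min = 1 and max = 100 holds at most 100 distinct values.
    assert_eq!(max_distinct_from_min_max(1, 100), Some(100));
    // A constant column (min == max) has exactly one possible value.
    assert_eq!(max_distinct_from_min_max(5, 5), Some(1));
}
```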

Are there any user-facing changes?

No backwards-incompatible changes.

@github-actions bot added the core (Core DataFusion crate) label on Oct 15, 2022
@isidentical force-pushed the gh-3813-distinct-calc branch from 0ce00ff to b265204 on October 15, 2022 00:32
@isidentical marked this pull request as ready for review on October 15, 2022 15:19

@Dandandan (Contributor) left a comment:

Thanks @isidentical looking great 😃

@Dandandan Dandandan merged commit fe0000e into apache:master Oct 15, 2022
@ursabot commented on Oct 15, 2022

Benchmark runs are scheduled for baseline = e02376d and contender = fe0000e. fe0000e is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Conbench compare runs links:
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on ec2-t3-xlarge-us-east-2] ec2-t3-xlarge-us-east-2
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on test-mac-arm] test-mac-arm
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on ursa-i9-9960x] ursa-i9-9960x
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on ursa-thinkcentre-m75q] ursa-thinkcentre-m75q
Buildkite builds:
Supported benchmarks:
ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
test-mac-arm: Supported benchmark langs: C++, Python, R
ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java
