[SPARK-37682][SQL] Apply 'merged column' and 'bit vector' in RewriteDistinctAggregates #34953
What changes were proposed in this pull request?
Adjust the grouping rules of `distinctAggGroups`, specifically in `RewriteDistinctAggregates.groupDistinctAggExpr`, so that more 'distinct' aggregates can be grouped together; the conditions involved in them (e.g. `CaseWhen`, `If`) are stored in an 'if_vector' to avoid unnecessary expansion. The 'if_vector' and 'filter_vector' introduced here reduce the number of columns produced by the Expand. In addition, children of distinct aggregate functions with the same data type share the same project column. Here is an example (in SQL) comparing the original Expand rewrite with the new 'merged column' and 'bit vector' rewrite:
The current rule rewrites the SQL plan above into the following (pseudo) logical plan:
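For a hypothetical query with `count(DISTINCT a)`, `count(DISTINCT b)`, and `count(DISTINCT CASE WHEN c = 1 THEN a END)`, the existing rewrite produces roughly the following shape (a simplified sketch in the style of the `RewriteDistinctAggregates` scaladoc, not the exact plan; `'cw` and the gid values are illustrative):

```
Aggregate(
   functions = [count(if ('gid = 1) 'a else null),
                count(if ('gid = 2) 'b else null),
                count(if ('gid = 3) 'cw else null)])
  Aggregate(
     key = ['a, 'b, 'cw, 'gid])
    Expand(
       projections = [('a,   null, null,                            1),
                      (null, 'b,   null,                            2),
                      (null, null, CASE WHEN 'c = 1 THEN 'a END,    3)]
       output = ['a, 'b, 'cw, 'gid])
      LocalTableScan [a, b, c]
```

Each distinct group gets its own projection and its own null-padded output column, so the Expand output is wide and the row count is multiplied by the number of groups.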
After applying the 'merged column' and 'bit vector' tricks, the logical plan becomes:
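For the same hypothetical query (`count(DISTINCT a)`, `count(DISTINCT b)`, `count(DISTINCT CASE WHEN c = 1 THEN a END)`), the merged rewrite might look roughly like this. This is a sketch of the idea only, not the exact plan emitted by this PR; `'m`, `'btv`, and the bit layout are illustrative:

```
Aggregate(
   functions = [count('m) FILTER (('btv & 1) != 0),   -- count(DISTINCT a)
                count('m) FILTER (('btv & 2) != 0),   -- count(DISTINCT b)
                count('m) FILTER (('btv & 4) != 0)]   -- count(DISTINCT CASE WHEN ...)
  Aggregate(
     key = ['m]
     functions = [bit_or('btv)])
    Expand(
       projections = [('a, 1 | (if ('c = 1) 4 else 0)),  -- 'a feeds groups 1 and 3
                      ('b, 2)]                           -- 'b feeds group 2
       output = ['m, 'btv])
      LocalTableScan [a, b, c]
```

Because `a`, `b`, and the `CASE WHEN` result share one data type, they share a single merged column `'m`, and the `CASE WHEN` condition is folded into the bit vector instead of requiring its own projection.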
Why are the changes needed?
It can save a large amount of memory and improve performance in cases with many distinct aggregates, since the rewrite reduces both the number of columns and the number of projections in the Expand.
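A rough Python simulation (not Spark code; data, group ids, and bit layout are hypothetical) of the two expansion strategies for `count(DISTINCT a)`, `count(DISTINCT b)`, `count(DISTINCT CASE WHEN c = 1 THEN a END)` illustrates both the equivalence of the results and the size reduction:

```python
rows = [(1, 10, 0), (1, 20, 1), (2, 20, 1), (3, 30, 0)]

# Classic rewrite: one projection per distinct group, one null-padded column
# per child, plus a gid column -> 3 tuples of width 4 per input row.
expanded = []
for a, b, c in rows:
    expanded.append((a, None, None, 1))                      # gid = 1 -> a
    expanded.append((None, b, None, 2))                      # gid = 2 -> b
    expanded.append((None, None, a if c == 1 else None, 3))  # gid = 3 -> CASE WHEN
classic = [
    len({t[0] for t in expanded if t[3] == 1 and t[0] is not None}),
    len({t[1] for t in expanded if t[3] == 2 and t[1] is not None}),
    len({t[2] for t in expanded if t[3] == 3 and t[2] is not None}),
]

# Merged-column rewrite: a, b, and the CASE result share one column; a bit
# vector records which distinct groups each value feeds, folding the CASE
# condition into the vector -> 2 tuples of width 2 per input row.
merged = []
for a, b, c in rows:
    merged.append((a, 0b001 | (0b100 if c == 1 else 0)))  # feeds groups 1 and 3
    merged.append((b, 0b010))                             # feeds group 2
bitvec = [len({m for m, btv in merged if btv & bit})
          for bit in (0b001, 0b010, 0b100)]

print(classic, bitvec)  # → [3, 3, 2] [3, 3, 2]
print(len(expanded) * 4, len(merged) * 2)  # cells materialized: 48 vs 16
```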
Does this PR introduce any user-facing change?
No
How was this patch tested?
Existing tests, plus a new UT in DataFrameAggregateSuite to test 'Vector Size larger than 64'.
I have written some SQL locally to check the correctness of the distinct computation, but it is hard to cover most cases that way. Spark's existing test suites are likely more comprehensive, so I did not keep those queries in the code.