refactor EliminateDuplicatedExpr optimizer pass to avoid clone #10218

Merged · 5 commits · Apr 26, 2024
Changes from 4 commits
1 change: 1 addition & 0 deletions datafusion-cli/Cargo.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion datafusion/optimizer/Cargo.toml
@@ -47,10 +47,10 @@ datafusion-common = { workspace = true, default-features = true }
datafusion-expr = { workspace = true }
datafusion-physical-expr = { workspace = true }
hashbrown = { version = "0.14", features = ["raw"] }
indexmap = { workspace = true }
itertools = { workspace = true }
log = { workspace = true }
regex-syntax = "0.8.0"

[dev-dependencies]
ctor = { workspace = true }
datafusion-sql = { workspace = true }
143 changes: 86 additions & 57 deletions datafusion/optimizer/src/eliminate_duplicated_expr.rs
@@ -19,12 +19,12 @@

use crate::optimizer::ApplyOrder;
use crate::{OptimizerConfig, OptimizerRule};
use datafusion_common::Result;
use datafusion_expr::expr::Sort as ExprSort;
use datafusion_common::tree_node::Transformed;
use datafusion_common::{internal_err, Result};
use datafusion_expr::logical_plan::LogicalPlan;
use datafusion_expr::{Aggregate, Expr, Sort};
use hashbrown::HashSet;

use indexmap::IndexSet;
use std::hash::{Hash, Hasher};
/// Optimization rule that eliminate duplicated expr.
#[derive(Default)]
pub struct EliminateDuplicatedExpr;
@@ -35,78 +35,107 @@ impl EliminateDuplicatedExpr {
Self {}
}
}

// use this structure to avoid initial clone

Contributor:

Suggested change
// use this structure to avoid initial clone
/// Wrap the Expr in a Wrapper to support specialized comparison.
///
/// Ignores the sort options for `SortExpr` because if the expression is the same
/// the subsequent exprs are never matched
///
/// For example, `ORDER BY a ASC, a DESC` is the same
/// as `ORDER BY a ASC` (the second `a DESC` is never compared)


Contributor Author:

Sure!
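
To make the suggested doc comment concrete, here is a minimal test-style sketch (not part of this PR) of the equality semantics being described. It assumes the SortExprWrapper defined just below in the diff is in scope, and uses DataFusion's `col` helper and `expr::Sort` constructor from this version of the crate; the module and test names are hypothetical.

#[cfg(test)]
mod sort_wrapper_equality_sketch {
    // Hypothetical test, not part of the PR: `SortExprWrapper` comes from this file.
    use super::*;
    use datafusion_expr::{col, expr::Sort as ExprSort, Expr};

    #[test]
    fn sort_options_are_ignored() {
        // `a ASC NULLS LAST` and `a DESC NULLS FIRST` sort the same column `a`,
        // so the wrapper treats the second key as a duplicate of the first.
        let asc = Expr::Sort(ExprSort::new(Box::new(col("a")), true, false));
        let desc = Expr::Sort(ExprSort::new(Box::new(col("a")), false, true));
        assert_eq!(
            SortExprWrapper { expr: asc },
            SortExprWrapper { expr: desc }
        );
    }
}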

#[derive(Eq, Clone, Debug)]
struct SortExprWrapper {

Contributor Author:

Wrap the Expr in a Wrapper to support specialized comparison


Contributor:

This is very clever 👏

expr: Expr,
}
impl PartialEq for SortExprWrapper {
fn eq(&self, other: &Self) -> bool {
match (&self.expr, &other.expr) {
(Expr::Sort(own_sort), Expr::Sort(other_sort)) => {
own_sort.expr == other_sort.expr
}
_ => self.expr == other.expr,
}
}
}
impl Hash for SortExprWrapper {
fn hash<H: Hasher>(&self, state: &mut H) {
match &self.expr {
Expr::Sort(sort) => {
sort.expr.hash(state);
}
_ => {
self.expr.hash(state);
}
}
}
}
impl OptimizerRule for EliminateDuplicatedExpr {
fn try_optimize(
&self,
plan: &LogicalPlan,
_plan: &LogicalPlan,
_config: &dyn OptimizerConfig,
) -> Result<Option<LogicalPlan>> {
internal_err!("Should have called EliminateDuplicatedExpr::rewrite")
}

fn apply_order(&self) -> Option<ApplyOrder> {
Some(ApplyOrder::TopDown)
}

fn supports_rewrite(&self) -> bool {
true
}

fn rewrite(
&self,
plan: LogicalPlan,
_config: &dyn OptimizerConfig,
) -> Result<Transformed<LogicalPlan>> {
match plan {
LogicalPlan::Sort(sort) => {
let normalized_sort_keys = sort
let len = sort.expr.len();
let normalized_sort_keys: Vec<_> = sort
.expr
.iter()
.map(|e| match e {
Expr::Sort(ExprSort { expr, .. }) => {
Expr::Sort(ExprSort::new(expr.clone(), true, false))

Contributor Author:

avoid the normalized clone here

}
_ => e.clone(),
})
.collect::<Vec<_>>();
.into_iter()
.map(|e| SortExprWrapper { expr: e })
.collect();

// dedup sort.expr and keep order
let mut dedup_expr = Vec::new();
let mut dedup_set = HashSet::new();
sort.expr.iter().zip(normalized_sort_keys.iter()).for_each(
|(expr, normalized_expr)| {
if !dedup_set.contains(normalized_expr) {
dedup_expr.push(expr);
dedup_set.insert(normalized_expr);
}
},
);
if dedup_expr.len() == sort.expr.len() {
Ok(None)
} else {
Ok(Some(LogicalPlan::Sort(Sort {
expr: dedup_expr.into_iter().cloned().collect::<Vec<_>>(),

Contributor Author:

avoid another clone here

input: sort.input.clone(),
fetch: sort.fetch,
})))
let mut index_set = IndexSet::new(); // use index_set instead of Hashset to preserve order

Contributor Author:

use index_set to preserve the original order of sort


Contributor:

This is also quite clever

I think you can avoid a Vec here if you skip normalized_sort_keys and create unique_exprs directly, as you did below.

Something like

let unique_exprs: Vec<Expr> = sort
                    .expr
                    .into_iter()
                    // use SortExpr wrapper to ignore sort options
                    .map(|e| SortExprWrapper { expr: e })
                    .collect::<IndexSet<_>>()
                    .into_iter()
                    .map(|wrapper| wrapper.expr)
                    .collect();
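
As a side note, a tiny standalone sketch (not from this PR) of the behaviour this relies on: IndexSet deduplicates like a HashSet but keeps first-insertion order, so the original ORDER BY / GROUP BY ordering of the expressions is preserved.

// Hypothetical example: IndexSet drops duplicates while keeping first-seen order.
use indexmap::IndexSet;

fn main() {
    let keys = vec!["b", "a", "b", "c", "a"];
    let unique: Vec<&str> = keys.into_iter().collect::<IndexSet<_>>().into_iter().collect();
    assert_eq!(unique, vec!["b", "a", "c"]);
}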

for wrapper in normalized_sort_keys {
index_set.insert(wrapper);
}
let unique_exprs: Vec<_> =
index_set.into_iter().map(|wrapper| wrapper.expr).collect();
let transformed = if len != unique_exprs.len() {
Transformed::yes
} else {
Transformed::no
};

Ok(transformed(LogicalPlan::Sort(Sort {
expr: unique_exprs,
input: sort.input,
fetch: sort.fetch,
})))
}
LogicalPlan::Aggregate(agg) => {
// dedup agg.groupby and keep order
let mut dedup_expr = Vec::new();
let mut dedup_set = HashSet::new();
agg.group_expr.iter().for_each(|expr| {
if !dedup_set.contains(expr) {
dedup_expr.push(expr.clone());
dedup_set.insert(expr);
}
});
if dedup_expr.len() == agg.group_expr.len() {
Ok(None)
let len = agg.group_expr.len();

let unique_exprs: Vec<Expr> = agg
.group_expr
.into_iter()
.collect::<IndexSet<_>>()
.into_iter()
.collect();

let transformed = if len != unique_exprs.len() {
Transformed::yes
} else {
Ok(Some(LogicalPlan::Aggregate(Aggregate::try_new(
agg.input.clone(),
dedup_expr,
agg.aggr_expr.clone(),
)?)))
}
Transformed::no
};

Aggregate::try_new(agg.input, unique_exprs, agg.aggr_expr)
.map(|f| transformed(LogicalPlan::Aggregate(f)))
}
_ => Ok(None),
_ => Ok(Transformed::no(plan)),
}
}

fn name(&self) -> &str {
"eliminate_duplicated_expr"
}

fn apply_order(&self) -> Option<ApplyOrder> {
Some(ApplyOrder::TopDown)
}
}

#[cfg(test)]