
[RELAY][PASS] Common subexpression elimination #2639

Merged: 4 commits into apache:master on Mar 3, 2019

Conversation

vinx13 (Member) commented Feb 21, 2019

This is an optimization pass that eliminates common subexpressions. During the pass, it tries to replace an expression with an equivalent expression that appeared earlier and has the same inputs and attributes. The fskip callback argument allows us to skip specific expressions.
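A minimal usage sketch, assuming the Python API this PR exposes; the fskip callback below is an illustrative assumption (returning True for expressions that should be left untouched), and the nn.dropout check is hypothetical:

from tvm import relay

x = relay.var("x", shape=(1, 16))
y1 = relay.add(x, relay.const(1.0, "float32"))
y2 = relay.add(x, relay.const(1.0, "float32"))
f = relay.Function([x], relay.add(y1, y2))

# Hypothetical callback: skip dropout calls so they are never deduplicated.
def fskip(expr):
    return isinstance(expr, relay.expr.Call) and expr.op.name == "nn.dropout"

f = relay.ir_pass.eliminate_common_subexpr(f, fskip)
print(f)  # the two identical add calls collapse into a single one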

cc @tqchen

tqchen (Member) commented Feb 21, 2019

@kazum @ZihengJiang please help take a look

tqchen (Member) commented Feb 22, 2019

@jroesch can you manage this PR as per https://docs.tvm.ai/contribute/committer_guide.html? This is a good chance to test out your committer rights.

Review thread on src/relay/pass/eliminate_common_subexpr.cc (outdated diff):
* \return Whether two expressions are equal scalars.
*/
inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  const auto* constant_a = a.as<ConstantNode>();
Contributor:
constant_a is nullptr with relay.var("x", shape=(1, 16)) in your test script. Is this what you expect?

vinx13 (Member Author):

Yes, this function is intended to enable combining different Constant instances with the same value.

kazum (Contributor) commented Feb 27, 2019

With the code below, the result is as expected.

from tvm import relay

x = relay.var("x", shape=(1, 16))
y1 = relay.add(x, relay.const(1.0, "float32"))
y2 = relay.add(x, relay.const(1.0, "float32"))
y = relay.add(y1, y2)
f = relay.Function([x], y)
f = relay.ir_pass.eliminate_common_subexpr(f)
print(f)

Output:

fn (%x: Tensor[(1, 16), float32]) {
  %0 = add(%x, 1f)
  %1 = add(%0, %0)
  %1
}

However, when I changed the code a bit as follows, elimination did not work. Is this out of scope for this PR?

from tvm import relay

x = relay.var("x", shape=(1, 16))
y1 = relay.add(relay.const(1.0, "float32"), x)
y2 = relay.add(relay.const(1.0, "float32"), x)
y = relay.add(y1, y2)
f = relay.Function([x], y)
f = relay.ir_pass.eliminate_common_subexpr(f)
print(f)

Output:

fn (%x: Tensor[(1, 16), float32]) {
  %0 = add(1f, %x)
  %1 = add(1f, %x)
  %2 = add(%0, %1)
  %2
}

vinx13 (Member Author) commented Feb 28, 2019

@kazum Yes, it is a limitation of the current implementation: the candidate map is keyed by the call's first argument, and the two relay.const(1.0) instances in your example are distinct nodes, so the lookup never matches.

src/relay/pass/eliminate_common_subexpr.cc:

      return GetRef<Call>(candidate);
    }
  }
  expr_map_[new_call->args[0]].push_back(new_call);
Contributor:

Let me ask one more question. expr_map_ is a map from new_call->args[0] to new_call. Can we change it to a map from new_call->op to new_call? Then this PR would also handle the case in #2639 (comment), wouldn't it?

What I mean is as follows:

auto it = expr_map_.find(new_call->op);
if (it != expr_map_.end()) {
  for (const CallNode* candidate : it->second) {
    bool is_equivalent = true;
    if (!attrs_equal(new_call->attrs, candidate->attrs)) {
      continue;
    }
    for (size_t i = 0; i < new_call->args.size(); i++) {
      if (!new_call->args[i].same_as(candidate->args[i]) &&
          !IsEqualScalar(new_call->args[i], candidate->args[i])) {
        is_equivalent = false;
        break;
      }
    }
    if (!is_equivalent) continue;
    return GetRef<Call>(candidate);
  }
}
expr_map_[new_call->op].push_back(new_call);
return new_expr;

vinx13 (Member Author):

The reason I chose to key the map on new_call->args[0] was to avoid searching a long list of candidates. But you are right; on second thought, I think it is okay to map from the op.
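To illustrate the data-structure choice under discussion, here is a standalone Python sketch (hypothetical, not the PR's code) of an op-keyed candidate map: a lookup only scans previously seen calls to the same operator, and a call is reused when its attributes and all of its arguments match.

from collections import defaultdict

def cse(calls):
    # calls: a list of (op, args, attrs) tuples with hashable fields.
    expr_map = defaultdict(list)  # op -> previously seen calls
    out = []
    for call in calls:
        op, args, attrs = call
        for candidate in expr_map[op]:
            if candidate[1] == args and candidate[2] == attrs:
                out.append(candidate)  # reuse the earlier equivalent call
                break
        else:
            expr_map[op].append(call)
            out.append(call)
    return out

calls = [("add", ("c1", "x"), None), ("add", ("c1", "x"), None)]
print(cse(calls))  # both entries resolve to the same call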

kazum (Contributor) left a review:

Looks good to me, thanks!

@yzhliu mentioned this pull request on Mar 2, 2019.
tqchen merged commit 1ca0393 into apache:master on Mar 3, 2019.
tqchen (Member) commented Mar 3, 2019

Thanks @vinx13 @kazum, this is now merged.

bwasti pushed a commit to facebookexperimental/tvm that referenced this pull request Mar 6, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 9, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 12, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 12, 2019