
[External codegen] Add test cases for fused ops with manual annotation #4741

Closed
masahi wants to merge 18 commits from the partition-fused-ops branch

Conversation

masahi (Member) commented on Jan 19, 2020

This PR contains:

  • A custom annotator that detects conv + bias add + relu patterns
  • An example of applying FoldScaleAxis and FoldConstant to conv + bn + relu layers (before partitioning) to obtain conv + bias add + relu patterns that the annotator can detect
  • DNNL runtime support for the fused conv + bias add + relu op, using DNNL's post_ops feature (see the sketch after this description)
  • Updates to CodegenDNNL that translate fused Relay conv + bias add + relu ops to their DNNL counterparts
  • Test cases on a simple network and on mobilenet that demonstrate the features above

The result of partitioning mobilenet is dumped here.
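For readers unfamiliar with DNNL's post_ops feature, below is a minimal sketch, not taken from this PR, of how a ReLU can be folded into a DNNL convolution primitive on the runtime side. It assumes the DNNL 1.x C++ API; the memory descriptors, strides, padding, and the helper name MakeFusedConvBiasReLU are placeholders that a real runtime would derive from the Relay conv2d attributes.

// Minimal sketch (not from this PR): fuse ReLU into a DNNL convolution via
// post_ops, assuming the DNNL 1.x C++ API. src_md/weights_md/bias_md/dst_md,
// strides and padding are placeholders built from the Relay op in practice.
#include <dnnl.hpp>

dnnl::convolution_forward MakeFusedConvBiasReLU(
    const dnnl::engine& eng,
    const dnnl::memory::desc& src_md, const dnnl::memory::desc& weights_md,
    const dnnl::memory::desc& bias_md, const dnnl::memory::desc& dst_md,
    const dnnl::memory::dims& strides, const dnnl::memory::dims& padding) {
  // Describe a convolution that already carries the bias term.
  dnnl::convolution_forward::desc conv_desc(
      dnnl::prop_kind::forward_inference, dnnl::algorithm::convolution_direct,
      src_md, weights_md, bias_md, dst_md, strides, padding, padding);

  // Attach ReLU as an eltwise post-op so it runs inside the conv primitive.
  dnnl::post_ops ops;
  ops.append_eltwise(1.0f, dnnl::algorithm::eltwise_relu, 0.0f, 0.0f);
  dnnl::primitive_attr attr;
  attr.set_post_ops(ops);

  dnnl::convolution_forward::primitive_desc pd(conv_desc, attr, eng);
  return dnnl::convolution_forward(pd);
}

append_eltwise is DNNL's documented way to chain an element-wise op onto a convolution, which is what the "post_ops feature" mentioned above refers to.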

Please review, @zhiics @comaniac.

masahi changed the title from "[Partitioning] Add test cases for fused ops with manual annotation" to "[External codegen] Add test cases for fused ops with manual annotation" on Jan 19, 2020
};

Output ret;
if (auto conv_call = DetectFusedConv2DBiasReLU(call)) {
zhiics (Member)

I am not sure if we really want to handle fused ops from Relay for external codegen. This looks quite ad-hoc to me. You may end up with countless combinations.

masahi (Member, Author) commented on Jan 19, 2020

The idea is for it to serve as an example of handling fused ops inside external codegen. I assume the dnnl backend itself is not meant to be used in production; the purpose is to be a more realistic example than CodegenC, so I thought we should add an example of how to handle fused ops. I never intended to cover other fusion cases.

Since we are trying to be so nice to new backend implementers (who might not be familiar with TVM internals) as to add convenient op-level annotation, a semi-automatic fusion mechanism, etc. for them, I don't think it is reasonable to expect them to figure out how to handle more complicated but common cases (like fusion) and everything else on their own. Hope this makes sense.

masahi (Member, Author) commented on Jan 19, 2020

Another usage scenario that I think will be common is translation from quantized Relay models. It would be great to add an example of translating QNN subgraphs to a backend implementation; without it, it is not obvious how to go about it.

Since DNNL has quantization support and everyone can use it, it would serve as a good example and test case.

comaniac (Contributor)

While I agree with you that it's fine to handle fusion in this DNNL codegen, I also agree with @zhiics that the current implementation is a bit too ad-hoc, even if it's only used for demo purposes for now. As you have implemented, MKL-DNN uses set_post_ops to attach the ops to be fused. I think this part could be more general. For example:

if call == "relu":
    visit(arg)
    if this->curr_layer == "conv2d":
        generate_post_ops(call)
    else:
        generate_a_layer(call)

In this way, the codegen is able to deal with all of the conv2d fusions MKL-DNN supports (conv2d, conv2d+add, conv2d+add+relu). We could still put heuristic pattern annotations in the annotator and improve it gradually. I like the one you made for conv2d+bias+relu in this PR, for instance.
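To make the suggestion above concrete, here is a rough C++ sketch of that control flow against the Relay CallNode API. This is not code from this PR; GenerateConvWithPostOp and GenerateStandaloneLayer are hypothetical helpers standing in for the actual DNNL emission logic.

// Rough sketch of the generalization suggested above (not code from this PR).
// When the codegen reaches a relu call, it checks whether the producer chain
// is conv2d (optionally followed by an add / bias_add); if so, the relu is
// emitted as a DNNL post-op, otherwise as a standalone layer.
#include <string>
#include <tvm/relay/expr.h>
#include <tvm/relay/op.h>

using namespace tvm::relay;

// Hypothetical helpers; placeholders for the real DNNL emission logic.
void GenerateConvWithPostOp(const CallNode* conv, const CallNode* relu);
void GenerateStandaloneLayer(const CallNode* call);

static bool IsOp(const CallNode* call, const std::string& name) {
  return call != nullptr && call->op.same_as(Op::Get(name));
}

void VisitRelu(const CallNode* relu) {
  const auto* producer = relu->args[0].as<CallNode>();
  // Look through an intermediate add / bias_add to find the conv2d.
  const auto* maybe_conv =
      (IsOp(producer, "add") || IsOp(producer, "nn.bias_add"))
          ? producer->args[0].as<CallNode>()
          : producer;
  if (IsOp(maybe_conv, "nn.conv2d")) {
    GenerateConvWithPostOp(maybe_conv, relu);  // fuse relu via post_ops
  } else {
    GenerateStandaloneLayer(relu);             // emit a plain relu layer
  }
}

Compared with matching the whole conv + bias add + relu pattern up front, a per-op check like this covers conv2d, conv2d+add, and conv2d+add+relu with one code path, which is the generality being suggested here.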

masahi (Member, Author)

Yeah, this is my minimal-effort way to detect only the pattern I care about. I will think about how to make it more general.

masahi (Member, Author)

I can go ahead and implement this, but that would duplicate the pattern matching logic I already have in my Python annotator. That sounds bad, and it would become a perfect instance of the anti-example mentioned in the RFC below :)

I think I should close this one and wait for a better solution to be ready. I will wait for your input for now @comaniac @zhiics

https://discuss.tvm.ai/t/rfc-external-codegen-defining-composite-relay-operators/5470/

zhiics (Member)

Yeah, I had a brief discussion with @u99127 before. I will read the discussion more carefully, and then we can probably discuss from there and try to reach consensus on a design/implementation. Sorry for being late/slow; I am on vacation.

masahi (Member, Author)

I can also leave the current dumb implementation as it is, with the understanding that

  • This is a temporary solution
  • It will serve as a concrete motivation and test case for validating a more general mechanism to be introduced

Trying to be a bit more clever and duplicating an entire state machine's logic here does not seem worth it to me anymore. Either way I'm fine.

masahi (Member, Author) commented on Jan 19, 2020

@zhiics I'm not trying to make the DNNL backend more feature-complete. I want to add examples and test cases for typical usage scenarios that most backend implementers are likely to encounter.

We discussed on the forum that fusion is already possible with manual annotation, but there is no example that demonstrates it. This PR fills that gap.

masahi (Member, Author) commented on Jan 19, 2020

I added a link below where I clarified my intention. Hopefully this clears up some confusion.
https://discuss.tvm.ai/t/solved-external-codegen-how-the-runtime-determines-function-signatures-for-generated-functions/5455/7

comaniac (Contributor) left a comment

Thanks for the PR. Overall it looks good to me, but there are some minor points. Please see the comments for details.

tests/python/relay/test_pass_partition_graph.py: outdated review comment (resolved)
tests/python/relay/test_pass_partition_graph.py: outdated review comment (resolved)

masahi force-pushed the partition-fused-ops branch 3 times, most recently from dd7046b to 3dbce0f on January 24, 2020 06:24
masahi force-pushed the partition-fused-ops branch from 3dbce0f to af627a9 on January 24, 2020 11:38
comaniac (Contributor) commented

As #4771 has been merged, we can revisit this PR for DNNL fuse patterns.

masahi (Member, Author) commented on Feb 10, 2020

Yes, I want to update this PR, but we don't have a way to hook up the Composite and Compiler attributes yet, so I can't "see" a composite conv + bias + relu in CodegenDNNL at the moment. Refer to the comments below.
#4771 (comment)
#4771 (comment)

masahi (Member, Author) commented on Apr 8, 2020

#5272
