
Refactor Pass for fused_conv #48848

Merged (14 commits, Dec 21, 2022)
Conversation

@zyfncg (Contributor) commented Dec 7, 2022

PR types

Others

PR changes

Others

Describe

PR47579 split the conv2d operator into a basic conv2d operator and a fused_conv2d operator. After the split, the related Pass logic needs to be adapted. Building on PR47579, this PR adjusts the remaining conv-related Passes:
conv_activation_mkldnn_fuse_pass
conv_affine_channel_mkldnn_fuse_pass
conv_elementwise_add_mkldnn_fuse_pass
conv_bn_fuse_pass
depthwise_conv_bn_fuse_pass
int8_scale_calculation_mkldnn_pass
params_quantization_mkldnn_pass

The strategy for modifying the Passes (see the sketch after this list):

  1. Ops that previously became conv2d after fusion now become fused_conv2d. This keeps fuse-related parameters and functionality out of the conv2d operator, so those parameters can be deleted from conv2d in the future.
  2. Passes that accept conv2d as an input operator now also accept fused_conv2d. Because a convolution can be processed by several fuse Passes in sequence, this avoids the situation where an op, once rewritten to fused_conv2d by one Pass, can no longer be matched by the other Passes.
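A minimal sketch of the two strategy points, using simplified stand-in types rather than Paddle's real IR classes (Op and ApplyConvFusePass below are hypothetical placeholders, not the actual pass code):

// Toy model of the strategy; Op and ApplyConvFusePass are simplified
// stand-ins, not Paddle's real IR types or pass interface.
#include <string>
#include <vector>

struct Op {
  std::string type;
  std::string fuse_activation;  // example of a fuse-related attribute
};

void ApplyConvFusePass(std::vector<Op>* graph, const std::string& act) {
  for (Op& op : *graph) {
    // Point 2: match both conv2d and fused_conv2d, so a conv already
    // rewritten to fused_conv2d by an earlier Pass is still processed.
    if (op.type != "conv2d" && op.type != "fused_conv2d") continue;
    // Point 1: the fusion result is always fused_conv2d, keeping the
    // fuse-related attributes out of the base conv2d operator.
    op.type = "fused_conv2d";
    op.fuse_activation = act;
  }
}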

@paddle-bot bot commented Dec 7, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@zyfncg requested review from jczaja and Silv3S December 19, 2022 02:10
@jczaja (Contributor) commented Dec 19, 2022

@zyfncg Hi, we have just started our automated testing process on this PR (it should take 1-2 days). We will let you know whether everything is fine or whether problems were found.

@sfraczek (Contributor) left a comment

Great work spotting all those places!

Comment on lines 215 to 218
QuantizeConv(graph, "fused_conv2d", true /*with_residual_data*/);
QuantizeConv(graph, "fused_conv2d", false /*with_residual_data*/);
QuantizeConv(graph, "conv2d", true /*with_residual_data*/);
QuantizeConv(graph, "conv2d", false /*with_residual_data*/);
@sfraczek (Contributor) commented Dec 19, 2022

This pass works on convolution ops that had scales calculated during int8_scale_calculation_mkldnn_pass, because it needs the Scale_weights attribute to quantize weights and the Bias_scales attribute to quantize the bias if one exists. You have added code there that changes conv2d to fused_conv2d: https://github.com/PaddlePaddle/Paddle/pull/48848/files?show-viewed-files=true&file-filters%5B%5D=#diff-202dc497cf83197c83edced692ade07f943a14605f0b2cefc9f10a36a244212bR126-R127.
So we should be able to safely assume that this pass, params_quantization_mkldnn_pass, can work only on fused_conv2d.

@zyfncg (Contributor, Author) replied

Thank you, I think you are right. conv2d has been removed from params_quantization_mkldnn_pass.
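A toy restatement of the precondition discussed above (simplified stand-in types and a hypothetical helper name, not the actual pass code):

// Toy restatement: after this PR, params_quantization_mkldnn_pass only
// needs to consider fused_conv2d ops that already carry computed scales.
#include <map>
#include <string>
#include <vector>

using AttrMap = std::map<std::string, std::vector<float>>;

bool EligibleForParamsQuantization(const std::string& op_type,
                                   const AttrMap& attrs) {
  // Scale_weights is required; Bias_scales matters only if a bias exists.
  return op_type == "fused_conv2d" && attrs.count("Scale_weights") > 0;
}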

Comment on lines +72 to +75
attrs {
name: "Scale_weights"
type: FLOATS
}
A Contributor commented

Optionally, there can also be a "Bias_scales" attribute.

@zyfncg (Contributor, Author) replied

Done. Thanks~
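For context, the optional attribute mentioned above would take the same form as the quoted pbtxt block (a sketch mirroring that block; the exact entry is in the PR diff):

attrs {
  name: "Bias_scales"
  type: FLOATS
}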

@Silv3S (Member) left a comment

LGTM

if ((prev_op_nodes[0]->Op()->Type() != "conv2d" &&
prev_op_nodes[0]->Op()->Type() != "fused_conv2d") ||
is_not_conv_mkldnn) {
LOG(WARNING) << "This fuse pass supports only conv2d(mkldnn) | "
A Member commented

We are trying to gradually replace MKLDNN with oneDNN.

@@ -39,14 +39,44 @@ def {
name: "data_format"
type: STRING
}
}
extra {
A Member commented

Shouldn't conv2d.pbtxt and conv_op.cc also be adjusted? Or will this be done in the next PR?

@zyfncg (Contributor, Author) replied

Yes, this will be done in the next PR.

@jczaja (Contributor) commented Dec 20, 2022

@zyfncg I have just got the results from the analysis of this PR. We noticed no performance regression, and a performance improvement on bf16 convolutional models.

@sfraczek (Contributor) left a comment

LGTM!

@zyfncg (Contributor, Author) commented Dec 21, 2022

> @zyfncg I have just got the results from the analysis of this PR. We noticed no performance regression, and a performance improvement on bf16 convolutional models.

@jczaja thanks~

@XiaoguangHu01 (Contributor) left a comment

LGTM
