
matmul and matmul_v2 refactor #42732

Merged — 5 commits merged on May 18, 2022
Conversation

@Silv3S (Member) commented May 12, 2022

PR types

Others

PR changes

OPs

Describe

  • move the inference of "-1" shape entries into the reshape function, so it can be reused,
  • remove redundant PADDLE_ENFORCE checks that test the same conditions in both the fuse pass and the operator,
  • remove old unit tests in which fuse_transpose_out and fuse_reshape_out are intentionally set to incorrect values,
  • remove code that is commented out.

@paddle-bot-old

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot-old paddle-bot-old bot added contributor External developers status: proposed labels May 12, 2022
@Silv3S (Member, Author) commented May 12, 2022

@jakpiase @tsocha please review

tsocha previously approved these changes May 13, 2022

@tsocha (Contributor) left a comment

Great work

paddle/phi/core/ddim.cc (outdated, resolved)
@@ -519,43 +519,6 @@ def init_data_type(self):
self.data_type_ = np.int8


class TestMatMulOpTransposeReshapeTransposeAxisNotSupportedException(
Contributor:

Why are these tests deleted?

Silv3S (Member, Author):

These tests only checked whether the test would fail after setting incorrect values for fuse_reshape_out and fuse_transpose_out. I deleted them because the only way to set these parameters is via the fuse pass, which already has all the PADDLE_ENFORCE checks needed to prevent such a case. Also, we have a separate unit test dedicated to the matmul+transpose+reshape fuse pass.

Contributor:

Really nice that you spotted that! Thanks!

@jakpiase jakpiase self-requested a review May 13, 2022 16:02
jakpiase previously approved these changes May 13, 2022

@jakpiase (Contributor) left a comment

LGTM! It's good to see another much-needed refactoring done.

@lidanqing-intel (Contributor)

@Silv3S

license/cla Expected — Waiting for status to be reported

Maybe you need to reassign the license?

jczaja
jczaja previously approved these changes May 16, 2022
@Silv3S (Member, Author) commented May 16, 2022

> Maybe you need to reassign the license?

I checked the CLA assistant manually and everything looks fine. Unfortunately, despite the renewal of the license, CI has still not been updated. Can this be merged as is, or should I trigger CI with an empty commit?

@Silv3S (Member, Author) commented May 16, 2022

@baoachun please review

@paddle-bot-old

Sorry to inform you that through our discussion, your PR fails to meet the merging standard (Reference: Paddle Custom Operator Design Doc). You can also submit a new one. Thank you.

@lidanqing-intel lidanqing-intel self-requested a review May 16, 2022 07:57
@lidanqing-intel (Contributor) left a comment

LGTM

@jczaja jczaja requested a review from baoachun May 16, 2022 08:00
@@ -171,11 +171,25 @@ DDim stride_numel(const DDim& ddim) {
   return strides;
 }

-DDim DDim::reshape(const std::vector<int>& shape) const {
+DDim DDim::reshape(const std::vector<int>& new_shape) const {
+  std::vector<int> shape = new_shape;
Contributor:

Creating a new vector here and adding an if/else branch in core/ddim.cc will affect every CPU/GPU op, I think?

@Silv3S Silv3S dismissed stale reviews from lidanqing-intel, jczaja, and jakpiase via 9dbb43c May 16, 2022 19:10
@jczaja jczaja merged commit 570d032 into PaddlePaddle:develop May 18, 2022
@paddle-bot-old

Your PR has been merged into the repository. An official integration test will be conducted later. Stay tuned.

@Silv3S Silv3S deleted the matmul_refactor branch May 18, 2022 07:37
@paddle-bot-old paddle-bot-old bot removed the contributor External developers label Oct 17, 2022