
【PIR Dist Op Reg No.25】 reg distributed_fused_lamb_init #62050

Merged

Conversation

xiaoyewww (Contributor)

PR types

Others

PR changes

Others

Description

#60436
Register the operator distributed_fused_lamb_init.


paddle-bot bot commented Feb 25, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Feb 25, 2024
@luotao1 luotao1 added the HappyOpenSource 快乐开源活动issue与PR label Feb 26, 2024
@luotao1 luotao1 changed the title from "PIR Dist Op Reg No.24】 reg distributed_fused_lamb_init" to "【PIR Dist Op Reg No.24】 reg distributed_fused_lamb_init" Feb 26, 2024
kangguangli (Contributor) left a comment

Please add a simple unit test, following the instructions in Section 2 of #60436, to verify that the op can be translated successfully.

paddle/fluid/pir/dialect/operator/ir/ops.yaml (review thread resolved)
paddle/phi/infermeta/binary.cc (review thread resolved)
xiaoyewww (Contributor, Author)

> Please add a simple unit test, following the instructions in Section 2 of #60436, to verify that the op can be translated successfully.

OK, I'll add it later. The build on AI Studio keeps failing in the third-party libraries... I'll add the test once I get it to compile.
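For reference, a minimal sketch of what such a translation test could look like, following the pattern of the existing tests under test/ir/pir/translator/. The class name and the input/output names below are illustrative assumptions, not necessarily what was finally merged:

```python
# Sketch only: assumes the TestOpTranslator base class from
# test/ir/pir/translator/test_op_translator.py, which builds a legacy
# program containing the appended op, runs the PIR translator on it,
# and checks that the op is translated successfully.
import unittest

import paddle
from paddle.base.layer_helper import LayerHelper

import test_op_translator


class TestDistributedFusedLambInitOpTranslator(
    test_op_translator.TestOpTranslator
):
    def append_op(self):
        self.op_type = "distributed_fused_lamb_init"
        # Placeholder tensors: the real op declares many more inputs and
        # outputs; these I/O names are guessed from the InferMeta
        # signature quoted in this thread, not copied from the PR.
        param = paddle.ones(shape=(1, 1), dtype="float32")
        grad = paddle.ones(shape=(1, 1), dtype="float32")
        global_scale = paddle.ones(shape=(1,), dtype="float32")
        step = paddle.ones(shape=(1,), dtype="int64")
        helper = LayerHelper(self.op_type)
        helper.append_op(
            type=self.op_type,
            inputs={"Param": [param], "Grad": [grad]},
            outputs={"GlobalScale": global_scale, "Step": step},
            attrs={},
        )

    def test_translator(self):
        self.check()


if __name__ == "__main__":
    unittest.main()
```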

@xiaoyewww xiaoyewww force-pushed the feat/distributed_fused_lamb_init branch from 0e65734 to 583af04 Compare February 27, 2024 02:35
xiaoyewww (Contributor, Author)

@kangguangli CI has passed, please take a look~

kangguangli previously approved these changes Mar 4, 2024

Review comment on paddle/phi/infermeta/binary.cc:
std::vector<MetaTensor*> grad_out,
MetaTensor* global_scale,
MetaTensor* step) {
fp32_fused_param->set_dtype(DataType::FLOAT32);
Contributor

Doesn't this op need to set its dims information as well?

Contributor

Under the old IR, this op's InferShape was empty; these settings are new in this PR. Adding dims would require inferring them from the kernel implementation, so I think it is better to keep the status quo here and refine InferMeta later if a real need arises.

@@ -0,0 +1,152 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
Contributor

2023 -> 2024

xiaoyewww (Contributor, Author)

Done, thanks~

@xiaoyewww xiaoyewww force-pushed the feat/distributed_fused_lamb_init branch from 90dc536 to a86d558 Compare March 4, 2024 10:48
@kangguangli kangguangli requested a review from zyfncg March 4, 2024 11:03
zyfncg previously approved these changes Mar 4, 2024
@xiaoyewww xiaoyewww force-pushed the feat/distributed_fused_lamb_init branch from a86d558 to 474f1a3 Compare March 4, 2024 14:12
kangguangli (Contributor)

@jzhang533 @sunzhongkai588 @Ligoml
Hello, please review this PR. It adds a non-public API under PIR: paddle.base.libpaddle.pir.ops.distributed_fused_lamb_init.
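Because the API is non-public, it is exposed on the compiled libpaddle extension rather than under the documented paddle.* namespace. A minimal sketch of how one could locate the binding (illustrative only; the generated call signature is not shown in this thread, so no call is attempted here):

```python
import paddle  # loads the compiled libpaddle extension
from paddle.base import libpaddle

# Non-public binding generated for the PIR dialect; user code should
# not depend on it directly, since it carries no stability guarantee.
print(hasattr(libpaddle.pir.ops, "distributed_fused_lamb_init"))
```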

sunzhongkai588 (Contributor) left a comment

LGTM

@luotao1 luotao1 merged commit cc97ef8 into PaddlePaddle:develop Mar 5, 2024
30 checks passed
@xiaoyewww xiaoyewww changed the title from "【PIR Dist Op Reg No.24】 reg distributed_fused_lamb_init" to "【PIR Dist Op Reg No.25】 reg distributed_fused_lamb_init" Mar 6, 2024
@xiaoyewww xiaoyewww deleted the feat/distributed_fused_lamb_init branch May 10, 2024 15:10
Labels
contributor (External developers), HappyOpenSource (快乐开源活动issue与PR)
7 participants