
【Hackathon No.48】Add float16 data type support to the Paddle assign_value, meshgrid, kthvalue, and determinant operators #52046

Closed
wants to merge 10 commits

Conversation

denglianbin
Contributor

PR types

Others

PR changes

Others

Describe

Performance data (op benchmark)

kthvalue

| shape_x | keepdim | k | fp32 | fp16 |
| --- | --- | --- | --- | --- |
| 16, 10000 | FALSE | 5 | 0.214565287 | 0.114623381 |
| 16, 3000 | TRUE | 1 | 0.105815031 | 0.097130513 |

determinant

| shape_x | shape_y | fp32 | fp16 |
| --- | --- | --- | --- |
| 16, 100, 100 | 16, 1, 1 | 2.825516097 | 2.873911176 |
| 16, 200, 200 | 16, 1, 1 | 12.56536148 | 12.72580015 |

assign_value

| shape_x | fp32 | fp16 |
| --- | --- | --- |
| 10, 12 | 0.118220582 | 0.119640389 |
| 50, 50 | 0.61117435 | 0.669646506 |

meshgrid

| shape_x | shape_y | fp32 | fp16 |
| --- | --- | --- | --- |
| 100 | 200 | 0.039633683 | 0.038926699 |
| 1000 | 2000 | 0.072888695 | 0.076483707 |
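
For reference, here is a minimal sketch of how one of these comparisons (kthvalue, fp32 vs. fp16) could be reproduced outside the official op-benchmark suite, assuming a CUDA build of Paddle. The warm-up and iteration counts are illustrative choices, not the benchmark's configuration; the numbers in the tables above come from the op benchmark itself.

```python
import time

import paddle

# Assumes a CUDA build of Paddle; shape/k/keepdim mirror the first kthvalue row above.
paddle.set_device("gpu")


def time_kthvalue(dtype, shape=(16, 10000), k=5, keepdim=False, iters=1000):
    x = paddle.randn(shape).astype(dtype)
    # Warm up so kernel selection/caching is excluded from the measurement.
    for _ in range(10):
        paddle.kthvalue(x, k=k, keepdim=keepdim)
    paddle.device.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        paddle.kthvalue(x, k=k, keepdim=keepdim)
    paddle.device.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000.0  # average ms per call


print("fp32:", time_kthvalue("float32"))
print("fp16:", time_kthvalue("float16"))
```

The other operators can be timed the same way, e.g. paddle.linalg.det for determinant and paddle.meshgrid for meshgrid.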

@paddle-bot

paddle-bot bot commented Mar 23, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

        ]

    def test_check_grad(self):
        self.check_grad(['Input'], ['Out'], user_defined_grads=self.gt_grad)
Contributor


You could try not using user_defined_grads.

Contributor Author


Hi, user_defined_grads was used previously because the fp16 numerical gradient computation had precision errors at the time, so custom fp32 gradients were used for testing. After verification, the precision issue has been fixed on the current paddle develop branch, so user_defined_grads has been removed.
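
For context, here is a hedged sketch of what an fp16 gradient check can look like once user_defined_grads is dropped; the determinant case is used because the diff above checks the 'Input' to 'Out' gradient. The class name, shapes, and the op_test import path are illustrative assumptions, not the PR's actual test code.

```python
import unittest

import numpy as np
import paddle
from op_test import OpTest  # Paddle's operator test base; the import path depends on the source tree


class TestDeterminantFP16Op(OpTest):  # illustrative class name, not necessarily the PR's
    def setUp(self):
        self.op_type = "determinant"
        self.python_api = paddle.linalg.det
        self.dtype = np.float16
        x = np.random.random((4, 10, 10)).astype(self.dtype)
        self.inputs = {'Input': x}
        # Forward reference computed in fp32 and cast back (numpy has no fp16 det kernel).
        self.outputs = {'Out': np.linalg.det(x.astype(np.float32)).astype(self.dtype)}

    def test_check_output(self):
        self.check_output()

    def test_check_grad(self):
        # No user_defined_grads: rely on the framework's numeric fp16 gradient,
        # which the author reports is accurate on current paddle develop.
        self.check_grad(['Input'], ['Out'])


if __name__ == '__main__':
    unittest.main()
```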

@@ -72,6 +72,16 @@ def init_data(self):
        self.attrs["bool_values"] = [int(v) for v in self.value.flat]


@unittest.skipIf(
    not core.is_compiled_with_cuda(), "core is not compiled with CUDA"
)
Contributor


Can this skip be removed? The unit test framework now automatically skips fp16 on unsupported devices, and the other unit tests further down don't add this decorator either.

Contributor Author


OK. It's true that the other fp16 unit tests didn't add this decorator, but this particular test apparently was not skipped properly and failed. I'll remove it and rerun CI to check.

@@ -128,5 +138,13 @@ def init_dtype(self):
        self.dtype = "bool"


@unittest.skipIf(
    not core.is_compiled_with_cuda(), "core is not compiled with CUDA"
)
Contributor


The decorator here probably cannot be removed, because its base class inherits from unittest.TestCase, which has no automatic skip mechanism.

Contributor Author


done
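
To summarize the two cases discussed in this exchange, a hedged sketch (the op choice, class names, and op_test import path are illustrative assumptions, not the PR's code): an OpTest-derived fp16 test can rely on the framework to skip devices without fp16 support, while a class deriving directly from unittest.TestCase keeps the explicit decorator.

```python
import unittest

import numpy as np
import paddle
from paddle.fluid import core
from op_test import OpTest  # Paddle's operator test base; the import path depends on the source tree


class TestScaleFP16Op(OpTest):
    """Illustrative OpTest subclass (placeholder 'scale' op): the test framework
    can skip fp16 cases on devices that do not support them, so no explicit
    skipIf decorator is needed here."""

    def setUp(self):
        self.op_type = "scale"
        self.python_api = paddle.scale
        self.dtype = np.float16
        x = np.random.random((4, 8)).astype(self.dtype)
        self.inputs = {'X': x}
        self.attrs = {'scale': 2.0}
        self.outputs = {'Out': (x * 2.0).astype(self.dtype)}

    def test_check_output(self):
        self.check_output()


@unittest.skipIf(
    not core.is_compiled_with_cuda(), "core is not compiled with CUDA"
)
class TestFP16Api(unittest.TestCase):
    """Illustrative plain unittest.TestCase: there is no automatic skip
    mechanism, so the explicit decorator stays (the case kept in the PR)."""

    def test_fp16_api(self):
        paddle.set_device("gpu")
        x = paddle.ones([2, 5], dtype="float16")
        np.testing.assert_array_equal(
            x.numpy(), np.ones((2, 5), dtype=np.float16)
        )


if __name__ == '__main__':
    unittest.main()
```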

@zhangting2020
Contributor

zhangting2020 commented Apr 19, 2023

Check whether the unit test failures in PR-CI-Coverage are related to your changes.
You can install a nightly package following the official tutorial and test whether the failing unit tests can be reproduced. Also test the package built from your own PR. If the nightly package works fine but your PR reproduces the failures, then you probably need to investigate your PR changes further.
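
As a small aid for that comparison, a sketch for confirming which wheel is actually installed before rerunning the failing tests (assuming the build exposes the usual paddle.version metadata):

```python
import paddle

# Print the installed Paddle version string and the source commit it was built
# from, to tell a nightly wheel apart from the wheel built from this PR.
print(paddle.__version__)
print(paddle.version.commit)
```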

@denglianbin
Contributor Author

denglianbin commented Apr 20, 2023

> Check whether the unit test failures in PR-CI-Coverage are related to your changes. You can install a nightly package following the official tutorial and test whether the failing unit tests can be reproduced. Also test the package built from your own PR. If the nightly package works fine but your PR reproduces the failures, then you probably need to investigate your PR changes further.

Hi, I tried this: the official nightly package also fails these tests. The two failing unit tests should be unrelated to my PR; it looks like a separate bug.

@luotao1
Contributor

luotao1 commented Apr 24, 2023

These unit tests currently fail only in this PR; they don't fail in other PRs.
How about splitting the PR, one operator per PR, to narrow down the exact cause.

@denglianbin
Contributor Author

> These unit tests currently fail only in this PR; they don't fail in other PRs. How about splitting the PR, one operator per PR, to narrow down the exact cause.

#53286 determinant
#53285 kthvalue
#53284 meshgrid
#53283 assign_value

OK. All four operators have been resubmitted as separate PRs against the latest develop, and this PR has pulled the latest develop and rerun CI.

Labels: contributor (External developers)