
[AMP OP&Test] adjust test_elementwise_sub's tolerance, max_relative_error of grad and #50953

Merged: 17 commits into PaddlePaddle:develop, Apr 13, 2023

Conversation

@Vvsmile (Contributor) commented Feb 27, 2023

PR types

Others

PR changes

Others

Describe

Adjust the FP16 tolerances for test_elementwise_sub: set max_relative_error of the gradient check and atol/rtol of the output check to 1e-3.
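For context, a minimal sketch of what such a tolerance adjustment typically looks like in a Paddle OpTest subclass; the class name, shapes, and values here are illustrative, not this PR's actual diff:

```python
import numpy as np
from op_test import OpTest  # Paddle's op test base class


class TestElementwiseSubFP16OP(OpTest):
    def setUp(self):
        self.op_type = "elementwise_sub"
        x = np.random.uniform(0.1, 1, [13, 17]).astype(np.float16)
        y = np.random.uniform(0.1, 1, [13, 17]).astype(np.float16)
        self.inputs = {'X': x, 'Y': y}
        self.outputs = {'Out': x - y}

    def test_check_output(self):
        # Forward-output comparison loosened to 1e-3 for FP16.
        self.check_output(atol=1e-3, rtol=1e-3)

    def test_check_grad_normal(self):
        # Gradient comparison loosened to 1e-3 for FP16.
        self.check_grad(['X', 'Y'], 'Out', max_relative_error=1e-3)
```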

paddle-bot (bot) commented Feb 27, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

paddle-bot (bot) commented Feb 27, 2023

❌ This PR was not created using the PR template. You can refer to this Demo.
Please use the PR template; it saves our maintainers' time so that more developers can be helped.

@ZzSean changed the title from "adjust test_elementwise_sub's tolerance, max_relative_error of grad and" to "[AMP OP&Test] adjust test_elementwise_sub's tolerance, max_relative_error of grad and" on Mar 6, 2023
Comment on lines 393 to 394
class TestElementwiseBF16OP_scalar(TestElementwiseOp):
    def setUp(self):
Reviewer (Contributor): For this scalar case, is it necessary to test the backward pass at all?

        self.outputs = {'Out': convert_float_to_uint16(self.outputs['Out'])}

    def test_check_grad_normal(self):
        self.check_grad(['X', 'Y'], 'Out')
Reviewer (Contributor): You need to use check_grad_with_place here; otherwise the test will fail when it runs on CPU. Same below.
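A sketch of the suggested fix, assuming the surrounding class from the diff; check_grad_with_place pins the check to a CUDA place so the BF16 kernel is never dispatched to CPU:

```python
from paddle.fluid import core  # assumed import path, as used elsewhere in these tests


class TestElementwiseBF16OP_scalar(TestElementwiseOp):
    def test_check_grad_normal(self):
        # Run the gradient check explicitly on GPU; a plain check_grad
        # may also run on CPU, where BF16 is not supported.
        place = core.CUDAPlace(0)
        self.check_grad_with_place(place, ['X', 'Y'], 'Out')
```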

@@ -90,15 +133,62 @@ def if_enable_cinn(self):
        self.enable_cinn = False


class TestElementwiseSubFP16OP_ZeroDim1(TestElementwiseOp):
Reviewer (Contributor): The parent class being inherited here looks wrong. Same below.

    or not core.is_bfloat16_supported(core.CUDAPlace(0)),
    "core is not compiled with CUDA and do not support bfloat16",
)
class TestElementwiseBF16OP_ZeroDim1(TestElementwiseOp):
Reviewer (Contributor): Inherit directly from TestElementwiseBF16OP. Same below.

            'X': convert_float_to_uint16(self.inputs['X']),
            'Y': convert_float_to_uint16(self.inputs['Y']),
        }
        self.outputs = {'Out': convert_float_to_uint16(self.outputs['Out'])}
Reviewer (Contributor): self.if_check_prim() and self.if_enable_cinn() are missing here.
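Putting the inline comments together, a sketch of the shape the reviewer is asking for (inherit from TestElementwiseBF16OP and invoke both hooks at the end of setUp); the input construction is illustrative, not the PR's actual diff:

```python
import numpy as np
from op_test import convert_float_to_uint16  # BF16 packing helper used in the diff


class TestElementwiseBF16OP_ZeroDim1(TestElementwiseBF16OP):
    def setUp(self):
        self.op_type = "elementwise_sub"
        x = np.random.uniform(0.1, 1, []).astype(np.float32)
        y = np.random.uniform(0.1, 1, []).astype(np.float32)
        self.inputs = {
            'X': convert_float_to_uint16(x),
            'Y': convert_float_to_uint16(y),
        }
        self.outputs = {'Out': convert_float_to_uint16(x - y)}
        # The hooks the reviewer notes are missing:
        self.if_check_prim()
        self.if_enable_cinn()
```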

@Vvsmile (Author): Done

@@ -66,7 +71,49 @@ def if_check_prim(self):
        self.check_prim = True

    def if_enable_cinn(self):
-        pass
+        self.enable_cii = False
Reviewer (Contributor): Spelling error: enable_cii should be enable_cinn.

@ZzSean (Contributor) left a comment

LGTM

@ZzSean ZzSean merged commit 2cff983 into PaddlePaddle:develop Apr 13, 2023
@Vvsmile deleted the amp_fp16_elementwise_sub branch on March 11, 2024 08:20