
[OpTest] support prim test in OpTest #50509

Merged
merged 29 commits on Feb 21, 2023
Changes from all commits (29 commits)
0324a82
support prim test in OpTest
Charles-hit Feb 14, 2023
2f4b577
fix cmake
Charles-hit Feb 14, 2023
f8c067e
fix op test
Charles-hit Feb 14, 2023
92705f5
fix test_input_spec
Charles-hit Feb 15, 2023
10aa52f
disable cinn in reduce_sum unit test
Charles-hit Feb 15, 2023
739e146
add bfloat16 dtype for sum
Charles-hit Feb 15, 2023
71fc6a3
polish code
Charles-hit Feb 15, 2023
82f2057
add clear jit program function
Charles-hit Feb 15, 2023
eaf8176
convert grad out from tensor to numpy
Charles-hit Feb 15, 2023
f8e4fa2
remove unnecessary code
Charles-hit Feb 15, 2023
0ed31a1
add only_prim flag
Charles-hit Feb 15, 2023
596b77c
fix flag
Charles-hit Feb 15, 2023
db1ba08
fix op test
Charles-hit Feb 15, 2023
5b141d3
fix optest comp inplace error
Charles-hit Feb 16, 2023
c04cbf2
fix op test
Charles-hit Feb 16, 2023
23d978f
fix op test with guard
Charles-hit Feb 16, 2023
016ee35
add initialization of check_comp flag
Charles-hit Feb 16, 2023
12ab8cb
fix comp inplace error in op test
Charles-hit Feb 16, 2023
4b2d222
rename check_comp with check_prim and add bfloat16 dtype convert
Charles-hit Feb 17, 2023
8b6dda7
rename comp_op_type to prim_op_type
Charles-hit Feb 17, 2023
889146c
rename comp to prim
Charles-hit Feb 17, 2023
7913e27
remove useless code
Charles-hit Feb 17, 2023
a0210f3
skip ci check for only prim
Charles-hit Feb 17, 2023
c3ffa48
add no_grad_vars and grad_outputs in prim test
Charles-hit Feb 19, 2023
e48efee
fix var_dict
Charles-hit Feb 19, 2023
0d375a0
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
Charles-hit Feb 20, 2023
dbddace
fix op test for only_prim
Charles-hit Feb 20, 2023
e825902
fix dy2static bugs
Charles-hit Feb 20, 2023
d19cf71
polish some code
Charles-hit Feb 20, 2023
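Taken together, the commit messages describe a new prim-checking path in OpTest: a test declares a prim_op_type, passes check_prim=True to check_output/check_grad, and an only_prim flag restricts a run to the prim checks. The snippet below is a hypothetical sketch of how a kernel test might opt in; the class name, shapes, and attribute values are illustrative, and the exact OpTest surface should be confirmed against the merged code.

import numpy as np
import paddle
from op_test import OpTest  # test utility under python/paddle/fluid/tests/unittests

class TestSoftmaxCompositeOp(OpTest):
    def setUp(self):
        self.op_type = "softmax"
        self.prim_op_type = "comp"                 # check the composite (prim) lowering
        self.python_api = paddle.nn.functional.softmax
        x = np.random.uniform(0.1, 1.0, [2, 3, 4]).astype("float32")
        out = np.exp(x) / np.sum(np.exp(x), axis=-1, keepdims=True)
        self.inputs = {'X': x}
        self.outputs = {'Out': out}
        self.attrs = {'axis': -1}

    def test_check_output(self):
        # Compare the native kernel against the composite implementation.
        self.check_output(check_prim=True)

    def test_check_grad(self):
        # Also verify gradients through the prim/composite path.
        self.check_grad(['X'], 'Out', check_prim=True)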
8 changes: 8 additions & 0 deletions python/paddle/fluid/tests/unittests/CMakeLists.txt
@@ -1204,6 +1204,14 @@ if($ENV{USE_STANDALONE_EXECUTOR})
PROPERTIES ENVIRONMENT FLAGS_USE_STANDALONE_EXECUTOR=0)
endif()

set(TEST_CINN_OPS test_softmax_op test_expand_v2_op test_reduce_op)

foreach(TEST_CINN_OPS ${TEST_CINN_OPS})
if(WITH_CINN)
set_tests_properties(${TEST_CINN_OPS} PROPERTIES LABELS "RUN_TYPE=CINN")
endif()
endforeach()

if(WITH_CINN AND WITH_TESTING)
set_tests_properties(
test_resnet50_with_cinn
42 changes: 42 additions & 0 deletions python/paddle/fluid/tests/unittests/config.py
@@ -0,0 +1,42 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import numpy as np

TOLERANCE = {
Contributor
Is this config used only for op tests?

np.dtype('float64'): {
"jit_comp": {"rtol": 1e-15, "atol": 1e-15},
"fw_comp": {"rtol": 1e-15, "atol": 1e-15},
"rev_comp": {"rtol": 1e-15, "atol": 1e-15},
"cinn": {"rtol": 1e-14, "atol": 1e-14},
},
np.dtype('float32'): {
"jit_comp": {"rtol": 1e-6, "atol": 1e-6},
"fw_comp": {"rtol": 1e-6, "atol": 1e-6},
"rev_comp": {"rtol": 1e-6, "atol": 1e-6},
"cinn": {"rtol": 1e-5, "atol": 1e-5},
},
np.dtype('float16'): {
"jit_comp": {"rtol": 1e-3, "atol": 1e-3},
"fw_comp": {"rtol": 1e-3, "atol": 1e-3},
"rev_comp": {"rtol": 1e-3, "atol": 1e-3},
"cinn": {"rtol": 1e-2, "atol": 1e-2},
},
np.dtype('uint16'): {
"jit_comp": {"rtol": 1e-2, "atol": 1e-2},
"fw_comp": {"rtol": 1e-2, "atol": 1e-2},
"rev_comp": {"rtol": 1e-2, "atol": 1e-2},
"cinn": {"rtol": 1e-1, "atol": 1e-1},
},
}
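As a quick illustration of how these entries might be consumed (this is not part of the diff), a checker could key into TOLERANCE by numpy dtype and comparison mode and feed rtol/atol into np.allclose; check_close and the sample values below are hypothetical.

import numpy as np
from config import TOLERANCE  # assumes this module is importable from the test directory

def check_close(actual, expected, mode):
    # mode is one of "jit_comp", "fw_comp", "rev_comp", "cinn".
    tol = TOLERANCE[np.dtype(actual.dtype)][mode]
    return np.allclose(actual, expected, rtol=tol["rtol"], atol=tol["atol"])

x = np.random.rand(4).astype("float32")
print(check_close(x, x + 1e-7, "fw_comp"))  # True: inside the 1e-6 float32 tolerance
print(check_close(x, x + 1e-3, "fw_comp"))  # False: well outside it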
Contributor
Should bfloat16 be supported here?

Contributor Author
numpy has no bfloat16 dtype, so uint16 is used here to represent bfloat16; this is how the Python APIs currently handle it.
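For context on that representation, here is a simplified, truncating sketch in plain numpy (not Paddle's actual helpers; the function names are illustrative): bfloat16 keeps the upper 16 bits of an IEEE float32, so a float32 array can be packed into uint16 and unpacked again with bit operations.

import numpy as np

def float32_to_bf16_bits(x):
    # Reinterpret float32 as uint32 and keep the high 16 bits; that bit
    # pattern is the bfloat16 encoding, stored in a uint16 array.
    return np.right_shift(x.astype(np.float32).view(np.uint32), 16).astype(np.uint16)

def bf16_bits_to_float32(y):
    # Shift the bfloat16 bits back into the high half of a uint32 and
    # reinterpret as float32 (the dropped mantissa bits become zeros).
    return np.left_shift(y.astype(np.uint32), 16).view(np.float32)

x = np.array([1.0, 3.14159, -2.5], dtype=np.float32)
bits = float32_to_bf16_bits(x)
print(bits.dtype)                    # uint16
print(bf16_bits_to_float32(bits))    # roughly [1.0, 3.140625, -2.5]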
