Skipping CUDA tests for CPU setup #2191
Conversation
Codecov Report
@@            Coverage Diff             @@
##           develop    #2191   +/-   ##
========================================
  Coverage    36.57%   36.57%
========================================
  Files          484      484
  Lines        43300    43300
========================================
  Hits         15837    15837
  Misses       27463    27463
@AlexanderDokuchaev, please post the number of the corresponding passing torch CPU precommit build.
I disapprove of the entire "templating" pattern for these kinds of tests. Because of this pattern, the common algo part is run in each of the backend tests, leading to at least (N - 1) * (time_in_common_algo_code) extra compute time spent in precommits. Also because of this pattern, you now have to put `pytest.skip` not into the test cases, where it would have been a very visible, local marker that this exact test case may be skipped, but into the overridden functions hosting the backend-specific functionality of the template, where it is not immediately visible which test cases the `.skip` will impact.
Anyway, if you have to continue with the current approach for now, I would expect a `pytest.skip` in the `fn_to_type` method of this class as well.
After the algo unification, these tests should be split into tests for common code, tests for backend-specific parts of the algo (which probably won't share enough code to warrant a template), and maybe interface conformance tests between the backend and the common code.
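To make the placement argument above concrete, here is a minimal sketch of the two options. The class and method names (`BackendAlgoTestTemplate`, `TestCudaBackend`) are illustrative stand-ins for the template pattern under discussion, not the actual NNCF test code; only `fn_to_type` is taken from the comment above.

```python
import pytest


class BackendAlgoTestTemplate:
    """Template test: common algo logic runs once per backend subclass."""

    def fn_to_type(self, x):
        raise NotImplementedError

    def test_common_algo(self):
        # Common part re-executed in every backend subclass.
        data = self.fn_to_type([1, 2, 3])
        assert len(data) == 3


class TestCudaBackend(BackendAlgoTestTemplate):
    def fn_to_type(self, x):
        # Skip buried in an overridden helper: from here it is not
        # obvious which test cases will end up skipped.
        pytest.skip("CUDA is not available on this host")


# The alternative the reviewer prefers: a visible, local marker
# attached directly to the test case that may be skipped.
@pytest.mark.skip(reason="CUDA is not available on this host")
def test_cuda_specific_case():
    ...
```

Both placements skip the test, but only the second makes the skip condition visible at the test-case level.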
@vshampor, please provide code for your proposal. As you describe it, the tests will still fail with the same CUDA error.
precommit_torch_cpu/154/
The new solution is much more explicit, thanks. My previous statement still holds, although I naturally don't expect it to be addressed in this PR. Please make sure to re-run the build and post the updated build number.
Changes
Add a `torch.cuda.is_available` check to the `TestTorchCudaFBCAlgorithm` test.
Related tickets
122537
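The change described above can be sketched as follows. This is a minimal, hypothetical illustration of guarding a CUDA-only test with `torch.cuda.is_available`; the helper wraps the check in a try/except so the sketch also runs on hosts without torch installed, and the test name is invented for illustration.

```python
import pytest


def cuda_is_available() -> bool:
    # Real check is torch.cuda.is_available(); treat a missing torch
    # install the same as a missing GPU so the sketch stays runnable.
    try:
        import torch
        return bool(torch.cuda.is_available())
    except ImportError:
        return False


@pytest.mark.skipif(not cuda_is_available(), reason="CUDA is not available")
def test_fbc_algorithm_cuda():
    # Backend-specific CUDA test body would go here.
    ...
```

With `skipif`, CPU-only precommit runs report the test as skipped with the given reason instead of failing on a CUDA initialization error.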