
Skipping CUDA tests for CPU setup #2191

Merged

Conversation

AlexanderDokuchaev
Collaborator

Changes

Add a `torch.cuda.is_available` check to the `TestTorchCudaFBCAlgorithm` test.

Related tickets

122537
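The change described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual diff: the test method body and its name are hypothetical, and placing the check in a class-level decorator is an assumption based on the final commit message ("check cuda in class decorator"). The `cuda_available` helper guards the import so the sketch also loads on machines without torch installed.

```python
import pytest


def cuda_available() -> bool:
    # Guarded stand-in for torch.cuda.is_available(), so this sketch
    # can be imported even where torch is not installed.
    try:
        import torch

        return torch.cuda.is_available()
    except ImportError:
        return False


# Skip every test case in the class on CPU-only setups; mirrors the
# "check cuda in class decorator" approach from the final commit.
@pytest.mark.skipif(not cuda_available(), reason="Skipping CUDA test: CUDA is not available")
class TestTorchCudaFBCAlgorithm:
    def test_compress_model(self):
        # Backend-specific CUDA logic would run here (body is hypothetical).
        assert cuda_available()
```

With the decorator at class level, every test case inherited or defined in the class is skipped on a CPU-only machine, instead of failing with a CUDA initialization error.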

@AlexanderDokuchaev AlexanderDokuchaev requested a review from a team as a code owner October 11, 2023 17:21
@github-actions github-actions bot added the NNCF PT Pull requests that updates NNCF PyTorch label Oct 11, 2023
@codecov

codecov bot commented Oct 11, 2023

Codecov Report

Merging #2191 (5bafdab) into develop (da1e175) will not change coverage.
Report is 4 commits behind head on develop.
The diff coverage is n/a.

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #2191   +/-   ##
========================================
  Coverage    36.57%   36.57%           
========================================
  Files          484      484           
  Lines        43300    43300           
========================================
  Hits         15837    15837           
  Misses       27463    27463           

see 1 file with indirect coverage changes

@vshampor
Contributor

@AlexanderDokuchaev, please post the number of the corresponding passing torch CPU precommit build.

@vshampor vshampor left a comment
Contributor

I disapprove of the entire "templating" pattern for these kinds of tests. Because of this pattern, the common algo part is re-run in each of the backend tests, costing at least (N - 1) * (time_in_common_algo_code) extra compute time in precommits. The pattern also forces you to put pytest.skip not into the test cases, where it would be a very visible, local marker that this exact test case may be skipped, but into the overridden functions hosting the backend-specific functionality of the template, where it is not immediately obvious which test cases the .skip will impact.

Anyway, if you have to continue with the current approach for now, I would expect a pytest.skip in the def fn_to_type method of this class as well.

After the algo unification, these tests should be split into tests for common code, tests for backend-specific parts of the algo (which probably won't share enough code to warrant a template), and maybe interface conformance tests between the backend and the common code.
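The criticism above can be sketched as follows. All names except `fn_to_type` (mentioned in the comment) are hypothetical stand-ins, and `cuda_available` is a guarded substitute for `torch.cuda.is_available`: the skip lives inside an overridden backend hook, so a reader of the template's test cases cannot see that they may be skipped.

```python
import pytest


def cuda_available() -> bool:
    # Guarded stand-in for torch.cuda.is_available().
    try:
        import torch

        return torch.cuda.is_available()
    except ImportError:
        return False


class TemplateTestFBCAlgorithm:
    """Hypothetical template: common algo logic shared by all backends."""

    def fn_to_type(self, value):
        raise NotImplementedError

    def test_common_algo(self):
        # Nothing here hints that the CUDA backend may skip this case;
        # the skip is buried in the overridden hook below.
        assert self.fn_to_type(1.0) == 1.0


class TestTorchCudaFBCAlgorithm(TemplateTestFBCAlgorithm):
    def fn_to_type(self, value):
        if not cuda_available():
            pytest.skip("CUDA is not available")
        return value
```

Calling `pytest.skip` inside `fn_to_type` silently skips every inherited test case that happens to call it, which is exactly the visibility problem the comment describes.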

@AlexanderDokuchaev
Collaborator Author

Anyway, if you have to continue with the current approach for now, I would expect a pytest.skip in the def fn_to_type method of this class as well.

@vshampor, please provide the code for your proposal. As you describe it, the tests would still fail with the same CUDA error.

@AlexanderDokuchaev
Collaborator Author

precommit_torch_cpu/154/

@vshampor
Contributor

The new solution is much more explicit, thanks. My previous statement still holds, although I naturally don't expect it to be addressed in this PR. Please make sure to re-run the build and post the updated build number.

@AlexanderDokuchaev
Collaborator Author

precommit_torch_cpu/154/ on the last commit:

[2023-10-12T13:46:01.604Z] * | 5bafdab3 check cuda in class decorator

@vshampor vshampor self-requested a review October 13, 2023 09:19
@vshampor vshampor merged commit bd9e843 into openvinotoolkit:develop Oct 13, 2023
6 checks passed