Update tests that mock cuda availability and extend it for mps #14012
Labels
- accelerator: mps (Apple Silicon GPU)
- good first issue (Good for newcomers)
- help wanted (Open to be worked on)
- tests
Proposed refactor
We have tests that fail locally on MPS-supported devices because their mocking of CUDA does not extend to MPS. After #13947 landed, the availability check actually works now, and we need to update some tests. These are mainly the tests that combine the following settings:
- `accelerator="gpu"` (which includes both CUDA and MPS)
- `torch.cuda.is_available` mocked to return `True`, or `torch.cuda.device_count` mocked to return a value greater than 0

An example of such a test:
https://github.com/Lightning-AI/lightning/blob/e6a8283e9cd9df53fb661c64bbf2037e1391a16d/tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py#L245-L262
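To make the failure mode concrete, here is a minimal sketch of the pattern, assuming pytest-style tests with `unittest.mock`; the test name and assertion are illustrative, not copied from the linked file:

```python
from unittest import mock

from pytorch_lightning import Trainer


# CUDA availability is mocked, but on Apple Silicon
# torch.backends.mps.is_available() still returns True, so
# accelerator="gpu" can resolve to the MPS accelerator instead of the
# mocked CUDA one, and the assertions below fail.
@mock.patch("torch.cuda.is_available", return_value=True)
@mock.patch("torch.cuda.device_count", return_value=2)
def test_accelerator_gpu_with_mocked_cuda(*_):
    trainer = Trainer(accelerator="gpu", devices=2)
    assert trainer.num_devices == 2
```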
Motivation
Let MPS users (like me) run the tests locally.
Pitch
Extend the mocking to MPS where applicable. In some tests it may make sense to only test for CUDA, in which case we should change `accelerator="gpu"` to `accelerator="cuda"`. Some other tests may need to be skipped entirely on MPS.

Note that simply slapping a `RunIf(min_cuda=x)` on top of these tests is not an option, as for most of the tests we want to run them with mocks on the CPU. A sketch of what the extended mocking could look like follows below.
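A rough sketch, assuming torch >= 1.12 (where `torch.backends.mps.is_available` exists); the test name is again illustrative. Mocking MPS availability alongside CUDA makes the device resolution deterministic on any machine:

```python
from unittest import mock

from pytorch_lightning import Trainer


# Also mock MPS availability so that accelerator="gpu" deterministically
# resolves to CUDA, regardless of the machine running the test.
@mock.patch("torch.cuda.is_available", return_value=True)
@mock.patch("torch.cuda.device_count", return_value=2)
@mock.patch("torch.backends.mps.is_available", return_value=False)
def test_accelerator_gpu_with_mocked_cuda(*_):
    trainer = Trainer(accelerator="gpu", devices=2)
    assert trainer.num_devices == 2
```

For the tests that should instead be skipped on MPS, a plain `pytest.mark.skipif(torch.backends.mps.is_available(), reason=...)`, or an equivalent `RunIf` condition, would do.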
Additional context
Rather do this sooner than later. Right now, I can't tell whether tests are failing because of bugs in my branch or because of this issue.
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @akihironitta @justusschock