MPS (Mac M1) device support #13102
Comments
I heard @awaelchli just got an M1 😈
@justusschock also has one if I remember correctly 😃
I'll investigate on the weekend
After a bit of offline discussion, we thought about this API:
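As a rough illustration of the flag-based API being discussed, here is a minimal sketch, assuming the `accelerator="mps"` and `accelerator="auto"` spellings that Lightning's current docs use:

```python
import pytorch_lightning as pl

# Request the Apple-GPU (MPS) backend explicitly
trainer = pl.Trainer(accelerator="mps", devices=1)

# Or let Lightning pick whichever accelerator the machine exposes
trainer = pl.Trainer(accelerator="auto", devices=1)
```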
This shouldn't be a problem because a machine cannot have both accelerators available.
Just when I thought I wanted to raise an issue! I'm so rooting for this.
I'm glad we did the accelerator refactor to make supporting new features like this much easier :)
Note that on an M1 Max, with a basic example like the LitAutoEncoder on MNIST (the one described on the PyTorch Lightning homepage), I get better results without MPS than with it.
My configuration (env) is: …
We could think about adding an M1 machine to GitHub Actions / self-hosting one.
Is training with accelerator="mps" slower than without it?
@paantya In my case, yes it is.
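For anyone who wants to reproduce that comparison outside of Lightning, here is a minimal, hypothetical timing sketch (assuming PyTorch >= 1.12 with MPS support; the tiny MLP, batch size, and step count are arbitrary). Small models like this tend to be dominated by per-op dispatch overhead, which is why the CPU can come out ahead.

```python
import time
import torch
from torch import nn

def time_training(device: str, steps: int = 200) -> float:
    """Run a few optimizer steps of a tiny MLP and return elapsed seconds."""
    model = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(32, 28 * 28, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    start = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    loss.item()  # reading the value forces any queued MPS kernels to finish
    return time.perf_counter() - start

print(f"cpu: {time_training('cpu'):.2f}s")
if torch.backends.mps.is_available():
    print(f"mps: {time_training('mps'):.2f}s")
```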
@carmocca How do I resolve pytorch_lightning not working on a Mac M1 when torch and torchvision are working?
@babaniyi I think you need to downgrade protobuf to < 4.21.0, e.g. 3.20
@babaniyi Here are the commands I use to configure a PyTorch Lightning env for Apple Silicon. Run them at the root level of the PyTorch Lightning repository on …
Hope this helps
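Whichever install route is used, a quick sanity check that the resulting environment actually exposes the MPS backend (assuming PyTorch >= 1.12) is:

```python
import torch

# Was this wheel compiled with MPS support, and can it be used on this machine?
print("MPS built into this wheel:", torch.backends.mps.is_built())
print("MPS available on this machine:", torch.backends.mps.is_available())
```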
@awaelchli @scalastic
🚀 Feature
https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
Docs: https://pytorch.org/docs/master/notes/mps.html
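For reference, basic usage of the MPS backend in plain PyTorch (not Lightning-specific) looks roughly like the sketch below, assuming a PyTorch build >= 1.12 on Apple Silicon:

```python
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    x = torch.ones(5, device=device)            # tensor allocated on the Apple GPU
    model = torch.nn.Linear(5, 3).to(device)    # modules move over with .to()
    print(model(x))
else:
    print("MPS backend not available on this machine")
```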
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @Borda @akihironitta @rohitgr7 @justusschock