cuda not recognized #78
Hi, what may work is to try installing PyTorch with GPU support, like the following. Please let me know if you have further issues. This should be done in the same conda environment.
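The exact command was not preserved in this copy of the thread; a typical pip command for installing a CUDA-enabled PyTorch build looks like the sketch below (the cu118 wheel index is an assumption, not necessarily what was originally posted):

```shell
# Illustrative only: install a CUDA-enabled PyTorch wheel into the active
# conda environment. cu118 is an assumed CUDA version; pick the one that
# matches your driver (see https://pytorch.org/get-started/locally/).
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```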
If you continue having issues, you can try the following conda command too
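The conda command itself is also missing from this copy; a representative form, with channel names and a pytorch-cuda version that are assumptions rather than the command originally suggested:

```shell
# Illustrative only: install CUDA-enabled PyTorch from the pytorch/nvidia
# conda channels inside the model_angelo environment. pytorch-cuda=11.8 is
# an assumed version; match it to your driver.
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
```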
That did not work. Can you be more specific in your installation statement? I figured it out; this worked:
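The working command is likewise not preserved here; given the later remark about making cudatoolkit explicit, it was presumably a conda install that pins cudatoolkit. A sketch with assumed version numbers, not the command actually posted:

```shell
# Illustrative only: conda install of PyTorch with an explicit cudatoolkit pin.
# cudatoolkit=11.3 is an assumed version; choose one supported by your driver.
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
```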
Wow @davidhoover, thanks so much for figuring this out. May I ask where you found this? I want to see if the part with the explicit cudatoolkit is needed.
I'm not sure if cudatoolkit needs to be explicit. I figured this out by trial-and-error, guided by running this test on a GPU node:
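The test itself was stripped from this copy of the thread; the standard check for whether PyTorch can see the GPU is shown below (a sketch, not necessarily the exact command posted):

```shell
# Illustrative only: prints True if PyTorch was built with CUDA support
# and can reach a GPU on this node, False otherwise.
python -c "import torch; print(torch.cuda.is_available())"
```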
If the result is False, CUDA is not being used.
After installing v1.0.5, I am getting a message saying cuda is not recognized:
/opt/mamba/envs/model_angelo/lib/python3.10/site-packages/torch/amp/autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
I am running this on an NVIDIA V100 node, capable of handling recent CUDA libraries and drivers. Is there anything that can be done to force PyTorch to recognize CUDA and the GPU devices available on the node?
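One quick way to tell whether the problem is a CPU-only PyTorch build rather than a driver issue is to inspect the installed wheel and the visible driver; the commands below are a general-purpose sketch, not steps taken from this thread:

```shell
# Illustrative diagnostics: confirm the node's driver is visible, then check
# which PyTorch build is installed. A version string ending in "+cpu" or a
# CUDA version of None indicates a CPU-only build that must be reinstalled.
nvidia-smi
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```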