Issues creating a cuda-enabled pytorch environment with UV #7202
Comments
Just some additional data: with […], but with […].
Have you taken a look at https://docs.astral.sh/uv/pip/compatibility/#local-version-identifiers?
Yeah, that seems to explain it. It's odd that specifying […]. Having to pin the versions exactly to get CUDA-enabled wheels isn't great for us. As a library we'd just like to depend on […]. I'm wondering if an option to […] might help.
If I'm reading the docs right, it's that […]. This general problem is something that I would like to solve by improving metadata. That's quite a rabbit hole, but if you'd like to learn more: […]
If you're trying to add CUDA-enabled PyTorch to a project, the following process worked for me (Windows 11, but it should work on Linux too):
This is what worked for me today, verified with […].
@msarahan, that's really interesting. Is there anything we can do on our side to support the PEP, or the variant-selection proposal?
Doing this without any index specifier doesn't appear to actually capture the CUDA-related sources into the […]. Meanwhile, if I use […]:

Even though on the previous step I had used the […].

The following works, and it produces what looks like it would be a working […]:
In fact, it looks like it's the only thing that is needed:

> py -3.12 -m venv .\.venv\
> .\.venv\Scripts\activate.ps1
> pip install uv
> uv add torch==2.4.1+cu121 torchaudio==2.4.1+cu121 torchvision==0.19.1+cu121 --index-url https://download.pytorch.org/whl/cu121

Then delete the […].
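If that `uv add` succeeds, the resulting pyproject.toml dependencies should look roughly like the following (a sketch; the exact layout depends on the uv version):

```toml
# Sketch of the pins uv add would record ([project] table per PEP 621):
[project]
dependencies = [
    "torch==2.4.1+cu121",
    "torchaudio==2.4.1+cu121",
    "torchvision==0.19.1+cu121",
]
```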
Is there a way to add torch, torchaudio, and torchvision (with GPU and without GPU) in a single pyproject.toml file, like we can do in […]?
On a side note, after running […], I get the following error: […]
No, we don't support resolutions with conflicting extra groups right now, though we're adding first-class support for it in the future.
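For anyone reading later: once that first-class support lands, a CPU/GPU split along these lines should be expressible. This is a hedged sketch assuming a uv release with a `conflicts` setting and per-extra sources; the index names are illustrative:

```toml
# Sketch, assuming a uv version that supports conflicting extras.
[project.optional-dependencies]
cpu = ["torch>=2.5.1"]
cu124 = ["torch>=2.5.1"]

[tool.uv]
conflicts = [[{ extra = "cpu" }, { extra = "cu124" }]]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```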
I've tried to do a fresh project with […]. P.S.: […]
@YoniChechik see #6523
I found that the instructions here: […] worked for my Rye (with uv) projects too. I just had to manually add those lines for whichever CUDA/CPU index I was using, then […]. Specifically, the […].
EDIT: Just realized my […]. Contrary to the original poster, I actually can get PyTorch (with the CUDA 12.4 runtime) working with just:

[project]
name = "whisper"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"torch>=2.5.1",
"torchaudio>=2.5.1",
"torchvision>=0.20.1",
]

$ python -c 'import torch; print(torch.__version__)'
2.5.1+cu124

nvidia-smi for my system:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 35C P8 8W / 70W | 1MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
I couldn't make it work with @extrange's approach, but @vajranaga's solution is indeed working for me!
Here's my […]:

[project]
name = "example"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"torch==2.5.1+cu124",
"torchaudio==2.5.1+cu124",
"torchvision==0.20.1+cu124",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.uv.sources]
torch = { index = "pytorch-cu124" }
torchvision = { index = "pytorch-cu124" }
torchaudio = { index = "pytorch-cu124" }
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
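The effect of `explicit = true` above is that the index is only consulted for packages that name it in `tool.uv.sources`; everything else resolves from the default index. A toy model of that routing logic (the function and names here are illustrative, not uv internals):

```python
# Toy model of explicit-index routing: a package is fetched from the
# index named in its sources entry if it has one, and falls back to
# the default index (PyPI) otherwise.
DEFAULT_INDEX = "pypi"

def pick_index(package: str, sources: dict[str, str]) -> str:
    return sources.get(package, DEFAULT_INDEX)

sources = {pkg: "pytorch-cu124" for pkg in ("torch", "torchvision", "torchaudio")}
print(pick_index("torch", sources))  # pytorch-cu124
print(pick_index("numpy", sources))  # pypi
```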
@YouJiacheng it looks like the dependency on numpy is optional? What happens if you […]?
Okay, please explain what you're expecting to see when reporting an issue; I can't just guess it from a screenshot. Please read https://docs.astral.sh/uv/pip/compatibility/#local-version-identifiers
I'm having some issues using uv lock and uv sync in a package that needs to use cuda-enabled pytorch wheels. Here's a minimal-ish reproduction:

https://github.com/pstjohn/uv-torchvision-repro

The workspace stuff is there since I'd like to eventually get this working in a monorepo-like environment (#6935).

The desired behavior would be that uv lock would match what I get with pip install --extra-index-url https://download.pytorch.org/whl/cu124 torch torchvision, which would be (at the time of writing) to install torch 2.4.1 and torchvision 0.19.1, both in their linux_x86_64 / cu124 / cp310 variants. But instead I get the following error:

[…]

If I use uv pip install --extra-index-url https://download.pytorch.org/whl/cu124 torch torchvision, I get torch==2.4.1+cu124 and torchvision==0.2.0 (i.e., seemingly not pulling torchvision from the cu124 index). Any thoughts on where I might be going wrong? Thanks!