# Using uv with PyTorch

The [PyTorch](https://pytorch.org/) ecosystem is a popular choice for deep learning research and
development. You can use uv to manage PyTorch projects and PyTorch dependencies across different
Python versions and environments, even controlling for the choice of accelerator (e.g., CPU-only vs.
CUDA).

## Installing PyTorch

From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI).
  As such, installing PyTorch often requires configuring a project to use the PyTorch index.
- PyTorch includes distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no
  standardized mechanism for specifying these accelerators when publishing or installing, PyTorch
  encodes them in the local version specifier. As such, PyTorch versions will often look like
  `2.5.1+cpu`, `2.5.1+cu121`, etc.
- Builds for different accelerators are published to different indexes. For example, the `+cpu`
  builds are published on https://download.pytorch.org/whl/cpu, while the `+cu121` builds are
  published on https://download.pytorch.org/whl/cu121 (see the sketch after this list).
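
As an illustration of both points, a specific accelerator build can be installed directly from its
dedicated index by pinning the local version. This is only a sketch (using the version that appears
in the examples below); the project-level configuration described in the rest of this guide is the
recommended approach:

```shell
$ uv pip install "torch==2.5.1+cpu" --index-url https://download.pytorch.org/whl/cpu
```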

As such, the necessary packaging configuration will vary depending on both the platforms you need to
support and the accelerators you want to enable.

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.12` followed by `uv add torch torchvision`.
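
Concretely, those commands are (a minimal sketch, run from wherever you want the project to live):

```shell
$ uv init --python 3.12
$ uv add torch torchvision
```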

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.4):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
  "torch>=2.5.1",
  "torchvision>=0.20.1",
]
```

!!! tip "Supported Python versions"

    At time of writing, PyTorch does not yet publish wheels for Python 3.13; as such, projects with
    `requires-python = ">=3.13"` may fail to resolve. See the
    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).

This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.

## Using a PyTorch index

In some cases, you may want to use a specific PyTorch variant across all platforms. For example, you
may want to use the CPU-only builds on Linux too.

In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:

=== "CPU-only"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true
    ```

=== "CUDA 11.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu118"
    url = "https://download.pytorch.org/whl/cu118"
    explicit = true
    ```

=== "CUDA 12.1"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu121"
    url = "https://download.pytorch.org/whl/cu121"
    explicit = true
    ```

=== "CUDA 12.4"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu124"
    url = "https://download.pytorch.org/whl/cu124"
    explicit = true
    ```

=== "ROCm6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-rocm"
    url = "https://download.pytorch.org/whl/rocm6.2"
    explicit = true
    ```

We recommend the use of `explicit = true` to ensure that the index is _only_ used for `torch`,
`torchvision`, and other PyTorch-related packages, as opposed to generic dependencies like `jinja2`,
which should continue to be sourced from the default index (PyPI).

Next, we'll update the `pyproject.toml` to point `torch` and `torchvision` to the desired index:

=== "CPU-only"

    PyTorch doesn't publish CPU-only builds for macOS, since macOS builds are always considered CPU-only.
    As such, we gate on `platform_system` to instruct uv to ignore the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
      { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "CUDA 11.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct uv to ignore
    the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu118", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
      { index = "pytorch-cu118", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "CUDA 12.1"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct uv to ignore
    the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu121", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
      { index = "pytorch-cu121", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "CUDA 12.4"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct uv to ignore
    the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-cu124", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
      { index = "pytorch-cu124", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "ROCm6"

    PyTorch doesn't publish ROCm6 builds for macOS or Windows. As such, we gate on `platform_system` to instruct uv to
    ignore the PyTorch index when resolving for those platforms.

    ```toml
    [tool.uv.sources]
    torch = [
      { index = "pytorch-rocm", marker = "platform_system == 'Linux'" },
    ]
    torchvision = [
      { index = "pytorch-rocm", marker = "platform_system == 'Linux'" },
    ]
    ```

As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.5.1",
  "torchvision>=0.20.1",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
]
torchvision = [
  { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
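
To sanity-check the result, you can lock, sync, and print the installed version; with the
configuration above, the reported version on Linux and Windows should carry a `+cpu` local suffix
(e.g., `2.5.1+cpu`), while macOS reports the plain PyPI build. This is just a quick verification
sketch:

```shell
$ uv lock
$ uv sync
$ uv run python -c "import torch; print(torch.__version__)"
```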

## Configuring accelerators with environment markers

In some cases, you may want to use CPU-only builds in one environment (e.g., macOS and Windows), and
CUDA-enabled builds in another (e.g., Linux).

With `tool.uv.sources`, you can use environment markers to specify the desired index for each
platform. For example, the following configuration would use PyTorch's CPU-only builds on Windows
(and macOS, by way of falling back to PyPI), and CUDA-enabled builds on Linux:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
  "torch>=2.5.1",
  "torchvision>=0.20.1",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
  { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
]
torchvision = [
  { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
  { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
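
Because uv resolves a single, platform-independent lockfile, you can lock once and then sync on each
machine; the markers above select the CPU-only build on Windows and the CUDA-enabled build on Linux
at install time. A sketch of the expected workflow:

```shell
# Resolve once; the lockfile covers every supported platform.
$ uv lock

# On each machine, sync installs the build selected by the markers above
# (CPU-only on Windows, CUDA 12.4 on Linux).
$ uv sync
```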

## Configuring accelerators with optional dependencies

In some cases, you may want to use CPU-only builds in some configurations, but CUDA-enabled builds in
others, with the choice toggled by a user-provided extra (e.g., `uv sync --extra cpu` vs.
`uv sync --extra cu124`).

With `tool.uv.sources`, you can use extra markers to specify the desired index for each enabled
extra. For example, the following configuration would use PyTorch's CPU-only builds for
`uv sync --extra cpu` and CUDA-enabled builds for `uv sync --extra cu124`:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []

[project.optional-dependencies]
cpu = [
  "torch>=2.5.1",
  "torchvision>=0.20.1",
]
cu124 = [
  "torch>=2.5.1",
  "torchvision>=0.20.1",
]

[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu124" },
  ],
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
  { index = "pytorch-cu124", extra = "cu124" },
]
torchvision = [
  { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
  { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```

!!! note

    Since GPU-accelerated builds aren't available on macOS, the above configuration will continue to use
    the CPU-only builds on macOS via the `"platform_system != 'Darwin'"` marker, regardless of the extra
    provided.
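
To switch between the two variants, pass the corresponding extra when syncing; the `conflicts`
declaration above ensures the two extras can't be enabled simultaneously:

```shell
# Install the CPU-only variant.
$ uv sync --extra cpu

# Or install the CUDA 12.4 variant instead.
$ uv sync --extra cu124
```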

## The `uv pip` interface

While the above examples are focused on uv's project interface (`uv lock`, `uv sync`, `uv run`,
etc.), PyTorch can also be installed via the `uv pip` interface.

PyTorch itself offers a [dedicated interface](https://pytorch.org/get-started/locally/) to determine
the appropriate pip command to run for a given target configuration. For example, you can install
stable, CPU-only PyTorch on Linux with:

```shell
$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

To use the same workflow with uv, replace `pip3` with `uv pip`:

```shell
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```
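
Note that `uv pip` installs into a virtual environment; if you're not working inside a project or an
activated environment, you may need to create one first (a minimal sketch):

```shell
$ uv venv
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```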