Merged
62 changes: 28 additions & 34 deletions docs/guides/integration/pytorch.md
As such, the necessary packaging configuration will vary depending on both the platforms you intend to
support and the accelerators you want to enable.

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.14` followed by `uv add torch torchvision`.

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.8, as of PyTorch 2.9.1):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14"
dependencies = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
]
```

This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.
In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:
```toml
[[tool.uv.index]]
name = "pytorch-rocm"
url = "https://download.pytorch.org/whl/rocm6.4"
explicit = true
```
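Once the index is declared, a corresponding `tool.uv.sources` entry tells uv which packages to pull from it. A minimal sketch, assuming the `pytorch-rocm` index name above and limiting it to Linux:

```toml
[tool.uv.sources]
torch = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
```

Because the index is declared with `explicit = true`, uv only consults it for packages pinned to it this way; all other packages continue to resolve from the default index.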

As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14.0"
dependencies = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu" },
]
torchvision = [
  { index = "pytorch-cpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
Similarly, the following project would use PyTorch's CUDA-enabled builds on Linux,
and CPU-only builds on all other platforms (e.g., macOS and Windows):
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14.0"
dependencies = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```
Or, to use PyTorch's ROCm-enabled builds on Linux, and CPU-only
builds on Windows and macOS (by way of falling back to PyPI):
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14.0"
dependencies = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
  "pytorch-triton-rocm>=3.5.1 ; sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
pytorch-triton-rocm = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-rocm"
url = "https://download.pytorch.org/whl/rocm6.4"
explicit = true
```
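The `sys_platform == 'linux'` suffix above is a PEP 508 environment marker: the dependency only applies on platforms where the marker evaluates to true. The following sketch is illustrative only; it handles a single `sys_platform ==` clause, not the full marker grammar that uv implements:

```python
import sys

def marker_matches(marker: str) -> bool:
    """Evaluate a single `sys_platform == '<value>'` marker clause.

    Deliberately minimal: real PEP 508 markers (as used by uv) also
    support `and`/`or`, other variables, and more operators.
    """
    lhs, op, rhs = marker.split(None, 2)
    if lhs != "sys_platform" or op != "==":
        raise ValueError("only `sys_platform ==` clauses are supported here")
    # Strip the surrounding quotes and compare against the running interpreter.
    return sys.platform == rhs.strip("'\"")
```

For example, `marker_matches("sys_platform == 'linux'")` is true on Linux but false on macOS (`darwin`) and Windows (`win32`), which is why the ROCm-only Triton dependency above is skipped on those platforms.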

Or, for Intel GPU builds:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14.0"
dependencies = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
  "pytorch-triton-xpu>=3.5.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
]

[tool.uv.sources]
torch = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
torchvision = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]
pytorch-triton-xpu = [
  { index = "pytorch-xpu", marker = "sys_platform == 'win32' or sys_platform == 'linux'" },
]

[[tool.uv.index]]
name = "pytorch-xpu"
url = "https://download.pytorch.org/whl/xpu"
explicit = true
```
You can also gate each PyTorch variant behind an optional dependency, selecting the build via an
extra. For example, the following configuration would use PyTorch's CPU-only builds for the `cpu`
extra, and CUDA-enabled builds for the `cu128` extra:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14.0"
dependencies = []

[project.optional-dependencies]
cpu = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
]
cu128 = [
  "torch>=2.9.1",
  "torchvision>=0.24.1",
]

[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu128" },
  ],
]
```
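With the two extras declared as conflicting under `[tool.uv]` (so uv will not attempt to install both into one environment), a variant is chosen at sync time. A usage sketch, assuming the `cpu` and `cu128` extra names above:

```shell
$ # Install the CPU-only variant.
$ uv sync --extra cpu

$ # Or, install the CUDA 12.8 variant instead.
$ uv sync --extra cu128
```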
When automatic backend selection is enabled (via `--torch-backend=auto` or `UV_TORCH_BACKEND=auto`),
uv will query the system for a supported GPU and
then use the most-compatible PyTorch index for all relevant packages (e.g., `torch`, `torchvision`,
etc.). If no such GPU is found, uv will fall back to the CPU-only index. uv will continue to respect
existing index configuration for any packages outside the PyTorch ecosystem.

You can also select a specific backend (e.g., CUDA 12.8) with `--torch-backend=cu128` (or
`UV_TORCH_BACKEND=cu128`):

```shell
$ # With a command-line argument.
$ uv pip install torch torchvision --torch-backend=cu128

$ # With an environment variable.
$ UV_TORCH_BACKEND=cu128 uv pip install torch torchvision
```

At present, `--torch-backend` is only available in the `uv pip` interface.
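
Regardless of how the backend was selected, a quick sanity check is to ask the installed wheel which accelerator it was built against (a sketch; the exact output depends on your machine and the chosen backend):

```shell
$ # Prints the PyTorch version and the CUDA version the wheel targets
$ # (`None` for CPU-only builds).
$ python -c "import torch; print(torch.__version__, torch.version.cuda)"
```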