Commit 99660a8

Upgrade PyTorch documentation to latest versions (#16970)
## Summary

Point to PyTorch 2.9, Python 3.14, CUDA 12.8, etc.
1 parent 20ab80a commit 99660a8

1 file changed

Lines changed: 28 additions & 34 deletions

File tree

docs/guides/integration/pytorch.md

@@ -34,28 +34,22 @@ As such, the necessary packaging configuration will vary depending on both the p
 support and the accelerators you want to enable.
 
 To start, consider the following (default) configuration, which would be generated by running
-`uv init --python 3.12` followed by `uv add torch torchvision`.
+`uv init --python 3.14` followed by `uv add torch torchvision`.
 
 In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
-macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.6):
+macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.8, as of PyTorch 2.9.1):
 
 ```toml
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12"
+requires-python = ">=3.14"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 ```
 
-!!! tip "Supported Python versions"
-
-    At time of writing, PyTorch does not yet publish wheels for Python 3.14; as such projects with
-    `requires-python = ">=3.14"` may fail to resolve. See the
-    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).
-
 This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
 CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
 you'll need to configure the project accordingly.
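As an aside on the `requires-python` bump in the hunk above: raising the lower bound to 3.14 excludes older interpreters at resolution time. A simplified illustration of the version check (`satisfies_min` is a hypothetical helper, not a uv API; real resolvers apply full PEP 440 semantics, typically via the `packaging` library):

```python
def satisfies_min(version: str, minimum: str) -> bool:
    """Return True if a dotted numeric version meets a minimum bound.

    Simplified sketch: compares versions component-wise as integer tuples.
    """
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)


# requires-python = ">=3.14" admits Python 3.14 but excludes 3.12:
print(satisfies_min("3.14.0", "3.14"))  # True
print(satisfies_min("3.12.0", "3.14"))  # False
```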
@@ -117,7 +111,7 @@ In such cases, the first step is to add the relevant PyTorch index to your `pypr
 ```toml
 [[tool.uv.index]]
 name = "pytorch-rocm"
-url = "https://download.pytorch.org/whl/rocm6.3"
+url = "https://download.pytorch.org/whl/rocm6.4"
 explicit = true
 ```
 
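For context on the hunk above: in uv, an index marked `explicit = true` is only consulted for packages pinned to it via `tool.uv.sources`. A minimal sketch of such a pin (the full configuration appears in later hunks of this file):

```toml
[tool.uv.sources]
torch = [
  { index = "pytorch-rocm", marker = "sys_platform == 'linux'" },
]
```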
@@ -254,10 +248,10 @@ As a complete example, the following project would use PyTorch's CPU-only builds
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 
 [tool.uv.sources]
@@ -287,10 +281,10 @@ and CPU-only builds on all other platforms (e.g., macOS and Windows):
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 
 [tool.uv.sources]
@@ -321,11 +315,11 @@ builds on Windows and macOS (by way of falling back to PyPI):
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
-    "pytorch-triton-rocm>=3.3.0 ; sys_platform == 'linux'",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
+    "pytorch-triton-rocm>=3.5.1 ; sys_platform == 'linux'",
 ]
 
 [tool.uv.sources]
@@ -341,7 +335,7 @@ pytorch-triton-rocm = [
 
 [[tool.uv.index]]
 name = "pytorch-rocm"
-url = "https://download.pytorch.org/whl/rocm6.3"
+url = "https://download.pytorch.org/whl/rocm6.4"
 explicit = true
 ```
 
@@ -351,11 +345,11 @@ Or, for Intel GPU builds:
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
-    "pytorch-triton-xpu>=3.3.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
+    "pytorch-triton-xpu>=3.5.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
 ]
 
 [tool.uv.sources]
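A note on the environment markers in the hunk above: the `sys_platform` clause restricts `pytorch-triton-xpu` to Windows and Linux, where XPU wheels exist. An illustrative sketch of how that marker evaluates (`needs_triton_xpu` is a hypothetical helper; real installers evaluate full PEP 508 markers):

```python
def needs_triton_xpu(platform: str) -> bool:
    # Mirrors the marker "sys_platform == 'win32' or sys_platform == 'linux'"
    # from the dependency above; illustrative only.
    return platform in ("win32", "linux")


print(needs_triton_xpu("linux"))   # True
print(needs_triton_xpu("darwin"))  # False: dependency skipped on macOS
```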
@@ -389,17 +383,17 @@ extra. For example, the following configuration would use PyTorch's CPU-only for
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = []
 
 [project.optional-dependencies]
 cpu = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 cu128 = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 
 [tool.uv]
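The `cpu` and `cu128` extras above are mutually exclusive ways of installing the same packages, so they are typically declared as conflicting. A plausible sketch of the `[tool.uv]` table that the truncated hunk leads into (assumed here, since the diff cuts off at the table header; uv's `conflicts` setting takes a list of mutually exclusive extra groups):

```toml
[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu128" },
  ],
]
```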
@@ -473,15 +467,15 @@ then use the most-compatible PyTorch index for all relevant packages (e.g., `tor
 etc.). If no such GPU is found, uv will fall back to the CPU-only index. uv will continue to respect
 existing index configuration for any packages outside the PyTorch ecosystem.
 
-You can also select a specific backend (e.g., CUDA 12.6) with `--torch-backend=cu126` (or
+You can also select a specific backend (e.g., CUDA 12.8) with `--torch-backend=cu128` (or
 `UV_TORCH_BACKEND=cu126`):
 
 ```shell
 $ # With a command-line argument.
 $ uv pip install torch torchvision --torch-backend=cu126
 
 $ # With an environment variable.
-$ UV_TORCH_BACKEND=cu126 uv pip install torch torchvision
+$ UV_TORCH_BACKEND=cu128 uv pip install torch torchvision
 ```
 
 At present, `--torch-backend` is only available in the `uv pip` interface.
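For orientation, backend names such as `cpu`, `cu128`, and `rocm6.4` correspond to subdirectories of PyTorch's wheel index at `download.pytorch.org`. A hypothetical sketch of that mapping (`torch_index_url` is not a uv API, and uv's real `--torch-backend=auto` logic additionally probes the installed GPU driver):

```python
def torch_index_url(backend: str) -> str:
    # Backend names select a subdirectory of PyTorch's wheel index,
    # e.g. "cpu" -> .../whl/cpu, "cu128" -> .../whl/cu128.
    # Illustrative mapping only; not uv's actual implementation.
    return f"https://download.pytorch.org/whl/{backend}"


print(torch_index_url("cu128"))  # https://download.pytorch.org/whl/cu128
```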
