
Commit 3353751

chore: bump minimum PyTorch version from 2.3 to 2.4
- Update dependency spec in pyproject.toml
- Update docs (README, installation, quickstart)
- Remove _IS_TORCH_GTE_24 compat shim in _ops.py (register_fake/register_kernel are always available in torch 2.4+)
- Remove torch < 2.4 skipif guards in tests
- Remove NumPy < 2 downgrade workaround for torch 2.3 on Windows
- Update CI test matrices: 2.3.1 → 2.4.1

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 3031919 commit 3353751

11 files changed: 15 additions & 36 deletions


.github/workflows/test-runner.yml (0 additions & 6 deletions)

```diff
@@ -221,12 +221,6 @@ jobs:
           pip install pytest-cov
         shell: bash

-      # Windows: Downgrade NumPy for torch<2.4.1 compatibility
-      # See: https://github.com/pytorch/pytorch/issues/131668
-      - name: Downgrade NumPy
-        if: inputs.platform == 'windows' && startsWith(inputs.torch_version, '2.3.')
-        run: pip install "numpy<2"
-
       - name: Show installed packages
         run: pip list
```

.github/workflows/tests-nightly.yml (4 additions & 4 deletions)

```diff
@@ -20,12 +20,12 @@ jobs:
         platform: [linux-x64, linux-aarch64, macos, windows]
         # default runners don't have AVX-512 support, but icelake does
         cpu_type: ["", icelake]
-        torch_version: ["2.3.1", "2.10.0", "2.11.0"]
+        torch_version: ["2.4.1", "2.10.0", "2.11.0"]

         exclude:
           # aarch64 minimum torch version is 2.5.1
           - platform: linux-aarch64
-            torch_version: "2.3.1"
+            torch_version: "2.4.1"
           # icelake only applies to linux-x64
           - platform: linux-aarch64
             cpu_type: icelake
@@ -62,7 +62,7 @@ jobs:
         include:
           # Map CUDA version to torch version and PyPI index
           - cuda_version: "11.8.0"
-            torch_version: "2.3.1"
+            torch_version: "2.4.1"
             pypi_index: "https://download.pytorch.org/whl/cu118"
           - cuda_version: "12.6.3"
             torch_version: "2.8.0"
@@ -82,7 +82,7 @@ jobs:
           - platform: windows
             gpu_type: T4
             cuda_version: "11.8.0"
-            torch_version: "2.3.1"
+            torch_version: "2.4.1"
             pypi_index: "https://download.pytorch.org/whl/cu118"
           - platform: windows
             gpu_type: T4
```

.github/workflows/tests-pr.yml (4 additions & 4 deletions)

```diff
@@ -31,20 +31,20 @@ jobs:
         platform: [linux-x64, linux-aarch64, macos]
         # default runners don't have AVX-512 support, but icelake does
         cpu_type: ["", icelake]
-        torch_version: ["2.3.1", "2.11.0"]
+        torch_version: ["2.4.1", "2.11.0"]

         exclude:
           # aarch64 minimum torch version is 2.5.1
           - platform: linux-aarch64
-            torch_version: "2.3.1"
+            torch_version: "2.4.1"
           # icelake only applies to linux-x64
           - platform: linux-aarch64
             cpu_type: icelake
           - platform: macos
             cpu_type: icelake

         include:
-          # Add aarch64 with torch 2.5.1 instead of 2.3.1
+          # Add aarch64 with torch 2.5.1 instead of 2.4.1
           - platform: linux-aarch64
             cpu_type: ""
             torch_version: "2.5.1"
@@ -70,7 +70,7 @@ jobs:
         include:
           # Map CUDA version to torch version and PyPI index
           - cuda_version: "11.8.0"
-            torch_version: "2.3.1"
+            torch_version: "2.4.1"
             pypi_index: "https://download.pytorch.org/whl/cu118"
           - cuda_version: "12.8.1"
             torch_version: "2.9.1"
```

README.md (1 addition & 1 deletion)

```diff
@@ -20,7 +20,7 @@ The library includes quantization primitives for 8-bit & 4-bit operations, throu
 bitsandbytes has the following minimum requirements for all platforms:

 * Python 3.10+
-* [PyTorch](https://pytorch.org/get-started/locally/) 2.3+
+* [PyTorch](https://pytorch.org/get-started/locally/) 2.4+
 * _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._

 #### Accelerator support:
```

bitsandbytes/_ops.py (2 additions & 10 deletions)

```diff
@@ -4,16 +4,8 @@

 import torch

-_IS_TORCH_GTE_24 = False
-
-if hasattr(torch.library, "register_fake"):
-    _IS_TORCH_GTE_24 = True
-    register_fake = torch.library.register_fake
-    register_kernel = torch.library.register_kernel
-else:
-    # PyTorch <= 2.3
-    register_fake = torch.library.impl_abstract
-    register_kernel = torch.library.impl
+register_fake = torch.library.register_fake
+register_kernel = torch.library.register_kernel

 # Int8 mixed precision matmul + dequant + bias
 torch.library.define(
```
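The shim removed above chose between API names by feature detection: the torch >= 2.4 names (`register_fake`/`register_kernel`) when present, otherwise the pre-2.4 names (`impl_abstract`/`impl`). A minimal sketch of that pattern follows; the `SimpleNamespace` stand-ins are hypothetical substitutes for `torch.library` so the example runs without PyTorch installed:

```python
# Sketch of the feature-detection shim this commit removes. The real code
# probed torch.library; SimpleNamespace objects stand in for old and new
# torch.library so the pattern runs without PyTorch.
from types import SimpleNamespace


def resolve_registrars(library):
    """Return (register_fake, register_kernel) for either torch.library API."""
    if hasattr(library, "register_fake"):  # torch >= 2.4 names
        return library.register_fake, library.register_kernel
    # torch <= 2.3 fallback names
    return library.impl_abstract, library.impl


new_style = SimpleNamespace(register_fake="register_fake", register_kernel="register_kernel")
old_style = SimpleNamespace(impl_abstract="impl_abstract", impl="impl")

print(resolve_registrars(new_style))  # ('register_fake', 'register_kernel')
print(resolve_registrars(old_style))  # ('impl_abstract', 'impl')
```

With torch 2.4 as the floor, the `hasattr` branch is always taken, so the shim collapses to the two direct aliases shown in the diff.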

docs/source/installation.mdx (1 addition & 1 deletion)

```diff
@@ -27,7 +27,7 @@ We provide official support for NVIDIA GPUs, CPUs, Intel XPUs, and Intel Gaudi.
 These are the minimum requirements for `bitsandbytes` across all platforms. Please be aware that some compute platforms may impose more strict requirements.

 * Python >= 3.10
-* PyTorch >= 2.3
+* PyTorch >= 2.4

 ## NVIDIA CUDA[[cuda]]
```

docs/source/quickstart.mdx (1 addition & 1 deletion)

````diff
@@ -8,7 +8,7 @@ Welcome to bitsandbytes! This library enables accessible large language models v
 pip install bitsandbytes
 ```

-**Requirements:** Python 3.10+, PyTorch 2.3+
+**Requirements:** Python 3.10+, PyTorch 2.4+

 For detailed installation instructions, see the [Installation Guide](./installation).
````

pyproject.toml (1 addition & 1 deletion)

```diff
@@ -43,7 +43,7 @@ classifiers = [
     "Topic :: Scientific/Engineering :: Artificial Intelligence"
 ]
 dependencies = [
-    "torch>=2.3,<3",
+    "torch>=2.4,<3",
     "numpy>=1.17",
     "packaging>=20.9",
 ]
```
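The new requirement string `torch>=2.4,<3` is a standard PEP 440 version specifier, and it can be evaluated with the `packaging` library that pyproject.toml already declares as a dependency. A small sketch of how the bound behaves:

```python
# Check which torch versions satisfy the new dependency spec using the
# `packaging` library (already listed in pyproject.toml dependencies).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=2.4,<3")

print(Version("2.3.1") in spec)  # False: below the new minimum
print(Version("2.4.1") in spec)  # True: satisfies both bounds
print(Version("3.0.0") in spec)  # False: excluded by the upper bound
```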

tests/test_linear4bit.py (0 additions & 1 deletion)

```diff
@@ -355,7 +355,6 @@ def test_params4bit_real_serialization(device, quant_type, blocksize, compress_s
 @pytest.mark.parametrize("bias", TRUE_FALSE, ids=id_formatter("bias"))
 @pytest.mark.parametrize("fullgraph", TRUE_FALSE, ids=id_formatter("fullgraph"))
 @pytest.mark.parametrize("mode", ["default", "reduce-overhead"], ids=id_formatter("mode"))
-@pytest.mark.skipif(torch.__version__ < (2, 4), reason="Not supported in torch < 2.4")
 @pytest.mark.skipif(
     torch.__version__ < (2, 10) and sys.version_info >= (3, 14), reason="Not supported in Python 3.14 until torch 2.10"
 )
```

tests/test_linear8bitlt.py (0 additions & 1 deletion)

```diff
@@ -253,7 +253,6 @@ def test_linear8bit_load_state_dict_raises_runtime_for_tied_weight():
 @pytest.mark.parametrize("bias", TRUE_FALSE, ids=id_formatter("bias"))
 @pytest.mark.parametrize("fullgraph", TRUE_FALSE, ids=id_formatter("fullgraph"))
 @pytest.mark.parametrize("mode", ["default", "reduce-overhead"], ids=id_formatter("mode"))
-@pytest.mark.skipif(torch.__version__ < (2, 4), reason="Not supported in torch < 2.4")
 @pytest.mark.skipif(
     torch.__version__ < (2, 10) and sys.version_info >= (3, 14), reason="Not supported in Python 3.14 until torch 2.10"
 )
```
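The deleted guards compare `torch.__version__` directly against a tuple; that works because PyTorch exposes its version as a `TorchVersion` object that supports tuple comparison (a plain version string would not). A stdlib-only sketch of the equivalent comparison, where `version_tuple` is an illustrative helper rather than anything from torch or the repo:

```python
# The removed skipif compared torch.__version__ to a tuple, relying on
# torch's TorchVersion type. This hypothetical helper shows the same
# comparison using only the stdlib: parse the release segment to ints.
def version_tuple(version: str) -> tuple:
    """Parse '2.3.1' -> (2, 3, 1); drops any local part like '+cu118'."""
    release = version.split("+")[0]
    return tuple(int(part) for part in release.split(".") if part.isdigit())


print(version_tuple("2.3.1") < (2, 4))        # True: would have been skipped
print(version_tuple("2.4.1+cu118") < (2, 4))  # False: allowed under the new floor
```

Since the new minimum is torch 2.4, the `< (2, 4)` condition can never be true in a supported environment, which is why the guard was removed outright.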
