````diff
 # Other supported versions: cpu, cuda121, cuda126, cuda128
 # Nvidia 5090: please use cuda128 & torch==2.7
-uv pip install -e '.[cuda128]'
+uv pip install -e '.[all,cuda128]'
+```
+
+> **Warning:** Hardware groups (`cpu`, `cuda121`, `cuda124`, `cuda126`, `cuda128`, `rocm`, `mamba`) are mutually exclusive. You must choose exactly one. Do NOT combine multiple CUDA versions.
````
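To make the rule concrete, here is a quick sketch of valid and invalid extras combinations (the group names follow the warning above; the final sanity check only assumes a standard torch install):

```bash
# OK: one feature group plus exactly one hardware group
uv pip install -e '.[all,cuda128]'

# OK: CPU-only install for machines without a GPU
uv pip install -e '.[base,cpu]'

# NOT OK: two hardware groups pin conflicting torch builds
# uv pip install -e '.[cuda126,cuda128]'

# Sanity check after installing: the printed CUDA version should match
# the group you chose (prints "None" for the cpu group)
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```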
### docs/faq/models_troubleshooting.md (+1 −1)
````diff
@@ -8,7 +8,7 @@
 ```bash
 # Install without the [mamba] extra
-uv pip install -e '.[base,mcp,dev,notebook]'
+uv pip install -e '.[base]'
 ```
 
 Then load a Mamba model through `transformers` (CPU only) as usual. Note that this fallback path is significantly slower than the optimized `mamba-ssm` kernels (which require Linux + CUDA).
````
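As a quick way to confirm the fallback path works, a minimal sketch (the checkpoint name is only an example — substitute whatever Mamba model you actually use):

```bash
# Runs entirely on CPU through transformers' built-in Mamba support;
# no mamba-ssm kernels are involved.
python - <<'EOF'
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "state-spaces/mamba-130m-hf"  # example checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("Mamba is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0]))
EOF
```

Expect noticeably slower generation than with the `mamba-ssm` kernels; that is the documented trade-off of this path.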
````diff
 | **test** | Testing environment only | pytest and plugins |
+| **notebook** | Jupyter and Marimo support | Jupyter Lab, Marimo |
+| **docs** | Documentation building | mkdocs-material, mkdocstrings, mkdocs-jupyter |
+| **ui** | Gradio web interface | Gradio |
+| **mcp** | MCP server support | Included in core dependencies (no extra install needed) |
 
-> **Note:** Core ML libraries (torch, transformers, datasets, peft, accelerate) are installed automatically as main dependencies. The groups above add additional functionality.
+> **Note:** `mcp` is an empty extra because MCP dependencies (`mcp`, `starlette`, `uvicorn`, `websockets`) are already part of the core dependencies. You can still use `.[mcp]` for clarity, but it won't install additional packages.
 
-### Hardware-Specific Groups
+### Hardware-Specific Groups (Mutually Exclusive)
 
-| Dependency Group | PyTorch Version | GPU Type | When to Use |
+> **Note:** Hardware groups are NOT included in `all` because they conflict with each other. Always combine a hardware group with your chosen feature group, e.g. `.[all,cuda124]`.
 
 ## Installation Scenarios
 
@@ -147,8 +153,8 @@ For development and testing without GPU acceleration:
````
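On the `mcp` point above: because the extra is empty, these two commands resolve to the same dependency set (a sketch; the single quotes keep the shell from expanding the brackets):

```bash
# Equivalent installs — MCP dependencies already ship with core
uv pip install -e '.[base]'
uv pip install -e '.[base,mcp]'
```

Keeping the empty extra around means existing scripts that request `.[mcp]` keep working unchanged.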