# [Qwen3.6] Stack per-expert MoE tensors during mlx_lm sanitize (#312)
Tracking issue: #289
## Summary
`Qwen/Qwen3.6-35B-A3B-FP8` ships expert MLPs as one tensor per expert
per projection:
```
model.language_model.layers.{L}.mlp.experts.{E}.{gate,up,down}_proj.weight
```
The bf16 master `Qwen/Qwen3.6-35B-A3B` is already pre-stacked
(`experts.gate_up_proj` / `experts.down_proj`) and loads unchanged via
the existing combined-format branch in
`mlx_lm.qwen3_5_moe.Model.sanitize`. Only the FP8 release lands
per-expert, likely because Qwen's FP8 quantization pipeline runs
per-expert and the artifact is never re-stacked.
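For contrast, here is a sketch of the combined layout the existing
branch consumes; the key names come from the description above, while
the shape comments are assumptions based on the stacking described
below (E experts, intermediate size I, hidden size H):
```
model.language_model.layers.{L}.mlp.experts.gate_up_proj  # assumed shape (E, 2*I, H)
model.language_model.layers.{L}.mlp.experts.down_proj     # assumed shape (E, H, I)
```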
On vllm-metal `main`, loading `Qwen/Qwen3.6-35B-A3B-FP8` fails strict
`load_weights` with `Received 30720 parameters not in model`; these are
the per-expert MoE tensors that vllm-metal's existing FP8 dequant compat
doesn't address. (For reference: the same checkpoint on vanilla mlx-lm
fails with 61,690 keys; the difference is the 30,970 FP8
`weight_scale_inv` tensors that vllm-metal's
`compat.py::_dequantize_qwen35_fp8_weights` already handles separately.)
## What this PR does
Add `_stack_qwen36_moe_per_expert_weights`, chained after FP8 dequant in
the MoE sanitize wrapper:
1. **Scan** the weights dict for per-layer experts prefixes and their
expert-index sets.
2. **Validate** each prefix's index set is a contiguous `{0, 1, …, N-1}`
(raises `ValueError` otherwise).
3. **Walk** the per-expert tensors in index order, `mx.stack` each
projection along axis 0, `mx.concatenate` gate+up along the
intermediate-dim axis, and emit the combined `experts.gate_up_proj` /
`experts.down_proj` form that upstream sanitize already handles.
Pre-stacked checkpoints are unaffected (the helper short-circuits when
no per-expert keys are present); a minimal sketch of the pass follows.
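A minimal sketch of the stacking pass, assuming the key pattern above
and `mlx` arrays; the helper name matches this PR, but the body here is
illustrative, not the exact `compat.py` implementation:

```python
import re

import mlx.core as mx

# Per-expert key pattern from the Summary above.
_PER_EXPERT_RE = re.compile(
    r"^(?P<prefix>.+\.mlp\.experts)\.(?P<idx>\d+)\.(?P<proj>gate|up|down)_proj\.weight$"
)


def stack_per_expert_weights(weights: dict) -> dict:
    out = {}
    grouped: dict = {}
    # 1. Scan: bucket per-expert tensors by layer prefix and projection;
    #    everything else (pre-stacked or non-MoE keys) passes through.
    for key, tensor in weights.items():
        m = _PER_EXPERT_RE.match(key)
        if m is None:
            out[key] = tensor
            continue
        grouped.setdefault(m["prefix"], {}).setdefault(m["proj"], {})[int(m["idx"])] = tensor

    for prefix, projs in grouped.items():
        # 2. Validate: each projection's index set must be {0, 1, ..., N-1}.
        for proj, by_idx in projs.items():
            if set(by_idx) != set(range(len(by_idx))):
                raise ValueError(f"non-contiguous expert indices under {prefix}.{proj}")
        # 3. Walk in index order, stacking each projection along a new axis 0.
        gate, up, down = (
            mx.stack([projs[p][i] for i in range(len(projs[p]))])
            for p in ("gate", "up", "down")
        )
        # Fuse gate+up along the intermediate (output-row) axis and emit the
        # combined form that upstream sanitize already handles.
        out[f"{prefix}.gate_up_proj"] = mx.concatenate([gate, up], axis=1)
        out[f"{prefix}.down_proj"] = down
    return out
```

When `grouped` stays empty (no per-expert keys), the input dict is
returned unchanged, which is the short-circuit behavior described above.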
The MoE-only nature of the stacking is made explicit by splitting the
sanitize patch by model class:
- `mlx_lm.models.qwen3_5.Model` → wrapped with **FP8 dequant only**
(`_transform_dense`)
- `mlx_lm.models.qwen3_5_moe.Model` → wrapped with **FP8 dequant +
per-expert stacking** (`_transform_moe`)
Routing is driven by an explicit `transforms_by_module` map; future Qwen
variants added without a corresponding entry get logged as `unpatchable`
rather than silently inheriting one of the two transforms.
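A sketch of that routing, reusing `stack_per_expert_weights` from the
sketch above; `_dequantize_fp8` is a hypothetical stand-in for the
existing dequant shim, and `sanitize_transform_for` is an assumed entry
point rather than the real wiring that wraps each class's sanitize:

```python
import logging

logger = logging.getLogger(__name__)


def _dequantize_fp8(weights: dict) -> dict:
    # Stand-in stub for the existing FP8 weight_scale_inv dequant shim.
    return weights


def _transform_dense(weights: dict) -> dict:
    return _dequantize_fp8(weights)  # FP8 dequant only


def _transform_moe(weights: dict) -> dict:
    # FP8 dequant first, then per-expert stacking, matching the chain order.
    return stack_per_expert_weights(_dequantize_fp8(weights))


transforms_by_module = {
    "mlx_lm.models.qwen3_5": _transform_dense,
    "mlx_lm.models.qwen3_5_moe": _transform_moe,
}


def sanitize_transform_for(module_name: str, weights: dict) -> dict:
    transform = transforms_by_module.get(module_name)
    if transform is None:
        # Future Qwen variants without an entry are logged, never guessed.
        logger.warning("unpatchable: no sanitize transform for %s", module_name)
        return weights
    return transform(weights)
```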
## Why a vllm-metal compat shim instead of waiting for upstream
mlx-community publishes pre-stacked redistributions of this checkpoint
that already load on existing mlx-lm. This shim lets users load
Qwen-org's canonical FP8 artifact directly without a 35GB→70GB bf16
intermediate conversion step that doesn't fit on memory-constrained Macs
(≤64 GB).
This complements ml-explore/mlx-lm#1224, which adds the same per-expert
stacking logic plus FP8 `weight_scale_inv` dequant for the qwen3_5
family inline in upstream sanitize. Once mlx-lm#1224 lands and
vllm-metal's mlx-lm pin bumps past a release containing it, both this
PR's per-expert stacking shim and the existing
`_dequantize_qwen35_fp8_weights` shim in `compat.py` become removable in
a follow-up cleanup.
## Files
- `vllm_metal/compat.py` — add `_stack_qwen36_moe_per_expert_weights`
helper; split sanitize patches into per-class transforms via
`transforms_by_module`.
- `docs/supported_models.md` — update Qwen3.6 row note + link this PR.
- `tests/test_compat.py` — four new unit tests using the existing
numpy-fake-mlx fixture (no real model weights, runs in milliseconds);
one is sketched after this list:
- `test_per_expert_moe_tensors_stack_to_combined` — positive: per-expert
input produces correctly stacked combined output, content preserved per
axis-0 slot.
- `test_pre_stacked_moe_is_noop_for_per_expert_helper` — regression:
pre-stacked input passes through unchanged (covers mlx-community
redistributions and Qwen3.6 bf16 master).
- `test_non_contiguous_per_expert_indices_raise` — defensive: malformed
`{0, 1, 3}` checkpoint raises `ValueError`.
- `test_per_expert_helper_does_not_run_on_dense_qwen35` — architecture
invariant: dense path doesn't run the MoE helper.
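As an illustration, a standalone version of the defensive test might
look like the sketch below, reusing the `stack_per_expert_weights`
sketch from above with plain numpy arrays as stand-ins (validation
raises before any stacking happens, so no mlx arrays are needed); the
real test uses the repo's numpy-fake-mlx fixture instead:

```python
import numpy as np
import pytest


def test_non_contiguous_per_expert_indices_raise():
    # Expert 2 is missing, so the index set {0, 1, 3} is non-contiguous.
    weights = {
        f"model.language_model.layers.0.mlp.experts.{e}.gate_proj.weight": np.zeros((4, 8))
        for e in (0, 1, 3)
    }
    with pytest.raises(ValueError):
        stack_per_expert_weights(weights)
```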
## Verification (per #289 pass bar)
| Checkpoint | Hardware | Status | Output |
|---|---|---|---|
| `Qwen/Qwen3.6-35B-A3B-FP8` | M3 Max / 128 GB | ✅ loads, generates | `"The capital of France is"` → `" Paris, a city renowned for its iconic landmarks such"` |
| `Qwen/Qwen3.6-35B-A3B` (bf16) | M3 Max / 128 GB | ✅ loads via unchanged combined-format branch (both shims dormant; confirms the new branch is properly gated) | same output |
- Generation exercised the hybrid SDPA + GDN linear-attention path on
Apple Silicon Metal with the paged KV cache.
- `pytest tests/test_compat.py`: **15 passed, 1 skipped** (the skip is
the pre-existing `VLLM_METAL_RUN_REAL_MLX_FP8_TESTS=1`-gated test).
- Existing Qwen3.5 golden-token smoke (`test_qwen35_smoke.py`): 5/5
pass, unchanged.
- `bash scripts/lint.sh`: clean (shellcheck, ruff check, ruff format
--check, mypy).
Rebased on latest `main` (3323d32) and re-validated against the bumped
dep stack: `mlx-lm 0.31.3` (from #313), `vllm 0.20.0+cpu` (from #262),
`transformers 5.7.0`.
---------
Signed-off-by: Shivendra Dayal <sdayal@gmail.com>