[Bug] Flux FP4 doesn't handle multiple LoRA #71
Closed
Labels
lora (LoRA related issues)
Description
Checklist
- 1. I have searched for related issues and FAQs (Nunchaku Frequently Asked Questions nunchaku#262) but was unable to find a solution.
- 2. The issue persists in the latest version.
- 3. Please note that without environment information and a minimal reproducible example, it will be difficult for us to reproduce and address the issue, which may delay our response.
- 4. If your report is a question rather than a bug, please submit it as a discussion at https://github.com/mit-han-lab/ComfyUI-nunchaku/discussions/new/choose. Otherwise, this issue will be closed.
- 5. I will do my best to describe the issue in English.
Describe the Bug
Hello,
In ComfyUI, when I chain two LoRA models in series, one of the two LoRAs always fails to apply.
I've tested it:
- Each LoRA works on its own. When I use both, it looks like one of them is bypassed: changing its LoRA model weight has no effect. I used the Nunchaku FLUX.1 LoRA Loader (see the sketch after this list).
- I tested GGUF Q4 (Unet Loader), FP8, and a CR LoRA Stack on the exact same pipeline, and both LoRAs work there.
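For comparison, the same two-LoRA stack can be exercised with nunchaku's Python API directly, outside ComfyUI. This is a minimal sketch, assuming the `compose_lora` helper and `update_lora_params` method from recent nunchaku releases; the model ID, LoRA paths, and strengths are placeholders:

```python
# Minimal sketch of stacking two LoRAs on a Nunchaku FLUX model outside ComfyUI.
# Assumes nunchaku's compose_lora helper and update_lora_params method;
# model IDs, LoRA paths, and strengths are placeholders.
import torch
from diffusers import FluxPipeline

from nunchaku import NunchakuFluxTransformer2dModel
from nunchaku.lora.flux.compose import compose_lora

# Load the 4-bit Nunchaku transformer (FP4 variant for Blackwell GPUs).
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-fp4-flux.1-dev"  # placeholder model ID
)

# Compose both LoRAs into a single state dict and load them at once,
# rather than applying two loader nodes one after another as in the ComfyUI graph.
composed = compose_lora(
    [
        ("lora_a.safetensors", 0.8),  # placeholder path / strength
        ("lora_b.safetensors", 1.0),  # placeholder path / strength
    ]
)
transformer.update_lora_params(composed)

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipeline("a photo of a cat", num_inference_steps=28).images[0]
image.save("out.png")
```

If both LoRAs visibly affect the output via this composed path but not via two chained loader nodes, that would point at the per-node LoRA application rather than the LoRA weights themselves.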
Environment
Windows 11, PyTorch 2.8 (CUDA 12.8), RTX 5090, ComfyUI 0.3.27, SageAttention 2
Reproduction Steps
