[Bug] Flux FP4 doesn't handle multiple LoRAs #71

@Matriv-org

Description

Describe the Bug

Hello,

On ComfyUI, when I chain two LoRA models in series, one of the two LoRAs never takes effect.
I've tested it:

  • Each LoRA works on its own, but when both are loaded one of them appears to be bypassed: changing its model weight has no effect. I'm using the Nunchaku FLUX.1 LoRA Loader node (see the sketch under Reproduction Steps).
  • I tested the exact same pipeline with GGUF Q4 (Unet Loader), FP8, and the CR LoRA Stack, and both LoRAs work there.

Environment

Windows 11, PyTorch 2.8 + CUDA 12.8, RTX 5090, ComfyUI 0.3.27, SageAttention 2

Reproduction Steps

Attached workflow: FP4 vs FP8 vs GGUF.json

[Image attached]
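
For reference, the same repro can be sketched outside ComfyUI with the nunchaku Python API. This is a minimal sketch, not the reporter's workflow: it assumes the `update_lora_params` / `set_lora_strength` calls from the nunchaku LoRA example, and the FP4 model repo id and LoRA filenames below are placeholders.

```python
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel

# FP4-quantized FLUX.1 transformer (placeholder repo id; use the
# FP4 weights that match your GPU -- an RTX 5090 in this report).
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-fp4-flux.1-dev"
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Two LoRAs applied in series, mirroring two chained
# "Nunchaku FLUX.1 LoRA Loader" nodes in the attached workflow.
transformer.update_lora_params("lora_a.safetensors")  # placeholder path
transformer.set_lora_strength(1.0)
transformer.update_lora_params("lora_b.safetensors")  # placeholder path
transformer.set_lora_strength(1.0)

# Expected: both LoRAs influence the output, as with GGUF Q4 / FP8.
# Observed with FP4 (per this report): only one LoRA takes effect,
# and changing the other's strength does nothing.
image = pipe("a test prompt", num_inference_steps=25, guidance_scale=3.5).images[0]
image.save("fp4_two_loras.png")
```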

Labels: lora (LoRA related issues)
