Cannot use torch.compile with that LoRA

#5
by NielsGx - opened

When I try to use torch.compile with that LoRA, it fails, complaining that the recompilation limit was reached.
Could this be caused by a lack of VRAM (and the LoRA being FP32)?

Or it could be a general issue with ComfyUI, Qwen-Image, and LoRAs for it.
I opened issues:
ComfyUI: https://github.com/comfyanonymous/ComfyUI/issues/9289
KJ Nodes: https://github.com/kijai/ComfyUI-KJNodes/issues/363

EDIT: I tried resaving this LoRA as BF16 (except the INT64 parts, of course), and got the same issue.

This was resolved by updating to PyTorch nightly.

NielsGx changed discussion status to closed
