Failed to run 'qwen-image-Q6_K.gguf' with ollama
Hello!
I downloaded qwen-image-Q6_K.gguf and created a Modelfile for it.
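The Modelfile was essentially just a FROM line (the path below is an assumption; point it at wherever the GGUF actually lives), registered with `ollama create qwen-image -f Modelfile`:

```
# Modelfile (minimal; the relative path assumes the GGUF sits next to this file)
FROM ./qwen-image-Q6_K.gguf
```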
```
ollama list
NAME                 ID              SIZE     MODIFIED
qwen-image:latest    1f9ea9dde17d    16 GB    19 minutes ago
```
However, when I ran `ollama run qwen-image`, it crashed:
```
time=2025-08-06T11:11:22.804+08:00 level=WARN source=memory.go:129 msg="model missing blk.0 layer size"
panic: interface conversion: interface {} is nil, not *ggml.array[string]
goroutine 52 [running]:
github.com/ollama/ollama/fs/ggml.GGML.GraphSize({{0x564cd04b79e0, 0xc00059a5f0}, {0x564cd04b7990, 0xc000189008}, 0x3ead8fa20}, 0x1000, 0x200, 0x1, {0x0, 0x0})
	github.com/ollama/ollama/fs/ggml/ggml.go:486 +0x18f4
github.com/ollama/ollama/llm.EstimateGPULayers({_, _, _}, _, {_, _, _}, {{0x1000, 0x200, 0xffffffffffffffff, ...}, ...}, ...)
	github.com/ollama/ollama/llm/memory.go:142 +0x725
github.com/ollama/ollama/llm.PredictServerFit({0xc000125b70?, 0x5b?, 0xc0001258b8?}, 0xc00026faa0, {0x0?, 0xc000125a68?, 0xc000125b68?}, {0x0, 0x0, 0x0}, ...)
	github.com/ollama/ollama/llm/memory.go:23 +0xe5
github.com/ollama/ollama/server.pickBestFullFitByLibrary(0xc0004f9380, 0xc00026faa0, {0xc0000e2240?, 0xfffffffffffffffc?, 0x564cd0013f3a?}, 0xc000057cc8)
	github.com/ollama/ollama/server/sched.go:785 +0x6fb
github.com/ollama/ollama/server.(*Scheduler).processPending(0xc000111020, {0x564cd04bbaf0, 0xc00059ac80})
	github.com/ollama/ollama/server/sched.go:227 +0xf6e
github.com/ollama/ollama/server.(*Scheduler).Run.func1()
	github.com/ollama/ollama/server/sched.go:108 +0x1f
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
	github.com/ollama/ollama/server/sched.go:107 +0xb1
```
Any help is appreciated!
Ollama does not work with image-generation models, as far as I know. The stack trace fits that: the panic happens while Ollama tries to estimate GPU layers for an LLM and hits text-model metadata that an image-generation GGUF does not contain.
Do you want to use Ollama? Then this is not the correct model.
Do you want to use this model? Then Ollama is not the correct software; you'll have to:
- Install ComfyUI: https://github.com/comfyanonymous/ComfyUI/#get-started
- Install ComfyUI custom nodes for GGUF quantization support: https://github.com/city96/ComfyUI-GGUF#installation
- Follow this guide: https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/
After step 3, replace the model loader node with the GGUF loader from the custom nodes installed in step 2; this is what allows loading the GGUF files from this repository.
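In a workflow exported in API format, the swap amounts to changing the loader node's class. A minimal sketch as a Python dict, assuming the node id, the stock UNETLoader starting point, and the exact input names match your exported workflow:

```python
# One node of a ComfyUI workflow in API format, expressed as a Python dict.
# Node id "1" and the filename are placeholders for whatever your workflow uses.
gguf_loader_node = {
    "class_type": "UnetLoaderGGUF",  # ComfyUI-GGUF node that replaces the stock UNETLoader
    "inputs": {"unet_name": "qwen-image-Q6_K.gguf"},
}
```

The .gguf file itself goes under ComfyUI/models/unet/ so the loader can find it.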
Thank you so much!
Keep in mind you can use Open WebUI to hook into ComfyUI's API via a custom workflow, with the prompt fed by whatever model you are running in Ollama. This takes some effort, but if that is what you originally wanted to do, this is how you would do it.
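A bare-bones sketch of that glue, independent of Open WebUI: ask a text model in Ollama to write the image prompt, inject it into a workflow exported in API format, and queue it on ComfyUI. The default ports (11434 for Ollama, 8188 for ComfyUI), the model name, workflow.json, and the node id "6" for the positive prompt are all assumptions about your setup:

```python
import json
import urllib.request

def post_json(url, payload):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# 1. Have a text model running in Ollama write the image prompt.
ollama_out = post_json("http://localhost:11434/api/generate", {
    "model": "llama3",  # placeholder: any text model you have pulled
    "prompt": "Write one vivid sentence describing a lighthouse at dusk.",
    "stream": False,
})
image_prompt = ollama_out["response"].strip()

# 2. Load a ComfyUI workflow exported via "Save (API Format)".
with open("workflow.json") as f:  # placeholder path
    workflow = json.load(f)

# 3. Inject the generated text into the positive CLIPTextEncode node.
#    "6" is a placeholder node id; check your own workflow for the real one.
workflow["6"]["inputs"]["text"] = image_prompt

# 4. Queue the job on ComfyUI's HTTP API.
result = post_json("http://127.0.0.1:8188/prompt", {"prompt": workflow})
print("queued:", result["prompt_id"])
```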