Error in ollama


I downloaded the Q8_0 files with huggingface-cli:

huggingface-cli download unsloth/GLM-4.5-Air-GGUF --include "Q8_0/*.gguf" --local-dir .
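
Just to confirm that all three split parts actually came down, a plain directory listing is enough (the exact path depends on --local-dir, so this is only an example):

ls -lh Q8_0/GLM-4.5-Air-Q8_0-*.gguf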

Then I merged the files with llama.cpp:

~/llama.cpp/build/bin/llama-gguf-split --merge GLM-4.5-Air-Q8_0-00001-of-00003.gguf GLM-4.5-Air-Q8_0.gguf

This worked without any errors, and in the end I had one single file.
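
A quick way to sanity-check the merged file outside of ollama would be to load it directly with llama.cpp's llama-cli (binary path and prompt are just examples here):

~/llama.cpp/build/bin/llama-cli -m GLM-4.5-Air-Q8_0.gguf -p "Hello" -n 16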
After that I created the model with the following command, again without any errors:

ollama create GLM-4.5-Air-Q4_K_M:latest -f ~/Modelfile

The model is listed afterwards.
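
For reference, a minimal Modelfile for a merged GGUF only needs a FROM line pointing at the file; the path below is just a placeholder, written out here as a shell snippet:

# write a minimal Modelfile; the GGUF path is only an example
cat > ~/Modelfile <<'EOF'
FROM /path/to/GLM-4.5-Air-Q8_0.gguf
EOF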

But when I try to run the model in ollama, I get an error:

unable to load model: .ollama/models/blobs/sha256-08b..... (status code: 500)

ollama is updated to version 0.11.2.
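
The client message above is all I get; assuming ollama runs as a systemd service on Linux, the full server-side error should show up in the journal, for example:

journalctl -u ollama --no-pager -n 100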

Other official ollama models work. The unsloth qwen-235b Q4 also works (created the same way as described above).

What could be causing this problem? I have no idea :(
