Quantized yasserrmd/gpt-oss-coder-20b with llama.cpp commit 54a241f
Thanks to yasserrmd for the awesome finetuning work and to OpenAI for the original model.
Multiple ways to use:
- Run with llama.cpp (a sample client request is sketched after this list):
  ./llama-server -m "gpt-oss-coder-MXFP4_MOE.gguf" -ngl 25 --jinja
- Use with LM Studio: just pull benhaotang/gpt-oss-coder-20B-GGUF
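
Once llama-server is up, it exposes an OpenAI-compatible chat completions endpoint. Below is a minimal sketch of querying it from Python, assuming the server is running locally on the default port 8080 and that the prompt and sampling parameters shown are just placeholders; adjust them for your setup.

```python
# Minimal sketch: send a chat request to the local llama-server via its
# OpenAI-compatible endpoint. Assumes the server started with the command
# above is listening on localhost:8080 (the llama-server default).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        # llama-server serves the single loaded GGUF; the model field is informational.
        "model": "gpt-oss-coder-20b",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "temperature": 0.7,   # example sampling settings, tune to taste
        "max_tokens": 512,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```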
Model tree for benhaotang/gpt-oss-coder-20B-GGUF
- Base model: openai/gpt-oss-20b
- Quantized: unsloth/gpt-oss-20b-unsloth-bnb-4bit
- Finetuned: yasserrmd/gpt-oss-coder-20b