---
language:
- en
license: mit
datasets:
- HuggingFaceFW/fineweb
base_model: jxm/gpt-oss-20b-base
library_name: transformers
tags:
- llama-cpp
---
Quantized to MXFP4 using llama.cpp b6150.

I also made a Q6_K quant via gguf-my-repo on my personal account, but it was no better in quality and considerably larger. It seems llama.cpp quantization quality is poor for these models except for MXFP4.
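For reference, a sketch of how a quant like this is typically produced with llama.cpp (file names and the exact quantization type string are assumptions, not the exact commands used here; check `llama-quantize --help` in your build):

```shell
# Sketch only: paths and the MXFP4 type name below are placeholders/assumptions.
# 1. Convert the HF checkpoint to a full-precision GGUF with llama.cpp's converter.
python convert_hf_to_gguf.py ./gpt-oss-20b-base --outfile gpt-oss-20b-base-f16.gguf
# 2. Quantize with llama.cpp b6150 or later (the MXFP4 type may be spelled
#    differently in your build, e.g. an MoE-specific variant).
./llama-quantize gpt-oss-20b-base-f16.gguf gpt-oss-20b-base-mxfp4.gguf mxfp4
```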