---
base_model:
- openai/gpt-oss-120b
---
# gpt-oss-120b

Detailed guide for using this model with `llama.cpp`:
https://github.com/ggml-org/llama.cpp/discussions/15396
Quick start:
```sh
llama-server -hf ggml-org/gpt-oss-120b-GGUF -c 0 -fa --jinja
# Then, access http://localhost:8080
```
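Besides the web UI, `llama-server` also exposes an OpenAI-compatible HTTP API. A minimal request against a running server might look like the sketch below (it assumes the server was started with the quick-start command above and is listening on the default port 8080; the prompt text is just an example):

```sh
# Send a chat request to the OpenAI-compatible endpoint of a running
# llama-server instance (assumes default host/port from the quick start).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello! Briefly introduce yourself."}
        ]
      }'
```

The response is a JSON chat-completion object, so existing OpenAI client libraries can be pointed at `http://localhost:8080/v1` without code changes.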