---
license: apache-2.0
tags:
- gguf
- coding
- quantized
- Q6_K
- olympiccoder
- llama.cpp
model_type: llama
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
---
# 🧐 OlympicCoder 7B Q6
A Q6_K-quantized GGUF build of OlympicCoder 7B, designed for algorithmic reasoning, competitive programming, and symbolic inference.
---
## 📊 Model Details
* **Model Name**: OlympicCoder 7B Q6
* **Quantization**: Q6\_K
* **Format**: GGUF
* **Size**: 6.25 GB
* **Architecture**: LLaMA-style 7B
* **Base Model**: [open-r1\_OlympicCoder-7B-GGUF](https://huggingface.co/bartowski/open-r1_OlympicCoder-7B-GGUF)
---
## 🛠️ Use Cases
* ⚖️ Competitive programming and Codeforces-style tasks
* 📈 Symbolic reasoning and algorithmic inference
* 💻 Code generation and technical prompts
---
## 🚀 How to Run (with llama.cpp)
```bash
./llama-cli -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."
```
(Older llama.cpp builds ship the same binary as `./main`.)
Other tools:
* **LM Studio**: Import `.gguf` and chat directly
* **KoboldCpp** / **text-generation-webui**: Load as GGUF model
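
If you prefer a scripted workflow, a minimal sketch using the `llama-cpp-python` bindings looks like the following (assumes `pip install llama-cpp-python` and that the `.gguf` file has been downloaded to the working directory; the context size is an illustrative choice, not a requirement of this model):

```python
# Sketch: load the Q6_K GGUF file and run a single completion
# via llama-cpp-python. Requires the ~6 GB model file locally.
from llama_cpp import Llama

llm = Llama(
    model_path="open-r1_OlympicCoder-7B-Q6_K.gguf",  # path to the downloaded GGUF
    n_ctx=4096,  # context window; adjust to your memory budget
)

result = llm(
    "Write a function that checks if a number is prime.",
    max_tokens=256,  # cap the completion length
)
print(result["choices"][0]["text"])
```

The same `Llama` object can be reused across prompts, so load it once per session rather than per request.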
---
## 📄 License
Apache 2.0 — free for commercial and research use.
---