🧐 OlympicCoder 7B Q6
An optimized, Q6_K-quantized GGUF build of OlympicCoder 7B for algorithmic reasoning, competitive programming, and symbolic inference.
📊 Model Details
- Model Name: OlympicCoder 7B Q6
- Quantization: Q6_K
- Format: GGUF
- Size: 6.25 GB
- Architecture: LLaMA-style 7B
- Base Model: open-r1/OlympicCoder-7B (quantized from bartowski/open-r1_OlympicCoder-7B-GGUF)
🛠️ Use Cases
- ⚖️ Competitive programming and Codeforces-style tasks
- 📈 Symbolic reasoning and algorithmic inference
- 💻 Code generation and technical prompts
🚀 How to Run (with llama.cpp)
```bash
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."
```
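Depending on your hardware you will usually also want to set a context size, cap the generation length, and offload layers to the GPU. The flag values below are illustrative assumptions, not tuned settings; note that recent llama.cpp builds ship the same binary under the name `llama-cli` instead of `main`.

```bash
# Illustrative flags (values are assumptions, tune for your hardware):
#   -c 4096     context window in tokens
#   -n 512      maximum tokens to generate
#   -ngl 35     layers offloaded to the GPU (omit for CPU-only inference)
#   --temp 0.2  low sampling temperature for more deterministic code
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf \
  -c 4096 -n 512 -ngl 35 --temp 0.2 \
  -p "Write a function that checks if a number is prime."
```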
Other tools:
- LM Studio: import the `.gguf` file and chat directly
- KoboldCpp / text-generation-webui: load as a GGUF model
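If you would rather serve the model over HTTP than load it in one of the desktop tools above, llama.cpp also includes a server binary (`llama-server` in recent builds) that exposes an OpenAI-compatible endpoint. A minimal sketch, with an assumed host and port:

```bash
# Minimal sketch, assuming a recent llama.cpp build that includes llama-server.
# Serves an OpenAI-compatible API at http://localhost:8080/v1/chat/completions.
./llama-server -m open-r1_OlympicCoder-7B-Q6_K.gguf --host 0.0.0.0 --port 8080 -c 4096

# Example request against the running server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a function that checks if a number is prime."}]}'
```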
📄 License
Apache 2.0 — free for commercial and research use.
🌳 Model Tree (sychonix/OlympicCoder-7B-Sychonix)
- Base model: Qwen/Qwen2.5-7B
- Finetuned: Qwen/Qwen2.5-Coder-7B
- Finetuned: Qwen/Qwen2.5-Coder-7B-Instruct
- Finetuned: open-r1/OlympicCoder-7B
- Quantized: bartowski/open-r1_OlympicCoder-7B-GGUF