---
license: apache-2.0
tags:
  - gguf
  - coding
  - quantized
  - Q6_K
  - olympiccoder
  - llama.cpp
  - sychonix
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
model_type: llama
quantization_config:
  quant_method: bitsandbytes
  load_in_4bit: false
  load_in_8bit: false
  weight_dtype: float16
  bnb_4bit_quant_type: nf4
---

# 🧠 OlympicCoder 7B Q6

An optimized, Q6_K-quantized GGUF build of OlympicCoder 7B, designed for algorithmic reasoning, coding challenges, and symbolic inference.

...

## 🧩 Model Details

- **Model Name:** OlympicCoder 7B Q6
- **Quantization:** Q6_K
- **Format:** GGUF
- **Size:** 6.25 GB
- **Architecture:** LLaMA-style 7B
- **Compatibility:** llama.cpp, KoboldCpp, LM Studio, text-generation-webui
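Since this card lists `bartowski/open-r1_OlympicCoder-7B-GGUF` as the base repo, the Q6_K file can likely be fetched with the Hugging Face CLI (the exact file name below is taken from the usage example in this card and may differ in the repo):

```shell
# Download only the Q6_K quant into the current directory
huggingface-cli download bartowski/open-r1_OlympicCoder-7B-GGUF \
  open-r1_OlympicCoder-7B-Q6_K.gguf --local-dir .
```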

## 🔧 Recommended Use

- 🧮 LeetCode / CodeForces-style problem solving
- 💻 Competitive programming and algorithmic reasoning
- 🛠️ General-purpose code generation

## 🚀 How to Use (with llama.cpp)

```shell
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a Python function to solve the 2-sum problem."
```

Note: newer llama.cpp builds ship the binary as `llama-cli` rather than `./main`.
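As a quick sanity check for the example prompt above, a canonical answer the model should be able to produce is the one-pass hash-map solution (O(n) time, O(n) space); the function name and return convention here are just one common choice:

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that sum to target,
    or None if no such pair exists."""
    seen = {}  # maps value -> index where it was first seen
    for i, x in enumerate(nums):
        complement = target - x
        if complement in seen:
            return [seen[complement], i]
        seen[x] = i
    return None
```

If the quantized model's output matches this approach (rather than the brute-force O(n²) double loop), that is a good sign the Q6_K weights preserved the base model's coding ability.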