# Gacrux-R1-Qwen3-1.7B-MoD-GGUF

Gacrux-R1-Qwen3-1.7B-MoD is a high-efficiency, multi-domain model fine-tuned from Qwen3-1.7B using Mixture of Domains (MoD) reasoning traces. It is trained on the prithivMLmods/Gargantua-R1-Wee dataset, which targets rigorous mathematical problem-solving and adds multi-domain coverage across mathematics, coding, and science. The model blends symbolic precision, scientific logic, and structured output fluency, making it well suited for developers, educators, and researchers who need advanced reasoning under constrained compute.

## Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| Gacrux-R1-Qwen3-1.7B-MoD.BF16.gguf | BF16 | 3.45 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.F16.gguf | F16 | 3.45 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.F32.gguf | F32 | 6.89 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q2_K.gguf | Q2_K | 778 MB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q3_K_L.gguf | Q3_K_L | 1 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q3_K_M.gguf | Q3_K_M | 940 MB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q3_K_S.gguf | Q3_K_S | 867 MB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q4_K_M.gguf | Q4_K_M | 1.11 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q4_K_S.gguf | Q4_K_S | 1.06 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q5_K_M.gguf | Q5_K_M | 1.26 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q5_K_S.gguf | Q5_K_S | 1.23 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q6_K.gguf | Q6_K | 1.42 GB |
| Gacrux-R1-Qwen3-1.7B-MoD.Q8_0.gguf | Q8_0 | 1.83 GB |

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

A handy graph by ikawrakow compares some lower-quality quant types (lower is better); the image is not reproduced here.
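Another way to compare quant types is to convert file size into effective bits per weight using the model's 1.72B parameter count. The back-of-the-envelope sketch below uses an illustrative helper name (`bits_per_weight`, not from any library) and assumes decimal gigabytes; the result lands slightly above the nominal bit-width because K-quants store per-block scales and the GGUF file carries metadata.

```python
# Back-of-the-envelope: average bits stored per parameter for a GGUF file.
N_PARAMS = 1.72e9  # parameter count reported for this model

def bits_per_weight(file_size_gb: float) -> float:
    """Convert a file size in decimal GB to average bits per parameter."""
    return file_size_gb * 1e9 * 8 / N_PARAMS

# Q4_K_M is 1.11 GB -> roughly 5.2 bits/weight, above the nominal 4.
print(round(bits_per_weight(1.11), 1))  # -> 5.2
```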

Format: GGUF
Model size: 1.72B params
Architecture: qwen3

Base model: Qwen/Qwen3-1.7B (fine-tuned)
Training dataset: prithivMLmods/Gargantua-R1-Wee