---
library_name: llama.cpp
license: mit
tags:
- gguf
- bf16
base_model:
- CrucibleLab/M3.2-24B-Loki-V1.3
---

# M3.2-24B-Loki-V1.3-GGUF

GGUF model files for `M3.2-24B-Loki-V1.3`.

This repository contains GGUF models quantized using [`llama.cpp`](https://github.com/ggerganov/llama.cpp).

- **Base Model:** `M3.2-24B-Loki-V1.3`
- **Quantization Methods Processed in this Job:** `Q8_0`, `Q6_K`, `Q5_K_M`, `Q5_0`, `Q5_K_S`, `Q4_K_M`, `Q4_K_S`, `Q4_0`, `Q3_K_L`, `Q3_K_M`, `Q3_K_S`, `Q2_K`, `BF16`
- **Importance Matrix Used:** No

This specific upload is for the **`BF16`** quantization.
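
## Usage

A minimal sketch of running this file with llama.cpp's `llama-cli`. The `.gguf` filename below is an assumption; use the actual filename of the file you download from this repository.

```shell
# Hypothetical example: run the BF16 GGUF locally with llama.cpp.
# -m  path to the downloaded model file (filename here is assumed)
# -p  prompt text
# -n  number of tokens to generate
./llama-cli -m M3.2-24B-Loki-V1.3-BF16.gguf -p "Hello, " -n 64
```

Note that `BF16` is an unquantized 16-bit export and is the largest file in this repository; the `Q*` variants trade some quality for substantially smaller size and lower memory use.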