Upload README.md with huggingface_hub
README.md
CHANGED
```diff
@@ -3,7 +3,7 @@ library_name: llama.cpp
 license: mit
 tags:
 - gguf
--
+- q5-k-m
 ---
 # M3.2-24B-Loki-V1.3-GGUF
 
@@ -15,4 +15,4 @@ This repository contains GGUF models quantized using [`llama.cpp`](https://githu
 - **Quantization Methods Processed in this Job:** `Q8_0`, `Q6_K`, `Q5_K_M`, `Q5_0`, `Q5_K_S`, `Q4_K_M`, `Q4_K_S`, `Q4_0`, `Q3_K_L`, `Q3_K_M`, `Q3_K_S`, `Q2_K`, `BF16`
 - **Importance Matrix Used:** No
 
-This specific upload is for the **`
+This specific upload is for the **`Q5_K_M`** quantization.
```
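Since the commit was made with `huggingface_hub`, downloading the uploaded Q5_K_M file can use the same library. Below is a minimal sketch; the repo id (owner prefix) and the exact `.gguf` filename are assumptions based on the common `<model>.<quant>.gguf` naming convention, not confirmed by this commit.

```python
from huggingface_hub import hf_hub_download

# Hypothetical repo id and filename -- the actual owner and file naming
# depend on how the uploader structured this repository.
repo_id = "example-org/M3.2-24B-Loki-V1.3-GGUF"
quant = "Q5_K_M"
filename = f"M3.2-24B-Loki-V1.3.{quant}.gguf"

# hf_hub_download fetches the file into the local HF cache and returns
# its path; uncomment to actually download (requires network access).
# local_path = hf_hub_download(repo_id=repo_id, filename=filename)
```

The returned path can then be passed directly to a llama.cpp-based runtime as the model file.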