Steelskull committed (verified) · Commit 1dad089 · 1 Parent(s): d4a8b47

Update README.md

Files changed (1): README.md (+3 -1)
README.md CHANGED
@@ -4,6 +4,8 @@ license: mit
  tags:
  - gguf
  - bf16
+ base_model:
+ - CrucibleLab/M3.2-24B-Loki-V1.3
  ---
  # M3.2-24B-Loki-V1.3-GGUF

@@ -15,4 +17,4 @@ This repository contains GGUF models quantized using [`llama.cpp`](https://githu
  - **Quantization Methods Processed in this Job:** `Q8_0`, `Q6_K`, `Q5_K_M`, `Q5_0`, `Q5_K_S`, `Q4_K_M`, `Q4_K_S`, `Q4_0`, `Q3_K_L`, `Q3_K_M`, `Q3_K_S`, `Q2_K`, `BF16`
  - **Importance Matrix Used:** No

- This specific upload is for the **`BF16`** quantization.
+ This specific upload is for the **`BF16`** quantization.
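For reference, the model card front matter after this commit reads roughly as below. This is reconstructed only from the lines visible in the diff plus the `license: mit` context in the hunk header, so any other metadata fields in the file are omitted:

```yaml
---
license: mit
tags:
- gguf
- bf16
base_model:
- CrucibleLab/M3.2-24B-Loki-V1.3
---
```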
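And a minimal sketch of fetching the BF16 file this upload describes. The repo id and filename below are assumptions for illustration (the diff does not name them); substitute the actual values from the repository's file listing:

```python
# Sketch only: download one GGUF file from the Hub.
# The repo_id and filename are hypothetical placeholders -- neither is
# stated in this commit; check the repository's file list before running.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="CrucibleLab/M3.2-24B-Loki-V1.3-GGUF",  # assumed repo id
    filename="M3.2-24B-Loki-V1.3-BF16.gguf",        # assumed filename
)
print(local_path)  # local .gguf path, loadable by llama.cpp (e.g. llama-cli -m <path>)
```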