Update README.md
The following example starts at the root of D drive and quantizes mistral's Mixtral-8x7B-Instruct-v0.1 model.
# Instructions:

## Windows command prompt - folder setup and git clone llama.cpp
* D:
* mkdir Mixtral
* git clone https://github.com/ggerganov/llama.cpp
Extract the two .zip files directly into the llama.cpp folder you just git cloned.

* https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
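The model files at the link above can also be fetched programmatically rather than through the browser. A sketch using the `huggingface_hub` Python package (an assumption on our part — the original steps download manually; this requires `pip install huggingface_hub` and network access):

```python
# Sketch: download the Mixtral repository contents into D:\Mixtral.
# Assumes `pip install huggingface_hub`; not part of the original manual steps.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
    local_dir=r"D:\Mixtral",  # same target folder as the manual instructions
)
```

Either way, all of the repository files should end up directly in D:\Mixtral before converting.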
## Windows command prompt - Convert the model to fp16:
* D:\llama.cpp>python convert.py D:\Mixtral --outtype f16 --outfile D:\Mixtral\Mixtral-8x7B-Instruct-v0.1.fp16.bin

## Windows command prompt - Quantize the fp16 model to q5_k_m:
* D:\llama.cpp>quantize.exe D:\Mixtral\Mixtral-8x7B-Instruct-v0.1.fp16.bin D:\Mixtral\Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf q5_k_m

That's it. Load up the resulting .gguf file like you normally would.
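Before loading it, the output file can be sanity-checked: a GGUF file begins with the 4-byte magic `GGUF`, followed by a little-endian uint32 version number. A minimal check (the `is_gguf` helper is our own sketch, not part of llama.cpp):

```python
# Sketch: verify a file carries the GGUF magic header.
# is_gguf() is a hypothetical helper, not a llama.cpp utility.
import struct

def is_gguf(path):
    """Return (ok, version): ok is True if the file starts with the GGUF magic."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False, None
    (version,) = struct.unpack("<I", header[4:8])  # little-endian uint32
    return True, version
```

If the magic is missing, the quantize step most likely failed or wrote an unexpected format.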