OptimizeLLM committed
Commit 194d9eb · verified · 1 Parent(s): 99a726e

Update README.md

Files changed (1):
  1. README.md (+1 -1)
README.md CHANGED
@@ -17,7 +17,7 @@ prompt_template: '[INST] {prompt} [/INST]
 quantized_by: OptimizeLLM
 ---
 
-This is Mistral AI's Mixtral Instruct v0.1 model, quantized on 02/24/2024. The file size is slightly larger than TheBloke's version from December, and it seems to work well.
+This is Mistral AI's Mixtral Instruct v0.1 model, quantized on 02/24/2024. It works well.
 
 # How to quantize your own models with Windows and an RTX GPU:
 