dgomes03 committed 4d5ff24 (verified, parent: f90cf58): Update README.md

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -10,3 +10,4 @@ tags:
 - mlx
 pipeline_tag: text-generation
 ---
+Mistral-7B-Instruct-v0.3 quantized with mixed precision: the embedding layer and output (head) layer are quantized to 8-bit precision, while the rest of the model uses 6-bit quantization. This mixed-precision approach aims to balance model size and inference speed with improved precision in critical layers.
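The per-layer policy described above can be sketched as a small predicate that picks a bit width from the layer's path. This is a hypothetical illustration, not the author's conversion script: the layer names (`embed_tokens`, `lm_head`) follow common Mistral checkpoint layouts, and the `group_size` of 64 is an assumed default, both of which may differ in the actual conversion.

```python
# Hypothetical sketch of the mixed-precision policy: 8-bit for the
# embedding and output head, 6-bit for every other layer.
# Layer names and group_size are assumptions, not taken from this repo.

def quant_config(layer_path: str) -> dict:
    """Return quantization settings for a given layer path."""
    high_precision = ("embed_tokens", "lm_head")
    bits = 8 if any(name in layer_path for name in high_precision) else 6
    return {"bits": bits, "group_size": 64}

# A transformer projection stays at 6-bit; the head gets 8-bit.
print(quant_config("model.layers.0.self_attn.q_proj")["bits"])  # 6
print(quant_config("lm_head")["bits"])  # 8
```

A predicate of this shape is how mixed-precision schemes are typically expressed in MLX-based converters: the converter calls it once per quantizable layer and applies the returned settings.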