nm-research committed
Commit 99bee74 · verified · 1 Parent(s): 8811570

Update README.md

Files changed (1):
  1. README.md +10 -10
README.md CHANGED
@@ -68,7 +68,7 @@ This model was created with [llm-compressor](https://github.com/vllm-project/llm
 
 
 ```bash
-python quantize.py --model_path ibm-granite/granite-3.1-8b-instruct --quant_path "output_dir/granite-3.1-8b-instruct-quantized.w4a16" --calib_size 1024 --dampening_frac 0.01 --observer mse --actorder dynamic
+python quantize.py --model_path ibm-granite/granite-3.1-8b-instruct --quant_path "output_dir/granite-3.1-8b-instruct-quantized.w4a16" --calib_size 1024 --dampening_frac 0.1 --observer mse --actorder static
 ```
 
 
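The hunk above changes two GPTQ hyperparameters: Hessian dampening rises from 0.01 to 0.1 and activation ordering switches from dynamic to static. quantize.py itself is not part of this diff, so the sketch below is only a plausible reconstruction of how those flags could map onto llm-compressor's `GPTQModifier`; the calibration dataset, `targets`, and `ignore` list are assumptions, not the script's actual contents.

```python
# Hypothetical sketch of the recipe quantize.py might build from its CLI flags.
# GPTQModifier and oneshot are real llm-compressor APIs, but the exact wiring
# shown here (dataset choice, targets, ignore list) is assumed.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

recipe = GPTQModifier(
    targets="Linear",      # quantize Linear layers to 4-bit weights
    ignore=["lm_head"],    # common practice: keep the output head in 16-bit
    scheme="W4A16",        # 4-bit weights, 16-bit activations
    dampening_frac=0.1,    # --dampening_frac 0.1: heavier Hessian damping
    actorder="static",     # --actorder static: fixed activation ordering
    # (--observer mse would be threaded into the observer config; omitted here)
)

oneshot(
    model="ibm-granite/granite-3.1-8b-instruct",  # --model_path
    dataset="open_platypus",                      # assumed calibration set
    recipe=recipe,
    num_calibration_samples=1024,                 # --calib_size 1024
    output_dir="output_dir/granite-3.1-8b-instruct-quantized.w4a16",  # --quant_path
)
```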
@@ -192,18 +192,18 @@ evalplus.evaluate \
 
 | Metric | ibm-granite/granite-3.1-8b-instruct | neuralmagic-ent/granite-3.1-8b-instruct-quantized.w4a16 |
 |-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
-| ARC-Challenge (Acc-Norm, 25-shot) | 66.81 | 66.98 |
-| GSM8K (Strict-Match, 5-shot) | 64.52 | 68.08 |
-| HellaSwag (Acc-Norm, 10-shot) | 84.18 | 83.30 |
-| MMLU (Acc, 5-shot) | 65.52 | 63.96 |
-| TruthfulQA (MC2, 0-shot) | 60.57 | 60.62 |
-| Winogrande (Acc, 5-shot) | 80.19 | 78.61 |
-| **Average Score** | **70.30** | **70.26** |
-| **Recovery** | **100.00** | **99.94** |
+| ARC-Challenge (Acc-Norm, 25-shot) | 66.81 | 66.81 |
+| GSM8K (Strict-Match, 5-shot) | 64.52 | 65.66 |
+| HellaSwag (Acc-Norm, 10-shot) | 84.18 | 83.62 |
+| MMLU (Acc, 5-shot) | 65.52 | 64.25 |
+| TruthfulQA (MC2, 0-shot) | 60.57 | 60.17 |
+| Winogrande (Acc, 5-shot) | 80.19 | 78.37 |
+| **Average Score** | **70.30** | **69.81** |
+| **Recovery** | **100.00** | **99.31** |
 
 #### HumanEval pass@1 scores
 | Metric | ibm-granite/granite-3.1-8b-instruct | neuralmagic-ent/granite-3.1-8b-instruct-quantized.w4a16 |
 |-----------------------------------------|:---------------------------------:|:-------------------------------------------:|
-| HumanEval Pass@1 | 71.00 | 70.90 |
+| HumanEval Pass@1 | 71.00 | 70.50 |
 
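The **Average Score** and **Recovery** rows are derived from the six benchmark rows above them. A quick check, assuming Recovery is the ratio of the two averages (which reproduces the updated table's numbers exactly):

```python
# Re-derive the summary rows of the updated table from the six benchmark scores.
baseline  = [66.81, 64.52, 84.18, 65.52, 60.57, 80.19]   # ibm-granite baseline
quantized = [66.81, 65.66, 83.62, 64.25, 60.17, 78.37]   # w4a16 model

avg_base  = sum(baseline) / len(baseline)     # -> 70.30 after rounding
avg_quant = sum(quantized) / len(quantized)   # -> 69.81 after rounding
recovery  = 100 * avg_quant / avg_base        # -> 99.31 after rounding

print(f"average: {avg_quant:.2f} / {avg_base:.2f}, recovery: {recovery:.2f}%")
```

By this measure the revised recipe reports 99.31% recovery, slightly below the 99.94% of the numbers it replaces, with per-task scores that sit closer to the baseline than the old table's GSM8K outlier.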
 
 