Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/
 - **Model Developers:** Neural Magic
 
 Quantized version of [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), with the new configuration files.
-It achieves an average score of 69.
+It achieves an average score of 69.45 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 69.69.
 
 ### Model Optimizations
 
@@ -187,9 +187,9 @@ lm_eval \
 </td>
 <td>69.33
 </td>
-<td>68.
+<td>68.72
 </td>
-<td>99.
+<td>99.12%
 </td>
 </tr>
 <tr>
@@ -197,9 +197,9 @@ lm_eval \
 </td>
 <td>63.05
 </td>
-<td>
+<td>62.54
 </td>
-<td>
+<td>99.19%
 </td>
 </tr>
 <tr>
@@ -207,9 +207,9 @@ lm_eval \
 </td>
 <td>76.95
 </td>
-<td>
+<td>77.03
 </td>
-<td>
+<td>100.1%
 </td>
 </tr>
 <tr>
@@ -217,9 +217,9 @@ lm_eval \
 </td>
 <td>79.58
 </td>
-<td>79.
+<td>79.37
 </td>
-<td>99.
+<td>99.74%
 </td>
 </tr>
 <tr>
@@ -227,9 +227,9 @@ lm_eval \
 </td>
 <td>74.82
 </td>
-<td>
+<td>75.37
 </td>
-<td>
+<td>100.7%
 </td>
 </tr>
 <tr>
@@ -237,9 +237,9 @@ lm_eval \
 </td>
 <td>54.41
 </td>
-<td>
+<td>53.68
 </td>
-<td>
+<td>98.66%
 </td>
 </tr>
 <tr>
@@ -247,9 +247,9 @@ lm_eval \
 </td>
 <td><strong>69.69</strong>
 </td>
-<td><strong>69.
+<td><strong>69.45</strong>
 </td>
-<td><strong>99.
+<td><strong>99.66%</strong>
 </td>
 </tr>
 </table>
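From the numbers added in this commit, the recovery column appears to be the quantized score expressed as a percentage of the unquantized (baseline) score, and the bold final row the arithmetic mean of the six benchmarks above it. A minimal Python sketch that re-derives those figures from the per-benchmark scores (row order as in the table; the benchmark names themselves are not visible in this diff):

```python
# Per-benchmark OpenLLM v1 scores in table order.
# The diff does not show which row is which benchmark, so they are kept as plain lists.
baseline  = [69.33, 63.05, 76.95, 79.58, 74.82, 54.41]  # Phi-3-mini-128k-instruct (unquantized)
quantized = [68.72, 62.54, 77.03, 79.37, 75.37, 53.68]  # this quantized model

# Per-benchmark recovery: quantized score as a percentage of the baseline score.
for base, quant in zip(baseline, quantized):
    print(f"{quant:6.2f} / {base:6.2f} -> {100 * quant / base:6.2f}% recovery")

# Averages and overall recovery reported in the updated description and the bold table row.
avg_base = sum(baseline) / len(baseline)
avg_quant = sum(quantized) / len(quantized)
print(f"baseline average:  {avg_base:.2f}")                     # 69.69
print(f"quantized average: {avg_quant:.2f}")                    # 69.45
print(f"overall recovery:  {100 * avg_quant / avg_base:.2f}%")  # 99.66%
```

Running this reproduces the per-row recoveries (99.12% through 100.7%, up to rounding in the displayed precision) and the 69.69 / 69.45 averages behind the 99.66% headline figure.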