Update README.md
README.md CHANGED
@@ -176,6 +176,13 @@ output_text = tokenizer.batch_decode(
 print("Response:", output_text[0][len(prompt):])
 ```
 
+Note: to `push_to_hub` you need to run
+```Shell
+pip install -U "huggingface_hub[cli]"
+huggingface-cli login
+```
+and use a token with write access, from https://huggingface.co/settings/tokens
+
 # Model Quality
 We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.
 Need to install lm-eval from source:
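
For context beyond the diff: once `huggingface-cli login` has cached a write-access token, uploading the quantized checkpoint is a single `push_to_hub` call on the model and tokenizer. The sketch below is a minimal illustration using the standard `transformers` API; the repo IDs are placeholders and are not taken from this README.

```python
# Minimal sketch (assumption): push an already-quantized model to the Hub
# after `huggingface-cli login` with a write-access token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-quantized-model"  # placeholder, not from this README
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# push_to_hub reads the token saved by `huggingface-cli login`;
# that token must have write access to the target repo.
model.push_to_hub("your-org/your-quantized-model")
tokenizer.push_to_hub("your-org/your-quantized-model")
```

If you prefer not to rely on the cached CLI login, `push_to_hub` also accepts an explicit `token=...` argument.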