^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned only on chat data and not trained with few-shot examples.
**: Mistral and open-source SOTA results are taken from the results reported in instruction-tuned model papers and official repositories.
All models are evaluated in chat mode (i.e., with the respective conversation template applied). All zero-shot benchmarks follow the same settings as the AGIEval and Orca papers. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
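As a minimal sketch of what "chat mode" means here: each benchmark prompt is wrapped in the model's conversation template before scoring. The role prefixes and `<|end_of_turn|>` token below follow the OpenChat-style format from the tokenization example earlier in this README; the helper name `apply_chat_template` is illustrative, and other models substitute their own templates.

```python
def apply_chat_template(messages):
    """Wrap (role, content) pairs in an OpenChat-style conversation template.

    Illustrative only: role prefixes and the end-of-turn token are assumed
    from the tokenization example above; each model defines its own template.
    """
    parts = []
    for role, content in messages:
        prefix = "GPT4 User" if role == "user" else "GPT4 Assistant"
        parts.append(f"{prefix}: {content}<|end_of_turn|>")
    # Leave the assistant turn open so the model generates the answer.
    parts.append("GPT4 Assistant:")
    return "".join(parts)

prompt = apply_chat_template([("user", "Implement quicksort using C++")])
print(prompt)
# GPT4 User: Implement quicksort using C++<|end_of_turn|>GPT4 Assistant:
```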
## Limitations