Upload README.md with huggingface_hub
README.md CHANGED
@@ -26,6 +26,9 @@ This repo contains the model checkpoints for:
- optimized with the loss <b>SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.

+ To prompt archangel models, ensure that the format is consistent with that of TuluV2, i.e. `"<s>\n<|user|>\n" + <prompt> + "\n<|assistant|>\n</s>"`.
+ Note that the BOS / EOS tokens should be excluded if automatically added by your tokenizer during batch collation.
+
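As an illustrative aid (not part of the original README), below is a minimal sketch of this prompt format using the `transformers` library; the checkpoint name, prompt, and generation settings are assumptions for demonstration only.

```python
# Minimal usage sketch, assuming a Hugging Face-style archangel checkpoint.
# The model name below is a placeholder; substitute the checkpoint from this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ContextualAI/archangel_slic_llama7b"  # hypothetical example name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What is the capital of France?"
# TuluV2-style template from the README, with BOS/EOS written out explicitly.
formatted = "<s>\n<|user|>\n" + prompt + "\n<|assistant|>\n</s>"

# The special tokens are already in the string, so disable the tokenizer's
# automatic BOS/EOS to avoid adding them twice (per the note above).
inputs = tokenizer(formatted, return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```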
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contains instructions for training your own HALOs and links to our model cards.

If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):