Commit 71e3645
Parent(s): 434f879
Update README.md
README.md CHANGED

@@ -23,7 +23,16 @@ Its training dataset consists of purely user-generated content [retry_and_contin
 
 ## Uses and limitations
 ### Intended use
+This reward model was developed primarily for commercial purposes. It learns an inner representation of response quality as rated by humans, which can be used to conduct best-of-N sampling and Reinforcement Learning with the PPO framework.
+
+In addition to scientific uses, you may also further fine-tune and adapt this reward model for deployment, as long as your use is in accordance with the cc-by-nc-4.0 license, i.e. non-commercial use. This model works with the Transformers library. If you decide to use this pre-trained reward model as a basis for your fine-tuned model, please note that you need to conduct your own risk and bias assessment.
+
 ### Out-of-scope use
+
+This reward model is **not** intended for deployment as-is. It is not a product and cannot be used for human-facing interactions without supervision.
+
+This model **has not** been optimised for common reward-model objectives such as harmlessness, truthfulness and helpfulness; it is trained only on user actions recorded on the Chai mobile app platform. Therefore, this model will **not** rank responses appropriately when evaluated on common open-source datasets. All base-model responses within the training data were generated using an in-house variant of [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B), so model performance may degrade when the input is generated by other language models.
+
 ### How to use
 
 This reward model can be loaded using the `AutoModelForSequenceClassification` functionality, with a GPT2 tokenizer whose `pad_token_id` is set to the EOS token id; the padding side needs to be set according to the configuration used during model training.
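
For reference, here is a minimal loading sketch consistent with the "How to use" paragraph above. The repository id is a placeholder, and the padding side shown is an assumption, since the training configuration is not stated in this diff:

```python
import torch
from transformers import AutoModelForSequenceClassification, GPT2Tokenizer

MODEL_ID = "your-org/your-reward-model"  # placeholder: substitute the actual repository id

# Load the reward model with its sequence-classification head
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# GPT2 has no pad token by default, so reuse the EOS token for padding
tokenizer = GPT2Tokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # assumption: set this to match the training configuration
model.config.pad_token_id = tokenizer.eos_token_id

# Score a single conversation snippet
inputs = tokenizer("User: Hi there!\nBot: Hello! How can I help you today?", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits
print(reward)
```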
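Since "Intended use" mentions best-of-N sampling, here is a hedged sketch of that pattern, reusing the model and tokenizer loaded above. The `best_of_n` helper and the candidate responses are illustrative, and the sketch assumes the classification head emits a single scalar reward per sequence:

```python
def best_of_n(context: str, candidates: list[str], tokenizer, model) -> str:
    """Score each candidate completion with the reward model; return the highest-scoring one."""
    texts = [context + candidate for candidate in candidates]
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # assumption: a single-logit reward head (num_labels == 1);
    # select the appropriate column here if the head has several labels
    scores = logits.squeeze(-1)
    return candidates[int(torch.argmax(scores))]

best = best_of_n(
    "User: Hi there!\nBot: ",
    ["Hello! How can I help you today?", "I do not want to talk."],
    tokenizer,
    model,
)
```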