Update README.md
README.md CHANGED

@@ -68,6 +68,18 @@ or
 
 ```
 
+This model seems to have issues with device="auto" in the model arguments (and it requires trust_remote_code=True), so you should maybe load it like I do here:
+```
+self.tokenizer = AutoTokenizer.from_pretrained("./Replit-CodeInstruct/", trust_remote_code=True)
+self.model = AutoModelForCausalLM.from_pretrained(
+    "./Replit-CodeInstruct",
+    torch_dtype=torch.bfloat16,
+    trust_remote_code=True
+)
+self.model.to('cuda')
+```
+
+
 This model for me produced coherent outputs with the following sampler settings, but feel free to experiment:
 ```
 max_new_tokens=128, do_sample=True, use_cache=True, temperature=0.2, top_p=0.9, eos_token_id=self.tokenizer.eos_token_id
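
For reference, here is the same load path as a standalone script rather than class attributes — a minimal sketch that adds the imports the snippet assumes, and that presumes the model files were downloaded to ./Replit-CodeInstruct and a CUDA GPU is available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is needed on both calls because the repo ships
# custom model/tokenizer code that transformers has to execute.
tokenizer = AutoTokenizer.from_pretrained("./Replit-CodeInstruct/", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "./Replit-CodeInstruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to('cuda')  # move the weights to the GPU explicitly instead of device="auto"
```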
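And here is one way those sampler settings might plug into an actual generate() call, continuing from the loading sketch above — the prompt string is just a placeholder, so check the model card for the expected instruction format:

```python
# Continuing from the loading sketch above (model and tokenizer on CUDA).
# The prompt below is a placeholder, not the model's documented format.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to('cuda')

output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    use_cache=True,
    temperature=0.2,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```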