Update README.md
README.md
@@ -42,6 +42,8 @@ The **GPT2-Prompt-Upscaler-v1** is designed to extend and refine prompts, aligni
The model is finetuned on **GPT2-medium** with **10M prompts** extracted from a refined Pixiv dataset for 5 epochs, with about 2B tokens seen per epoch.

+Training is done on an 8xH100 node for about 30 hours.
+
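
Since the upscaler is a **GPT2-medium** causal LM, it can be loaded with Hugging Face `transformers`. The snippet below is only a minimal usage sketch: the repo ID, the example prompt, and the sampling settings are placeholders, and a real input would follow the training format described below.

```python
# Minimal usage sketch (placeholder repo ID and sampling settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/GPT2-Prompt-Upscaler-v1"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A real input would follow the training format described below;
# this short tag string is only an illustration.
prompt = "1girl, silver hair, night sky"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The sampling settings above are generic defaults and will likely need tuning for tag-style prompts.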
The training format looks something like this:
- **Rating:** [safe | nsfw]