Update README.md
A proof of concept for generating captions with Google Gemma 3 on the Google Colab Free Tier, producing prompts similar to the training data of FLUX Chroma: https://huggingface.co/lodestones/Chroma

Try the Chroma model at: https://tensor.art/models/891236315830428357

This dataset was built using 200 images from RedCaps: https://huggingface.co/datasets/lodestones/pixelprose

and 200 LLM-captioned e621 images: https://huggingface.co/datasets/lodestones/e621-captions/tree/main

Only 400 randomly selected images were used in total, so this LoRA adapter is very basic! You can likely train a better version yourself with the tools listed here on the Google Colab Free Tier T4.
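As a rough illustration, a random subset like that can be drawn with Python's standard library. This is a hypothetical sketch, not the script actually used to build this dataset:

```python
import random


def sample_training_subset(records, k=200, seed=42):
    """Randomly select k records (e.g. 200 from each source dataset).

    A fixed seed makes the selection reproducible.
    """
    rng = random.Random(seed)
    return rng.sample(records, k)
```

Running it once per source dataset (200 from PixelProse, 200 from the e621 captions) yields the 400-image training set described above.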

//----//

I made some .parquet files of the captions for easier browsing here: https://huggingface.co/datasets/codeShare/chroma_prompts

To use this Gemma LoRA adapter, open the Google Colab Jupyter notebook in this repo: https://huggingface.co/codeShare/flux_chroma_image_captioner/blob/main/gemma_image_captioner.ipynb
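If you prefer plain transformers + peft over the notebook, loading the adapter looks roughly like this. This is a sketch assuming a recent transformers release with `AutoModelForImageTextToText`; the imports are deferred into the function because the model itself needs a GPU runtime to be useful:

```python
def load_captioner(
    base: str = "unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
    adapter: str = "codeShare/flux_chroma_image_captioner",
):
    """Attach this repo's LoRA adapter to the 4-bit Gemma 3 base model.

    Requires a GPU runtime with transformers and peft installed; imports
    are kept inside the function so merely defining it needs nothing.
    """
    from peft import PeftModel
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(base)
    model = AutoModelForImageTextToText.from_pretrained(base, device_map="auto")
    model = PeftModel.from_pretrained(model, adapter)
    return model, processor
```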

To train your own LoRA adapter for Gemma on the Google Colab Free Tier T4, visit: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B)-Vision.ipynb
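The Unsloth setup in that notebook boils down to roughly the following. This is a sketch based on Unsloth's `FastVisionModel` API; the LoRA hyperparameters are illustrative, and the import is deferred because `unsloth` requires a GPU:

```python
def setup_lora_for_training(
    base: str = "unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
    r: int = 16,
):
    """Load the 4-bit Gemma 3 base with Unsloth and attach fresh LoRA weights.

    Requires a GPU runtime (e.g. Colab T4) with unsloth installed; the
    import is kept inside the function so merely defining it needs nothing.
    """
    from unsloth import FastVisionModel  # GPU-only dependency

    model, tokenizer = FastVisionModel.from_pretrained(base, load_in_4bit=True)
    model = FastVisionModel.get_peft_model(
        model,
        finetune_vision_layers=True,    # also adapt the vision tower
        finetune_language_layers=True,
        r=r,                            # LoRA rank (illustrative)
        lora_alpha=r,
    )
    return model, tokenizer
```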

---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
library_name: peft