playerzer0x committed on
Commit e3a0536 · verified · 1 Parent(s): f982fbd

Model card auto-generated by SimpleTuner

Files changed (1): README.md added (+146, -0)

---
license: other
base_model: "black-forest-labs/FLUX.1-Kontext-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- image-to-image
- diffusers
- simpletuner
- not-for-all-audiences
- lora
- template:sd-lora
- standard
pipeline_tag: text-to-image
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'turn this person into a labubu'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
- text: 'turn this person into a labubu'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_2_0.png
- text: 'turn this person into a labubu'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_3_0.png
---

# labubu_dataset

This is a PEFT LoRA derived from [black-forest-labs/FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev).

The main validation prompt used during training was:

```
a photo of a daisy
```

## Validation settings
- CFG: `2.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `69`
- Resolution: `1024x1024`
- Skip-layer guidance:

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
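
To reproduce the validation sampler explicitly, a minimal sketch (assuming the `pipeline` object loaded in the [Inference](#inference) section below) is:

```python
# Minimal sketch: pin the validation sampler on an already-loaded pipeline.
# Assumes `pipeline` is the DiffusionPipeline from the Inference section below.
from diffusers import FlowMatchEulerDiscreteScheduler

pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```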

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.

## Training settings

- Training epochs: 15
- Training steps: 125
- Learning rate: 1e-05
- Learning rate schedule: constant
- Warmup steps: 50
- Max grad value: 2.0
- Effective batch size: 2
- Micro-batch size: 1
- Gradient accumulation steps: 2
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching (extra parameters=['flow_schedule_auto_shift', 'shift=0.0', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flux_lora_target=fal'])
- Optimizer: optimi-lion
- Trainable parameter precision: Pure BF16
- Base model precision: `int8-quanto`
- Caption dropout probability: 0.05%
- LoRA Rank: 16
- LoRA Alpha: 16.0
- LoRA Dropout: 0.1
- LoRA initialisation style: default
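
For reference, the LoRA hyperparameters above map roughly onto a PEFT configuration like the sketch below. This is an assumption for illustration: `flux_lora_target=fal` is a SimpleTuner preset name, not a PEFT field, and the `target_modules` shown are placeholders; consult the adapter's bundled config for the exact module list.

```python
# Illustrative sketch only: approximate PEFT equivalent of the LoRA
# hyperparameters listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                    # LoRA Rank
    lora_alpha=16.0,         # LoRA Alpha
    lora_dropout=0.1,        # LoRA Dropout
    init_lora_weights=True,  # 'default' initialisation style
    # Assumed for illustration; SimpleTuner's 'fal' preset resolves to its
    # own list of FLUX attention/projection modules.
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```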

## Datasets

### my-edited-images
- Repeats: 0
- Total number of images: 16
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
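
These settings correspond to a SimpleTuner dataloader entry. The sketch below is an assumption about the shape of that entry (SimpleTuner reads a `multidatabackend.json`-style config); the path is a placeholder, and field names should be verified against the SimpleTuner documentation.

```python
# Illustrative sketch only: an approximate SimpleTuner dataloader entry
# matching the dataset settings above. Field names are assumed from
# SimpleTuner's multidatabackend.json convention; verify against the docs.
import json

dataset_entry = {
    "id": "my-edited-images",
    "type": "local",
    "instance_data_dir": "/path/to/my-edited-images",  # placeholder path
    "repeats": 0,
    "resolution": 1.048576,     # megapixels
    "resolution_type": "area",
    "crop": False,
}

with open("multidatabackend.json", "w") as f:
    json.dump([dataset_entry], f, indent=2)
```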

## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-Kontext-dev'
adapter_id = 'playerzer0x/labubu_dataset'

# Load the base model directly in bf16, then attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

prompt = "a photo of a daisy"

# Optional: quantise the model to save on VRAM.
# Note: the model was quantised during training, so it is recommended to do
# the same at inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

# The pipeline is already in its target precision level; just move it to the
# best available device.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

model_output = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(69),
    width=1024,
    height=1024,
    guidance_scale=2.5,
).images[0]

model_output.save("output.png", format="PNG")
```
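
Since FLUX.1 Kontext is an instruction-driven image editing model and the validation prompts above ("turn this person into a labubu") act on an input image, an editing call would also pass `image=`. A minimal sketch, reusing the `pipeline` loaded above (the input file name is a placeholder):

```python
# Hypothetical editing example: pass a source image for Kontext to edit.
from diffusers.utils import load_image

input_image = load_image("person.png")  # placeholder input file
edited = pipeline(
    image=input_image,
    prompt="turn this person into a labubu",
    num_inference_steps=20,
    guidance_scale=2.5,
).images[0]
edited.save("labubu_edit.png", format="PNG")
```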