Model card auto-generated by SimpleTuner
README.md CHANGED
@@ -71,14 +71,14 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
 - Training epochs: 62
-- Training steps:
+- Training steps: 250
 - Learning rate: 1e-05
 - Learning rate schedule: constant
-- Warmup steps:
+- Warmup steps: 100
 - Max grad value: 2.0
-- Effective batch size:
+- Effective batch size: 4
 - Micro-batch size: 1
-- Gradient accumulation steps:
+- Gradient accumulation steps: 4
 - Number of GPUs: 1
 - Gradient checkpointing: True
 - Prediction type: flow_matching (extra parameters=['flow_schedule_auto_shift', 'shift=0.0', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flux_lora_target=fal'])
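
The newly filled-in "Effective batch size: 4" is consistent with the other entries in the card: effective batch size is the micro-batch size times the gradient accumulation steps times the number of GPUs. A minimal sketch of that arithmetic, using only values taken from the settings above (variable names are illustrative, not SimpleTuner options):

```python
# Illustration only: derive the effective batch size from the card's values.
micro_batch_size = 1              # "Micro-batch size: 1"
gradient_accumulation_steps = 4   # "Gradient accumulation steps: 4"
num_gpus = 1                      # "Number of GPUs: 1"

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)       # 4, matching "Effective batch size: 4"
```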
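
Since the settings point to a flow-matching Flux LoRA and the surrounding card notes that the base model text encoder can be reused for inference, a hypothetical usage sketch is shown below. It assumes the base checkpoint is a FLUX model loaded through diffusers; the base model id, LoRA repo id, and prompt are placeholders, not values from this card.

```python
# Hypothetical inference sketch -- repo ids and prompt below are placeholders,
# not taken from this model card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",      # assumed FLUX base checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("your-username/your-flux-lora")  # placeholder LoRA repo id
pipe.to("cuda")

image = pipe(
    prompt="your validation prompt here",  # placeholder prompt
    guidance_scale=1.0,   # mirrors flux_guidance_value=1.0 from the training settings
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```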