---
license: cc-by-nc-4.0
---

# InstaFlow-0.9B fine-tuned from 2-Rectified Flow

InstaFlow-0.9B is a **one-step** text-to-image generative model fine-tuned from [2-Rectified Flow](https://huggingface.co/XCLiu/2_rectified_flow_from_sd_1_5).

It is trained with text-conditioned reflow and distillation as described in [our paper](https://arxiv.org/abs/2309.06380).

Rectified Flow has interesting theoretical properties. You may check [this ICLR paper](https://arxiv.org/abs/2209.03003) and [this arXiv paper](https://arxiv.org/abs/2209.14577).
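For context, a sketch of the core objective from those papers: given noise $X_0$ and data $X_1$, rectified flow trains a velocity field $v$ on straight-line interpolations,

$$
X_t = t X_1 + (1 - t) X_0, \qquad \min_v \; \mathbb{E}\big[\, \| (X_1 - X_0) - v(X_t, t) \|^2 \,\big],
$$

and samples by solving $\mathrm{d}Z_t = v(Z_t, t)\,\mathrm{d}t$ from $Z_0 \sim \mathcal{N}(0, I)$. Reflow retrains on the model's own (noise, output) pairs, which straightens the trajectories so that a single Euler step $Z_1 \approx Z_0 + v(Z_0, 0)$ is already accurate.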

## 512-Resolution Images Generated from InstaFlow-0.9B

![images](https://huggingface.co/XCLiu/instaflow_0_9B_from_sd_1_5/resolve/main/instaflow09b_grid.png)

# Usage

Please refer to the [official GitHub repo](https://github.com/gnobitab/InstaFlow). A minimal one-step sampling sketch is shown below.
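This is a sketch only, assuming the checkpoint loads as a standard Stable Diffusion 1.5-style `diffusers` pipeline whose UNet predicts the rectified-flow velocity; the model id and the fixed-t timestep convention below are placeholders, so treat the official repo as the source of truth:

```python
# Hypothetical sketch of one-step sampling; not the official pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "XCLiu/instaflow_0_9B_from_sd_1_5",  # placeholder: use this repository's id
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"
text_emb, _ = pipe.encode_prompt(prompt, "cuda", 1, False)

# Gaussian noise in the SD 1.5 latent space (4x64x64 latents for 512px images).
latents = torch.randn(1, 4, 64, 64, device="cuda", dtype=torch.float16)
with torch.no_grad():
    t = torch.zeros(1, device="cuda")  # assumption: time fixed at t=0, as in training
    velocity = pipe.unet(latents, t, encoder_hidden_states=text_emb).sample
    latents = latents + velocity  # one Euler step of dz/dt = v(z, t), step size 1
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
```

Converting `image` to a viewable PIL image follows standard Stable Diffusion post-processing, e.g. `pipe.image_processor.postprocess(image)`.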

## Training

Training pipeline:

1. Distillation (Stage 1): Starting from the [2-Rectified Flow](https://huggingface.co/XCLiu/2_rectified_flow_from_sd_1_5) checkpoint, we fix the time t=0 for the neural network and fine-tune it using the distillation objective (sketched below) with a batch size of 1024 for 21,500 iterations. The guidance scale of the teacher model, 2-Rectified Flow, is set to 1.5, and the similarity loss is the L2 loss. (54.4 A100 GPU days)
2. Distillation (Stage 2): We switch the similarity loss to the LPIPS loss and continue training the model with the distillation objective and a batch size of 1024 for another 18,000 iterations. (53.6 A100 GPU days)
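A minimal sketch of that objective, under stated assumptions: `teacher_sample` stands in for multi-step sampling with the frozen 2-Rectified Flow teacher at guidance scale 1.5, and `student_unet` is the one-step model; both names are hypothetical, and this is not the authors' training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_unet, teacher_sample, z0, text_emb, loss_fn=F.mse_loss):
    # Target: the latents the frozen multi-step teacher (2-Rectified Flow,
    # guidance scale 1.5) produces from the same noise z0.
    with torch.no_grad():
        target = teacher_sample(z0, text_emb)
    # Student prediction: one Euler step of dz/dt = v(z, t), with time fixed at t=0.
    t = torch.zeros(z0.shape[0], device=z0.device)
    velocity = student_unet(z0, t, encoder_hidden_states=text_emb).sample
    pred = z0 + velocity
    # Stage 1: loss_fn = F.mse_loss (L2). Stage 2: swap in an LPIPS loss.
    return loss_fn(pred, target)
```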

The final model is **InstaFlow-0.9B**.

**Total Training Cost:** It takes 199.2 A100 GPU days in total (data generation + reflow + distillation) to obtain InstaFlow-0.9B.

## Evaluation Results - Metrics

The following metrics of InstaFlow-0.9B are measured on MS COCO 2017 with 5,000 images and a 1-step Euler solver:

*FID-5k = 23.4, CLIP score = 0.304*
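As a rough illustration of the CLIP-score protocol (a sketch only: the CLIP variant shown is an assumption, and FID is computed separately with a standard FID implementation), the score is the mean cosine similarity between CLIP embeddings of each generated image and its prompt:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Assumed CLIP variant; the authors' exact choice may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def clip_score(images, prompts):
    """images: list of PIL images; prompts: matching list of MS COCO captions."""
    inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).mean().item()  # cosine similarity in [-1, 1]
```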

## Citation

```
@article{liu2023insta,
  title={InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation},
  author={Liu, Xingchao and Zhang, Xiwen and Ma, Jianzhu and Peng, Jian and Liu, Qiang},
  journal={arXiv preprint arXiv:2309.06380},
  year={2023}
}
```