---
license: apache-2.0
---

# CogView2

## Model description

**CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers**

- [Paper](https://arxiv.org/abs/2204.14217)
- [GitHub Repo](https://github.com/THUDM/CogView2)

### Abstract

The development of transformer-based text-to-image models is impeded by their slow generation and complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the cross-modal general language model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared to the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images.

## BibTeX entry and citation info

```bibtex
@article{ding2022cogview2,
  title={CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers},
  author={Ding, Ming and Zheng, Wendi and Hong, Wenyi and Tang, Jie},
  journal={arXiv preprint arXiv:2204.14217},
  year={2022}
}
```