---
license: apache-2.0
library_name: vincie
pipeline_tag: image-to-image
---

# VINCIE: Unlocking In-context Image Editing from Video
> [Leigang Qu](https://leigang-qu.github.io/), [Feng Cheng](https://klauscc.github.io/), [Ziyan Yang](https://ziyanyang.github.io/), [Qi Zhao](https://kevinz8866.github.io/), [Shanchuan Lin](https://scholar.google.com/citations?user=EDWUw7gAAAAJ&hl=en), [Yichun Shi](https://seasonsh.github.io/), [Yicong Li](https://yl3800.github.io/), [Wenjie Wang](https://wenjiewwj.github.io/), [Tat-Seng Chua](https://www.chuatatseng.com/), [Lu Jiang](http://www.lujiang.info/index.html)

<p align="center">
  <a href="https://vincie2025.github.io/">
    <img
      src="https://img.shields.io/badge/VINCIE-Website-0A66C2?logo=safari&logoColor=white"
      alt="VINCIE Website"
    />
  </a>
  <a href="https://arxiv.org/abs/2506.10941">
    <img
      src="https://img.shields.io/badge/VINCIE-Paper-red?logo=arxiv&logoColor=red"
      alt="VINCIE Paper on ArXiv"
    />
  </a>
  <a href="https://github.com/ByteDance-Seed/VINCIE">
    <img
      src="https://img.shields.io/badge/VINCIE-Codebase-536af5?color=536af5&logo=github"
      alt="VINCIE Codebase"
    />
  </a>
  <a href="https://huggingface.co/collections/ByteDance-Seed/vincie-6864cc2e3116d82e4a83a17c">
    <img
      src="https://img.shields.io/badge/VINCIE-Models-yellow?logo=huggingface&logoColor=yellow"
      alt="VINCIE Models"
    />
  </a>
  <a href="https://huggingface.co/spaces/ByteDance-Seed/VINCIE-3B">
    <img
      src="https://img.shields.io/badge/VINCIE-Space-orange?logo=huggingface&logoColor=yellow"
      alt="VINCIE Space"
    />
  </a>
</p>
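
The released checkpoints are published on Hugging Face (see the Models badge above). As a minimal sketch of how to fetch them with `huggingface_hub` — the exact repo id below is an assumption, so check the linked collection for the actual checkpoint repos:

```python
# Hypothetical download snippet. The repo id is an assumption; see the
# VINCIE models collection linked above for the released checkpoints.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ByteDance-Seed/VINCIE-3B")
print(f"Checkpoint files are in {local_dir}")
```

Inference scripts and configs live in the GitHub codebase linked above.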
47
+
48
+ >
49
+ > In-context image editing aims to modify images based on a contextual sequence comprising text and previously generated images. Existing methods typically depend on task-specific pipelines and expert models (e.g., segmentation and inpainting) to curate training data. In this work, we explore whether an in-context image editing model can be learned directly from videos. We introduce a scalable approach to annotate videos as interleaved multimodal sequences. To effectively learn from this data, we design a block-causal diffusion transformer trained on three proxy tasks: next-image prediction, current segmentation prediction, and next-segmentation prediction. Additionally, we propose a novel multi-turn image editing benchmark to advance research in this area. Extensive experiments demonstrate that our model exhibits strong in-context image editing capabilities and achieves state-of-the-art results on two multi-turn image editing benchmarks. Despite being trained exclusively on videos, our model also shows promising abilities in multi-concept composition, story generation, and chain-of-editing applications.
50
+
51
+
52
+
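
To make the block-causal design above concrete: tokens attend bidirectionally within their own block (e.g., a single image or text segment) and causally across blocks, so each image is generated with the full preceding multimodal context in view. The sketch below illustrates such a mask under a toy block layout; it is an assumption-laden illustration, not the authors' implementation.

```python
# Illustrative block-causal attention mask (a sketch, not the VINCIE code).
# Each token attends to its whole block plus every earlier block.
import torch

def block_causal_mask(block_sizes):
    """Return a boolean mask; True marks an allowed (query, key) pair."""
    total = sum(block_sizes)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for size in block_sizes:
        end = start + size
        mask[start:end, :end] = True  # own block (bidirectional) + all earlier blocks
        start = end
    return mask

# Toy interleaved sequence: 3 text tokens, 4 image tokens, 3 text tokens.
print(block_causal_mask([3, 4, 3]).int())
```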

## ✍️ Citation

```bibtex
@article{qu2025vincie,
  title={VINCIE: Unlocking In-context Image Editing from Video},
  author={Qu, Leigang and Cheng, Feng and Yang, Ziyan and Zhao, Qi and Lin, Shanchuan and Shi, Yichun and Li, Yicong and Wang, Wenjie and Chua, Tat-Seng and Jiang, Lu},
  journal={arXiv preprint arXiv:2506.10941},
  year={2025}
}
```

## 📜 License
VINCIE is licensed under the Apache License 2.0.