<div align="center">
<h1>✨ X-SAM</h1>
<h3>From Segment Anything to Any Segmentation</h3>

[Hao Wang](https://github.com/wanghao9610)<sup>1,2</sup>, [Limeng Qiao](https://scholar.google.com/citations?user=3PFZAg0AAAAJ&hl=en)<sup>3</sup>, [Zequn Jie](https://scholar.google.com/citations?user=4sKGNB0AAAAJ&hl)<sup>3</sup>, [Zhijian Huang](https://zhijian11.github.io/)<sup>1</sup>, [Chengjian Feng](https://fcjian.github.io/)<sup>3</sup>,

[Qingfang Zheng](https://openreview.net/profile?id=%7EZheng_Qingfang1)<sup>1</sup>, [Lin Ma](https://forestlinma.com/)<sup>3</sup>, [Xiangyuan Lan](https://scholar.google.com/citations?user=c3iwWRcAAAAJ&hl)<sup>2</sup><sup>:email:</sup>, [Xiaodan Liang](https://scholar.google.com/citations?user=voxznZAAAAAJ&hl)<sup>1,2</sup><sup>:email:</sup>

<sup>1</sup> Sun Yat-sen University, <sup>2</sup> Peng Cheng Laboratory, <sup>3</sup> Meituan Inc.

<sup>:email:</sup> Corresponding author.
</div>

<div align="center" style="display: flex; justify-content: center; align-items: center;">
<a href="" style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/arXiv-paper_id-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'>
</a>
<a href='' style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/Hugging_Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
</a>
<a href="https://github.com/wanghao9610/X-SAM" style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'>
</a>
<a href="http://47.115.200.157:7861" style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/Demo-Gradio-gold?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
</a>
<a href='https://wanghao9610.github.io/X-SAM/' style="margin: 0 2px;">
  <img src='https://img.shields.io/badge/Webpage-Project-silver?style=flat&logo=&logoColor=orange' alt='webpage'>
</a>
</div>

## :rocket: Introduction

* X-SAM introduces a unified multimodal large language model (MLLM) framework, extending the segmentation paradigm from *segment anything* to *any segmentation*, thereby enhancing pixel-level perceptual understanding.

* X-SAM proposes a novel Visual GrounDed (VGD) segmentation task, which segments all instance objects using interactive visual prompts, empowering the model with visually grounded, pixel-wise interpretative capabilities.

* X-SAM presents a unified training strategy that enables co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on various image segmentation benchmarks, highlighting its efficiency in multimodal, pixel-level visual understanding.
## :bookmark: Abstract

Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from *segment anything* to *any segmentation*. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects with interactive visual prompts and empowers MLLMs with visually grounded, pixel-wise interpretative capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its efficiency for multimodal, pixel-level visual understanding.
## :mag: Overview

<img src="docs/images/xsam_framework.png" width="800">

**More details can be found on the [project page](https://wanghao9610.github.io/X-SAM/).**
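The checkpoints badge above points to the released weights on the Hugging Face Hub. As a minimal sketch, they can be fetched locally with the `huggingface_hub` library; note that the repository id below is a placeholder assumption, so replace it with the actual model repo id once the checkpoints are published.

```python
# Minimal sketch: download the released X-SAM checkpoints from the Hugging Face Hub.
# Assumption: "wanghao9610/X-SAM" is a hypothetical placeholder repo id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="wanghao9610/X-SAM",      # hypothetical repo id, adjust to the released one
    local_dir="./checkpoints/x-sam",  # where to place the downloaded files
)
print(f"Checkpoints downloaded to: {local_dir}")
```

Inference and evaluation scripts are provided in the GitHub repository linked above.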
## :pushpin: Citation

If you find X-SAM helpful for your research or applications, please consider giving us a like 💖 and citing it with the following BibTeX entry.

```bibtex

```