---
license: mit
pipeline_tag: image-to-image
library_name: diffusers
base_model:
- stabilityai/stable-diffusion-2
---

# SDMatte - SafeTensors Models for Interactive Matting

This repository contains SafeTensors versions of the SDMatte models for interactive image matting, packaged for use with ComfyUI.
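
A quick way to sanity-check the converted weights is to open them with the `safetensors` library. This is a minimal sketch, not part of the official SDMatte tooling: it assumes `safetensors` and `torch` are installed and that `SDMatte.safetensors` sits in the working directory.

```python
# Minimal sketch: inspect a local SDMatte SafeTensors checkpoint.
# Assumes `pip install safetensors torch`; the file path is an assumption.
from safetensors.torch import load_file

state_dict = load_file("SDMatte.safetensors")  # {tensor name: torch.Tensor}

print(f"{len(state_dict)} tensors in checkpoint")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```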

## About SDMatte

**SDMatte: Grafting Diffusion Models for Interactive Matting**

SDMatte is a state-of-the-art diffusion-driven interactive matting model that leverages the powerful priors of diffusion models to achieve exceptional performance in extracting fine-grained details, especially in edge regions.

### Key Features

- **Diffusion-powered**: Utilizes diffusion model priors for superior detail extraction
- **Interactive matting**: Visual prompt-driven interaction for precise control
- **Fine-grained details**: Excels at capturing complex edge regions and texture details
- **Coordinate & opacity awareness**: Enhanced spatial and opacity information processing

## Available Models

- **SDMatte.safetensors** - Standard interactive matting model
- **SDMatte_plus.safetensors** - Enhanced version with improved performance
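
Either file can be fetched programmatically with `huggingface_hub`. The sketch below is illustrative only: `REPO_ID` is a placeholder for this repository's actual id, and the `ComfyUI/models/sdmatte` target folder is an assumption that depends on which matting custom node you use, not a ComfyUI convention.

```python
# Minimal sketch: download a model file and copy it into a ComfyUI folder.
# REPO_ID is hypothetical -- substitute this repository's real id.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

REPO_ID = "your-username/SDMatte-safetensors"  # placeholder, not the real id
cached = hf_hub_download(repo_id=REPO_ID, filename="SDMatte.safetensors")

# Assumed target; check your custom node's docs for the expected folder.
target = Path("ComfyUI/models/sdmatte")
target.mkdir(parents=True, exist_ok=True)
shutil.copy(cached, target / "SDMatte.safetensors")
```

`hf_hub_download` caches the file locally and returns its path, so repeated runs will not re-download unchanged weights.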

## Credits and Attribution

### Original Work

- **Authors**: vivoCameraResearch Team
- **Original Repository**: https://huggingface.co/LongfeiHuang/SDMatte
- **Official Code**: https://github.com/vivoCameraResearch/SDMatte
- **Paper**: SDMatte: Grafting Diffusion Models for Interactive Matting

### Abstract

*Recent interactive matting methods have shown satisfactory performance in capturing the primary regions of objects, but they fall short in extracting fine-grained details in edge regions. Diffusion models, trained on billions of image-text pairs, demonstrate exceptional capability in modeling highly complex data distributions and synthesizing realistic texture details, while exhibiting robust text-driven interaction capabilities, making them an attractive solution for interactive matting.*