LongfeiHuang committed on
Commit 84e85ec · verified · 1 Parent(s): 619095c

Update README.md

Files changed (1)
  1. README.md +1 -42
README.md CHANGED
@@ -17,45 +17,4 @@ Recent interactive matting methods have shown satisfactory performance in captur
  ## Code and Usage

  The official code and model are available at the following GitHub repository:
- [https://github.com/chen-sd/SDMatte](https://github.com/chen-sd/SDMatte)
-
- This model can be loaded using the 🤗 Diffusers library. Below is a conceptual example of how you might use `DiffusionPipeline`. Please note that interactive matting typically requires specific input formats for images and prompts (e.g., scribbles, masks, or points). Refer to the official GitHub repository for precise usage instructions, setup, and examples.
-
- ```python
- from diffusers import DiffusionPipeline
- import torch
- from PIL import Image
-
- # The model ID on the Hugging Face Hub (assuming it's named 'SDMatte/SDMatte')
- # Replace 'SDMatte/SDMatte' with the actual repository ID if different.
- model_id = "SDMatte/SDMatte"  # Placeholder, update with actual repo ID
-
- try:
-     # Load the SDMatte pipeline.
-     # The specific components might require custom pipeline logic,
-     # but this is a common starting point for diffusers models.
-     pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
-     pipe.to("cuda")
-
-     # Example: Load your input image and visual prompt (e.g., a scribble mask)
-     # These paths are illustrative; replace with your actual image and prompt data.
-     # input_image = Image.open("path/to/your_input_image.jpg").convert("RGB")
-     # visual_prompt = Image.open("path/to/your_visual_prompt.png").convert("L")  # For a mask or scribble
-
-     # In a real scenario, you would pass these inputs to the pipeline.
-     # The method signature (e.g., `pipe(image=..., prompt=...)`) may vary
-     # depending on the specific implementation in the official repository.
-     # Example placeholder for inference:
-     # matted_image = pipe(image=input_image, prompt=visual_prompt).images[0]
-     # matted_image.save("matted_output.png")
-
-     print(f"Model {model_id} loaded successfully. Please refer to the GitHub repository for detailed usage.")
-
- except Exception as e:
-     print(f"Error loading or initializing the model: {e}")
-     print("Please ensure the model ID is correct and refer to the official GitHub repository for detailed installation and usage instructions.")
-
- ```
-
- For more detailed usage, advanced features, and how to prepare your inputs (images and interactive prompts), please visit the official project repository:
- [https://github.com/chen-sd/SDMatte](https://github.com/chen-sd/SDMatte)
+ [https://github.com/vivoCameraResearch/SDMatte](https://github.com/vivoCameraResearch/SDMatte)