Improve model card: Add pipeline_tag and enhance sample usage
#3
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,13 +1,14 @@
 ---
-license: cc-by-nc-4.0
-language:
-- en
 base_model:
 - stabilityai/stable-diffusion-3-medium
+language:
+- en
 library_name: diffusers
-
-
+license: cc-by-nc-4.0
+pipeline_tag: text-to-image
+tags: []
 ---
+
 # TeEFusion: Blending Text Embeddings to Distill Classifier-Free Guidance (ICCV 2025)
 
 <p align="center">
@@ -27,14 +28,14 @@ TeEFusion is a simple yet powerful distillation method that fuses classifier-fre
 
 ## 🚀 Key Features
 
-*
-
+* **Embed-Level Guidance Fusion**.
+  Incorporates the guidance magnitude *w* by linearly combining conditional and null prompt embeddings, eliminating the need for two forward passes.
 
-*
-
+* **Test-Time Sampling Agnostic**.
+  Distills from complex teacher sampling strategies (Euler + CFG, Z-Sampling + CFG, W2SD + CFG) into a simple student that uses standard Euler sampling.
 
-*
-
+* **Parameter-Free**.
+  No extra network modules beyond the pretrained model’s encoder and decoder.
 
 ---
 
@@ -56,7 +57,8 @@ from pipelines.sd3_teefusion_pipeline import TeEFusionSD3Pipeline
 
 pipe = TeEFusionSD3Pipeline.from_pretrained(
     "AIDC-AI/TeEFusion",
-    torch_dtype=torch.bfloat16
+    torch_dtype=torch.bfloat16,
+    trust_remote_code=True
 )
 pipe.to("cuda")
 
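The embed-level guidance fusion described in the new "Key Features" text can be sketched in a few lines. This is a minimal illustration of the idea, not the repository's actual implementation; the function name `fuse_embeddings` and the tensor shapes are assumptions.

```python
import torch

def fuse_embeddings(cond_emb: torch.Tensor, null_emb: torch.Tensor, w: float) -> torch.Tensor:
    # Fold the guidance magnitude w into a single text embedding:
    # null + w * (cond - null). With w = 1 this recovers the conditional
    # embedding; larger w extrapolates past it, mimicking CFG at the
    # embedding level so the student needs only one forward pass per step.
    return null_emb + w * (cond_emb - null_emb)

# Toy shapes only, chosen for illustration (batch, seq_len, dim).
cond = torch.randn(1, 77, 4096)
null = torch.randn(1, 77, 4096)
fused = fuse_embeddings(cond, null, w=3.0)
print(fused.shape)  # torch.Size([1, 77, 4096])
```

Because the fusion is a plain linear interpolation/extrapolation, it adds no parameters, which is what the "Parameter-Free" bullet refers to.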