---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
      The video shows a [z00m_ca11] with four participants. In the top left box, a medieval knight in full armor adjusts his helmet. To his right, a pirate with a parrot on his shoulder drinks from a mug. In the bottom left, a scientist in a lab coat scribbles on a whiteboard. In the bottom right, an alien in a suit waves awkwardly.
  output:
    url: example_videos/zoom1.mp4
- text: >-
      The video shows a [z00m_ca11] with three participants. In the top left box, a centaur in business attire is seated at a large wooden desk. The top right box shows a wizard with a long beard reviewing spreadsheets. The bottom box shows a velociraptor wearing glasses, sipping coffee and nodding seriously.
  output:
    url: example_videos/zoom2.mp4
- text: >-
      The video shows a [z00m_ca11] with four participants. In the top left, a chef covered in flour frantically checks a recipe. To the right, a yoga instructor sits calmly with candles lit. The bottom left shows a DJ with headphones bobbing their head. The bottom right shows a firefighter in full gear, sipping coffee.
  output:
    url: example_videos/zoom3.mp4
- text: >-
      The video shows a [z00m_ca11] with three participants in a 3x3 grid formation. The first person in the top left is a cat wearing glasses, sitting in front of a computer. The second person has a hood and looks down. The third person is a dog wearing a tie, attentively watching the screen.
  output:
    url: example_videos/zoom4.mp4
---

<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
  <h1 style="color: #24292e; margin-top: 0;">Zoom Call Style LoRA for Wan2.1 14B T2V</h1>
  
  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Overview</h2>
    <p>This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate videos of Zoom calls featuring whatever character you want!</p>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Features</h2>
    <ul style="margin-bottom: 0;">
      <li>Trained on the Wan2.1 14B T2V base model</li>
      <li>Consistent results across different character types</li>
      <li>Simple prompt structure that's easy to adapt</li>
    </ul>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Community</h2>
    <ul style="margin-bottom: 0;">
      <li><b>Discord:</b> <a href="https://discord.com/invite/7tsKMCbNFC" style="color: #0366d6; text-decoration: none;">Join our community</a> to generate videos with this LoRA for free</li>
      <li><b>Request LoRAs:</b> We're training and open-sourcing Wan2.1 LoRAs for free - join our Discord to make requests!</li>
    </ul>
  </div>
</div>

<Gallery />

# Model File and Inference Workflow

## 📥 Download Links:

- [zoom_call_10_epochs.safetensors](./zoom_call_10_epochs.safetensors) - LoRA Model File
- [wan_txt2vid_lora_workflow.json](./workflow/wan_txt2vid_lora_workflow.json) - Wan T2V with LoRA Workflow for ComfyUI
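
This LoRA is intended for the ComfyUI workflow linked above, but since the card is also tagged `diffusers`, here is a minimal, untested sketch of loading the file with the diffusers `WanPipeline`. The diffusers-format base checkpoint id, the local LoRA path, and the mapping of the card's "Embedded Guidance Scale" onto `guidance_scale` are assumptions; adjust them to your setup. The recommended settings from the section below (LoRA strength 1.0, guidance 6.0, flow shift 5.0) are marked in comments.

```python
import torch
from diffusers import AutoencoderKLWan, UniPCMultistepScheduler, WanPipeline
from diffusers.utils import export_to_video

# Assumed diffusers-format base checkpoint; lora_dir should contain a local copy
# of zoom_call_10_epochs.safetensors downloaded from this repo.
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
lora_dir = "."

vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
# Flow shift 5.0, per the recommended settings below.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=5.0)
pipe.load_lora_weights(lora_dir, weight_name="zoom_call_10_epochs.safetensors", adapter_name="zoom_call")
pipe.set_adapters(["zoom_call"], adapter_weights=[1.0])  # LoRA strength 1.0
pipe.to("cuda")

prompt = (
    "The video shows a [z00m_ca11] with four participants. In the top left box, a medieval "
    "knight in full armor adjusts his helmet. To his right, a pirate with a parrot on his "
    "shoulder drinks from a mug. In the bottom left, a scientist in a lab coat scribbles on "
    "a whiteboard. In the bottom right, an alien in a suit waves awkwardly."
)

frames = pipe(
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=6.0,      # "Embedded Guidance Scale: 6.0" mapped to CFG here (assumption)
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "zoom_call.mp4", fps=16)
```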

---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2>
    <ul style="margin-bottom: 0;">
      <li><b>LoRA Strength:</b> 1.0</li>
      <li><b>Embedded Guidance Scale:</b> 6.0</li>
      <li><b>Flow Shift:</b> 5.0</li>
    </ul>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2>
    <p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">[z00m_ca11]</code></p>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2>
    <p>For prompting, follow the structure of the example prompts above: state that the video shows a <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">[z00m_ca11]</code> with a given number of participants, then describe each participant and their box position in turn. A plain-text skeleton of this structure is included after this section.</p>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
    <p>This LoRA works with a modified version of <a href="https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_T2V_example_02.json" style="color: #0366d6; text-decoration: none;">Kijai's Wan Video Wrapper workflow</a>. The main modification is adding a Wan LoRA node connected to the base model.</p>
    <img src="./workflow/workflow_screenshot.png" style="width: 100%; border-radius: 8px; margin: 15px 0; box-shadow: 0 4px 8px rgba(0,0,0,0.1);">
    <p>See the Downloads section above for the modified workflow.</p>
  </div>
</div>
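
For reference, the example prompts above (and in the widget at the top of this card) follow roughly this skeleton; the angle-bracket placeholders are illustrative, not required wording:

```text
The video shows a [z00m_ca11] with <N> participants. In the top left box, <participant 1>
<does something>. In the top right box, <participant 2> <does something>. In the bottom left
box, <participant 3> <does something>. In the bottom right box, <participant 4> <does something>.
```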

<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Model Information</h2>
    <p>The model weights are available in Safetensors format. See the Downloads section above.</p>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Training Details</h2>
    <ul style="margin-bottom: 0;">
      <li><b>Base Model:</b> Wan2.1 14B T2V</li>
      <li><b>Training Data:</b> 2 minutes of video comprising 28 short clips of various Zoom call recordings, each clip captioned separately.</li>
      <li><b>Epochs:</b> 10</li>
    </ul>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Additional Information</h2>
    <p>Training was done with <a href="https://github.com/tdrussell/diffusion-pipe" style="color: #0366d6; text-decoration: none;">diffusion-pipe</a>.</p>
  </div>

  <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
    <h2 style="color: #24292e; margin-top: 0;">Acknowledgments</h2>
    <p style="margin-bottom: 0;">Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!</p>
  </div>
</div>