Update README.md
README.md
@@ -42,7 +42,7 @@ This model is jointly finetuned with [DMD](https://arxiv.org/pdf/2405.14867) and
 Training was conducted on **8 nodes with 64 H200 GPUs** in total, using a `global batch size = 64`.
 We enable `gradient checkpointing`, set `HSDP_shard_dim = 8`, `sequence_parallel_size = 4`, and use `learning rate = 1e-5`.
 We set **VSA attention sparsity** to 0.9, and training runs for **3000 steps (~52 hours)**.
-The detailed training example script is available [here](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/distill/Wan-Syn-480P/distill_dmd_VSA_t2v_14B_480P.slurm).
+The detailed **training example script** is available [here](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/distill/Wan-Syn-480P/distill_dmd_VSA_t2v_14B_480P.slurm).
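
For reference, the hyperparameters quoted in the changed lines can be summarized as a small config sketch. The key names below are illustrative placeholders, not FastVideo's actual CLI or API arguments; the authoritative launch settings are in the linked `distill_dmd_VSA_t2v_14B_480P.slurm` example script.

```python
# Illustrative sketch of the distillation run described in the README hunk above.
# Key names are hypothetical; see the linked .slurm example for the real flags.
distill_run = {
    "num_nodes": 8,                 # 8 nodes x 8 H200 GPUs = 64 GPUs total
    "gpus_per_node": 8,
    "global_batch_size": 64,
    "gradient_checkpointing": True,
    "hsdp_shard_dim": 8,            # HSDP sharding dimension
    "sequence_parallel_size": 4,
    "learning_rate": 1e-5,
    "vsa_attention_sparsity": 0.9,  # VSA sparse-attention sparsity
    "train_steps": 3000,            # roughly 52 hours wall-clock at this scale
}
```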