lym00 committed
Commit 68d414b · verified · 1 Parent(s): f2a58b9

Update README.md
Files changed (1): README.md +12 -4
README.md CHANGED
@@ -15,6 +15,18 @@ https://learn.microsoft.com/en-us/windows/wsl/install
 
 https://www.anaconda.com/docs/getting-started/miniconda/install
 
+# Environment
+python 3.10
+cuda 12.8
+torch 2.7
+
+
+# Quantization
+
+https://github.com/nunchaku-tech/deepcompressor/blob/main/examples/diffusion/README.md
+
+(deepcompressor) python -m deepcompressor.app.diffusion.ptq examples/diffusion/configs/model/flux.1-kontex-dev.yaml examples/diffusion/configs/svdquant/nvfp4.yaml
+
 # Dependencies
 https://github.com/Dao-AILab/flash-attention
 
@@ -29,7 +41,3 @@ https://github.com/THUDM/ImageReward
 https://huggingface.co/datasets/siraxe/PrecompiledWheels_Torch-2.8-cu128-cp312
 
 https://huggingface.co/lldacing/flash-attention-windows-wheel
-
-# Quantization
-
-https://github.com/nunchaku-tech/deepcompressor/blob/main/examples/diffusion/README.md
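The pinned versions added under `# Environment` (python 3.10, cuda 12.8, torch 2.7) can be reproduced with conda from the Miniconda link above. A minimal sketch — the env name `deepcompressor` and the cu128 PyTorch wheel index are assumptions, not taken from this README:

```shell
# Sketch only: create an isolated env matching the pinned versions.
ENV_NAME=deepcompressor   # hypothetical env name
PYTHON_VERSION=3.10
TORCH_VERSION=2.7.0
if command -v conda >/dev/null 2>&1; then
  conda create -y -n "$ENV_NAME" "python=$PYTHON_VERSION"
  # `conda run` executes inside the env without needing `conda activate`
  # in a non-interactive script; cu128 index gives CUDA 12.8 builds.
  conda run -n "$ENV_NAME" pip install "torch==$TORCH_VERSION" \
    --index-url https://download.pytorch.org/whl/cu128
else
  echo "conda not found; install Miniconda first (see link above)"
fi
```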