## LoRA Qwen-Image Example
World's first LoRA for [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image).
Trigger word: **Valentin**
## 🧪 Usage
---
### 🔧 Initialization
```python
from diffusers import DiffusionPipeline
import torch
model_name = "Qwen/Qwen-Image"
# Pick the device and dtype: bfloat16 on GPU, float32 on CPU
if torch.cuda.is_available():
    torch_dtype = torch.bfloat16
    device = "cuda"
else:
    torch_dtype = torch.float32
    device = "cpu"

# Load the pipeline
pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch_dtype)
pipe = pipe.to(device)
```
### 🔌 Load LoRA Weights
```python
# Load LoRA weights
pipe.load_lora_weights('pytorch_lora_weights.safetensors', adapter_name="lora")
```
### 🎨 Generate an Image with the Person-Trained LoRA
```python
prompt = '''Valentin in a natural daylight selfie at a cafe entrance. He looks seriously into the camera, wearing a black coat or jacket and wireless earbud. Background includes wooden frames, warm pendant lights, and urban cafe details. With text "FLYMY AI"'''
negative_prompt = " "
# The pipeline returns an output object; take the first generated image
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    num_inference_steps=50,
    true_cfg_scale=5,
    generator=torch.Generator(device=device).manual_seed(346346),
).images[0]

# Display the image (in Jupyter) or save it to a file
image.show()
# or
image.save("output.png")
```
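The fixed `manual_seed(346346)` is what makes the sample reproducible: two generators seeded identically yield identical noise draws, and the initial latent noise determines the output for a given prompt. A minimal sketch of this seeding behavior using plain `torch` (no model download; the tensor draws stand in for the pipeline's latent noise):

```python
import torch

# Two generators with the same seed produce identical random tensors,
# which is why a fixed manual_seed makes pipeline outputs reproducible.
g1 = torch.Generator().manual_seed(346346)
g2 = torch.Generator().manual_seed(346346)

noise_a = torch.randn(4, 4, generator=g1)
noise_b = torch.randn(4, 4, generator=g2)
print(torch.equal(noise_a, noise_b))  # True: same seed, same initial latents
```

Change the seed (or omit the generator) to get a different image for the same prompt.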
### 🖼️ Sample Output

## 🤝 Support
If you have questions or suggestions, join our community:
- 🌐 [FlyMy.AI](https://flymy.ai)
- 💬 [Discord Community](https://discord.com/invite/t6hPBpSebw)
- 🐦 [Follow us on X](https://x.com/flymyai)
- 💼 [Connect on LinkedIn](https://linkedin.com/company/flymyai)
- 📧 [Support](mailto:[email protected])
**⭐ Don't forget to star the repository if you like it!**
---
license: apache-2.0
---