How to get started with wangfuyun/AnimateLCM using libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use wangfuyun/AnimateLCM with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "wangfuyun/AnimateLCM",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
Community discussions:
- Kiss (#21, opened over 1 year ago by hoyosdaniel0)
- Please help A1111 (#19, opened over 1 year ago by MakiGirl2022)
- Training on another model architecture (#16, opened about 2 years ago by Shaleen123)
- Doesn't work with Controlnet 1.1? (3 replies) (#15, opened about 2 years ago by Starzilla)
- Upload WeChat_20240321184529.mp4 (#13, opened about 2 years ago by sunqixia)
- Controlling frame dimensions / ratio? (#12, opened about 2 years ago by ashokpoudel)
- Update README.md (#11, opened about 2 years ago by fidankuku)
- SDXL Version (2 replies) (#10, opened about 2 years ago by Pipp)
- Update README.md (1 reply) (#9, opened about 2 years ago by hemanthkumar23)
- Msg when running the code on model card (#8, opened about 2 years ago by tintwotin)
- For those that keep getting a PEFT error on Google Colab.. (👍 1, 1 reply) (#6, opened about 2 years ago by justinmac)
- License (👀👍 4) (#3, opened over 2 years ago by mrfakename)
- how to use this locally /sdwebui or comfyui etc... (1 reply) (#2, opened over 2 years ago by patientxtr)
- Thank you very much for this perfect model. Could you convert it to ONNX, please? (1 reply) (#1, opened over 2 years ago by NikolayKozloff)