## Linux
Please follow the primary README.md of this repo.
## Windows
Windows users may stumble when installing the package `triton`.
You can choose to run on CPU without `xformers` and `triton` installed.
To use CUDA, please refer to issue #24 to try to solve the problem of `triton` installation.
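The CPU fallback mentioned above can be sketched as follows. This is an assumption about how you might wire it up yourself, not code from this repo: it simply selects `"cuda"` only when the CUDA-only packages are importable.

```python
# Sketch: pick a device based on whether the CUDA-only packages import.
# The fallback logic is an assumption, not the repo's actual code.
try:
    import triton    # noqa: F401  # CUDA-only
    import xformers  # noqa: F401  # CUDA-only
    device = "cuda"
except ImportError:
    device = "cpu"
print(f"running on {device}")
```

On a Windows machine without a working `triton` install, this prints `running on cpu` instead of crashing at import time.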
## MacOS
You can try to set up the environment according to the following steps to use the CPU or MPS device.
### Install torch

Install torch (Preview/Nightly version):

```shell
# MPS acceleration is available on MacOS 12.3+
pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
```

Check the official PyTorch documentation for more details.
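Since MPS acceleration requires macOS 12.3+, you can check your version before relying on the MPS device. The helper below is a sketch I'm adding for illustration, not part of the repo:

```python
import platform

def macos_supports_mps(version: str) -> bool:
    """Return True if the given macOS version string is at least 12.3,
    the minimum for MPS acceleration. Helper is illustrative only."""
    parts = [int(x) for x in version.split(".")[:2]] + [0, 0]
    return tuple(parts[:2]) >= (12, 3)

# On an actual Mac, pass your real version:
# macos_supports_mps(platform.mac_ver()[0])
```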
### Package

`triton` and `xformers` are not needed since they only work with CUDA. Remove the related packages. Your requirements.txt should look like:
```
# requirements.txt
pytorch_lightning==1.4.2
einops
open-clip-torch
omegaconf
torchmetrics==0.6.0
opencv-python-headless
scipy
matplotlib
lpips
gradio
chardet
transformers
facexlib
```

Then install the requirements:

```shell
pip install -r requirements.txt
```

Run the inference script and specify `--device cpu` or `--device mps`. Using MPS can accelerate your inference. You can specify `--tiled` and related arguments to avoid OOM.
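The `--device` and `--tiled` flags above could be parsed like this. This is a hypothetical sketch of the interface described in the text; the repo's actual inference script may name or structure its arguments differently:

```python
import argparse

# Hypothetical parser mirroring the flags described above (an assumption,
# not the repo's real argument parser).
parser = argparse.ArgumentParser()
parser.add_argument("--device", choices=["cpu", "mps", "cuda"], default="cpu")
parser.add_argument("--tiled", action="store_true",
                    help="process the image in tiles to avoid OOM")
args = parser.parse_args(["--device", "mps", "--tiled"])
print(args.device, args.tiled)
```

With `--tiled` set, a script would split large inputs into tiles and process them one at a time, trading speed for a lower peak memory footprint.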