Stable Diffusion WebUI AUTOMATIC1111: A Beginner’s Guide
Updated August 28, 2023
By Andrew

Stable Diffusion WebUI (AUTOMATIC1111 or A1111 for short) is the de facto GUI for advanced users. Thanks to the passionate community, most new features come to this free Stable Diffusion GUI first. But it is not the easiest software to use. Documentation is lacking. The extensive list of features it offers can be intimidating.

This guide will teach you how to use the AUTOMATIC1111 GUI. You can use it as a tutorial: there are plenty of examples you can follow step-by-step.

You can also use this guide as a reference manual. Skip through it and see what is there. Come back to it when you actually need to use a feature.

You will see many examples to demonstrate the effect of a setting because I believe this is the only way to make it clear.

Updates:

Aug 20, 2023: Add Canvas Zoom for Inpainting.
Contents

Download and install Stable Diffusion WebUI
Text-to-image tab
Basic usage
Image generation parameters
Seed
Extra seed options
Restore faces
Tiling
Hires. fix.
Buttons under the Generate button
Image file actions
Img2img tab
Image-to-image
Sketch
Inpainting
Zoom and pan in inpainting
Inpaint sketch
Inpaint upload
Batch
Get prompt from an image
Upscaling
Basic Usage
Upscalers
Face Restoration
PNG Info
Installing extensions
Applying Styles in Stable Diffusion WebUI
Prompts
Checkpoint Models
Lora, LyCORIS, embedding and hypernetwork
Checkpoint merger
Train
Settings
Face Restoration
Stable Diffusion
Quick Settings
Download and install Stable Diffusion WebUI
You can use Stable Diffusion WebUI on Windows, Mac, or Google Colab.

1-click Google Colab Notebook
Installation guide for Windows
Installation guide for Mac
Read the Quick Start Guide to decide which Stable Diffusion to use.

Check out some useful extensions for beginners.

Text-to-image tab
You will see the txt2img tab when you first start the GUI. This tab does the most basic function of Stable Diffusion: turning a text prompt into images.

txt2img tab of Stable Diffusion WebUI (AUTOMATIC1111)
Basic usage
These are the settings you may want to change if this is your first time using AUTOMATIC1111.


Stable Diffusion Checkpoint: Select the model you want to use. First-time users can use the v1.5 base model.

Prompt: Describe what you want to see in the images. Below is an example. See the complete guide for prompt building for a tutorial.

A surrealist painting of a cat by Salvador Dali


Width and height: The size of the output image. You should set at least one side to 512 pixels when using a v1 model. For example, set the width to 512 and the height to 768 for a portrait image with a 2:3 aspect ratio.

Batch size: Number of images to be generated each time. You want to generate at least a few when testing a prompt because each one will differ.

Finally, hit the Generate button. After a short wait, you will get your images!


By default, you will also get a composite image of thumbnails (the image grid).

You can save an image to your local storage. First, select the image using the thumbnails below the main image canvas. Right-click the image to bring up the context menu. You should have options to save the image or copy the image to the clipboard.

That’s all you need to know for the basics! The rest of this section explains each function in more detail.

Image generation parameters
txt2img tab in AUTOMATIC1111.
Stable Diffusion checkpoint is a dropdown menu for selecting models. You need to put model files in the folder stable-diffusion-webui > models > Stable-diffusion. See more about installing models.

The refresh button next to the dropdown menu is for refreshing the list of models. It is used when you have just put a new model in the model folder and wish to update the list.

Prompt text box: Put what you want to see in the images. Be detailed and specific. Use some tried-and-true keywords. You can find a short list here or a more extensive list in the prompt generator.

Negative Prompt text box: Put what you don’t want to see. You should use a negative prompt when using v2 models. You can use a universal negative prompt. See this article for details.

Sampling method: The algorithm for the denoising process. I use DPM++ 2M Karras because it balances speed and quality well. See this section for more details. You may want to avoid the ancestral samplers (the ones with an a) because their images are unstable even at large sampling steps, which makes tweaking the image difficult.

Sampling steps: Number of sampling steps for the denoising process. The more the better, but it also takes longer. 25 steps work for most cases.

Width and height: The size of the output image. You should set at least one side to 512 pixels for v1 models. For example, set the width to 512 and the height to 768 for a portrait image with a 2:3 aspect ratio. Set at least one side to 768 when using the v2-768px model.

Batch count: Number of times you run the image generation pipeline.

Batch size: Number of images to generate each time you run the pipeline.

The total number of images generated equals the batch count times the batch size. You would usually change the batch size because it is faster. You will only change the batch count if you run into memory issues.
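As a quick sanity check, the arithmetic above can be sketched in a few lines of Python (the helper name is mine, not part of A1111):

```python
def total_images(batch_count, batch_size):
    """Total images = batch count x batch size. A larger batch size
    generates images in parallel (faster, but uses more VRAM); a larger
    batch count runs the pipeline repeatedly (slower, but uses less VRAM)."""
    return batch_count * batch_size

print(total_images(2, 4))  # -> 8
```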

CFG scale: Classifier Free Guidance scale is a parameter to control how much the model should respect your prompt.

1 – Mostly ignore your prompt.
3 – Be more creative.
7 – A good balance between following the prompt and freedom.
15 – Adhere more to the prompt.
30 – Strictly follow the prompt.

The images below show the effect of changing CFG with fixed seed values. You don’t want to set CFG values too high or too low. Stable Diffusion will ignore your prompt if the CFG value is too low. The color of the images will be saturated when it is too high.
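Under the hood, classifier-free guidance combines the model's conditional and unconditional noise predictions with the standard formula. Here is a toy sketch with made-up numbers (the function name and inputs are mine, for illustration only):

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional output and toward the prompt-conditioned output."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

# Toy noise predictions for illustration
uncond = np.array([0.0, 0.0])
cond = np.array([1.0, 2.0])

print(cfg_combine(uncond, cond, 7.0))  # extrapolates well past the conditional prediction
```

At CFG 0 the prompt is ignored entirely; at 1 the output equals the conditional prediction; higher values amplify the difference, which is why very high CFG over-saturates images.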


Seed
Seed: The seed value used to generate the initial random tensor in the latent space. Practically, it controls the content of the image. Each image generated has its own seed value. AUTOMATIC1111 will use a random seed value if it is set to -1.
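The reproducibility the seed gives you can be sketched like this; the shapes and names are illustrative, not A1111's actual internals:

```python
import numpy as np

def initial_latent(seed, channels=4, height=64, width=64):
    """Same seed -> same starting noise -> (with all other settings
    fixed) the same image. Shapes are illustrative only."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((channels, height, width))

a = initial_latent(42)
b = initial_latent(42)  # identical to a
c = initial_latent(43)  # different noise, different image
```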

A common reason to fix the seed is to fix the content of an image and tweak the prompt. Let’s say I generated an image using the following prompt.

photo of woman, dress, city night background



I like this image and want to tweak the prompt to add bracelets to her wrists. You will set the seed to the value of this image. The seed value is in the log message below the image canvas.


An image’s seed value (highlighted) is in the log message.
Copy this value to the seed value input box. Or use the recycle button to copy the seed value.


Now add the term “bracelet” to the prompt

photo of woman, dress, city night background, bracelet


You get a similar picture with bracelets on her wrists.


The scene could completely change because some keywords are strong enough to alter the composition. You may experiment with swapping in a keyword at a later sampling step.

Use the dice icon to set the seed back to -1 (random).


Extra seed options
Checking the Extra option will reveal the Extra Seed menu.


Variation seed: An additional seed value you want to use.

Variation strength: Degree of interpolation between the seed and the variation seed. Setting it to 0 uses the seed value. Setting it to 1 uses the variation seed value.

Here’s an example. Let’s say you have generated 2 images from the same prompt and settings. They have their own seed values, 1 and 3.


First image: Seed value is 1.

Second image: Seed value is 3.
You want to generate a blend of these two images. You would set the seed to 1, the variation seed to 3, and adjust the variation strength between 0 and 1. In the experiment below, variation strength allows you to produce a transition of image content between the two seeds. The girl’s pose and background change gradually when the variation strength increases from 0 to 1.
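Conceptually, the variation strength interpolates between the noise generated from the two seeds. Below is a simple linear-interpolation sketch (the real implementation may interpolate differently, e.g. spherically; the helper is mine):

```python
import numpy as np

def blended_noise(seed, variation_seed, strength, shape=(4, 64, 64)):
    """Interpolate between the noise of two seeds. strength=0 -> seed
    only, strength=1 -> variation seed only. Plain lerp for illustration."""
    base = np.random.default_rng(seed).standard_normal(shape)
    var = np.random.default_rng(variation_seed).standard_normal(shape)
    return (1.0 - strength) * base + strength * var
```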


Resize seed from width/height: Images will change dramatically if you change the image size, even if you use the same seed. This setting tries to fix the content of the image when resizing the image. You will put the new size in width and height sliders and the width and height of the original image here. Put the original seed value in the seed input box. Set variation strength to 0 to ignore the variation seed.

Let’s say you like this image, which is 512×800 with a seed value of 3.


512×800
The composition will change drastically when you change the image size, even when keeping the same seed value.

512×600
512×744
Setting a different size changes the image dramatically.
You will get something much closer to the original one with the new size when you turn on the resize seed from height and width settings. They are not perfectly identical, but they are close.

512×600
512×744
Images are much closer to the original one with the resize seed option.
Restore faces
Restore faces applies an additional model trained for restoring defects on faces. Below are before and after examples.

Original
Face Restore
You must specify which face restoration model to use before using Restore Faces. First, visit the Settings tab. Navigate to the Face restoration section. Select a face restoration model. CodeFormer is a good choice. Set CodeFormer weight to 0 for maximal effect. Remember to click the Apply settings button to save the settings!


Go back to the txt2img tab. Check Restore Faces. The face restoration model will be applied to every image you generate.

You may want to turn off face restoration if you find that the application affects the style on the faces. Alternatively, you can increase the CodeFormer weight parameter to reduce the effect.

Tiling
You can use Stable Diffusion WebUI to create a repeating pattern like a wallpaper.

Use the Tiling option to produce a periodic image that can be tiled. Below is an example.

flowers pattern



This image can be tiled like wallpaper.


2×2 tiled.
The true treasure of using Stable Diffusion is that it lets you create tiles of any image, not just traditional patterns. All you need is to come up with a text prompt.


Hires. fix.
The high-resolution fix option applies an upscaler to scale up your image. You need this because the native resolution of Stable Diffusion is 512 pixels (or 768 pixels for certain v2 models). The image is too small for many uses.

Why can’t you just set the width and height to higher, like 1024 pixels? Deviating from the native resolution would affect compositions and create problems like generating images with two heads.

So you must first generate a small image of 512 pixels on either side. Then scale it up to a bigger one.


Check Hires. fix to enable high-resolution fix.

Upscaler: Choose an upscaler to use. See this article for a primer.

The various Latent upscaler options scale the image in the latent space. It is done after the sampling steps of the text-to-image generation. The process is similar to image-to-image.

Other options are a mix of traditional and AI upscalers. See the AI upscaler article for details.

Hires steps: Only applicable to latent upscalers. It is the number of sampling steps after upscaling the latent image.

Denoising strength: Only applicable to latent upscalers. This parameter has the same meaning as in image-to-image. It controls the noise added to the latent image before performing the Hires sampling steps.

Now let’s look at the effect of upscaling the image below to 2x, using latent as the upscaler.


Original image
0.4
0.65
0.9
The denoising strength of the latent upscaler must be higher than 0.5. Otherwise, you will get blurry images.
For some reason, it must be larger than 0.5 to get a sharp image. Setting it too high will change the image a lot.

The benefit of using a latent upscaler is the lack of upscaling artifacts other upscalers like ESRGAN may introduce. The decoder of Stable Diffusion produces the image, ensuring the style is consistent. The drawback is it would change the images to some extent, depending on the value of denoising strength.

The upscale factor controls how many times larger the image will be. For example, setting it to 2 scales a 512-by-768 pixel image to 1024-by-1536 pixels.

Alternatively, you can specify the values of “resize width to” and “resize height to” to set the new image size.
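The size arithmetic can be sketched as follows; the helper function is hypothetical, not part of A1111:

```python
def hires_size(width, height, scale=None, resize_w=0, resize_h=0):
    """Compute the upscaled size: either multiply both sides by the
    upscale factor, or use the explicit resize-to values."""
    if scale is not None:
        return int(width * scale), int(height * scale)
    return resize_w, resize_h

print(hires_size(512, 768, scale=2))  # -> (1024, 1536)
```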

You can avoid the trouble of setting the correct denoising strength by using an AI upscaler like ESRGAN. In general, separating txt2img and upscaling into two steps gives you more flexibility. I don't use the high-resolution fix option but use the Extras page to do upscaling instead.

Buttons under the Generate button

From left to right:

1. Read the last parameters: It will populate all fields so that you will generate the same images when pressing the Generate button. Note that the seed and the model override will be set. If this is not what you want, set the seed to -1 and remove the override.

Seed value and Model override are highlighted.
2. Trash icon: Delete the current prompt and the negative prompt.

3. Model icon: Show extra networks. This button is for inserting hypernetworks, embeddings, and LoRA phrases into the prompt.

You can use the following two buttons to load and save a prompt and a negative prompt. The set is called a style. It can be a short phrase like an artist’s name, or it can be a full prompt.

4. Load style: You can select multiple styles from the style dropdown menu below. Use this button to insert them into the prompt and the negative prompt.

5. Save style: Save the prompt and the negative prompt. You will need to name the style.

Image file actions

You will find a row of buttons for performing various functions on the images generated. From left to right…

Open folder: Open the image output folder. It may not work for all systems.

Save: Save an image. After clicking, it will show a download link below the buttons. It will save all images if you select the image grid.

Zip: Zip up the image(s) for download.

Send to img2img: Send the selected image to the img2img tab.

Send to inpainting: Send the selected image to the inpainting tab in the img2img tab.

Send to extras: Send the selected image to the Extras tab.

Img2img tab
The img2img tab is where you use the image-to-image functions. Most users would visit this tab for inpainting and turning an image into another.

Image-to-image
An everyday use case in the img2img tab is to do… image-to-image. You can create new images that follow the composition of the base image.

Step 1: Drag and drop the base image to the img2img tab on the img2img page.


Base Image.
Step 2: Adjust width or height, so the new image has the same aspect ratio. You should see a rectangular frame in the image canvas indicating the aspect ratio. In the above landscape image, I set the width to 760 while keeping the height at 512.

Step 3: Set the sampling method and sampling steps. I typically use DPM++ 2M Karras with 25 steps.

Step 4: Set batch size to 4.

Step 5: Write a prompt for the new image. I will use the following prompt.

A photorealistic illustration of a dragon


Step 6: Press the Generate button to generate images. Adjust denoising strength and repeat. Below are images with varying denoising strengths.

0.4
0.6
0.8
Images produced by img2img with various denoising strengths.
Many settings are shared with txt2img. I am only going to explain the new ones.

Resize mode: If the aspect ratio of the new image is not the same as that of the input image, there are a few ways to reconcile the difference.

“Just resize” scales the input image to fit the new image dimension. It will stretch or squeeze the image.
“Crop and resize” fits the new image canvas into the input image. The parts that don’t fit are removed. The aspect ratio of the original image will be preserved.
“Resize and fill” fits the input image into the new image canvas. The extra part is filled with the average color of the input image. The aspect ratio will be preserved.
“Just resize (latent upscale)” is similar to “Just resize”, but the scaling is done in the latent space. Use denoising strength larger than 0.5 to avoid blurry images.
Just resize
Crop and resize
Resize and fill
Just resize (latent upscale)
Resize mode
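"Crop and resize" amounts to taking a center crop of the input that matches the target aspect ratio. A sketch of that geometry (the helper name is mine):

```python
def crop_and_resize_box(src_w, src_h, dst_w, dst_h):
    """Center crop of the source matching the target aspect ratio,
    returned as (left, top, right, bottom). Illustration only."""
    src_ar = src_w / src_h
    dst_ar = dst_w / dst_h
    if src_ar > dst_ar:          # source too wide: trim the sides
        new_w = int(src_h * dst_ar)
        left = (src_w - new_w) // 2
        return left, 0, left + new_w, src_h
    else:                        # source too tall: trim top and bottom
        new_h = int(src_w / dst_ar)
        top = (src_h - new_h) // 2
        return 0, top, src_w, top + new_h

print(crop_and_resize_box(1024, 512, 512, 512))  # -> (256, 0, 768, 512)
```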
Denoising strength: Controls how much the image will change. Nothing changes if it is set to 0. New images don't follow the input image at all if it is set to 1. 0.75 is a good starting point that gives a good amount of change.
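One practical detail: by default, A1111 runs only a fraction of the sampling steps in img2img, roughly proportional to the denoising strength. A rough sketch of that behavior (not the exact internal code):

```python
def img2img_steps(sampling_steps, denoising_strength):
    """Approximate the number of steps actually run in img2img:
    a fraction of the requested steps, scaled by denoising strength."""
    return max(1, int(sampling_steps * denoising_strength))

print(img2img_steps(25, 0.75))  # -> 18
```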

You can use the built-in script Poor man's outpainting to extend an image. See the outpainting guide.

Sketch
Instead of uploading an image, you can sketch the initial picture. You should enable the color sketch tool using the following argument when starting the webui. (It is already enabled in the Google Colab notebook in the Quick Start Guide)

--gradio-img2img-tool color-sketch

Step 1: Navigate to sketch tab on the img2img page.

Step 2: Upload a background image to the canvas. You can use the black or white backgrounds below.

Black background
White background
Step 3: Sketch your creation. With color sketch tool enabled, you should be able to sketch in color.

Step 4: Write a prompt.

award winning house


Step 5: Press Generate.


Sketch your own picture for image-to-image.
You don’t have to draw something from scratch. You can use the sketch function to modify an image. Below is an example of removing the braids by painting them over and doing a round of image-to-image. Use the eye dropper tool to pick a color from the surrounding areas.


Inpainting
Perhaps the most used function in the img2img tab is inpainting. You generated an image you like in the txt2img tab. But there’s a minor defect, and you want to regenerate it.

Let’s say you have generated the following image in the txt2img tab. You want to regenerate the face because it is garbled. You can use the Send to inpaint button to send an image from the txt2img tab to the img2img tab.


You should see your image when switching to the Inpaint tab of the img2img page. Use the paintbrush tool to create a mask over the area to be regenerated.


Parameters like image sizes have been set correctly because you used the “Send to inpaint” function. You usually would adjust

denoising strength: Start at 0.75. Increase to change more. Decrease to change less.
Mask content: original
Mask Mode: Inpaint masked
Batch size: 4
Press the Generate button. Pick the one you like.


Zoom and pan in inpainting
Automatic1111 zoom and pan.
Do you have difficulty inpainting a small area? Hover over the information icon in the top left corner to see keyboard shortcuts for zoom and pan.

Alt + Wheel / Opt + Wheel: Zoom in and out.
Ctrl + Wheel: Adjust the brush size.
R: Reset zoom.
S: Enter/Exit full screen.
Hold F and move the cursor to pan.
These shortcuts also work in Sketch and Inpaint Sketch.

Inpaint sketch
Inpaint sketch combines inpainting and sketch. It lets you paint like in the sketch tab but only regenerates the painted area. The unpainted area is unchanged. Below is an example.


Inpaint sketch.



Results from inpaint sketch.
Inpaint upload
Inpaint upload lets you upload a separate mask file instead of drawing it.

Batch
Batch lets you inpaint or perform image-to-image for multiple images.

Get prompt from an image
AUTOMATIC1111’s Interrogate CLIP button takes the image you upload to the img2img tab and guesses the prompt. It is useful when you want to work on images whose prompts you don’t know. To get a guessed prompt from an image:

Step 1: Navigate to the img2img page.

Step 2: Upload an image to the img2img tab.

Step 3: Click the Interrogate CLIP button.


A prompt will show up in the prompt text box.

The Interrogate DeepBooru button offers a similar function, except it is designed for anime images.

Upscaling
You will go to the Extra page for scaling up an image. Why do you need AUTOMATIC1111 to enlarge an image? You can use an AI upscaler that is usually unavailable on your PC. Instead of paying for an AI upscaling service, you can do it for free here.

Basic Usage
Follow these steps to upscale an image.

Step 1: Navigate to the Extra page.

Step 2: Upload an image to the image canvas.

Step 3: Set the Scale by factor under the resize label. The new image will be this many times larger on each side. For example, a 200×400 image will become 800×1600 with a scale factor of 4.

Step 4: Select Upscaler 1. A popular general-purpose AI upscaler is R-ESRGAN 4x+.

Step 5: Press Generate. You should get a new image on the right.


Make sure to inspect the new image at full resolution. For example, you can open the new image in a new tab and disable auto-fit. Upscalers could produce artifacts that you might overlook if it is shrunk.

Even if you don’t need the image to be 4x larger, you can still enlarge it to 4x and downsize it later. This can help improve sharpness.

Scale to: Instead of setting a scale factor, you can specify the dimensions to resize in the “scale to” tab.

Upscalers
AUTOMATIC1111 offers a few upscalers by default.

Upscalers: The Upscaler dropdown menu lists several built-in options. You can also install your own. See the AI upscaler article for instructions.

Lanczos and Nearest are old-school upscalers. They are not as powerful but the behavior is predictable.

ESRGAN, R-ESRGAN, ScuNet, and SwinIR are AI upscalers. They can literally make up content to increase resolution. Some are trained for a particular style. The best way to find out if they work for your image is to test them. I may sound like a broken record now, but make sure to look at the image closely at full resolution.

Upscaler 2: Sometimes, you want to combine the effect of two upscalers. This option lets you combine the results of two upscalers. The amount of blending is controlled by the Upscaler 2 Visibility slider. A higher value shows upscaler 2 more.
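The blending behind the visibility slider can be sketched as a simple linear mix of the two upscaled images (the helper name is mine, not A1111's code):

```python
import numpy as np

def blend_upscalers(img1, img2, visibility):
    """Blend the outputs of Upscaler 1 and Upscaler 2; higher visibility
    shows more of upscaler 2's result."""
    return (1.0 - visibility) * img1 + visibility * img2

a = np.zeros((2, 2))  # stand-in for upscaler 1's output
b = np.ones((2, 2))   # stand-in for upscaler 2's output
print(blend_upscalers(a, b, 0.25))  # each pixel -> 0.25
```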

Can’t find the upscaler you like? You can install additional upscalers from the model library. See installation instructions.

Face Restoration
You can optionally restore faces in the upscaling process. Two options are available: (1) GFPGAN and (2) CodeFormer. Set the visibility of either one to apply the correction. As a rule of thumb, you should set the lowest value you can get away with so that the style of the image is not affected.


PNG Info

Many Stable Diffusion GUIs, including AUTOMATIC1111, write generation parameters to the image png file. This is a convenient function to get back the generation parameters quickly.

If AUTOMATIC1111 generates the image, you can use the Send to buttons to quickly copy the parameters to various pages.

It is useful when you find an image on the web and want to see if the prompt is left in the file.

This function can be helpful even for an image that was not generated by Stable Diffusion. You can quickly send the image and its dimensions to a page.

Installing extensions
Installing an extension in AUTOMATIC1111 Stable Diffusion WebUI
To install an extension in AUTOMATIC1111:

1. Start AUTOMATIC1111 Web-UI normally.

2. Navigate to the Extension Page.

3. Click the Install from URL tab.

4. Enter the extension’s URL in the URL for extension’s git repository field.

5. Wait for the confirmation message that the installation is complete.

6. Restart AUTOMATIC1111. (Tip: Don’t use the Apply and Restart button. It sometimes doesn’t work. Close and restart Stable Diffusion WebUI completely.)

Applying Styles in Stable Diffusion WebUI
A common question is applying a style to the AI-generated images in Stable Diffusion WebUI. There are a few ways.

Prompts
Using prompts alone can achieve amazing styles, even using a base model like Stable Diffusion v1.5 or SDXL. For example, see over a hundred styles achieved using prompts with the SDXL model.

If you prefer a more automated approach to applying styles with prompts, you can use the SDXL Style Selector extension to add style keywords to your prompt.

Checkpoint Models
Thousands of custom checkpoint models fine-tuned to generate various styles are freely available. Go find them on Civitai or Huggingface.

Lora, LyCORIS, embedding and hypernetwork
Lora, LyCORIS, embedding, and hypernetwork models are small files that modify a checkpoint model. They can be used to achieve different styles. Again, find them on Civitai or Huggingface.

Checkpoint merger
AUTOMATIC1111’s checkpoint merger is for combining two or more models. You can combine up to 3 models to create a new model. It is usually for mixing the styles of two or more models. However, the merge result is not guaranteed. It could sometimes produce undesirable artifacts.

Primary model (A, B, C): The input models. The merging will be done according to the formula displayed. The formula will change according to the interpolation method selected.

Interpolation methods:

No interpolation: Use model A only. This is for file conversion or replacing the VAE.
Weighted sum: Merge two models A and B, with multiplier weight M applying to B. The formula is A * (1 – M) + B * M.
Add difference: Merge three models using the formula A + (B – C) * M.
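The two merge formulas can be applied directly to each weight tensor of the models. A sketch with toy one-tensor "models" (for illustration only; real checkpoints hold thousands of tensors):

```python
import numpy as np

def weighted_sum(A, B, M):
    """Weighted sum merge: A * (1 - M) + B * M, per weight tensor."""
    return {k: A[k] * (1 - M) + B[k] * M for k in A}

def add_difference(A, B, C, M):
    """Add difference merge: A + (B - C) * M, per weight tensor."""
    return {k: A[k] + (B[k] - C[k]) * M for k in A}

# Toy "models" with a single weight tensor each
A = {"w": np.array([1.0])}
B = {"w": np.array([3.0])}
C = {"w": np.array([2.0])}
print(weighted_sum(A, B, 0.5)["w"])       # [2.]
print(add_difference(A, B, C, 0.5)["w"])  # [1.5]
```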
Checkpoint format

ckpt: The original checkpoint model format.
safetensors: SafeTensors is a newer model format developed by Hugging Face. It is safe because, unlike ckpt models, loading a safetensors model won’t execute malicious code even if it is embedded in the model file.
Bake in VAE: Replace the VAE decoder with the one selected. It is for replacing the original one with a better one released by Stability.

Train
The Train page is for training models. It currently supports textual inversion (embedding) and hypernetworks. I haven’t had good luck using AUTOMATIC1111 for training, so I will not cover this section.

Settings
There is an extensive list of settings on AUTOMATIC1111’s setting page. I won’t be able to go through them individually in this article. Here are some you want to check.

Make sure to click Apply settings after changing any settings.

Face Restoration
Make sure to select the default face restoration method. CodeFormer is a good one.

Stable Diffusion
Download and select a VAE released by Stability to improve eyes and faces in v1 models.

Quick Settings
Quick Settings

You can enable custom shortcuts on the top.

On the Settings page, click Show All Pages on the left panel.

Searching for the word Quicksettings gets you to the Quick Settings field.

There are a lot of settings available for selection. For example, the following enables shortcuts for Clip Skip and custom image output directories.


After saving the settings and reloading the Web-UI, you will see the new shortcuts at the top of the page.


The custom output directories come in handy for organizing the images.

Here is a list of Quick Settings that are useful to enable:

CLIP_stop_at_last_layers
sd_vae
outdir_txt2img_samples
outdir_img2img_samples

Beginner’s Guide to ComfyUI
Published September 14, 2023
By Andrew
ComfyUI.
What you would look like after using ComfyUI for real.
ComfyUI is a node-based GUI for Stable Diffusion. This tutorial is for someone who hasn’t used ComfyUI before. It covers:

Text-to-image
Image-to-image
SDXL workflow
Inpainting
Using LoRAs
ComfyUI Manager – managing custom nodes in GUI.
Impact Pack – a collection of useful ComfyUI nodes.
See this post for a guide to installing ComfyUI.

Contents

What is ComfyUI?
ComfyUI vs AUTOMATIC1111
Where to start?
Basic controls
Text-to-image
Generating your first image on ComfyUI
1. Selecting a model
2. Enter a prompt and a negative prompt
3. Generate an image
What has just happened?
Load Checkpoint node
CLIP Text Encode
Empty latent image
KSampler
Image-to-image workflow
ComfyUI Manager
Installing ComfyUI Manager
Using ComfyUI Manager
Upscaling
AI upscale
Exercise: Recreate the AI upscaler workflow from text-to-image
Hi-res fix
SD Ultimate upscale – ComfyUI edition
Installing the SD Ultimate upscale node
Using SD Ultimate upscale
ComfyUI Inpainting
Step 1: Create an inpaint mask
Step 2: Open the inpainting workflow
Step 3: Upload the image
Step 4: Adjust parameters
Step 5: Generate inpainting
SDXL workflow
ComfyUI Impact Pack
Install
Regenerate faces
LoRA
Simple LoRA workflows
Multiple LoRAs
Exercise: Make a workflow to compare with and without LoRA
Sharing parameters between two nodes
Workflow to compare images with and without LoRA
Useful resources
What is ComfyUI?
ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together.

Some commonly used blocks are Loading a Checkpoint Model, entering a prompt, specifying a sampler, etc. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

ComfyUI vs AUTOMATIC1111
AUTOMATIC1111 is the de facto GUI for Stable Diffusion.

Should you use ComfyUI instead of AUTOMATIC1111? Here’s a comparison.

The benefits of using ComfyUI are:

Lightweight: it runs fast.
Flexible: very configurable.
Transparent: The data flow is in front of you.
Easy to share: Each file is a reproducible workflow.
Good for prototyping: Prototyping with a graphic interface instead of coding.
The drawbacks of using ComfyUI are:

Inconsistent interface: Each workflow may place the nodes differently. You need to figure out what to change.
Too much detail: Average users don’t need to know how things are wired under the hood. (Isn’t it the whole point of using a GUI?)
Lack of inpainting tool: Inpainting must be done with an external program.
Where to start?
The best way to learn ComfyUI is by going through examples. So, we will learn how to do things in ComfyUI in the simplest text-to-image workflow.

We will go through some basic workflow examples. After studying some essential ones, you will start to understand how to make your own.

At the end of this tutorial, you will have the opportunity to make a pretty involved one. The answer will be provided.

Basic controls
Use the mouse wheel or two-finger pinch to zoom in and out.

Drag and hold the dot of an input or output to form a connection. You can only connect an input and an output of the same type.

Hold and drag with the left click to move around the workspace.

Press Ctrl-0 (Windows) or Cmd-0 (Mac) to show the Queue panel.

Text-to-image
Let’s first go through the simplest case: generating an image from text.

Classical, right?

By going through this example, you will also learn the ideas behind ComfyUI (it’s very different from the AUTOMATIC1111 WebUI). As a bonus, you will know more about how Stable Diffusion works!

Generating your first image on ComfyUI
After starting ComfyUI for the very first time, you should see the default text-to-image workflow. It should look like this:


If this is not what you see, click Load Default on the right panel to return to the default text-to-image workflow.

If you don’t see the right panel, press Ctrl-0 (Windows) or Cmd-0 (Mac).

You will see the workflow is made with two basic building blocks: Nodes and edges.

Nodes are the rectangular blocks, e.g., Load Checkpoint, CLIP Text Encode, etc. Each node executes some code. If you have some programming experience, you can think of them as functions. Each node has three parts:

Inputs are the texts and dots on the left where the wires come in.
Outputs are the texts and dots on the right where the wires go out.
Parameters are the fields at the center of the block.
Edges are the wires connecting the outputs and the inputs between nodes.

That’s the whole idea! The rest are details.

Don’t worry if the jargon on the nodes looks daunting. We will walk through a simple example of using ComfyUI, introduce some concepts, and gradually move on to more complicated workflows.

Below is the simplest way you can use ComfyUI. You should be in the default workflow.

1. Selecting a model

First, select a Stable Diffusion Checkpoint model in the Load Checkpoint node. Click on the model name to show a list of available models.

If the node is too small, you can use the mouse wheel or pinch with two fingers on the touchpad to zoom in and out.

If clicking the model name does nothing, you may not have installed a model or configured it to use your existing models in A1111. Go back to the installation guide to fix it first.

2. Enter a prompt and a negative prompt

You should see two nodes labeled CLIP Text Encode (Prompt). Enter your prompt in the top one and your negative prompt in the bottom one.

The CLIP Text Encode node first converts the prompt into tokens and then encodes them into embeddings with the text encoder.

You can use the syntax (keyword:weight) to control the weight of a keyword, e.g., (keyword:1.2) to increase its effect and (keyword:0.8) to decrease it.

Why is the top one the prompt? Look at the CONDITIONING output. It is connected to the positive input of the KSampler node. The bottom one is connected to the negative, so it is for the negative prompt.
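As a side note, the (keyword:weight) syntax is simple enough to parse mechanically. Below is a minimal sketch of such a parser (an illustration only, not ComfyUI's actual implementation; it ignores nesting and escaped parentheses):

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) pairs.

    Segments written as (text:weight) get the given weight;
    everything else defaults to weight 1.0.
    """
    pattern = re.compile(r"\(([^():]+):([0-9.]+)\)")
    parts, last = [], 0
    for m in pattern.finditer(prompt):
        before = prompt[last:m.start()].strip()
        if before:
            parts.append((before, 1.0))
        parts.append((m.group(1).strip(), float(m.group(2))))
        last = m.end()
    tail = prompt[last:].strip()
    if tail:
        parts.append((tail, 1.0))
    return parts
```

For example, "a photo of (cat:1.2) in the rain" splits into three segments, with only "cat" carrying the boosted weight.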

3. Generate an image
Click Queue Prompt to run the workflow. After a short wait, you should see the first image generated.


What has just happened?
The advantage of using ComfyUI is that it is very configurable. It is worth learning what each node does so you can use them to suit your needs.

You can skip the rest of this section if you are not interested in the theory.

Load Checkpoint node

Use the Load Checkpoint node to select a model. A Stable Diffusion model has three main parts:

MODEL: The noise predictor model in the latent space.
CLIP: The language model preprocesses the positive and the negative prompts.
VAE: The Variational AutoEncoder converts the image between the pixel and the latent spaces.
The MODEL output connects to the sampler, where the reverse diffusion process is done.

The CLIP output connects to the prompts because the prompts need to be processed by the CLIP model before they are useful.

In text-to-image, VAE is only used in the last step: Converting the image from the latent to the pixel space. In other words, we are only using the decoder part of the autoencoder.

CLIP Text Encode

The CLIP text encode node gets the prompt and feeds it into the CLIP language model. CLIP is OpenAI’s language model, transforming each word in a prompt into embeddings.

Empty latent image

A text-to-image process starts with a random image in the latent space.

The size of the latent image is proportional to the actual image in the pixel space. So, if you want to change the size of the image, you change the size of the latent image.

You set the height and the width to change the image size in pixel space.

Here, you can also set the batch size, which is how many images you generate in each run.
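For Stable Diffusion models, the VAE downsamples each spatial dimension by a factor of 8, so the latent size follows directly from the pixel size. A minimal sketch (the function name is mine, not a ComfyUI API):

```python
def latent_size(width, height, scale_factor=8):
    """Pixel-space size -> latent-space size.

    Stable Diffusion's VAE downsamples each spatial dimension by 8,
    so a 512x512 image corresponds to a 64x64 latent.
    """
    assert width % scale_factor == 0 and height % scale_factor == 0
    return width // scale_factor, height // scale_factor
```

This is also why image dimensions should be multiples of 8.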

KSampler

KSampler is at the heart of image generation in Stable Diffusion. A sampler denoises a random image into one that matches your prompt.

KSampler refers to samplers implemented in this code repository.

Here are the parameters in the KSampler node.

Seed: The random seed value controls the initial noise of the latent image and, hence, the composition of the final image.
Control_after_generate: How the seed should change after each generation. It can get a random value (randomize), increase by 1 (increment), decrease by 1 (decrement), or stay unchanged (fixed).
Steps: Number of sampling steps. The higher, the fewer artifacts from the numerical process.
Sampler_name: Here, you can set the sampling algorithm. Read the sampler article for a primer.
Scheduler: Controls how the noise level should change in each step.
Denoise: How much of the initial noise should be erased by the denoising process. 1 means all.
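The seed-control behavior described above can be sketched in a few lines (an illustration of the logic, not ComfyUI's code):

```python
import random

def next_seed(seed, mode):
    """How the seed changes after each generation, per the four
    modes described above (sketch of control_after_generate)."""
    if mode == "fixed":
        return seed
    if mode == "increment":
        return seed + 1
    if mode == "decrement":
        return seed - 1
    if mode == "randomize":
        return random.randrange(2**32)
    raise ValueError(f"unknown mode: {mode}")
```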
Image-to-image workflow
The Img2img workflow is another staple workflow in Stable Diffusion. It generates an image based on the prompt AND an input image.

You can adjust the denoising strength to control how much Stable Diffusion should follow the base image.

Download the image-to-image workflow

Drag and drop this workflow image to ComfyUI to load.

comfyUI img2img workflow.
To use this img2img workflow:

Select the checkpoint model.
Revise the positive and the negative prompts.
Optionally adjust the denoise (denoising strength) in the KSampler node.
Press Queue Prompt to start generation.
ComfyUI Manager
ComfyUI manager is a custom node that lets you install and update other custom nodes through the ComfyUI interface.

Installing ComfyUI Manager
To install this custom node, go to the custom nodes folder in the PowerShell (Windows) or Terminal (Mac) App:

cd ComfyUI/custom_nodes

And clone the node to your local storage.

git clone https://github.com/ltdrdata/ComfyUI-Manager

Restart ComfyUI completely.

Using ComfyUI Manager
After the installation, you should see an extra Manager button on the Queue Prompt menu. Clicking it shows a GUI that lets you

Install/uninstall custom nodes.
Install missing nodes in the current workflow.
Install Models such as checkpoint models, AI upscalers, VAEs, LoRA, ControlNet models, etc.
Update ComfyUI.
Read the community manual.

The Install Missing Nodes function is especially useful for finding the custom nodes required by the current workflow.

The Install Custom Nodes menu lets you manage custom nodes. You can uninstall or disable an installed node or install a new one.

ComfyUI manager.
Upscaling
There are several ways to upscale in Stable Diffusion. For teaching purposes, let’s go through upscaling with

an AI upscaler
Hi-res fix
Ultimate Upscale
AI upscale
An AI upscaler is an AI model for enlarging images while filling in details. They are not Stable Diffusion models but neural networks trained for enlarging images.

Load this upscaling workflow by first downloading the image on the page. Drag and drop the image to ComfyUI.

Tip: Dragging and dropping an image made with ComfyUI loads the workflow that produces it.

AI upscaler in the upscaling workflow in ComfyUI.
In this basic example, you see the only additions to text-to-image are

Load Upscale Model: This is for loading an AI upscaler model.
Upscale Image (using Model): This node sits between the VAE Decode and the Save Image nodes. It takes the image and the upscaler model and outputs an upscaled image.
To use this upscaler workflow, you must download an upscaler model from the Upscaler Wiki, and put it in the folder models > upscale_models.

Alternatively, set up ComfyUI to use AUTOMATIC1111’s model files.

Select an upscaler and click Queue Prompt to generate an upscaled image. The image should have been upscaled 4x by the AI upscaler.

Exercise: Recreate the AI upscaler workflow from text-to-image
It is a good exercise to make your first custom workflow by adding an upscaler to the default text-to-image workflow.

1. Get back to the basic text-to-image workflow by clicking Load Default.
2. Right-click an empty space near Save Image. Select Add Node > loaders > Load Upscale Model.


3. Click on the dot on the wire between VAE Decode and Save Image. Click Delete to delete the wire.


4. Right-click on an empty space and select Add Node > image > upscaling > Upscale Image (using Model) to add the new node.


5. Drag and hold the UPSCALE_MODEL output of Load Upscale Model. Drop it at upscale_model of the Upscale Image (using Model) node.

6. Drag and hold the IMAGE output of the VAE Decode. Drop it at the image input of the Upscale Image (using Model).


7. Drag and hold the IMAGE output of the Upscale Image (using Model) node. Drop it at the images input of the Save Image node.

8. Click Queue Prompt to test the workflow.

Now you know how to modify a workflow. This skill comes in handy when building your own workflows.

Hi-res fix
Download the first image on this page and drop it in ComfyUI to load the Hi-Res Fix workflow.

This is a more complex example but also shows you the power of ComfyUI. After studying the nodes and edges, you will know exactly what Hi-Res Fix is.

The first part is identical to text-to-image: You denoise a latent image using a sampler, conditioned with your positive and negative prompts.


The workflow then upscales the image in the latent space and performs a few additional sampling steps. It adds some initial noise to the image and denoises it with a certain denoising strength.


The VAE decoder then decodes the larger latent image to produce an upscaled image.

SD Ultimate upscale – ComfyUI edition
SD Ultimate upscale is a popular upscaling extension for AUTOMATIC1111 WebUI. You can use it on ComfyUI too!

Github Page of SD Ultimate upscale for ComfyUI

This is also a good exercise for installing a custom node.

Installing the SD Ultimate upscale node
To install this custom node, go to the custom nodes folder in the PowerShell (Windows) or Terminal (Mac) App:

cd ComfyUI/custom_nodes

And clone the node to your local storage.

git clone https://github.com/ssitu/ComfyUI_UltimateSDUpscale --recursive

Restart ComfyUI completely.

Using SD Ultimate upscale
A good exercise is to start with the AI upscaler workflow. Add SD Ultimate Upscale and compare the result.

Load the AI upscaler workflow by dragging and dropping the image to ComfyUI, or use the Load button.

Right-click on an empty space. Select Add Node > image > upscaling > Ultimate SD Upscale.


You should see the new node Ultimate SD Upscale. Wire up its input as follows.

image to VAE Decode’s IMAGE.
model to Load Checkpoint’s MODEL.
positive to CONDITIONING of the positive prompt box.
negative to CONDITIONING of the negative prompt box.
vae to Load Checkpoint’s VAE.
upscale_model to Load Upscale Model’s UPSCALE_MODEL.
For the output:

IMAGE to Save Image’s images.

If they are wired correctly, clicking Queue Prompt should show two large images, one with the AI upscaler and the other with Ultimate Upscale.

You can download this workflow example below. Drag and drop the image to ComfyUI to load.

Ultimate Upscale workflow
ComfyUI Inpainting
You can use ComfyUI for inpainting. It is a basic technique to regenerate a part of the image.

I have to admit that inpainting is not the easiest thing to do with ComfyUI. But here you go…

Step 1: Create an inpaint mask
First, pick an image that you want to inpaint.


Andy Lau is ready for inpainting.
You can download the image in PNG format here.

We will use Photopea, a free online Photoshop clone, to create the inpaint mask. The mask needs to be painted in the Alpha channel of a PNG file.

Drag and drop the PNG image to Photopea.

Select the Eraser Tool (Press E).

Draw the mask by erasing part of the image.


Save it as a PNG file. Click File > Export > PNG.

Step 2: Open the inpainting workflow
To use inpainting, first download the inpainting workflow.

Load the inpainting workflow by dropping the file onto the ComfyUI window.

Step 3: Upload the image
Upload the image with the mask to the Load Image node.


Step 4: Adjust parameters
Change the prompt:

x men Cyclops sun glasses, epic style, super hero


The original denoising strength (denoise) is too high. Set it to 0.8.

Step 5: Generate inpainting
Finally, press the Queue Prompt to perform inpainting.


This is quite an ordeal for a small task… So, I will stick with AUTOMATIC1111 for inpainting.

SDXL workflow
ComfyUI Stable Diffusion XL workflow.
Simple SDXL workflow.
Because of its extreme configurability, ComfyUI was one of the first GUIs to make the Stable Diffusion XL model work.

Download the Simple SDXL workflow for ComfyUI. Drag and drop the image to ComfyUI to load.

You will need to change

Positive Prompt
Negative Prompt
That’s it!

There are a few more complex SDXL workflows on this page.

ComfyUI Impact Pack
ComfyUI Impact pack is a pack of free custom nodes that greatly enhance what ComfyUI can do.

There are more custom nodes in the Impact Pack than I can cover in this article. See the official tutorials to learn them one by one. Read through the beginner tutorials if you want to use this set of nodes effectively.

Install
To install the ComfyUI Impact Pack, first open the PowerShell App (Windows) or the Terminal App (Mac or Linux).

cd custom_nodes

Clone the Impact Pack to your local storage.

git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack.git

Clone Workflow Component that is needed for Impact Pack.

git clone https://github.com/ltdrdata/ComfyUI-Workflow-Component

Restart ComfyUI completely.

Regenerate faces
You can use this workflow in the Impact Pack to regenerate faces with the Face Detailer custom node and SDXL base and refiner models. Download and drop the JSON file into ComfyUI.

To use this workflow, you will need to set

The initial image in the Load Image node.
An SDXL base model in the upper Load Checkpoint node.
An SDXL refiner model in the lower Load Checkpoint node.
The prompt and negative prompt for the new images.
Click Queue Prompt to start the workflow.

Andy Lau’s face doesn’t need any fix (Did he??). So I used a prompt to turn him into a K-pop star.

a closeup photograph of a korean k-pop star man



Only the face changes, while the background and everything else stays the same.

LoRA
LoRA is a small model file that modifies a checkpoint model. It is frequently used to modify styles or inject a person into the model.

In fact, how a LoRA modifies the checkpoint model is clearly visible in ComfyUI:


The LoRA model changes the MODEL and CLIP of the checkpoint model but leaves the VAE untouched.

Simple LoRA workflows
This is the simplest LoRA workflow possible: Text-to-image with a LoRA and a checkpoint model.

Download the simple LoRA workflow


To use the workflow:

Select a checkpoint model.
Select a LoRA.
Revise the prompt and the negative prompt.
Click Queue Prompt.
Multiple LoRAs
You can use two LoRAs in the same text-to-image workflow.

Download the two-LoRA workflow

The usage is similar to one LoRA, but now you must pick two.

The two LoRAs are applied one after the other.

Exercise: Make a workflow to compare with and without LoRA
To be good at ComfyUI, you really need to make your own workflows.

A good exercise is to create a workflow to compare text-to-image with and without a LoRA while keeping everything else the same.

To achieve this, you need to know how to share parameters between two nodes.

Sharing parameters between two nodes
Let’s use the same seed in two K-Samplers.


They each have their own seed value. To use the same seed in both, right-click on the node and select Convert seed to input.

You should get a new input node called seed.


Right-click on an empty space. Select Add node > utils > Primitive. Connect the primitive node to the two seed inputs.


Now, you have a single seed value shared between the two samplers.

Workflow to compare images with and without LoRA
Using this technique alone, you can modify the single LoRA example to make a workflow comparing the effect of LoRA while keeping everything else the same.


Comparing the effect of the Epsilon offset LoRA. Top: with LoRA. Bottom: without LoRA.
You can download the answer below.


Support for SD-XL was added in version 1.5.0, with additional memory optimizations and built-in sequenced refiner inference added in version 1.6.0.

Read here for a list of tips for optimizing inference: Optimum-SDXL-Usage

Downloads
Two models are available. The first is the primary model.

sd_xl_base_1.0_0.9vae

sd_xl_refiner_1.0_0.9vae

They have the VAE trained by madebyollin built in, which fixes NaN calculations when running in fp16. (Here is the most up-to-date VAE for reference.) Note that this is outdated advice: using this model will not fix fp16 issues for all models. You should merge this VAE with the models instead.

SD-XL BASE
This is a model designed for generating quality 1024×1024-sized images.

It is tested to produce the same (or very close) images as Stability-AI's repo (you need to set Random number generator source = CPU in Settings).


SD-XL REFINER
This secondary model is designed to process the 1024×1024 SD-XL image near completion, to further enhance and refine details in your final output picture. As of version 1.6.0, this is now implemented in the webui natively.

SD2 Variation Models
PR, (more info.)

Support for stable-diffusion-2-1-unclip checkpoints that are used for generating image variations.

It works in the same way as the current support for the SD2.0 depth model: you run it from the img2img tab, and it extracts information from the input image (in this case, CLIP or OpenCLIP embeddings) and feeds it into the model in addition to the text prompt. Normally you would do this with the denoising strength set to 1.0, since you don't actually want the normal img2img behavior to have any influence on the generated image.


InstructPix2Pix
Website. Checkpoint. The checkpoint is fully supported in the img2img tab. No additional actions are required. Previously, an extension by a contributor was required to generate pictures; it's no longer required, but should still work. Most of the img2img implementation is by the same person.

To reproduce results of the original repo, use denoising of 1.0, Euler a sampler, and edit the config in configs/instruct-pix2pix.yaml to say:

    use_ema: true
    load_ema: true
instead of:

    use_ema: false

Extra networks
A single button with a picture of a card on it. It unifies multiple extra ways to extend your generation into one UI.

Find it next to the big Generate button.

Extra networks provides a set of cards, each corresponding to a file containing a part of a model that you either train or obtain from somewhere. Clicking a card adds the model to the prompt, where it will affect generation.

Extra network	Directory	File types	How to use in prompt
Textual Inversion	embeddings	*.pt, images	embedding's filename
LoRA	models/Lora	*.pt, *.safetensors	<lora:filename:multiplier>
Hypernetworks	models/hypernetworks	*.pt, *.ckpt, *.safetensors	<hypernet:filename:multiplier>
Textual Inversion
A method to fine tune weights for a token in CLIP, the language model used by Stable Diffusion, from summer 2022. Author's site. Long explanation: Textual Inversion

LoRA
A method to fine tune weights for CLIP and Unet, the language model and the actual image de-noiser used by Stable Diffusion, published in 2021. Paper. A good way to train LoRA is to use kohya-ss.

Support for LoRA is built-in into the Web UI, but there is an extension with original implementation by kohya-ss.

Currently, LoRA networks for Stable Diffusion 2.0+ models are not supported by Web UI.

LoRA is added to the prompt by putting the following text into any location: <lora:filename:multiplier>, where filename is the name of file with LoRA on disk, excluding extension, and multiplier is a number, generally from 0 to 1, that lets you choose how strongly LoRA will affect the output. LoRA cannot be added to the negative prompt.

The text for adding LoRA to the prompt, <lora:filename:multiplier>, is only used to enable LoRA, and is erased from prompt afterwards, so you can't do tricks with prompt editing like [<lora:one:1.0>|<lora:two:1.0>]. A batch with multiple different prompts will only use the LoRA from the first prompt.
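The enable-then-erase behavior can be sketched as follows (a simplified illustration, not the webui's actual code):

```python
import re

LORA_RE = re.compile(r"<lora:([^:>]+)(?::([0-9.]+))?>")

def extract_loras(prompt):
    """Find <lora:filename:multiplier> tags, returning the cleaned
    prompt plus the LoRAs to enable. The multiplier defaults to 1.0
    when omitted."""
    loras = [(m.group(1), float(m.group(2) or 1.0))
             for m in LORA_RE.finditer(prompt)]
    cleaned = LORA_RE.sub("", prompt).strip()
    return cleaned, loras
```

The returned cleaned prompt is what the text encoder actually sees; the tag itself never reaches it.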

More LoRA types
Since version 1.5.0, webui supports other network types through the built-in extension.

See the details in the PR.

Hypernetworks
A method to fine tune weights for CLIP and Unet, the language model and the actual image de-noiser used by Stable Diffusion, generously donated to the world by our friends at Novel AI in autumn 2022. Works in the same way as LoRA except for sharing weights for some layers. Multiplier can be used to choose how strongly the hypernetwork will affect the output.

Same rules for adding hypernetworks to the prompt apply as for LoRA: <hypernet:filename:multiplier>.

Alt-Diffusion
A model trained to accept inputs in different languages. More info. PR.

Download the checkpoint from huggingface. Click the down arrow to download.
Put the file into models/Stable-Diffusion
Stable Diffusion 2.0
Download your checkpoint file from huggingface. Click the down arrow to download.
Put the file into models/Stable-Diffusion
768 (2.0) - (model)
768 (2.1) - (model)
512 (2.0) - (model)
Depth Guided Model
The depth-guided model will only work in img2img tab. More info. PR.

512 depth (2.0) - (model+yaml) - .safetensors
512 depth (2.0) - (model, yaml)
Inpainting Model SD2
Model specifically designed for inpainting trained on SD 2.0 512 base.

512 inpainting (2.0) - (model+yaml) - .safetensors
inpainting_mask_weight or inpainting conditioning mask strength works on this too.

Outpainting
Outpainting extends the original image and inpaints the created empty space.

Example:

Original	Outpainting	Outpainting again
Original image by Anonymous user from 4chan. Thank you, Anonymous user.

You can find the feature in the img2img tab at the bottom, under Script -> Poor man's outpainting.

Outpainting, unlike normal image generation, seems to profit very much from large step count. A recipe for a good outpainting is a good prompt that matches the picture, sliders for denoising and CFG scale set to max, and step count of 50 to 100 with Euler ancestral or DPM2 ancestral samplers.

81 steps, Euler A	30 steps, Euler A	10 steps, Euler A	80 steps, Euler A
Inpainting
In img2img tab, draw a mask over a part of the image, and that part will be in-painted.



Options for inpainting:

draw a mask yourself in the web editor
erase a part of the picture in an external editor and upload a transparent picture. Any even slightly transparent areas will become part of the mask. Be aware that some editors save completely transparent areas as black by default.
change mode (to the bottom right of the picture) to "Upload mask" and choose a separate black and white image for the mask (white=inpaint).
Inpainting model
RunwayML has trained an additional model specifically designed for inpainting. This model accepts additional inputs - the initial image without noise plus the mask - and seems to be much better at the job.

Download and extra info for the model is here: https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion

To use the model, you must rename the checkpoint so that its filename ends in inpainting.ckpt, for example, 1.5-inpainting.ckpt.

After that just select the checkpoint as you'd usually select any checkpoint and you're good to go.

Masked content
The masked content field determines what is placed into the masked regions before they are inpainted. This does not represent the final output; it's only a look at what's going on mid-process.

mask	fill	original	latent noise	latent nothing
Inpaint area
Normally, inpainting resizes the image to the target resolution specified in the UI. With Inpaint area: Only masked enabled, only the masked region is resized, and after processing it is pasted back to the original picture. This allows you to work with large pictures and render the inpainted object at a much larger resolution.

Input	Inpaint area: Whole picture	Inpaint area: Only masked
Masking mode
There are two options for masking mode:

Inpaint masked - the region under the mask is inpainted
Inpaint not masked - under the mask is unchanged, everything else is inpainted
Alpha mask
Input	Output
Color Sketch
Basic coloring tool for the img2img tab. Chromium-based browsers support a dropper tool.

Prompt matrix
Separate multiple prompts using the | character, and the system will produce an image for every combination of them. For example, with the prompt a busy city street in a modern city|illustration|cinematic lighting, there are four possible combinations (the first part of the prompt is always kept):

a busy city street in a modern city
a busy city street in a modern city, illustration
a busy city street in a modern city, cinematic lighting
a busy city street in a modern city, illustration, cinematic lighting
Four images will be produced, in this order, all with the same seed and each with a corresponding prompt: 

Another example, this time with 5 prompts and 16 variations: 

You can find the feature at the bottom, under Script -> Prompt matrix.
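The combination logic is a powerset over the optional parts, with the first part always kept. A minimal sketch:

```python
from itertools import combinations

def prompt_matrix(prompt):
    """All combinations of the |-separated optional parts; the first
    part is always kept, matching the behavior described above."""
    parts = [p.strip() for p in prompt.split("|")]
    base, options = parts[0], parts[1:]
    results = []
    for r in range(len(options) + 1):
        for combo in combinations(options, r):
            results.append(", ".join([base, *combo]))
    return results
```

With two optional parts this yields the four prompts listed above; with n optional parts, it yields 2**n prompts.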

Stable Diffusion upscale
ℹ️ Note: This is not the preferred method of upscaling, as this causes SD to lose attention to the rest of the image due to tiling. It should only be used if VRAM bound, or in tandem with something like ControlNet + the tile model. For the preferred method, see Hires. fix.

Upscale image using RealESRGAN/ESRGAN and then go through tiles of the result, improving them with img2img. It also has an option to let you do the upscaling part yourself in an external program, and just go through tiles with img2img.

Original idea by: https://github.com/jquesnelle/txt2imghd. This is an independent implementation.

To use this feature, select SD upscale from the scripts dropdown selection (img2img tab).


The input image will be upscaled to twice the original width and height, and the UI's width and height sliders specify the size of individual tiles. Because of overlap, the tile size can be very important: a 512x512 image upscaled to 1024x1024 needs nine 512x512 tiles, but only four 640x640 tiles.
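The tile arithmetic can be checked with a short calculation, assuming the script's default tile overlap of 64 pixels (the overlap is configurable, so the counts change with it):

```python
import math

def tile_count(width, height, tile, overlap=64):
    """Number of img2img tiles needed to cover an upscaled image,
    given a tile size and an assumed 64-pixel overlap."""
    def per_dim(size):
        if size <= tile:
            return 1
        stride = tile - overlap  # each new tile advances by this much
        return math.ceil((size - tile) / stride) + 1
    return per_dim(width) * per_dim(height)
```

This reproduces the nine-versus-four tile counts mentioned above for a 1024x1024 result.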

Recommended parameters for upscaling:

Sampling method: Euler a
Denoising strength: 0.2, can go up to 0.4 if you feel adventurous
A larger denoising strength is problematic due to the fact SD upscale works in tiles, as the diffusion process is then unable to give attention to the image as a whole.
Original	RealESRGAN	Topaz Gigapixel	SD upscale
Infinite prompt length
Typing past the standard 75 tokens that Stable Diffusion usually accepts increases the prompt size limit from 75 to 150. Typing past that increases it further. This is done by breaking the prompt into chunks of 75 tokens, processing each independently using CLIP's Transformers neural network, and then concatenating the results before feeding them into the next component of Stable Diffusion, the Unet.

For example, a prompt with 120 tokens would be separated into two chunks: the first with 75 tokens, the second with 45. Both would be padded to 75 tokens and extended with start/end tokens to 77. After passing those two chunks through CLIP, we'll have two tensors with shape (1, 77, 768). Concatenating them results in a (1, 154, 768) tensor that is then passed to the Unet without issue.
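The chunking arithmetic generalizes to any prompt length. A minimal sketch:

```python
import math

def chunk_shapes(n_tokens, chunk=75):
    """Split a prompt of n_tokens into 75-token chunks; each chunk is
    padded to 75 and wrapped with start/end tokens to 77, so the
    concatenated conditioning has shape (1, 77 * n_chunks, 768)."""
    n_chunks = max(1, math.ceil(n_tokens / chunk))
    return n_chunks, (1, 77 * n_chunks, 768)
```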

BREAK keyword
Adding a BREAK keyword (must be uppercase) fills the current chunk with padding tokens. Adding more text after BREAK starts a new chunk.

Attention/emphasis
Using () in the prompt increases the model's attention to enclosed words, and [] decreases it. You can combine multiple modifiers:



Cheat sheet:

a (word) - increase attention to word by a factor of 1.1
a ((word)) - increase attention to word by a factor of 1.21 (= 1.1 * 1.1)
a [word] - decrease attention to word by a factor of 1.1
a (word:1.5) - increase attention to word by a factor of 1.5
a (word:0.25) - decrease attention to word by a factor of 4 (= 1 / 0.25)
a \(word\) - use literal () characters in prompt
With (), a weight can be specified like this: (text:1.4). If the weight is not specified, it is assumed to be 1.1. Specifying weight only works with () not with [].
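The multipliers in the cheat sheet can be computed directly (a sketch of the arithmetic only, not the webui's prompt parser):

```python
def attention_weight(depth_parens=0, depth_brackets=0, explicit=None):
    """Effective attention weight for a word, per the cheat sheet:
    each () multiplies by 1.1, each [] divides by 1.1, and an
    explicit (word:w) weight overrides the default."""
    if explicit is not None:
        return explicit
    return round(1.1 ** depth_parens / 1.1 ** depth_brackets, 4)
```

So ((word)) gives 1.21, and [word] gives roughly 0.9091.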

If you want to use any of the literal ()[] characters in the prompt, use the backslash to escape them: anime_\(character\).

On 2022-09-29, a new implementation was added that supports escape characters and numerical weights. A downside of the new implementation is that the old one was not perfect and sometimes ate characters: "a (((farm))), daytime", for example, would become "a farm daytime" without the comma. This behavior is not shared by the new implementation which preserves all text correctly, and this means that your saved seeds may produce different pictures. For now, there is an option in settings to use the old implementation.

NAI uses my implementation from before 2022-09-29, except they have 1.05 as the multiplier and use {} instead of (). So the conversion applies:

their {word} = our (word:1.05)
their {{word}} = our (word:1.1025)
their [word] = our (word:0.952) (0.952 = 1/1.05)
their [[word]] = our (word:0.907) (0.907 = 1/1.05/1.05)
Loopback
Selecting the loopback script in img2img allows you to automatically feed the output image as input for the next batch. This is equivalent to saving the output image and replacing the input image with it. The Batch count setting controls how many iterations you get.

Usually, when doing this, you would choose one of many images for the next iteration yourself, so the usefulness of this feature may be questionable, but I've managed to get some very nice outputs with it that I wasn't able to get otherwise.

Example: (cherrypicked result)



Original image by Anonymous user from 4chan. Thank you, Anonymous user.

X/Y/Z plot
Creates multiple grids of images with varying parameters. X and Y are used as the rows and columns, while the Z grid is used as a batch dimension.


Select which parameters should be varied across rows, columns, and batches using the X type, Y type, and Z type fields, and input those parameters, separated by commas, into the X/Y/Z values fields. Integers, floating point numbers, and ranges are supported. Examples:

Simple ranges:
1-5 = 1, 2, 3, 4, 5
Ranges with increment in bracket:
1-5 (+2) = 1, 3, 5
10-5 (-3) = 10, 7
1-3 (+0.5) = 1, 1.5, 2, 2.5, 3
Ranges with the count in square brackets:
1-10 [5] = 1, 3, 5, 7, 10
0.0-1.0 [6] = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
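A simplified sketch of how these range specs could be expanded (integers only for the start/end values; the actual script also handles floats and plain comma lists):

```python
import re

def expand_range(spec):
    """Expand one X/Y/Z range spec: 'a-b' steps by 1, 'a-b (+s)' or
    'a-b (-s)' steps by s, and 'a-b [n]' spreads n evenly spaced
    values (truncated to integers)."""
    m = re.fullmatch(r"(\d+)-(\d+)\s*\(([+-]\d+(?:\.\d+)?)\)", spec)
    if m:
        a, b, step = int(m.group(1)), int(m.group(2)), float(m.group(3))
        vals, x = [], float(a)
        while (step > 0 and x <= b) or (step < 0 and x >= b):
            vals.append(int(x) if x == int(x) else x)
            x += step
        return vals
    m = re.fullmatch(r"(\d+)-(\d+)\s*\[(\d+)\]", spec)
    if m:
        a, b, n = map(int, m.groups())
        return [int(a + (b - a) * i / (n - 1)) for i in range(n)]
    m = re.fullmatch(r"(\d+)-(\d+)", spec)
    if m:
        a, b = map(int, m.groups())
        return list(range(a, b + 1))
    raise ValueError(spec)
```

This reproduces the examples above, e.g. "1-10 [5]" expands to 1, 3, 5, 7, 10.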
Prompt S/R
Prompt S/R is one of the more difficult-to-understand modes of operation for the X/Y/Z plot. S/R stands for search/replace, and that's what it does: you input a list of words or phrases, it takes the first one from the list and treats it as a keyword, and it replaces all instances of that keyword with the other entries from the list.

For example, with prompt a man holding an apple, 8k clean, and Prompt S/R an apple, a watermelon, a gun you will get three prompts:

a man holding an apple, 8k clean
a man holding a watermelon, 8k clean
a man holding a gun, 8k clean
The list uses the same syntax as a line in a CSV file, so if you want to include commas in your entries, you have to put the text in quotes and make sure there is no space between the quotes and the separating commas:

darkness, light, green, heat - 4 items - darkness, light, green, heat
darkness, "light, green", heat - WRONG - 4 items - darkness, "light, green", heat
darkness,"light, green",heat - RIGHT - 3 items - darkness, light, green, heat
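A minimal sketch of Prompt S/R using Python's csv module (for illustration only; the script's handling of spaces before quotes differs slightly from this sketch):

```python
import csv

def prompt_sr(prompt, values_line):
    """Prompt S/R: parse the value list as a single CSV line, treat
    the first entry as the search keyword, and produce one prompt
    per entry by replacing the keyword with it."""
    entries = next(csv.reader([values_line], skipinitialspace=True))
    keyword = entries[0]
    return [prompt.replace(keyword, e) for e in entries]
```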
Prompts from file or textbox
With this script it is possible to create a list of jobs which will be executed sequentially.

Example input:

--prompt "photo of sunset" 
--prompt "photo of sunset" --negative_prompt "orange, pink, red, sea, water, lake" --width 1024 --height 768 --sampler_name "DPM++ 2M Karras" --steps 10 --batch_size 2 --cfg_scale 3 --seed 9
--prompt "photo of winter mountains" --steps 7 --sampler_name "DDIM"
--prompt "photo of winter mountains" --width 1024
Example output:


The following parameters are supported:

    "sd_model", "outpath_samples", "outpath_grids", "prompt_for_display", "prompt", "negative_prompt", "styles", "seed", "subseed_strength", "subseed", 
    "seed_resize_from_h", "seed_resize_from_w", "sampler_index", "sampler_name", "batch_size", "n_iter", "steps", "cfg_scale", "width", "height", 
    "restore_faces", "tiling", "do_not_save_samples", "do_not_save_grid"
Resizing
There are three options for resizing input images in img2img mode:

Just resize - simply resizes the source image to the target resolution, resulting in an incorrect aspect ratio
Crop and resize - resize source image preserving aspect ratio so that entirety of target resolution is occupied by it, and crop parts that stick out
Resize and fill - resize source image preserving aspect ratio so that it entirely fits target resolution, and fill empty space by rows/columns from the source image
Example: 

Sampling method selection
Pick out of multiple sampling methods for txt2img:



Seed resize
This function allows you to generate images from known seeds at different resolutions. Normally, when you change resolution, the image changes entirely, even if you keep all other parameters including seed. With seed resizing you specify the resolution of the original image, and the model will very likely produce something looking very similar to it, even at a different resolution. In the example below, the leftmost picture is 512x512, and the others are produced with the exact same parameters but with larger vertical resolution.

Info	Image
Seed resize not enabled	
Seed resized from 512x512	
Ancestral samplers are a little worse at this than the rest.

You can find this feature by clicking the "Extra" checkbox near the seed.

Variations
A Variation strength slider and Variation seed field allow you to specify how much the existing picture should be altered to look like a different one. At maximum strength, you will get pictures with the Variation seed, at minimum - pictures with the original Seed (except for when using ancestral samplers).



You can find this feature by clicking the "Extra" checkbox near the seed.

Styles
Press the "Save prompt as style" button to write your current prompt to styles.csv, the file with a collection of styles. A dropdown to the right of the prompt will allow you to choose any previously saved style and automatically append it to your input. To delete a style, manually delete it from styles.csv and restart the program.

if you use the special string {prompt} in your style, it will substitute anything currently in the prompt into that position, rather than appending the style to your prompt.

Negative prompt
Allows you to use another prompt of things the model should avoid when generating the picture. This works by using the negative prompt for unconditional conditioning in the sampling process instead of an empty string.

Advanced explanation: Negative prompt

Original	Negative: purple	Negative: tentacles
		
CLIP interrogator
Originally by: https://github.com/pharmapsychotic/clip-interrogator

CLIP interrogator allows you to retrieve the prompt from an image. The prompt won't allow you to reproduce this exact image (and sometimes it won't even be close), but it can be a good start.



The first time you run CLIP interrogator it will download a few gigabytes of models.

CLIP interrogator has two parts: one is a BLIP model that creates a text description from the picture. The other is a CLIP model that picks a few lines relevant to the picture out of a list. By default, there is only one list - a list of artists (from artists.csv). You can add more lists by doing the following:

create interrogate directory in the same place as webui
put text files in it with a relevant description on each line
For examples of what text files to use, see https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data. In fact, you can just take files from there and use them - just skip artists.txt, because you already have a list of artists in artists.csv (or use that too, who's going to stop you). Each file adds one line of text to the final description. If you add ".top3." to a filename, for example flavors.top3.txt, the three most relevant lines from this file will be added to the prompt (other numbers also work).

There are settings relevant to this feature:

Interrogate: keep models in VRAM - do not unload Interrogate models from memory after using them. For users with a lot of VRAM.
Interrogate: use artists from artists.csv - adds artist from artists.csv when interrogating. Can be useful to disable when you have your list of artists in interrogate directory
Interrogate: num_beams for BLIP - parameter that affects how detailed descriptions from BLIP model are (the first part of generated prompt)
Interrogate: minimum description length - minimum length for BLIP model's text
Interrogate: maximum description length - maximum length for BLIP model's text
Interrogate: maximum number of lines in text file - the interrogator will only consider this many lines at the start of a file. The default is 1500, which is about as much as a 4GB video card can handle.
Prompt editing

Prompt editing allows you to start sampling one picture, but in the middle swap to something else. The base syntax for this is:

[from:to:when]
Where from and to are arbitrary texts, and when is a number that defines how late in the sampling cycle should the switch be made. The later it is, the less power the model has to draw the to text in place of from text. If when is a number between 0 and 1, it's a fraction of the number of steps after which to make the switch. If it's an integer greater than zero, it's just the step after which to make the switch.
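The rule for resolving when can be sketched in a few lines (an illustrative helper, not WebUI code):

```python
def switch_step(when: float, total_steps: int) -> int:
    """Resolve the 'when' in [from:to:when] to an absolute sampling step.

    A fraction between 0 and 1 means a fraction of total steps;
    an integer greater than zero is the step itself.
    """
    if 0 < when < 1:
        return int(when * total_steps)
    return int(when)

print(switch_step(16, 30))     # [fantasy:cyberpunk:16] switches after step 16
print(switch_step(0.25, 100))  # [mountain:lake:0.25] with 100 steps switches after step 25
```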

Nesting one prompt editing inside another does work.

Additionally:

[to:when] - adds to to the prompt after a fixed number of steps (when)
[from::when] - removes from from the prompt after a fixed number of steps (when)
Example: a [fantasy:cyberpunk:16] landscape

At start, the model will be drawing a fantasy landscape.
After step 16, it will switch to drawing a cyberpunk landscape, continuing from where it stopped with fantasy.
Here's a more complex example with multiple edits: fantasy landscape with a [mountain:lake:0.25] and [an oak:a christmas tree:0.75][ in foreground::0.6][ in background:0.25] [shoddy:masterful:0.5] (sampler has 100 steps)

at start, fantasy landscape with a mountain and an oak in foreground shoddy
after step 25, fantasy landscape with a lake and an oak in foreground in background shoddy
after step 50, fantasy landscape with a lake and an oak in foreground in background masterful
after step 60, fantasy landscape with a lake and an oak in background masterful
after step 75, fantasy landscape with a lake and a christmas tree in background masterful
The picture at the top was made with the prompt:

Official portrait of a smiling world war ii general, [male:female:0.99], cheerful, happy, detailed face, 20th century, highly detailed, cinematic lighting, digital art painting by Greg Rutkowski

And the number 0.99 is replaced with whatever you see in column labels on the image.

The last column in the picture is [male:female:0.0], which essentially means that you are asking the model to draw a female from the start, without starting with a male general, and that is why it looks so different from others.

Note: This syntax does not work with extra networks, such as LoRA. See this discussion post for details. For similar functionality, see the sd-webui-loractl extension.

Alternating Words
Convenient Syntax for swapping every other step.

[cow|horse] in a field
On step 1, the prompt is "cow in a field." On step 2, it is "horse in a field." On step 3, "cow in a field," and so on.

Alternating Words

See more advanced example below. On step 8, the chain loops back from "man" to "cow."

[cow|cow|horse|man|siberian tiger|ox|man] in a field
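The cycling rule is simple modular arithmetic over the options (an illustrative sketch):

```python
def alternating_prompt(options, step):
    """Return the active word for [a|b|...] at a given sampling step
    (steps are 1-based, cycling through the options)."""
    return options[(step - 1) % len(options)]

opts = ["cow", "cow", "horse", "man", "siberian tiger", "ox", "man"]
for step in (1, 7, 8):
    print(step, alternating_prompt(opts, step) + " in a field")
# step 8 loops back from "man" to "cow"
```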
Prompt editing was first implemented by Doggettx in this reddit post.

Note: This syntax does not work with extra networks, such as LoRA. See this discussion post for details. For similar functionality, see the sd-webui-loractl extension.

Hires. fix
A convenience option to partially render your image at a lower resolution, upscale it, and then add details at a high resolution. In other words, this is equivalent to generating an image in txt2img, upscaling it via a method of your choice, and running a second pass on the now upscaled image in img2img to further refine the upscale and create the final result.

By default, SD1/2 based models create horrible images at very high resolutions, as these models were only trained at 512px or 768px. This method makes it possible to avoid this issue by utilizing the small picture's composition in the denoising process of the larger version. Enabled by checking the "Hires. fix" checkbox on the txt2img page.

Without	With
00262-836728130	00261-836728130
00345-950170121	00341-950170121
The small picture is rendered at whatever resolution you set using the width/height sliders. The large picture's dimensions are controlled by three sliders: the "Scale by" multiplier (Hires upscale), and "Resize width to" and/or "Resize height to" (Hires resize).

If "Resize width to" and "Resize height to" are 0, "Scale by" is used.
If "Resize width to" is 0, "Resize height to" is calculated from width and height.
If "Resize height to" is 0, "Resize width to" is calculated from width and height.
If both "Resize width to" and "Resize height to" are non-zero, image is upscaled to be at least those dimensions, and some parts are cropped.
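The four rules above can be sketched as dimension math (an illustrative helper; the last case is simplified, since the WebUI also crops the overflow):

```python
def hires_target(width, height, scale_by, resize_w, resize_h):
    """Work out the large-pass resolution from the hires. fix sliders
    (a sketch of the four rules listed above)."""
    if resize_w == 0 and resize_h == 0:
        return int(width * scale_by), int(height * scale_by)
    if resize_w == 0:
        return int(resize_h * width / height), resize_h
    if resize_h == 0:
        return resize_w, int(resize_w * height / width)
    return resize_w, resize_h  # upscaled to at least these dims, then cropped

print(hires_target(512, 512, 2.0, 0, 0))     # (1024, 1024) via "Scale by"
print(hires_target(512, 768, 2.0, 1024, 0))  # (1024, 1536) keeps aspect ratio
```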
In older versions of the webui, the final width and height were input manually (the last option listed above). In newer versions, the "Scale by" factor is the default and the preferred method.

To potentially further enhance details in hires. fix, see the notes on extra noise.

Upscalers
A dropdown allows you to select the kind of upscaler to use for resizing the image. In addition to all the upscalers you have available on the extras tab, there is an option to upscale a latent space image, which is what Stable Diffusion works with internally - for a 3x512x512 RGB image, its latent space representation would be 4x64x64. To see what each latent space upscaler does, you can set Denoising strength to 0 and Hires steps to 1 - you'll get a very good approximation of what Stable Diffusion would be working with on the upscaled image.

Below are examples of how different latent upscale modes look.

Original
00084-2395363541
Latent, Latent (antialiased)	Latent (bicubic), Latent (bicubic, antialiased)	Latent (nearest)
00071-2395363541	00073-2395363541	00077-2395363541
Antialiased variants were added via a contributor's pull request and appear to be largely the same as the non-antialiased ones.

Composable Diffusion
A method that allows the combination of multiple prompts: combine prompts using an uppercase AND.

a cat AND a dog
Supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2. The default weight value is 1. It can be quite useful for combining multiple embeddings in your result: creature_embedding in the woods:0.7 AND arcane_embedding:0.5 AND glitch_embedding:0.2

Using a value lower than 0.1 will barely have an effect. a cat AND a dog:0.03 will produce basically the same output as a cat

This could be handy for generating fine-tuned recursive variations, by continuing to append more prompts to your total. creature_embedding on log AND frog:0.13 AND yellow eyes:0.08

Interrupt
Press the Interrupt button to stop current processing.

4GB videocard support
Optimizations for GPUs with low VRAM. This should make it possible to generate 512x512 images on videocards with 4GB memory.

--lowvram is a reimplementation of an optimization idea by basujindal. Model is separated into modules, and only one module is kept in GPU memory; when another module needs to run, the previous is removed from GPU memory. The nature of this optimization makes the processing run slower -- about 10 times slower compared to normal operation on my RTX 3090.

--medvram is another optimization that should reduce VRAM usage significantly by not processing conditional and unconditional denoising in the same batch.

This implementation of optimization does not require any modification to the original Stable Diffusion code.

TAESD
Standard inference support added in version 1.6.0

With this lightweight VAE enabled via settings, it typically allows for very large, fast generations with a small quality loss. The gain can be very large: the maximum generation size with --lowvram can increase from 1152x1152 to 2560x2560.


Face restoration
Lets you improve faces in pictures using either GFPGAN or CodeFormer. There is a checkbox in every tab to use face restoration, and also a separate tab that just allows you to use face restoration on any picture, with a slider that controls how visible the effect is. You can choose between the two methods in settings.

Original	GFPGAN	CodeFormer
		
Checkpoint Merger
Guide generously donated by an anonymous benefactor.


Full guide with other info is here: https://imgur.com/a/VjFi5uM

Saving
Click the Save button under the output section, and generated images will be saved to a directory specified in settings; generation parameters will be appended to a csv file in the same directory.

Loading
Gradio's loading graphic has a very negative effect on the processing speed of the neural network. My RTX 3090 makes images about 10% faster when the tab with gradio is not active. By default, the UI now hides loading progress animation and replaces it with static "Loading..." text, which achieves the same effect. Use the --no-progressbar-hiding commandline option to revert this and show loading animations.

Caching Models

If you want faster swapping between models, increase the counter in settings. The webui will keep the models you've swapped from in RAM.

Make sure you set an appropriate number according to your available RAM.

Prompt validation
Stable Diffusion has a limit for input text length. If your prompt is too long, you will get a warning in the text output field, showing which parts of your text were truncated and ignored by the model.

PNG info
Adds information about generation parameters to PNG as a text chunk. You can view this information later using any software that supports viewing PNG chunk info, for example: https://www.nayuki.io/page/png-file-chunk-inspector

Settings
A tab with settings; it allows you to use the UI to edit more than half of the parameters that previously were commandline options. Settings are saved to config.json. Settings that remain as commandline options are ones that are required at startup.

Filenames format
The Images filename pattern field in the Settings tab allows customization of generated txt2img and img2img images filenames. This pattern defines the generation parameters you want to include in filenames and their order. The supported tags are:

[seed], [steps], [cfg], [width], [height], [styles], [sampler], [model_hash], [model_name], [date], [datetime], [job_timestamp], [prompt_hash], [prompt], [prompt_no_styles], [prompt_spaces], [prompt_words], [batch_number], [generation_number], [hasprompt], [clip_skip], [denoising]

This list will evolve as new tags are added. You can get an up-to-date list of supported tags by hovering your mouse over the "Images filename pattern" label in the UI.

Example of a pattern: [seed]-[steps]-[cfg]-[sampler]-[prompt_spaces]

Note about "prompt" tags: [prompt] will add underscores between the prompt words, while [prompt_spaces] will keep the prompt intact (easier to copy/paste into the UI again). [prompt_words] is a simplified and cleaned-up version of your prompt, already used to generate subdirectory names, with only the words of your prompt (no punctuation).

If you leave this field empty, the default pattern will be applied ([seed]-[prompt_spaces]).

Please note that the tags are actually replaced inside the pattern. It means that you can also add non-tags words to this pattern, to make filenames even more explicit. For example: s=[seed],p=[prompt_spaces]
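Tag substitution works like a simple template expansion. A minimal sketch (a hypothetical helper, not the WebUI's implementation, which also sanitizes values for the filesystem):

```python
import re

def apply_pattern(pattern: str, params: dict) -> str:
    """Fill a filename pattern such as '[seed]-[steps]-[cfg]' from a dict
    of generation parameters; unrecognized tags are left as-is."""
    def sub(match):
        tag = match.group(1)
        return str(params[tag]) if tag in params else match.group(0)
    return re.sub(r"\[(\w+)\]", sub, pattern)

params = {"seed": 9, "steps": 20, "cfg": 7, "prompt_spaces": "photo of sunset"}
print(apply_pattern("s=[seed],p=[prompt_spaces]", params))  # s=9,p=photo of sunset
```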

User scripts
If the program is launched with --allow-code option, an extra text input field for script code is available at the bottom of the page, under Scripts -> Custom code. It allows you to input python code that will do the work with the image.

In code, access parameters from web UI using the p variable, and provide outputs for web UI using the display(images, seed, info) function. All globals from the script are also accessible.

A simple script that would just process the image and output it normally:

import modules.processing

# Run the normal generation pipeline on the parameters object p
processed = modules.processing.process_images(p)

print("Seed was: " + str(processed.seed))

# Hand the images and metadata back to the web UI
display(processed.images, processed.seed, processed.info)
UI config
You can change parameters for UI elements in ui-config.json, it is created automatically when the program first starts. Some options:

radio groups: default selection
sliders: default value, min, max, step
checkboxes: checked state
text and number inputs: default values
Checkboxes that would usually expand a hidden section will not initially do so when set as UI config entries.

ESRGAN
It's possible to use ESRGAN models on the Extras tab, as well as in SD upscale. Paper here.

To use ESRGAN models, put them into the ESRGAN directory in the same location as webui.py. A file will be loaded as a model if it has the .pth extension. Grab models from the Model Database.

Not all models from the database are supported. All 2x models are most likely not supported.

img2img alternative test
Deconstructs an input image using a reverse of the Euler diffuser to create the noise pattern used to construct the input prompt.

As an example, you can use this image. Select the img2img alternative test from the scripts section.

alt_src

Adjust your settings for the reconstruction process:

Use a brief description of the scene: "A smiling woman with brown hair." Describing features you want to change helps. Set this as both your starting prompt and the 'Original Input Prompt' in the script settings.
You MUST use the Euler sampling method, as this script is built on it.
Sampling steps: 50-60. This MUST match the decode steps value in the script, or you'll have a bad time. Use 50 for this demo.
CFG scale: 2 or lower. For this demo, use 1.8. (Hint: you can edit ui-config.json to change "img2img/CFG Scale/step" to .1 instead of .5.)
Denoising strength - this does matter, contrary to what the old docs said. Set it to 1.
Width/Height - Use the width/height of the input image.
Seed...you can ignore this. The reverse Euler is generating the noise for the image now.
Decode cfg scale - Somewhere lower than 1 is the sweet spot. For the demo, use 1.
Decode steps - as mentioned above, this should match your sampling steps. 50 for the demo, consider increasing to 60 for more detailed images.
Once all of the above are dialed in, you should be able to hit "Generate" and get back a result that is a very close approximation to the original.

After validating that the script is re-generating the source photo with a good degree of accuracy, you can try to change the details of the prompt. Larger variations of the original will likely result in an image with an entirely different composition than the source.

Example outputs using the above settings and prompts below (Red hair/pony not pictured)

demo

"A smiling woman with blue hair." Works. "A frowning woman with brown hair." Works. "A frowning woman with red hair." Works. "A frowning woman with red hair riding a horse." Seems to replace the woman entirely, and now we have a ginger pony.

user.css
Create a file named user.css near webui.py and put custom CSS code into it. For example, this makes the gallery taller:

#txt2img_gallery, #img2img_gallery{
    min-height: 768px;
}
A useful tip: you can append /?__theme=dark to your webui URL to enable a built-in dark theme,
e.g. http://127.0.0.1:7860/?__theme=dark

Alternatively, you can add the --theme=dark to the set COMMANDLINE_ARGS= in webui-user.bat
e.g. set COMMANDLINE_ARGS=--theme=dark


notification.mp3
If an audio file named notification.mp3 is present in webui's root folder, it will be played when the generation process completes.

As a source of inspiration:

https://pixabay.com/sound-effects/search/ding/?duration=0-30
https://pixabay.com/sound-effects/search/notification/?duration=0-30
Tweaks
Clip Skip
This is a slider in settings, and it controls how early the processing of the prompt by the CLIP network should be stopped.

A more detailed explanation:

CLIP is a very advanced neural network that transforms your prompt text into a numerical representation. Neural networks work very well with such numerical representations, which is why the developers of SD chose CLIP as one of the three models involved in Stable Diffusion's method of producing images. As CLIP is a neural network, it has many layers. Your prompt is digitized in a simple way and then fed through the layers. You get the numerical representation of the prompt after the 1st layer, you feed that into the second layer, you feed the result of that into the third, and so on, until you get to the last layer - that is the output of CLIP used in Stable Diffusion. This corresponds to a slider value of 1. But you can stop early and use the output of the next-to-last layer - that is a slider value of 2. The earlier you stop, the fewer layers of the neural network have worked on the prompt.

Some models were trained with this kind of tweak, so setting this value helps produce better results on those models.
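In code terms, Clip Skip simply selects an earlier hidden state as the text encoder's output. A toy sketch (illustrative only; real CLIP layer outputs are tensors, not strings):

```python
def clip_output(hidden_states, clip_skip):
    """Pick which CLIP layer output feeds the diffusion model.

    clip_skip=1 uses the last layer (the default); clip_skip=2 uses
    the next-to-last layer, and so on.
    """
    return hidden_states[-clip_skip]

# a toy 12-layer text encoder: hidden_states[i] is the output of layer i+1
layers = [f"layer{i}_output" for i in range(1, 13)]
print(clip_output(layers, 1))  # layer12_output
print(clip_output(layers, 2))  # layer11_output
```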

Extra noise
Adds additional noise from the random seed; the amount is determined by the setting, defaulting to 0. Implemented in version 1.6.0 via #12564, available in settings under img2img -> Extra noise multiplier for img2img and hires fix. As noted in the UI, this parameter should always be lower than the denoising strength to yield the best results.

One purpose for this tweak is to add back additional detail into hires fix. For a very simplified understanding, you may think of it as a cross between GAN upscaling and latent upscaling.

The below example is of a 512x512 image with hires fix applied, using a GAN upscaler (4x-UltraSharp), at a denoising strength of 0.45. The image on the right utilizes this extra noise tweak.

Extra noise = 0	Extra noise = 0.2
without	with
Note that an earlier setting (Noise multiplier for img2img) technically achieves the same effect, but as the name indicates it only applies to img2img (not hires. fix), and due to how it was implemented it is very sensitive, realistically only useful in a range of 1 to 1.1. For almost all operations, it is suggested to use the new Extra noise parameter instead.

For developers, a callback also exists (on_extra_noise). Here is an example of use that makes the region to add noise to maskable. https://gist.github.com/catboxanon/69ce64e0389fa803d26dc59bb444af53


How to train Lora models
Updated September 11, 2023
By Andrew
Categorized as Tutorial 
Tagged Training
Lora training Andy Lau
A killer application of Stable Diffusion is training your own model. Because Stable Diffusion is open-source software, the community has developed easy-to-use tools for that.

Training LoRA models is a smart alternative to checkpoint models. Although it is less powerful than whole-model training methods like Dreambooth or finetuning, LoRA models have the benefit of being small. You can store many of them without filling up your local storage.

Why train your own model? You may have an art style you want to put in Stable Diffusion. Or you want to generate a consistent face in multiple images. Or it’s just fun to learn something new!

In this post, you will learn how to train your own LoRA models using a Google Colab notebook. So, you don’t need to own a GPU to do it.

This tutorial is for training a Stable Diffusion v1 LoRA or LyCORIS model. (In AUTOMATIC1111 WebUI, they are all called Lora.)

Contents [hide]

Software
Train a Lora model
Step 1: Collect training images
Step 2: Upload images to Google Drive
Step 3: Create captions
Running the LoRA trainer
Image path
Other settings
Start auto-captioning
Revising the captions
Step 4: LoRA training
Source model
Folders
Parameters
Start training!
Using the LoRA
Remarks
Reference
Software
You will use a Google Colab notebook to train the Stable Diffusion LoRA model. No GPU hardware is required from you.

Get Lora Trainer Notebook
You will need Stable Diffusion software to use the LoRA model. I recommend using AUTOMATIC1111 Stable Diffusion WebUI.

Get the Quick Start Guide to find out how to start using Stable Diffusion.

Train a Lora model
Step 1: Collect training images
The first step is to collect training images.

Let’s pay tribute to Andy Lau, one of the four Heavenly Kings of Cantopop in Hong Kong, and immortalize him in a Lora…

Andy Lau, one of the four Heavenly Kings, is getting ready for Lora.
Google Image Search is a good way to collect images.

Use Image Search to collect training images.
You need at least 15 training images.

It is okay to have images with different aspect ratios. Make sure to turn on the bucketing option in training, which sorts the images into buckets of similar aspect ratios during training.

Pick images that are at least 512×512 pixels for v1 models.

Make sure the images are either PNG or JPEG formats.

I collected 16 images for training. You can download them to follow this tutorial.

Download training images
Step 2: Upload images to Google Drive
Open the LoRA trainer notebook.

You will need to save the training images to your Google Drive so the LoRA trainer can access them. Use the LoRA training notebook to upload the training images.


Here is some input to review before running the cell.

Project_folder: A folder in Google Drive containing all training images and captions. Use a folder name that doesn’t exist yet.

dataset_name: The name of the dataset.

Number_of_epoches: How many times each image will be used for training.

Lora_output_path: A folder in Google Drive where the Lora file will be saved.

Run this cell by clicking the Play button on the left. It will ask you to connect to your Google Drive.

Click Choose Files and select your training images.

When it is done, you should see a message saying the images were uploaded successfully.

There are three folder paths listed. We will need them later.


Now, go to your Google Drive. The images should be uploaded in My Drive > AI_PICS > training > AndyLau > 100_AndyLau.

It should look like the screenshot below.


Note: All image folders inside the project folder will be used for training. You only need one folder in most cases. So, change to a different project name before uploading a new image set.

Step 3: Create captions
You need to provide a caption for each image. Each caption is a text file with the same name as its image, containing the caption. We will generate the captions automatically using the LoRA trainer.
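The pairing convention is simply "same name, .txt extension, same folder". A sketch with a hypothetical helper:

```python
from pathlib import Path

def caption_path(image_path):
    """Return the caption file that pairs with an image: same name,
    .txt extension, in the same folder (the convention described above)."""
    return Path(image_path).with_suffix(".txt")

print(caption_path("100_AndyLau/andy_01.jpg").name)  # andy_01.txt
```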

Running the LoRA trainer
Go to the Train Lora cell. Review the username and password. You will need them after starting the GUI.

Start the notebook by clicking the Play button in the Lora trainer cell.


It will take a while to load. It is ready when you see the Gradio.live link.


Start the LoRA trainer by clicking the gradio.live link.
A new tab with the Kohya_ss GUI should open.

Go to the Utilities page. Select the Captioning tab, and then BLIP Captioning sub-tab.


Settings for generating captions.
Image path
You can find the image folder to caption in the printout of the first cell, after uploading the images.


/content/drive/MyDrive/AI_PICS/training/AndyLau/100_AndyLau

Other settings
The auto caption could sometimes be too short. Set the Min length to 20.

Start auto-captioning
Press the Caption Images button to generate a caption for each image automatically.

Check the Google Colab Notebook for status. It should be running the captioning model. You will see the message “captioning done” when the captioning is completed.


Revising the captions
You should read and revise each caption so that they match the images. You must also add the phrase “Andy Lau” to each caption.

This is better done in your Google Drive page. You should see a text file with the same name generated for each image.

LoRA training images and captions in Google Drive.
For example, the auto-generated caption of the first image is

A man in a black jacket smoking a cigarette in front of a fenced in building


We want to include the keyword Andy Lau. The revised prompt is

Andy Lau in a black jacket smoking a cigarette in front of a fenced in building


Use the Open with… function to use your favorite editor to make the change. The default one should work, even if it launches locally on your PC. Save the changes. You may need to refresh the Google Drive page to see changes.


Revise the captions for all images.

When you are done, go through the text files one more time to make sure all have included “Andy Lau” in it. This is important for training a specific person.
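A quick way to verify this is a small script that flags caption files missing the trigger phrase (a hypothetical helper, not part of the trainer):

```python
import tempfile
from pathlib import Path

def missing_keyword(caption_dir, keyword="Andy Lau"):
    """Return the caption files in a folder that do not yet contain
    the trigger phrase, so you can fix them before training."""
    return sorted(p.name for p in Path(caption_dir).glob("*.txt")
                  if keyword not in p.read_text())

# demo on a throwaway folder
with tempfile.TemporaryDirectory() as d:
    Path(d, "img1.txt").write_text("Andy Lau in a black jacket")
    Path(d, "img2.txt").write_text("a man in a black jacket")
    print(missing_keyword(d))  # ['img2.txt']
```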

You can download the captions I created to follow the tutorial if you wish.

Download captions
Step 4: LoRA training
We now have images and captions. We are ready to start the LoRA training!

Source model
In Kohya_ss GUI, go to the LoRA page. Select the Training tab. Select the Source model sub-tab. Review the model in Model Quick Pick.


Some popular models you can start training on are:

Stable Diffusion v1.5
runwayml/stable-diffusion-v1-5

The Stable Diffusion v1.5 model is the latest version of the official v1 model.

Realistic Vision v2
SG161222/Realistic_Vision_V2.0

Realistic Vision v2 is good for training photo-style images.

Anything v3
https://huggingface.co/Linaqruf/anything-v3.0

Anything v3 is good for training anime-style images.

Folders
Now, switch to the Folders sub-tab.


In the Image folder field, enter the folder CONTAINING the image folder. You can copy the path from Lora Image folder in the printout after uploading the images.


/content/drive/MyDrive/AI_PICS/training/AndyLau

In the Output folder field, enter the location where you want the LoRA file to be saved. You can copy the path from Lora output folder in the printout after uploading the images.


The default location is the Lora folder of the Stable Diffusion notebook so that it can be directly used in the WebUI.

/content/drive/MyDrive/AI_PICS/Lora

Finally, name your LoRA in the Model output name field.

AndyLau100

Parameters
Now, switch to the Parameters sub-tab. If you have just started out in training LoRA models, using a Preset is the way to go. Select sd15-EDG_LoraOptiSettings for training a Standard LoRA.


There are presets for different types of LyCORIS, which are more powerful versions of LoRA. See the LyCORIS tutorial for a primer.

Finally, the T4 GPU on Colab doesn't support bf16 mixed precision. You MUST:

Change Mixed precision and Save precision to fp16.
Change Optimizer to AdamW.

Start training!
Now everything is in place. Scroll down and click Start training to start the training process.


Check the progress on the Colab Notebook page. It will take a while.

It is okay to see some warnings. The training only fails when it encounters an error.

When it is completed successfully, you should see the progress is at 100%. The loss value should be a number, not nan.


Using the LoRA
If you save the LoRA in the default output location (AI_PICS/Lora), you can easily use the Stable Diffusion Colab Notebook to load it.

Open AUTOMATIC1111 Stable Diffusion WebUI in Google Colab. Click the Extra Networks button under the Generate button. Select the Lora tab and click the LoRA you just created.


Here are the prompt and the negative prompt:

Andy Lau in a suit, full body <lora:AndyLau100:1>


ugly, deformed, nsfw, disfigured


Since we used the phrase "Andy Lau" in the training captions, you will need it in the prompt for the LoRA to take effect.

Although the LoRA is trained on the Stable Diffusion v1.5 model, it works equally well with the Realistic Vision v2 model.

Here are the results of the Andy Lau LoRA.



Remarks
This step-by-step guide shows you how to train a LoRA. You can select other presets to train a LyCORIS.

Important parameters in training are:

Network rank: the size of the LoRA. The higher, the more information the LoRA can store. (Reference value: 64)
Network alpha: a parameter for preventing the weight from collapsing to zero during training. Increasing it increases the effect. The final effect is controlled by network alpha divided by network rank. (Reference value: 64)
Test different LoRA weights (<lora:AndyLau100:weight>) when using the LoRA. Sometimes 1 may not be the optimal value.
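The relationship between network alpha and network rank stated above can be expressed directly (a sketch of the rule, not trainer code):

```python
def lora_scale(network_alpha: float, network_rank: int) -> float:
    """Effective LoRA update scale: network alpha divided by network rank,
    as noted in the parameter list above."""
    return network_alpha / network_rank

print(lora_scale(64, 64))  # 1.0  (the reference values above)
print(lora_scale(32, 64))  # 0.5  (halving alpha halves the effect)
```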

Reference
LoRA training parameters – An authoritative reference of training parameters.

LEARN TO MAKE LoRA – A graphical guide to training LoRA.

kohya_ss Documentation – English translation of kohya_ss manual.