SDXL 360 Diffusion
General
SDXL 360 Diffusion is a 3.5 billion parameter model designed to generate 360 degree spherical images from text descriptions.
The model was fine-tuned from the SD-XL 1.0-base model on a diverse dataset of tens of thousands of equirectangular images depicting landscapes, interiors, humans, animals, and objects. All images were resized to 2048x1024 before training.
Given the right prompt, the model should be capable of producing almost anything you want.
Usage
Include the trigger phrase "equirectangular 360 view", "360 panorama", or some variation of those words in your prompt.
When generating images, it's recommended that you use a 2:1 aspect ratio, such as 1024x512, 1536x768, or 2048x1024. Afterwards, you can use an upscaler of your choice to raise the resolution high enough for skyboxes, backgrounds, VR, VR therapy, and 3D worlds.
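As a rough sketch, the model can be used with the diffusers library like any other SDXL checkpoint. The repo id, resolution, and step count below are assumptions you may need to adjust for your setup and hardware:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint (repo id taken from this model card); if you only have a
# single .safetensors file, use StableDiffusionXLPipeline.from_single_file instead.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "ProGamerGov/sdxl-360-diffusion",
    torch_dtype=torch.float16,
).to("cuda")

# Include one of the trigger phrases and render at a 2:1 aspect ratio.
prompt = "equirectangular 360 view of a misty pine forest at sunrise"
image = pipe(prompt, width=1536, height=768, num_inference_steps=30).images[0]
image.save("forest_360.png")
```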
Additional Tools
HTML 360 Viewer
To make viewing and sharing 360 images and videos easier, I built a browser-based HTML 360 viewer that runs locally on your device.
- Try it out on GitHub Pages: https://progamergov.github.io/html-360-viewer/
- GitHub repository: https://github.com/ProGamerGov/html-360-viewer
- You can append '?url=' followed by a link to your image to automatically load it into the 360 viewer, making sharing your 360 creations extremely easy.
- Example: https://progamergov.github.io/html-360-viewer/?url=https://upload.wikimedia.org/wikipedia/commons/7/76/Dauderi.jpg
Recommended ComfyUI Nodes
If you use ComfyUI, these node packs can be useful for working with 360 images and videos.
ComfyUI_preview360panorama
- For viewing 360s inside ComfyUI (may be slower than the browser-based viewer above).
- Link: https://github.com/ProGamerGov/ComfyUI_preview360panorama
ComfyUI_pytorch360convert
- For editing 360s, applying circular Conv2d padding, and masking potential artifacts.
- Link: https://github.com/ProGamerGov/ComfyUI_pytorch360convert
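The circular Conv2d padding mentioned above is a general PyTorch trick rather than something ComfyUI-specific: switching the convolutions to circular padding encourages the left and right edges of the generated panorama to line up. A minimal sketch, assuming a diffusers-style pipeline with `unet` and `vae` attributes:

```python
import torch.nn as nn

def enable_circular_padding(model: nn.Module) -> None:
    """Make every Conv2d pad circularly. Note that PyTorch applies circular
    padding to both axes, which is usually acceptable for equirectangular images."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            module.padding_mode = "circular"

# Hypothetical usage on a loaded SDXL pipeline:
# enable_circular_padding(pipe.unet)
# enable_circular_padding(pipe.vae)
```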
For diffusers and other libraries, you can make use of the pytorch360convert library when working with 360 media.
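For example, a common operation is converting an equirectangular image to a cubemap for editing and back again. A minimal sketch with pytorch360convert follows; the exact function and argument names are assumptions based on the py360convert convention, so check the library's documentation:

```python
import torch
from pytorch360convert import e2c, c2e  # equirectangular <-> cubemap helpers (names assumed)

# An equirectangular image as a [C, H, W] tensor with a 2:1 aspect ratio.
equi = torch.rand(3, 1024, 2048)

# Convert to cubemap faces, edit them as needed, then convert back.
cubemap = e2c(equi, face_w=512)
equi_roundtrip = c2e(cubemap, h=1024, w=2048)
```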
LoRA Training
Because 360 images are relatively scarce, it is often easier to produce your own to teach the model new concepts. There are several ways to create 360 images for LoRA training:
- Blender Renders
- There are tons of free models and scenes available, and you can pose characters exactly how you want.
- Blender's Cycles rendering engine, with the camera set to panoramic equirectangular, produces 360 degree renders.
- Video Game Screenshots
- Example: Using Nvidia Ansel.
- 360 Cameras
- Public Libraries: 360 cameras can sometimes be borrowed from libraries.
- Purchasing: Consumer 360 cameras can also be purchased.
- Digital illustration, Painting, & Drawing Tools
- Some tools for creating digital illustrations, drawings, paintings, and other hand-made media can also help you create seamless 360 images.
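Whichever source you use, the training images should end up as 2:1 equirectangular images, matching the 2048x1024 resolution used for the base training data. A minimal preprocessing sketch (the folder paths are placeholders):

```python
from pathlib import Path
from PIL import Image

src, dst = Path("raw_360s"), Path("lora_dataset")
dst.mkdir(exist_ok=True)

# Resize every capture to the 2:1 resolution used when training the base model.
for path in src.glob("*.png"):
    img = Image.open(path).convert("RGB")
    img.resize((2048, 1024), Image.LANCZOS).save(dst / path.name)
```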
Limitations
Due to the nature of SDXL, multiple attempts may be required to achieve a desirable output for a given prompt.
Citation Information
BibTeX
@software{Egan_SDXL_360_Diffusion_2025,
author = {Egan, Ben and {XWAVE} and {Jimmy Carter}},
license = {MIT},
month = aug,
title = {{SDXL 360 Diffusion}},
url = {https://huggingface.co/ProGamerGov/sdxl-360-diffusion},
year = {2025}
}
APA
Egan, B., XWAVE, & Jimmy Carter. (2025). SDXL 360 Diffusion [Computer software]. https://huggingface.co/ProGamerGov/sdxl-360-diffusion
Please refer to the CITATION.cff for more information on how to cite this model.