Use from the Diffusers library
# Gated model: Login with a HF token with gated access permission
hf auth login
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Lightricks/LTX-2.3-22b-IC-LoRA-HDR", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
video = pipe(prompt).frames[0]  # video pipelines return frames rather than images
export_to_video(video, "output.mp4")


LTX-2.3 22B IC-LoRA HDR

This is an IC-LoRA trained on top of LTX-2.3-22b, enabling 16-bit High Dynamic Range (HDR) generation from the LTX model. It supports both text/image-driven generation and conversion of existing 8-bit SDR video to 16-bit HDR.

It is based on the LTX-2 foundation model.

What is In-Context LoRA (IC LoRA)?

IC LoRA enables conditioning video generation on reference video frames at inference time, allowing fine-grained video-to-video control on top of a text-to-video base model. It also supports using an initial image for image-to-video generation and producing audio-visual output.
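Conceptually, "in-context" conditioning means the reference tokens and the noisy generation tokens share one sequence, so attention lets the generated video see the reference. The sketch below is purely illustrative (the function name and token placeholders are invented here, not part of the LTX implementation):

```python
# Conceptual sketch of in-context conditioning (not the LTX implementation):
# reference-video tokens are placed alongside the noisy generation tokens in
# one sequence, so the transformer attends to the reference while denoising.

def build_sequence(ref_tokens: list, gen_tokens: list) -> list:
    """One combined sequence; attention lets gen tokens see ref tokens."""
    return ref_tokens + gen_tokens

ref = [f"ref_{i}" for i in range(4)]    # tokens from the reference video
gen = [f"gen_{i}" for i in range(8)]    # noisy tokens being denoised
seq = build_sequence(ref, gen)
assert len(seq) == 12
```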

What is Reference Downscale Factor?

IC LoRA uses a reference control signal, i.e. a video that is positionally aligned with the generated video and provides the context. For added efficiency, the reference video can be smaller, so it consumes fewer tokens. The reference downscale factor determines the expected downscaling of the reference video relative to the generated resolution. To signal the expected reference size, the checkpoint name carries a 'ref' suffix followed by the scale relative to the output resolution.
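Since latent token count scales roughly with spatial area, a reference downscaled by factor f costs about 1/f² of the output's tokens. The helpers below illustrate this arithmetic only; they are hypothetical and not part of the LTX API:

```python
# Hypothetical helpers illustrating the reference downscale factor;
# not part of the LTX API. Token count is treated as proportional to
# spatial area, so a factor-f reference costs ~1/f^2 the tokens.

def reference_size(out_w: int, out_h: int, factor: int) -> tuple[int, int]:
    """Resolution of the reference video for a given downscale factor."""
    return out_w // factor, out_h // factor

def relative_token_cost(factor: int) -> float:
    """Approximate token cost of the reference relative to the output."""
    return 1.0 / (factor * factor)

# A hypothetical 'ref2' checkpoint would expect the reference at half
# the output resolution; this checkpoint uses factor 1 (same resolution).
w, h = reference_size(1280, 704, 2)   # → (640, 352)
cost = relative_token_cost(2)         # → 0.25
```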

Model Files

ltx-2.3-22b-ic-lora-hdr-x.x.safetensors

License

See the LTX-2-community-license for full terms.

Model Details

  • Base Model: LTX-2.3-22b Video
  • Training Type: IC LoRA
  • Control Type: HDR-enabling generation
  • Reference Downscale Factor: 1 (reference resolution is 1x the output resolution)
  • Pipeline details: The LogC3 transform and its inverse are applied before and after generation, respectively, to fit a 16-bit dynamic range within a normal number range.
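The LogC3 round trip can be sketched as an encode/decode pair. The constants below are ARRI's published LogC3 (EI 800) parameters and are an assumption about the exact curve; the model card only states that a LogC3 transform and its inverse wrap generation:

```python
# Sketch of a LogC3 encode/decode pair. Constants are the ARRI LogC3
# EI 800 parameters (an assumption; the pipeline's curve may differ).
import math

CUT, A, B = 0.010591, 5.555556, 0.052272
C, D, E, F = 0.247190, 0.385537, 5.367655, 0.092809

def logc3_encode(x: float) -> float:
    """Map a scene-linear value to its logarithmic encoding."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F  # linear segment near black

def logc3_decode(t: float) -> float:
    """Inverse of logc3_encode."""
    if t > E * CUT + F:
        return (10 ** ((t - D) / C) - B) / A
    return (t - F) / E

# Round trip: encode then decode recovers the linear value.
for x in (0.005, 0.18, 1.0, 8.0):
    assert abs(logc3_decode(logc3_encode(x)) - x) < 1e-9
```

The logarithmic segment compresses highlights so that a wide dynamic range fits a bounded value range the model can work in; decoding after generation restores linear HDR values.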

🔌 Using in ComfyUI

  1. Copy the LoRA weights into models/loras.
  2. Use the official IC-LoRA workflow from the LTX-2 ComfyUI repository.
  3. Make sure to use the nodes that support the Reference Downscale Factor: LTXICLoRALoaderModelOnly to load the LoRA and extract the downscale factor, and LTXAddVideoICLoRAGuide to add the reference latent as a guide.

Dataset

The model was trained on a proprietary HDR dataset.

Citation

@article{korem2026hdr,
  title={HDR Video Generation via Latent Alignment with Logarithmic Encoding},
  author={Korem, Naomi Ken and Oumoumad, Mohamed and Cain, Harel and Yosef, Matan Ben and Jelercic, Urska and Bibi, Ofir and Inger, Yaron and Patashnik, Or and Cohen-Or, Daniel},
  journal={arXiv preprint arXiv:2604.11788},
  year={2026}
}

Acknowledgments

  • Base model by Lightricks
  • Training infrastructure: LTX-2 Community Trainer