NVIDIA ModelOpt

NVIDIA ModelOpt is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed.

Before you begin, make sure you have nvidia_modelopt installed.

pip install -U "nvidia_modelopt[hf]"
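
If you want to confirm the installation before moving on, a quick import check like the sketch below works; the __version__ attribute is assumed to be exposed by the modelopt package, as in recent releases.

# Sanity-check sketch: verify that ModelOpt and its quantization backend import cleanly.
import modelopt
import modelopt.torch.quantization  # backend used by the examples below

print(modelopt.__version__)  # assumes the package exposes __version__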

Quantize a model by passing NVIDIAModelOptConfig to from_pretrained() (you can also load pre-quantized models). This works for any model in any modality, as long as it supports loading with Accelerate and contains torch.nn.Linear layers.

The example below only quantizes the weights to FP8.

import torch
from diffusers import AutoModel, SanaPipeline, NVIDIAModelOptConfig

model_id = "Efficient-Large-Model/Sana_600M_1024px_diffusers"
dtype = torch.bfloat16

quantization_config = NVIDIAModelOptConfig(quant_type="FP8", quant_method="modelopt")
transformer = AutoModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=dtype,
)
pipe = SanaPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=dtype,
)
pipe.to("cuda")

print(f"Pipeline memory usage: {torch.cuda.max_memory_reserved() / 1024**3:.3f} GB")

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")
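
To verify that quantization was actually applied, you can look for the quantizer submodules ModelOpt attaches to the transformer's linear layers. This is a minimal sketch: the weight_quantizer attribute name follows ModelOpt's quantized-module convention and is an assumption here, not part of the diffusers API.

# Sketch: count modules in the quantized transformer above that carry a ModelOpt
# weight quantizer. "weight_quantizer" is ModelOpt's naming convention and is
# assumed here for illustration.
quantized = [
    name for name, module in transformer.named_modules()
    if hasattr(module, "weight_quantizer")
]
print(f"Quantized modules: {len(quantized)}; first few: {quantized[:3]}")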

Note:

The quantization methods in NVIDIA-ModelOpt are designed to reduce the memory footprint of model weights using various QAT (Quantization-Aware Training) and PTQ (Post-Training Quantization) techniques while maintaining model performance. However, the actual performance gain during inference depends on the deployment framework (e.g., TRT-LLM, TensorRT) and the specific hardware configuration.

More details can be found in the official NVIDIA ModelOpt documentation.

NVIDIAModelOptConfig

The NVIDIAModelOptConfig class accepts three parameters:

Supported quantization types

ModelOpt supports weight-only, per-channel, and block quantization for int8, fp8, int4, nf4, and nvfp4. These methods reduce the memory footprint of the model weights while maintaining model performance during inference.

Weight-only quantization stores the model weights in a low-bit data type but performs computation in a higher-precision data type, such as bfloat16. This lowers the memory required for the model weights but keeps the memory peaks from activation computation unchanged.

The quantization methods supported are as follows:

| Quantization Type | Supported Schemes | Required Kwargs | Additional Notes |
|---|---|---|---|
| INT8 | int8 weight only; int8 channel quantization; int8 block quantization | quant_type; quant_type + channel_quantize; quant_type + channel_quantize + block_quantize | |
| FP8 | fp8 weight only; fp8 channel quantization; fp8 block quantization | quant_type; quant_type + channel_quantize; quant_type + channel_quantize + block_quantize | |
| INT4 | int4 weight only; int4 block quantization | quant_type; quant_type + channel_quantize + block_quantize | Only channel_quantize = -1 is supported for now |
| NF4 | nf4 weight only; nf4 double block quantization | quant_type; quant_type + channel_quantize + block_quantize + scale_channel_quantize + scale_block_quantize | Only channel_quantize = -1 and scale_channel_quantize = -1 are supported for now |
| NVFP4 | nvfp4 weight only; nvfp4 block quantization | quant_type; quant_type + channel_quantize + block_quantize | Only channel_quantize = -1 is supported for now |

Refer to the official modelopt documentation for a better understanding of the available quantization methods and the exhaustive list of configuration options available.
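
As an illustration of how the kwargs in the table combine, the sketch below builds an FP8 block-quantization config. The channel_quantize and block_quantize keyword arguments come straight from the table; the specific values, and passing them directly to NVIDIAModelOptConfig, are assumptions for illustration rather than a tuned or verified setting.

from diffusers import NVIDIAModelOptConfig

# Sketch: FP8 block quantization. channel_quantize and block_quantize are the
# "Required Kwargs" from the table above; the values are illustrative only.
quant_config = NVIDIAModelOptConfig(
    quant_type="FP8",
    quant_method="modelopt",
    channel_quantize=-1,  # quantization axis, following the -1 convention noted in the table
    block_quantize=128,   # block size for block-wise weight quantization (illustrative)
)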

Serializing and Deserializing quantized models

To serialize a quantized model, first load it with the desired quantization config and then save it with the save_pretrained() method. Calling ModelOpt's enable_huggingface_checkpointing() beforehand ensures the quantization state is stored alongside the weights.

import torch
from diffusers import AutoModel, NVIDIAModelOptConfig
from modelopt.torch.opt import enable_huggingface_checkpointing

enable_huggingface_checkpointing()

model_id = "Efficient-Large-Model/Sana_600M_1024px_diffusers"
quant_config_fp8 = {"quant_type": "FP8", "quant_method": "modelopt"}
quant_config_fp8 = NVIDIAModelOptConfig(**quant_config_fp8)
model = AutoModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quant_config_fp8,
    torch_dtype=torch.bfloat16,
)
model.save_pretrained('path/to/sana_fp8', safe_serialization=False)

To load a serialized quantized model, use the from_pretrained() method.

import torch
from diffusers import AutoModel, NVIDIAModelOptConfig, SanaPipeline
from modelopt.torch.opt import enable_huggingface_checkpointing

enable_huggingface_checkpointing()

quantization_config = NVIDIAModelOptConfig(quant_type="FP8", quant_method="modelopt")
transformer = AutoModel.from_pretrained(
    "path/to/sana_fp8",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_600M_1024px_diffusers",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")