---
license: mit
pipeline_tag: image-to-image
---

# Fast-DDPM: Fast Denoising Diffusion Probabilistic Models for Medical Image-to-Image Generation

This repository contains the official PyTorch implementation of the paper *Fast-DDPM: Fast Denoising Diffusion Probabilistic Models for Medical Image-to-Image Generation*.

For the full codebase and more details, please visit the official GitHub repository: https://github.com/mirthAI/Fast-DDPM

## Abstract

Denoising diffusion probabilistic models (DDPMs) have achieved unprecedented success in computer vision. However, they remain underutilized in medical imaging, a field crucial for disease diagnosis and treatment planning. This is primarily due to the high computational cost associated with (1) the use of a large number of time steps (e.g., 1,000) in diffusion processes and (2) the increased dimensionality of medical images, which are often 3D or 4D. Training a diffusion model on medical images typically takes days to weeks, while sampling each image volume takes minutes to hours. To address this challenge, we introduce Fast-DDPM, a simple yet effective approach capable of improving training speed, sampling speed, and generation quality simultaneously. Unlike DDPM, which trains the image denoiser across 1,000 time steps, Fast-DDPM trains and samples using only 10 time steps. The key to our method lies in aligning the training and sampling procedures to optimize time-step utilization. Specifically, we introduced two efficient noise schedulers with 10 time steps: one with uniform time step sampling and another with non-uniform sampling. We evaluated Fast-DDPM across three medical image-to-image generation tasks: multi-image super-resolution, image denoising, and image-to-image translation. Fast-DDPM outperformed DDPM and current state-of-the-art methods based on convolutional networks and generative adversarial networks in all tasks. Additionally, Fast-DDPM reduced the training time to 0.2x and the sampling time to 0.01x compared to DDPM.
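
The core idea in the abstract is selecting 10 of the 1,000 diffusion time steps. As a rough illustration, the two selection strategies might look like the NumPy sketch below; the quadratic spacing for the non-uniform scheduler is an assumption for illustration, not necessarily the paper's exact schedule:

```python
# Illustrative sketch (not the repo's code): picking K = 10 of T = 1000
# diffusion time steps, uniformly vs. non-uniformly.
import numpy as np

T, K = 1000, 10

# Uniform scheduler: K evenly spaced time steps over [0, T).
uniform_steps = np.linspace(0, T - 1, K, dtype=int)

# Non-uniform scheduler (assumed quadratic spacing for illustration):
# steps are denser at the low-noise end of the trajectory.
nonuniform_steps = np.rint(np.linspace(0, np.sqrt(T - 1), K) ** 2).astype(int)

print(uniform_steps)     # [  0 111 222 333 444 555 666 777 888 999]
print(nonuniform_steps)  # [  0  12  49 111 197 308 444 604 789 999]
```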

*Figure: DDPM vs. Fast-DDPM.*

## Usage

For complete instructions, please refer to the official GitHub repository.

### Requirements

- Python==3.10.6
- torch==1.12.1
- torchvision==0.15.2
- numpy
- opencv-python
- tqdm
- tensorboard
- tensorboardX
- scikit-image
- medpy
- pillow
- scipy

Install all dependencies with:

```bash
pip install -r requirements.txt
```

### Publicly Available Dataset

### 1. Git clone or download the code

### 2. Pretrained model weights

### 3. Prepare data

- Please download our processed dataset or download the data from the official websites.
- After downloading, extract the file and put it into the `data/` folder. The directory structure should be as follows:
```
├── configs
├── data
│   ├── LD_FD_CT_train
│   ├── LD_FD_CT_test
│   ├── PMUB-train
│   ├── PMUB-test
│   ├── Brats_train
│   └── Brats_test
├── datasets
├── functions
├── models
└── runners
```
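
After extracting, you can sanity-check the layout with a small script such as the one below (a hypothetical helper for illustration, not part of the repository):

```python
# Hypothetical helper (not part of the repo): verify that the expected
# dataset folders exist under data/ before launching training.
from pathlib import Path

EXPECTED = [
    "LD_FD_CT_train", "LD_FD_CT_test",
    "PMUB-train", "PMUB-test",
    "Brats_train", "Brats_test",
]

def check_data_layout(root: str = "data") -> None:
    missing = [d for d in EXPECTED if not (Path(root) / d).is_dir()]
    if missing:
        raise FileNotFoundError(f"missing folders under {root}/: {missing}")
    print("All expected dataset folders found.")

if __name__ == "__main__":
    check_data_layout()
```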

### 4. Training/Sampling a Fast-DDPM model

- Please make sure that hyperparameters such as the scheduler type and the number of time steps are consistent between training and sampling.
- The paper uses 1,000 total time steps by default, so the number of time steps for Fast-DDPM should be an integer less than 1,000.

```bash
# Training
python fast_ddpm_main.py --config {DATASET}.yml --dataset {DATASET_NAME} --exp {PROJECT_PATH} --doc {MODEL_NAME} --scheduler_type {SAMPLING STRATEGY} --timesteps {STEPS}

# Sampling
python fast_ddpm_main.py --config {DATASET}.yml --dataset {DATASET_NAME} --exp {PROJECT_PATH} --doc {MODEL_NAME} --sample --fid --scheduler_type {SAMPLING STRATEGY} --timesteps {STEPS}
```

where

- `DATASET_NAME` should be `LDFDCT` for the image denoising task, `BRATS` for the image-to-image translation task, or `PMUB` for the multi-image super-resolution task.
- `SAMPLING STRATEGY` selects the scheduler sampling strategy proposed in the paper (either uniform or non-uniform).
- `STEPS` controls how many time steps are used during training and inference. For Fast-DDPM it should be an integer less than 1,000; the default is 10. A sketch of how training stays aligned with this reduced schedule follows below.
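
For intuition, the training/sampling alignment can be sketched as follows. This is a minimal, hypothetical training step, assuming the uniform 10-step schedule, a standard linear beta schedule, and channel-wise concatenation for conditioning; the repository's actual interfaces may differ:

```python
# Illustrative sketch (not the repo's code): Fast-DDPM trains the
# denoiser only at the K time steps the sampler will actually visit,
# instead of at all 1,000 steps.
import torch
import torch.nn.functional as F

T, K = 1000, 10
steps = torch.linspace(0, T - 1, K).long()   # uniform 10-step schedule
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear beta schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0, condition):
    # Draw t only from the reduced schedule, so the time steps seen in
    # training are exactly those visited during sampling.
    t = steps[torch.randint(0, K, (x0.size(0),))]
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    # Hypothetical conditional denoiser: predicts noise from (x_t, t, condition).
    pred = model(torch.cat([condition, xt], dim=1), t)
    return F.mse_loss(pred, noise)
```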

### 5. Training/Sampling a DDPM model

- Please make sure that hyperparameters such as the scheduler type and the number of time steps are consistent between training and sampling.
- The paper uses 1,000 total time steps by default, so the number of time steps for DDPM also defaults to 1,000.

```bash
# Training
python ddpm_main.py --config {DATASET}.yml --dataset {DATASET_NAME} --exp {PROJECT_PATH} --doc {MODEL_NAME} --timesteps {STEPS}

# Sampling
python ddpm_main.py --config {DATASET}.yml --dataset {DATASET_NAME} --exp {PROJECT_PATH} --doc {MODEL_NAME} --sample --fid --timesteps {STEPS}
```

where

- `DATASET_NAME` should be `LDFDCT` for the image denoising task, `BRATS` for the image-to-image translation task, or `PMUB` for the multi-image super-resolution task.
- `STEPS` controls how many time steps are used during training and inference. It should be 1,000 in the setting of this paper, i.e., one reverse step per time step, as sketched below.
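
For comparison, a single standard DDPM reverse (ancestral) step is sketched below; the baseline repeats this 1,000 times per image, which is what makes its sampling slow relative to Fast-DDPM's 10 steps. This is an illustrative sketch using the common sigma_t^2 = beta_t variance choice and the same hypothetical conditioning interface as above, not the repository's exact code:

```python
import torch

@torch.no_grad()
def ddpm_reverse_step(model, xt, t, betas, alphas_bar, condition):
    """One reverse step x_t -> x_{t-1} of standard DDPM (illustrative)."""
    beta_t = betas[t]
    a_bar_t = alphas_bar[t]
    # Hypothetical conditional denoiser, as in the training sketch above.
    eps = model(torch.cat([condition, xt], dim=1),
                torch.full((xt.size(0),), t, dtype=torch.long))
    mean = (xt - beta_t / (1 - a_bar_t).sqrt() * eps) / (1 - beta_t).sqrt()
    if t == 0:
        return mean
    return mean + beta_t.sqrt() * torch.randn_like(xt)  # sigma_t^2 = beta_t
```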

## Citation

If you use our code or dataset, please cite our paper:

```bibtex
@article{jiang2025fast,
  title={Fast-DDPM: Fast denoising diffusion probabilistic models for medical image-to-image generation},
  author={Jiang, Hongxu and Imran, Muhammad and Zhang, Teng and Zhou, Yuyin and Liang, Muxuan and Gong, Kuang and Shao, Wei},
  journal={IEEE Journal of Biomedical and Health Informatics},
  year={2025},
  publisher={IEEE}
}
```