---
license: mit
library_name: pytorch
pipeline_tag: image-to-image
tags:
- medical-imaging
- registration
- 3d-registration
- x-ray
- ct
- mri
---
# `xvr`: X-ray to Volume Registration

[arXiv](https://arxiv.org/abs/2503.16309)
[License](LICENSE)
<a href="https://colab.research.google.com/drive/1K9lBPxcLh55mr8o50Y7aHkjzjEWKPCrM?usp=sharing"><img alt="Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a>
<a href="https://huggingface.co/eigenvivek/xvr/tree/main" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-ffc107?color=ffc107&logoColor=white"/></a>
<a href="https://huggingface.co/datasets/eigenvivek/xvr-data/tree/main" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-ffc107?color=ffc107&logoColor=white"/></a>
<a href="https://github.com/astral-sh/uv" target="_blank">uv</a>

**`xvr` is a PyTorch package for training, fine-tuning, and performing 2D/3D X-ray to CT/MR registration using pose regression models.** It provides a streamlined CLI and API for efficiently training patient-specific registration models. Key features include significantly faster training than comparable methods, submillimeter registration accuracy, and human-interpretable pose parameters.

<p align="center">
<img width="410" alt="image" src="https://github.com/user-attachments/assets/8a01c184-f6f1-420e-82b9-1cbe733adf7f" />
</p>

## Key Features

- 🚀 Single CLI/API for training and registration.
- ⚡️ Significantly faster training than existing methods.
- 📐 Submillimeter registration accuracy.
- 🩺 Human-interpretable pose parameters.
- 🐍 Pure Python/PyTorch implementation.
- 🖥️ Cross-platform support (macOS, Linux, Windows).

`xvr` is built on [`DiffDRR`](https://github.com/eigenvivek/DiffDRR), a differentiable X-ray renderer.

## Installation and Usage

Refer to the [GitHub repository](https://github.com/eigenvivek/xvr) for detailed installation instructions, usage examples, and documentation on training, fine-tuning, and registration.
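As a quick orientation only, the sketch below shows one plausible way to get started; it is not taken verbatim from the documentation, and both the editable source install and the `xvr` console entry point are assumptions, so defer to the repository README.

```zsh
# Hypothetical quick start -- defer to https://github.com/eigenvivek/xvr for the
# authoritative instructions. Assumes an editable install from a clone of the repo
# and that the CLI entry point is named `xvr`.
git clone https://github.com/eigenvivek/xvr.git
cd xvr
pip install -e .

# List the available subcommands (e.g., for training, fine-tuning, and registration)
xvr --help
```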
## Experiments

#### Models

Pretrained models are available [here](https://huggingface.co/eigenvivek/xvr/tree/main).

#### Data

Benchmark datasets, reformatted into DICOM/NIfTI files, are available [here](https://huggingface.co/datasets/eigenvivek/xvr-data/tree/main).
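One convenient way to fetch both repositories locally is the Hugging Face Hub CLI; in the sketch below, the repo IDs come from the links above, while the local directory names are arbitrary placeholders.

```zsh
# Download the pretrained models and benchmark datasets from the Hugging Face Hub.
# Repo IDs match the links above; the --local-dir targets are arbitrary.
pip install -U "huggingface_hub[cli]"
huggingface-cli download eigenvivek/xvr --local-dir pretrained-models
huggingface-cli download eigenvivek/xvr-data --repo-type dataset --local-dir xvr-data
```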
If you use the [`DeepFluoro`](https://github.com/rg2/DeepFluoroLabeling-IPCAI2020) dataset, please cite:

    @article{grupp2020automatic,
      title={Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration},
      author={Grupp, Robert B and Unberath, Mathias and Gao, Cong and Hegeman, Rachel A and Murphy, Ryan J and Alexander, Clayton P and Otake, Yoshito and McArthur, Benjamin A and Armand, Mehran and Taylor, Russell H},
      journal={International Journal of Computer Assisted Radiology and Surgery},
      volume={15},
      pages={759--769},
      year={2020},
      publisher={Springer}
    }

If you use the [`Ljubljana`](https://lit.fe.uni-lj.si/en/research/resources/3D-2D-GS-CA/) dataset, please cite:

    @article{pernus20133d,
      title={3D-2D registration of cerebral angiograms: A method and evaluation on clinical images},
      author={Mitrović, Uroš and Špiclin, Žiga and Likar, Boštjan and Pernuš, Franjo},
      journal={IEEE Transactions on Medical Imaging},
      volume={32},
      number={8},
      pages={1550--1563},
      year={2013},
      publisher={IEEE}
    }
#### Logging

We use `wandb` to log experiments. To use this feature, set the `WANDB_API_KEY` environment variable by adding the following line to your `.zshrc` or `.bashrc` file:

```zsh
export WANDB_API_KEY=your_api_key
```
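Alternatively, if you prefer not to export the key in your shell profile, the standard `wandb` CLI can store it for you:

```zsh
# One-time interactive login; wandb caches the API key locally (in ~/.netrc).
wandb login
```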