---
language: en
license: cc-by-sa-4.0
library_name: torch
tags:
- medical
- segmentation
- sam
- medical-imaging
- ct
- mri
- ultrasound
pipeline_tag: image-segmentation
datasets:
- medical
---
# MedSAM2: Segment Anything in 3D Medical Images and Videos
<div align="center">
<table align="center">
<tr>
<td><a href="https://github.com/bowang-lab/MedSAM2/blob/main/tbd" target="_blank"><img src="https://img.shields.io/badge/Paper-blue?style=for-the-badge" alt="Paper"></a></td>
<td><a href="https://huggingface.co/wanglab/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/HuggingFace-FFD21E?style=for-the-badge&logoColor=FF9D00" alt="HuggingFace"></a></td>
<td><a href="https://medsam-datasetlist.github.io/" target="_blank"><img src="https://img.shields.io/badge/Dataset%20List-4A90E2?style=for-the-badge&logoColor=white" alt="Dataset List"></a></td>
<td><a href="https://huggingface.co/datasets/wanglab/CT_DeepLesion-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/CT__DeepLesion--MedSAM2-green?style=for-the-badge" alt="CT_DeepLesion-MedSAM2"></a></td>
<td><a href="https://huggingface.co/datasets/wanglab/LLD-MMRI-MedSAM2" target="_blank"><img src="https://img.shields.io/badge/LLD--MMRI--MedSAM2-orange?style=for-the-badge" alt="LLD-MMRI-MedSAM2"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAMSlicer/tree/MedSAM2" target="_blank"><img src="https://img.shields.io/badge/3D_Slicer-black?style=for-the-badge&logo=3DSlicer&logoColor=white" alt="3D Slicer"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAM2/blob/main/app.py" target="_blank"><img src="https://img.shields.io/badge/Gradio_App-yellow?style=for-the-badge&logoColor=white" alt="Gradio App"></a></td>
<td><a href="https://colab.research.google.com/drive/1MKna9Sg9c78LNcrVyG58cQQmaePZq2k2?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/CoLab-4A90E2?style=for-the-badge&logo=CoLab&logoColor=white" alt="Colab"></a></td>
<td><a href="https://github.com/bowang-lab/MedSAM2#citing-medsam2" target="_blank"><img src="https://img.shields.io/badge/BibTeX-4A90E2?style=for-the-badge&logo=BibTeX&logoColor=white" alt="BibTeX"></a></td>
</tr>
</table>
</div>
## Authors
<p align="center">
<a href="https://scholar.google.com.hk/citations?hl=en&user=bW1UV4IAAAAJ&view_op=list_works&sortby=pubdate">Jun Ma</a><sup>* 1,2</sup>,
<a href="https://scholar.google.com/citations?user=8IE0CfwAAAAJ&hl=en">Zongxin Yang</a><sup>* 3</sup>,
Sumin Kim<sup>2,4,5</sup>,
Bihui Chen<sup>2,4,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=U-LgNOwAAAAJ&hl=en&oi=sra">Mohammed Baharoon</a><sup>2,3,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=4qvKTooAAAAJ&hl=en&oi=sra">Adibvafa Fallahpour</a><sup>2,4,5</sup>,
<a href="https://scholar.google.com.hk/citations?user=UlTJ-pAAAAAJ&hl=en&oi=sra">Reza Asakereh</a><sup>4,7</sup>,
Hongwei Lyu<sup>4</sup>,
<a href="https://wanglab.ai/index.html">Bo Wang</a><sup>† 1,2,4,5,6</sup>
</p>
<p align="center">
<sup>*</sup> Equal contribution &nbsp;&nbsp;&nbsp; <sup>†</sup> Corresponding author
</p>
<p align="center">
<sup>1</sup>AI Collaborative Centre, University Health Network, Toronto, Canada<br>
<sup>2</sup>Vector Institute for Artificial Intelligence, Toronto, Canada<br>
<sup>3</sup>Department of Biomedical Informatics, Harvard Medical School, Harvard University, Boston, USA<br>
<sup>4</sup>Peter Munk Cardiac Centre, University Health Network, Toronto, Canada<br>
<sup>5</sup>Department of Computer Science, University of Toronto, Toronto, Canada<br>
<sup>6</sup>Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada<br>
<sup>7</sup>Roche Canada and Genentech
</p>
## Highlights
- A promptable foundation model for 3D medical image and video segmentation
- Trained on 455,000+ 3D image-mask pairs and 76,000+ annotated video frames
- Versatile segmentation capability across diverse organs and pathologies
- Extensive user studies on large-scale lesion and video datasets demonstrate that MedSAM2 substantially facilitates annotation workflows
## Model Overview
MedSAM2 is a promptable segmentation model tailored for medical imaging applications. Built upon the [Segment Anything Model (SAM) 2.1](https://github.com/facebookresearch/sam2), MedSAM2 has been specifically adapted and fine-tuned for a wide range of 3D medical images and videos.
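The checkpoints are standard PyTorch weight files intended to be loaded through the SAM 2 codebase. The snippet below is a minimal loading sketch, assuming the upstream `sam2` package is installed; the config path is a placeholder, so please use the config file shipped with the [MedSAM2 GitHub repository](https://github.com/bowang-lab/MedSAM2) for the checkpoint you download.
```python
# Minimal loading sketch (assumption: the upstream `sam2` package is installed).
# The config path below is a placeholder; use the config provided in the
# MedSAM2 repository for the checkpoint you are loading.
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "checkpoints/MedSAM2_latest.pt"
config = "configs/sam2.1_hiera_t512.yaml"  # hypothetical path, see the MedSAM2 repo

predictor = build_sam2_video_predictor(
    config,
    checkpoint,
    device="cuda" if torch.cuda.is_available() else "cpu",
)
```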
## Available Models
- **MedSAM2_2411.pt**: Base model trained in November 2024
- **MedSAM2_US_Heart.pt**: Fine-tuned model specialized for heart ultrasound video segmentation
- **MedSAM2_MRI_LiverLesion.pt**: Fine-tuned model for liver lesion segmentation in MRI scans
- **MedSAM2_CTLesion.pt**: Fine-tuned model for general lesion segmentation in CT scans
- **MedSAM2_latest.pt** (recommended): Latest version trained on the combination of public datasets and newly annotated medical imaging data
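If you select checkpoints programmatically, a small lookup table keeps download and loading code modality-agnostic. The mapping below is only a convenience sketch built from the list above, not part of any official API.
```python
# Convenience mapping from task to checkpoint filename (filenames from the
# list above; the mapping itself is illustrative, not an official API).
MEDSAM2_CHECKPOINTS = {
    "general": "MedSAM2_latest.pt",           # recommended default
    "heart_ultrasound": "MedSAM2_US_Heart.pt",
    "mri_liver_lesion": "MedSAM2_MRI_LiverLesion.pt",
    "ct_lesion": "MedSAM2_CTLesion.pt",
    "base_2411": "MedSAM2_2411.pt",
}

filename = MEDSAM2_CHECKPOINTS["general"]
```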
## Downloading Models
### Option 1: Download individual models
You can download the models directly from the Hugging Face repository:
```python
# Using huggingface_hub
from huggingface_hub import hf_hub_download
# Download the recommended latest model
model_path = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_latest.pt")
# Or download a specific fine-tuned model
heart_us_model_path = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_US_Heart.pt")
liver_model_path = hf_hub_download(repo_id="wanglab/MedSAM2", filename="MedSAM2_MRI_LiverLesion.pt")
```
### Option 2: Download all models to a specific folder
```python
from huggingface_hub import hf_hub_download
import os
# Create checkpoints directory if it doesn't exist
os.makedirs("checkpoints", exist_ok=True)
# List of model filenames
model_files = [
    "MedSAM2_2411.pt",
    "MedSAM2_US_Heart.pt",
    "MedSAM2_MRI_LiverLesion.pt",
    "MedSAM2_CTLesion.pt",
    "MedSAM2_latest.pt",
]
# Download all models
for model_file in model_files:
    local_path = os.path.join("checkpoints", model_file)
    hf_hub_download(
        repo_id="wanglab/MedSAM2",
        filename=model_file,
        local_dir="checkpoints",
        local_dir_use_symlinks=False,
    )
    print(f"Downloaded {model_file} to {local_path}")
```
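If you prefer a single call that mirrors the whole repository, `huggingface_hub.snapshot_download` covers the same ground as Option 2; a minimal sketch:
```python
from huggingface_hub import snapshot_download

# Mirror every file in the wanglab/MedSAM2 repository into ./checkpoints
snapshot_download(repo_id="wanglab/MedSAM2", local_dir="checkpoints")
```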
Alternatively, you can manually download the models from the [Hugging Face repository page](https://huggingface.co/wanglab/MedSAM2).
## Citations
```
@article{MedSAM2,
title={MedSAM2: Segment Anything in 3D Medical Images and Videos},
author={Ma, Jun and Yang, Zongxin and Kim, Sumin and Chen, Bihui and Baharoon, Mohammed and Fallahpour, Adibvafa and Asakereh, Reza and Lyu, Hongwei and Wang, Bo},
journal={arXiv preprint arXiv:2504.03600},
year={2025}
}
```
## License
The model weights may be used for research and educational purposes only.