# SAMA

## Collection
| Model | Status | Link |
|---|---|---|
| SAMA-5B | Coming soon | Coming soon |
| SAMA-14B | Available | syxbb/SAMA-14B |
This repository contains the weights of SAMA-14B. For instructions on how to use the model, please refer to the official GitHub repository.
Recommended environment:

```shell
git clone https://github.com/Cynthiazxy123/SAMA
cd SAMA
conda create -n sama python=3.10 -y
conda activate sama
pip install --upgrade pip
pip install -r requirements.txt
```
Prepare the Wan2.1-T2V-14B model directory (see the layout below).

The inference script is `infer_sh/run_sama.sh`.
Edit the variables at the top of that script before running:
- `MODEL_ROOT`
- `STATE_DICT`
- `SRC_VIDEO`
- `PROMPT`
- `OUTPUT_DIR`

Then run:

```shell
bash infer_sh/run_sama.sh
```
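For illustration, the variables at the top of `infer_sh/run_sama.sh` might be set as follows. All values below are hypothetical examples, not defaults shipped with the repository; replace the placeholders with your own paths.

```shell
# Hypothetical example values -- adjust to your local setup.
MODEL_ROOT="models/Wan2.1-T2V-14B"                                # Wan2.1-T2V-14B base model directory
STATE_DICT="models/SAMA-14B/<downloaded_checkpoint>.safetensors"  # SAMA-14B checkpoint (placeholder name)
SRC_VIDEO="path/to/input.mp4"                                     # source video to edit
PROMPT="your editing instruction here"                            # instruction-guided editing prompt
OUTPUT_DIR="outputs"                                              # where results are written
```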
The generated result will be saved to `outputs/seed_1/<input_video_filename>`.
A recommended local model layout is:

```
models/
├── Wan2.1-T2V-14B/
│   ├── diffusion_pytorch_model-00001-of-00006.safetensors
│   ├── diffusion_pytorch_model-00002-of-00006.safetensors
│   ├── diffusion_pytorch_model-00003-of-00006.safetensors
│   ├── diffusion_pytorch_model-00004-of-00006.safetensors
│   ├── diffusion_pytorch_model-00005-of-00006.safetensors
│   ├── diffusion_pytorch_model-00006-of-00006.safetensors
│   ├── models_t5_umt5-xxl-enc-bf16.pth
│   ├── Wan2.1_VAE.pth
│   └── google/
└── SAMA-14B/
    └── <downloaded_checkpoint>.safetensors
```
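Because the inference script stops and reports missing files when the model root is incomplete, it can help to verify the layout up front. A minimal sketch (the expected file list mirrors the layout above; `check_layout` is my own helper, not part of the SAMA codebase):

```python
from pathlib import Path

# Entries expected under the Wan2.1-T2V-14B directory, per the layout above.
EXPECTED = [
    *[f"diffusion_pytorch_model-{i:05d}-of-00006.safetensors" for i in range(1, 7)],
    "models_t5_umt5-xxl-enc-bf16.pth",
    "Wan2.1_VAE.pth",
    "google",  # tokenizer directory
]

def check_layout(model_root: str) -> list[str]:
    """Return the expected entries that are missing under model_root."""
    root = Path(model_root)
    return [name for name in EXPECTED if not (root / name).exists()]

missing = check_layout("models/Wan2.1-T2V-14B")
if missing:
    print("Missing:", ", ".join(missing))
```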
Notes:

- The input video should satisfy the 4k+1 frame-count requirement used by Wan video inference (see also the `--fps` option in the script).
- If `--model-root` is incomplete, the script will stop and report the missing files or directories.

Citation:

```bibtex
@misc{zhang2026samafactorizedsemanticanchoring,
      title={SAMA: Factorized Semantic Anchoring and Motion Alignment for Instruction-Guided Video Editing},
      author={Xinyao Zhang and Wenkai Dong and Yuxin Song and Bo Fang and Qi Zhang and Jing Wang and Fan Chen and Hui Zhang and Haocheng Feng and Yu Lu and Hang Zhou and Chun Yuan and Jingdong Wang},
      year={2026},
      eprint={2603.19228},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.19228},
}
```
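The 4k+1 frame-count requirement noted above means valid frame counts are 1, 5, 9, 13, and so on. A small helper to snap an arbitrary frame count down to the nearest valid value (a sketch of mine, not part of the SAMA codebase):

```python
def snap_to_4k_plus_1(n: int) -> int:
    """Largest frame count of the form 4k+1 that is <= n (minimum 1).

    Wan-style video inference expects 4k+1 frames, so a clip with an
    arbitrary frame count can be trimmed to the nearest valid length.
    """
    if n < 1:
        raise ValueError("need at least one frame")
    return ((n - 1) // 4) * 4 + 1

print(snap_to_4k_plus_1(100))  # a 100-frame clip trims to 97 frames
```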