Tags: Safetensors · English · llava_next
AdaptLLM committed
Commit 9ee2bdb · verified · 1 Parent(s): 132ea80

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED

````diff
@@ -8,7 +8,7 @@ language:
 base_model:
 - Lin-Chen/open-llava-next-llama3-8b
 ---
-# Adapting Multimodal Large Language Models to Domains via Post-Training
+# Adapting Multimodal Large Language Models to Domains via Post-Training (EMNLP 2025)
 
 This repos contains the **visual-instruction synthesizer** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).
 
@@ -214,10 +214,10 @@ This output can be directly utilized for single-stage post-training with code re
 ## Citation
 If you find our work helpful, please cite us.
 
-AdaMLLM
+[Adapt MLLM to Domains](https://huggingface.co/papers/2411.19930) (EMNLP 2025 Findings)
 ```bibtex
 @article{adamllm,
-title={On Domain-Specific Post-Training for Multimodal Large Language Models},
+title={On Domain-Adaptive Post-Training for Multimodal Large Language Models},
 author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
 journal={arXiv preprint arXiv:2411.19930},
 year={2024}
````