---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
  - internlm/internlm2-chat-1_8b
base_model_relation: merge
language:
  - multilingual
tags:
  - internvl
  - vision
  - ocr
  - custom_code
  - moe
---

# Mono-InternVL-2B

This repository contains the instruction-tuned Mono-InternVL-2B model, a monolithic multimodal large language model with 1.8B activated parameters (3B in total). It is built upon [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b).

Please refer to our [**paper (v1)**](https://huggingface.co/papers/2410.08202), [**paper (v1.5)**](https://arxiv.org/abs/2507.12566), [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/), and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for an introduction and usage details.
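
## Quick Start

Below is a minimal sketch of single-image chat with this model. It assumes the InternVL-style `chat()` interface exposed through `trust_remote_code` (consistent with the `custom_code` tag above) and simple single-tile 448×448 preprocessing with ImageNet normalization; the repo id and image path are placeholders. Please treat the GitHub repository as the authoritative reference for preprocessing and inference.

```python
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/Mono-InternVL-2B"  # assumed repo id; adjust to your checkout

# The model ships custom modeling code, so trust_remote_code is required.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Single-tile preprocessing (assumption): resize to 448x448 and apply
# ImageNet mean/std normalization, as used across the InternVL family.
transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
image = Image.open("example.jpg").convert("RGB")  # placeholder image path
pixel_values = transform(image).unsqueeze(0).to(torch.bfloat16).cuda()

# InternVL-style chat call (assumption): the question embeds an <image>
# placeholder that the custom code replaces with visual tokens.
question = "<image>\nDescribe this image in detail."
response = model.chat(
    tokenizer,
    pixel_values,
    question,
    generation_config=dict(max_new_tokens=256, do_sample=False),
)
print(response)
```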



## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
}


@article{mono_internvl_v1.5,
  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2507.12566},
  year={2025}
}
```