Update paper link and correct citation in model card #3
opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,22 +1,23 @@
 ---
-license: apache-2.0
 datasets:
 - AIDC-AI/Ovis-dataset
-library_name: transformers
-tags:
-- MLLM
-pipeline_tag: image-text-to-text
 language:
 - en
 - zh
+library_name: transformers
+license: apache-2.0
+pipeline_tag: image-text-to-text
+tags:
+- MLLM
 ---
+
 # Ovis2.5-9B
 <div align="center">
 <img src=https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/3IK823BZ8w-mz_QfeYkDn.png width="30%"/>
 </div>
 
 <p align="center">
-<a href="https://
+<a href="https://huggingface.co/papers/2508.11737"><img src="https://img.shields.io/badge/📖_Technical_Report-Ovis2.5-b31b1b.svg" alt="technical report"></a>
 <a href="https://github.com/AIDC-AI/Ovis"><img src="https://img.shields.io/badge/GitHub-AIDC--AI/Ovis-blue?style=flat&logo=github" alt="code"></a>
 <a href="https://huggingface.co/spaces/AIDC-AI/Ovis2.5-9B"><img src="https://img.shields.io/badge/🎨_HF_Spaces-AIDC--AI/Ovis2.5--9B-lightblack" alt="demo"></a>
 <a href="https://huggingface.co/collections/AIDC-AI/ovis25-689ec1474633b2aab8809335"><img src="https://img.shields.io/badge/🤗_Models-AIDC--AI/Ovis2.5-yellow" alt="models"></a>

@@ -316,7 +317,7 @@ We evaluate Ovis2.5 using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)
 
 ## Citation
 If you find Ovis useful, please consider citing the paper
-```
+```bibtex
 @article{lu2025ovis25technicalreport,
 title={Ovis2.5 Technical Report},
 author={Shiyin Lu and Yang Li and Yu Xia and Yuwei Hu and Shanshan Zhao and Yanqing Ma and Zhichao Wei and Yinglun Li and Lunhao Duan and Jianshan Zhao and Yuxuan Han and Haijun Li and Wanying Chen and Junke Tang and Chengkun Hou and Zhixing Du and Tianli Zhou and Wenjie Zhang and Huping Ding and Jiahe Li and Wen Li and Gui Hu and Yiliang Gu and Siran Yang and Jiamang Wang and Hailong Sun and Yibo Wang and Hui Sun and Jinlong Huang and Yuping He and Shengze Shi and Weihong Zhang and Guodong Zheng and Junpeng Jiang and Sensen Gao and Yi-Feng Wu and Sijia Chen and Yuhui Chen and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang},

@@ -325,7 +326,7 @@ If you find Ovis useful, please consider citing the paper
 }
 
 @article{lu2024ovis,
-title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
+title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
 author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
 year={2024},
 journal={arXiv:2405.20797}

@@ -336,4 +337,4 @@ If you find Ovis useful, please consider citing the paper
 This project is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt) (SPDX-License-Identifier: Apache-2.0).
 
 ## Disclaimer
-We used compliance-checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.
+We used compliance-checking algorithms during the training process, to ensure the compliance of the trained model to the best of our ability. Due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.
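Since the metadata edits are spread across the first hunk, here is the frontmatter as the model card should read once this PR is merged, reconstructed purely from the `+` and context lines above (a reference sketch, not an authoritative dump of the merged file):

```yaml
# Reconstructed post-merge frontmatter (illustrative, assembled from the diff above)
---
datasets:
- AIDC-AI/Ovis-dataset
language:
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- MLLM
---
```

The key reordering is cosmetic; the substantive changes are the paper badge now pointing at https://huggingface.co/papers/2508.11737 and the `bibtex` language tag on the citation fence, which enables syntax highlighting when the card is rendered.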