pipeline_tag: visual-question-answering
---

# Model Card for InternVL-Chat-V1-2-Plus

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/X8AXMkOlKeUpNcoJIXKna.webp" alt="Image Description" width="300" height="300">
</p>

[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)

[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 Chinese Explanation\]](https://zhuanlan.zhihu.com/p/675877376)

InternVL-Chat-V1-2-Plus uses the same model architecture as [InternVL-Chat-V1-2](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2); the difference lies in the SFT dataset. InternVL-Chat-V1-2 uses an SFT dataset of only 1.2M samples, while **our Plus version employs an SFT dataset of 12M samples**.

<p align="center">
  <img width="600" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/GIEKCvNc1Y5iMQqLv645p.png">
</p>

| Model | Vision Foundation Model | Release Date | Note |
| :---------------------------------------------------------: | :--------------------------------------------------------------------------: | :----------------------: | :---------------------------------- |
| InternVL-Chat-V1-5 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5)) | InternViT-6B-448px-V1-5 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5)) | 2024.04.18 | supports 4K images and very strong OCR; approaches the performance of GPT-4V and Gemini Pro on benchmarks such as MMMU, DocVQA, ChartQA, and MathVista (🔥 new) |
| InternVL-Chat-V1-2-Plus (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.21 | more SFT data and stronger performance |
| InternVL-Chat-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.11 | scales the LLM up to 34B |
| InternVL-Chat-V1-1 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) | InternViT-6B-448px-V1-0 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) | 2024.01.24 | supports Chinese and stronger OCR |
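
Every checkpoint in the table above is hosted on the Hugging Face Hub, so any of them can be fetched ahead of time for offline use with `huggingface_hub`. A minimal sketch (the repo id comes from the table; the local directory is an arbitrary choice):

```python
from huggingface_hub import snapshot_download

# Fetch the InternVL-Chat-V1-2-Plus weights; local_dir is an arbitrary
# destination directory for offline use.
snapshot_download(
    repo_id="OpenGVLab/InternVL-Chat-V1-2-Plus",
    local_dir="./InternVL-Chat-V1-2-Plus",
)
```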

## Performance

| Model | Image Size | MMMU<br>(val) | MMMU<br>(test) | MathVista<br>(testmini) | MMB<br>(test) | MMB−CN<br>(test) | MMVP | MME | ScienceQA<br>(image) | POPE | TextVQA<br>(val) | SEEDv1<br>(image) | VizWiz<br>(val) | GQA<br>(test) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Qwen−VL−Max\* | unknown | 51.4 | 46.8 | 51.0 | 77.6 | 75.7 | - | - | - | - | 79.5 | - | - | - |
| | | | | | | | | | | | | | | |
| LLaVA−NEXT−34B | 672x672 | 51.1 | 44.7 | 46.5 | 79.3 | 79.0 | - | 1631/397 | 81.8 | 87.7 | 69.5 | 75.9 | 63.8 | 67.1† |
| InternVL−Chat−V1-2 | 448x448 | 51.6 | 46.2 | 47.7 | 82.2 | 81.2 | 56.7 | 1687/489 | 83.3 | 88.0 | 72.5 | 75.6 | 60.0 | 64.0† |
| InternVL−Chat−V1-2−Plus | 448x448 | 50.3 | 45.6 | 59.9 | 83.8 | 82.0 | 58.7 | 1625/553 | 98.1† | 88.7 | 74.1† | 76.4 | - | 66.9† |

- MMBench results are collected from the [leaderboard](https://mmbench.opencompass.org.cn/leaderboard).
- Update (2024-04-21): We have fixed a bug in the evaluation code, and the TextVQA results have been corrected.

## Model Usage

We provide example code to run InternVL-Chat-V1-2-Plus using `transformers`.
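
The snippet below is a minimal sketch of that usage, not a verbatim copy of the official example: it assumes the checkpoint's remote code (loaded via `trust_remote_code=True`) exposes a `chat()` helper as the other InternVL chat releases do, a 448x448 input resolution, an example image path of your own, and a GPU (or GPUs) with enough memory to hold the full model in bfloat16.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

path = "OpenGVLab/InternVL-Chat-V1-2-Plus"

# Load the checkpoint together with its bundled modeling code.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
image_processor = CLIPImageProcessor.from_pretrained(path)

# Resize to the assumed 448x448 input resolution and preprocess.
image = Image.open("./examples/image.jpg").convert("RGB").resize((448, 448))
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Single-turn visual question answering through the remote-code chat() helper.
generation_config = dict(num_beams=1, max_new_tokens=512, do_sample=False)
question = "Please describe the image in detail."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```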

You can also use our [online demo](https://internvl.opengvlab.com/) for a quick experience of this model.