Guilherme34 committed on
Commit
0af29d1
·
verified ·
1 Parent(s): d6e88aa

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +1501 -0
README.md ADDED
@@ -0,0 +1,1501 @@
1
+ ---
2
+ pipeline_tag: any-to-any
3
+ datasets:
4
+ - openbmb/RLAIF-V-Dataset
5
+ library_name: transformers
6
+ language:
7
+ - multilingual
8
+ tags:
9
+ - minicpm-o
10
+ - omni
11
+ - vision
12
+ - ocr
13
+ - multi-image
14
+ - video
15
+ - custom_code
16
+ - audio
17
+ - speech
18
+ - voice cloning
19
+ - live Streaming
20
+ - realtime speech conversation
21
+ - asr
22
+ - tts
23
+ ---
24
+
25
+ <h1>A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone</h1>
26
+
27
+ [GitHub](https://github.com/OpenBMB/MiniCPM-o) | [Online Demo](https://minicpm-omni-webdemo-us.modelbest.cn) | [Technical Blog](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9) | [Join Us](https://mp.weixin.qq.com/mp/wappoc_appmsgcaptcha?poc_token=HAV8UWijqB3ImPSXecZHlOns7NRgpQw9y9EI2_fE&target_url=https%3A%2F%2Fmp.weixin.qq.com%2Fs%2FKIhH2nCURBXuFXAtYRpuXg%3F)
28
+
29
+
30
+ ### News
31
+
32
+ * [2025.06.20] ⭐️⭐️⭐️ Our official [ollama repository](https://ollama.com/openbmb) is released. Try our latest models with [one click](https://ollama.com/openbmb/minicpm-o2.6)!
33
+
34
+ * [2025.03.01] 🚀🚀🚀 RLAIF-V, the alignment technique behind MiniCPM-o, has been accepted by CVPR 2025! The [code](https://github.com/RLHF-V/RLAIF-V), [dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset), and [paper](https://arxiv.org/abs/2405.17220) are open-sourced!
35
+
36
+ * [2025.01.24] 📢📢📢 MiniCPM-o 2.6 technical report is released! [See Here](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9).
37
+
38
+ * [2025.01.19] ⭐️⭐️⭐️ MiniCPM-o tops GitHub Trending and reaches top-2 on Hugging Face Trending!
39
+
40
+ ## MiniCPM-o 2.6
41
+
42
+
43
+ **MiniCPM-o 2.6** is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming. Notable features of MiniCPM-o 2.6 include:
44
+
45
+ - 🔥 **Leading Visual Capability.**
46
+ MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. It also **outperforms GPT-4V and Claude 3.5 Sonnet** in multi-image and video understanding, and shows promising in-context learning capability.
47
+
48
+ - 🎙 **State-of-the-art Speech Capability.** MiniCPM-o 2.6 supports **bilingual real-time speech conversation with configurable voices** in English and Chinese. It **outperforms GPT-4o-realtime on audio understanding tasks** such as ASR and STT translation, and shows **state-of-the-art performance on speech conversation in both semantic and acoustic evaluations in the open-source community**. It also allows for fun features such as emotion/speed/style control, end-to-end voice cloning, role play, etc.
49
+
50
+ - 🎬 **Strong Multimodal Live Streaming Capability.** As a new feature, MiniCPM-o 2.6 can **accept continuous video and audio streams independent of user queries, and support real-time speech interaction**. It **outperforms GPT-4o-202408 and Claude 3.5 Sonnet and shows state-of-the-art performance among open-source models on StreamingBench**, a comprehensive benchmark for real-time video understanding, omni-source (video & audio) understanding, and multimodal contextual understanding.
51
+
52
+ - 💪 **Strong OCR Capability and Others.**
53
+ Advancing the popular visual capabilities of the MiniCPM-V series, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench for models under 25B, surpassing proprietary models such as GPT-4o-202405**.
54
+ Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o and Claude 3.5 Sonnet on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages.
55
+
56
+
57
+ - 🚀 **Superior Efficiency.**
58
+ In addition to its friendly size, MiniCPM-o 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-o 2.6 can efficiently support **multimodal live streaming** on end-side devices such as iPad.
59
+
60
+ - 💫 **Easy Usage.**
61
+ MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [LLaMA-Factory](./docs/llamafactory_train.md), (5) quick local WebUI demo setup with [Gradio](#chat-with-our-demo-on-gradio), and (6) online web demo on [server](https://minicpm-omni-webdemo-us.modelbest.cn/).
62
+
63
+
64
+
65
+ **Model Architecture.**
66
+
67
+ - **End-to-end Omni-modal Architecture.** Different modality encoder/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge.
68
+ - **Omni-modal Live Streaming Mechanism.** (1) We change the offline modality encoder/decoders into online ones for **streaming inputs/outputs.** (2) We devise a **time-division multiplexing (TDM) mechanism** for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential information within small periodic time slices (see the toy interleaving sketch below the framework figure).
69
+ - **Configurable Speech Modeling Design.** We devise a multimodal system prompt, including the traditional text system prompt and **a new audio system prompt that determines the assistant voice**. This enables flexible voice configuration at inference time, and also facilitates end-to-end voice cloning and description-based voice creation.
70
+
71
+ <div align="center">
72
+ <img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpm-o-26-framework-v2.png" , width=100%>
73
+ </div>
74
+
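+ To make the TDM idea concrete, here is a toy sketch of how parallel per-second video and audio streams can be flattened into one sequential stream of small time slices. This is only a conceptual illustration with a hypothetical helper name (the `<unit>` packing mirrors the `get_video_chunk_content` helper shown later in this card); it is not the LLM backbone's internal implementation.
+
+ ```python
+ # Toy illustration of time-division multiplexing (TDM): one video frame and
+ # one 1-second audio chunk per time slice, flattened into a single sequence.
+ # `interleave_streams` is a hypothetical helper for illustration only.
+ def interleave_streams(frames, audio_chunks):
+     """Merge parallel per-second streams into one sequential stream of slices."""
+     sequence = []
+     for frame, audio in zip(frames, audio_chunks):   # one pair per time slice
+         sequence.extend(["<unit>", frame, audio])    # sequential within the slice
+     return sequence
+
+ # frames: list of PIL.Image frames, audio_chunks: list of 1-second audio arrays
+ # omni_sequence = interleave_streams(frames, audio_chunks)
+ ```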
75
+
76
+ ### Evaluation <!-- omit in toc -->
77
+
78
+ <div align="center">
79
+ <img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/radar.jpg" width=90% />
80
+ </div>
81
+
82
+ #### Visual understanding results
83
+
84
+ **Image Understanding:**
85
+
86
+ <div align="center">
87
+ <table style="margin: 0px auto;">
88
+ <thead>
89
+ <tr>
90
+ <th align="left">Model</th>
91
+ <th>Size</th>
92
+ <th>Token Density<sup>+</sup></th>
93
+ <th>OpenCompass</th>
94
+ <th>OCRBench</th>
95
+ <th>MathVista mini</th>
96
+ <th>ChartQA</th>
97
+ <th>MMVet</th>
98
+ <th>MMStar</th>
99
+ <th>MME</th>
100
+ <th>MMB1.1 test</th>
101
+ <th>AI2D</th>
102
+ <th>MMMU val</th>
103
+ <th>HallusionBench</th>
104
+ <th>TextVQA val</th>
105
+ <th>DocVQA test</th>
106
+ <th>MathVerse mini</th>
107
+ <th>MathVision</th>
108
+ <th>MMHal Score</th>
109
+ </tr>
110
+ </thead>
111
+ <tbody align="center">
112
+ <tr>
113
+ <td colspan="19" align="left"><strong>Proprietary</strong></td>
114
+ </tr>
115
+ <tr>
116
+ <td nowrap="nowrap" align="left">GPT-4o-20240513</td>
117
+ <td>-</td>
118
+ <td>1088</td>
119
+ <td><u>69.9</u></td>
120
+ <td>736</td>
121
+ <td>61.3</td>
122
+ <td>85.7</td>
123
+ <td><strong>69.1</strong></td>
124
+ <td>63.9</td>
125
+ <td>2328.7</td>
126
+ <td>82.2</td>
127
+ <td>84.6</td>
128
+ <td><strong>69.2</strong></td>
129
+ <td><strong>55.0</strong></td>
130
+ <td>-</td>
131
+ <td>92.8</td>
132
+ <td><strong>50.2</strong></td>
133
+ <td><strong>30.4</strong></td>
134
+ <td><u>3.6</u></td>
135
+ </tr>
136
+ <tr>
137
+ <td nowrap="nowrap" align="left">Claude3.5-Sonnet</td>
138
+ <td>-</td>
139
+ <td>750</td>
140
+ <td>67.9</td>
141
+ <td>788</td>
142
+ <td>61.6</td>
143
+ <td><strong>90.8</strong></td>
144
+ <td>66.0</td>
145
+ <td>62.2</td>
146
+ <td>1920.0</td>
147
+ <td>78.5</td>
148
+ <td>80.2</td>
149
+ <td><u>65.9</u></td>
150
+ <td>49.9</td>
151
+ <td>-</td>
152
+ <td><strong>95.2</strong></td>
153
+ <td>-</td>
154
+ <td>-</td>
155
+ <td>3.4</td>
156
+ </tr>
157
+ <tr>
158
+ <td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
159
+ <td>-</td>
160
+ <td>-</td>
161
+ <td>64.4</td>
162
+ <td>754</td>
163
+ <td>57.7</td>
164
+ <td>81.3</td>
165
+ <td>64.0</td>
166
+ <td>59.1</td>
167
+ <td>2110.6</td>
168
+ <td>73.9</td>
169
+ <td>79.1</td>
170
+ <td>60.6</td>
171
+ <td>45.6</td>
172
+ <td>73.5</td>
173
+ <td>86.5</td>
174
+ <td>-</td>
175
+ <td>19.2</td>
176
+ <td>-</td>
177
+ </tr>
178
+ <tr>
179
+ <td nowrap="nowrap" align="left">GPT-4o-mini-20240718</td>
180
+ <td>-</td>
181
+ <td>1088</td>
182
+ <td>64.1</td>
183
+ <td>785</td>
184
+ <td>52.4</td>
185
+ <td>-</td>
186
+ <td>66.9</td>
187
+ <td>54.8</td>
188
+ <td>2003.4</td>
189
+ <td>76.0</td>
190
+ <td>77.8</td>
191
+ <td>60.0</td>
192
+ <td>46.1</td>
193
+ <td>-</td>
194
+ <td>-</td>
195
+ <td>-</td>
196
+ <td>-</td>
197
+ <td>3.3</td>
198
+ </tr>
199
+ <tr>
200
+ <td colspan="19" align="left"><strong>Open Source</strong></td>
201
+ </tr>
202
+ <tr>
203
+ <td nowrap="nowrap" align="left">Cambrian-34B</td>
204
+ <td>34B</td>
205
+ <td><u>1820</u></td>
206
+ <td>58.3</td>
207
+ <td>591</td>
208
+ <td>50.3</td>
209
+ <td>75.6</td>
210
+ <td>53.2</td>
211
+ <td>54.2</td>
212
+ <td>2049.9</td>
213
+ <td>77.8</td>
214
+ <td>79.5</td>
215
+ <td>50.4</td>
216
+ <td>41.6</td>
217
+ <td>76.7</td>
218
+ <td>75.5</td>
219
+ <td>-</td>
220
+ <td>-</td>
221
+ <td>-</td>
222
+ </tr>
223
+ <tr>
224
+ <td nowrap="nowrap" align="left">GLM-4V-9B</td>
225
+ <td>13B</td>
226
+ <td>784</td>
227
+ <td>59.1</td>
228
+ <td>776</td>
229
+ <td>51.1</td>
230
+ <td>-</td>
231
+ <td>58.0</td>
232
+ <td>54.8</td>
233
+ <td>2018.8</td>
234
+ <td>67.9</td>
235
+ <td>71.2</td>
236
+ <td>46.9</td>
237
+ <td>45.0</td>
238
+ <td>-</td>
239
+ <td>-</td>
240
+ <td>-</td>
241
+ <td>-</td>
242
+ <td>-</td>
243
+ </tr>
244
+ <tr>
245
+ <td nowrap="nowrap" align="left">Pixtral-12B</td>
246
+ <td>12B</td>
247
+ <td>256</td>
248
+ <td>61.0</td>
249
+ <td>685</td>
250
+ <td>56.9</td>
251
+ <td>81.8</td>
252
+ <td>58.5</td>
253
+ <td>54.5</td>
254
+ <td>-</td>
255
+ <td>72.7</td>
256
+ <td>79.0</td>
257
+ <td>51.1</td>
258
+ <td>47.0</td>
259
+ <td>75.7</td>
260
+ <td>90.7</td>
261
+ <td>-</td>
262
+ <td>-</td>
263
+ <td>-</td>
264
+ </tr>
265
+ <tr>
266
+ <td nowrap="nowrap" align="left">DeepSeek-VL2-27B (4B)</td>
267
+ <td>27B</td>
268
+ <td>672</td>
269
+ <td>66.4</td>
270
+ <td>809</td>
271
+ <td>63.9</td>
272
+ <td>86.0</td>
273
+ <td>60.0</td>
274
+ <td>61.9</td>
275
+ <td>2253.0</td>
276
+ <td>81.2</td>
277
+ <td>83.8</td>
278
+ <td>54.0</td>
279
+ <td>45.3</td>
280
+ <td><u>84.2</u></td>
281
+ <td>93.3</td>
282
+ <td>-</td>
283
+ <td>-</td>
284
+ <td>3.0</td>
285
+ </tr>
286
+ <tr>
287
+ <td nowrap="nowrap" align="left">Qwen2-VL-7B</td>
288
+ <td>8B</td>
289
+ <td>784</td>
290
+ <td>67.1</td>
291
+ <td><u>866</u></td>
292
+ <td>58.2</td>
293
+ <td>83.0</td>
294
+ <td>62.0</td>
295
+ <td>60.7</td>
296
+ <td>2326.0</td>
297
+ <td>81.8</td>
298
+ <td>83.0</td>
299
+ <td>54.1</td>
300
+ <td>50.6</td>
301
+ <td><strong>84.3</strong></td>
302
+ <td><u>94.5</u></td>
303
+ <td>31.9</td>
304
+ <td>16.3</td>
305
+ <td>3.2</td>
306
+ </tr>
307
+ <tr>
308
+ <td nowrap="nowrap" align="left">LLaVA-OneVision-72B</td>
309
+ <td>72B</td>
310
+ <td>182</td>
311
+ <td>68.1</td>
312
+ <td>741</td>
313
+ <td>67.5</td>
314
+ <td>83.7</td>
315
+ <td>60.6</td>
316
+ <td><strong>65.8</strong></td>
317
+ <td>2261.0</td>
318
+ <td><strong>85.0</strong></td>
319
+ <td><u>85.6</u></td>
320
+ <td>56.8</td>
321
+ <td>49.0</td>
322
+ <td>80.5</td>
323
+ <td>91.3</td>
324
+ <td>39.1</td>
325
+ <td>-</td>
326
+ <td>3.5</td>
327
+ </tr>
328
+ <tr>
329
+ <td nowrap="nowrap" align="left">InternVL2.5-8B</td>
330
+ <td>8B</td>
331
+ <td>706</td>
332
+ <td>68.3</td>
333
+ <td>822</td>
334
+ <td><u>64.4</u></td>
335
+ <td>84.8</td>
336
+ <td>62.8</td>
337
+ <td>62.8</td>
338
+ <td>2344.0</td>
339
+ <td><u>83.6</u></td>
340
+ <td>84.5</td>
341
+ <td>56.0</td>
342
+ <td>50.1</td>
343
+ <td>79.1</td>
344
+ <td>93.0</td>
345
+ <td>39.5</td>
346
+ <td>19.7</td>
347
+ <td>3.4</td>
348
+ </tr>
349
+ <tr>
350
+ <td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
351
+ <td>8B</td>
352
+ <td><strong>2822</strong></td>
353
+ <td>65.2</td>
354
+ <td>852*</td>
355
+ <td>60.6</td>
356
+ <td>79.4</td>
357
+ <td>60.0</td>
358
+ <td>57.5</td>
359
+ <td><u>2348.4*</u></td>
360
+ <td>78.0</td>
361
+ <td>82.1</td>
362
+ <td>49.8*</td>
363
+ <td>48.1*</td>
364
+ <td>80.1</td>
365
+ <td>90.8</td>
366
+ <td>25.7</td>
367
+ <td>18.3</td>
368
+ <td>3.6</td>
369
+ </tr>
370
+ <tr>
371
+ <td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
372
+ <td>8B</td>
373
+ <td><strong>2822</strong></td>
374
+ <td><strong>70.2</strong></td>
375
+ <td><strong>897*</strong></td>
376
+ <td><strong>71.9*</strong></td>
377
+ <td><u>86.9*</u></td>
378
+ <td><u>67.5</u></td>
379
+ <td><u>64.0</u></td>
380
+ <td><strong>2372.0*</strong></td>
381
+ <td>80.5</td>
382
+ <td><strong>85.8</strong></td>
383
+ <td>50.4*</td>
384
+ <td><u>51.9</u></td>
385
+ <td>82.0</td>
386
+ <td>93.5</td>
387
+ <td><u>41.4*</u></td>
388
+ <td><u>23.1*</u></td>
389
+ <td><strong>3.8</strong></td>
390
+ </tr>
391
+ </tbody>
392
+ </table>
393
+ </div>
394
+ * We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set.
395
+
396
+
397
+ <sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.
398
+
399
+ Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
400
+
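+ As a quick back-of-the-envelope illustration of the token-density arithmetic above (an informal sketch using the 1344x1344 maximum resolution and the ~640 visual tokens quoted earlier, not an official evaluation script):
+
+ ```python
+ # Back-of-the-envelope token-density check for MiniCPM-o 2.6, assuming a
+ # 1344x1344 (~1.8M pixel) image is encoded into 640 visual tokens.
+ max_pixels = 1344 * 1344          # ~1.8 million pixels
+ visual_tokens = 640               # visual tokens produced for such an image
+ token_density = max_pixels / visual_tokens
+ print(f"token density ~= {token_density:.0f} pixels per visual token")  # ~2822
+ ```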
401
+
402
+ **Multi-image and Video Understanding:**
403
+
404
+ <details>
405
+ <summary>click to view</summary>
406
+ <div align="center">
407
+
408
+ <table style="margin: 0px auto;">
409
+ <thead>
410
+ <tr>
411
+ <th align="left">Model</th>
412
+ <th>Size</th>
413
+ <th>BLINK val</th>
414
+ <th>Mantis Eval</th>
415
+ <th>MIRB</th>
416
+ <th>Video-MME (wo / w subs)</th>
417
+ </tr>
418
+ </thead>
419
+ <tbody align="center">
420
+ <tr>
421
+ <td colspan="6" align="left"><strong>Proprietary</strong></td>
422
+ </tr>
423
+ <tr>
424
+ <td nowrap="nowrap" align="left">GPT-4o-20240513</td>
425
+ <td>-</td>
426
+ <td><strong>68.0</strong></td>
427
+ <td>-</td>
428
+ <td>-</td>
429
+ <td><strong>71.9/77.2</strong></td>
430
+ </tr>
431
+ <tr>
432
+ <td nowrap="nowrap" align="left">GPT4V</td>
433
+ <td>-</td>
434
+ <td>54.6</td>
435
+ <td>62.7</td>
436
+ <td>53.1</td>
437
+ <td>59.9/63.3</td>
438
+ </tr>
439
+ <tr>
440
+ <td colspan="6" align="left"><strong>Open-source</strong></td>
441
+ </tr>
442
+ <tr>
443
+ <td nowrap="nowrap" align="left">LLaVA-NeXT-Interleave 14B</td>
444
+ <td>14B</td>
445
+ <td>52.6</td>
446
+ <td>66.4</td>
447
+ <td>30.2</td>
448
+ <td>-</td>
449
+ </tr>
450
+ <tr>
451
+ <td nowrap="nowrap" align="left">LLaVA-OneVision-72B</td>
452
+ <td>72B</td>
453
+ <td>55.4</td>
454
+ <td><strong>77.6</strong></td>
455
+ <td>-</td>
456
+ <td><u>66.2/69.5</u></td>
457
+ </tr>
458
+ <tr>
459
+ <td nowrap="nowrap" align="left">MANTIS 8B</td>
460
+ <td>8B</td>
461
+ <td>49.1</td>
462
+ <td>59.5</td>
463
+ <td>34.8</td>
464
+ <td>-</td>
465
+ </tr>
466
+ <tr>
467
+ <td nowrap="nowrap" align="left">Qwen2-VL-7B</td>
468
+ <td>8B</td>
469
+ <td>53.2</td>
470
+ <td>69.6*</td>
471
+ <td><strong>67.6*</strong></td>
472
+ <td>63.3/69.0</td>
473
+ </tr>
474
+ <tr>
475
+ <td nowrap="nowrap" align="left">InternVL2.5-8B</td>
476
+ <td>8B</td>
477
+ <td>54.8</td>
478
+ <td>67.7</td>
479
+ <td>52.5</td>
480
+ <td>64.2/66.9</td>
481
+ </tr>
482
+ <tr>
483
+ <td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
484
+ <td>8B</td>
485
+ <td>53.0</td>
486
+ <td>69.1</td>
487
+ <td>53.8</td>
488
+ <td>60.9/63.6</td>
489
+ </tr>
490
+ <tr>
491
+ <td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
492
+ <td>8B</td>
493
+ <td><u>56.7</u></td>
494
+ <td><u>71.9</u></td>
495
+ <td><u>58.6</u></td>
496
+ <td>63.9/67.9</td>
497
+ </tr>
498
+ </tbody>
499
+ </table>
500
+
501
+ </div>
502
+ * We evaluate officially released checkpoints by ourselves.
503
+
504
+ </details>
505
+
506
+
507
+ #### Audio understanding and speech conversation results.
508
+
509
+ **Audio Understanding:**
510
+
511
+ <div align="center">
512
+ <table style="margin: 0px auto;">
513
+ <thead>
514
+ <tr>
515
+ <th align="left">Task</th>
516
+ <th>Size</th>
517
+ <th colspan="3">ASR (zh)</th>
518
+ <th colspan="3">ASR (en)</th>
519
+ <th colspan="2">AST</th>
520
+ <th>Emotion</th>
521
+ </tr>
522
+ <tr>
523
+ <th align="left">Metric</th>
524
+ <td></td>
525
+ <th colspan="3">CER↓</th>
526
+ <th colspan="3">WER↓</th>
527
+ <th colspan="2">BLEU↑</th>
528
+ <th>ACC↑</th>
529
+ </tr>
530
+ <tr>
531
+ <th align="left">Dataset</th>
532
+ <td></td>
533
+ <th>AISHELL-1</th>
534
+ <th>Fleurs zh</th>
535
+ <th>WenetSpeech test-net</th>
536
+ <th>LibriSpeech test-clean</th>
537
+ <th>GigaSpeech</th>
538
+ <th>TED-LIUM</th>
539
+ <th>CoVoST en2zh</th>
540
+ <th>CoVoST zh2en</th>
541
+ <th>MELD emotion</th>
542
+ </tr>
543
+ </thead>
544
+ <tbody align="center">
545
+ <tr>
546
+ <td colspan="11" align="left"><strong>Proprietary</strong></td>
547
+ </tr>
548
+ <tr>
549
+ <td nowrap="nowrap" align="left">GPT-4o-Realtime</td>
550
+ <td>-</td>
551
+ <td>7.3*</td>
552
+ <td><u>5.4*</u></td>
553
+ <td>28.9*</td>
554
+ <td>2.6*</td>
555
+ <td>12.9*</td>
556
+ <td>4.8*</td>
557
+ <td>37.1*</td>
558
+ <td>15.7*</td>
559
+ <td>33.2*</td>
560
+ </tr>
561
+ <tr>
562
+ <td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
563
+ <td>-</td>
564
+ <td>4.5*</td>
565
+ <td>5.9*</td>
566
+ <td>14.3*</td>
567
+ <td>2.9*</td>
568
+ <td>10.6*</td>
569
+ <td><strong>3.0*</strong></td>
570
+ <td><u>47.3*</u></td>
571
+ <td>22.6*</td>
572
+ <td>48.4*</td>
573
+ </tr>
574
+ <tr>
575
+ <td colspan="11" align="left"><strong>Open-Source</strong></td>
576
+ </tr>
577
+ <tr>
578
+ <td nowrap="nowrap" align="left">Qwen2-Audio-7B</td>
579
+ <td>8B</td>
580
+ <td>-</td>
581
+ <td>7.5</td>
582
+ <td>-</td>
583
+ <td><strong>1.6</strong></td>
584
+ <td>-</td>
585
+ <td>-</td>
586
+ <td>45.2</td>
587
+ <td><u>24.4</u></td>
588
+ <td><strong>55.3</strong></td>
589
+ </tr>
590
+ <tr>
591
+ <td nowrap="nowrap" align="left">Qwen2-Audio-7B-Instruct</td>
592
+ <td>8B</td>
593
+ <td>2.6*</td>
594
+ <td>6.9*</td>
595
+ <td><u>10.3*</u></td>
596
+ <td>3.1*</td>
597
+ <td><u>9.7</u>*</td>
598
+ <td>5.9*</td>
599
+ <td>39.5*</td>
600
+ <td>22.9*</td>
601
+ <td>17.4*</td>
602
+ </tr>
603
+ <tr>
604
+ <td nowrap="nowrap" align="left">GLM-4-Voice-Base</td>
605
+ <td>9B</td>
606
+ <td><u>2.5</u></td>
607
+ <td>-</td>
608
+ <td>-</td>
609
+ <td>2.8</td>
610
+ <td>-</td>
611
+ <td>-</td>
612
+ <td>-</td>
613
+ <td>-</td>
+ <td>-</td>
614
+ </tr>
615
+ <tr>
616
+ <td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
617
+ <td>8B</td>
618
+ <td><strong>1.6</strong></td>
619
+ <td><strong>4.4</strong></td>
620
+ <td><strong>6.9</strong></td>
621
+ <td><u>1.7</u></td>
622
+ <td><strong>8.7</strong></td>
623
+ <td><strong>3.0</strong></td>
624
+ <td><strong>48.2</strong></td>
625
+ <td><strong>27.2</strong></td>
626
+ <td><u>52.4</u></td>
627
+ </tr>
628
+ </tbody>
629
+ </table>
630
+ </div>
631
+ * We evaluate officially released checkpoints by ourselves.<br><br>
632
+
633
+ **Speech Generation:**
634
+
635
+ <div align="center">
636
+ <table style="margin: 0px auto;">
637
+ <thead>
638
+ <tr>
639
+ <th align="left">Task</th>
640
+ <th>Size</th>
641
+ <th colspan="9">SpeechQA</th>
642
+ </tr>
643
+ <tr>
644
+ <th align="left">Metric</th>
645
+ <th></th>
646
+ <th colspan="3">ACC↑</th>
647
+ <th>G-Eval (10 point)↑</th>
648
+ <th>Semantic ELO score↑</th>
649
+ <th>Acoustic ELO score↑</th>
650
+ <th>Overall ELO score↑</th>
651
+ <th>UTMOS↑</th>
652
+ <th>ASR-WER↓</th>
653
+ </tr>
654
+ <tr>
655
+ <th align="left">Dataset</th>
656
+ <th></th>
657
+ <th>Speech Llama Q.</th>
658
+ <th>Speech Web Q.</th>
659
+ <th>Speech Trivia QA</th>
660
+ <th>Speech AlpacaEval</th>
661
+ <th colspan="5">AudioArena</th>
662
+ </tr>
663
+ </thead>
664
+ <tbody align="center">
665
+ <tr>
666
+ <td colspan="11" align="left"><strong>Proprietary</strong></td>
667
+ </tr>
668
+ <tr>
669
+ <td nowrap="nowrap" align="left">GPT-4o-Realtime</td>
670
+ <td></td>
671
+ <td><strong>71.7</strong></td>
672
+ <td><strong>51.6</strong></td>
673
+ <td><strong>69.7</strong></td>
674
+ <td><strong>7.4</strong></td>
675
+ <td><strong>1157</strong></td>
676
+ <td><strong>1203</strong></td>
677
+ <td><strong>1200</strong></td>
678
+ <td><strong>4.2</strong></td>
679
+ <td><strong>2.3</strong></td>
680
+ </tr>
681
+ <tr>
682
+ <td colspan="11" align="left"><strong>Open-Source</strong></td>
683
+ </tr>
684
+ <tr>
685
+ <td nowrap="nowrap" align="left">GLM-4-Voice</td>
686
+ <td>9B</td>
687
+ <td>50.0</td>
688
+ <td>32.0</td>
689
+ <td>36.4</td>
690
+ <td><u>5.1</u></td>
691
+ <td>999</td>
692
+ <td>1147</td>
693
+ <td>1035</td>
694
+ <td><u>4.1</u></td>
695
+ <td><u>11.7</u></td>
696
+ </tr>
697
+ <tr>
698
+ <td nowrap="nowrap" align="left">Llama-Omni</td>
699
+ <td>8B</td>
700
+ <td>45.3</td>
701
+ <td>22.9</td>
702
+ <td>10.7</td>
703
+ <td>3.9</td>
704
+ <td>960</td>
705
+ <td>878</td>
706
+ <td>897</td>
707
+ <td>3.2</td>
708
+ <td>24.3</td>
709
+ </tr>
710
+ <tr>
711
+ <td nowrap="nowrap" align="left">Moshi</td>
712
+ <td>7B</td>
713
+ <td>43.7</td>
714
+ <td>23.8</td>
715
+ <td>16.7</td>
716
+ <td>2.4</td>
717
+ <td>871</td>
718
+ <td>808</td>
719
+ <td>875</td>
720
+ <td>2.8</td>
721
+ <td>8.2</td>
722
+ </tr>
723
+ <tr>
724
+ <td nowrap="nowrap" align="left">Mini-Omni</td>
725
+ <td>1B</td>
726
+ <td>22.0</td>
727
+ <td>12.8</td>
728
+ <td>6.9</td>
729
+ <td>2.5</td>
730
+ <td>926</td>
731
+ <td>803</td>
732
+ <td>865</td>
733
+ <td>3.4</td>
734
+ <td>10.0</td>
735
+ </tr>
736
+ <tr>
737
+ <td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
738
+ <td>8B</td>
739
+ <td><u>61.0</u></td>
740
+ <td><u>40.0</u></td>
741
+ <td><u>40.2</u></td>
742
+ <td><u>5.1</u></td>
743
+ <td><u>1088</u></td>
744
+ <td><u>1163</u></td>
745
+ <td><u>1131</u></td>
746
+ <td><strong>4.2</strong></td>
747
+ <td>9.8</td>
748
+ </tr>
749
+ </tbody>
750
+ </table>
751
+ </div>
752
+ All results are from AudioEvals, and the evaluation methods along with further details can be found in <a href="https://github.com/OpenBMB/UltraEval-Audio" target="_blank">UltraEval-Audio</a>.<br><br>
753
+
754
+ **End-to-end Voice Cloning**
755
+
756
+ <div align="center">
757
+ <table style="margin: 0px auto;">
758
+ <thead>
759
+ <tr>
760
+ <th align="left">Task</th>
761
+ <th colspan="2">Voice cloning</th>
762
+ </tr>
763
+ <tr>
764
+ <th align="left">Metric</th>
765
+ <th>SIMO↑</th>
766
+ <th>SIMO↑</th>
767
+ </tr>
768
+ <tr>
769
+ <th align="left">Dataset</th>
770
+ <th>Seed-TTS test-zh</th>
771
+ <th>Seed-TTS test-en</th>
772
+ </tr>
773
+ </thead>
774
+ <tbody align="center">
775
+ <tr>
776
+ <td nowrap="nowrap" align="left">F5-TTS</td>
777
+ <td><strong>76</strong></td>
778
+ <td><strong>67</strong></td>
779
+ </tr>
780
+ <tr>
781
+ <td nowrap="nowrap" align="left">CosyVoice</td>
782
+ <td><u>75</u></td>
783
+ <td><u>64</u></td>
784
+ </tr>
785
+ <tr>
786
+ <td nowrap="nowrap" align="left">FireRedTTS</td>
787
+ <td>63</td>
788
+ <td>46</td>
789
+ </tr>
790
+ <tr>
791
+ <td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
792
+ <td>57</td>
793
+ <td>47</td>
794
+ </tr>
795
+ </tbody>
796
+ </table>
797
+ </div>
798
+
799
+
800
+ #### Multimodal live streaming results.
801
+
802
+ **Multimodal Live Streaming:** results on StreamingBench
803
+
804
+ <table style="margin: 0px auto;">
805
+ <thead>
806
+ <tr>
807
+ <th align="left">Model</th>
808
+ <th>Size</th>
809
+ <th>Real-Time Video Understanding</th>
810
+ <th>Omni-Source Understanding</th>
811
+ <th>Contextual Understanding</th>
812
+ <th>Overall</th>
813
+ </tr>
814
+ </thead>
815
+ <tbody align="center">
816
+ <tr>
817
+ <td colspan="7" align="left"><strong>Proprietary</strong></td>
818
+ </tr>
819
+ <tr>
820
+ <td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
821
+ <td>-</td>
822
+ <td><u>77.4</u></td>
823
+ <td><strong>67.8</strong></td>
824
+ <td><strong>51.1</strong></td>
825
+ <td><strong>70.3</strong></td>
826
+ </tr>
827
+ <tr>
828
+ <td nowrap="nowrap" align="left">GPT-4o-202408</td>
829
+ <td>-</td>
830
+ <td>74.5</td>
831
+ <td>51.0</td>
832
+ <td><u>48.0</u></td>
833
+ <td>64.1</td>
834
+ </tr>
835
+ <tr>
836
+ <td nowrap="nowrap" align="left">Claude-3.5-Sonnet</td>
837
+ <td>-</td>
838
+ <td>74.0</td>
839
+ <td>41.4</td>
840
+ <td>37.8</td>
841
+ <td>59.7</td>
842
+ </tr>
843
+ <tr>
844
+ <td colspan="9" align="left"><strong>Open-source</strong></td>
845
+ </tr>
846
+ <tr>
847
+ <td nowrap="nowrap" align="left">VILA-1.5</td>
848
+ <td>8B</td>
849
+ <td>61.5</td>
850
+ <td>37.5</td>
851
+ <td>26.7</td>
852
+ <td>49.5</td>
853
+ </tr>
854
+ <tr>
855
+ <td nowrap="nowrap" align="left">LongVA</td>
856
+ <td>7B</td>
857
+ <td>63.1</td>
858
+ <td>35.9</td>
859
+ <td>30.2</td>
860
+ <td>50.7</td>
861
+ </tr>
862
+ <tr>
863
+ <td nowrap="nowrap" align="left">LLaVA-Next-Video-34B</td>
864
+ <td>34B</td>
865
+ <td>69.8</td>
866
+ <td>41.7</td>
867
+ <td>34.3</td>
868
+ <td>56.7</td>
869
+ </tr>
870
+ <tr>
871
+ <td nowrap="nowrap" align="left">Qwen2-VL-7B</td>
872
+ <td>8B</td>
873
+ <td>71.2</td>
874
+ <td>40.7</td>
875
+ <td>33.1</td>
876
+ <td>57.0</td>
877
+ </tr>
878
+ <tr>
879
+ <td nowrap="nowrap" align="left">InternVL2-8B</td>
880
+ <td>8B</td>
881
+ <td>70.1</td>
882
+ <td>42.7</td>
883
+ <td>34.1</td>
884
+ <td>57.0</td>
885
+ </tr>
886
+ <tr>
887
+ <td nowrap="nowrap" align="left">VITA-1.5</td>
888
+ <td>8B</td>
889
+ <td>70.9</td>
890
+ <td>40.8</td>
891
+ <td>35.8</td>
892
+ <td>57.4</td>
893
+ </tr>
894
+ <tr>
895
+ <td nowrap="nowrap" align="left">LLaVA-OneVision-7B</td>
896
+ <td>8B</td>
897
+ <td>74.3</td>
898
+ <td>40.8</td>
899
+ <td>31.0</td>
900
+ <td>58.4</td>
901
+ </tr>
902
+ <tr>
903
+ <td nowrap="nowrap" align="left">InternLM-XC2.5-OL-7B</td>
904
+ <td>8B</td>
905
+ <td>75.4</td>
906
+ <td>46.2</td>
907
+ <td>33.6</td>
908
+ <td>60.8</td>
909
+ </tr>
910
+ <tr>
911
+ <td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
912
+ <td>8B</td>
913
+ <td>72.4</td>
914
+ <td>40.2</td>
915
+ <td>33.4</td>
916
+ <td>57.7</td>
917
+ </tr>
918
+ <tr>
919
+ <td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
920
+ <td>8B</td>
921
+ <td><strong>79.9</strong></td>
922
+ <td><u>53.4</u></td>
923
+ <td>38.5</td>
924
+ <td><u>66.0</u></td>
925
+ </tr>
926
+ </tbody>
927
+ </table>
928
+
929
+
930
+
931
+ ### Examples <!-- omit in toc -->
932
+
933
+ We deploy MiniCPM-o 2.6 on end-side devices. The demo video is a raw-speed recording on an iPad Pro and a web demo.
934
+
935
+ <div align="center">
936
+ <a href="https://youtu.be/JFJg9KZ_iZk"><img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/o-2dot6-demo-video-preview.png", width=70%></a>
937
+ </div>
938
+
939
+ <br>
940
+
941
+
942
+ <div style="display: flex; flex-direction: column; align-items: center;">
943
+ <img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpmo2_6/minicpmo2_6_math_intersect.png" alt="math" style="margin-bottom: 5px;">
944
+ <img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpmo2_6/minicpmo2_6_diagram_train_NN.png" alt="diagram" style="margin-bottom: 5px;">
945
+ <img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpmo2_6/minicpmo2_6_multi-image_bike.png" alt="bike" style="margin-bottom: 5px;">
946
+ </div>
947
+
948
+
949
+
950
+
951
+ ## Online Demo
952
+ Click here to try the online demo of [MiniCPM-o 2.6](https://minicpm-omni-webdemo-us.modelbest.cn).
953
+
954
+
955
+ ## Usage
956
+ Inference using Hugging Face Transformers on NVIDIA GPUs. Please ensure that `transformers==4.44.2` is installed, as other versions may have compatibility issues that we are still investigating. Requirements tested on Python 3.10:
957
+ ```
958
+ Pillow==10.1.0
959
+ torch==2.3.1
960
+ torchaudio==2.3.1
961
+ torchvision==0.18.1
962
+ transformers==4.44.2
963
+ librosa==0.9.0
964
+ soundfile==0.12.1
965
+ vector-quantize-pytorch==1.18.5
966
+ vocos==0.1.0
967
+ decord
968
+ moviepy
969
+ ```
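+
+ If you want to fail fast on a mismatched environment, the following optional check simply mirrors the version pin above (illustrative, not required):
+
+ ```python
+ # Optional: abort early if the installed transformers version differs from the
+ # pinned 4.44.2 that this model card was tested with.
+ import transformers
+
+ assert transformers.__version__ == "4.44.2", (
+     f"Expected transformers==4.44.2, found {transformers.__version__}"
+ )
+ ```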
970
+
971
+
972
+ ### Model initialization
973
+ ```python
974
+
975
+ import torch
976
+ from PIL import Image
977
+ from transformers import AutoModel, AutoTokenizer
978
+
979
+ # Load the full omni model (by default, init_vision/init_audio/init_tts are all True).
+ # To load a vision-only model, set init_audio=False and init_tts=False.
+ # To load an audio-only model, set init_vision=False.
982
+ model = AutoModel.from_pretrained(
983
+ 'openbmb/MiniCPM-o-2_6',
984
+ trust_remote_code=True,
985
+ attn_implementation='sdpa', # sdpa or flash_attention_2
986
+ torch_dtype=torch.bfloat16,
987
+ init_vision=True,
988
+ init_audio=True,
989
+ init_tts=True
990
+ )
991
+
992
+
993
+ model = model.eval().cuda()
994
+ tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)
995
+
996
+ # Except in vision-only mode, the TTS processor and the vocos vocoder also need to be initialized
997
+ model.init_tts()
998
+ ```
999
+
1000
+ If you are using an older version of PyTorch, you might encounter the error `"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'`. In that case, please convert the TTS module to float32:
1001
+ ```python
1002
+ model.tts.float()
1003
+ ```
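+
+ An optional sanity check that the cast took effect (illustrative only; `model.tts` is the TTS head initialized by `init_tts()` above):
+
+ ```python
+ # Optional: confirm the TTS head now holds float32 parameters while the rest
+ # of the model stays in bfloat16.
+ print(next(model.tts.parameters()).dtype)  # expected: torch.float32
+ print(model.dtype)                         # expected: torch.bfloat16
+ ```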
1004
+
1005
+ ### Omni mode
1006
+ We provide two inference modes: chat and streaming.
1007
+
1008
+ #### Chat inference
1009
+ ```python
1010
+ import math
1011
+ import numpy as np
1012
+ from PIL import Image
1013
+ from moviepy.editor import VideoFileClip
1014
+ import tempfile
1015
+ import librosa
1016
+ import soundfile as sf
1017
+
1018
+ def get_video_chunk_content(video_path, flatten=True):
+     video = VideoFileClip(video_path)
+     print('video_duration:', video.duration)
+
+     # extract the audio track as 16 kHz mono PCM
+     with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_audio_file:
+         temp_audio_file_path = temp_audio_file.name
+         video.audio.write_audiofile(temp_audio_file_path, codec="pcm_s16le", fps=16000)
+         audio_np, sr = librosa.load(temp_audio_file_path, sr=16000, mono=True)
+     num_units = math.ceil(video.duration)
+
+     # pack 1 video frame + 1 s of audio into each "<unit>"
+     contents = []
+     for i in range(num_units):
+         frame = video.get_frame(i + 1)
+         image = Image.fromarray(frame.astype(np.uint8))
+         audio = audio_np[sr * i: sr * (i + 1)]
+         if flatten:
+             contents.extend(["<unit>", image, audio])
+         else:
+             contents.append(["<unit>", image, audio])
+
+     return contents
1040
+
1041
+ video_path="assets/Skiing.mp4"
1042
+ # if use voice clone prompt, please set ref_audio
1043
+ ref_audio_path = 'assets/demo.wav'
1044
+ ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
1045
+ sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='omni', language='en')
1046
+ # or use default prompt
1047
+ # sys_msg = model.get_sys_prompt(mode='omni', language='en')
1048
+
1049
+ contents = get_video_chunk_content(video_path)
1050
+ msg = {"role":"user", "content": contents}
1051
+ msgs = [sys_msg, msg]
1052
+
1053
+ # please set generate_audio=True and output_audio_path to save the tts result
1054
+ generate_audio = True
1055
+ output_audio_path = 'output.wav'
1056
+
1057
+ res = model.chat(
1058
+ msgs=msgs,
1059
+ tokenizer=tokenizer,
1060
+ sampling=True,
1061
+ temperature=0.5,
1062
+ max_new_tokens=4096,
1063
+ omni_input=True, # please set omni_input=True when omni inference
1064
+ use_tts_template=True,
1065
+ generate_audio=generate_audio,
1066
+ output_audio_path=output_audio_path,
1067
+ max_slice_nums=1,
1068
+ use_image_id=False,
1069
+ return_dict=True
1070
+ )
1071
+ print(res)
1072
+
1073
+ ## You will get the answer: The person in the picture is skiing down a snowy slope.
1074
+ # import IPython
1075
+ # IPython.display.Audio('output.wav')
1076
+
1077
+ ```
1078
+ #### Streaming inference
1079
+ ```python
1080
+ # a new conversation needs reset_session() first; this resets the KV cache
1081
+ model.reset_session()
1082
+
1083
+ contents = get_video_chunk_content(video_path, flatten=False)
1084
+ session_id = '123'
1085
+ generate_audio = True
1086
+
1087
+ # 1. prefill system prompt
1088
+ res = model.streaming_prefill(
1089
+ session_id=session_id,
1090
+ msgs=[sys_msg],
1091
+ tokenizer=tokenizer
1092
+ )
1093
+
1094
+ # 2. prefill video/audio chunks
1095
+ for content in contents:
+     msgs = [{"role":"user", "content": content}]
+     res = model.streaming_prefill(
+         session_id=session_id,
+         msgs=msgs,
+         tokenizer=tokenizer
+     )
1102
+
1103
+ # 3. generate
1104
+ res = model.streaming_generate(
1105
+ session_id=session_id,
1106
+ tokenizer=tokenizer,
1107
+ temperature=0.5,
1108
+ generate_audio=generate_audio
1109
+ )
1110
+
1111
+ audios = []
1112
+ text = ""
1113
+
1114
+ if generate_audio:
+     for r in res:
+         audio_wav = r.audio_wav
+         sampling_rate = r.sampling_rate
+         txt = r.text
+
+         audios.append(audio_wav)
+         text += txt
+
+     res = np.concatenate(audios)
+     sf.write("output.wav", res, samplerate=sampling_rate)
+     print("text:", text)
+     print("audio saved to output.wav")
+ else:
+     for r in res:
+         text += r['text']
+     print("text:", text)
1131
+
1132
+ ```
1133
+
1134
+
1135
+ ### Speech and Audio Mode
1136
+
1137
+ Model initialization
1138
+
1139
+ ```python
1140
+ import torch
1141
+ import librosa
1142
+ from transformers import AutoModel, AutoTokenizer
1143
+
1144
+ model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True,
1145
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
1146
+ model = model.eval().cuda()
1147
+ tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)
1148
+
1149
+ model.init_tts()
1150
+ model.tts.float()
1151
+ ```
1152
+
1153
+ <hr/>
1154
+
1155
+ #### Mimick
1156
+
1157
+ `Mimick` task reflects a model's end-to-end speech modeling capability. The model takes audio input, and outputs an ASR transcription and subsequently reconstructs the original audio with high similarity. The higher the similarity between the reconstructed audio and the original audio, the stronger the model's foundational capability in end-to-end speech modeling.
1158
+
1159
+ ```python
1160
+ mimick_prompt = "Please repeat each user's speech, including voice style and speech content."
1161
+ audio_input, _ = librosa.load('./assets/input_examples/Trump_WEF_2018_10s.mp3', sr=16000, mono=True) # load the audio to be mimicked
1162
+
1163
+ # can also try `./assets/input_examples/cxk_original.wav`,
1164
+ # `./assets/input_examples/fast-pace.wav`,
1165
+ # `./assets/input_examples/chi-english-1.wav`
1166
+ # `./assets/input_examples/exciting-emotion.wav`
1167
+ # for different aspects of speech-centric features.
1168
+
1169
+ msgs = [{'role': 'user', 'content': [mimick_prompt, audio_input]}]
1170
+ res = model.chat(
1171
+ msgs=msgs,
1172
+ tokenizer=tokenizer,
1173
+ sampling=True,
1174
+ max_new_tokens=128,
1175
+ use_tts_template=True,
1176
+ temperature=0.3,
1177
+ generate_audio=True,
1178
+ output_audio_path='output_mimick.wav', # save the tts result to output_audio_path
1179
+ )
1180
+ ```
1181
+
1182
+ <hr/>
1183
+
1184
+ #### General Speech Conversation with Configurable Voices
1185
+
1186
+ A general usage scenario of `MiniCPM-o-2.6` is role-playing a specific character based on the audio prompt. It will mimic the voice of the character to some extent and act like the character in text, including language style. In this mode, `MiniCPM-o-2.6` sounds **more natural and human-like**. Self-defined audio prompts can be used to customize the voice of the character in an end-to-end manner.
1187
+
1188
+
1189
+ ```python
1190
+ ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio
1191
+ sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_roleplay', language='en')
1192
+
1193
+ # round one
1194
+ user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
1195
+ msgs = [sys_prompt, user_question]
1196
+ res = model.chat(
1197
+ msgs=msgs,
1198
+ tokenizer=tokenizer,
1199
+ sampling=True,
1200
+ max_new_tokens=128,
1201
+ use_tts_template=True,
1202
+ generate_audio=True,
1203
+ temperature=0.3,
1204
+ output_audio_path='result_roleplay_round_1.wav',
1205
+ )
1206
+
1207
+ # round two
1208
+ msgs.append({'role': 'assistant', 'content': res})  # append the assistant reply to the history
1209
+ user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
1210
+ msgs.append(user_question)
1211
+ res = model.chat(
1212
+ msgs=msgs,
1213
+ tokenizer=tokenizer,
1214
+ sampling=True,
1215
+ max_new_tokens=128,
1216
+ use_tts_template=True,
1217
+ generate_audio=True,
1218
+ temperature=0.3,
1219
+ output_audio_path='result_roleplay_round_2.wav',
1220
+ )
1221
+ print(res)
1222
+ ```
1223
+
1224
+ <hr/>
1225
+
1226
+ #### Speech Conversation as an AI Assistant
1227
+
1228
+ An enhanced feature of `MiniCPM-o-2.6` is to act as an AI assistant, but only with a limited choice of voices. In this mode, `MiniCPM-o-2.6` is **less human-like and more like a voice assistant**, and it follows instructions more closely. For the demo, we suggest using `assistant_female_voice`, `assistant_male_voice`, and `assistant_default_female_voice`. Other voices may work, but are not as stable as the default voices.
1229
+
1230
+ *Please note that `assistant_female_voice` and `assistant_male_voice` are more stable but sound robotic, while `assistant_default_female_voice` is more human-like but less stable; its voice often changes across multiple turns. We suggest trying the stable voices `assistant_female_voice` and `assistant_male_voice`.*
1231
+
1232
+ ```python
1233
+ ref_audio, _ = librosa.load('./assets/input_examples/assistant_female_voice.wav', sr=16000, mono=True) # or use `./assets/input_examples/assistant_male_voice.wav`
1234
+ sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en')
1235
+ user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # load the user's audio question
1236
+
1237
+ # round one
1238
+ msgs = [sys_prompt, user_question]
1239
+ res = model.chat(
1240
+ msgs=msgs,
1241
+ tokenizer=tokenizer,
1242
+ sampling=True,
1243
+ max_new_tokens=128,
1244
+ use_tts_template=True,
1245
+ generate_audio=True,
1246
+ temperature=0.3,
1247
+ output_audio_path='result_assistant_round_1.wav',
1248
+ )
1249
+
1250
+ # round two
1251
+ msgs.append({'role': 'assistant', 'content': res})  # append the assistant reply to the history
1252
+ user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
1253
+ msgs.append(user_question)
1254
+ res = model.chat(
1255
+ msgs=msgs,
1256
+ tokenizer=tokenizer,
1257
+ sampling=True,
1258
+ max_new_tokens=128,
1259
+ use_tts_template=True,
1260
+ generate_audio=True,
1261
+ temperature=0.3,
1262
+ output_audio_path='result_assistant_round_2.wav',
1263
+ )
1264
+ print(res)
1265
+ ```
1266
+
1267
+ <hr/>
1268
+
1269
+ #### Instruction-to-Speech
1270
+
1271
+ `MiniCPM-o-2.6` can also do Instruction-to-Speech, aka **Voice Creation**. You can describe a voice in detail, and the model will generate a voice that matches the description. For more Instruction-to-Speech sample instructions, you can refer to https://voxinstruct.github.io/VoxInstruct/.
1272
+
1273
+ ```python
1274
+ instruction = 'Speak like a male charming superstar, radiating confidence and style in every word.'
1275
+
1276
+ msgs = [{'role': 'user', 'content': [instruction]}]
1277
+
1278
+ res = model.chat(
1279
+ msgs=msgs,
1280
+ tokenizer=tokenizer,
1281
+ sampling=True,
1282
+ max_new_tokens=128,
1283
+ use_tts_template=True,
1284
+ generate_audio=True,
1285
+ temperature=0.3,
1286
+ output_audio_path='result_voice_creation.wav',
1287
+ )
1288
+ ```
1289
+
1290
+ <hr/>
1291
+
1292
+ #### Voice Cloning
1293
+
1294
+ `MiniCPM-o-2.6` can also do zero-shot text-to-speech, aka **Voice Cloning**. In this mode, the model acts as a TTS model.
1295
+
1296
+
1297
+ ```python
1298
+ ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio
1299
+ sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='voice_cloning', language='en')
1300
+ text_prompt = f"Please read the text below."
1301
+ user_question = {'role': 'user', 'content': [text_prompt, "content that you want to read"]}
1302
+
1303
+ msgs = [sys_prompt, user_question]
1304
+ res = model.chat(
1305
+ msgs=msgs,
1306
+ tokenizer=tokenizer,
1307
+ sampling=True,
1308
+ max_new_tokens=128,
1309
+ use_tts_template=True,
1310
+ generate_audio=True,
1311
+ temperature=0.3,
1312
+ output_audio_path='result_voice_cloning.wav',
1313
+ )
1314
+
1315
+ ```
1316
+
1317
+ <hr/>
1318
+
1319
+ #### Addressing Various Audio Understanding Tasks
1320
+
1321
+ `MiniCPM-o-2.6` can also be used to address various audio understanding tasks, such as ASR, speaker analysis, general audio captioning, and sound scene tagging.
1322
+
1323
+ For audio-to-text tasks, you can use the following prompts:
1324
+
1325
+ - ASR with ZH (same as AST en2zh): `请仔细听这段音频片段,并将其内容逐字记录。`
+ - ASR with EN (same as AST zh2en): `Please listen to the audio snippet carefully and transcribe the content.`
1327
+ - Speaker Analysis: `Based on the speaker's content, speculate on their gender, condition, age range, and health status.`
1328
+ - General Audio Caption: `Summarize the main content of the audio.`
1329
+ - General Sound Scene Tagging: `Utilize one keyword to convey the audio's content or the associated scene.`
1330
+
1331
+ ```python
1332
+ task_prompt = "Please listen to the audio snippet carefully and transcribe the content." + "\n" # can change to other prompts.
1333
+ audio_input, _ = librosa.load('./assets/input_examples/audio_understanding.mp3', sr=16000, mono=True) # load the audio to be captioned
1334
+
1335
+ msgs = [{'role': 'user', 'content': [task_prompt, audio_input]}]
1336
+
1337
+ res = model.chat(
1338
+ msgs=msgs,
1339
+ tokenizer=tokenizer,
1340
+ sampling=True,
1341
+ max_new_tokens=128,
1342
+ use_tts_template=True,
1343
+ generate_audio=True,
1344
+ temperature=0.3,
1345
+ output_audio_path='result_audio_understanding.wav',
1346
+ )
1347
+ print(res)
1348
+ ```
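+
+ To run several of the listed audio-to-text tasks on the same clip, a simple loop over the prompts works. This is just a convenience sketch that reuses `model`, `tokenizer`, and `audio_input` from the example above; the output file names are arbitrary.
+
+ ```python
+ # Run the listed audio-to-text tasks on one clip by swapping the task prompt.
+ task_prompts = {
+     "asr": "Please listen to the audio snippet carefully and transcribe the content.",
+     "speaker_analysis": "Based on the speaker's content, speculate on their gender, condition, age range, and health status.",
+     "caption": "Summarize the main content of the audio.",
+     "scene_tagging": "Utilize one keyword to convey the audio's content or the associated scene.",
+ }
+
+ for name, prompt in task_prompts.items():
+     msgs = [{'role': 'user', 'content': [prompt + "\n", audio_input]}]
+     res = model.chat(
+         msgs=msgs,
+         tokenizer=tokenizer,
+         sampling=True,
+         max_new_tokens=128,
+         use_tts_template=True,
+         generate_audio=True,
+         temperature=0.3,
+         output_audio_path=f'result_{name}.wav',
+     )
+     print(name, ':', res)
+ ```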
1349
+
1350
+
1351
+ ### Vision-Only mode
1352
+
1353
+ `MiniCPM-o-2_6` uses the same inference methods as `MiniCPM-V-2_6`.
1354
+
1355
+ #### Chat with single image
1356
+ ```python
1357
+ # test.py
+ # reuses `model` and `tokenizer` from the initialization above
+ from PIL import Image
+
+ image = Image.open('xx.jpg').convert('RGB')
1359
+ question = 'What is in the image?'
1360
+ msgs = [{'role': 'user', 'content': [image, question]}]
1361
+ res = model.chat(
1362
+ image=None,
1363
+ msgs=msgs,
1364
+ tokenizer=tokenizer
1365
+ )
1366
+ print(res)
1367
+
1368
+ ## if you want to use streaming, please make sure sampling=True and stream=True
1369
+ ## the model.chat will return a generator
1370
+ res = model.chat(
1371
+ msgs=msgs,
1372
+ tokenizer=tokenizer,
1373
+ sampling=True,
1374
+ stream=True
1375
+ )
1376
+ generated_text = ""
1377
+ for new_text in res:
1378
+ generated_text += new_text
1379
+ print(new_text, flush=True, end='')
1380
+ ```
1381
+
1382
+ #### Chat with multiple images
1383
+ <details>
1384
+ <summary> Click to show Python code running MiniCPM-o 2.6 with multiple images input. </summary>
1385
+
1386
+ ```python
1387
+ image1 = Image.open('image1.jpg').convert('RGB')
1388
+ image2 = Image.open('image2.jpg').convert('RGB')
1389
+ question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
1390
+ msgs = [{'role': 'user', 'content': [image1, image2, question]}]
1391
+ answer = model.chat(
1392
+ msgs=msgs,
1393
+ tokenizer=tokenizer
1394
+ )
1395
+ print(answer)
1396
+ ```
1397
+ </details>
1398
+
1399
+ #### In-context few-shot learning
1400
+ <details>
1401
+ <summary> Click to view Python code running MiniCPM-o 2.6 with few-shot input. </summary>
1402
+
1403
+ ```python
1404
+ question = "production date"
1405
+ image1 = Image.open('example1.jpg').convert('RGB')
1406
+ answer1 = "2023.08.04"
1407
+ image2 = Image.open('example2.jpg').convert('RGB')
1408
+ answer2 = "2007.04.24"
1409
+ image_test = Image.open('test.jpg').convert('RGB')
1410
+ msgs = [
1411
+ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
1412
+ {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
1413
+ {'role': 'user', 'content': [image_test, question]}
1414
+ ]
1415
+ answer = model.chat(
1416
+ msgs=msgs,
1417
+ tokenizer=tokenizer
1418
+ )
1419
+ print(answer)
1420
+ ```
1421
+ </details>
1422
+
1423
+ #### Chat with video
1424
+ <details>
1425
+ <summary> Click to view Python code running MiniCPM-o 2.6 with video input. </summary>
1426
+
1427
+ ```python
1428
+ from PIL import Image
+ from decord import VideoReader, cpu
+
+ MAX_NUM_FRAMES = 64  # if CUDA OOM, set a smaller number
+
+ def encode_video(video_path):
+     def uniform_sample(l, n):
+         gap = len(l) / n
+         idxs = [int(i * gap + gap / 2) for i in range(n)]
+         return [l[i] for i in idxs]
+
+     vr = VideoReader(video_path, ctx=cpu(0))
+     sample_fps = round(vr.get_avg_fps() / 1)  # FPS
+     frame_idx = [i for i in range(0, len(vr), sample_fps)]
+     if len(frame_idx) > MAX_NUM_FRAMES:
+         frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
+     frames = vr.get_batch(frame_idx).asnumpy()
+     frames = [Image.fromarray(v.astype('uint8')) for v in frames]
+     print('num frames:', len(frames))
+     return frames
1443
+ video_path ="video_test.mp4"
1444
+ frames = encode_video(video_path)
1445
+ question = "Describe the video"
1446
+ msgs = [
1447
+ {'role': 'user', 'content': frames + [question]},
1448
+ ]
1449
+ # Set decode params for video
1450
+ params={}
1451
+ params["use_image_id"] = False
1452
+ params["max_slice_nums"] = 2 # use 1 if cuda OOM and video resolution > 448*448
1453
+ answer = model.chat(
1454
+ msgs=msgs,
1455
+ tokenizer=tokenizer,
1456
+ **params
1457
+ )
1458
+ print(answer)
1459
+ ```
1460
+ </details>
1461
+
1462
+ Please see [GitHub](https://github.com/OpenBMB/MiniCPM-o) for more usage details.
1463
+
1464
+
1465
+ ## Inference with llama.cpp<a id="llamacpp"></a>
1466
+ MiniCPM-o 2.6 (vision-only mode) can run with llama.cpp. See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-omni) and [readme](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) for more detail.
1467
+
1468
+
1469
+ ## Int4 quantized version
1470
+ Download the int4 quantized version for lower GPU memory (7GB) usage: [MiniCPM-o-2_6-int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4).
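+
+ A minimal loading sketch for the int4 checkpoint, assuming it exposes the same `trust_remote_code` interface as the full-precision model used throughout this card (see the linked int4 repository for the authoritative instructions and any additional quantization dependencies):
+
+ ```python
+ # Minimal sketch: load the int4-quantized checkpoint. Assumes the same
+ # remote-code interface as the full model; check the int4 repository for
+ # exact requirements (e.g., quantization libraries it depends on).
+ from transformers import AutoModel, AutoTokenizer
+
+ model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6-int4', trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6-int4', trust_remote_code=True)
+ model.eval()
+ ```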
1471
+
1472
+
1473
+ ## License
1474
+ #### Model License
1475
+ * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
1476
+ * The usage of MiniCPM-o and MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
1477
+ * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-o 2.6 weights are also available for free commercial use.
1478
+
1479
+
1480
+ #### Statement
1481
+ * As an LMM, MiniCPM-o 2.6 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-o 2.6 does not represent the views and positions of the model developers.
1482
+ * We will not be liable for any problems arising from the use of the MiniCPM-o and MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misguidance, misuse, dissemination, or abuse of the models.
1483
+
1484
+ ## Key Techniques and Other Multimodal Projects
1485
+
1486
+ 👏 Welcome to explore key techniques of MiniCPM-o 2.6 and other multimodal projects of our team:
1487
+
1488
+ [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
1489
+
1490
+ ## Citation
1491
+
1492
+ If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
1493
+
1494
+ ```bib
1495
+ @article{yao2024minicpm,
1496
+ title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
1497
+ author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
1498
+ journal={arXiv preprint arXiv:2408.01800},
1499
+ year={2024}
1500
+ }
1501
+ ```