Sicong and nielsr (HF Staff) committed
Commit 69e1532 · verified · 1 Parent(s): aa8707e

Improve model card: Correct pipeline tag, link to code, and project page (#1)


- Improve model card: Correct pipeline tag, link to code, and project page (031c4813290e1105c372a5cd336e3e965e529065)
- Update README.md (bd9eb3ded38209d6ba1960767425fc0568b5d82a)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
1. README.md +11 -5
README.md CHANGED
@@ -1,16 +1,17 @@
 ---
-library_name: transformers
-license: apache-2.0
-language:
-- en
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
-pipeline_tag: visual-question-answering
+language:
+- en
+library_name: transformers
+license: apache-2.0
+pipeline_tag: image-text-to-text
 tags:
 - multi-modal
 - large-language-model
 ---
 
+```markdown
 <p align="center">
 <img src="https://github.com/LengSicong/MMR1/blob/main/assets/logo.png?raw=true" width="150" style="margin-bottom: 0.2;"/>
 <p>
@@ -22,6 +23,10 @@ MMR1: Advancing the Frontiers of Multimodal Reasoning</a></h3>
 ## 📰 News
 * **[2025.03.11]** 🔥🔥 Release MMR1-Math-v0, achieving SOTA with only 6k data!
 
+## Links
+Code: https://github.com/LengSicong/MMR1
+
+This model was presented in the paper [LMM-R1: Empowering 3B LMMs with Strong Reasoning Abilities Through Two-Stage Rule-Based RL](https://arxiv.org/abs/2503.07536). Code can be found at https://github.com/LengSicong/MMR1
 
 ## Model Description
 MMR1-Math-v0-7B is a Large Multimodal Model specialized in mathematical tasks. Remarkably, MMR1-Math-v0-7B achieves state-of-the-art performance among open-source 7B multimodal models, competing effectively even against proprietary models with significantly larger parameter sizes—all trained using only 6k carefully curated data instances.
@@ -177,4 +182,5 @@ If you find MMR1 useful for your research and applications, please cite using th
 year={2025},
 howpublished={\url{https://github.com/LengSicong/MMR1}},
 }
+```
 ```
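
The corrected `pipeline_tag: image-text-to-text` means the model can be driven through the matching transformers pipeline rather than the visual-question-answering task. A minimal sketch, assuming a transformers release recent enough to ship the image-text-to-text pipeline with Qwen2.5-VL support, and the hypothetical repo id `MMR1/MMR1-Math-v0-7B` (the exact id is not stated in this diff):

```python
# Minimal sketch: drive the model via the "image-text-to-text" task that
# this commit sets in the model card metadata.
# Assumptions: the repo id "MMR1/MMR1-Math-v0-7B" is hypothetical, and the
# installed transformers version ships the image-text-to-text pipeline.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="MMR1/MMR1-Math-v0-7B")

# Chat-style input: one user turn containing an image plus a text question.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://github.com/LengSicong/MMR1/blob/main/assets/logo.png?raw=true",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# The pipeline applies the model's chat template, runs generation,
# and returns the decoded reply.
print(pipe(text=messages, max_new_tokens=64))
```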