---
license: cc-by-nc-sa-4.0
task_categories:
  - image-text-to-text
  - visual-question-answering
language:
  - zh
  - en
  - es
  - fr
  - ja
tags:
  - Math
  - Physics
  - Chemistry
  - Biology
  - Multilingual
size_categories:
  - 1K<n<10K
---

# MME-SCI: A Comprehensive and Challenging Science Benchmark for Multimodal Large Language Models

MME-SCI is a comprehensive multimodal benchmark designed to evaluate the scientific reasoning capabilities of Multimodal Large Language Models (MLLMs). It addresses key limitations of existing benchmarks by focusing on multilingual adaptability, comprehensive modality coverage, and fine-grained knowledge point annotation.
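
For quick inspection, here is a minimal loading sketch with the Hugging Face `datasets` library; the repo id and split name below are assumptions, so check this repository's file layout for the actual configuration.

```python
# Minimal loading sketch. The repo id and split are assumptions; adjust them
# to match this repository's actual configuration.
from datasets import load_dataset

ds = load_dataset("JCruan/MME-SCI", split="test")  # hypothetical repo id / split
print(ds[0])  # inspect one question-answer record
```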

## 🌟 Key Features

- **Multilingual Support**: Covers 5 languages (Chinese, English, French, Spanish, Japanese) to assess cross-lingual scientific reasoning.
- **Full Modality Coverage**: Supports 3 evaluation modes (text-only, image-only, image-text hybrid) to test multimodal robustness (see the input-assembly sketch after this list).
- **Multidisciplinary Focus**: Includes 4 core subjects (mathematics, physics, chemistry, biology) at the high-school level.
- **Fine-Grained Knowledge Annotation**: Annotates 63 specific knowledge points (e.g., "Magnetic Field" in physics, "Trigonometric Functions" in mathematics) for targeted performance analysis.
- **High-Quality Data**: 1,019 manually curated question-answer pairs, with 83.3% sourced from 2025 and 16.2% from 2024 to ensure novelty and avoid training-data contamination.
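
As a sketch of the three evaluation modes, the function below assembles model inputs per mode. The field names (`question_text`, `question_image`, `figures`) are hypothetical placeholders, not the dataset's actual schema.

```python
# Hypothetical input assembly for the three evaluation modes; the field names
# below are placeholders, not the dataset's actual schema.
def build_inputs(sample: dict, mode: str) -> dict:
    if mode == "text-only":    # pure textual question
        return {"text": sample["question_text"], "images": []}
    if mode == "image-only":   # question rendered as a screenshot
        return {"text": "", "images": [sample["question_image"]]}
    if mode == "image-text":   # textual question plus any accompanying figures
        return {"text": sample["question_text"], "images": sample.get("figures", [])}
    raise ValueError(f"unknown mode: {mode}")
```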

## 📊 Dataset Details

| Attribute | Details |
| --- | --- |
| Total Samples | 1,019 question-answer pairs |
| Subjects & Distribution | Mathematics: 284; Physics: 368; Chemistry: 151; Biology: 216 |
| Languages | Chinese, English, French, Spanish, Japanese |
| Modality Modes | Text-only (pure textual questions); Image-only (screenshot-based questions); Image-text hybrid (combined visual and textual inputs) |
| Question Types | Single-choice, multiple-choice, fill-in-the-blank; all with verifiable answers for automated scoring (see the scoring sketch below) |
| Knowledge Points | 63 fine-grained concepts (e.g., "Regulation of Plant Life Activities" and "Basic Laws of Heredity" in biology) |
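
Because every answer is verifiable, scoring can be fully automated. The sketch below assumes choice answers are option-letter strings (e.g., "A", "BD") and that fill-in-the-blank answers are compared after trimming; the benchmark's official grader may normalize differently.

```python
# Rough automated-scoring sketch. Assumes choice answers are option-letter
# strings and fill-in-the-blank answers are short strings; the official
# grader may normalize differently.
def is_correct(prediction: str, gold: str, qtype: str) -> bool:
    if qtype == "single-choice":
        return prediction.strip().upper() == gold.strip().upper()
    if qtype == "multiple-choice":
        # Order-insensitive: "DB" and "BD" count as the same selection.
        letters = lambda s: set(s.upper()) & set("ABCDEFGH")
        return letters(prediction) == letters(gold)
    # Fill-in-the-blank: exact match after trimming whitespace.
    return prediction.strip() == gold.strip()
```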

## 📈 Evaluation Results

MME-SCI has been tested on 20 MLLMs (16 open-source, 4 closed-source), revealing significant challenges for existing models:

*Main results on MME-SCI.*
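
The 63 knowledge-point labels make per-concept breakdowns straightforward; here is a small pandas sketch (column names are hypothetical):

```python
# Per-knowledge-point accuracy breakdown; column names are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "knowledge_point": ["Magnetic Field", "Magnetic Field", "Trigonometric Functions"],
    "correct": [True, False, True],
})
print(results.groupby("knowledge_point")["correct"].mean())
# Magnetic Field             0.5
# Trigonometric Functions    1.0
```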

## 📝 Data Curation

The dataset was built through a rigorous 4-step process:

  1. Sample Filtering: Experts selected high-difficulty questions (top 0.1% in Gaokao, the Chinese national college entrance examination).
  2. Data Digitization: Conversion to JSON format, with OCR for image-based content (see the record sketch after this list).
  3. Language Transformation: Professional translation into the 5 languages.
  4. Post-Audit: Cross-validation by 3 reviewers to ensure quality.
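
For illustration, here is a hypothetical shape for one digitized record; the actual field names and values in this dataset may differ.

```python
# Hypothetical shape of one digitized record; actual field names may differ.
sample = {
    "id": "phys-0001",
    "subject": "physics",
    "language": "en",
    "question_type": "single-choice",
    "knowledge_point": "Magnetic Field",
    "question_text": "A charged particle enters a uniform magnetic field ...",
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": "B",
    "images": ["phys-0001.png"],
}
```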

## 📚 Citation

If you use MME-SCI in your research, please cite:

```bibtex
@article{ruan2025mme,
  title={MME-SCI: A Comprehensive and Challenging Science Benchmark for Multimodal Large Language Models},
  author={Ruan, Jiacheng and Jiang, Dan and Gao, Xian and Liu, Ting and Fu, Yuzhuo and Kang, Yangyang},
  journal={arXiv preprint arXiv:2508.13938},
  year={2025}
}
```

## 🔗 Resources

## 📧 Contact

For questions, reach out to [email protected].