---
dataset_info:
  features:
    - name: images
      sequence: image
    - name: problem
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 977774629.562
      num_examples: 7861
    - name: test
      num_bytes: 142173516
      num_examples: 1000
  download_size: 1059251976
  dataset_size: 1119948145.562
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*

license: mit
datasets:
  - Yuting6/geoqa-r1v-augmentation
  - Yuting6/math-8k-augmentation
  - Yuting6/m3cot-augmentation
  - Yuting6/TQA-augmentation
  - Yuting6/Geo3k-augmentation
  - Yuting6/geoqa-r1v-noise
  - Yuting6/geoqa-r1v-crop
  - Yuting6/geoqa-r1v-blur
  - Yuting6/geoqa-r1v-8k-rotated
  - Yuting6/geoqa-r1v-8k-mixup
base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
---

# Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning

## Paper Title and Link

The model was presented in the paper *Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning*, also available on arXiv (arXiv:2506.09736).

## Paper Abstract

Vision-Matters is a simple visual perturbation framework that can be easily integrated into existing post-training pipelines including SFT, DPO, and GRPO. Our findings highlight the critical role of visual perturbation: better reasoning begins with better seeing.
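The names of the companion datasets (noise, crop, blur, rotated, mixup) indicate the kinds of simple visual perturbations involved. The sketch below illustrates such perturbations with Pillow; the function names and default parameters are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of simple visual perturbations (noise, crop, blur,
# rotation). NOT the authors' code; parameters are assumed defaults.
import random
from PIL import Image, ImageFilter


def add_noise(img: Image.Image, amount: float = 0.05) -> Image.Image:
    """Replace a fraction of pixels with random RGB values."""
    out = img.copy()
    px = out.load()
    w, h = out.size
    for _ in range(int(w * h * amount)):
        x, y = random.randrange(w), random.randrange(h)
        px[x, y] = tuple(random.randrange(256) for _ in range(3))
    return out


def random_crop(img: Image.Image, ratio: float = 0.9) -> Image.Image:
    """Crop a random window covering `ratio` of each side, resize back."""
    w, h = img.size
    cw, ch = int(w * ratio), int(h * ratio)
    x, y = random.randrange(w - cw + 1), random.randrange(h - ch + 1)
    return img.crop((x, y, x + cw, y + ch)).resize((w, h))


def blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    """Apply a Gaussian blur."""
    return img.filter(ImageFilter.GaussianBlur(radius))


def rotate(img: Image.Image, max_deg: float = 15.0) -> Image.Image:
    """Rotate by a random angle, keeping the original canvas size."""
    return img.rotate(random.uniform(-max_deg, max_deg), expand=False)
```

Perturbed copies of a problem image can then be paired with the original `problem`/`answer` fields to augment a training split such as this one.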