---
configs:
  - config_name: train_9k
    data_files:
      - split: train
        path: msrvtt_train_9k.json
  - config_name: train_7k
    data_files:
      - split: train
        path: msrvtt_train_7k.json
  - config_name: test_1k
    data_files:
      - split: test
        path: msrvtt_test_1k.json
task_categories:
  - text-to-video
  - text-retrieval
  - video-classification
language:
  - en
size_categories:
  - 1K<n<10K
---

# MSR-VTT

MSR-VTT contains 10,000 video clips, each annotated with 20 English captions, for 200,000 captions in total.

We adopt the standard 1K-A split protocol, introduced by JSFusion, which has since become the de facto benchmark split for text-video retrieval.

**Train:**

- train_7k: 7,010 videos, 140,200 captions
- train_9k: 9,000 videos, 180,000 captions

**Test:**

- test_1k: 1,000 videos, 1,000 captions
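The training caption counts above follow directly from MSR-VTT's annotation scheme of 20 captions per clip (the test split pairs each video with a single caption for retrieval evaluation). A quick sanity check, with split sizes taken from the table above:

```python
# Each MSR-VTT clip carries 20 human-written captions, so a training
# split's caption count is 20 x (number of videos).
CAPTIONS_PER_VIDEO = 20

splits = {
    "train_7k": 7_010,  # videos
    "train_9k": 9_000,  # videos
}

for name, n_videos in splits.items():
    print(f"{name}: {n_videos * CAPTIONS_PER_VIDEO:,} captions")
# train_7k -> 140,200 captions; train_9k -> 180,000 captions
```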

## 🌟 Citation

```bibtex
@inproceedings{xu2016msrvtt,
  title={{MSR-VTT}: A Large Video Description Dataset for Bridging Video and Language},
  author={Xu, Jun and Mei, Tao and Yao, Ting and Rui, Yong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016}
}
```