Dataset Description:

This is a fully annotated, synthetically generated dataset consisting of 1,000 demonstrations of a single Franka Panda robot arm performing a fixed-order three-cube stacking task in Isaac Lab. The robot consistently stacks cubes in the order: blue (bottom) → red (middle) → green (top).

The dataset was produced using the following pipeline:

  • Collected 10 human teleoperation demonstrations of the stacking task.
  • Used Isaac Lab’s Mimic tool [1] to automatically generate 1,000 trajectories in Isaac Sim.
  • Applied the Cosmos Transfer1 model [2] to augment the table-camera RGB frames with photorealistic, domain-adapted visuals.

Each demonstration includes synchronized multimodal data:

  • RGB videos from both a table-mounted and wrist-mounted camera.
  • Depth, segmentation, and surface normal maps from the table camera.
  • Full low-level robot and object states (joints, end-effector, gripper, cube poses).
  • Action sequences executed by the robot.

This dataset is ideal for behavior cloning, policy learning, and generalist robotic manipulation research.

This dataset is ready for commercial use.

Dataset Owner(s):

NVIDIA Corporation

Dataset Creation Date:

05/14/2025

License/Terms of Use:

This dataset is governed by the Creative Commons Attribution 4.0 International License (CC-BY-4.0).

Intended Usage:

This dataset is intended for:

  • Training robot manipulation policies using behavior cloning (a minimal training sketch follows this list).
  • Research in generalist robotics and task-conditioned agents.
  • Sim-to-real transfer studies and visual domain adaptation.
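
As an illustration of the behavior-cloning use case, here is a minimal training sketch. Only the 7D action space (6D relative end-effector motion + 1D gripper action) comes from this card; the observation dimensionality, network size, and optimizer settings are placeholder assumptions.

```python
# Minimal behavior-cloning sketch (PyTorch). OBS_DIM, the network width, and
# the learning rate are illustrative assumptions; only ACT_DIM = 7 (6D relative
# end-effector motion + 1D gripper action) is taken from the dataset card.
import torch
import torch.nn as nn

OBS_DIM = 39   # assumed size of a flattened low-dimensional state vector
ACT_DIM = 7    # 6D relative EEF motion + 1D gripper action

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(obs_batch: torch.Tensor, act_batch: torch.Tensor) -> float:
    """One supervised step: regress demonstrated actions from observations."""
    loss = nn.functional.mse_loss(policy(obs_batch), act_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in random batch; real batches would be sampled from the HDF5 demos.
print(bc_step(torch.randn(64, OBS_DIM), torch.randn(64, ACT_DIM)))
```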

Dataset Characterization:

Data Collection Method

  • Automated
  • Automatic/Sensors
  • Synthetic

Ten human-teleoperated demonstrations were used to bootstrap Mimic-based trajectory generation [1] in Isaac Sim. All 1,000 demonstrations were generated automatically, followed by domain-randomized visual augmentation using Cosmos Transfer1 [2].

Labeling Method

  • Not Applicable

Dataset Format:

We provide the 1,000 Mimic-generated demonstrations and the 1,000 Cosmos-augmented demonstrations in separate HDF5 files (mimic_dataset_1k.hdf5 and cosmos_dataset_1k.hdf5, respectively). Each demo in each file consists of a time-indexed sequence of the following modalities (a loading sketch follows the list):

Actions

  • 7D vector: 6D relative end-effector motion + 1D gripper action

Observations

  • Robot states: Joint positions, velocities, and gripper open/close state
  • EEF states: End-effector 6-DOF pose
  • Cube states: Poses (positions + orientations) for blue, red, and green cubes
  • Table camera visuals:
    • 200×200 RGB
    • 200×200 Depth
    • 200×200 Segmentation mask
    • 200×200 Surface normal map
  • Wrist camera visuals:
    • 200×200 RGB

The datasets' trajectories can be replayed in simulation using Isaac Lab (refer to this script). The videos can be extracted in MP4 format using this script.
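
The extraction script itself is not reproduced here; as a rough sketch of what it does, the snippet below writes a stack of RGB frames to an MP4 with imageio. The frame array and output filename are placeholders, and the frame rate is an arbitrary choice rather than a dataset property.

```python
# Hypothetical MP4 export for a table-camera RGB stream. `frames` stands in for
# a (T, 200, 200, 3) uint8 array read from the HDF5 file; 30 fps is arbitrary.
import imageio.v2 as imageio
import numpy as np

frames = np.zeros((100, 200, 200, 3), dtype=np.uint8)  # placeholder frames
# macro_block_size=1 keeps the native 200x200 resolution instead of padding to
# a multiple of 16 (requires the imageio-ffmpeg backend).
imageio.mimwrite("table_cam_demo_0.mp4", list(frames), fps=30, macro_block_size=1)
```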

Dataset Quantification:

Record Count

  • mimic_dataset_1k

    • Number of demonstrations/trajectories: 1000
    • Number of RGB videos: 2000 (1000 table camera + 1000 wrist camera)
    • Number of depth videos: 1000 (table camera)
    • Number of segmentation videos: 1000 (table camera)
    • Number of normal map videos: 1000 (table camera)
  • cosmos_dataset_1k

    • Number of demonstrations/trajectories: 1000
    • Number of RGB videos: 2000 (1000 table camera + 1000 wrist camera)
    • Number of depth videos: 1000 (table camera)
    • Number of segmentation videos: 1000 (table camera)
    • Number of normal map videos: 1000 (table camera)

Total Storage

  • 69.4 GB

Reference(s):

[1] @inproceedings{mandlekar2023mimicgen,
      title = {MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations},
      author = {Mandlekar, Ajay and Nasiriany, Soroush and Wen, Bowen and Akinola, Iretiayo and Narang, Yashraj and Fan, Linxi and Zhu, Yuke and Fox, Dieter},
      booktitle = {7th Annual Conference on Robot Learning},
      year = {2023}
    }
[2] @misc{nvidia2025cosmostransfer1conditionalworldgeneration,
      title = {Cosmos-Transfer1: Conditional World Generation with Adaptive Multimodal Control},
      author = {NVIDIA and Abu Alhaija, Hassan and Alvarez, Jose and Bala, Maciej and Cai, Tiffany and Cao, Tianshi and Cha, Liz and Chen, Joshua and Chen, Mike and Ferroni, Francesco and Fidler, Sanja and Fox, Dieter and Ge, Yunhao and Gu, Jinwei and Hassani, Ali and Isaev, Michael and Jannaty, Pooya and Lan, Shiyi and Lasser, Tobias and Ling, Huan and Liu, Ming-Yu and Liu, Xian and Lu, Yifan and Luo, Alice and Ma, Qianli and Mao, Hanzi and Ramos, Fabio and Ren, Xuanchi and Shen, Tianchang and Tang, Shitao and Wang, Ting-Chun and Wu, Jay and Xu, Jiashu and Xu, Stella and Xie, Kevin and Ye, Yuchong and Yang, Xiaodong and Zeng, Xiaohui and Zeng, Yu},
      journal = {arXiv preprint arXiv:2503.14492},
      year = {2025},
      url = {https://arxiv.org/abs/2503.14492}
    }

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.
