---
license: cc-by-4.0
task_categories:
- robotics
tags:
- robotics
---

# PhysicalAI Robotics Manipulation in the Kitchen

## Dataset Description

PhysicalAI-Robotics-Manipulation-Kitchen is a dataset of automatically generated motions of robots performing operations such as opening and closing cabinets, drawers, dishwashers, and fridges. The dataset was generated in IsaacSim, leveraging reasoning algorithms and optimization-based motion planning to find solutions to the tasks automatically [1, 3]. The dataset includes a bimanual manipulator built with Kinova Gen3 arms. The environments are kitchen scenes in which the furniture and appliances were procedurally generated [2].
This dataset is available for commercial use.


## Dataset Contact(s)
Fabio Ramos ([email protected]) <br>
Anqi Li ([email protected])

## Dataset Creation Date
03/18/2025

## License/Terms of Use
cc-by-4.0

## Intended Usage
This dataset is provided in LeRobot format and is intended for training robot policies and foundation models.

## Dataset Characterization
* Data Collection Method<br>
  * Automated <br>
  * Automatic/Sensors <br>
  * Synthetic <br>

* Labeling Method<br>
  * Synthetic <br>

## Dataset Format
Within the collection, there are eight datasets in LeRobot format: `open_cabinet`, `close_cabinet`, `open_dishwasher`, `close_dishwasher`, `open_fridge`, `close_fridge`, `open_drawer`, and `close_drawer`.
* `open_cabinet`: The robot opens a cabinet in the kitchen. <br>
* `close_cabinet`: The robot closes the door of a cabinet in the kitchen. <br>
* `open_dishwasher`: The robot opens the door of a dishwasher in the kitchen. <br>
* `close_dishwasher`: The robot closes the door of a dishwasher in the kitchen. <br>
* `open_fridge`: The robot opens the fridge door. <br>
* `close_fridge`: The robot closes the fridge door. <br>
* `open_drawer`: The robot opens a drawer in the kitchen. <br>
* `close_drawer`: The robot closes a drawer in the kitchen. <br>
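Each subset can be loaded like any other LeRobot dataset. The sketch below is illustrative only: it assumes the public `lerobot` package is installed, and the repo id is a placeholder to be replaced with the actual Hub path (or a local root) of the downloaded subset.

```python
# Minimal loading sketch (assumption: the repo id below is a placeholder,
# not the actual Hub path of this dataset).
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

TASKS = [
    "open_cabinet", "close_cabinet",
    "open_dishwasher", "close_dishwasher",
    "open_fridge", "close_fridge",
    "open_drawer", "close_drawer",
]

# Load one task subset; substitute the real repo id or point to a local copy.
dataset = LeRobotDataset(f"<org>/<dataset>-{TASKS[0]}")

frame = dataset[0]  # one time step, returned as a dict of tensors
print(frame["action"].shape)             # 34D action vector
print(frame["observation.state"].shape)  # 13D object-of-interest state
```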

The videos below illustrate three examples of the tasks: 

<div style="display: flex; justify-content: flex-start;">
<img src="./assets/episode_000009.gif" width="300" height="300" alt="open_dishwasher" />
<img src="./assets/episode_000008.gif" width="300" height="300" alt="open_cabinet" />
<img src="./assets/episode_000029.gif" width="300" height="300" alt="open_fridge" />
</div>

* action modality: a 34D vector comprising the joint states of the two arms, the gripper joints, the pan and tilt joints, the torso joint, and the front and back wheels.
* observation modalities
  * observation.state: 13D, where the first 12 entries are the vectorized transform matrix of the object of interest and the 13th entry is the joint value of the articulated object of interest (e.g., drawer, cabinet); see the decoding sketch after this list.
  * observation.image.world__world_camera: 512x512 images of RGB, depth and semantic segmentation renderings stored as mp4 videos.
  * observation.image.external_camera: 512x512 images of RGB, depth and semantic segmentation renderings stored as mp4 videos.
  * observation.image.world__robot__right_arm_camera_color_frame__right_hand_camera: 512x512 images of RGB, depth and semantic segmentation renderings stored as mp4 videos.
  * observation.image.world__robot__left_arm_camera_color_frame__left_hand_camera: 512x512 images of RGB, depth and semantic segmentation renderings stored as mp4 videos.
  * observation.image.world__robot__camera_link__head_camera: 512x512 images of RGB, depth and semantic segmentation renderings stored as mp4 videos.
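For reference, a small helper for pulling the object pose and articulation value out of `observation.state`. The 3x4 row-major layout of the first 12 entries is an assumption, not something this card specifies; verify against the dataset's feature metadata before relying on it.

```python
import numpy as np

def split_state(state_13d: np.ndarray):
    """Split observation.state into (rotation, translation, joint_value).

    Layout assumption (not confirmed by this card): the first 12 entries are a
    row-major flattening of the 3x4 [R | t] transform of the object of
    interest; the last entry is its articulation joint value (e.g. how far
    the drawer or door is open).
    """
    assert state_13d.shape == (13,)
    pose = state_13d[:12].reshape(3, 4)
    rotation, translation = pose[:, :3], pose[:, 3]
    joint_value = float(state_13d[12])
    return rotation, translation, joint_value
```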


The videos below illustrate three of the cameras used in the dataset. 
<div style="display: flex; justify-content: flex-start;">
<img src="./assets/episode_000004_world.gif" width="300" height="300" alt="world" />
<img src="./assets/episode_000004.gif" width="300" height="300" alt="head" />
<img src="./assets/episode_000004_wrist.gif" width="300" height="300" alt="wrist" />
</div>


## Dataset Quantification
Record Count:
* `open_cabinet`
  * number of episodes: 78
  * number of frames: 39292
  * number of videos: 1170 (390 RGB videos, 390 depth videos, 390 semantic segmentation videos)
* `close_cabinet`
  * number of episodes: 205
  * number of frames: 99555
  * number of videos: 3075 (1025 RGB videos, 1025 depth videos, 1025 semantic segmentation videos)
* `open_dishwasher`
  * number of episodes: 72
  * number of frames: 28123
  * number of videos: 1080 (360 RGB videos, 360 depth videos, 360 semantic segmentation videos)
* `close_dishwasher`
  * number of episodes: 74
  * number of frames: 36078
  * number of videos: 1110 (370 RGB videos, 370 depth videos, 370 semantic segmentation videos)
* `open_fridge`
  * number of episodes: 193
  * number of frames: 93854
  * number of videos: 2895 (965 RGB videos, 965 depth videos, 965 semantic segmentation videos)
* `close_fridge`
  * number of episodes: 76
  * number of frames: 41894
  * number of videos: 1140 (380 RGB videos, 380 depth videos, 380 semantic segmentation videos)
* `open_drawer`
  * number of episodes: 99
  * number of frames: 37214
  * number of videos: 1485 (495 RGB videos, 495 depth videos, 495 semantic segmentation videos)
* `close_drawer`
  * number of episodes: 77
  * number of frames: 28998
  * number of videos: 1155 (385 RGB videos, 385 depth videos, 385 semantic segmentation videos)

<!-- Total = 1.1GB + 2.7G + 795M + 1.1G + 2.6G + 1.2G + 1.1G + 826M -->

Total storage: 11.4 GB
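The per-task video counts follow directly from the episode counts: each episode is rendered from the five cameras listed above, with one RGB, one depth, and one semantic segmentation video per camera (15 videos per episode). A quick check, assuming that structure holds for every subset:

```python
# Episode counts copied from the table above; videos = episodes * 5 cameras * 3 modalities.
episodes = {
    "open_cabinet": 78, "close_cabinet": 205,
    "open_dishwasher": 72, "close_dishwasher": 74,
    "open_fridge": 193, "close_fridge": 76,
    "open_drawer": 99, "close_drawer": 77,
}
CAMERAS, MODALITIES = 5, 3  # 5 viewpoints x (RGB, depth, segmentation)
for task, n in episodes.items():
    print(task, n * CAMERAS * MODALITIES)  # e.g. open_cabinet: 78 * 15 = 1170
```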


## Reference(s)
```
[1] @inproceedings{garrett2020pddlstream,
    title={Pddlstream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning},
    author={Garrett, Caelan Reed and Lozano-P{\'e}rez, Tom{\'a}s and Kaelbling, Leslie Pack},
    booktitle={Proceedings of the international conference on automated planning and scheduling},
    volume={30},
    pages={440--448},
    year={2020}
}

[2] @article{Eppner2024,
    title = {scene_synthesizer: A Python Library for Procedural Scene Generation in Robot Manipulation},
    author = {Clemens Eppner and Adithyavairavan Murali and Caelan Garrett and Rowland O'Flaherty and Tucker Hermans and Wei Yang and Dieter Fox},
    journal = {Journal of Open Source Software},
    publisher = {The Open Journal},
    year = {2024},
    note = {\url{https://scene-synthesizer.github.io/}}
}

[3] @inproceedings{curobo_icra23,
    author={Sundaralingam, Balakumar and Hari, Siva Kumar Sastry and
        Fishman, Adam and Garrett, Caelan and Van Wyk, Karl and Blukis, Valts and
        Millane, Alexander and Oleynikova, Helen and Handa, Ankur and
        Ramos, Fabio and Ratliff, Nathan and Fox, Dieter},
    booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
    title={CuRobo: Parallelized Collision-Free Robot Motion Generation},
    year={2023},
    volume={},
    number={},
    pages={8112--8119},
    doi={10.1109/ICRA48891.2023.10160765}
}

```

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).