Update dataset card to AndroidWorld
#3
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,219 +1,13 @@
---
license: apache-2.0
viewer: false
---

This document describes how to obtain the pre-training data used by OS-ATLAS ([OS-ATLAS: A Foundation Action Model for Generalist GUI Agents](https://huggingface.co/papers/2410.23218)).

[\[🏠Homepage\]](https://osatlas.github.io) [\[💻Code\]](https://github.com/OS-Copilot/OS-Atlas) [\[🚀Quick Start\]](#quick-start) [\[📝Paper\]](https://arxiv.org/abs/2410.23218) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-atlas-67246e44003a1dfcc5d0d045) [\[🤗ScreenSpot-v2\]](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2)

![overall](https://cdn-uploads.huggingface.co/production/uploads/654f3e104c8874c64d43aafa/ww4JO_nIUvsOBXlxQjZri.png)

**Note:** In the GUI grounding data, the position of the target element is recorded under the `bbox` key as `[left, top, right, bottom]`. Each value is a decimal in the range [0, 1], giving the ratio of the corresponding coordinate to the image width or height.
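
For illustration, here is a minimal Python sketch (the bbox values and image size are made up) that converts a normalized `bbox` to pixel coordinates:

```python
# Convert a normalized [left, top, right, bottom] bbox to pixel coordinates.
# The bbox values and screenshot size below are illustrative only.
bbox = [0.235, 0.412, 0.378, 0.455]   # ratios of image width/height
width, height = 1080, 1920            # screenshot resolution in pixels

left, top, right, bottom = (
    bbox[0] * width,
    bbox[1] * height,
    bbox[2] * width,
    bbox[3] * height,
)
print(left, top, right, bottom)
```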

This dataset contains raw data with **only** element grounding information. When training a model, you need to wrap these data with the corresponding prompts.

The released data is divided into three domains: mobile, desktop, and web.

All annotation data is stored in JSON format, and each sample contains:

* `img_filename`: the interface screenshot file
* `instruction`: human instruction or referring expression extracted from the accessibility (a11y) tree or HTML
* `bbox`: the bounding box of the target element corresponding to the instruction

Some samples also contain a `data_type` key, which records the type of the element in its structured information, if it could be obtained.
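
As a minimal sketch of reading these annotations (assuming the top-level JSON value of a file such as `uibert_raw.json` is a list of samples, which may differ per file):

```python
import json

# Load one annotation file from the mobile domain (assumed to be a list of dicts).
with open("uibert_raw.json") as f:
    samples = json.load(f)

for sample in samples[:3]:
    print(sample["img_filename"], sample["bbox"])
    print(sample["instruction"])
    # data_type is only present when it could be obtained
    print(sample.get("data_type", "n/a"))
```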

***

### Mobile data

This part of the data is stored under the *mobile_domain* directory. Our mobile grounding data consists of four parts.

#### AMEX

Android Multi-annotation EXpo (AMEX) is a comprehensive, large-scale dataset designed for generalist mobile GUI-control agents [1].

The annotation data is stored in

- `amex_raw.json`

Due to the single-file size limit of Hugging Face datasets, we stored the AMEX images in *zip* format and split them into several sub-files.

- `amex_images_part_aa`
- `amex_images_part_ab`
- `amex_images_part_ac`

You need to first merge these split files back into a single archive and then extract the contents.

```
cat amex_images_part_* > amex_images.zip
7z x amex_images.zip -aoa -o/path/to/extract/folder
```

#### UIBert

UIBert [2] is a dataset extended from the Rico dataset [3] for two tasks: similar UI component retrieval and referring expression component retrieval.

The annotation data is stored in

- `uibert_raw.json`

The UIBert images are stored in

- `UIBert.zip`

#### Widget Captioning and RICOSCA

The Widget Captioning data were collected by [4].

RICOSCA is a dataset automatically labeled using Android VH (view hierarchy) in [5].

The annotation data is stored in

- `widget_captioning.json`
- `ricosca.json`

The Rico images are stored in

- `rico_imgs.zip`

#### AndroidWorld data

This part of the data is sampled from an Android environment for building and benchmarking autonomous computer control agents [6].

The annotation data is stored in

- `aw_mobile.json`

The images are stored in

- `mobile_images.zip`

***

### Desktop data

This part of the data is stored under the *desktop_domain* directory.

All of the desktop grounding data is collected from real environments of personal computers running different operating systems. Each image is split into multiple sub-images to enhance data diversity.

Our desktop grounding data consists of three parts: Windows, Linux, and macOS.

**The image and annotation data for each operating system are stored in corresponding zip and json files.**

It is worth noting that, due to the large size of the Windows image data, the split files need to be merged before extraction.

```
cat windows_image_part_* > windows_images.zip
7z x windows_images.zip -aoa -o/path/to/extract/folder
```

***

### Web data

This part of the data is stored under the *web_domain* directory.

Our web grounding data consists of two parts.

#### SeeClick web data

The web data from SeeClick [7] was crawled from websites provided by Common Crawl, containing more than 270k webpage screenshots and over 3 million webpage elements.

The annotation data is stored in

- `seeclick_web.json`

The images are stored as split files and need to be merged before extraction.

```
cat seeclick_web_image_part_* > seeclick_web_images.zip
7z x seeclick_web_images.zip -aoa -o/path/to/extract/folder
```

#### FineWeb crawled data

This part of the data is crawled from web pages at the latest URLs obtained from FineWeb [8], a cleaned and deduplicated English dataset derived from Common Crawl.

Since this portion of the data contains at least 1.6 million images, we compressed them into 10 zip files, from `fineweb_3m_s11.zip` to `fineweb_3m_s52.zip`.

Please extract them into the same directory. As an example:

```
7z x fineweb_3m_s11.zip -aoa -o/same/path/to/extract/fineweb
```

The annotation data is stored in

- `fineweb_3m.json`

***

### Best practice

During the training of **OS-Atlas-4B**, we randomly sampled predefined prompts to wrap the grounding data. Additionally, we scaled the relative coordinates of each element (in the range [0, 1]) by multiplying them by 1000 before feeding them into the model for training.
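
As a minimal sketch of this scaling step (the bbox values are chosen to match the example entry below):

```python
# Scale normalized [0, 1] coordinates to the [0, 1000] integer range
# used in the training targets, e.g. <box>[[70,856,141,871]]</box>.
bbox = [0.070, 0.856, 0.141, 0.871]        # [left, top, right, bottom]
scaled = [round(v * 1000) for v in bbox]   # -> [70, 856, 141, 871]
print(scaled)
```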

Below is an example of a data entry:

```
{
    "conversations": [
        {
            "from": "human",
            "value": "<image>\nUsing the provided screenshot, I'll describe webpage elements for you to locate (with bbox).\n<ref>media-maniacs.org</ref>\n<ref>Home</ref>\n<ref>Sitemap</ref>\n<ref>shop you can have what you choose 2012</ref>"
        },
        {
            "from": "gpt",
            "value": "<ref>media-maniacs.org</ref><box>[[70,856,141,871]]</box>\n<ref>Home</ref><box>[[21,935,43,951]]</box>\n<ref>Sitemap</ref><box>[[21,919,52,934]]</box>\n<ref>shop you can have what you choose 2012</ref><box>[[368,839,523,855]]</box>"
        }
    ]
}
```
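
For illustration, the `<ref>`/`<box>` pairs in such a target string can be recovered with a short regex; this is a sketch, not a parser from the OS-Atlas codebase:

```python
import re

# Parse "<ref>label</ref><box>[[l,t,r,b]]</box>" pairs from a target string.
target = (
    "<ref>Home</ref><box>[[21,935,43,951]]</box>\n"
    "<ref>Sitemap</ref><box>[[21,919,52,934]]</box>"
)

pattern = re.compile(r"<ref>(.*?)</ref><box>\[\[(\d+),(\d+),(\d+),(\d+)\]\]</box>")
for label, *coords in pattern.findall(target):
    # Coordinates are in [0, 1000]; divide by 1000 to recover the ratios.
    print(label, [int(c) / 1000 for c in coords])
```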

An example data entry for **OS-Atlas-7B**:

```
{
    "conversations": [
        {
            "from": "human",
            "value": "<image>\nUsing the provided screenshot, I'll describe webpage elements for you to locate (with bbox).\n<|object_ref_start|>Facebook<|object_ref_end|>\n<|object_ref_start|>Subscribe<|object_ref_end|>\n<|object_ref_start|>Twitter<|object_ref_end|>\n<|object_ref_start|>Read More<|object_ref_end|>\n<|object_ref_start|>Read More<|object_ref_end|>"
        },
        {
            "from": "gpt",
            "value": "<|object_ref_start|>Facebook<|object_ref_end|><|box_start|>(4,955),(36,970)<|box_end|>\n<|object_ref_start|>Subscribe<|object_ref_end|><|box_start|>(4,913),(43,932)<|box_end|>\n<|object_ref_start|>Twitter<|object_ref_end|><|box_start|>(39,955),(62,970)<|box_end|>\n<|object_ref_start|>Read More<|object_ref_end|><|box_start|>(30,138),(73,157)<|box_end|>\n<|object_ref_start|>Read More<|object_ref_end|><|box_start|>(30,139),(73,155)<|box_end|>"
        }
    ]
}
```

The prompts we used are stored in `prompts.json`.

***

**The following are the open-source datasets we used as data sources. We welcome everyone to check the details and cite these sources accordingly!**

[1] [AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents](https://arxiv.org/abs/2407.17490)

[2] [UIBert: Learning Generic Multimodal Representations for UI Understanding](https://arxiv.org/abs/2107.13731)

[3] [Rico: A Mobile App Dataset for Building Data-Driven Design Applications](https://dl.acm.org/doi/pdf/10.1145/3126594.3126651)

[4] [Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements](https://arxiv.org/abs/2010.04295)

[5] [Mapping Natural Language Instructions to Mobile UI Action Sequences](https://arxiv.org/abs/2005.03776)

[6] [AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents](https://arxiv.org/abs/2405.14573)

[7] [SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents](https://arxiv.org/abs/2401.10935)

[8] [The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale](https://arxiv.org/abs/2406.17557)

---
license: apache-2.0
viewer: false
task_categories:
- robotics
---

This repository contains data related to the [AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents](https://huggingface.co/papers/2405.14573) environment.

AndroidWorld is a fully functional Android environment that provides reward signals for 116 programmatic tasks across 20 real-world Android apps.

* **Paper:** [https://huggingface.co/papers/2405.14573](https://huggingface.co/papers/2405.14573)
* **Code:** [github.com/google-research/android_world](https://github.com/google-research/android_world)