---
license: mit
language:
- en
---
# RDT-1B

RDT-1B is a 1B-parameter imitation-learning Diffusion Transformer pre-trained on 1M+ multi-robot episodes. Given a language instruction and 3-view RGB image observations, RDT predicts the next 64 robot actions. RDT is inherently compatible with almost all modern mobile manipulators, from single-arm to dual-arm, joint-space to EEF control, position to velocity control, and even a mobile chassis.

All the [code]() and pretrained model weights are licensed under the MIT license.

Please refer to our [project page](https://rdt-robotics.github.io/rdt-robotics/) and [paper]() for more information.

## Model Details

- **Developed by:** the RDT team from Tsinghua University
- **License:** MIT
- **Language(s) (NLP):** en
- **Model Architecture:** Diffusion Transformer
- **Pretrain dataset:** a curated pretraining dataset collected from 46 datasets; see [here]() for details
- **Repository:** [repo_url]
- **Paper:** [paper_url]
- **Project Page:** https://rdt-robotics.github.io/rdt-robotics/

## Uses

RDT takes a language instruction, image observations, and proprioception as input, and predicts the next 64 robot actions as a unified action-space vector covering the main physical quantities of the robot, including end-effector and joint positions and velocities, base movement, etc.
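To make the unified action-space idea concrete, here is a minimal sketch of packing heterogeneous robot quantities into one fixed-length vector with a validity mask. The slot names, offsets, and dimension below are hypothetical illustrations, not the actual RDT format:

```python
import numpy as np

# Hypothetical slot layout for a unified action-space vector.
# The real RDT layout differs; this only illustrates reserving
# fixed slots per physical quantity and masking the slots a
# given robot does not use.
SLOTS = {
    "eef_pos": (0, 3),       # end-effector position (x, y, z)
    "eef_vel": (3, 6),       # end-effector velocity
    "joint_pos": (6, 13),    # up to 7 joint positions
    "joint_vel": (13, 20),   # up to 7 joint velocities
    "base_move": (20, 23),   # base movement (vx, vy, yaw rate)
}
DIM = 23  # illustrative total dimension

def pack_action(quantities):
    """Pack the available quantities into the unified vector plus a mask."""
    vec = np.zeros(DIM)
    mask = np.zeros(DIM, dtype=bool)
    for name, values in quantities.items():
        start, _ = SLOTS[name]
        values = np.asarray(values, dtype=float)
        vec[start:start + len(values)] = values
        mask[start:start + len(values)] = True
    return vec, mask

# A single-arm robot commanding joint positions and base motion:
vec, mask = pack_action({"joint_pos": [0.1] * 7, "base_move": [0.2, 0.0, 0.05]})
```

Slots a robot lacks (here, the EEF and joint-velocity entries) stay zeroed and masked out, which is one way a single vector format can serve single-arm, dual-arm, joint, and EEF robots alike.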

### Getting Started

RDT-1B supports fine-tuning on custom datasets, deployment and inference on real robots, as well as pretraining.

Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides.
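For orientation only, inference with a policy like this typically follows the shape below. Every class, method, and dimension here is a hypothetical placeholder (the real entry points are in the repository guides linked above); the stub returns zeros purely to show the interface:

```python
import numpy as np

class RDTPolicy:
    """Hypothetical stand-in for an RDT inference wrapper."""
    HORIZON = 64      # RDT predicts the next 64 actions per call
    ACTION_DIM = 23   # illustrative unified action-space dimension

    def predict(self, instruction, images, proprio):
        """Return an action chunk of shape (HORIZON, ACTION_DIM)."""
        assert images.shape[0] == 3  # three RGB camera views
        # A real policy would run the diffusion transformer here;
        # this stub returns zeros just to show input/output shapes.
        return np.zeros((self.HORIZON, self.ACTION_DIM))

policy = RDTPolicy()
obs = np.zeros((3, 224, 224, 3))   # 3-view RGB observations
proprio = np.zeros(23)             # robot proprioception
actions = policy.predict("pick up the red block", obs, proprio)
```

The key point is the chunked output: one call yields a 64-step action sequence rather than a single action, which the controller can then execute or re-plan from.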