This folder contains the reconstructions used in the reconstruction benchmark in the paper.
For each sequence, we provide:
- The input data:
  - **Images**: hardware-synchronised camera images from the three Alphasense cameras.
  - **Vilens_slam**: undistorted lidar point clouds from VILENS-SLAM. The clouds are motion-undistorted and their timestamps are synchronised with the images.
  - **T_gt_lidar.txt**: the global transform between the lidar map and the ground-truth map. This allows one to compare the reconstruction with the ground truth in a single coordinate system.
  - Note 1: the raw point cloud is 10 Hz and the raw camera is 20 Hz. The point cloud provided here comes from the pose-graph SLAM, where a node is spawned every 1 metre travelled, so the camera images and lidar clouds are provided at about 1 Hz.
- The reconstructions:
  - **lidar_cloud_merged_error.pcd**: merged lidar point cloud.
  - **nerfacto_cloud_metric_gt_frame_error.pcd**: point cloud exported from nerfacto.
  - **openmvs_dense_cloud_gt_frame_error.pcd**: dense MVS point cloud from OpenMVS.
  - Note 1: all reconstructions are coloured by point-to-point distance to the ground truth, i.e. the reconstruction error.
  - Note 2: all reconstructions are filtered by the ground truth's occupancy map **gt_cloud.bt** to avoid penalising points in unknown space. This is described in [SiLVR](https://arxiv.org/abs/2502.02657v1), section V.C.2.
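The evaluation implied above (apply the global transform, then compute the nearest-neighbour distance from each reconstructed point to the ground truth) can be sketched as below. This is a minimal illustration using NumPy and SciPy on toy arrays, not the benchmark's actual evaluation script; the function names are our own, and in practice the clouds would be loaded from the `.pcd` files and the 4×4 matrix from `T_gt_lidar.txt`.

```python
import numpy as np
from scipy.spatial import cKDTree

def apply_transform(T, points):
    """Apply a 4x4 homogeneous transform (e.g. the one in T_gt_lidar.txt)
    to an Nx3 array of points."""
    return points @ T[:3, :3].T + T[:3, 3]

def point_to_point_errors(recon, gt):
    """Distance from each reconstructed point to its nearest ground-truth
    point; these are the values the provided clouds are coloured by."""
    dists, _ = cKDTree(gt).query(recon)
    return dists

# Toy example: ground truth is four corner points; the "reconstruction"
# is the same cloud offset by 0.1 m along x.
gt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
recon = gt + np.array([0.1, 0., 0.])

# A pure translation of 2 m in z, standing in for the lidar-to-GT transform.
T = np.eye(4)
T[:3, 3] = [0., 0., 2.]
recon_gt_frame = apply_transform(T, recon)

errors = point_to_point_errors(recon, gt)  # all 0.1 for this toy offset
```

Note that a real evaluation would also drop reconstructed points falling in the unknown space of **gt_cloud.bt** before computing distances, as described in the SiLVR paper.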