Commit 0d1f8d8 (verified) by yifutao, parent fd58680: Update novel_view_synthesis_benchmark/README.md
This folder contains the input used in the novel-view synthesis benchmark in the paper.

We provide data for the following sites:
- Blenheim Palace
  - 05 for training and **in-sequence** evaluation
  - 01 for **out-of-sequence** evaluation
- Keble College 04
  - The earlier part, around the quad, is for training and **in-sequence** evaluation
  - The later part, within the lawn, is for **out-of-sequence** evaluation
- Radcliffe Observatory Quarter (ROQ) 01
  - The earlier part, near the fountain, is for training and **in-sequence** evaluation
  - The later part, back to the fountain via a different route, is for **out-of-sequence** evaluation

For each sequence, we provide:
- **Images_train_eval**: images from the three cameras. Note that each image name carries a prefix of either “eval” or “train” to indicate the training/test split; this complies with nerfstudio’s configuration. See this example to run nerfstudio with this data.
- **Sparse**: COLMAP results, which can optionally be used by 3DGS for Gaussian initialisation.
- **Transforms_train_eval.json**: the metadata for training a NeRF, including each camera’s pose and the camera parameters. This can be used directly by nerfstudio.
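For illustration, here is a minimal sketch of reading `Transforms_train_eval.json` and separating frames by the “train”/“eval” filename prefix. It assumes the usual nerfstudio-style transforms layout (a top-level `frames` list whose entries carry a `file_path`); the file names below are hypothetical stand-ins, not actual files from the dataset.

```python
import json

def split_train_eval(transforms):
    """Partition frames by the "train"/"eval" prefix on the file name."""
    train, eval_ = [], []
    for frame in transforms["frames"]:
        name = frame["file_path"].rsplit("/", 1)[-1]  # strip any directory part
        (train if name.startswith("train") else eval_).append(frame)
    return train, eval_

# Tiny synthetic stand-in for Transforms_train_eval.json (hypothetical names).
transforms = json.loads("""
{
  "frames": [
    {"file_path": "images/train_cam0_000001.jpg"},
    {"file_path": "images/eval_cam0_000002.jpg"},
    {"file_path": "images/train_cam1_000001.jpg"}
  ]
}
""")

train, eval_ = split_train_eval(transforms)
print(len(train), len(eval_))  # 2 1
```

In practice nerfstudio consumes the transforms file directly, so a split like this is only needed for custom loaders or for sanity-checking the data.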
 
---------------------------------------------------------------------------------------------------------------------------

Our camera uses auto-exposure, which is crucial for capturing scenes with different lighting conditions.
This also brings up the challenge of colour consistency. In the example above, the colour of the same building differs because of the change in exposure time. In the example below, the colour of the same building differs not only because of the exposure change, but also because the lighting conditions are different: the data was captured on different dates.

Therefore, we also upload an additional COLMAP result of the Bodleian Library 01+02. This can facilitate research on handling lighting inconsistency to produce reconstructions with a uniform texture across a longer temporal horizon.
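On the exposure point: in a linear camera model, recorded intensity scales roughly with exposure time, so dividing linear RGB values by the exposure time gives a first-order normalisation across auto-exposed frames. The sketch below is illustrative only; the values and exposure times are made up, and a real pipeline must also account for gain/ISO, white balance, and non-linear tone mapping.

```python
def normalise_exposure(rgb_linear, exposure_s):
    """First-order exposure compensation: scale linear RGB by 1/exposure.

    Ignores gain/ISO, vignetting and any non-linear tone curve, so this is
    only a rough model of how auto-exposure affects recorded colours.
    """
    return [c / exposure_s for c in rgb_linear]

# The same surface seen in two frames; the second frame was exposed
# twice as long, so its raw values come out roughly twice as large.
a = normalise_exposure([0.10, 0.12, 0.08], exposure_s=0.005)
b = normalise_exposure([0.20, 0.24, 0.16], exposure_s=0.010)
# After normalisation the two colours agree (up to floating-point error).
```

Lighting changes across capture dates cannot be undone this simply, which is why the cross-date Bodleian Library reconstruction is useful as a research target.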