blumenstiel committed
Commit 687a219 · 1 Parent(s): 4469ce0

Update README

Files changed (1)
  1. README.md +5 -14
README.md CHANGED
@@ -21,25 +21,16 @@ Samples from the TerraMesh dataset with seven spatiotemporal aligned modalities.
 
  ---
 
- ## Key features
-
- | Aspect     | Value |
- | ---------- | ----- |
- | Modalities | S‑2 L1C (13 bands), S‑2 L2A (12 bands), S‑2 RGB (3 bands), S‑1 GRD (VV, VH), S‑1 RTC (VV, VH), NDVI, Copernicus DEM, ESRI LULC |
- | Samples    | **9 089 536** train · **89 088** val |
- | Patch size | 264 × 264 pixels (10 m grid) |
- | Years      | 2017 – 2024 (peak 2019 – 2023) |
- | License    | CC‑BY‑SA‑4.0 |
-
- ---
-
  ## Dataset organisation
 
  The archive ships two top‑level splits `train/` and `val/`, each holding one folder per modality. More details will follow with the dataset release at the end of June.
 
  ---
 
- ## Global coverage & class distribution
+ ## Description
+
+ TerraMesh fuses complementary optical, radar, topographic and thematic layers into pixel‑aligned 10 m cubes, allowing models to learn joint representations of land cover, vegetation dynamics and surface structure at planetary scale.
+ The dataset is globally distributed and covers multiple years.
 
  Heat map of the sample count in a one-degree grid. | Monthly distribution of all S-2 timestamps.
  :-------------------------:|:-------------------------:
@@ -54,7 +45,7 @@ Heat map of the sample count in a one-degree grid. | Monthly distribution of al
  TerraMesh was used to pre-train [TerraMind-B](https://huggingface.co/ibm-esa-geospatial/TerraMind-1.0-base).
  On the six evaluated segmentation tasks from the PANGAEA benchmark, TerraMind‑B reaches an average mIoU of 66.6%, the best overall score with an average rank of 2.33. This amounts to roughly a 3 pp improvement over the next‑best open model (CROMA), underscoring the benefits of pre‑training on TerraMesh.
  Compared to an ablation model pre-trained only on SSL4EO-S12 locations, TerraMind‑B performs overall 1 pp better, with better global generalization on more remote tasks like CTM-SS.
-
+ More details are available in our [paper](https://arxiv.org/abs/2504.11172).
  ---
 
  ## Citation
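
The updated README describes the archive as two top-level splits, `train/` and `val/`, each holding one folder per modality, with pixel-aligned 264 × 264 patches on a 10 m grid. The sketch below is a rough, non-authoritative illustration of how such a layout could be read; it assumes GeoTIFF patches, `rasterio` as the reader, shared filenames across modality folders, and made-up folder names, none of which are confirmed in this commit (the README defers details to the release).

```python
# Illustrative sketch only: folder names, the GeoTIFF format and the assumption
# that a patch keeps the same filename in every modality folder are NOT
# confirmed by this commit.
from pathlib import Path

import numpy as np
import rasterio  # assumed reader for GeoTIFF patches

root = Path("TerraMesh")   # local copy of the archive
split = root / "train"     # the README names `train/` and `val/`

# One sub-folder per modality; these names are placeholders.
modalities = ["S2L2A", "S1GRD", "DEM", "LULC"]

def load_sample(sample_name: str) -> np.ndarray:
    """Read the same patch from every modality folder and stack the bands."""
    bands = []
    for mod in modalities:
        path = split / mod / f"{sample_name}.tif"
        with rasterio.open(path) as src:
            bands.append(src.read())  # array of shape (n_bands, H, W)
    cube = np.concatenate(bands, axis=0)
    # The (removed) key-features table lists 264 x 264 px patches on a 10 m grid.
    assert cube.shape[-2:] == (264, 264)
    return cube
```

Stacking along the band axis only works because the README states the modalities are pixel-aligned on a common 10 m grid; no resampling or reprojection step is sketched here.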
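
The key-features table removed by this commit still documents the per-modality band counts. Collected into a small mapping (the keys are illustrative names, not official identifiers), they give the total channel count of a fully stacked cube:

```python
# Band counts per modality, taken from the key-features table removed in this
# commit; dictionary keys are illustrative names, not official folder names.
BANDS_PER_MODALITY = {
    "S2_L1C": 13,  # Sentinel-2 Level-1C
    "S2_L2A": 12,  # Sentinel-2 Level-2A
    "S2_RGB": 3,   # Sentinel-2 RGB composite
    "S1_GRD": 2,   # Sentinel-1 GRD (VV, VH)
    "S1_RTC": 2,   # Sentinel-1 RTC (VV, VH)
    "NDVI": 1,
    "DEM": 1,      # Copernicus DEM
    "LULC": 1,     # ESRI land use / land cover classes
}

# Total channels if all modalities are stacked into one 264 x 264 cube.
total_channels = sum(BANDS_PER_MODALITY.values())  # 35
```

With every modality stacked, an input cube would carry 35 channels over a 264 × 264 patch.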