|
---
license: cc-by-nc-nd-4.0
task_categories:
- text-to-image
---
|
|
|
The **Git-10M** dataset is a global-scale remote sensing image-text pair dataset, consisting of over **10 million** image-text pairs annotated with geographic location and spatial resolution information.
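Before downloading the full dataset, you can preview a few samples by streaming (a minimal sketch; it assumes the standard `datasets` streaming API works for this repository):

```python
from datasets import load_dataset

# Stream examples without downloading the whole 10M-pair dataset
ds = load_dataset("lcybuaa/Git-10M", split="train", streaming=True)
print(next(iter(ds)))
```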
|
|
|
## CC-BY-NC-ND-4.0 License: This dataset may not be modified or redistributed without authorization!
|
|
|
<h1> |
|
<a href="https://chen-yang-liu.github.io/Text2Earth/">Project Page: https://chen-yang-liu.github.io/Text2Earth/ </a> |
|
</h1> |
|
|
|
<div align="center"> |
|
<img src="https://github.com/Chen-Yang-Liu/Text2Earth/raw/main/images/dataset.png" width="1000"/> |
|
</div> |
|
|
|
|
|
## View samples from the dataset |
|
```python
from datasets import load_dataset
import math


def XYZToLonLat(x, y, z):
    # Transform a Google tile location (x, y) at zoom level z into the
    # (longitude, latitude) of the tile's north-west corner.
    n = 2.0 ** z
    lon = x / n * 360.0 - 180.0  # longitude
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2.0 * y / n))))  # latitude
    return lon, lat


# Load the dataset
save_path = 'xxxxx'  # local cache directory
ds = load_dataset('lcybuaa/Git-10M', cache_dir=save_path)
train_dataset = ds["train"]

for i, example in enumerate(train_dataset):
    # PIL image:
    image = example["image"]
    # filename of the image:
    img_name = example["img_name"]
    # visual quality score, as shown in Fig. 5 of the paper:
    img_quality_score = example['img_quality_score']
    # caption of the image:
    caption = example['caption']
    # word length of the caption, as shown in Fig. 6 of the paper:
    caption_length = example['caption_length']
    # image spatial resolution, as shown in Fig. 4 of the paper:
    resolution = example['resolution']
    # image geolocation, as shown in Fig. 3 of the paper,
    # stored as a 'zoomLevel_tileX_tileY' string:
    Google_location = example['Google_location']
    Level_TileZ, TileX, TileY = Google_location.split('_')
    longitude, latitude = XYZToLonLat(int(TileX), int(TileY), int(Level_TileZ))

    # More tips: the spatial resolution (m/pixel) follows from the zoom level:
    # resolution = 2 ** (17 - int(Level_TileZ))
```
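As a quick sanity check, the snippet below runs a **hypothetical** tile index (the values are illustrative, not taken from a real dataset entry) through the `XYZToLonLat` conversion defined above:

```python
# Hypothetical Google_location string of the form 'zoomLevel_tileX_tileY'
Google_location = "16_54621_26705"  # illustrative values only
z, x, y = (int(v) for v in Google_location.split('_'))

lon, lat = XYZToLonLat(x, y, z)  # north-west corner of the tile
print(f"lon={lon:.4f}, lat={lat:.4f}")

# Spatial resolution implied by the zoom level (see the tip above):
resolution = 2 ** (17 - z)  # zoom 16 -> 2 m/pixel, zoom 17 -> 1 m/pixel
```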
|
|
|
## Git-RSCLIP: Remote Sensing Vision-Language Contrastive Pre-training Foundation Model |
|
Git-RSCLIP is pre-trained with a contrastive learning framework on the **Git-10M** dataset.

Git-RSCLIP is available here: [[Hugging Face](https://huggingface.co/lcybuaa/Git-RSCLIP) | [ModelScope](https://modelscope.cn/models/lcybuaa1111/Git-RSCLIP)]
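For zero-shot classification, a minimal sketch with 🤗 Transformers is given below. It assumes the checkpoint loads through the generic `AutoModel`/`AutoProcessor` classes and exposes a CLIP/SigLIP-style `logits_per_image` output; consult the Git-RSCLIP model card for the exact recommended usage.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Assumption: the checkpoint works with the generic Auto classes.
model = AutoModel.from_pretrained("lcybuaa/Git-RSCLIP")
processor = AutoProcessor.from_pretrained("lcybuaa/Git-RSCLIP")

image = Image.open("tile.png")  # hypothetical local remote sensing image
labels = [
    "a satellite image of an airport",
    "a satellite image of a forest",
    "a satellite image of a residential area",
]

inputs = processor(text=labels, images=image, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assumption: CLIP-style image-text similarity logits.
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])
```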
|
|
|
Top-1 accuracy of zero-shot classification on multiple image classification datasets:
|
|
|
| Method | OPTIMAL31 | RSC11 | RSICB128 | WHURS19 | RS2800/RSSCN7 | CLRS | Average score |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CLIP | 0.60 | 0.45 | 0.25 | 0.77 | 0.52 | 0.56 | 0.52 |
| RemoteCLIP | 0.82 | **0.67** | 0.34 | 0.93 | 0.52 | 0.66 | 0.65 |
| GeoRSCLIP | 0.83 | **0.67** | 0.35 | 0.89 | 0.63 | **0.69** | 0.68 |
| SkyCLIP50 | 0.77 | 0.60 | 0.38 | 0.78 | 0.55 | 0.61 | 0.62 |
| Git-RSCLIP (Ours) | **0.95** | **0.67** | **0.52** | **0.94** | **0.64** | 0.65 | **0.73** |
|
|
|
|
|
|
|
## BibTeX entry and citation info
|
|
|
```bibtex
@ARTICLE{Text2Earth,
  author={Liu, Chenyang and Chen, Keyan and Zhao, Rui and Zou, Zhengxia and Shi, Zhenwei},
  journal={IEEE Geoscience and Remote Sensing Magazine},
  title={Text2Earth: Unlocking text-driven remote sensing image generation with a global-scale dataset and a foundation model},
  year={2025},
  volume={},
  number={},
  pages={2-23},
  doi={10.1109/MGRS.2025.3560455}}
```