---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: tars/train/*.tar
  - split: fine_tune
    path: tars/fine_tune/*.tar
language:
- en
---
# Accessing the `font-square-v2` Dataset on Hugging Face
The `font-square-v2` dataset is hosted on Hugging Face at [blowing-up-groundhogs/font-square-v2](https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2). It is stored in WebDataset format, with tar files organized as follows:
- **tars/train/**: Contains `{000..499}.tar` shards for the main training split.
- **tars/fine_tune/**: Contains `{000..049}.tar` shards for fine-tuning.
Each tar file contains multiple samples, where each sample includes:
- An RGB image (`.rgb.png`)
- A black-and-white image (`.bw.png`)
- A JSON file (`.json`) with metadata (e.g. text and writer ID)
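If you want to see this layout for yourself, you can list the members of a shard once it has been downloaded (see the sections below). This is a minimal sketch; the shard path and the printed basenames are illustrative:
```python
import tarfile

# List the first few members of one downloaded shard.
# Actual sample basenames may differ; the grouping into
# .rgb.png / .bw.png / .json triples is what matters.
with tarfile.open("font-square-v2/tars/train/000.tar") as tar:
    for name in tar.getnames()[:9]:
        print(name)
```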
For details on how the synthetic dataset was generated, please refer to our paper: [Synthetic Dataset Generation](https://arxiv.org/pdf/2503.17074).
You can access the dataset either by downloading it locally or by streaming it directly over HTTP.
---
## 1. Downloading the Dataset Locally
You can download the dataset locally using either **Git LFS** or the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub) Python library.
### Using Git LFS
Clone the repository (ensure [Git LFS](https://git-lfs.github.com/) is installed):
```bash
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
```
This creates a local directory `font-square-v2` containing the `tars/` folder with the subdirectories `train/` and `fine_tune/`.
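As a quick sanity check (a minimal sketch, assuming the clone landed in `./font-square-v2`), you can count the shards:
```python
import glob

# The repository contains 500 training shards and 50 fine-tuning shards.
print(len(glob.glob("font-square-v2/tars/train/*.tar")))      # expected: 500
print(len(glob.glob("font-square-v2/tars/fine_tune/*.tar")))  # expected: 50
```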
### Using the huggingface_hub Python Library
Alternatively, download a snapshot of the dataset:
```python
from huggingface_hub import snapshot_download
# Download the repository; the local path is returned
local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
```
After downloading, the tar shards are located in:
- `local_dir/tars/train/{000..499}.tar`
- `local_dir/tars/fine_tune/{000..049}.tar`
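If you only need one split, `snapshot_download` also accepts `allow_patterns` to restrict what is fetched. A minimal sketch (the glob pattern is an assumption based on the layout above):
```python
from huggingface_hub import snapshot_download

# Download only the fine-tuning shards.
local_dir = snapshot_download(
    repo_id="blowing-up-groundhogs/font-square-v2",
    repo_type="dataset",
    allow_patterns="tars/fine_tune/*.tar",
)
```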
### Using WebDataset with the Local Files
Once downloaded, you can load the dataset using [WebDataset](https://github.com/webdataset/webdataset). For example, to load the training split:
```python
import webdataset as wds
import os
local_dir = "path/to/font-square-v2" # Update as needed
# Load training shards
train_pattern = os.path.join(local_dir, "tars", "train", "{000..499}.tar")
train_dataset = wds.WebDataset(train_pattern).decode("pil")
for sample in train_dataset:
    rgb_image = sample["rgb.png"]  # PIL image
    bw_image = sample["bw.png"]    # PIL image
    metadata = sample["json"]
    print("Training sample metadata:", metadata)
    break
```
And similarly for the fine-tune split:
```python
fine_tune_pattern = os.path.join(local_dir, "tars", "fine_tune", "{000..049}.tar")
fine_tune_dataset = wds.WebDataset(fine_tune_pattern).decode("pil")
```
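For training you will usually want shuffling and parallel loading. A minimal sketch, assuming `train_pattern` from above (the buffer size and worker count are arbitrary choices, not recommendations; depending on your `webdataset` version you may also need to configure how shards are split across workers):
```python
import torch
import webdataset as wds

train_dataset = (
    wds.WebDataset(train_pattern)
    .shuffle(1000)      # shuffle samples within an in-memory buffer
    .decode("pil")
)

# WebDataset is an IterableDataset, so it plugs into a standard DataLoader.
# batch_size=None keeps per-sample iteration; batching would require images
# of equal size or a custom collate function.
loader = torch.utils.data.DataLoader(train_dataset, batch_size=None, num_workers=4)
```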
---
## 2. Streaming the Dataset Directly Over HTTP
If you prefer not to download the shards, you can stream them directly from Hugging Face using the CDN (provided the tar files are public). For example:
```python
import webdataset as wds
url_pattern = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main"
    "/tars/train/{000..499}.tar"
)
dataset = wds.WebDataset(url_pattern).decode("pil")
for sample in dataset:
    rgb_image = sample["rgb.png"]
    bw_image = sample["bw.png"]
    metadata = sample["json"]
    print("Sample metadata:", metadata)
    break
```
(Adjust the shard range accordingly for the fine-tune split.)
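For example, a sketch of the fine-tune URL pattern, following the same scheme:
```python
fine_tune_url_pattern = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2/resolve/main"
    "/tars/fine_tune/{000..049}.tar"
)
fine_tune_dataset = wds.WebDataset(fine_tune_url_pattern).decode("pil")
```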
---
## Additional Considerations
- **Decoding:**
The `.decode("pil")` method in WebDataset converts image bytes into PIL images. To use PyTorch tensors, add a transform step:
```python
import torchvision.transforms as transforms
transform = transforms.ToTensor()
dataset = (
    wds.WebDataset(train_pattern)
    .decode("pil")
    .map(lambda sample: {
        "rgb": transform(sample["rgb.png"]),
        "bw": transform(sample["bw.png"]),
        "metadata": sample["json"],
    })
)
```
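If you prefer tuples to dictionaries, the same pipeline can be expressed with `to_tuple`/`map_tuple` (a sketch, assuming `train_pattern` and `transform` from above):
```python
dataset = (
    wds.WebDataset(train_pattern)
    .decode("pil")
    .to_tuple("rgb.png", "bw.png", "json")
    .map_tuple(transform, transform, lambda meta: meta)
)

rgb, bw, meta = next(iter(dataset))
print(rgb.shape, bw.shape, meta)
```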
- **Shard Naming:**
Ensure your WebDataset pattern matches the following structure:
```
tars/
├── train/
│   └── {000..499}.tar
└── fine_tune/
    └── {000..049}.tar
```
By following these instructions, you can easily integrate the `font-square-v2` dataset into your project for training and fine-tuning.