Update README.md
@@ -39,3 +39,52 @@ configs:
- split: cc_by_sa
  path: data/cc_by_sa-*
---

# CapyWiki-34M

CapyWiki is a collection of openly licensed and public domain image datasets from Wikimedia. It is the first in a series of openly licensed OpenCapybara datasets, with a focus on CC0 and public domain datasets.

CapyWiki contains 3 splits:

- `public_domain` split: **16.5M links** to Wikimedia images that were categorized with license info of `Public Domain`, `cc0` or equivalent. There are no restrictions on how these images can be used from a copyright standpoint.
- `cc_by` split: **3.4M links** to Wikimedia images that have a commercial-usage-permissive [cc-by](https://creativecommons.org/licenses/by/4.0/) or equivalent license.
- `cc_by_sa` split: **14.2M links** to Wikimedia images that have a commercial-usage-permissive [cc-by-sa](https://creativecommons.org/licenses/by-sa/4.0/) or equivalent license.

The dataset should contain photos, illustrations, scans, maps and any other media categories in an image format that Wikimedia hosts.

## What's the intended use of CapyWiki?

CapyWiki can be used to train and evaluate neural networks, but it is not limited to that use.

The `public_domain` image split can be used freely, openly, and without any restrictions, while the `cc_by` and `cc_by_sa` splits carry the provisions indicated in each license.

The information contained in this Model Card is not legal advice, and we recommend you conduct your own independent analysis of the content and its copyright status.

## Data format dictionary

The dataset contains:

- `url` for the image URL
- `description` as the original image description (may include original Wikimedia HTML tags)
- `author` as an HTML link tag for whoever is indicated as an author in Wikimedia
- `license` spelled-out license name text
- `license_wiki` shortened license nickname
- `date` artefact date
- `credit` the credit text

## Help needed

This is a raw dataset. Tasks such as captioning, content classification (photos, illustrations, etc.), aesthetic classification, and metadata inclusion (width, height) of the images are open, and community contributions for those are more than welcome.

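As one illustration of the kind of metadata contribution mentioned above, image width and height could be derived from the linked files themselves. The sketch below is only a hypothetical starting point, not part of any dataset tooling: it fetches a single image with `requests` and reads its dimensions with Pillow, and the URL and User-Agent string are placeholders to adapt.

```py
from io import BytesIO

import requests
from PIL import Image

# Placeholder URL; in practice this would come from the dataset's `url` column.
url = "https://upload.wikimedia.org/wikipedia/commons/example.jpg"

# Wikimedia asks clients to identify themselves with a descriptive User-Agent.
headers = {"User-Agent": "capywiki-metadata-sketch/0.1 (contact: you@example.com)"}

response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()

# Read the image from memory and record its dimensions.
image = Image.open(BytesIO(response.content))
width, height = image.size
print(width, height)
```
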
## Loading the dataset for downstream use-cases

The dataset splits are in `*.parquet` format and can be read/processed by any tool or library that can read `*.parquet` files.

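For example, if you download one split's parquet shards locally and keep the `data/cc_by_sa-*` naming pattern from the configs above, they could be read with pandas. This is a minimal sketch under that assumption; the local path is purely illustrative.

```py
import glob

import pandas as pd

# Assumes the cc_by_sa shards were downloaded into ./data/ (hypothetical local path).
files = sorted(glob.glob("data/cc_by_sa-*.parquet"))

# Concatenate all shards into one DataFrame and inspect its columns.
df = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
print(len(df), df.columns.tolist())
```
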
If you wish to use the Hugging Face Datasets library, you can load the dataset as:

```py
from datasets import load_dataset

# Load the public_domain split
dataset = load_dataset("opencapybara/CapyWiki-34M", split="public_domain")

# Now the dataset can be used for any downstream cases, e.g.:
# first_500_urls = dataset[:500]['url']
```

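If you only want to peek at a few records without downloading a full split, streaming mode (a standard `datasets` feature, not specific to this dataset) is a lightweight option; the field names below are taken from the data format dictionary above.

```py
from datasets import load_dataset

# Stream the split instead of materializing it on disk.
streamed = load_dataset("opencapybara/CapyWiki-34M", split="public_domain", streaming=True)

# Inspect the first record's fields, as listed in the data format dictionary.
record = next(iter(streamed))
for key in ["url", "description", "author", "license", "license_wiki", "date", "credit"]:
    print(key, "->", record.get(key))
```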