---
license: apache-2.0
task_categories:
- sentence-similarity
language:
- en
tags:
- doi
- bibliography
- literature
- crossref
pretty_name: crossref 2025
size_categories:
- 10M<n<100M
---
## Dataset Overview

This dataset contains bibliographic metadata from the public Crossref snapshot released in 2025. It provides core fields for scholarly documents, including DOI, title, abstract, authorship, publication month and year, and URL. The entire public dump (\~196.94 GB of JSON) was filtered and extracted into Parquet format for efficient loading and querying.

* **Source dump size:** \~196.94 GB
* **Number of records:** 34,308,730

Use this dataset for large-scale text mining, bibliometric analyses, metadata enrichment, and building citation-aware tools.

## Dataset Features

Each record in the dataset contains the following fields:

| Field      | Type     | Description                                           |
| ---------- | -------- | ----------------------------------------------------- |
| `doi`      | `string` | Digital Object Identifier of the publication.         |
| `title`    | `string` | Title of the scholarly work.                          |
| `abstract` | `string` | Abstract text (when available).                       |
| `author`   | `list`   | List of author names or structured author metadata.   |
| `month`    | `int`    | Publication month (1–12).                             |
| `year`     | `int`    | Publication year (e.g., 2024, 2025).                  |
| `url`      | `string` | URL pointing to the publication page or DOI resolver. |

## Dataset Structure

The dataset is provided in Apache Parquet format, which offers efficient columnar storage and supports schema evolution. Each Parquet file follows the schema described above:

```
root
 |-- doi: string (nullable = true)
 |-- title: string (nullable = true)
 |-- abstract: string (nullable = true)
 |-- author: array (nullable = true)
 |    |-- element: string (nullable = true)
 |-- month: int (nullable = true)
 |-- year: int (nullable = true)
 |-- url: string (nullable = true)
```

## Dataset Splits

This dataset does not come with predefined splits. Users can split by publication year, subject area, or random sampling to suit their experiments.
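
For example, a year-based or random split can be sketched in plain Python (the records below are illustrative stand-ins for real rows):

```python
import random

# Illustrative records following the dataset schema (real rows come from the parquet files).
records = [{"doi": f"10.1000/{i}", "year": 2020 + (i % 6)} for i in range(12)]

# Split by publication year: pre-2024 vs. 2024 onwards.
train = [r for r in records if r["year"] < 2024]
test = [r for r in records if r["year"] >= 2024]

# Or a reproducible random split.
random.seed(42)
shuffled = records[:]
random.shuffle(shuffled)
cut = int(0.8 * len(shuffled))
rand_train, rand_test = shuffled[:cut], shuffled[cut:]

print(len(train), len(test), len(rand_train), len(rand_test))
```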

## Dataset Creation

### Source

* **Original data:** Public Crossref metadata snapshot, July 2025 (\~196.94 GB), announced in the Crossref blog post ["2025 public data file now available"](https://www.crossref.org/blog/2025-public-data-file-now-available/).

* **Access method:** Downloaded the public JSON dump via [Academic Torrents](https://academictorrents.com/details/e0eda0104902d61c025e27e4846b66491d4c9f98).

### Processing

1. **Extraction:** Parsed the Crossref dump to extract relevant fields (DOI, title, abstract, authors, month, year, URL).
2. **Transformation:** Normalized fields; authors consolidated into a list of names.
3. **Serialization:** Saved the resulting table in Parquet format for columnar efficiency.
4. **Storage:** Uploaded parquet files to Hugging Face Datasets with corresponding metadata.
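
Steps 1–2 can be sketched as follows. The input is a minimal Crossref-style JSON record (field names follow Crossref REST API conventions), and the `extract` helper is a hypothetical illustration, not the repository's actual code:

```python
import json

# A minimal Crossref-style record (illustrative; real dump records carry many more fields).
raw = json.loads("""
{
  "DOI": "10.1000/example.doi",
  "title": ["An Example Article"],
  "abstract": "<jats:p>Example abstract.</jats:p>",
  "author": [{"given": "Ada", "family": "Lovelace"}],
  "issued": {"date-parts": [[2025, 7]]},
  "URL": "https://doi.org/10.1000/example.doi"
}
""")

def extract(record: dict) -> dict:
    """Flatten a Crossref-style record into the seven dataset fields."""
    date_parts = (record.get("issued", {}).get("date-parts") or [[None]])[0]
    return {
        "doi": record.get("DOI"),
        "title": (record.get("title") or [None])[0],
        "abstract": record.get("abstract"),
        # Consolidate structured author metadata into a list of name strings.
        "author": [
            " ".join(p for p in (a.get("given"), a.get("family")) if p)
            for a in record.get("author", [])
        ],
        "month": date_parts[1] if len(date_parts) > 1 else None,
        "year": date_parts[0] if date_parts else None,
        "url": record.get("URL"),
    }

row = extract(raw)
print(row["doi"], row["year"], row["month"], row["author"])
```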

Code for dataset processing and card generation is available at [mitanshu7/PaperMatch_crossref](https://github.com/mitanshu7/PaperMatch_crossref).

## Usage

```python
from datasets import load_dataset

# Stream the dataset to avoid downloading all parquet files up front.
dataset = load_dataset(
    "bluuebunny/crossref_metadata_2025",
    streaming=True,
    split="train",
)

# Inspect a record (streaming datasets are iterated, not indexed).
print(next(iter(dataset)))

# Filter by year; the filter is lazy and records are yielded as you iterate.
subset_2025 = dataset.filter(lambda x: x["year"] == 2025)
for record in subset_2025.take(5):
    print(record["doi"], record["title"])
```

## Citation

If you use this dataset in your research, please cite the Crossref public data file:

```
@misc{crossref2025,
  title        = {{Crossref} Public Data File 2025},
  author       = {{Crossref}},
  year         = 2025,
  howpublished = {\url{https://www.crossref.org/blog/2025-public-data-file-now-available/}},
}
```


## Contact

* Repository and processing code: [mitanshu7/PaperMatch\_crossref](https://github.com/mitanshu7/PaperMatch_crossref)
* Dataset author: Mitanshu Sukhwani