---
license: apache-2.0
task_categories:
- sentence-similarity
language:
- en
tags:
- doi
- bibliography
- literature
- crossref
pretty_name: Crossref 2025
size_categories:
- 10M<n<100M
---
## Dataset Overview
This dataset contains bibliographic metadata from the public Crossref snapshot released in 2025. It provides core fields for scholarly documents, including DOI, title, abstract, authorship, publication month and year, and URL. The entire public dump (~196.94 GB) was filtered and extracted into Parquet format for efficient loading and querying.
- Total size: 196.94 GB (Parquet files)
- Number of records: 34,308,730
Use this dataset for large-scale text mining, bibliometric analyses, metadata enrichment, and building citation-aware tools.
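Because the data ships as Parquet, aggregate queries can run directly against the Hub without a full download. The sketch below uses DuckDB's `hf://` protocol; the glob pattern is an assumption about the repository's file layout and may need adjusting.

```python
import duckdb  # pip install duckdb

# Count records per publication year, reading the Parquet shards remotely.
# The hf:// glob is an assumption about where the shards live in the repo.
result = duckdb.sql("""
    SELECT year, COUNT(*) AS n_records
    FROM 'hf://datasets/bluuebunny/crossref_metadata_2025/**/*.parquet'
    GROUP BY year
    ORDER BY year DESC
""").df()
print(result.head())
```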
## Dataset Features
Each record in the dataset contains the following fields:
| Field | Type | Description |
|---|---|---|
| doi | string | Digital Object Identifier of the publication. |
| title | string | Title of the scholarly work. |
| abstract | string | Abstract text (when available). |
| author | list | List of author names or structured author metadata. |
| month | int | Publication month (1–12). |
| year | int | Publication year (e.g., 2024, 2025). |
| url | string | URL pointing to the publication page or DOI resolver. |
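For orientation, the sketch below shows the shape of a single record as a Python dict. All values are hypothetical, not drawn from the dataset.

```python
# Hypothetical record, for illustrating field shapes only.
record = {
    "doi": "10.1234/example.5678",                  # string
    "title": "An Example Scholarly Work",           # string
    "abstract": "We study ...",                     # string, or None when unavailable
    "author": ["Jane Doe", "John Smith"],           # list of author names
    "month": 6,                                     # int, 1-12
    "year": 2025,                                   # int
    "url": "https://doi.org/10.1234/example.5678",  # string
}
```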
## Dataset Structure
The dataset is provided in Apache Parquet format, which offers efficient columnar storage and supports schema evolution. Each Parquet file chunk shares the complete schema shown below:
```
root
 |-- doi: string (nullable = true)
 |-- title: string (nullable = true)
 |-- abstract: string (nullable = true)
 |-- author: array (nullable = true)
 |    |-- element: string (nullable = true)
 |-- month: int (nullable = true)
 |-- year: int (nullable = true)
 |-- url: string (nullable = true)
```
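To confirm this schema locally without downloading the data files, the `datasets` library can resolve it from the Hub:

```python
from datasets import load_dataset_builder

# Resolves the dataset's features from Hub metadata; no data download.
builder = load_dataset_builder("bluuebunny/crossref_metadata_2025")
print(builder.info.features)
```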
## Dataset Splits
This dataset does not come with predefined splits; all records live in a single `train` split. Users can derive splits by publication year, subject area, or random sampling to suit their experiments, as sketched below.
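A minimal sketch of year-based and random splits with the `datasets` API; note that a non-streaming load materializes the full ~197 GB locally.

```python
from datasets import load_dataset

# Non-streaming load: downloads the full dataset (~197 GB), so only do this
# with sufficient disk space; otherwise adapt the filters to streaming mode.
ds = load_dataset("bluuebunny/crossref_metadata_2025", split="train")

# Year-based split: everything before 2025 vs. works published in 2025.
train = ds.filter(lambda x: x["year"] is not None and x["year"] < 2025)
test = ds.filter(lambda x: x["year"] == 2025)

# Random split: hold out 1% of records, reproducibly.
splits = ds.train_test_split(test_size=0.01, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```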
## Dataset Creation
### Source
- Original data: public Crossref metadata snapshot, July 2025, announced in the Crossref blog post "2025 public data file now available" (approx. 196.94 GB). See https://www.crossref.org/blog/2025-public-data-file-now-available/.
- Access method: downloaded the public JSON dump via Academic Torrents.
### Processing
- Extraction: Parsed the Crossref dump to extract the relevant fields (DOI, title, abstract, authors, month, year, URL).
- Transformation: Normalized fields; authors were consolidated into a list of names.
- Serialization: Saved the resulting table in Parquet format for columnar efficiency.
- Storage: Uploaded the Parquet files to Hugging Face Datasets with corresponding metadata.
Code for dataset processing and card generation is available at:
https://github.com/mitanshu7/PaperMatch_crossref
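For illustration, here is a minimal sketch of the extraction step. It assumes the dump's layout of gzipped JSON files, each containing an `items` array of Crossref work records; the directory and output names are hypothetical, and the actual pipeline in the repository above may differ.

```python
import gzip
import json
from pathlib import Path

import pyarrow as pa
import pyarrow.parquet as pq

def extract_record(item: dict) -> dict:
    """Pull the core fields out of one Crossref work item."""
    # "issued" holds date-parts like [year, month, day]; parts may be missing.
    date_parts = (item.get("issued", {}).get("date-parts") or [[]])[0]
    authors = [
        " ".join(p for p in (a.get("given"), a.get("family")) if p)
        for a in item.get("author", [])
    ]
    return {
        "doi": item.get("DOI"),
        "title": (item.get("title") or [None])[0],
        "abstract": item.get("abstract"),
        "author": authors,
        "month": date_parts[1] if len(date_parts) > 1 else None,
        "year": date_parts[0] if len(date_parts) > 0 else None,
        "url": item.get("URL"),
    }

records = []
for path in Path("crossref_dump").glob("*.json.gz"):  # hypothetical dump directory
    with gzip.open(path, "rt", encoding="utf-8") as f:
        records.extend(extract_record(item) for item in json.load(f)["items"])

pq.write_table(pa.Table.from_pylist(records), "crossref_chunk.parquet")
```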
## Usage

```python
from datasets import load_dataset

# Stream the dataset to avoid downloading ~197 GB up front.
dataset = load_dataset(
    "bluuebunny/crossref_metadata_2025",
    split="train",
    streaming=True,
)

# Inspect a record (streaming datasets are iterated, not indexed).
print(next(iter(dataset)))

# Filter by year; the filter is applied lazily while iterating.
subset_2025 = dataset.filter(lambda x: x["year"] == 2025)
for record in subset_2025:
    print(record["doi"], record["title"])
    break
```

A streaming dataset has no length and no random access; to count records or index into the data, drop `streaming=True` and load the dataset fully.
## Citation
If you use this dataset in your research, please cite the Crossref public data file:
```bibtex
@misc{crossref2025,
  title        = {{Crossref} Public Data File 2025},
  author       = {{Crossref}},
  year         = {2025},
  howpublished = {\url{https://www.crossref.org/blog/2025-public-data-file-now-available/}},
}
```
## Contact
- Repository and processing code: mitanshu7/PaperMatch_crossref
- Dataset author: Mitanshu Sukhwani