---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: question
dtype: string
- name: rank
dtype: int64
- name: url
dtype: string
- name: read_more_link
dtype: string
- name: language
dtype: string
- name: title
dtype: string
- name: top_image
dtype: string
- name: meta_img
dtype: string
- name: images
sequence: string
- name: movies
sequence: string
- name: keywords
sequence: 'null'
- name: meta_keywords
sequence: string
- name: tags
dtype: 'null'
- name: authors
sequence: string
- name: publish_date
dtype: string
- name: summary
dtype: string
- name: meta_description
dtype: string
- name: meta_lang
dtype: string
- name: meta_favicon
dtype: string
- name: meta_site_name
dtype: string
- name: canonical_link
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 28143581642
num_examples: 2812737
download_size: 11334496137
dataset_size: 28143581642
configs:
- config_name: default
data_files:
- split: train
path:
- data/part_*
language:
- en
pretty_name: FactCheck
tags:
- FactCheck
- knowledge-graph
- question-answering
- classification
- FactBench
- YAGO
- DBpedia
- LLM-factuality
- fact-checking
license: mit
task_categories:
- question-answering
size_categories:
- 1M<n<10M
---
# Dataset Card for FactCheck
## 📝 Dataset Summary
**FactCheck** is a benchmark for evaluating LLMs on **knowledge graph fact verification**. It pairs structured facts from YAGO, DBpedia, and FactBench with web-extracted evidence, including questions, summaries, full page text, and metadata. The dataset contains roughly 2.8 million examples designed for sentence-level fact-checking and QA tasks.
## 📚 Supported Tasks
- **Question Answering**: Answer fact-checking questions derived from KG triples.
- **Benchmarking LLMs**: Evaluate model factuality against KG-grounded evidence.
## 🗣 Languages
- English (`en`)
- The web-scraped evidence may also include Google search results in other languages.
## 🧱 Dataset Structure
Each example includes the following fields:
| Field | Type | Description |
|------------------|----------|-------------|
| `identifier` | string | Unique ID per example |
| `dataset` | string | Source KG: YAGO, DBpedia, or FactBench |
| `question` | string | Question derived from the fact |
| `rank` | int | Relevance rank of question/page |
| `url`, `read_more_link` | string | Web source links |
| `title`, `summary`, `text` | string | Content extracted from the page HTML |
| `images`, `movies` | [string] | Media assets |
| `keywords`, `meta_keywords`, `tags`, `authors`, `publish_date`, `meta_description`, `meta_site_name`, `top_image`, `meta_img`, `canonical_link` | string or [string] | Additional metadata |
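To make the schema concrete, a single record can be viewed as a plain Python dictionary. All values below are illustrative placeholders, not actual rows from the dataset:

```python
# Hypothetical example record following the schema above
# (every value is an illustrative placeholder, not real data).
example = {
    "identifier": "factbench-00001",
    "dataset": "FactBench",
    "question": "Was Albert Einstein born in Ulm?",
    "rank": 1,
    "url": "https://en.wikipedia.org/wiki/Albert_Einstein",
    "read_more_link": "",
    "language": "en",
    "title": "Albert Einstein - Wikipedia",
    "top_image": "",
    "meta_img": "",
    "images": [],
    "movies": [],
    "keywords": [],
    "meta_keywords": [],
    "tags": None,
    "authors": [],
    "publish_date": "",
    "summary": "Albert Einstein was born in Ulm ...",
    "meta_description": "",
    "meta_lang": "en",
    "meta_favicon": "",
    "meta_site_name": "Wikipedia",
    "canonical_link": "https://en.wikipedia.org/wiki/Albert_Einstein",
    "text": "Albert Einstein (14 March 1879 - 18 April 1955) ...",
}

# Fields declared with `sequence` dtypes arrive as Python lists;
# `tags` is typed 'null' and is expected to be None.
assert isinstance(example["images"], list)
assert isinstance(example["rank"], int)
```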
## 🚦 Data Splits
Only a **train** split is available, aggregated across 13 source files.
## 🛠 Dataset Creation
### Curation Rationale
Constructed to benchmark LLM performance on structured KG verification, with and without external evidence.
### Source Data
- **FactBench**: ~2,800 facts
- **YAGO**: ~1,400 facts
- **DBpedia**: ~9,300 facts
- Web-scraped evidence using Google SERP for contextual support.
### Processing Steps
- Facts retrieved and paired with search queries.
- Web pages were scraped, parsed, cleaned, and stored.
- Metadata normalized across all sources.
- Optional ranking and filtering applied to prioritize high-relevance evidence.
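The steps above can be sketched roughly as follows. The helper names, the query template, and the field lists are assumptions for illustration only, not the actual pipeline code:

```python
def fact_to_query(subject: str, predicate: str, obj: str) -> str:
    """Turn a KG triple into a plain-text search query
    (illustrative template, not the pipeline's real one)."""
    return f"{subject} {predicate.replace('_', ' ')} {obj}"

def normalize_metadata(raw: dict) -> dict:
    """Coerce scraped page metadata into a shared layout:
    missing string fields default to "", missing sequence fields to [],
    so every source ends up with the same schema."""
    string_fields = ["title", "summary", "text", "publish_date", "canonical_link"]
    list_fields = ["images", "movies", "authors", "meta_keywords"]
    out = {f: raw.get(f) or "" for f in string_fields}
    out.update({f: raw.get(f) or [] for f in list_fields})
    return out

# A fact becomes a search query; a scraped page becomes a normalized record.
query = fact_to_query("Albert Einstein", "born_in", "Ulm")
record = normalize_metadata({"title": "Einstein", "images": None})
```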
### Provenance
Compiled by the FactCheck‑AI team and anchored in public sources (knowledge graphs and web content).
## ⚠️ Personal & Sensitive Information
The FactCheck dataset does not contain personal or private data. All information is sourced from publicly accessible knowledge graphs (YAGO, DBpedia, FactBench) and web-extracted evidence. However, if you identify any content that you believe may be in conflict with privacy standards or requires further review, please contact us. We are committed to addressing such concerns promptly and making necessary adjustments.
## 🧑‍💻 Dataset Curators
FactCheck‑AI Team:
- **Farzad Shami** - University of Padua - [[email protected]](mailto:[email protected])
- **Stefano Marchesin** - University of Padua - [[email protected]](mailto:[email protected])
- **Gianmaria Silvello** - University of Padua - [[email protected]](mailto:[email protected])
## ✉️ Contact
For issues or questions, please raise a GitHub issue on this repo.
---
### ✅ SQL Queries for Interactive Analysis
Here are useful queries users can run in the Hugging Face SQL Console to analyze this dataset:
```sql
-- 1. Count of rows per source KG
SELECT dataset, COUNT(*) AS count
FROM train
GROUP BY dataset
ORDER BY count DESC;
```
```sql
-- 2. Daily entry counts based on publish_date
SELECT publish_date, COUNT(*) AS count
FROM train
GROUP BY publish_date
ORDER BY publish_date;
```
```sql
-- 3. Count of missing titles or summaries
SELECT
SUM(CASE WHEN title IS NULL OR title = '' THEN 1 ELSE 0 END) AS missing_title,
SUM(CASE WHEN summary IS NULL OR summary = '' THEN 1 ELSE 0 END) AS missing_summary
FROM train;
```
```sql
-- 4. Top 5 most frequent host domains
-- (split_part avoids the off-by-one when a URL has no path)
SELECT split_part(url, '/', 3) AS domain,
       COUNT(*) AS count
FROM train
GROUP BY domain
ORDER BY count DESC
LIMIT 5;
```
```sql
-- 5. Average number of keywords per example
SELECT AVG(len(keywords)) AS avg_keywords
FROM train;
```
These queries offer insights into data coverage, quality, and structure.
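Outside the SQL Console, the host-domain query (No. 4) can be reproduced locally with the Python standard library. The URL list below is a stand-in for the dataset's `url` column:

```python
from collections import Counter
from urllib.parse import urlparse

# Stand-in for the dataset's `url` column (illustrative values only).
urls = [
    "https://en.wikipedia.org/wiki/Albert_Einstein",
    "https://en.wikipedia.org/wiki/Ulm",
    "https://dbpedia.org/page/Albert_Einstein",
]

# Equivalent of query 4: count rows per host domain, most frequent first.
domains = Counter(urlparse(u).netloc for u in urls)
top5 = domains.most_common(5)
```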