---
pretty_name: EUR-Lex SUM
language:
- da
license: cc-by-sa-4.0
license_name: CC-BY-SA 4.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
domains:
- Legal
---
# Dataset Card for EUR-Lex SUM

The Danish subsection of EUR-Lex SUM, consisting of EU legislation paired with professionally written summaries.

EUR-Lex SUM is a dataset of summaries of EU legislation drawn from the EUR-Lex database. It consists of pairs of full legal texts and their corresponding professionally written summaries. The dataset is designed for training and evaluating automatic text summarization systems, particularly for legal documents, and it is valuable for natural language processing (NLP) research because it provides high-quality, human-written summaries of complex legal texts in a specialized domain.
## Dataset Description
- Number of samples: 1.00K
- Number of tokens (Llama 3): 31.37M
- Average document length in tokens (min, max): 31.31K (2.14K, 1.72M)
## Dataset Structure

An example from the dataset looks as follows.

```python
{
  "id": "eur-lex-sum-da_0",
  "text": "21.6.2019\nDA\nDen Europæiske Unions Tidende\nL 166/26\nKOMMISSIONENS DELEGEREDE FORORDNING (EU) 2019/98[...]",
  "source": "eur-lex-sum-da",
  "added": "2025-03-24 00:00:00",
  "created": "2024-01-01, 2025-01-01",
  "token_count": 148017
}
```
### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): A unique identifier for each document.
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see Source Data).
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range within which the document was originally created.
- `token_count` (`int`): The number of tokens in the sample, computed using the Llama 3 tokenizer.
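As a minimal sketch of how a record following this schema can be checked, the snippet below builds a hypothetical sample (field values taken from the example above) and validates its field names, types, and the comma-separated date range in `created`. The `validate` helper is illustrative, not part of the dataset's tooling.

```python
from datetime import datetime

# Hypothetical sample record following the schema described above.
sample = {
    "id": "eur-lex-sum-da_0",
    "text": "KOMMISSIONENS DELEGEREDE FORORDNING (EU) 2019/98 [...]",
    "source": "eur-lex-sum-da",
    "added": "2025-03-24 00:00:00",
    "created": "2024-01-01, 2025-01-01",
    "token_count": 148017,
}

def validate(record: dict) -> bool:
    """Check that a record carries the expected fields and types."""
    expected = {"id": str, "text": str, "source": str,
                "added": str, "created": str, "token_count": int}
    if set(record) != set(expected):
        return False
    if not all(isinstance(record[k], t) for k, t in expected.items()):
        return False
    # "created" is a comma-separated date range: "start, end".
    start, end = (s.strip() for s in record["created"].split(","))
    datetime.strptime(start, "%Y-%m-%d")
    datetime.strptime(end, "%Y-%m-%d")
    return True
```

Such a check is useful when ingesting the data into a processing pipeline, where a malformed record should fail fast rather than propagate.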
## Dataset Statistics
## Additional Information

### Citation Information
No citation is applicable for this work.