---
license: cc-by-sa-3.0
language: zh
tags:
  - information-retrieval
  - question-answering
  - chinese
  - wikipedia
  - open-domain-qa
pretty_name: DRCD for Document Retrieval (Simplified Chinese)
---

# DRCD for Document Retrieval (Simplified Chinese)

This dataset is a reformatted version of the [Delta Reading Comprehension Dataset (DRCD)](https://github.com/DRCKnowledgeTeam/DRCD), converted to Simplified Chinese and adapted for **document-level retrieval** tasks.

## Summary

The dataset transforms the original DRCD QA data into a **document retrieval** setting, where queries are used to retrieve **entire Wikipedia articles** rather than individual passages. Each document is the full text of a Wikipedia entry.

The format is compatible with the data structure used in the **[LongEmbed benchmark](https://github.com/dwzhu-pku/LongEmbed)** and can be plugged directly into LongEmbed evaluation or training pipelines.

## Key Features

- 🔤 **Language**: Simplified Chinese (converted from Traditional Chinese)
- 📚 **Domain**: General domain, from Wikipedia
- 📄 **Granularity**: **Full-document retrieval**, not passage-level
- 🔍 **Use Cases**: Long-document retrieval, reranking, open-domain QA pre-retrieval

## File Structure

### `corpus.jsonl`

Each line is a single Wikipedia article in Simplified Chinese.

```json
{"id": "doc_00001", "title": "心理", "text": "心理学是一门研究人类和动物的心理现象、意识和行为的科学。..."}
```

### `queries.jsonl`

Each line is a user query (from the DRCD question field).

```json
{"qid": "6513-4-1", "text": "威廉·冯特为何被誉为“实验心理学之父”?"}
```

### `qrels.jsonl`

Standard relevance judgments mapping queries to relevant documents.

```json
{"qid": "6513-4-1", "doc_id": "6513"}
```

This structure matches [LongEmbed benchmark](https://github.com/dwzhu-pku/LongEmbed)'s data format, making it suitable for evaluating long-document retrievers out of the box.
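For reference, here is a minimal sketch of loading the three files and joining them. The `load_jsonl` helper and the inline records are illustrative, modeled on the examples above:

```python
import json

def load_jsonl(path):
    """Read a .jsonl file: one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Illustrative records based on the examples above; in practice,
# load each file with e.g. load_jsonl("corpus.jsonl").
corpus = [{"id": "6513", "title": "心理学", "text": "心理学是一门研究人类和动物的心理现象、意识和行为的科学。"}]
queries = [{"qid": "6513-4-1", "text": "威廉·冯特为何被誉为“实验心理学之父”?"}]
qrels = [{"qid": "6513-4-1", "doc_id": "6513"}]

# Each qrels row maps a query id to the id of its relevant document.
relevant = {row["qid"]: row["doc_id"] for row in qrels}
docs_by_id = {doc["id"]: doc for doc in corpus}
gold_doc = docs_by_id[relevant["6513-4-1"]]
```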

## Example: Document Retrieval Using BM25

You can quickly try out document-level retrieval using BM25 with the following code snippet:

https://gist.github.com/ihainan/a1cf382c6042b90c8e55fe415f1b29e8

Usage:

```
$ python test_long_embed_bm25.py /home/ihainan/projects/Large/AI/DRCD-Simplified-Chinese/ir_dataset/train
...0
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.404 seconds.
Prefix dict has been built successfully.
...200
...400
...600
...800
...1000
...1200
...1400
...1600
...1800

Acc@1: 64.76%
nDCG@10: 76.61%
```
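For context, when each query has exactly one relevant document (as in `qrels.jsonl` above), the per-query metrics reduce to simple formulas. A sketch (the gist may implement this differently):

```python
import math

def acc_at_1(ranked_ids, relevant_id):
    """1 if the top-ranked document is the relevant one, else 0."""
    return 1.0 if ranked_ids and ranked_ids[0] == relevant_id else 0.0

def ndcg_at_10(ranked_ids, relevant_id):
    """nDCG@10 with a single relevant document: the ideal DCG is 1,
    so nDCG reduces to 1 / log2(rank + 1) for the hit's rank."""
    for rank, doc_id in enumerate(ranked_ids[:10], start=1):
        if doc_id == relevant_id:
            return 1.0 / math.log2(rank + 1)
    return 0.0
```

Averaging these per-query scores over all queries gives percentages like those reported above.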

## License

The dataset is distributed under the Creative Commons Attribution-ShareAlike 3.0 License (CC BY-SA 3.0). You must give appropriate credit and share any derivative works under the same terms.

## Citation

If you use this dataset, please also consider citing the original DRCD paper:

```bibtex
@inproceedings{shao2018drcd,
  title={DRCD: a Chinese machine reading comprehension dataset},
  author={Shao, Chih-Chieh and Chang, Chia-Hsuan and others},
  booktitle={Proceedings of the Workshop on Machine Reading for Question Answering},
  year={2018}
}
```

## Acknowledgments

- Original data provided by Delta Research Center.
- This project adapted the format and converted the text to Simplified Chinese for IR use cases.