Convert dataset to Parquet

#3
by SaylorTwift (HF Staff), opened
README.md CHANGED
@@ -35,6 +35,529 @@ language_bcp47:
  - es-ES
  - it-IT
  - fr-FR
+ configs:
+ - config_name: all
+   data_files:
+   - split: train
+     path: all/train-*
+   - split: validation
+     path: all/validation-*
+   - split: test
+     path: all/test-*
+ - config_name: ar
+   data_files:
+   - split: train
+     path: ar/train-*
+   - split: validation
+     path: ar/validation-*
+   - split: test
+     path: ar/test-*
+ - config_name: de
+   data_files:
+   - split: train
+     path: de/train-*
+   - split: validation
+     path: de/validation-*
+   - split: test
+     path: de/test-*
+ - config_name: en
+   data_files:
+   - split: train
+     path: en/train-*
+   - split: validation
+     path: en/validation-*
+   - split: test
+     path: en/test-*
+   default: true
+ - config_name: es
+   data_files:
+   - split: train
+     path: es/train-*
+   - split: validation
+     path: es/validation-*
+   - split: test
+     path: es/test-*
+ - config_name: fr
+   data_files:
+   - split: train
+     path: fr/train-*
+   - split: validation
+     path: fr/validation-*
+   - split: test
+     path: fr/test-*
+ - config_name: hi
+   data_files:
+   - split: train
+     path: hi/train-*
+   - split: validation
+     path: hi/validation-*
+   - split: test
+     path: hi/test-*
+ - config_name: it
+   data_files:
+   - split: train
+     path: it/train-*
+   - split: validation
+     path: it/validation-*
+   - split: test
+     path: it/test-*
+ - config_name: ja
+   data_files:
+   - split: train
+     path: ja/train-*
+   - split: validation
+     path: ja/validation-*
+   - split: test
+     path: ja/test-*
+ - config_name: pt
+   data_files:
+   - split: train
+     path: pt/train-*
+   - split: validation
+     path: pt/validation-*
+   - split: test
+     path: pt/test-*
+ dataset_info:
+ - config_name: all
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 32617518
+     num_examples: 126000
+   - name: validation
+     num_bytes: 4693442
+     num_examples: 18000
+   - name: test
+     num_bytes: 9305705
+     num_examples: 36000
+   download_size: 17797002
+   dataset_size: 46616665
+ - config_name: ar
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3880925
+     num_examples: 14000
+   - name: validation
+     num_bytes: 559451
+     num_examples: 2000
+   - name: test
+     num_bytes: 1105419
+     num_examples: 4000
+   download_size: 2073235
+   dataset_size: 5545795
+ - config_name: de
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3356063
+     num_examples: 14000
+   - name: validation
+     num_bytes: 481954
+     num_examples: 2000
+   - name: test
+     num_bytes: 956485
+     num_examples: 4000
+   download_size: 1897328
+   dataset_size: 4794502
+ - config_name: en
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3713651
+     num_examples: 14000
+   - name: validation
+     num_bytes: 533751
+     num_examples: 2000
+   - name: test
+     num_bytes: 1057790
+     num_examples: 4000
+   download_size: 2147987
+   dataset_size: 5305192
+ - config_name: es
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3370323
+     num_examples: 14000
+   - name: validation
+     num_bytes: 485203
+     num_examples: 2000
+   - name: test
+     num_bytes: 961828
+     num_examples: 4000
+   download_size: 1888205
+   dataset_size: 4817354
+ - config_name: fr
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3442616
+     num_examples: 14000
+   - name: validation
+     num_bytes: 494627
+     num_examples: 2000
+   - name: test
+     num_bytes: 981861
+     num_examples: 4000
+   download_size: 1928896
+   dataset_size: 4919104
+ - config_name: hi
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 4491931
+     num_examples: 14000
+   - name: validation
+     num_bytes: 647607
+     num_examples: 2000
+   - name: test
+     num_bytes: 1282203
+     num_examples: 4000
+   download_size: 2176682
+   dataset_size: 6421741
+ - config_name: it
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3325824
+     num_examples: 14000
+   - name: validation
+     num_bytes: 478224
+     num_examples: 2000
+   - name: test
+     num_bytes: 950678
+     num_examples: 4000
+   download_size: 1881299
+   dataset_size: 4754726
+ - config_name: ja
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3753173
+     num_examples: 14000
+   - name: validation
+     num_bytes: 540236
+     num_examples: 2000
+   - name: test
+     num_bytes: 1072950
+     num_examples: 4000
+   download_size: 2032694
+   dataset_size: 5366359
+ - config_name: pt
+   features:
+   - name: id
+     dtype: string
+   - name: lang
+     dtype: string
+   - name: question
+     dtype: string
+   - name: answerText
+     dtype: string
+   - name: category
+     dtype: string
+   - name: complexityType
+     dtype: string
+   - name: questionEntity
+     list:
+     - name: name
+       dtype: string
+     - name: entityType
+       dtype: string
+     - name: label
+       dtype: string
+     - name: mention
+       dtype: string
+     - name: span
+       list: int32
+   - name: answerEntity
+     list:
+     - name: name
+       dtype: string
+     - name: label
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 3283012
+     num_examples: 14000
+   - name: validation
+     num_bytes: 472389
+     num_examples: 2000
+   - name: test
+     num_bytes: 936491
+     num_examples: 4000
+   download_size: 1851000
+   dataset_size: 4691892
  ---
 
  # Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering
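The split sizes declared in the metadata above are consistent across configs: each language config carries 20,000 questions in a 70/10/20 train/validation/test split, and the `all` config concatenates the nine languages. A quick arithmetic check (plain Python, independent of any library):

```python
# Per-language example counts as declared in the dataset_info YAML above.
per_lang = {"train": 14_000, "validation": 2_000, "test": 4_000}

total = sum(per_lang.values())  # 20,000 questions per language
ratios = {split: n / total for split, n in per_lang.items()}

# The "all" config combines en, ar, de, ja, hi, pt, es, it, fr.
num_languages = 9
all_config = {split: n * num_languages for split, n in per_lang.items()}

print(ratios)      # {'train': 0.7, 'validation': 0.1, 'test': 0.2}
print(all_config)  # {'train': 126000, 'validation': 18000, 'test': 36000}
```

These derived totals match the `num_examples` values declared for the `all` config (126000 / 18000 / 36000).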
all/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37d26b2e9089054c148a8481c64e6be20e0ecd43142c6af2e2aaa23365bd09b0
+ size 3817647
all/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8c5fc22f06f4a475bdbb554da77245840b14804e72e48af6e55bb1ee901e19f
+ size 11946541
all/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9e498241d76d58cfc0531b9008454985f608c70bcaf5c80f29a68c7d42ba0d0
+ size 2032814
ar/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af0cd52030ab37f0ea4a83cbacc820deab52627741e51001829e447e8d4596a3
+ size 443851
ar/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46d99078bee5fc84e157b46586764537d4cc9ccedc57be05518064e1547a3d78
+ size 1393104
ar/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00def06e118bcd10c052dcc889ac9ea885f4e4147ea8ba730df14559a8590a07
+ size 236280
de/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0540d40c36447f9269fbe339e74398806d1bdf7c45f20cb32f9598f64c3b3e4f
+ size 409320
de/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12466f819eccbf1e138075073fa963a73241003e0910cb4dad73d9a8665904a4
+ size 1268432
de/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79f9bd1aab3946fc9be083f986575ea353173f4aba4f50e84ce182ca6fbd7814
+ size 219576
en/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de8507510b59ec58c0f688040f81a76c3a9439e1c8ca5afac4a2a1d91072518c
+ size 463054
en/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8b5ab54920c9d3a293b4932b00a96c8d5d9b01f69a3913e0f3ecdc4a275d490
+ size 1435541
en/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:660d33320a2fa576be3736f85034dc657b2b439234f202a73928eb878c29498c
+ size 249392
es/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2ca46713a8534ad5c091ebb3e598005f3a6108f27abbf5f5d99aede32ab38b1
+ size 407547
es/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d709d0acebc3146b7781cb05063be5cf2d32c719bbfa222c3dcba4f70ed68ee
+ size 1261657
es/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b6901e92283bed12faea8b79c98a79a3118cb69ad5f8029c84adec88dee6b67
+ size 219001
fr/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ac6888b1122e0524e21fb7013664407578775faecea024fad7fca8a6667c379
+ size 416638
fr/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0181ff57900af6a8fac97f7ce7324b10907e155df2e199c0a1ff1cc1f4350e9
+ size 1289419
fr/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df5bf2b3e98f8dca1da878efc371e661b79bc0e85d7d06194a3d3082d50b294f
+ size 222839
hi/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8e33cb907be561a3aedc4853f936cb3702b37ff921b91a9fd9034af76e202523
+ size 461337
hi/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca76c28f16a5deadb69dbe012aad5050a4c8040fae989eb7d14e1a17c1f4872d
+ size 1467812
hi/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ba002ad13b3e7654df6bbdf6dfef9b133cfeba4bc2b47bb6f30256f3bb21bf9
+ size 247533
it/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:725d63f79732be72e477e25f4f0e84317b36ecea08f418e7b3c3992b322ed529
+ size 406702
it/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:202d3a949a63b1e66e4f00754531fb732de38ba0afe742011dbc337f4514e890
+ size 1257116
it/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbe0d6f60e28906d3b46fafd3e547139e89052bbe053b3b7d420188267d084cd
+ size 217481
ja/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e40008e7a0ed3d329e0dfcb93a38bc7f04fbaa4fd1007f956ddba60cc88bfa36
+ size 436903
ja/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb05d75fa2cca6b80c0e7bd3be1d346287e9f75d2769f5fe0c261a1958c7d81f
+ size 1362198
ja/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fec9d57d70d5b8b8c74589b750d1fb75bb088f974ce63732527250b4c33e24f
+ size 233593
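Each ADDED `.parquet` entry above is a Git LFS pointer file rather than the Parquet bytes themselves: the repository stores a three-line stub, and the actual shard lives in LFS storage keyed by the `sha256` oid. A minimal illustrative parser (the pointer text is copied from the `ja/validation` entry above; the helper name is ours, not part of any library):

```python
# A Git LFS pointer is a short key/value text file; this sketch splits it
# into fields. Pointer text taken verbatim from ja/validation above.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1fec9d57d70d5b8b8c74589b750d1fb75bb088f974ce63732527250b4c33e24f
size 233593
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "algo": algo,          # hash algorithm, e.g. "sha256"
        "oid": digest,         # hex digest identifying the blob
        "size": int(fields["size"]),  # size of the real file in bytes
    }

pointer = parse_lfs_pointer(pointer_text)
print(pointer["size"])  # 233593
```

The `size` field is the size of the Parquet shard itself, which is why the pointer stubs stay tiny while the dataset's `download_size` in the metadata sums the real shard sizes.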
mintaka.py DELETED
@@ -1,177 +0,0 @@
- # coding=utf-8
-
- """Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering"""
-
- import json
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
- _DESCRIPTION = """\
- Mintaka is a complex, natural, and multilingual dataset designed for experimenting with end-to-end
- question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English,
- annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian,
- Japanese, Portuguese, and Spanish for a total of 180,000 samples.
- Mintaka includes 8 types of complex questions, including superlative, intersection, and multi-hop questions,
- which were naturally elicited from crowd workers.
- """
-
- _CITATION = """\
- @inproceedings{sen-etal-2022-mintaka,
-     title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
-     author = "Sen, Priyanka and Aji, Alham Fikri and Saffari, Amir",
-     booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
-     month = oct,
-     year = "2022",
-     address = "Gyeongju, Republic of Korea",
-     publisher = "International Committee on Computational Linguistics",
-     url = "https://aclanthology.org/2022.coling-1.138",
-     pages = "1604--1619"
- }
- """
-
- _LICENSE = """\
- Copyright Amazon.com Inc. or its affiliates.
- Attribution 4.0 International
- """
-
- _TRAIN_URL = "https://raw.githubusercontent.com/amazon-science/mintaka/main/data/mintaka_train.json"
- _DEV_URL = "https://raw.githubusercontent.com/amazon-science/mintaka/main/data/mintaka_dev.json"
- _TEST_URL = "https://raw.githubusercontent.com/amazon-science/mintaka/main/data/mintaka_test.json"
-
-
- _LANGUAGES = ['en', 'ar', 'de', 'ja', 'hi', 'pt', 'es', 'it', 'fr']
-
- _ALL = "all"
-
- class Mintaka(datasets.GeneratorBasedBuilder):
-     """Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering"""
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name = name,
-             version = datasets.Version("1.0.0"),
-             description = f"Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering for {name}",
-         ) for name in _LANGUAGES
-     ]
-
-     BUILDER_CONFIGS.append(datasets.BuilderConfig(
-         name = _ALL,
-         version = datasets.Version("1.0.0"),
-         description = f"Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
-     ))
-
-     DEFAULT_CONFIG_NAME = 'en'
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "lang": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "answerText": datasets.Value("string"),
-                     "category": datasets.Value("string"),
-                     "complexityType": datasets.Value("string"),
-                     "questionEntity": [{
-                         "name": datasets.Value("string"),
-                         "entityType": datasets.Value("string"),
-                         "label": datasets.Value("string"),
-                         "mention": datasets.Value("string"),
-                         "span": [datasets.Value("int32")],
-                     }],
-                     "answerEntity": [{
-                         "name": datasets.Value("string"),
-                         "label": datasets.Value("string"),
-                     }]
-                 },
-             ),
-             supervised_keys=None,
-             citation=_CITATION,
-             license=_LICENSE,
-         )
-
-     def _split_generators(self, dl_manager):
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "file": dl_manager.download_and_extract(_TRAIN_URL),
-                     "lang": self.config.name,
-                 }
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "file": dl_manager.download_and_extract(_DEV_URL),
-                     "lang": self.config.name,
-                 }
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "file": dl_manager.download_and_extract(_TEST_URL),
-                     "lang": self.config.name,
-                 }
-             ),
-         ]
-
-     def _generate_examples(self, file, lang):
-         if lang == _ALL:
-             langs = _LANGUAGES
-         else:
-             langs = [lang]
-
-         key_ = 0
-
-         logger.info("⏳ Generating examples from = %s", ", ".join(lang))
-
-         with open(file, encoding='utf-8') as json_file:
-             data = json.load(json_file)
-             for lang in langs:
-                 for sample in data:
-                     questionEntity = [
-                         {
-                             "name": str(qe["name"]),
-                             "entityType": qe["entityType"],
-                             "label": qe["label"] if "label" in qe else "",
-                             # span only applies for English question
-                             "mention": qe["mention"] if lang == "en" else None,
-                             "span": qe["span"] if lang == "en" else [],
-                         } for qe in sample["questionEntity"]
-                     ]
-
-                     answers = []
-                     if sample['answer']["answerType"] == "entity" and sample['answer']['answer'] is not None:
-                         answers = sample['answer']['answer']
-                     elif sample['answer']["answerType"] == "numerical" and "supportingEnt" in sample["answer"]:
-                         answers = sample['answer']['supportingEnt']
-
-                     # helper to get language for the corresponding language
-                     def get_label(labels, lang):
-                         if lang in labels:
-                             return labels[lang]
-                         if 'en' in labels:
-                             return labels['en']
-                         return None
-
-                     answerEntity = [
-                         {
-                             "name": str(ae["name"]),
-                             "label": get_label(ae["label"], lang),
-                         } for ae in answers
-                     ]
-
-                     yield key_, {
-                         "id": sample["id"],
-                         "lang": lang,
-                         "question": sample["question"] if lang == 'en' else sample['translations'][lang],
-                         "answerText": sample["answer"]["mention"],
-                         "category": sample["category"],
-                         "complexityType": sample["complexityType"],
-                         "questionEntity": questionEntity,
-                         "answerEntity": answerEntity,
-                     }
-
-                     key_ += 1
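The deleted loading script resolved answer-entity labels with an English fallback: use the label in the requested language if present, otherwise the English label, otherwise `None`. The same logic can be sketched standalone (the sample labels dict below is illustrative, not taken from the dataset):

```python
# Label resolution as performed by the deleted script's get_label helper:
# requested language first, then English, then None.
def get_label(labels: dict, lang: str):
    """Return labels[lang], falling back to English, then to None."""
    if lang in labels:
        return labels[lang]
    if 'en' in labels:
        return labels['en']
    return None

labels = {"en": "Paris", "fr": "Paris", "de": "Paris"}
print(get_label(labels, "fr"))  # Paris
print(get_label(labels, "ja"))  # Paris (English fallback)
print(get_label({}, "ja"))      # None
```

With the Parquet conversion this resolution happens once, at conversion time, so the shards already contain the resolved `label` strings.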
pt/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d23837ade3cd345a1d41f1cb604829c2f88c3ac47a51880eb2b5c92ad0133fd
+ size 399482
pt/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58f0733590a97c1f5077f10122752afe6dd349180616c10e2dcb20a33bc267b5
+ size 1236734
pt/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db3b749c095fb345c03d9cb4225e3bd6838ed9b52b63f58efe96ebe8ebe2d486
+ size 214784
test_mintaka.py DELETED
@@ -1,16 +0,0 @@
- from datasets import load_dataset
-
- source = "AmazonScience/mintaka"
-
- #dataset = load_dataset(source, "all", download_mode="force_redownload")
- dataset = load_dataset(source, "all")
-
- print(dataset)
- print(dataset["train"][0])
- print(dataset["train"][0:10]['question'])
-
-
- dataset = load_dataset(source, "en")
- dataset = load_dataset(source, "ar")
-
-