Tom Aarsen committed
Commit 38ec0c3 · 1 Parent(s): a20094d

Wrap dataset details in details/summary

Files changed (1): README.md (+78 −26)
README.md CHANGED
```diff
@@ -1162,7 +1162,7 @@ You can finetune this model on your own dataset.
 
 ### Training Datasets
 
-#### gooaq
+<details><summary>gooaq</summary>
 
 * Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
 * Size: 3,012,496 training samples
@@ -1202,7 +1202,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### msmarco
+</details>
+
+<details><summary>msmarco</summary>
 
 * Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2)
 * Size: 502,939 training samples
@@ -1242,7 +1244,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### squad
+</details>
+
+<details><summary>squad</summary>
 
 * Dataset: [squad](https://huggingface.co/datasets/sentence-transformers/squad) at [d84c8c2](https://huggingface.co/datasets/sentence-transformers/squad/tree/d84c8c2ef64693264c890bb242d2e73fc0a46c40)
 * Size: 87,599 training samples
@@ -1282,7 +1286,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### s2orc
+</details>
+
+<details><summary>s2orc</summary>
 
 * Dataset: [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc) at [8cfc394](https://huggingface.co/datasets/sentence-transformers/s2orc/tree/8cfc394e83b2ebfcf38f90b508aea383df742439)
 * Size: 90,000 training samples
@@ -1322,7 +1328,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### allnli
+</details>
+
+<details><summary>allnli</summary>
 
 * Dataset: [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
 * Size: 557,850 training samples
@@ -1362,7 +1370,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### paq
+</details>
+
+<details><summary>paq</summary>
 
 * Dataset: [paq](https://huggingface.co/datasets/sentence-transformers/paq) at [74601d8](https://huggingface.co/datasets/sentence-transformers/paq/tree/74601d8d731019bc9c627ffc4271cdd640e1e748)
 * Size: 64,371,441 training samples
@@ -1402,7 +1412,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### trivia_qa
+</details>
+
+<details><summary>trivia_qa</summary>
 
 * Dataset: [trivia_qa](https://huggingface.co/datasets/sentence-transformers/trivia-qa) at [a7c36e3](https://huggingface.co/datasets/sentence-transformers/trivia-qa/tree/a7c36e3c8c8c01526bc094d79bf80d4c848b0ad0)
 * Size: 73,346 training samples
@@ -1442,7 +1454,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### msmarco_10m
+</details>
+
+<details><summary>msmarco_10m</summary>
 
 * Dataset: [msmarco_10m](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets) at [8c5139a](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets/tree/8c5139a245a5997992605792faa49ec12a6eb5f2)
 * Size: 10,000,000 training samples
@@ -1482,7 +1496,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### swim_ir
+</details>
+
+<details><summary>swim_ir</summary>
 
 * Dataset: [swim_ir](https://huggingface.co/datasets/nthakur/swim-ir-monolingual) at [834c20f](https://huggingface.co/datasets/nthakur/swim-ir-monolingual/tree/834c20f0ceef6a68e029fb4447d17d20bb0288c3)
 * Size: 501,538 training samples
@@ -1522,7 +1538,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### pubmedqa
+</details>
+
+<details><summary>pubmedqa</summary>
 
 * Dataset: [pubmedqa](https://huggingface.co/datasets/sentence-transformers/pubmedqa) at [a1ef0b5](https://huggingface.co/datasets/sentence-transformers/pubmedqa/tree/a1ef0b513b16ed490e807ac11da40e436d3a54c3)
 * Size: 1,660 training samples
@@ -1562,7 +1580,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### miracl
+</details>
+
+<details><summary>miracl</summary>
 
 * Dataset: [miracl](https://huggingface.co/datasets/sentence-transformers/miracl) at [07e2b62](https://huggingface.co/datasets/sentence-transformers/miracl/tree/07e2b629250bf4185f4c87f640fac15949b8aa73)
 * Size: 789,900 training samples
@@ -1602,7 +1622,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### mldr
+</details>
+
+<details><summary>mldr</summary>
 
 * Dataset: [mldr](https://huggingface.co/datasets/sentence-transformers/mldr) at [40ad767](https://huggingface.co/datasets/sentence-transformers/mldr/tree/40ad7672817ebee49e00dd25aed00e1c401881d6)
 * Size: 200,000 training samples
@@ -1642,7 +1664,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### mr_tydi
+</details>
+
+<details><summary>mr_tydi</summary>
 
 * Dataset: [mr_tydi](https://huggingface.co/datasets/sentence-transformers/mr-tydi) at [abbdf55](https://huggingface.co/datasets/sentence-transformers/mr-tydi/tree/abbdf55c630352da943f779610c3ce6268118351)
 * Size: 354,700 training samples
@@ -1682,9 +1706,11 @@ You can finetune this model on your own dataset.
   }
   `` `
 
+</details>
+
 ### Evaluation Datasets
 
-#### gooaq
+<details><summary>gooaq</summary>
 
 * Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
 * Size: 3,012,496 evaluation samples
@@ -1724,7 +1750,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### msmarco
+</details>
+
+<details><summary>msmarco</summary>
 
 * Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2)
 * Size: 502,939 evaluation samples
@@ -1764,7 +1792,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### squad
+</details>
+
+<details><summary>squad</summary>
 
 * Dataset: [squad](https://huggingface.co/datasets/sentence-transformers/squad) at [d84c8c2](https://huggingface.co/datasets/sentence-transformers/squad/tree/d84c8c2ef64693264c890bb242d2e73fc0a46c40)
 * Size: 87,599 evaluation samples
@@ -1804,7 +1834,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### s2orc
+</details>
+
+<details><summary>s2orc</summary>
 
 * Dataset: [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc) at [8cfc394](https://huggingface.co/datasets/sentence-transformers/s2orc/tree/8cfc394e83b2ebfcf38f90b508aea383df742439)
 * Size: 10,000 evaluation samples
@@ -1844,7 +1876,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### allnli
+</details>
+
+<details><summary>allnli</summary>
 
 * Dataset: [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
 * Size: 6,584 evaluation samples
@@ -1884,7 +1918,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### paq
+</details>
+
+<details><summary>paq</summary>
 
 * Dataset: [paq](https://huggingface.co/datasets/sentence-transformers/paq) at [74601d8](https://huggingface.co/datasets/sentence-transformers/paq/tree/74601d8d731019bc9c627ffc4271cdd640e1e748)
 * Size: 64,371,441 evaluation samples
@@ -1924,7 +1960,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### trivia_qa
+</details>
+
+<details><summary>trivia_qa</summary>
 
 * Dataset: [trivia_qa](https://huggingface.co/datasets/sentence-transformers/trivia-qa) at [a7c36e3](https://huggingface.co/datasets/sentence-transformers/trivia-qa/tree/a7c36e3c8c8c01526bc094d79bf80d4c848b0ad0)
 * Size: 73,346 evaluation samples
@@ -1964,7 +2002,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### msmarco_10m
+</details>
+
+<details><summary>msmarco_10m</summary>
 
 * Dataset: [msmarco_10m](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets) at [8c5139a](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets/tree/8c5139a245a5997992605792faa49ec12a6eb5f2)
 * Size: 10,000,000 evaluation samples
@@ -2004,7 +2044,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### swim_ir
+</details>
+
+<details><summary>swim_ir</summary>
 
 * Dataset: [swim_ir](https://huggingface.co/datasets/nthakur/swim-ir-monolingual) at [834c20f](https://huggingface.co/datasets/nthakur/swim-ir-monolingual/tree/834c20f0ceef6a68e029fb4447d17d20bb0288c3)
 * Size: 501,538 evaluation samples
@@ -2044,7 +2086,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### pubmedqa
+</details>
+
+<details><summary>pubmedqa</summary>
 
 * Dataset: [pubmedqa](https://huggingface.co/datasets/sentence-transformers/pubmedqa) at [a1ef0b5](https://huggingface.co/datasets/sentence-transformers/pubmedqa/tree/a1ef0b513b16ed490e807ac11da40e436d3a54c3)
 * Size: 1,660 evaluation samples
@@ -2084,7 +2128,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### miracl
+</details>
+
+<details><summary>miracl</summary>
 
 * Dataset: [miracl](https://huggingface.co/datasets/sentence-transformers/miracl) at [07e2b62](https://huggingface.co/datasets/sentence-transformers/miracl/tree/07e2b629250bf4185f4c87f640fac15949b8aa73)
 * Size: 789,900 evaluation samples
@@ -2124,7 +2170,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### mldr
+</details>
+
+<details><summary>mldr</summary>
 
 * Dataset: [mldr](https://huggingface.co/datasets/sentence-transformers/mldr) at [40ad767](https://huggingface.co/datasets/sentence-transformers/mldr/tree/40ad7672817ebee49e00dd25aed00e1c401881d6)
 * Size: 200,000 evaluation samples
@@ -2164,7 +2212,9 @@ You can finetune this model on your own dataset.
   }
   `` `
 
-#### mr_tydi
+</details>
+
+<details><summary>mr_tydi</summary>
 
 * Dataset: [mr_tydi](https://huggingface.co/datasets/sentence-transformers/mr-tydi) at [abbdf55](https://huggingface.co/datasets/sentence-transformers/mr-tydi/tree/abbdf55c630352da943f779610c3ce6268118351)
 * Size: 354,700 evaluation samples
@@ -2204,6 +2254,8 @@ You can finetune this model on your own dataset.
   }
   `` `
 
+</details>
+
 ### Training Hyperparameters
 #### Non-Default Hyperparameters
 
```
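
Applied to the model card, each dataset subsection becomes a collapsible block. As a sketch of the resulting Markdown, the gooaq training entry now reads:

```markdown
<details><summary>gooaq</summary>

* Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 3,012,496 training samples

</details>
```

The blank lines after `<summary>…</summary>` and before `</details>` (the `+` blank lines in the diff) matter: in CommonMark-style renderers such as the Hub's, they are generally what allows the Markdown list inside the raw HTML block to be rendered rather than shown as literal text.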