url stringlengths 58–61 | repository_url stringclasses 1 value | labels_url stringlengths 72–75 | comments_url stringlengths 67–70 | events_url stringlengths 65–68 | html_url stringlengths 48–51 | id int64 600M–2.19B | node_id stringlengths 18–24 | number int64 2–6.73k | title stringlengths 1–290 | user dict | labels listlengths 0–4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0–4 | milestone dict | comments listlengths 0–30 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 3 values | active_lock_reason null | draft null | pull_request null | body stringlengths 0–228k ⌀ | reactions dict | timeline_url stringlengths 67–70 | performed_via_github_app null | state_reason stringclasses 3 values |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3608
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3608/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3608/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3608/events
|
https://github.com/huggingface/datasets/issues/3608
| 1,109,310,981 |
I_kwDODunzps5CHr4F
| 3,608 |
Add support for continuous metrics (RMSE, MAE)
|
{
"login": "ck37",
"id": 50770,
"node_id": "MDQ6VXNlcjUwNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ck37",
"html_url": "https://github.com/ck37",
"followers_url": "https://api.github.com/users/ck37/followers",
"following_url": "https://api.github.com/users/ck37/following{/other_user}",
"gists_url": "https://api.github.com/users/ck37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ck37/subscriptions",
"organizations_url": "https://api.github.com/users/ck37/orgs",
"repos_url": "https://api.github.com/users/ck37/repos",
"events_url": "https://api.github.com/users/ck37/events{/privacy}",
"received_events_url": "https://api.github.com/users/ck37/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false | null |
[] | null |
[
"Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html) would be helpful for the `MAE` metric.",
"You can use a local metric script just by providing its path instead of the usual shortcut name ",
"#self-assign I have starting working on this issue to enhance the metric API."
] | 2022-01-20T13:35:36 | 2022-03-09T17:18:20 | 2022-03-09T17:18:20 |
NONE
| null | null | null |
**Is your feature request related to a problem? Please describe.**
I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome, our NLP models perform regression rather than classification, so binary metrics are not relevant. The only continuous metrics available at https://huggingface.co/metrics are Pearson and Spearman correlation, which don't ensure that the prediction is on the same scale as the outcome.
**Describe the solution you'd like**
I would like to be able to tag our models on the Hub with the following metrics:
- RMSE
- MAE
**Describe alternatives you've considered**
I don't know if there are any alternatives.
**Additional context**
Our preprint is available here: https://arxiv.org/abs/2009.10277 . We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview . I have our first model uploaded to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large
Thanks,
Chris
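A minimal sketch, assuming scikit-learn (the library pointed to in the comments on this issue), of how RMSE and MAE would be computed for such a regression target; the reference and prediction values below are made up purely for illustration:
```python
# Hypothetical illustration: RMSE and MAE for continuous (regression) predictions.
from sklearn.metrics import mean_absolute_error, mean_squared_error

references = [0.12, -0.40, 1.25, 0.70]   # made-up continuous hate-speech scores
predictions = [0.05, -0.30, 1.10, 0.90]  # made-up model outputs on the same scale

mae = mean_absolute_error(references, predictions)
rmse = mean_squared_error(references, predictions, squared=False)  # squared=False returns RMSE
print(f"MAE:  {mae:.3f}")
print(f"RMSE: {rmse:.3f}")
```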
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3608/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3606
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3606/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3606/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3606/events
|
https://github.com/huggingface/datasets/issues/3606
| 1,108,918,701 |
I_kwDODunzps5CGMGt
| 3,606 |
audio column not saved correctly after resampling
|
{
"login": "laphang",
"id": 24724502,
"node_id": "MDQ6VXNlcjI0NzI0NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laphang",
"html_url": "https://github.com/laphang",
"followers_url": "https://api.github.com/users/laphang/followers",
"following_url": "https://api.github.com/users/laphang/following{/other_user}",
"gists_url": "https://api.github.com/users/laphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laphang/subscriptions",
"organizations_url": "https://api.github.com/users/laphang/orgs",
"repos_url": "https://api.github.com/users/laphang/repos",
"events_url": "https://api.github.com/users/laphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/laphang/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now",
"Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!",
"Also, just an FYI, data that I had saved (with save_to_disk) previously from common voice using datasets==1.17.0 now give the error below when loading (with load_from disk) using datasets==1.18.0. \r\n\r\nHowever, when starting fresh using load_dataset, then doing the resampling, the save/load_from disk worked fine. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<timed exec> in <module>\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1747 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1748 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n-> 1749 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1750 else:\r\n 1751 raise FileNotFoundError(\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in load_from_disk(dataset_dict_path, fs, keep_in_memory)\r\n 769 else Path(dest_dataset_dict_path, k).as_posix()\r\n 770 )\r\n--> 771 dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n 772 return dataset_dict\r\n 773 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1118 info=dataset_info,\r\n 1119 split=split,\r\n-> 1120 fingerprint=state[\"_fingerprint\"],\r\n 1121 )\r\n 1122 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 655 if self.info.features.type != inferred_features.type:\r\n 656 raise ValueError(\r\n--> 657 f\"External features info don't match the dataset:\\nGot\\n{self.info.features}\\nwith type\\n{self.info.features.type}\\n\\nbut expected something like\\n{inferred_features}\\nwith type\\n{inferred_features.type}\"\r\n 658 )\r\n 659 \r\n\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=48000, mono=True, id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<bytes: binary, path: string>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64>\r\n\r\nbut expected something like\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<path: string, bytes: binary>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, 
up_votes: int64> \r\n```"
] | 2022-01-20T06:37:10 | 2022-01-23T01:41:01 | 2022-01-23T01:24:14 |
NONE
| null | null | null |
## Describe the bug
After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type.
## Steps to reproduce the bug
- load a subset of the Common Voice dataset (48 kHz)
- resample the audio column to 16 kHz
- save with save_to_disk()
- load with load_from_disk()
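A rough sketch of these steps in code, assuming the usual `cast_column(..., Audio(sampling_rate=...))` resampling pattern; the split size and the save path are placeholders:
```python
# Sketch of the reproduction steps above (split size and path are placeholders).
from datasets import Audio, load_dataset, load_from_disk

ds = load_dataset("common_voice", "en", split="train[:100]")   # audio stored at 48 kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))      # resample to 16 kHz
ds.save_to_disk("./cv_resampled")

reloaded = load_from_disk("./cv_resampled")
print(reloaded.features["audio"])  # expected: Audio(sampling_rate=16000, ...)
```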
## Expected results
I expected that after saving the data and then loading it back in, the audio column would still have the correct `datasets.Audio` type (i.e. the same as before saving it):
{'accent': Value(dtype='string', id=None),
'age': Value(dtype='string', id=None),
'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None),
'client_id': Value(dtype='string', id=None),
'down_votes': Value(dtype='int64', id=None),
'gender': Value(dtype='string', id=None),
'locale': Value(dtype='string', id=None),
'path': Value(dtype='string', id=None),
'segment': Value(dtype='string', id=None),
'sentence': Value(dtype='string', id=None),
'up_votes': Value(dtype='int64', id=None)}
## Actual results
Audio column does not have the right type
{'accent': Value(dtype='string', id=None),
'age': Value(dtype='string', id=None),
'audio': {'bytes': Value(dtype='binary', id=None),
'path': Value(dtype='string', id=None)},
'client_id': Value(dtype='string', id=None),
'down_votes': Value(dtype='int64', id=None),
'gender': Value(dtype='string', id=None),
'locale': Value(dtype='string', id=None),
'path': Value(dtype='string', id=None),
'segment': Value(dtype='string', id=None),
'sentence': Value(dtype='string', id=None),
'up_votes': Value(dtype='int64', id=None)}
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: linux
- Python version:
- PyArrow version:
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3606/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3604
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3604/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3604/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3604/events
|
https://github.com/huggingface/datasets/issues/3604
| 1,108,477,316 |
I_kwDODunzps5CEgWE
| 3,604 |
Dataset Viewer not showing Previews for Private Datasets
|
{
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false |
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Sure, it's on the roadmap.",
"Closing in favor of https://github.com/huggingface/datasets-server/issues/39."
] | 2022-01-19T19:29:26 | 2022-09-26T08:04:43 | 2022-09-26T08:04:43 |
MEMBER
| null | null | null |
## Dataset viewer issue for 'abidlabs/test-audio-13'
It seems that the dataset viewer does not show previews for `private` datasets, even for the user whose private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets.

**Link:**
[1] https://huggingface.co/datasets/abidlabs/test-audio-13
**Am I the one who added this dataset?**
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3604/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3599
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3599/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3599/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3599/events
|
https://github.com/huggingface/datasets/issues/3599
| 1,108,111,607 |
I_kwDODunzps5CDHD3
| 3,599 |
The `add_column()` method does not work if used on dataset sliced with `select()`
|
{
"login": "ThGouzias",
"id": 59422506,
"node_id": "MDQ6VXNlcjU5NDIyNTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/59422506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThGouzias",
"html_url": "https://github.com/ThGouzias",
"followers_url": "https://api.github.com/users/ThGouzias/followers",
"following_url": "https://api.github.com/users/ThGouzias/following{/other_user}",
"gists_url": "https://api.github.com/users/ThGouzias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThGouzias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThGouzias/subscriptions",
"organizations_url": "https://api.github.com/users/ThGouzias/orgs",
"repos_url": "https://api.github.com/users/ThGouzias/repos",
"events_url": "https://api.github.com/users/ThGouzias/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThGouzias/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"similar #3611 "
] | 2022-01-19T13:36:50 | 2022-01-28T15:35:57 | 2022-01-28T15:35:57 |
NONE
| null | null | null |
Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)):
I have a dataset with 2000 entries
> dataset = Dataset.from_dict({'colA': list(range(2000))})
and from which I want to extract the first one thousand rows, create a new dataset with these and also add a new column to it:
> dataset2 = dataset.select(list(range(1000)))
> final_dataset = dataset2.add_column('colB', list(range(1000)))
This gives an error
>ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000
So it looks like even though it is a dataset with 1000 rows, it "remembers" the shape of the one it was sliced from.
## Actual results
```
ArrowInvalid Traceback (most recent call last)
<ipython-input-138-e806860f3ce3> in <module>
----> 1 final_dataset = dataset2.add_column('colB', list(range(1000)))
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/.local/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint)
3343 column_table = InMemoryTable.from_pydict({name: column})
3344 # Concatenate tables horizontally
-> 3345 table = ConcatenationTable.from_tables([self._data, column_table], axis=1)
3346 # Update features
3347 info = self.info.copy()
~/.local/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis)
729 table_blocks = to_blocks(table)
730 blocks = _extend_blocks(blocks, table_blocks, axis=axis)
--> 731 return cls.from_blocks(blocks)
732
733 @property
~/.local/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks)
668 @classmethod
669 def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable":
--> 670 blocks = cls._consolidate_blocks(blocks)
671 if isinstance(blocks, TableBlock):
672 table = blocks
~/.local/lib/python3.8/site-packages/datasets/table.py in _consolidate_blocks(cls, blocks)
664 return cls._merge_blocks(blocks, axis=0)
665 else:
--> 666 return cls._merge_blocks(blocks)
667
668 @classmethod
~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis)
650 merged_blocks += list(block_group)
651 else: # both
--> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
653 if all(len(row_block) == 1 for row_block in merged_blocks):
654 merged_blocks = cls._merge_blocks(
~/.local/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
650 merged_blocks += list(block_group)
651 else: # both
--> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
653 if all(len(row_block) == 1 for row_block in merged_blocks):
654 merged_blocks = cls._merge_blocks(
~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis)
647 for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)):
648 if is_in_memory:
--> 649 block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))]
650 merged_blocks += list(block_group)
651 else: # both
~/.local/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis)
626 else:
627 for name, col in zip(table.column_names, table.columns):
--> 628 pa_table = pa_table.append_column(name, col)
629 return pa_table
630 else:
~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column()
~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column()
~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000
```
A solution provided by @mariosasko is to use `dataset2.flatten_indices()` after the `select()` and before attempting to add the new column (`select()` only creates an indices mapping over the original table, and `flatten_indices()` materializes it into a new 1000-row table, so the lengths match):
> dataset = Dataset.from_dict({'colA': list(range(2000))})
> dataset2 = dataset.select(list(range(1000)))
> dataset2 = dataset2.flatten_indices()
> final_dataset = dataset2.add_column('colB', list(range(1000)))
which works.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.2 (note: also checked with version 1.17.0, still the same error)
- Platform: Ubuntu 20.04.3
- Python version: 3.8.10
- PyArrow version: 6.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3599/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3599/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3598
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3598/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3598/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3598/events
|
https://github.com/huggingface/datasets/issues/3598
| 1,108,107,199 |
I_kwDODunzps5CDF-_
| 3,598 |
Readme info not being parsed to show on Dataset card page
|
{
"login": "davidcanovas",
"id": 79796807,
"node_id": "MDQ6VXNlcjc5Nzk2ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/79796807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidcanovas",
"html_url": "https://github.com/davidcanovas",
"followers_url": "https://api.github.com/users/davidcanovas/followers",
"following_url": "https://api.github.com/users/davidcanovas/following{/other_user}",
"gists_url": "https://api.github.com/users/davidcanovas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidcanovas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidcanovas/subscriptions",
"organizations_url": "https://api.github.com/users/davidcanovas/orgs",
"repos_url": "https://api.github.com/users/davidcanovas/repos",
"events_url": "https://api.github.com/users/davidcanovas/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidcanovas/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"i suspect a markdown parsing error, @severo do you want to take a quick look at it when you have some time?",
"# Problem\r\nThe issue seems to coming from the front matter of the README\r\n```---\r\nannotations_creators:\r\n- no-annotation\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- 'ca'\r\n- 'de'\r\nlicenses:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- translation\r\npretty_name: Catalan-German aligned corpora to train NMT systems.\r\nsize_categories:\r\n- \"1M<n<10M\" \r\nsource_datasets:\r\n- extended|tilde_model\r\ntask_categories:\r\n- machine-translation\r\ntask_ids:\r\n- machine-translation\r\n---\r\n``` \r\n# Solution\r\nThe fix is to correctly style the README as explained [here](https://huggingface.co/docs/datasets/v1.12.0/dataset_card.html). I have also correctly parsed the font matter as shown below:\r\n```\r\n---\r\nannotations_creators: []\r\nlanguage_creators: [machine-generated]\r\nlanguages: ['ca', 'de']\r\nlicenses: []\r\nmultilinguality:\r\n- multilingual\r\npretty_name: 'Catalan-German aligned corpora to train NMT systems.'\r\nsize_categories: \r\n- 1M<n<10M\r\nsource_datasets: ['extended|tilde_model']\r\ntask_categories: ['machine-translation']\r\ntask_ids: ['machine-translation']\r\n---\r\n```\r\nYou can find the README for a sample dataset [here](https://huggingface.co/datasets/ritwikraha/Test)",
"Thank you. It finally worked implementing your changes and leaving a white line between title and text in the description.",
"Thanks, if this solves your issue, can you please close it?"
] | 2022-01-19T13:32:29 | 2022-01-21T10:20:01 | 2022-01-21T10:20:01 |
NONE
| null | null | null |
## Describe the bug
The info contained in the README.md file is not being shown on the dataset main page. Basic info and the table of contents are properly formatted in the README.
## Steps to reproduce the bug
# Sample code to reproduce the bug
The README file is this one: https://huggingface.co/datasets/softcatala/Tilde-MODEL-Catalan/blob/main/README.md
## Expected results
README info should appear in the Dataset card page.
## Actual results
Nothing is shown. However, labels are parsed and shown successfully.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3598/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3597
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3597/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3597/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3597/events
|
https://github.com/huggingface/datasets/issues/3597
| 1,108,092,864 |
I_kwDODunzps5CDCfA
| 3,597 |
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
|
{
"login": "amitkml",
"id": 49492030,
"node_id": "MDQ6VXNlcjQ5NDkyMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/49492030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amitkml",
"html_url": "https://github.com/amitkml",
"followers_url": "https://api.github.com/users/amitkml/followers",
"following_url": "https://api.github.com/users/amitkml/following{/other_user}",
"gists_url": "https://api.github.com/users/amitkml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amitkml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amitkml/subscriptions",
"organizations_url": "https://api.github.com/users/amitkml/orgs",
"repos_url": "https://api.github.com/users/amitkml/repos",
"events_url": "https://api.github.com/users/amitkml/events{/privacy}",
"received_events_url": "https://api.github.com/users/amitkml/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```",
"thanks @mariosasko i had the same mistake and your solution is what was needed"
] | 2022-01-19T13:19:28 | 2022-08-05T12:35:51 | 2022-02-14T08:46:34 |
NONE
| null | null | null |
## Bug
Installing `datasets` from source with the `streaming` extra gives the following error.
## Steps to reproduce the bug
```python
! git clone https://github.com/huggingface/datasets.git
! cd datasets
! pip install -e ".[streaming]"
```
## Actual results
Cloning into 'datasets'...
remote: Enumerating objects: 50816, done.
remote: Counting objects: 100% (2356/2356), done.
remote: Compressing objects: 100% (1606/1606), done.
remote: Total 50816 (delta 834), reused 1741 (delta 525), pack-reused 48460
Receiving objects: 100% (50816/50816), 72.47 MiB | 27.68 MiB/s, done.
Resolving deltas: 100% (22541/22541), done.
Checking out files: 100% (6722/6722), done.
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3597/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3596/events
|
https://github.com/huggingface/datasets/issues/3596
| 1,107,345,338 |
I_kwDODunzps5CAL-6
| 3,596 |
Loss of cast `Image` feature on certain dataset method
|
{
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.",
"> Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.\r\n\r\nThanks, I'll keep an eye out for #3575 getting merged. I managed to use `push_to_hub` sucesfully with images when they were loaded via `map` - something like `ds.map(lambda example: {\"img\": load_image_function(example['fname']})`, this only pushed the images to the hub if the `load_image_function` return a PIL Image without the filename attribute though. I guess this might often be the prefered behaviour though. \r\n",
"Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?",
"> Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?\r\n\r\nThanks for checking. There is no longer an error when calling `select` but it appears the cast value isn't preserved. Before `select`\r\n\r\n```python\r\ndataset.features\r\n{'url': Image(id=None)}\r\n```\r\n\r\nafter select:\r\n```\r\n{'url': Value(dtype='string', id=None)}\r\n```\r\n\r\nUpdated Colab example [here](https://colab.research.google.com/gist/davanstrien/4e88f55a3675c279b5c2f64299ae5c6f/potential_casting_bug.ipynb) ",
"Hmmm, if I re-run your google colab I'm getting the right type at the end:\r\n```\r\nsample.features\r\n# {'url': Image(id=None)}\r\n```",
"Appolgies - I've just run again and also got this output. I have also sucesfully used the `push_to_hub` method. I think this is fixed now so will close this issue. ",
"Fixed in #3575 "
] | 2022-01-18T20:44:01 | 2022-01-21T18:07:28 | 2022-01-21T18:07:28 |
MEMBER
| null | null | null |
## Describe the bug
When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained URLs pointing to images which had been cast to an `Image`. This also happens when using `select` on a dataset which has had a column cast to an `Image`.
I suspect this might be related to https://github.com/huggingface/datasets/pull/3556 but I don't believe that pull request fixes this issue.
## Steps to reproduce the bug
An example of casting a url to an image followed by using the `select` method:
```python
from datasets import Dataset
from datasets import features
url = "https://cf.ltkcdn.net/cats/images/std-lg/246866-1200x816-grey-white-kitten.webp"
data_dict = {"url": [url]*2}
dataset = Dataset.from_dict(data_dict)
dataset = dataset.cast_column('url',features.Image())
sample = dataset.select([1])
```
[example notebook](https://gist.github.com/davanstrien/06e53f4383c28ae77ce1b30d0eaf0d70#file-potential_casting_bug-ipynb)
## Expected results
The cast value is maintained when further methods are applied to the dataset.
## Actual results
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-47f393bc2d0d> in <module>()
----> 1 sample = dataset.select([1])
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
487 }
488 # apply actual function
--> 489 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
490 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
491 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
409 # Call actual function
410
--> 411 out = func(self, *args, **kwargs)
412
413 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
2772 )
2773 else:
-> 2774 return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
2775
2776 @transmit_format
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _new_dataset_with_indices(self, indices_cache_file_name, indices_buffer, fingerprint)
2688 split=self.split,
2689 indices_table=indices_table,
-> 2690 fingerprint=fingerprint,
2691 )
2692
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
664 if self.info.features.type != inferred_features.type:
665 raise ValueError(
--> 666 f"External features info don't match the dataset:\nGot\n{self.info.features}\nwith type\n{self.info.features.type}\n\nbut expected something like\n{inferred_features}\nwith type\n{inferred_features.type}"
667 )
668
ValueError: External features info don't match the dataset:
Got
{'url': Image(id=None)}
with type
struct<url: extension<arrow.py_extension_type<ImageExtensionType>>>
but expected something like
{'url': Value(dtype='string', id=None)}
with type
struct<url: string>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3596/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3587
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3587/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3587/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3587/events
|
https://github.com/huggingface/datasets/issues/3587
| 1,106,719,182 |
I_kwDODunzps5B9zHO
| 3,587 |
No module named 'fsspec.archive'
|
{
"login": "shuuchen",
"id": 13246825,
"node_id": "MDQ6VXNlcjEzMjQ2ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/13246825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuuchen",
"html_url": "https://github.com/shuuchen",
"followers_url": "https://api.github.com/users/shuuchen/followers",
"following_url": "https://api.github.com/users/shuuchen/following{/other_user}",
"gists_url": "https://api.github.com/users/shuuchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuuchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuuchen/subscriptions",
"organizations_url": "https://api.github.com/users/shuuchen/orgs",
"repos_url": "https://api.github.com/users/shuuchen/repos",
"events_url": "https://api.github.com/users/shuuchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuuchen/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[] | 2022-01-18T10:17:01 | 2022-08-11T09:57:54 | 2022-01-18T10:33:10 |
NONE
| null | null | null |
## Describe the bug
Cannot import datasets after installation.
## Steps to reproduce the bug
```shell
$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 61, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 28, in <module>
from .features import (
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/__init__.py", line 2, in <module>
from .audio import Audio
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/audio.py", line 7, in <module>
from ..utils.streaming_download_manager import xopen
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 18, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/__init__.py", line 6, in <module>
from . import compression
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/compression.py", line 5, in <module>
from fsspec.archive import AbstractArchiveFileSystem
ModuleNotFoundError: No module named 'fsspec.archive'
```
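A plausible check, assuming the import error comes from an outdated `fsspec` installation (the `fsspec.archive` module only ships with more recent `fsspec` releases); upgrading `fsspec` in the same environment should then let `datasets` import:
```python
# Assumption: an old fsspec without the fsspec.archive module is installed.
# Inspect the installed version; if it is old, upgrade with `pip install -U fsspec`.
import fsspec

print(fsspec.__version__)
```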
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3587/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3586
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3586/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3586/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3586/events
|
https://github.com/huggingface/datasets/issues/3586
| 1,106,455,672 |
I_kwDODunzps5B8yx4
| 3,586 |
Revisit `enable/disable_` toggle function prefix
|
{
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2022-01-18T04:09:55 | 2022-03-14T15:01:08 | 2022-03-14T15:01:08 |
CONTRIBUTOR
| null | null | null |
As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to
- De-deprecating `disable_progress_bar()`
- Adding `enable_progress_bar()`
- On the caching side, adding `enable_caching` and `disable_caching`
Additional decisions have to be made with regards to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the aforementioned functions.
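A short sketch of what the proposed toggles would look like in user code, assuming they are exposed at the top level of the `datasets` namespace as described above:
```python
# Hypothetical usage of the proposed enable/disable toggles (names as proposed above).
import datasets

datasets.disable_progress_bar()   # hide tqdm bars for map/filter/etc.
datasets.enable_progress_bar()    # show them again

datasets.disable_caching()        # do not write transformed datasets to the cache
datasets.enable_caching()         # restore the default caching behaviour
```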
cc @mariosasko @lhoestq
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3586/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3586/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3585
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3585/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3585/events
|
https://github.com/huggingface/datasets/issues/3585
| 1,105,821,470 |
I_kwDODunzps5B6X8e
| 3,585 |
Datasets streaming + map doesn't work for `Audio`
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"This seems related to https://github.com/huggingface/datasets/issues/3505."
] | 2022-01-17T12:55:42 | 2022-01-20T13:28:00 | 2022-01-20T13:28:00 |
CONTRIBUTOR
| null | null | null |
## Describe the bug
When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "en", streaming=True, split="train")
def map_fn(batch):
print("audio keys", batch["audio"].keys())
batch["audio"] = batch["audio"]["array"][:100]
return batch
ds = ds.map(map_fn)
sample = next(iter(ds))
```
I think the audio is somehow decoded before `.map(...)` is actually called.
## Expected results
IMO, the above code snippet should work.
## Actual results
```bash
audio keys dict_keys(['path', 'bytes'])
Traceback (most recent call last):
File "./run_audio.py", line 15, in <module>
sample = next(iter(ds))
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "./run_audio.py", line 9, in map_fn
batch["input"] = batch["audio"]["array"][:100]
KeyError: 'array'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3585/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3584
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3584/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3584/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3584/events
|
https://github.com/huggingface/datasets/issues/3584
| 1,105,231,768 |
I_kwDODunzps5B4H-Y
| 3,584 |
https://huggingface.co/datasets/huggingface/transformers-metadata
|
{
"login": "ecankirkic",
"id": 37082592,
"node_id": "MDQ6VXNlcjM3MDgyNTky",
"avatar_url": "https://avatars.githubusercontent.com/u/37082592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ecankirkic",
"html_url": "https://github.com/ecankirkic",
"followers_url": "https://api.github.com/users/ecankirkic/followers",
"following_url": "https://api.github.com/users/ecankirkic/following{/other_user}",
"gists_url": "https://api.github.com/users/ecankirkic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ecankirkic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ecankirkic/subscriptions",
"organizations_url": "https://api.github.com/users/ecankirkic/orgs",
"repos_url": "https://api.github.com/users/ecankirkic/repos",
"events_url": "https://api.github.com/users/ecankirkic/events{/privacy}",
"received_events_url": "https://api.github.com/users/ecankirkic/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[] | 2022-01-17T00:18:14 | 2022-02-14T08:51:27 | 2022-02-14T08:51:27 |
NONE
| null | null | null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset? Yes-No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3584/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3583
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3583/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3583/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3583/events
|
https://github.com/huggingface/datasets/issues/3583
| 1,105,195,144 |
I_kwDODunzps5B3_CI
| 3,583 |
Add The Medical Segmentation Decathlon Dataset
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] |
open
| false |
{
"login": "pri1311",
"id": 64613009,
"node_id": "MDQ6VXNlcjY0NjEzMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pri1311",
"html_url": "https://github.com/pri1311",
"followers_url": "https://api.github.com/users/pri1311/followers",
"following_url": "https://api.github.com/users/pri1311/following{/other_user}",
"gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pri1311/subscriptions",
"organizations_url": "https://api.github.com/users/pri1311/orgs",
"repos_url": "https://api.github.com/users/pri1311/repos",
"events_url": "https://api.github.com/users/pri1311/events{/privacy}",
"received_events_url": "https://api.github.com/users/pri1311/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "pri1311",
"id": 64613009,
"node_id": "MDQ6VXNlcjY0NjEzMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pri1311",
"html_url": "https://github.com/pri1311",
"followers_url": "https://api.github.com/users/pri1311/followers",
"following_url": "https://api.github.com/users/pri1311/following{/other_user}",
"gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pri1311/subscriptions",
"organizations_url": "https://api.github.com/users/pri1311/orgs",
"repos_url": "https://api.github.com/users/pri1311/repos",
"events_url": "https://api.github.com/users/pri1311/events{/privacy}",
"received_events_url": "https://api.github.com/users/pri1311/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hello! I have recently been involved with a medical image segmentation project myself and was going through the `The Medical Segmentation Decathlon Dataset` as well. \r\nI haven't yet had experience adding datasets to this repository yet but would love to get started. Should I take this issue?\r\nIf yes, I've got two questions -\r\n1. There are 10 different datasets available, so are all datasets to be added in a single PR, or one at a time? \r\n2. Since it's a competition, masks for the test-set are not available. How is that to be tackled? Sorry if it's a silly question, I have recently started exploring `datasets`.",
"Hi! Sure, feel free to take this issue. You can self-assign the issue by commenting `#self-assign`.\r\n\r\nTo answer your questions:\r\n1. It makes the most sense to add each one as a separate config, so one dataset script with 10 configs in a single PR.\r\n2. Just set masks in the test set to `None`.\r\n\r\nNote that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that). \r\n\r\n",
"> Note that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that).\r\n\r\nGotcha, thanks. Will start working on the issue and let you know in case of any doubt.",
"#self-assign",
"This is great! There is a first model on the HUb that uses this dataset! https://huggingface.co/MONAI/example_spleen_segmentation"
] | 2022-01-16T21:42:25 | 2022-03-18T10:44:42 | null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *The Medical Segmentation Decathlon Dataset*
- **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects.
- **Paper:** [link to the dataset paper if available](https://arxiv.org/abs/2106.05735)
- **Data:** http://medicaldecathlon.com/
- **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key data sets will help increase the supply of HF resources for starting an initial community.
(cc @osanseviero @abidlabs )
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3583/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3582
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3582/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3582/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3582/events
|
https://github.com/huggingface/datasets/issues/3582
| 1,104,877,303 |
I_kwDODunzps5B2xb3
| 3,582 |
conll 2003 dataset source url is no longer valid
|
{
"login": "rcanand",
"id": 303900,
"node_id": "MDQ6VXNlcjMwMzkwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcanand",
"html_url": "https://github.com/rcanand",
"followers_url": "https://api.github.com/users/rcanand/followers",
"following_url": "https://api.github.com/users/rcanand/following{/other_user}",
"gists_url": "https://api.github.com/users/rcanand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcanand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcanand/subscriptions",
"organizations_url": "https://api.github.com/users/rcanand/orgs",
"repos_url": "https://api.github.com/users/rcanand/repos",
"events_url": "https://api.github.com/users/rcanand/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcanand/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I came to open the same issue.",
"Thanks for reporting !\r\n\r\nI pushed a temporary fix on `master` that uses an URL from a previous commit to access the dataset for now, until we have a better solution",
"I changed the URL again to use another host, the fix is available on `master` and we'll probably do a new release of `datasets` tomorrow.\r\n\r\nIn the meantime, feel free to do `load_dataset(..., revision=\"master\")` to use the fixed script",
"We just released a new version of `datasets` with a working URL. Feel free to update `datasets` and try again :)",
"Hello! Unfortunately, this URL does not work for me. \r\nCould you please tell me how I can solve the problem?\r\n\r\n`>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"conll2003\")\r\nDownloading and preparing dataset conll2003/conll2003 (download: 4.63 MiB, generated: 9.78 MiB, post-processed: Unknown size, total: 14.41 MiB) to /home/dafedo/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/load.py\", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/dafedo/.cache/huggingface/modules/datasets_modules/datasets/conll2003/40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6/conll2003.py\", line 196, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt\r\n`\r\n\r\nI receive the same error when I run \"itrain run_configs/conll2003.json\" from https://github.com/adapter-hub/efficient-task-transfer\r\n\r\nThank you very much in advance!\r\n\r\nRegards, \r\nDaria\r\n",
"Can you try updating `datasets` and try again ?\r\n```\r\npip install -U datasets\r\n```",
"@lhoestq Thank you very much for your answer! \r\n\r\nIt works this way, but for my research I need datasets==1.6.3 or closest to it because otherwise the other package would not work as it is built on this version.\r\nDo you have any other suggestion? I would really appreciate it. Maybe which version of the datasets is without hard-coded link but closest to 1.6.3\r\n",
"No problem, I have solved it. \r\nThank you anyway.",
"Out of curiosity, which package has the `datasets==1.6.3` requirement ?"
] | 2022-01-15T23:04:17 | 2022-07-20T13:06:40 | 2022-01-21T16:57:32 |
NONE
| null | null | null |
## Describe the bug
Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("conll2003")
```
## Expected results
The dataset should load.
## Actual results
It is looking for the dataset at `https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt` but it was removed from there yesterday (see [commit](https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690) that removed the file and related [issue](https://github.com/davidsbatista/NER-datasets/issues/8)).
- We should replace this with an alternate valid location.
- this is being referenced in the huggingface course chapter 7 [colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section2_pt.ipynb), which is also broken.
```python
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-27c956bec93c> in <module>()
1 from datasets import load_dataset
2
----> 3 raw_datasets = load_dataset("conll2003")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params)
610 )
611 elif response is not None and response.status_code == 404:
--> 612 raise FileNotFoundError(f"Couldn't find file at {url}")
613 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
614 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3582/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 5,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3582/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3581
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3581/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3581/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3581/events
|
https://github.com/huggingface/datasets/issues/3581
| 1,104,857,822 |
I_kwDODunzps5B2sre
| 3,581 |
Unable to create a dataset from a parquet file in S3
|
{
"login": "regCode",
"id": 18012903,
"node_id": "MDQ6VXNlcjE4MDEyOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/18012903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regCode",
"html_url": "https://github.com/regCode",
"followers_url": "https://api.github.com/users/regCode/followers",
"following_url": "https://api.github.com/users/regCode/following{/other_user}",
"gists_url": "https://api.github.com/users/regCode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regCode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regCode/subscriptions",
"organizations_url": "https://api.github.com/users/regCode/orgs",
"repos_url": "https://api.github.com/users/regCode/repos",
"events_url": "https://api.github.com/users/regCode/events{/privacy}",
"received_events_url": "https://api.github.com/users/regCode/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"Hi ! Currently it only works with local paths, file-like objects are not supported yet"
] | 2022-01-15T21:34:16 | 2022-02-14T08:52:57 | null |
NONE
| null | null | null |
## Describe the bug
Trying to create a dataset from a parquet file in S3.
## Steps to reproduce the bug
```python
import s3fs
from datasets import Dataset
s3 = s3fs.S3FileSystem(anon=False)
with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file:
dataset = Dataset.from_parquet(s3file)
```
## Expected results
A new Dataset object
## Actual results
```AttributeError: 'S3File' object has no attribute 'decode'```
```
AttributeError Traceback (most recent call last)
<command-2452877612515691> in <module>
5
6 with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file:
----> 7 dataset = Dataset.from_parquet(s3file)
/databricks/python/lib/python3.8/site-packages/datasets/arrow_dataset.py in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs)
907 from .io.parquet import ParquetDatasetReader
908
--> 909 return ParquetDatasetReader(
910 path_or_paths,
911 split=split,
/databricks/python/lib/python3.8/site-packages/datasets/io/parquet.py in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, **kwargs)
28 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}
29 hash = _PACKAGED_DATASETS_MODULES["parquet"][1]
---> 30 self.builder = Parquet(
31 cache_dir=cache_dir,
32 data_files=path_or_paths,
/databricks/python/lib/python3.8/site-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, base_path, info, features, use_auth_token, namespace, data_files, data_dir, **config_kwargs)
246
247 if data_files is not None and not isinstance(data_files, DataFilesDict):
--> 248 data_files = DataFilesDict.from_local_or_remote(
249 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
250 )
/databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
576 for key, patterns_for_key in patterns.items():
577 out[key] = (
--> 578 DataFilesList.from_local_or_remote(
579 patterns_for_key,
580 base_path=base_path,
/databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
544 ) -> "DataFilesList":
545 base_path = base_path if base_path is not None else str(Path().resolve())
--> 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)
548 return cls(data_files, origin_metadata)
/databricks/python/lib/python3.8/site-packages/datasets/data_files.py in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
191 data_files = []
192 for pattern in patterns:
--> 193 if is_remote_url(pattern):
194 data_files.append(Url(pattern))
195 else:
/databricks/python/lib/python3.8/site-packages/datasets/utils/file_utils.py in is_remote_url(url_or_filename)
115
116 def is_remote_url(url_or_filename: str) -> bool:
--> 117 parsed = urlparse(url_or_filename)
118 return parsed.scheme in ("http", "https", "s3", "gs", "hdfs", "ftp")
119
/usr/lib/python3.8/urllib/parse.py in urlparse(url, scheme, allow_fragments)
370 Note that we don't break the components up in smaller bits
371 (e.g. netloc is a single string) and we don't expand % escapes."""
--> 372 url, scheme, _coerce_result = _coerce_args(url, scheme)
373 splitresult = urlsplit(url, scheme, allow_fragments)
374 scheme, netloc, url, query, fragment = splitresult
/usr/lib/python3.8/urllib/parse.py in _coerce_args(*args)
122 if str_input:
123 return args + (_noop,)
--> 124 return _decode_args(args) + (_encode_result,)
125
126 # Result objects are more helpful than simple tuples
/usr/lib/python3.8/urllib/parse.py in _decode_args(args, encoding, errors)
106 def _decode_args(args, encoding=_implicit_encoding,
107 errors=_implicit_errors):
--> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args)
109
110 def _coerce_args(*args):
/usr/lib/python3.8/urllib/parse.py in <genexpr>(.0)
106 def _decode_args(args, encoding=_implicit_encoding,
107 errors=_implicit_errors):
--> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args)
109
110 def _coerce_args(*args):
AttributeError: 'S3File' object has no attribute 'decode'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.10
- PyArrow version: 6.0.1
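Since `Dataset.from_parquet` currently expects a local path rather than a file-like object (see the comment above), one interim route is to read the Parquet file through `s3fs` into memory first and build the dataset from a DataFrame. This is a sketch under that assumption; the S3 path below is a placeholder standing in for the reporter's `PATH_LTR_TOY_CLEAN_DATASET`.
```python
import pyarrow.parquet as pq
import s3fs
from datasets import Dataset

PATH_LTR_TOY_CLEAN_DATASET = "s3://my-bucket/ltr-toy-clean.parquet"  # placeholder path

s3 = s3fs.S3FileSystem(anon=False)
with s3.open(PATH_LTR_TOY_CLEAN_DATASET, "rb") as s3file:
    df = pq.read_table(s3file).to_pandas()  # pyarrow accepts the file-like object

dataset = Dataset.from_pandas(df)
```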
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3581/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3580
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3580/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3580/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3580/events
|
https://github.com/huggingface/datasets/issues/3580
| 1,104,663,242 |
I_kwDODunzps5B19LK
| 3,580 |
Bug in wiki bio load
|
{
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false | null |
[] | null |
[
"+1, here's the error I got: \r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>>\r\n>>> load_dataset(\"wiki_bio\")\r\nDownloading: 7.58kB [00:00, 4.42MB/s]\r\nDownloading: 2.71kB [00:00, 1.30MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/load.py\", line 1694, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 662, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/wiki_bio/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9/wiki_bio.py\", line 125, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 308, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 251, in map_nested\r\n return function(data_struct)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 298, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 612, in get_from_cache\r\n raise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil\r\n>>>\r\n```\r\n",
"@alejandrocros and @lhoestq - you added the wiki_bio dataset in #1173. It doesn't work anymore. Can you take a look at this?",
"And if something is wrong with Google Drive, you could try to download (and collate and unzip) from here: https://github.com/DavidGrangier/wikipedia-biography-dataset",
"Hi ! Thanks for reporting. I've downloaded the data and concatenated them into a zip file available here: https://huggingface.co/datasets/wiki_bio/tree/main/data\r\n\r\nI guess we can update the dataset script to use this zip file now :)"
] | 2022-01-15T10:04:33 | 2022-01-31T08:38:09 | 2022-01-31T08:38:09 |
NONE
| null | null | null |
wiki_bio is failing to load because of a failing Google Drive link. Can someone fix this?


|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3580/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3580/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3578
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3578/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3578/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3578/events
|
https://github.com/huggingface/datasets/issues/3578
| 1,103,403,287 |
I_kwDODunzps5BxJkX
| 3,578 |
label information gets lost after parquet serialization
|
{
"login": "Tudyx",
"id": 56633664,
"node_id": "MDQ6VXNlcjU2NjMzNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tudyx",
"html_url": "https://github.com/Tudyx",
"followers_url": "https://api.github.com/users/Tudyx/followers",
"following_url": "https://api.github.com/users/Tudyx/following{/other_user}",
"gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions",
"organizations_url": "https://api.github.com/users/Tudyx/orgs",
"repos_url": "https://api.github.com/users/Tudyx/repos",
"events_url": "https://api.github.com/users/Tudyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tudyx/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ?\r\n\r\nEDIT: the issue is still there actually\r\n\r\nI think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file",
"This info is stored in the Parquet schema metadata as of https://github.com/huggingface/datasets/pull/5516"
] | 2022-01-14T10:10:38 | 2023-07-25T15:44:53 | 2023-07-25T15:44:53 |
NONE
| null | null | null |
## Describe the bug
In the *dataset_info.json* file, information about the label gets lost after dataset serialization.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# normal save
dataset = load_dataset('glue', 'sst2', split='train')
dataset.save_to_disk("normal_save")
# save after parquet serialization
dataset.to_parquet("glue-sst2-train.parquet")
dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet')
dataset.save_to_disk("save_after_parquet")
```
## Expected results
I expected to keep the label information in the *dataset_info.json* file even after parquet serialization.
## Actual results
In the normal serialization I got:
```json
"label": {
"num_classes": 2,
"names": [
"negative",
"positive"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
```
And after parquet serialization I got:
```json
"label": {
"dtype": "int64",
"id": null,
"_type": "Value"
},
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.0
- Platform: ubuntu 20.04
- Python version: 3.8.10
- PyArrow version: 6.0.1
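Until the features are stored in the Parquet schema metadata (the fix discussed in the comments above), a possible workaround is to cast the column back to a `ClassLabel` after reloading; this sketch reuses the label names from the original SST-2 info.
```python
from datasets import ClassLabel, load_dataset

dataset = load_dataset("parquet", data_files="glue-sst2-train.parquet", split="train")
dataset = dataset.cast_column("label", ClassLabel(names=["negative", "positive"]))
print(dataset.features["label"])  # ClassLabel(names=['negative', 'positive'], ...)
```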
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3578/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3577/events
|
https://github.com/huggingface/datasets/issues/3577
| 1,102,598,241 |
I_kwDODunzps5BuFBh
| 3,577 |
Add The Mexican Emotional Speech Database (MESD)
|
{
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[] | 2022-01-13T23:49:36 | 2022-01-27T14:14:38 | null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *The Mexican Emotional Speech Database (MESD)*
- **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child.*
- **Paper:** *[Paper](https://ieeexplore.ieee.org/abstract/document/9629934/authors#authors)*
- **Data:** *[link to the Github repository or current dataset location](https://data.mendeley.com/datasets/cy34mh68j9/3)*
- **Motivation:** *Would add Spanish speech data to the HF datasets :) *
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3577/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3572/events
|
https://github.com/huggingface/datasets/issues/3572
| 1,100,634,244 |
I_kwDODunzps5BmliE
| 3,572 |
ConnectionError in IndicGLUE dataset
|
{
"login": "sahoodib",
"id": 79107194,
"node_id": "MDQ6VXNlcjc5MTA3MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/79107194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahoodib",
"html_url": "https://github.com/sahoodib",
"followers_url": "https://api.github.com/users/sahoodib/followers",
"following_url": "https://api.github.com/users/sahoodib/following{/other_user}",
"gists_url": "https://api.github.com/users/sahoodib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahoodib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahoodib/subscriptions",
"organizations_url": "https://api.github.com/users/sahoodib/orgs",
"repos_url": "https://api.github.com/users/sahoodib/repos",
"events_url": "https://api.github.com/users/sahoodib/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahoodib/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"@sahoodib, thanks for reporting.\r\n\r\nIndeed, none of the data links appearing in the IndicGLUE website are working, e.g.: https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/soham-articles.tar.gz\r\n```\r\n<Error>\r\n<Code>UserProjectAccountProblem</Code>\r\n<Message>User project billing account not in good standing.</Message>\r\n<Details>\r\nThe billing account for the owning project is disabled in state delinquent\r\n</Details>\r\n</Error>\r\n```\r\n\r\nWe have contacted the data owners to inform them about their issue and ask them if they plan to fix it.",
"Yesterday I resent a reminder email with more AI4Bharat-related people in the loop.\r\n\r\nI also opened an issue in their repos:\r\n- https://github.com/AI4Bharat/indicnlp_corpus/issues/14\r\n- https://github.com/AI4Bharat/ai4bharat.org/issues/71",
"We have received a reply from the authors reporting they have updated the URLs of their data files and opened a PR. See:\r\n- #4978 "
] | 2022-01-12T17:59:36 | 2022-09-15T21:57:34 | 2022-09-15T21:57:34 |
NONE
| null | null | null |
While I am trying to load the IndicGLUE dataset (https://huggingface.co/datasets/indic_glue), it gives me the error:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3572/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3568
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3568/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3568/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3568/events
|
https://github.com/huggingface/datasets/issues/3568
| 1,100,380,631 |
I_kwDODunzps5BlnnX
| 3,568 |
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
|
{
"login": "fabianslife",
"id": 49265757,
"node_id": "MDQ6VXNlcjQ5MjY1NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabianslife",
"html_url": "https://github.com/fabianslife",
"followers_url": "https://api.github.com/users/fabianslife/followers",
"following_url": "https://api.github.com/users/fabianslife/following{/other_user}",
"gists_url": "https://api.github.com/users/fabianslife/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabianslife/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabianslife/subscriptions",
"organizations_url": "https://api.github.com/users/fabianslife/orgs",
"repos_url": "https://api.github.com/users/fabianslife/repos",
"events_url": "https://api.github.com/users/fabianslife/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabianslife/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @fabianslife, thanks for reporting.\r\n\r\nI think you were using an old version of `datasets` because this bug was already fixed in version `1.13.0` (13 Oct 2021):\r\n- Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f\r\n- PR: #3046\r\n- Issue: #2969 \r\n\r\nPlease, feel free to update the library: `pip install -U datasets`."
] | 2022-01-12T14:03:44 | 2022-02-14T09:32:34 | 2022-02-14T09:32:34 |
NONE
| null | null | null |
I wanted to download the Medical Dialog Dataset from Hugging Face, using this GitHub link:
https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog
After downloading the raw datasets from Google Drive, I unpacked everything and put it in the same folder as `medical_dialog.py`, which is:
```
import copy
import os
import re
import datasets
_CITATION = """\
@article{chen2020meddiag,
title={MedDialog: a large-scale medical dialogue dataset},
author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
journal={arXiv preprint arXiv:2004.03329},
year={2020}
}
"""
_DESCRIPTION = """\
The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\
It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \
The raw dialogues are from healthcaremagic.com and icliniq.com.\
All copyrights of the data belong to healthcaremagic.com and icliniq.com.
"""
_HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System"
_LICENSE = ""
class MedicalDialog(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
BUILDER_CONFIGS = [
datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION),
datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION),
]
@property
def manual_download_instructions(self):
return """\
\n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\
and manually download the dataset from Google Drive. Once it is completed,
a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder(
or whichever folder your browser chooses to save files to). Unzip the folder to obtain
a folder named "Medical-Dialogue-Dataset-English" several text files.
Now, you can specify the path to this folder for the data_dir argument in the
datasets.load_dataset(...) option.
The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English".
The data can then be loaded using the below command:\
datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`.
\n For Chinese:\nFollow the above process. Change the 'name' to 'zh'.The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2
**NOTE**
- A caution while downloading from drive. It is better to download single files since creating a zip might not include files <500 MB. This has been observed mutiple times.
- After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input tu the data_dir path.
"""
datasets.load_dataset("medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English")
def _info(self):
if self.config.name == "zh":
features = datasets.Features(
{
"file_name": datasets.Value("string"),
"dialogue_id": datasets.Value("int32"),
"dialogue_url": datasets.Value("string"),
"dialogue_turns": datasets.Sequence(
{
"speaker": datasets.ClassLabel(names=["病人", "医生"]),
"utterance": datasets.Value("string"),
}
),
}
)
if self.config.name == "en":
features = datasets.Features(
{
"file_name": datasets.Value("string"),
"dialogue_id": datasets.Value("int32"),
"dialogue_url": datasets.Value("string"),
"dialogue_turns": datasets.Sequence(
{
"speaker": datasets.ClassLabel(names=["Patient", "Doctor"]),
"utterance": datasets.Value("string"),
}
),
}
)
return datasets.DatasetInfo(
# This is the description that will appear on the datasets page.
description=_DESCRIPTION,
features=features,
supervised_keys=None,
# Homepage of the dataset for documentation
homepage=_HOMEPAGE,
# License for the dataset if available
license=_LICENSE,
# Citation for the dataset
citation=_CITATION,
)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
if not os.path.exists(path_to_manual_file):
raise FileNotFoundError(
f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})"
)
filepaths = [
os.path.join(path_to_manual_file, txt_file_name)
for txt_file_name in sorted(os.listdir(path_to_manual_file))
if txt_file_name.endswith("txt")
]
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})]
def _generate_examples(self, filepaths):
"""Yields examples. Iterates over each file and give the creates the corresponding features.
NOTE:
- The code makes some assumption on the structure of the raw .txt file.
- There are some checks to separate different id's. Hopefully, should not cause further issues later when more txt files are added.
"""
data_lang = self.config.name
id_ = -1
for filepath in filepaths:
with open(filepath, encoding="utf-8") as f_in:
# Parameters to just "sectionize" the raw data
last_part = ""
last_dialog = {}
last_list = []
last_user = ""
check_list = []
# These flags are present to have a single function address both chinese and english data
# English data is a little hahazard (i.e. the sentences spans multiple different lines),
# Chinese is compact with one line for doctor and patient.
conv_flag = False
des_flag = False
while True:
line = f_in.readline()
if not line:
break
# Extracting the dialog id
if line[:2] == "id": # Hardcode alert!
# Handling ID references that may come in the description
# These were observed in the Chinese dataset and were not
# followed by numbers
try:
dialogue_id = int(re.findall(r"\d+", line)[0])
except IndexError:
continue
# Extracting the url
if line[:4] == "http": # Hardcode alert!
dialogue_url = line.rstrip()
# Extracting the patient info from description.
if line[:11] == "Description": # Hardcode alert!
last_part = "description"
last_dialog = {}
last_list = []
last_user = ""
last_conv = {"speaker": "", "utterance": ""}
while True:
line = f_in.readline()
if (not line) or (line in ["\n", "\n\r"]):
break
else:
if data_lang == "zh": # Condition in chinese
if line[:5] == "病情描述:": # Hardcode alert!
last_user = "病人"
sen = f_in.readline().rstrip()
des_flag = True
if data_lang == "en":
last_user = "Patient"
sen = line.rstrip()
des_flag = True
if des_flag:
if sen == "":
continue
if sen in check_list:
last_conv["speaker"] = ""
last_conv["utterance"] = ""
else:
last_conv["speaker"] = last_user
last_conv["utterance"] = sen
check_list.append(sen)
des_flag = False
break
# Extracting the conversation info from dialogue.
elif line[:8] == "Dialogue": # Hardcode alert!
if last_part == "description" and len(last_conv["utterance"]) > 0:
last_part = "dialogue"
if data_lang == "zh":
last_user = "病人"
if data_lang == "en":
last_user = "Patient"
while True:
line = f_in.readline()
if (not line) or (line in ["\n", "\n\r"]):
conv_flag = False
last_user = ""
last_list.append(copy.deepcopy(last_conv))
# To ensure close of conversation, only even number of sentences
# are extracted
last_turn = len(last_list)
if int(last_turn / 2) > 0:
temp = int(last_turn / 2)
id_ += 1
last_dialog["file_name"] = filepath
last_dialog["dialogue_id"] = dialogue_id
last_dialog["dialogue_url"] = dialogue_url
last_dialog["dialogue_turns"] = last_list[: temp * 2]
yield id_, last_dialog
break
if data_lang == "zh":
if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert!
user = line[:2] # Hardcode alert!
line = f_in.readline()
conv_flag = True
# The elif block is to ensure that multi-line sentences are captured.
# This has been observed only in english.
if data_lang == "en":
if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert!
user = line.replace(":", "").rstrip()
line = f_in.readline()
conv_flag = True
elif line[:2] != "id": # Hardcode alert!
conv_flag = True
# Continues till the next ID is parsed
if conv_flag:
sen = line.rstrip()
if sen == "":
continue
if user == last_user:
last_conv["utterance"] = last_conv["utterance"] + sen
else:
last_user = user
last_list.append(copy.deepcopy(last_conv))
last_conv["utterance"] = sen
last_conv["speaker"] = user
```
Running this code gives me the error:
```
File "C:\Users\Fabia\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=292801173, num_examples=229674, dataset_name='medical_dialog')}]
```
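A possible workaround (an untested sketch, not from the original report; the config name and paths below are placeholders): the `NonMatchingSplitsSizesError` is raised because the split sizes recorded for this config no longer match what the modified script generates, so skipping the verification should let the build finish.
```python
from datasets import load_dataset

# Assumption: the modified loading script is saved locally and the raw .txt files
# are provided via data_dir, as the original script expects.
dataset = load_dataset(
    "./medical_dialog",          # path to the local copy of the loading script
    "en",                        # or "zh"
    data_dir="./raw_txt_files",  # placeholder path to the manually downloaded data
    ignore_verifications=True,   # skip the recorded split-size checks
)
```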
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3568/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3563
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3563/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3563/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3563/events
|
https://github.com/huggingface/datasets/issues/3563
| 1,099,070,368 |
I_kwDODunzps5Bgnug
| 3,563 |
Dataset.from_pandas preserves useless index
|
{
"login": "Sorrow321",
"id": 20703486,
"node_id": "MDQ6VXNlcjIwNzAzNDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sorrow321",
"html_url": "https://github.com/Sorrow321",
"followers_url": "https://api.github.com/users/Sorrow321/followers",
"following_url": "https://api.github.com/users/Sorrow321/following{/other_user}",
"gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions",
"organizations_url": "https://api.github.com/users/Sorrow321/orgs",
"repos_url": "https://api.github.com/users/Sorrow321/repos",
"events_url": "https://api.github.com/users/Sorrow321/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sorrow321/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. "
] | 2022-01-11T12:07:07 | 2022-01-12T16:11:27 | 2022-01-12T16:11:27 |
CONTRIBUTOR
| null | null | null |
## Describe the bug
Let's say that you want to create a Dataset object from pandas dataframe. Most likely you will write something like this:
```
import pandas as pd
from datasets import Dataset
df = pd.read_csv('some_dataset.csv')
# Some DataFrame preprocessing code...
dataset = Dataset.from_pandas(df)
```
If your preprocessing code contain indexing operations like this:
```
df = df[df.col1 == some_value]
```
then your df.index can be changed from (default) ```RangeIndex(start=0, stop=16590, step=1)``` to something like this ```Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ..., 83979, 83980, 83981, 83982, 83983, 83984, 83985, 83986, 83987, 83988], dtype='int64', length=16590)```
In this case, PyArrow (by default) will preserve this non-standard index. As a result, your dataset object will have an extra field that you likely don't want: '__index_level_0__'.
You can easily fix this by adding the extra argument ```preserve_index=False``` to the call of ```InMemoryTable.from_pandas``` in ```arrow_dataset.py```.
If you agree that this isn't desirable behavior, I can make a PR fixing it.
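Until that lands, a minimal sketch of a user-side workaround (not from the original report) is to drop the filtered index before conversion:
```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"col1": [0, 1, 1, 0], "text": ["a", "b", "c", "d"]})
df = df[df.col1 == 1]  # filtering leaves a non-default Int64Index([1, 2])

# Resetting the index restores a RangeIndex, so PyArrow has no extra column to preserve
dataset = Dataset.from_pandas(df.reset_index(drop=True))
assert "__index_level_0__" not in dataset.column_names
```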
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-44-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3563/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3561
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3561/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3561/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3561/events
|
https://github.com/huggingface/datasets/issues/3561
| 1,098,328,870 |
I_kwDODunzps5Bdysm
| 3,561 |
Cannot load ‘bookcorpusopen’
|
{
"login": "HUIYINXUE",
"id": 54684403,
"node_id": "MDQ6VXNlcjU0Njg0NDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/54684403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HUIYINXUE",
"html_url": "https://github.com/HUIYINXUE",
"followers_url": "https://api.github.com/users/HUIYINXUE/followers",
"following_url": "https://api.github.com/users/HUIYINXUE/following{/other_user}",
"gists_url": "https://api.github.com/users/HUIYINXUE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HUIYINXUE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HUIYINXUE/subscriptions",
"organizations_url": "https://api.github.com/users/HUIYINXUE/orgs",
"repos_url": "https://api.github.com/users/HUIYINXUE/repos",
"events_url": "https://api.github.com/users/HUIYINXUE/events{/privacy}",
"received_events_url": "https://api.github.com/users/HUIYINXUE/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time ago.\r\n\r\nThere are community-created versions of BookCorpus, such as the files hosted in the link below.\r\nhttps://battle.shawwn.com/sdb/bookcorpus/\r\n\r\nAnd more discussion here:\r\nhttps://github.com/soskek/bookcorpus\r\n\r\nDo we want to remove this dataset entirely? There's a fair argument for this, given that the official BookCorpus dataset was taken down by the authors. If not, perhaps can open a PR with the link to the community-created tar above and updated dataset description.",
"Hi! The `bookcorpusopen` dataset is not working for the same reason as explained in this comment: https://github.com/huggingface/datasets/issues/3504#issuecomment-1004564980",
"Hi @HUIYINXUE, it should work now that the data owners created a mirror server with all data, and we updated the URL in our library."
] | 2022-01-10T20:17:18 | 2022-02-14T09:19:27 | 2022-02-14T09:18:47 |
NONE
| null | null | null |
## Describe the bug
Cannot load 'bookcorpusopen'
## Steps to reproduce the bug
```python
dataset = load_dataset('bookcorpusopen')
```
or
```python
dataset = load_dataset('bookcorpusopen',script_version='master')
```
## Actual results
ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux version 3.10.0-1160.45.1.el7.x86_64
- Python version: 3.6.13
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3561/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3558
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3558/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3558/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3558/events
|
https://github.com/huggingface/datasets/issues/3558
| 1,098,025,866 |
I_kwDODunzps5BcouK
| 3,558 |
Integrate Milvus (pymilvus) library
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false |
{
"login": "xiaofan-luan",
"id": 83447078,
"node_id": "MDQ6VXNlcjgzNDQ3MDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/83447078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaofan-luan",
"html_url": "https://github.com/xiaofan-luan",
"followers_url": "https://api.github.com/users/xiaofan-luan/followers",
"following_url": "https://api.github.com/users/xiaofan-luan/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaofan-luan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaofan-luan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaofan-luan/subscriptions",
"organizations_url": "https://api.github.com/users/xiaofan-luan/orgs",
"repos_url": "https://api.github.com/users/xiaofan-luan/repos",
"events_url": "https://api.github.com/users/xiaofan-luan/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaofan-luan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "xiaofan-luan",
"id": 83447078,
"node_id": "MDQ6VXNlcjgzNDQ3MDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/83447078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaofan-luan",
"html_url": "https://github.com/xiaofan-luan",
"followers_url": "https://api.github.com/users/xiaofan-luan/followers",
"following_url": "https://api.github.com/users/xiaofan-luan/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaofan-luan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaofan-luan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaofan-luan/subscriptions",
"organizations_url": "https://api.github.com/users/xiaofan-luan/orgs",
"repos_url": "https://api.github.com/users/xiaofan-luan/repos",
"events_url": "https://api.github.com/users/xiaofan-luan/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaofan-luan/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @mariosasko,Just search randomly and I found this issue~ I'm the tech lead of Milvus and we are looking forward to integrate milvus together with huggingface datasets.\r\n\r\nAny suggestion on how we could start?\r\n",
"Feel free to assign to me and we probably need some guide on it",
"@mariosasko any updates my man?\r\n",
"Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.",
"> Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.\r\n\r\nSure, we take a look and do some research"
] | 2022-01-10T15:20:29 | 2022-03-05T12:28:36 | null |
CONTRIBUTOR
| null | null | null |
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3558/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3558/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3555/events
|
https://github.com/huggingface/datasets/issues/3555
| 1,097,736,982 |
I_kwDODunzps5BbiMW
| 3,555 |
DuplicatedKeysError when loading tweet_qa dataset
|
{
"login": "LeonieWeissweiler",
"id": 30300891,
"node_id": "MDQ6VXNlcjMwMzAwODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonieWeissweiler",
"html_url": "https://github.com/LeonieWeissweiler",
"followers_url": "https://api.github.com/users/LeonieWeissweiler/followers",
"following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions",
"organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs",
"repos_url": "https://api.github.com/users/LeonieWeissweiler/repos",
"events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows:\r\n```python\r\nimport datasets\r\ndset = datasets.load_dataset(\"tweet_qa\", revision=\"master\")\r\n```"
] | 2022-01-10T10:53:11 | 2022-01-12T15:17:33 | 2022-01-12T15:13:56 |
NONE
| null | null | null |
When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs:
```
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e
Keys should be unique and deterministic in nature
```
Might be related to issues #2433 and #2333
- `datasets` version: 1.17.0
- Python version: 3.8.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3555/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3554
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3554/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3554/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3554/events
|
https://github.com/huggingface/datasets/issues/3554
| 1,097,711,367 |
I_kwDODunzps5Bbb8H
| 3,554 |
ImportError: cannot import name 'is_valid_waiter_error'
|
{
"login": "danielbellhv",
"id": 84714841,
"node_id": "MDQ6VXNlcjg0NzE0ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielbellhv",
"html_url": "https://github.com/danielbellhv",
"followers_url": "https://api.github.com/users/danielbellhv/followers",
"following_url": "https://api.github.com/users/danielbellhv/following{/other_user}",
"gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions",
"organizations_url": "https://api.github.com/users/danielbellhv/orgs",
"repos_url": "https://api.github.com/users/danielbellhv/repos",
"events_url": "https://api.github.com/users/danielbellhv/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielbellhv/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue? ",
"Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However, I no longer need this notebook; but it would be nice to have this problem solved for others. So don't stress too much if you two can't reproduce error.",
"Hey @danielbellhv, \r\n\r\nThis issue might be related to Studio probably not having an up to date `botocore` and `boto3` version. I ran into this as well a while back. My workaround was \r\n```python\r\n# using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10\r\n!pip install \"datasets==1.13\" --upgrade\r\n```\r\n\r\nIn `datasets` we use the latest `s3fs` and `fsspec` but aws-cli and notebook is not supporting this. You could also update the `aws-cli` and associated packages to get the latest `datasets` version\r\n"
] | 2022-01-10T10:32:04 | 2022-02-14T09:35:57 | 2022-02-14T09:35:57 |
NONE
| null | null | null |
Based on [SO post](https://stackoverflow.com/q/70606147/17840900).
I'm following along with this [Notebook][1], cell "**Loading the dataset**".
Kernel: `conda_pytorch_p36`.
I run:
```
! pip install datasets transformers optimum[intel]
```
Output:
```
Requirement already satisfied: datasets in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.17.0)
Requirement already satisfied: transformers in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (4.15.0)
Requirement already satisfied: optimum[intel] in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3)
Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.19.5)
Requirement already satisfied: dill in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.3.4)
Requirement already satisfied: tqdm>=4.62.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.62.3)
Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.2.1)
Requirement already satisfied: packaging in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (21.3)
Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (6.0.1)
Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.1.5)
Requirement already satisfied: xxhash in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.0.2)
Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (3.8.1)
Requirement already satisfied: fsspec[http]>=2021.05.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2021.11.1)
Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.8)
Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.70.12.2)
Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.5.0)
Requirement already satisfied: requests>=2.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.25.1)
Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (5.4.1)
Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (2021.4.4)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.10.3)
Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (3.0.12)
Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.0.46)
Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.10.1)
Requirement already satisfied: sympy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.8)
Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (15.0.1)
Requirement already satisfied: pycocotools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (2.0.3)
Requirement already satisfied: neural-compressor>=1.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.9)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.10.0.0)
Requirement already satisfied: sigopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.2.0)
Requirement already satisfied: opencv-python in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (4.5.1.48)
Requirement already satisfied: cryptography in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.4.7)
Requirement already satisfied: py-cpuinfo in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.0.0)
Requirement already satisfied: gevent in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (21.1.2)
Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.7.5)
Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.8.0)
Requirement already satisfied: gevent-websocket in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.10.1)
Requirement already satisfied: hyperopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.2.7)
Requirement already satisfied: Flask in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.0.1)
Requirement already satisfied: prettytable in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.5.0)
Requirement already satisfied: Flask-SocketIO in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.1.1)
Requirement already satisfied: scikit-learn in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.24.2)
Requirement already satisfied: Pillow in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.4.0)
Requirement already satisfied: Flask-Cors in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.0.10)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging->datasets) (2.4.7)
Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (4.0.0)
Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2021.5.30)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (1.26.5)
Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2.10)
Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.6.3)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (2.0.9)
Requirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (21.2.0)
Requirement already satisfied: asynctest==0.13.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (0.13.0)
Requirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.1.0)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (4.0.1)
Requirement already satisfied: aiosignal>=1.1.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (5.1.0)
Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum[intel]) (10.0)
Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->datasets) (3.4.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2021.1)
Requirement already satisfied: matplotlib>=2.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (3.3.4)
Requirement already satisfied: cython>=0.27.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (0.29.23)
Requirement already satisfied: setuptools>=18.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (52.0.0.post20210125)
Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.0.1)
Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (8.0.1)
Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.16.0)
Requirement already satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum[intel]) (1.2.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (0.10.0)
Requirement already satisfied: cffi>=1.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cryptography->neural-compressor>=1.7->optimum[intel]) (1.14.5)
Requirement already satisfied: Werkzeug>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.2)
Requirement already satisfied: Jinja2>=3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (3.0.1)
Requirement already satisfied: itsdangerous>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1)
Requirement already satisfied: python-socketio>=5.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (5.5.0)
Requirement already satisfied: zope.event in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (4.5.0)
Requirement already satisfied: greenlet<2.0,>=0.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (1.1.0)
Requirement already satisfied: zope.interface in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (5.4.0)
Requirement already satisfied: future in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.18.2)
Requirement already satisfied: cloudpickle in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.6.0)
Requirement already satisfied: networkx>=2.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (2.5)
Requirement already satisfied: scipy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.5.3)
Requirement already satisfied: py4j in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.10.7)
Requirement already satisfied: wcwidth in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from prettytable->neural-compressor>=1.7->optimum[intel]) (0.2.5)
Requirement already satisfied: contextlib2>=0.5.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from schema->neural-compressor>=1.7->optimum[intel]) (0.6.0.post1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from scikit-learn->neural-compressor>=1.7->optimum[intel]) (2.1.0)
Requirement already satisfied: pyOpenSSL>=20.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (20.0.1)
Requirement already satisfied: pypng>=0.0.20 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.0.21)
Requirement already satisfied: kubernetes<13.0.0,>=12.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (12.0.1)
Requirement already satisfied: rsa<5.0.0,>=4.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.7.2)
Requirement already satisfied: boto3<2.0.0,==1.16.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.16.34)
Requirement already satisfied: Pint<0.17.0,>=0.16.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.16.1)
Requirement already satisfied: GitPython>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.18)
Requirement already satisfied: backoff<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.11.1)
Requirement already satisfied: ipython>=5.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (7.16.1)
Requirement already satisfied: docker<5.0.0,>=4.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.4.4)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.3.7)
Requirement already satisfied: botocore<1.20.0,>=1.19.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (1.19.63)
Requirement already satisfied: pycparser in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cffi>=1.12->cryptography->neural-compressor>=1.7->optimum[intel]) (2.20)
Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from docker<5.0.0,>=4.4.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.58.0)
Requirement already satisfied: gitdb<5,>=4.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.0.9)
Requirement already satisfied: traitlets>=4.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.3.3)
Requirement already satisfied: jedi>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.17.2)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (3.0.19)
Requirement already satisfied: backcall in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0)
Requirement already satisfied: pygments in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (2.9.0)
Requirement already satisfied: pexpect in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.8.0)
Requirement already satisfied: decorator in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.9)
Requirement already satisfied: pickleshare in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.5)
Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Jinja2>=3.0->Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1)
Requirement already satisfied: google-auth>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.30.2)
Requirement already satisfied: requests-oauthlib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.3.0)
Requirement already satisfied: importlib-resources in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Pint<0.17.0,>=0.16.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.4.0)
Requirement already satisfied: python-engineio>=4.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (4.3.0)
Requirement already satisfied: bidict>=0.21.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (0.21.4)
Requirement already satisfied: pyasn1>=0.1.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from rsa<5.0.0,>=4.7->sigopt->neural-compressor>=1.7->optimum[intel]) (0.4.8)
Requirement already satisfied: smmap<6,>=3.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gitdb<5,>=4.0.1->GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (4.2.2)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from jedi>=0.10->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.1)
Requirement already satisfied: ipython-genutils in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from traitlets>=4.2->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0)
Requirement already satisfied: ptyprocess>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pexpect->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.0)
Requirement already satisfied: oauthlib>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests-oauthlib->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.1)
```
---
**Cell:**
```python
from datasets import load_dataset, load_metric
```
OR
```python
import datasets
```
**Traceback:**
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-7-34fb7ba3338d> in <module>
----> 1 from datasets import load_dataset, load_metric
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/__init__.py in <module>
32 )
33
---> 34 from .arrow_dataset import Dataset, concatenate_datasets
35 from .arrow_reader import ArrowReader, ReadInstruction
36 from .arrow_writer import ArrowWriter
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_dataset.py in <module>
59 from . import config, utils
60 from .arrow_reader import ArrowReader
---> 61 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
62 from .features import ClassLabel, Features, FeatureType, Sequence, Value, _ArrayXD, pandas_types_mapper
63 from .filesystems import extract_path_from_uri, is_remote_filesystem
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_writer.py in <module>
26
27 from . import config, utils
---> 28 from .features import (
29 Features,
30 ImageExtensionType,
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/__init__.py in <module>
1 # flake8: noqa
----> 2 from .audio import Audio
3 from .features import *
4 from .features import (
5 _ArrayXD,
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/audio.py in <module>
5 import pyarrow as pa
6
----> 7 from ..utils.streaming_download_manager import xopen
8
9
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/utils/streaming_download_manager.py in <module>
16
17 from .. import config
---> 18 from ..filesystems import COMPRESSION_FILESYSTEMS
19 from .download_manager import DownloadConfig, map_nested
20 from .file_utils import (
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/__init__.py in <module>
11
12 if _has_s3fs:
---> 13 from .s3filesystem import S3FileSystem # noqa: F401
14
15 COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py in <module>
----> 1 import s3fs
2
3
4 class S3FileSystem(s3fs.S3FileSystem):
5 """
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/__init__.py in <module>
----> 1 from .core import S3FileSystem, S3File
2 from .mapping import S3Map
3
4 from ._version import get_versions
5
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/core.py in <module>
12 from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper
13
---> 14 import aiobotocore
15 import botocore
16 import aiobotocore.session
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/__init__.py in <module>
----> 1 from .session import get_session, AioSession
2
3 __all__ = ['get_session', 'AioSession']
4 __version__ = '1.3.0'
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/session.py in <module>
4 from botocore import retryhandler, translate
5 from botocore.exceptions import PartialCredentialsError
----> 6 from .client import AioClientCreator, AioBaseClient
7 from .hooks import AioHierarchicalEmitter
8 from .parsers import AioResponseParserFactory
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/client.py in <module>
11 from .args import AioClientArgsCreator
12 from .utils import AioS3RegionRedirector
---> 13 from . import waiter
14
15 history_recorder = get_global_history_recorder()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/waiter.py in <module>
4 from botocore.exceptions import ClientError
5 from botocore.waiter import WaiterModel # noqa: F401, lgtm[py/unused-import]
----> 6 from botocore.waiter import Waiter, xform_name, logger, WaiterError, \
7 NormalizedOperationMethod as _NormalizedOperationMethod, is_valid_waiter_error
8 from botocore.docs.docstring import WaiterDocstring
ImportError: cannot import name 'is_valid_waiter_error'
```
Please let me know if there's anything else I can add to the post.
[1]: https://github.com/huggingface/notebooks/blob/master/examples/text_classification_quantization_inc.ipynb
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3554/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3553
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3553/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3553/events
|
https://github.com/huggingface/datasets/issues/3553
| 1,097,252,275 |
I_kwDODunzps5BZr2z
| 3,553 |
set_format("np") no longer works for Image data
|
{
"login": "cgarciae",
"id": 5862228,
"node_id": "MDQ6VXNlcjU4NjIyMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cgarciae",
"html_url": "https://github.com/cgarciae",
"followers_url": "https://api.github.com/users/cgarciae/followers",
"following_url": "https://api.github.com/users/cgarciae/following{/other_user}",
"gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions",
"organizations_url": "https://api.github.com/users/cgarciae/orgs",
"repos_url": "https://api.github.com/users/cgarciae/repos",
"events_url": "https://api.github.com/users/cgarciae/events{/privacy}",
"received_events_url": "https://api.github.com/users/cgarciae/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]",
"This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndataset = datasets.load_dataset(\"mnist\")\r\ndataset.set_format(\"jax\")\r\nX_train = dataset[\"train\"][\"image\"]\r\n```",
"Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays.\r\n\r\nHowever, this feature requires a custom transform to yield np arrays directly:\r\n```python\r\nddict = datasets.load_dataset(\"mnist\")\r\n\r\ndef pil_image_to_array(batch):\r\n return {\"image\": [np.array(img) for img in batch[\"image\"]]} # or jnp.array(img) for Jax\r\n\r\nddict.set_transform(pil_image_to_array, columns=\"image\", output_all_columns=True)\r\n```\r\n\r\n[Docs](https://huggingface.co/docs/datasets/master/process.html#format-transform) on `set_transform`.\r\n\r\nAlso, the approach proposed by @cgarciae is not the best because it loads the entire column in memory.\r\n\r\n@albertvillanova @lhoestq WDYT? The Audio and the Image feature currently don't support the TF/Jax/PT Formatters, but for the Numpy Formatter maybe it makes more sense to return np arrays (and not a dict in the case of the Audio feature or a PIL Image object in the case of the Image feature).",
"Yes I agree it should return arrays and not a PIL image (and possible an array instead of a dict for audio data).\r\nI'm currently finishing some code refactoring of the image and audio and opening a PR today. Maybe we can look into that after the refactoring",
"This has been fixed in https://github.com/huggingface/datasets/pull/5072, which is included in the latest release of `datasets`."
] | 2022-01-09T17:18:13 | 2022-10-14T12:03:55 | 2022-10-14T12:03:54 |
NONE
| null | null | null |
## Describe the bug
`dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this:
```python
dataset = load_dataset("mnist")
dataset.set_format("np")
X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array
```
but now it doesn't work: `set_format("np")` seems to have no effect, and the dataset just returns a list of PIL images instead of numpy arrays as requested.
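A possible interim workaround (an untested sketch, not from the original report) is to convert the PIL images to arrays with an explicit transform instead of relying on `set_format("np")`:
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("mnist")

def to_array(batch):
    # Convert each decoded PIL image into a numpy array on access
    batch["image"] = [np.array(img) for img in batch["image"]]
    return batch

dataset.set_transform(to_array)        # applied lazily whenever rows are read
sample = dataset["train"][0]["image"]  # now a numpy array instead of a PIL image
```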
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3553/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3550
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3550/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3550/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3550/events
|
https://github.com/huggingface/datasets/issues/3550
| 1,096,522,377 |
I_kwDODunzps5BW5qJ
| 3,550 |
Bug in `openbookqa` dataset
|
{
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false | null |
[] | null |
[
"Closed by:\r\n- #4259"
] | 2022-01-07T17:32:57 | 2022-05-04T06:33:00 | 2022-05-04T06:32:19 |
CONTRIBUTOR
| null | null | null |
## Describe the bug
Dataset entries contain an error: the `label` field repeats the answer texts instead of the letter labels.
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> obqa = load_dataset('openbookqa', 'main')
>>> obqa['train'][0]
```
## Expected results
```python
{'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['A', 'B', 'C', 'D']}, 'answerKey': 'D'}
```
## Actual results
```python
{'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting']}, 'answerKey': 'D'}
```
The bug is present in all configs and all splits.
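Until the script is fixed, a possible workaround sketch (the `fix_labels` helper is hypothetical, not part of the dataset script) is to rebuild the letter labels with `map`:
```python
from datasets import load_dataset

obqa = load_dataset("openbookqa", "main")

def fix_labels(example):
    # Replace the duplicated answer texts with the expected letter labels.
    example["choices"]["label"] = ["A", "B", "C", "D"][: len(example["choices"]["text"])]
    return example

obqa = obqa.map(fix_labels)
```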
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.4.0-1057-aws-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3550/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3548
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3548/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3548/events
|
https://github.com/huggingface/datasets/issues/3548
| 1,096,409,512 |
I_kwDODunzps5BWeGo
| 3,548 |
Specify the feature types of a dataset on the Hub without needing a dataset script
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false |
{
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13\r\n\r\nThis should be probably be documented, though. "
] | 2022-01-07T15:17:06 | 2022-01-20T14:48:38 | 2022-01-20T14:48:38 |
MEMBER
| null | null | null |
**Is your feature request related to a problem? Please describe.**
Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio.
**Describe the solution you'd like**
I'd like to be able to specify the types of the columns, so that when loading the dataset I directly get the feature types I want.
The feature types could be read from the `dataset_infos.json`, for example.
**Describe alternatives you've considered**
Creating a dataset script to specify the features, but that seems complicated for such a simple thing.
cc @abidlabs
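For comparison, the current user-side workaround is to cast the column after loading, e.g. (a sketch; the file name and column name are assumptions):
```python
from datasets import load_dataset, Audio

ds = load_dataset("csv", data_files="data.csv")
# Cast the string column of file paths to the Audio feature type after loading.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```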
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3548/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3547
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3547/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3547/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3547/events
|
https://github.com/huggingface/datasets/issues/3547
| 1,096,405,515 |
I_kwDODunzps5BWdIL
| 3,547 |
Datasets created with `push_to_hub` can't be accessed in offline mode
|
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it",
"Hi, I'm having the same issue. Is there any update on this?",
"We haven't had a chance to fix this yet. If someone would like to give it a try I'd be happy to give some guidance",
"@lhoestq Do you have an idea of what changes need to be made to `CachedDatasetModuleFactory`? I would be willing to take a crack at it. Currently unable to train with datasets I have `push_to_hub` on a cluster whose compute nodes are not connected to the internet.\r\n\r\nIt looks like it might be this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L994\r\n\r\nWhich wouldn't pick up the stuff saved under `\"datasets/allenai___parquet/*\"`. Additionally, the datasets saved under `\"datasets/allenai___parquet/*\"` appear to have hashes in their name, e.g. `\"datasets/allenai___parquet/my_dataset-def9ee5552a1043e\"`. This would not be detected by `CachedDatasetModuleFactory`, which currently looks for subdirectories\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L995-L999",
"`importable_directory_path` is used to find a **dataset script** that was previously downloaded and cached from the Hub\r\n\r\nHowever in your case there's no dataset script on the Hub, only parquet files. So the logic must be extended for this case.\r\n\r\nIn particular I think you can add a new logic in the case where `hashes is None` (i.e. if there's no dataset script associated to the dataset in the cache).\r\n\r\nIn this case you can check directly in the in the datasets cache for a directory named `<namespace>__parquet` and a subdirectory named `<config_id>`. The config_id must match `{self.name.replace(\"/\", \"--\")}-*`. \r\n\r\nIn your case those two directories correspond to `allenai___parquet` and then `allenai--my_dataset-def9ee5552a1043e`\r\n\r\nThen you can find the most recent version of the dataset in subdirectories (e.g. sorting using the last modified time of the `dataset_info.json` file).\r\n\r\nFinally, we will need return the module that is used to load the dataset from the cache. It is the same module than the one that would have been normally used if you had an internet connection.\r\n\r\nAt that point you can ping me, because we will need to pass all this:\r\n- `module_path = _PACKAGED_DATASETS_MODULES[\"parquet\"][0]`\r\n- `hash` it corresponds the name of the directory that contains the .arrow file, inside `<namespace>__parquet/<config_id>`\r\n- ` builder_kwargs = {\"hash\": hash, \"repo_id\": self.name, \"config_id\": config_id}`\r\nand currently `config_id` is not a valid argument for a `DatasetBuilder`\r\n\r\nI think in the future we want to change this caching logic completely, since I don't find it super easy to play with.",
"Hi! Is there a workaround for the time being?\r\nLike passing `data_dir` or something like that?\r\n\r\nI would like to use [this diffuser example](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) on my cluster whose nodes are not connected to the internet. I have downloaded the dataset online form the login node.",
"Hi ! Yes you can save your dataset locally with `my_dataset.save_to_disk(\"path/to/local\")` and reload it later with `load_from_disk(\"path/to/local\")`\r\n\r\n(removing myself from assignees since I'm currently not working on this right now)",
"Still not fixed? ......",
"Any idea @lhoestq who to tag to fix this ? This is a very annoying bug, which is becoming more and more present since the push_to_hub API is getting used more ?",
"Perhaps @mariosasko ? Thanks a lot for the great work on the lib !",
"It should be easier to implement now that we improved the caching of datasets from `push_to_hub`: each dataset has its own directory in the cache.\r\n\r\nThe cache structure has been improved in https://github.com/huggingface/datasets/pull/5331. Now the cache structure is `\"{namespace__}<dataset_name>/<config_name>/<version>/<hash>/\"` which contains the arrow files `\"<dataset_name>-<split>.arrow\"` and `\"dataset_info.json\"`. \r\n\r\nThe idea is to extend `CachedDatasetModuleFactory` to also check if this directory exists in the cache (in addition to the already existing cache check) and return the requested dataset module. The module name can be found in the JSON file in the `builder_name` field.",
"Any progress?",
"I started a PR to draft the logic to reload datasets from the cache fi they were created with push_to_hub: https://github.com/huggingface/datasets/pull/6459\r\n\r\nFeel free to try it out",
"It seems that this does not support dataset with uppercase name ",
"Which version of `datasets` are you using ? This issue has been fixed with `datasets` 2.16",
"I can confirm that this problem is still happening with `datasets` 2.17.0, installed from pip",
"Can you share a code or a dataset that reproduces the issue ? It seems to work fine on my side.",
"Yeah, \r\n```python\r\ndataset = load_dataset(\"roneneldan/TinyStories\")\r\n```\r\nI tried it with:\r\n```python\r\ndataset = load_dataset(\"roneneldan/tinystories\")\r\n```\r\nand it worked.\r\n\r\n> It seems that this does not support dataset with uppercase name\r\n\r\n@fecet was right, but if you just put the name lowercase, it works. "
] | 2022-01-07T15:12:25 | 2024-02-15T17:41:24 | 2023-12-21T15:13:12 |
CONTRIBUTOR
| null | null | null |
## Describe the bug
In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`.
## Steps to reproduce the bug
in Python:
```
import datasets
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
in bash:
```
export HF_DATASETS_OFFLINE=1
```
in Python:
```
import datasets
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
## Expected results
`datasets` should find the previously-cached dataset.
## Actual results
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'teven/matched_passages_wikidata': Offline mode is enabled
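In the meantime, a workaround sketch based on the suggestion in the comments (the local path is a placeholder):
```python
from datasets import load_dataset, load_from_disk

# With an internet connection: download once and save a local copy.
mpwiki = load_dataset("teven/matched_passages_wikidata")
mpwiki.save_to_disk("matched_passages_wikidata_local")

# Later, fully offline: reload from the local copy instead of the Hub.
mpwiki = load_from_disk("matched_passages_wikidata_local")
```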
## Environment info
- `datasets` version: 1.16.2.dev0
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 3.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3547/reactions",
"total_count": 7,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/datasets/issues/3547/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3544
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3544/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3544/events
|
https://github.com/huggingface/datasets/issues/3544
| 1,095,784,681 |
I_kwDODunzps5BUFjp
| 3,544 |
Ability to split a dataset in multiple files.
|
{
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[] | 2022-01-06T23:02:25 | 2022-01-06T23:02:25 | null |
CONTRIBUTOR
| null | null | null |
Hello,
**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset.
I understand that we shouldn't overwrite an Arrow file, as this could cause segfaults and other issues. Before 1.16, I was able to overwrite the dataset, and that would work most of the time with some retries.
**Describe the solution you'd like**
I was thinking that if we could append to `Dataset._data_files`, the workers would get the new columns when they reload the Dataset.
**Describe alternatives you've considered**
I currently need to
1. Save multiple "versions" of the dataset and load the latest.
2. Try working with cache files to get the latest columns.
**Additional context**
I think this would be a great addition to HFDataset, as Parquet supports multi-file input out of the box!
I can make a PR myself with some pointers as needed :)
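For context, a rough sketch of the multi-file behaviour already available through the Parquet loader (the shard file names are made up for illustration):
```python
from datasets import load_dataset

# Several Parquet files are combined into a single split out of the box.
ds = load_dataset(
    "parquet",
    data_files={"train": ["shard-0000.parquet", "shard-0001.parquet"]},
)["train"]
```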
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3544/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3543/events
|
https://github.com/huggingface/datasets/issues/3543
| 1,095,226,438 |
I_kwDODunzps5BR9RG
| 3,543 |
Allow loading community metrics from the hub, just like datasets
|
{
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))",
"This is a great solution in the meantime, thanks!",
"Here's the code I used, in case it can be of help to someone else:\r\n```python\r\nimport os, shutil\r\nfrom huggingface_hub import hf_hub_download\r\ndef download_metric(repo_id, file_path):\r\n # repo_id: for models \"username/model_name\", for datasets \"datasets/username/model_name\"\r\n local_metric_path = hf_hub_download(repo_id=repo_id, filename=file_path)\r\n updated_local_metric_path = (os.path.dirname(local_metric_path) + os.path.basename(local_metric_path).replace(\".\", \"_\") + \".py\")\r\n shutil.copy(local_metric_path, updated_local_metric_path)\r\n return updated_local_metric_path\r\n\r\nmetric = load_metric(download_metric(REPO_ID, FILE_PATH))\r\n```",
"Solved with https://github.com/huggingface/evaluate 🤗 ",
"Yay!! cc @lvwerra @sashavor @douwekiela \r\n\r\nPlease share your feedback @eladsegal =)"
] | 2022-01-06T11:26:26 | 2022-05-31T20:59:14 | 2022-05-31T20:53:37 |
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`.
However, there is no option to do it with the metric uploaded to the hub.
This means that if I want to allow other users to use it, they must download it first, which makes the usage less smooth.
**Describe the solution you'd like**
Load metrics from the hub just like datasets are loaded.
In order not to break anything, the convention could be to put the metric file in a "metrics" folder on the Hub.
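For reference, a sketch of how this looks with the `evaluate` library mentioned in the comments (assuming the community metric is hosted as a Space under the user's namespace; the repo id is made up):
```python
import evaluate

# Community metrics hosted on the Hub can be loaded by repo id.
metric = evaluate.load("username/my_metric")
result = metric.compute(predictions=[1, 0], references=[1, 1])
```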
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3543/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3543/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3541
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3541/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3541/events
|
https://github.com/huggingface/datasets/issues/3541
| 1,095,033,828 |
I_kwDODunzps5BROPk
| 3,541 |
Support 7-zip compressed data files
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"This should also resolve: https://github.com/huggingface/datasets/issues/3185."
] | 2022-01-06T07:11:03 | 2022-07-19T10:18:30 | null |
MEMBER
| null | null | null |
**Is your feature request related to a problem? Please describe.**
We should support 7-zip compressed data files, both in streaming and non-streaming modes:
- [x] in `extract`:
  - #4672
- [ ] in `iter_archive` (for streaming mode)
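For the non-streaming case, a rough sketch of what extraction could rely on, assuming the optional `py7zr` dependency (file names are placeholders):
```python
import py7zr

# Extract a 7-zip archive to a target directory (non-streaming case).
with py7zr.SevenZipFile("data.7z", mode="r") as archive:
    archive.extractall(path="extracted_data")
```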
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3541/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3540
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3540/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3540/events
|
https://github.com/huggingface/datasets/issues/3540
| 1,094,900,336 |
I_kwDODunzps5BQtpw
| 3,540 |
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
|
{
"login": "CindyTing",
"id": 35062414,
"node_id": "MDQ6VXNlcjM1MDYyNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CindyTing",
"html_url": "https://github.com/CindyTing",
"followers_url": "https://api.github.com/users/CindyTing/followers",
"following_url": "https://api.github.com/users/CindyTing/following{/other_user}",
"gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions",
"organizations_url": "https://api.github.com/users/CindyTing/orgs",
"repos_url": "https://api.github.com/users/CindyTing/repos",
"events_url": "https://api.github.com/users/CindyTing/events{/privacy}",
"received_events_url": "https://api.github.com/users/CindyTing/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[] | 2022-01-06T02:13:42 | 2022-01-06T02:17:39 | null |
NONE
| null | null | null |
Hi,
I use `torch.utils.data.Dataset` to define my own data, but I later need the `map` function of `datasets.arrow_dataset.Dataset`, so I would like to convert a `torch.utils.data.Dataset` into a `datasets.arrow_dataset.Dataset`.
Here is an example.
```
from torch.utils.data import Dataset
from transformers import AutoTokenizer  # needed for the type hint below
from datasets.arrow_dataset import Dataset as HFDataset

class ADataset(Dataset):
    def __init__(self, data):
        super().__init__()
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

class MDataset():
    def __init__(self, tokenizer: AutoTokenizer, data_args, training_args):
        self.train_dataset = ADataset(data_args)
        self.tokenizer = tokenizer
        self.data_args = data_args
        # This is the call that fails: ADataset is a torch Dataset and has no `map` method.
        self.train_dataset = self.train_dataset.map(
            self.process_function,
            batched=True,
            remove_columns=column_names,  # defined elsewhere
            load_from_cache_file=True,
            desc="Running tokenizer on train dataset",
        )

    def process_function(self, examples):
        sentences = [" ".join(sample[0][3]) for sample in examples]
        tokenized = self.tokenizer(
            sentences,
            max_length=self.max_seq_len,  # set elsewhere
            padding=self.padding,         # set elsewhere
            truncation=True)
        return tokenized
```
But it raises an error: `AttributeError: 'ADataset' object has no attribute 'map'`.
So how can I convert a `torch.utils.data.Dataset` to a `datasets.arrow_dataset.Dataset`?
Thanks in advance!
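For anyone hitting the same question, a minimal conversion sketch (assuming each item of the torch dataset can be expressed as a column value; the column name is made up):
```python
from datasets import Dataset as HFDataset

torch_dataset = ADataset(data_args)

# Materialize the torch dataset into columnar form, then build an Arrow-backed dataset.
hf_dataset = HFDataset.from_dict(
    {"sample": [torch_dataset[i] for i in range(len(torch_dataset))]}
)
# `map` is now available on the converted dataset.
hf_dataset = hf_dataset.map(lambda example: example)
```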
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3540/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3533
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3533/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3533/events
|
https://github.com/huggingface/datasets/issues/3533
| 1,094,156,147 |
I_kwDODunzps5BN39z
| 3,533 |
Task search function on hub not working correctly
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon",
"hmm actually i have no recollection of why I said that",
"Because it has dots in some YAML keys, it can't be parsed and indexed by the back-end"
] | 2022-01-05T09:36:30 | 2022-05-12T14:45:57 | null |
CONTRIBUTOR
| null | null | null |
When I look at all datasets of the category `speech-processing`, *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , the following dataset doesn't show up for some reason:
- https://huggingface.co/datasets/speech_commands
even though its task tags seem correct:
https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3533/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3531
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3531/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3531/events
|
https://github.com/huggingface/datasets/issues/3531
| 1,094,033,280 |
I_kwDODunzps5BNZ-A
| 3,531 |
Give clearer instructions to add the YAML tags
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2022-01-05T06:44:20 | 2022-01-17T15:54:36 | 2022-01-17T15:54:36 |
MEMBER
| null | null | null |
## Describe the bug
As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32
Maybe we should give clearer instructions/hints in the README template.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3531/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3522
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3522/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3522/events
|
https://github.com/huggingface/datasets/issues/3522
| 1,093,807,586 |
I_kwDODunzps5BMi3i
| 3,522 |
wmt19 is broken (zh-en)
|
{
"login": "AjayP13",
"id": 5404177,
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjayP13",
"html_url": "https://github.com/AjayP13",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false | null |
[] | null |
[
"This issue is not reproducible."
] | 2022-01-04T22:33:45 | 2022-05-06T16:27:37 | 2022-05-06T16:27:37 |
NONE
| null | null | null |
## Describe the bug
Loading the `zh-en` config of `wmt19` fails because one of the source download URLs cannot be reached.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wmt19", 'zh-en')
```
## Expected results
The dataset should download.
## Actual results
`ConnectionError: Couldn't reach ftp://cwmt-wmt:[email protected]/parallel/casia2015.zip`
## Environment info
- `datasets` version: 1.15.1
- Platform: Linux
- Python version: 3.8
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3522/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3518
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3518/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3518/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3518/events
|
https://github.com/huggingface/datasets/issues/3518
| 1,093,063,455 |
I_kwDODunzps5BJtMf
| 3,518 |
Add PubMed Central Open Access dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"In the framework of BigScience:\r\n- bigscience-workshop/data_tooling#121\r\n\r\nI have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access\r\n\r\nHowever, I was wondering that it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc`\r\nThis way, we could add other datasets I'm also working on: Author Manuscript Dataset, Historical OCR Dataset, LitArch Open Access Subset.\r\n\r\nWhat do you think? @lhoestq @mariosasko ",
"Why not ! Having them under such namespaces would also help people searching for this kind of datasets.\r\nWe can also invite people from pubmed at one point",
"DONE: https://huggingface.co/datasets/pmc/open_access"
] | 2022-01-04T06:54:35 | 2022-01-17T15:25:57 | 2022-01-17T15:25:57 |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** PubMed Central Open Access
- **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3518/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3515
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3515/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3515/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3515/events
|
https://github.com/huggingface/datasets/issues/3515
| 1,092,624,695 |
I_kwDODunzps5BICE3
| 3,515 |
`ExpectedMoreDownloadedFiles` for `evidence_infer_treatment`
|
{
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting @VictorSanh.\r\n\r\nI'm looking at it... "
] | 2022-01-03T15:58:38 | 2022-02-14T13:21:43 | 2022-02-14T13:21:43 |
MEMBER
| null | null | null |
## Describe the bug
I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine, but the second one (`2.0`) returns an error: it downloads a file but crashes during checksum verification.
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("evidence_infer_treatment", "2.0")
Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset
use_auth_token=use_auth_token,
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 664, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 33, in verify_checksums
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'http://evidence-inference.ebm-nlp.com/v2.0.tar.gz'}
```
I did try to pass the argument `ignore_verifications=True`, but ran into an error when trying to build the dataset:
```python
>>> load_dataset("evidence_infer_treatment", "2.0", ignore_verifications=True, download_mode="force_redownload")
Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24...
Downloading: 164MB [00:23, 6.98MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset
use_auth_token=use_auth_token,
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 681, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 1080, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 1032, in encode_example
return encode_nested_example(self, example)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in encode_nested_example
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in encode_nested_example
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in <listcomp>
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 828, in encode_nested_example
for k, dict_tuples in utils.zip_dict(schema.feature, *obj):
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: ''
```
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3515/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3512
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3512/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3512/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3512/events
|
https://github.com/huggingface/datasets/issues/3512
| 1,092,359,973 |
I_kwDODunzps5BHBcl
| 3,512 |
No Data format found
|
{
"login": "shazzad47",
"id": 57741378,
"node_id": "MDQ6VXNlcjU3NzQxMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57741378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shazzad47",
"html_url": "https://github.com/shazzad47",
"followers_url": "https://api.github.com/users/shazzad47/followers",
"following_url": "https://api.github.com/users/shazzad47/following{/other_user}",
"gists_url": "https://api.github.com/users/shazzad47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shazzad47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shazzad47/subscriptions",
"organizations_url": "https://api.github.com/users/shazzad47/orgs",
"repos_url": "https://api.github.com/users/shazzad47/repos",
"events_url": "https://api.github.com/users/shazzad47/events{/privacy}",
"received_events_url": "https://api.github.com/users/shazzad47/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[
"Hi, which dataset is giving you an error?"
] | 2022-01-03T09:41:11 | 2022-01-17T13:26:05 | 2022-01-17T13:26:05 |
NONE
| null | null | null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3512/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3511
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3511/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3511/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3511/events
|
https://github.com/huggingface/datasets/issues/3511
| 1,092,170,411 |
I_kwDODunzps5BGTKr
| 3,511 |
Dataset
|
{
"login": "MIKURI0114",
"id": 92849978,
"node_id": "U_kgDOBYjHOg",
"avatar_url": "https://avatars.githubusercontent.com/u/92849978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MIKURI0114",
"html_url": "https://github.com/MIKURI0114",
"followers_url": "https://api.github.com/users/MIKURI0114/followers",
"following_url": "https://api.github.com/users/MIKURI0114/following{/other_user}",
"gists_url": "https://api.github.com/users/MIKURI0114/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MIKURI0114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MIKURI0114/subscriptions",
"organizations_url": "https://api.github.com/users/MIKURI0114/orgs",
"repos_url": "https://api.github.com/users/MIKURI0114/repos",
"events_url": "https://api.github.com/users/MIKURI0114/events{/privacy}",
"received_events_url": "https://api.github.com/users/MIKURI0114/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[
"Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks",
"The dataset viewer was down tonight. It works again."
] | 2022-01-03T02:03:23 | 2022-01-03T08:41:26 | 2022-01-03T08:23:07 |
NONE
| null | null | null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3511/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3510
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3510/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3510/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3510/events
|
https://github.com/huggingface/datasets/issues/3510
| 1,091,997,004 |
I_kwDODunzps5BFo1M
| 3,510 |
`wiki_dpr` details for Open Domain Question Answering tasks
|
{
"login": "pk1130",
"id": 40918514,
"node_id": "MDQ6VXNlcjQwOTE4NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/40918514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pk1130",
"html_url": "https://github.com/pk1130",
"followers_url": "https://api.github.com/users/pk1130/followers",
"following_url": "https://api.github.com/users/pk1130/following{/other_user}",
"gists_url": "https://api.github.com/users/pk1130/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pk1130/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pk1130/subscriptions",
"organizations_url": "https://api.github.com/users/pk1130/orgs",
"repos_url": "https://api.github.com/users/pk1130/repos",
"events_url": "https://api.github.com/users/pk1130/events{/privacy}",
"received_events_url": "https://api.github.com/users/pk1130/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector).",
"Closed by:\r\n- #3534"
] | 2022-01-02T11:04:01 | 2022-02-17T13:46:20 | 2022-02-17T13:46:20 |
NONE
| null | null | null |
Hey guys!
Thanks for creating the `wiki_dpr` dataset!
I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regarding the same! Thanks a ton!
P.S.: (If one of @thomwolf @lewtun @lhoestq could respond, that would be even better since they have the first-hand details of the dataset. If anyone else has those, please reach out! Thanks!)
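For anyone with the same question, the per-passage structure described in the comments can be checked directly from the loaded dataset. A minimal sketch (the config name `psgs_w100.nq.no_index` is an assumption here, and the full download is large, so treat it as illustrative only):
```python
from datasets import load_dataset

# Config name is assumed; wiki_dpr ships several "psgs_w100.*" configs and
# the full download is tens of GB, so this is only a sketch.
ds = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train")

print(ds.features)               # expected: id, text (~100-word passage), title, embeddings
print(len(ds[0]["embeddings"]))  # expected: 768, matching the DPR embedding size
```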
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3510/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3507
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3507/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3507/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3507/events
|
https://github.com/huggingface/datasets/issues/3507
| 1,091,214,808 |
I_kwDODunzps5BCp3Y
| 3,507 |
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
closed
| false | null |
[] | null |
[
"IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nI don't really have an opinion regarding the JSON metadata as I don't know enough about it.\r\n\r\n",
"I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) \r\n\r\n(Ultimately the CI can run on \"HuggingFace Actions\" instead of on GitHub)",
"The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags:\r\n- Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow)\r\n- Size of each split in MB and number of examples. Again this can be moved to the dataset tags\r\n- Feature type of each column\r\n- supported task templates (it defines what columns correspond to the features and labels for example)\r\n\r\nBut it also has this, which I'm not sure if it should be in the tags or not:\r\n- Checksums of the downloaded files for integrity verifications\r\n\r\nSo ultimately this file could probably be deprecated in favor of having the infos in the tags.\r\n\r\n> Also note that for generating both (dataset_infos.json file and dummy data), the entire dataset needs being downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).\r\n\r\nTo get the exact number of examples and size in MB of the dataset, one needs to download and generate it completely. IMO these infos are very important when someone considers using a dataset. Though using streaming we could do some extrapolation to have approximate values instead.\r\n\r\nFor the integrity verifications we also need the number of examples and the checksums of the downloaded files, so it requires the dataset to be fully downloaded once. This can be optional though.\r\n\r\n> IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work)\r\n\r\nI agree with this. Usually if a dataset works in streaming mode, then it works in non-streaming mode (the other way around is not true though).\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nYes indeed, or at least make sure that it was tested on the true data.",
"(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)",
"I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags.\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. <\r\n> > Yes indeed, or at least make sure that it was tested on the true data.\r\n\r\nI like the idea of testing streaming and falling back to the dummy data test if streaming does not work. Generating dummy data can be very tedious, so this would be a nice incentive for the contributors to make their datasets streamable. ",
"CC: @severo ",
"About dummy data, please see e.g. this PR: https://github.com/huggingface/datasets/pull/3692/commits/62368daac0672041524a471386d5e78005cf357a\r\n- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs/60437\r\n```\r\nNo such file or directory: '.../dummy_data/pubmed22n0002.xml.gz'\r\n```\r\n- it needs dummy data for all the 1114 files: \r\n `_URLs = [f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1115)]`\r\n- this confirms me that it never passed the test: these dummy data files were not present before my PR\r\n- therefore, is it really useful the data test if we just ignore it when it does not pass?\r\n\r\nIn relation with JSON metadata, I'm generating the file for `pubmed` (see above) in a GCP instance: it's running for more than 3 hours and only 9 million examples generated so far (before my PR, it had 32 million, now it has more).",
"I mention in https://github.com/huggingface/datasets-server/wiki/Preliminary-design that the future \"datasets server\" could be in charge of generating both the dummy data and the dataset-info.json file if required (or their equivalent).",
"Hi ! I think dummy data generation is out of scope for the datasets server, since it's about generating the original data files.\r\n\r\nThat would be amazing to have it generate the dataset_infos.json though !",
"From some offline discussion with @mariosasko and especially for vision datasets, we'll probably not require dummy data anymore and use streaming instead :) This will make adding a new dataset much easier.\r\nThis should also make sure that streaming works as expected directly in the CI, without having to check the dataset viewer once the PR is merged",
"OK. I removed the \"dummy data\" item from the services of the dataset server",
"It seems that migration from dataset-info.json to dataset card YAML has been acted.\r\n\r\nProbably it's a good idea, but I didn't find the pros and cons of this decision, so I put some I could think of:\r\n\r\npros:\r\n- only one file to parse, share, sync\r\n- it gives a hint to the users that if you write your dataset card, you should also specify the metadata\r\n\r\ncons:\r\n- the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n- YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n- two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n- [low priority] besides the JSON file, we might want to support yaml or toml file if the user prefers (as [prettier](https://prettier.io/docs/en/configuration.html) and others do for their config files, for example). Inside the md, I understand that only YAML is allowed",
"> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nNote that we could simply not have the checksums in the YAML metadata at all, or maybe at one point have a pointer to another file instead.\r\n\r\nWe can also choose to hide (collapse) certain sections in the YAML by default when we open the dataset card editor.\r\n\r\n> two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n\r\nI think it's fine for now. Later if we really end up with too many YAML sections we can see if we need to tweak the API endpoints or the `datasets`/`huggingface_hub` tools\r\n\r\n> YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n\r\nRegarding YAML vs JSON: I think YAML is easier to write by hand, and I also think that it's better for consistency - i.e. we're using more and more YAML to configure models/datasets/spaces",
"I didn't know the decision was already taken. Good to know. 😅",
"> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nWe can definitely work on this on the hub side to make the UX better",
"Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets (see [here](https://www.tensorflow.org/datasets/community_catalog/huggingface)).\r\n\r\nFYI I noticed today that they are using the exported dataset_infos.json files from github to get the metadata (see their code [here](https://github.com/tensorflow/datasets/blob/a482f01c036a10496f5e22e69a2ef81b707cc418/tensorflow_datasets/scripts/documentation/build_community_catalog.py#L261))",
"Metadata is now stored as YAML, and dummy data is deprecated, so I think we can close this issue."
] | 2021-12-30T17:04:25 | 2022-11-04T15:31:38 | 2022-11-04T15:31:37 |
MEMBER
| null | null | null |
I open this PR to have a public discussion about this topic and make a decision.
As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)?
On the other hand, the dummy data is necessary for testing (in our CI suite) that the canonical dataset loads correctly. However:
- the dataset preview feature is already an indirect test that the dataset loads correctly (it also tests it is streamable though)
- we are migrating canonical datasets to the Hub
Do we really need to continue testing them in our CI?
Also note that for generating both (the dataset_infos.json file and the dummy data), the entire dataset needs to be downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).
Feel free to ping other people for the discussion.
CC: @lhoestq @mariosasko @thomwolf @julien-c @patrickvonplaten @anton-l @LysandreJik @yjernite @nateraw
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3507/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3505
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3505/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3505/events
|
https://github.com/huggingface/datasets/issues/3505
| 1,091,150,820 |
I_kwDODunzps5BCaPk
| 3,505 |
cast_column function not working with map function in streaming mode for Audio features
|
{
"login": "ashu5644",
"id": 8268102,
"node_id": "MDQ6VXNlcjgyNjgxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashu5644",
"html_url": "https://github.com/ashu5644",
"followers_url": "https://api.github.com/users/ashu5644/followers",
"following_url": "https://api.github.com/users/ashu5644/following{/other_user}",
"gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions",
"organizations_url": "https://api.github.com/users/ashu5644/orgs",
"repos_url": "https://api.github.com/users/ashu5644/repos",
"events_url": "https://api.github.com/users/ashu5644/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashu5644/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."
] | 2021-12-30T14:52:01 | 2022-01-18T19:54:07 | 2022-01-18T19:54:07 |
NONE
| null | null | null |
## Describe the bug
I am trying to use the Audio class to load audio features from a custom dataset. I can cast the 'audio' feature into the 'Audio' format with the cast_column function, but when I use the map function I no longer get the casted 'Audio' feature, only the path of the audio file.
After the load_dataset call, the 'audio' feature is of string type. After cast_column, it is converted into the 'Audio' type. However, inside the map function the audio feature comes back as a plain string containing the file path, so I am not able to use the processor in the encode function.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor
def encode(batch, processor):
print("Audio: ",batch['audio'])
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
return batch
def print_ds(ds):
iterator = iter(ds)
for d in iterator:
print("Data: ",d)
break
processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path)
dataset = load_dataset("custom_dataset.py","train",data_files={'train':'train_path.txt'},
data_dir="data", streaming=True, split="train")
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.map(lambda x: encode(x,processor))
print("Features: ",dataset.features)
print_ds(dataset)
```
## Expected results
The map function should keep the 'Audio'-typed feature so it can be used with the processor function; instead, the processor call fails because only the file path (a string) is passed.
## Actual results
# after load_dataset call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': 'data/0116_003.wav'}
# after cast_column call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ...,
1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}}
# after map call
Features: None
Audio: data/0116_003.wav
Traceback (most recent call last):
File "demo2.py", line 36, in <module>
print_ds(dataset)
File "demo2.py", line 11, in print_ds
for d in iterator:
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "demo2.py", line 32, in <lambda>
dataset = dataset.map(lambda x: batch_encode(x,processor))
File "demo2.py", line 6, in batch_encode
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
TypeError: string indices must be integers
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
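Until this is fixed in `datasets`, a possible workaround (in line with the explanation in the comments) is to decode the audio manually inside the mapped function rather than relying on the casted `Audio` feature surviving `map`. A rough sketch, assuming the paths in `batch['audio']` point to local WAV files and that `soundfile` is installed (the `processor` is the one created in the snippet above):
```python
import soundfile as sf

def encode(batch, processor):
    # In streaming mode the Audio feature may be lost after map(), so decode
    # the file path manually instead of relying on batch["audio"]["array"].
    array, sampling_rate = sf.read(batch["audio"])
    batch["input_values"] = processor(array, sampling_rate=sampling_rate).input_values
    return batch

dataset = dataset.map(lambda x: encode(x, processor))
```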
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3505/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3504
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3504/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3504/events
|
https://github.com/huggingface/datasets/issues/3504
| 1,090,682,230 |
I_kwDODunzps5BAn12
| 3,504 |
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
|
{
"login": "ToddMorrill",
"id": 12600692,
"node_id": "MDQ6VXNlcjEyNjAwNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToddMorrill",
"html_url": "https://github.com/ToddMorrill",
"followers_url": "https://api.github.com/users/ToddMorrill/followers",
"following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}",
"gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions",
"organizations_url": "https://api.github.com/users/ToddMorrill/orgs",
"repos_url": "https://api.github.com/users/ToddMorrill/repos",
"events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}",
"received_events_url": "https://api.github.com/users/ToddMorrill/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back online asap.",
"Hi @ToddMorrill, people from the Pile team have mirrored their data in a new host server: https://mystic.the-eye.eu\r\n\r\nSee:\r\n- #3627\r\n\r\nIt should work if you update your URL.\r\n\r\nWe should also update the URL in our course material.",
"The old URL is still present in the HuggingFace course here: \r\nhttps://huggingface.co/course/chapter5/4?fw=pt\r\n\r\nI have created a PR for the Notebook here: https://github.com/huggingface/notebooks/pull/148\r\nNot sure if the HTML is in a public repo. I wasn't able to find it. ",
"Fixed the other two URLs here: \r\nhttps://github.com/mwunderlich/notebooks/pull/1",
"Both URLs are broken now\r\n`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`\r\nAnd\r\n`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`\r\n\r\n\r\n",
"I was able to find a torrent with \"The Pile\" dataset here: [The Pile An 800GB Dataset of Diverse Text for Language Modeling ](https://academictorrents.com/details/0d366035664fdf51cfbe9f733953ba325776e667)\r\n\r\nThe complete dataset is huge, so I would suggest you to download only the \"PUBMED_title_abstracts_2019_baseline.jsonl.zst\" file, which is about 7GB. You can do this by using a torrent client of your choice (I typically utilize Transmission, which is pre-installed in Ubuntu distributions).\r\n\r\n",
"@albertvillanova another issue:\r\n```\r\n15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()\r\n16 File \"/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py\", line 474, in experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights\r\n17 column_names = next(iter(dataset)).keys()\r\n18 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1353, in __iter__\r\n19 for key, example in ex_iterable:\r\n20 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 207, in __iter__\r\n21 yield from self.generate_examples_fn(**self.kwargs)\r\n22 File \"/lfs/ampere1/0/brando9/.cache/huggingface/modules/datasets_modules/datasets/EleutherAI--pile/ebea56d358e91cf4d37b0fde361d563bed1472fbd8221a21b38fc8bb4ba554fb/pile.py\", line 236, in _generate_examples\r\n23 with zstd.open(open(files[subset], \"rb\"), \"rt\", encoding=\"utf-8\") as f:\r\n24 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/streaming.py\", line 74, in wrapper\r\n25 return function(*args, download_config=download_config, **kwargs)\r\n26 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 496, in xopen\r\n27 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n28 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 134, in open\r\n29 return self.__enter__()\r\n30 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 102, in __enter__\r\n31 f = self.fs.open(self.path, mode=mode)\r\n32 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/spec.py\", line 1241, in open\r\n33 f = self._open(\r\n34 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 356, in _open\r\n35 size = size or self.info(path, **kwargs)[\"size\"]\r\n36 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 121, in wrapper\r\n37 return sync(self.loop, func, *args, **kwargs)\r\n38 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 106, in sync\r\n39 raise return_result\r\n40 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 61, in _runner\r\n41 result[0] = await coro\r\n42 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 430, in _info\r\n43 raise FileNotFoundError(url) from exc\r\n44 FileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst\r\n```\r\n\r\nany suggestions?",
"related: https://github.com/huggingface/datasets/issues/6144",
"this seems to work but it's rather annoying.\r\n\r\nSummary of how to make it work:\r\n1. get urls to parquet files into a list\r\n2. load list to load_dataset via `load_dataset('parquet', data_files=urls)` (note api names to hf are really confusing sometimes)\r\n3. then it should work, print a batch of text.\r\n\r\npresudo code\r\n```python\r\nurls_hacker_news = [\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00000-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00001-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00002-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00003-of-00004.parquet\"\r\n]\r\n\r\n...\r\n\r\n\r\n # streaming = False\r\n from diversity.pile_subset_urls import urls_hacker_news\r\n path, name, data_files = 'parquet', 'hacker_news', urls_hacker_news\r\n # not changing\r\n batch_size = 512\r\n today = datetime.datetime.now().strftime('%Y-m%m-d%d-t%Hh_%Mm_%Ss')\r\n run_name = f'{path} div_coeff_{num_batches=} ({today=} ({name=}) {data_mixture_name=} {probabilities=})'\r\n print(f'{run_name=}')\r\n\r\n # - Init wandb\r\n debug: bool = mode == 'dryrun'\r\n run = wandb.init(mode=mode, project=\"beyond-scale\", name=run_name, save_code=True)\r\n wandb.config.update({\"num_batches\": num_batches, \"path\": path, \"name\": name, \"today\": today, 'probabilities': probabilities, 'batch_size': batch_size, 'debug': debug, 'data_mixture_name': data_mixture_name, 'streaming': streaming, 'data_files': data_files})\r\n # run.notify_on_failure() # https://community.wandb.ai/t/how-do-i-set-the-wandb-alert-programatically-for-my-current-run/4891\r\n print(f'{debug=}')\r\n print(f'{wandb.config=}')\r\n\r\n # -- Get probe network\r\n from datasets import load_dataset\r\n import torch\r\n from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\n tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n if tokenizer.pad_token_id is None:\r\n tokenizer.pad_token = tokenizer.eos_token\r\n probe_network = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n device = torch.device(f\"cuda:{0}\" if torch.cuda.is_available() else \"cpu\")\r\n probe_network = probe_network.to(device)\r\n\r\n # -- Get data set\r\n def my_load_dataset(path, name):\r\n print(f'{path=} {name=} {streaming=}')\r\n if path == 'json' or path == 'bin' or path == 'csv':\r\n print(f'{data_files_prefix+name=}')\r\n return load_dataset(path, data_files=data_files_prefix+name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n elif path == 'parquet':\r\n print(f'{data_files=}')\r\n return load_dataset(path, data_files=data_files, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n else:\r\n return load_dataset(path, name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n # - get data set for real now\r\n if isinstance(path, str):\r\n dataset = my_load_dataset(path, name)\r\n else:\r\n print('-- interleaving datasets')\r\n datasets = [my_load_dataset(path, name).with_format(\"torch\") for path, name in zip(path, name)]\r\n [print(f'{dataset.description=}') for dataset in datasets]\r\n dataset = interleave_datasets(datasets, probabilities)\r\n print(f'{dataset=}')\r\n batch = dataset.take(batch_size)\r\n print(f'{next(iter(batch))=}')\r\n column_names = next(iter(batch)).keys()\r\n 
print(f'{column_names=}')\r\n\r\n # - Prepare functions to tokenize batch\r\n def preprocess(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", max_length=128, truncation=True, return_tensors=\"pt\")\r\n remove_columns = column_names # remove all keys that are not tensors to avoid bugs in collate function in task2vec's pytorch data loader\r\n def map(batch):\r\n return batch.map(preprocess, batched=True, remove_columns=remove_columns)\r\n tokenized_batch = map(batch)\r\n print(f'{next(iter(tokenized_batch))=}')\r\n```\r\n\r\nhttps://stackoverflow.com/questions/76891189/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-th/76902681#76902681\r\n\r\nhttps://discuss.huggingface.co/t/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-the-files-are-not-available/50555/5?u=severo"
] | 2021-12-29T18:23:20 | 2023-08-14T23:28:48 | 2022-02-17T15:04:25 |
NONE
| null | null | null |
## Describe the bug
I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt).
https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
I also tried with `wget` as follows.
```
wget https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
```
## Expected results
I expect to be able to download this file.
## Actual results
Traceback
```
---------------------------------------------------------------------------
timeout Traceback (most recent call last)
/usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self)
158 try:
--> 159 conn = connection.create_connection(
160 (self._dns_host, self.port), self.timeout, **extra_kw
/usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
/usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
timeout: timed out
During handling of the above exception, another exception occurred:
ConnectTimeoutError Traceback (most recent call last)
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
664 # Make the request on the httplib connection object.
--> 665 httplib_response = self._make_request(
666 conn,
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
375 try:
--> 376 self._validate_conn(conn)
377 except (SocketTimeout, BaseSSLError) as e:
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 996 conn.connect()
997
/usr/lib/python3/dist-packages/urllib3/connection.py in connect(self)
313 # Add certificate verification
--> 314 conn = self._new_conn()
315 hostname = self.host
/usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self)
163 except SocketTimeout:
--> 164 raise ConnectTimeoutError(
165 self,
ConnectTimeoutError: (<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
/usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
438 if not chunked:
--> 439 resp = conn.urlopen(
440 method=request.method,
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
718
--> 719 retries = retries.increment(
720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
/usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
435 if new_retry.is_exhausted():
--> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause))
437
MaxRetryError: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)'))
During handling of the above exception, another exception occurred:
ConnectTimeout Traceback (most recent call last)
/tmp/ipykernel_15104/606583593.py in <module>
3 # This takes a few minutes to run, so go grab a tea or coffee while you wait :)
4 data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
----> 5 pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
6 pubmed_dataset
~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1655
1656 # Create a dataset builder
-> 1657 builder_instance = load_dataset_builder(
1658 path=path,
1659 name=name,
~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1492 download_config = download_config.copy() if download_config else DownloadConfig()
1493 download_config.use_auth_token = use_auth_token
-> 1494 dataset_module = dataset_module_factory(
1495 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1496 )
~/.local/lib/python3.8/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1116 # Try packaged
1117 if path in _PACKAGED_DATASETS_MODULES:
-> 1118 return PackagedDatasetModuleFactory(
1119 path, data_files=data_files, download_config=download_config, download_mode=download_mode
1120 ).get_module()
~/.local/lib/python3.8/site-packages/datasets/load.py in get_module(self)
773 else get_patterns_locally(str(Path().resolve()))
774 )
--> 775 data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token)
776 module_path, hash = _PACKAGED_DATASETS_MODULES[self.name]
777 builder_kwargs = {"hash": hash, "data_files": data_files}
~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
576 for key, patterns_for_key in patterns.items():
577 out[key] = (
--> 578 DataFilesList.from_local_or_remote(
579 patterns_for_key,
580 base_path=base_path,
~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
545 base_path = base_path if base_path is not None else str(Path().resolve())
546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
--> 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)
548 return cls(data_files, origin_metadata)
549
~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_origin_metadata_locally_or_by_urls(data_files, max_workers, use_auth_token)
492 data_files: List[Union[Path, Url]], max_workers=64, use_auth_token: Optional[Union[bool, str]] = None
493 ) -> Tuple[str]:
--> 494 return thread_map(
495 partial(_get_single_origin_metadata_locally_or_by_urls, use_auth_token=use_auth_token),
496 data_files,
~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in thread_map(fn, *iterables, **tqdm_kwargs)
92 """
93 from concurrent.futures import ThreadPoolExecutor
---> 94 return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
95
96
~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs)
74 map_args.update(chunksize=chunksize)
75 with PoolExecutor(**pool_kwargs) as ex:
---> 76 return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
77
78
~/.local/lib/python3.8/site-packages/tqdm/notebook.py in __iter__(self)
252 def __iter__(self):
253 try:
--> 254 for obj in super(tqdm_notebook, self).__iter__():
255 # return super(tqdm...) will not catch exception
256 yield obj
~/.local/lib/python3.8/site-packages/tqdm/std.py in __iter__(self)
1171 # (note: keep this check outside the loop for performance)
1172 if self.disable:
-> 1173 for obj in iterable:
1174 yield obj
1175 return
/usr/lib/python3.8/concurrent/futures/_base.py in result_iterator()
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield fs.pop().result()
620 else:
621 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
442 raise CancelledError()
443 elif self._state == FINISHED:
--> 444 return self.__get_result()
445 else:
446 raise TimeoutError()
/usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
387 if self._exception:
388 try:
--> 389 raise self._exception
390 finally:
391 # Break a reference cycle with the exception in self._exception
/usr/lib/python3.8/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_single_origin_metadata_locally_or_by_urls(data_file, use_auth_token)
483 if isinstance(data_file, Url):
484 data_file = str(data_file)
--> 485 return (request_etag(data_file, use_auth_token=use_auth_token),)
486 else:
487 data_file = str(data_file.resolve())
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in request_etag(url, use_auth_token)
489 def request_etag(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> Optional[str]:
490 headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token)
--> 491 response = http_head(url, headers=headers, max_retries=3)
492 response.raise_for_status()
493 etag = response.headers.get("ETag") if response.ok else None
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)
474 headers = copy.deepcopy(headers) or {}
475 headers["user-agent"] = get_datasets_user_agent(user_agent=headers.get("user-agent"))
--> 476 response = _request_with_retry(
477 method="HEAD",
478 url=url,
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
408 if tries > max_retries:
--> 409 raise err
410 else:
411 logger.info(f"{method} request to {url} timed out, retrying... [{tries/max_retries}]")
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
403 tries += 1
404 try:
--> 405 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
406 success = True
407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
/usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs)
58 # cases, and look like a memory leak in others.
59 with sessions.Session() as session:
---> 60 return session.request(method=method, url=url, **kwargs)
61
62
/usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
531 }
532 send_kwargs.update(settings)
--> 533 resp = self.send(prep, **send_kwargs)
534
535 return resp
/usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs)
644
645 # Send the request
--> 646 r = adapter.send(request, **kwargs)
647
648 # Total elapsed time of the request (approximately)
/usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
502 # TODO: Remove this in 3.0.0: see #2811
503 if not isinstance(e.reason, NewConnectionError):
--> 504 raise ConnectTimeout(e, request=request)
505
506 if isinstance(e.reason, ResponseError):
ConnectTimeout: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)'))
```
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3504/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3503
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3503/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3503/events
|
https://github.com/huggingface/datasets/issues/3503
| 1,090,472,735 |
I_kwDODunzps5A_0sf
| 3,503 |
Batched in filter throws error
|
{
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-12-29T12:01:04 | 2022-01-04T10:24:27 | 2022-01-04T10:24:27 |
CONTRIBUTOR
| null | null | null |
I hope this is really a bug; I could not find it among the open issues.
## Describe the bug
Using `batched=False` in `Dataset.filter` throws an error:
```python
TypeError: filter() got an unexpected keyword argument 'batched'
```
but in the docs it is listed as an argument.
## Steps to reproduce the bug
```python
task = "mnli"
max_length = 128
tokenizer = AutoTokenizer.from_pretrained("./pretrained_models/pretrained_models_drozd/sl250.m.gsic.titech.ac.jp:8000/21.11.17_06.30.32_roberta-base_a0057/checkpoints/smpl_400M/hf/")
dataset = load_dataset("glue", task)
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
##### tokenization_parameters
sentence1_key, sentence2_key = task_to_keys[task]
def preprocess_function(examples, max_length):
if sentence2_key is None:
return tokenizer(
examples[sentence1_key], truncation=True, max_length=max_length
)
return tokenizer(
examples[sentence1_key],
examples[sentence2_key],
truncation=False,
padding="max_length",
max_length=max_length,
)
encoded_dataset = dataset.map(
lambda x: preprocess_function(x, max_length=max_length), batched=False
)
encoded_dataset.filter(lambda x: len(x['input_ids']) <= max_length, batched=False)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1, 1.17.0
- Platform: ubuntu
- Python version: 3.8.12
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3503/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3499
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3499/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3499/events
|
https://github.com/huggingface/datasets/issues/3499
| 1,090,132,618 |
I_kwDODunzps5A-hqK
| 3,499 |
Adjusting chunk size for streaming datasets
|
{
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE `\r\n\r\nCurrently this is unfortunately done in a single thread, so it blocks the processing to download and uncompress the next block. At one point it would be nice to be able to do that in parallel !",
"Hi! Thanks for the help, I will try it :)"
] | 2021-12-28T21:17:53 | 2022-05-06T16:29:05 | 2022-05-06T16:29:05 |
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?), I hit a performance bottleneck because of the frequent decompression.
**Describe the solution you'd like**
I would appreciate a parameter in the `load_dataset` function that allows me to set the chunk size myself (to a value like 100'000 in my case). That way, I hope to improve the processing time.
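For reference, here is a minimal sketch of the workaround mentioned in the comments above — raising `fsspec`'s buffered-file block size before streaming. The 100 MiB value and the use of mc4's `"en"` config are only example assumptions:
```python
import fsspec
from datasets import load_dataset

# Raise fsspec's default buffered-file block size (normally ~5 MiB) so each
# streaming read downloads a larger chunk at once; 100 MiB is only an example.
fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE = 100 * 2**20

streamed = load_dataset("mc4", "en", split="train", streaming=True)
for document in streamed:
    pass  # filter/process documents here
```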
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3499/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3497
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3497/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3497/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3497/events
|
https://github.com/huggingface/datasets/issues/3497
| 1,090,050,148 |
I_kwDODunzps5A-Nhk
| 3,497 |
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py",
"I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py"
] | 2021-12-28T18:03:49 | 2022-01-21T13:22:27 | 2022-01-21T13:22:27 |
CONTRIBUTOR
| null | null | null |
Running:
```python
from datasets import load_dataset, DatasetDict
import datasets
from transformers import AutoFeatureExtractor
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset("common_voice", "ab", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
raw_datasets = raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
)
num_workers = 16
def prepare_dataset(batch):
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
return batch
raw_datasets.map(
prepare_dataset,
remove_columns=next(iter(raw_datasets.values())).column_names,
num_proc=16,
desc="preprocess datasets",
)
```
gives
```bash
File "/home/patrick/experiments/run_bug.py", line 25, in <module>
raw_datasets.map(
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map
{
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp>
k: dataset.map(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map
shards = [
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp>
self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard
return self.select(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices
return Dataset(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__
raise ValueError(
ValueError: External features info don't match the dataset:
Got
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
but expected something like
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
```
Versions:
```python
- `datasets` version: 1.16.2.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 6.0.1
```
and `transformers`:
```
- `transformers` version: 4.16.0.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3497/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3495
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3495/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3495/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3495/events
|
https://github.com/huggingface/datasets/issues/3495
| 1,089,983,632 |
I_kwDODunzps5A99SQ
| 3,495 |
Add VoxLingua107
|
{
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false | null |
[] | null |
[] | 2021-12-28T15:51:43 | 2021-12-28T15:51:43 | null |
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models.
- **Paper:** https://arxiv.org/abs/2011.12998
- **Data:** http://bark.phon.ioc.ee/voxlingua107/
- **Motivation:** 107 languages, totaling 6628 hours for the train split.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3495/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3491
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3491/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3491/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3491/events
|
https://github.com/huggingface/datasets/issues/3491
| 1,089,918,018 |
I_kwDODunzps5A9tRC
| 3,491 |
Update version of pib dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-12-28T14:03:58 | 2021-12-29T08:42:57 | 2021-12-29T08:42:57 |
MEMBER
| null | null | null |
On the Hub we have v0, while there exists v1.3.
Related to bigscience-workshop/data_tooling#130
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3491/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3490
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3490/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3490/events
|
https://github.com/huggingface/datasets/issues/3490
| 1,089,730,181 |
I_kwDODunzps5A8_aF
| 3,490 |
Does datasets support load text from HDFS?
|
{
"login": "dancingpipi",
"id": 20511825,
"node_id": "MDQ6VXNlcjIwNTExODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dancingpipi",
"html_url": "https://github.com/dancingpipi",
"followers_url": "https://api.github.com/users/dancingpipi/followers",
"following_url": "https://api.github.com/users/dancingpipi/following{/other_user}",
"gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions",
"organizations_url": "https://api.github.com/users/dancingpipi/orgs",
"repos_url": "https://api.github.com/users/dancingpipi/repos",
"events_url": "https://api.github.com/users/dancingpipi/events{/privacy}",
"received_events_url": "https://api.github.com/users/dancingpipi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)"
] | 2021-12-28T08:56:02 | 2022-02-14T14:00:51 | null |
NONE
| null | null | null |
The raw text data is stored on HDFS because the dataset is too large to store on my development machine, so I wonder: does datasets support reading data from HDFS?
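In the meantime, a possible workaround (a sketch only, not an official `datasets` feature) is to read the files from HDFS yourself through `fsspec`/`pyarrow` and build the dataset in memory; the host, port and path below are placeholders:
```python
import fsspec
from datasets import Dataset

# The "hdfs" protocol is backed by pyarrow; host, port and path are placeholders.
fs = fsspec.filesystem("hdfs", host="namenode", port=8020)
with fs.open("/data/corpus.txt", "rt") as f:
    lines = [line.rstrip("\n") for line in f]

dataset = Dataset.from_dict({"text": lines})
```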
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3490/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3488
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3488/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3488/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3488/events
|
https://github.com/huggingface/datasets/issues/3488
| 1,089,345,653 |
I_kwDODunzps5A7hh1
| 3,488 |
URL query parameters are set as path in the compression hop for fsspec
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] | null |
[
"I think the test passes because it simply ignore what's after `gzip://`.\r\n\r\nThe returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result.\r\n\r\nWe can decide to change this and simply have `gzip://::url`, this way we don't need to guess the filename, what do you think ?"
] | 2021-12-27T16:29:00 | 2022-01-05T15:15:25 | null |
MEMBER
| null | null | null |
## Describe the bug
There is an issue with `StreamingDownloadManager._extract`.
I don't know how the test `test_streaming_gg_drive_gzipped` passes:
For
```python
TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
urlpath = StreamingDownloadManager().download_and_extract(TEST_GG_DRIVE_GZIPPED_URL)
```
gives `urlpath`:
```python
'gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz'
```
The gzip path makes no sense: `gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz`
## Steps to reproduce the bug
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager
dl_manager = StreamingDownloadManager()
urlpath = dl_manager.extract("https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz")
print(urlpath)
```
## Expected results
The query parameters should not be set as part of the path.
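For illustration only, one way the inner filename could be derived without the query string (this is not the actual fix used by the library):
```python
import posixpath
from urllib.parse import urlparse

url = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
# Take the filename from the URL path only, dropping the query parameters
filename = posixpath.basename(urlparse(url).path)
print(f"gzip://{filename}::{url}")  # gzip://uc::https://drive.google.com/uc?export=download&id=...
```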
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3488/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3485
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3485/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3485/events
|
https://github.com/huggingface/datasets/issues/3485
| 1,089,027,581 |
I_kwDODunzps5A6T39
| 3,485 |
skip columns which cannot set to specific format when set_format
|
{
"login": "tshu-w",
"id": 13161779,
"node_id": "MDQ6VXNlcjEzMTYxNzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshu-w",
"html_url": "https://github.com/tshu-w",
"followers_url": "https://api.github.com/users/tshu-w/followers",
"following_url": "https://api.github.com/users/tshu-w/following{/other_user}",
"gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions",
"organizations_url": "https://api.github.com/users/tshu-w/orgs",
"repos_url": "https://api.github.com/users/tshu-w/repos",
"events_url": "https://api.github.com/users/tshu-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshu-w/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns",
"Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific columns I assigned."
] | 2021-12-27T07:19:55 | 2021-12-27T09:07:07 | 2021-12-27T09:07:07 |
NONE
| null | null | null |
**Is your feature request related to a problem? Please describe.**
When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns.
**Describe the solution you'd like**
Skip columns which cannot be set to the specific format in `set_format`, instead of raising an error.
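For illustration, a minimal sketch (with toy column names) of the behavior I am after; the existing `columns`/`output_all_columns` arguments get close to it:
```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2], [3, 4]], "text": ["a", "b"]})

# Only the listed columns are converted to torch tensors; the remaining
# string column is still returned as a plain Python object.
ds.set_format("torch", columns=["input_ids"], output_all_columns=True)
print(ds[0])  # {'input_ids': tensor([1, 2]), 'text': 'a'}
```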
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3485/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3484
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3484/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3484/events
|
https://github.com/huggingface/datasets/issues/3484
| 1,088,910,402 |
I_kwDODunzps5A53RC
| 3,484 |
make shape verification to use ArrayXD instead of nested lists for map
|
{
"login": "tshu-w",
"id": 13161779,
"node_id": "MDQ6VXNlcjEzMTYxNzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshu-w",
"html_url": "https://github.com/tshu-w",
"followers_url": "https://api.github.com/users/tshu-w/followers",
"following_url": "https://api.github.com/users/tshu-w/following{/other_user}",
"gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions",
"organizations_url": "https://api.github.com/users/tshu-w/orgs",
"repos_url": "https://api.github.com/users/tshu-w/repos",
"events_url": "https://api.github.com/users/tshu-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshu-w/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic."
] | 2021-12-27T02:16:02 | 2022-01-05T13:54:03 | null |
NONE
| null | null | null |
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making the shape verification in `map` use ArrayXD instead of nested lists can help users avoid unnecessary casts. I notice datasets has done something special for `input_ids` and `attention_mask`, which would also be unnecessary once this feature is added.
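For illustration, a small sketch (toy data) of the kind of typed, fixed-shape column this refers to, declared with the existing `Array2D` feature instead of plain nested lists:
```python
from datasets import Array2D, Dataset, Features, Value

# The declared shape is the information a shape verification in `map` could rely on.
features = Features({"matrix": Array2D(shape=(2, 2), dtype="int32"), "label": Value("int64")})
ds = Dataset.from_dict({"matrix": [[[1, 2], [3, 4]]], "label": [0]}, features=features)
print(ds.features["matrix"])
```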
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3484/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3480
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3480/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3480/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3480/events
|
https://github.com/huggingface/datasets/issues/3480
| 1,088,267,110 |
I_kwDODunzps5A3aNm
| 3,480 |
the compression format requested when saving a dataset in json format is not respected
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either handling directly the `compression` parameter ourselves\r\n- or refactoring to pass non-null path or buffer to pandas\r\n\r\nCC: @lhoestq",
"I was thinking if we can handle the `compression` parameter by ourselves? Compression types will be similar to what `pandas` offer. Initially, we can try this with 2-3 compression types and see how good/bad it is? Let me know if it sounds good, I can raise a PR for this next week",
"Hi ! Thanks for your help @bhavitvyamalik :)\r\nMaybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods"
] | 2021-12-24T09:23:51 | 2022-01-05T13:03:35 | 2022-01-05T13:03:35 |
CONTRIBUTOR
| null | null | null |
## Describe the bug
In the documentation of the `to_json` method, it is stated in the parameters that
> **to_json_kwargs** – Parameters to pass to pandas’s `pandas.DataFrame.to_json`.
However, when we pass for example `compression="gzip"`, the saved file is not compressed.
Would you also have expected compression to be applied? :relaxed:
## Steps to reproduce the bug
```python
my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]}
```
### Result with datasets
```python
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip")
!cat dic_with_datasets.jsonl.gz
```
output
```
{"a":1,"b":1}
{"a":2,"b":2}
{"a":3,"b":3}
```
Note: I would have expected to see binary data here
### Result with pandas
```python
import pandas as pd
df = pd.DataFrame(my_dict)
df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip")
!cat dic_with_pandas.jsonl.gz
```
output
```
4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)���
```
Note: It looks like binary data
## Expected results
I would have expected that the saved result with datasets would also be a binary file
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyArrow version: 5.0.0
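## Workaround
As a possible workaround (a sketch, assuming an export through pandas is acceptable for the dataset size), the write can be routed through `Dataset.to_pandas`, since pandas does honor the `compression` argument:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"a": [1, 2, 3], "b": [1, 2, 3]})
# pandas applies the gzip compression that datasets currently ignores
dataset.to_pandas().to_json("dic_with_workaround.jsonl.gz", lines=True, orient="records", compression="gzip")
```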
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3480/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3479
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3479/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3479/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3479/events
|
https://github.com/huggingface/datasets/issues/3479
| 1,088,232,880 |
I_kwDODunzps5A3R2w
| 3,479 |
Dataset preview is not available (I think for all Hugging Face datasets)
|
{
"login": "Abirate",
"id": 66887439,
"node_id": "MDQ6VXNlcjY2ODg3NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/66887439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abirate",
"html_url": "https://github.com/Abirate",
"followers_url": "https://api.github.com/users/Abirate/followers",
"following_url": "https://api.github.com/users/Abirate/following{/other_user}",
"gists_url": "https://api.github.com/users/Abirate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abirate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abirate/subscriptions",
"organizations_url": "https://api.github.com/users/Abirate/orgs",
"repos_url": "https://api.github.com/users/Abirate/repos",
"events_url": "https://api.github.com/users/Abirate/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abirate/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false |
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"You're right, we have an issue today with the datasets preview. We're investigating.",
"It should be fixed now. Thanks for reporting.",
"Down again. ",
"Fixed for good."
] | 2021-12-24T08:18:48 | 2021-12-24T14:27:46 | 2021-12-24T14:27:46 |
NONE
| null | null | null |
## Dataset viewer issue for '*french_book_reviews*'
**Link:** https://huggingface.co/datasets/Abirate/french_book_reviews
**short description of the issue**
For my dataset, the dataset preview is no longer functional (it used to work: The dataset had been added the day before and it was fine...)
And, after looking over the datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. (CET)).
**Am I the one who added this dataset** : Yes
**Note**: here a screenshot showing the issue

**And here for glue dataset :**

|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3479/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3479/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3475
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3475/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3475/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3475/events
|
https://github.com/huggingface/datasets/issues/3475
| 1,087,352,041 |
I_kwDODunzps5Az6zp
| 3,475 |
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
|
{
"login": "puzzler10",
"id": 17426779,
"node_id": "MDQ6VXNlcjE3NDI2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puzzler10",
"html_url": "https://github.com/puzzler10",
"followers_url": "https://api.github.com/users/puzzler10/followers",
"following_url": "https://api.github.com/users/puzzler10/following{/other_user}",
"gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions",
"organizations_url": "https://api.github.com/users/puzzler10/orgs",
"repos_url": "https://api.github.com/users/puzzler10/repos",
"events_url": "https://api.github.com/users/puzzler10/events{/privacy}",
"received_events_url": "https://api.github.com/users/puzzler10/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] | null |
[
"Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you should contact the data owners/creators instead. In the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:\r\n> If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.\r\n\r\nP.S.: Please also note that the example you gave of non-English review is in Portuguese (not Spanish). ;)",
"Maybe best to just put a quick sentence in the dataset description that highlights this? "
] | 2021-12-23T03:56:43 | 2021-12-24T00:23:03 | null |
NONE
| null | null | null |
## Describe the bug
See title. I don't think this is intentional, and they should probably be removed. If they stay, the dataset description should at least be updated to make this clear to the user.
## Steps to reproduce the bug
Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There are others too (e.g. index 2888), but those two are easy to find that way.
## Expected results
English movie reviews only.
## Actual results
Example of a Spanish movie review (4173):
> "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3475/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3473
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3473/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3473/events
|
https://github.com/huggingface/datasets/issues/3473
| 1,086,937,610 |
I_kwDODunzps5AyVoK
| 3,473 |
Iterating over a vision dataset doesn't decode the images
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] |
closed
| false | null |
[] | null |
[
"As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.",
"> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.",
"@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================",
"Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).",
"> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n",
"Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)",
"For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.",
"Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. Feel free to reopen it again if further changes of the specs should be addressed.",
"Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?"
] | 2021-12-22T15:26:32 | 2021-12-27T14:13:21 | 2021-12-23T15:21:57 |
MEMBER
| null | null | null |
## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes
first_image = next(iter(mnist))["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails
```
## Expected results
The image should be decoded, as a PIL Image
## Actual results
We get a dictionary
```
{'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None}
```
## Environment info
- `datasets` version: 1.17.1.dev0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyArrow version: 6.0.0
The bug also exists in 1.17.0
## Investigation
I think the issue is that decoding is disabled in `__iter__`:
https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661
Do you remember why it was disabled in the first place @albertvillanova ?
Also cc @mariosasko @NielsRogge
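For reference, a minimal sketch of the behaviour described above (assuming `datasets` 1.17.x, where decoding only happens in `__getitem__`): positional indexing yields decoded PIL images, while plain iteration returns the raw bytes dict.
```python
from datasets import load_dataset
import PIL.Image

mnist = load_dataset("mnist", split="train")

# __getitem__ decodes, so positional indexing yields PIL images
for i in range(3):
    assert isinstance(mnist[i]["image"], PIL.Image.Image)

# __iter__ skips decoding (at the time of this issue), so the raw
# {"bytes": ..., "path": ...} dict is returned instead
first = next(iter(mnist))["image"]
print(type(first))  # dict, not a PIL image
```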
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3473/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3465
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3465/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3465/events
|
https://github.com/huggingface/datasets/issues/3465
| 1,085,400,432 |
I_kwDODunzps5AseVw
| 3,465 |
Unable to load 'cnn_dailymail' dataset
|
{
"login": "talha1503",
"id": 42352729,
"node_id": "MDQ6VXNlcjQyMzUyNzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talha1503",
"html_url": "https://github.com/talha1503",
"followers_url": "https://api.github.com/users/talha1503/followers",
"following_url": "https://api.github.com/users/talha1503/following{/other_user}",
"gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talha1503/subscriptions",
"organizations_url": "https://api.github.com/users/talha1503/orgs",
"repos_url": "https://api.github.com/users/talha1503/repos",
"events_url": "https://api.github.com/users/talha1503/events{/privacy}",
"received_events_url": "https://api.github.com/users/talha1503/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false | null |
[] | null |
[
"Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?",
"This looks related to https://github.com/huggingface/datasets/issues/996",
"It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem"
] | 2021-12-21T03:32:21 | 2022-02-17T14:13:57 | 2022-02-17T14:13:57 |
NONE
| null | null | null |
## Describe the bug
I wanted to load the cnn_dailymail dataset from Hugging Face datasets on Google Colab, but I am getting an error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True)
```
## Expected results
Expecting to load 'cnn_dailymail' dataset.
## Actual results
`NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
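As a workaround, a sketch based on the comment above pointing to the community mirror (the mirror is assumed to expose the same `3.0.0` configuration name as the original script):
```python
from datasets import load_dataset

# The canonical script downloads from Google Drive, which can hit quota limits;
# the mirror hosted at ccdv/cnn_dailymail avoids that host.
dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0")
print(dataset)
```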
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3465/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3464
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3464/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3464/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3464/events
|
https://github.com/huggingface/datasets/issues/3464
| 1,085,399,097 |
I_kwDODunzps5AseA5
| 3,464 |
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
|
{
"login": "koukoulala",
"id": 30341159,
"node_id": "MDQ6VXNlcjMwMzQxMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/koukoulala",
"html_url": "https://github.com/koukoulala",
"followers_url": "https://api.github.com/users/koukoulala/followers",
"following_url": "https://api.github.com/users/koukoulala/following{/other_user}",
"gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions",
"organizations_url": "https://api.github.com/users/koukoulala/orgs",
"repos_url": "https://api.github.com/users/koukoulala/repos",
"events_url": "https://api.github.com/users/koukoulala/events{/privacy}",
"received_events_url": "https://api.github.com/users/koukoulala/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] | null |
[
"Hi ! Can you try setting `datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING` to a smaller value than `4 << 30` (4GiB), for example `500 << 20` (500MiB) ? It should reduce the maximum size of the arrow table being pickled during multiprocessing.\r\n\r\nIf it fixes the issue, we can consider lowering the default value for everyone.",
"@lhoestq I tried that just now but didn't seem to help."
] | 2021-12-21T03:29:01 | 2022-11-21T19:55:11 | null |
NONE
| null | null | null |
## Describe the bug
Using the latest datasets (datasets-1.16.1-py3-none-any.whl), I process my own multilingual dataset with the following code; the dataset has 306,000 rows in total and the max_length of each sentence is 256:

then I get this error:

I have seen the issues #2134 and #2150, so I don't understand why the latest repo still can't deal with a big dataset.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: linux docker
- Python version: 3.6
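For context, a sketch of the workaround suggested in the comments above (lowering the pickling threshold for Arrow tables passed between processes); note the original poster reported it did not resolve the crash for them:
```python
from datasets import config

# Default is 4 << 30 (4 GiB); the maintainer suggested trying a smaller value
# such as 500 << 20 (500 MiB) before running the multiprocessing step.
config.MAX_TABLE_NBYTES_FOR_PICKLING = 500 << 20

# ...then run the usual multiprocessing map/tokenization, e.g.:
# tokenized = raw_datasets.map(tokenize_fn, batched=True, num_proc=8)
```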
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3464/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3462
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3462/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3462/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3462/events
|
https://github.com/huggingface/datasets/issues/3462
| 1,085,049,661 |
I_kwDODunzps5ArIs9
| 3,462 |
Update swahili_news dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-12-20T17:44:01 | 2021-12-21T06:24:02 | 2021-12-21T06:24:01 |
MEMBER
| null | null | null |
Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203.
## Adding a Dataset
- **Name:** swahili_news
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Related to:
- bigscience-workshop/data_tooling#107
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3462/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3459
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3459/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3459/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3459/events
|
https://github.com/huggingface/datasets/issues/3459
| 1,084,969,672 |
I_kwDODunzps5Aq1LI
| 3,459 |
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
|
{
"login": "mmajurski",
"id": 9354454,
"node_id": "MDQ6VXNlcjkzNTQ0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmajurski",
"html_url": "https://github.com/mmajurski",
"followers_url": "https://api.github.com/users/mmajurski/followers",
"following_url": "https://api.github.com/users/mmajurski/following{/other_user}",
"gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions",
"organizations_url": "https://api.github.com/users/mmajurski/orgs",
"repos_url": "https://api.github.com/users/mmajurski/repos",
"events_url": "https://api.github.com/users/mmajurski/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmajurski/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?",
"Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed."
] | 2021-12-20T16:16:49 | 2021-12-20T16:34:57 | 2021-12-20T16:34:57 |
NONE
| null | null | null |
## Describe the bug
When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset.
The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is.
However, if you then use a dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner.
https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter
Effectively, it looks like the original set of _indices was discarded and overwritten by the set created during the filter operation.
I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflects the map transformation applied to the starting _indices.
## Steps to reproduce the bug
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print("initial 10 elements")
print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
print("filtered 10 elements looking for label 0")
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1]
```
## Actual results
```
$ python indices_bug.py
initial 10 elements
[1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
filtered 10 elements looking for label 0
[1, 1, 1, 1, 1, 1]
```
This code block first shuffles the dataset (to get a mix of label 0 and label 1).
Then it selects just the first 10 elements (the number of elements does not matter, 10 is just easy to visualize). The important part is that you select some subset of the dataset.
Finally, a filter is applied to pull out just the elements with `label == 0`.
The bug is that you cannot combine any dataset operation which sets the dataset._indices with filter.
In this case I have two: shuffle and subset.
If you just use a single dataset._indices operation (in this case shuffle) the bug still shows up.
The shuffle sets the dataset._indices and then filter uses those indices in the map, then overwrites dataset._indices with the filter results.
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
## Expected results
In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set.
If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected.
## Environment info
Here are the commands required to rebuild the conda environment from scratch.
```
# create a virtual environment
conda create -n dataset_indices python=3.8 -y
# activate the virtual environment
conda activate dataset_indices
# install huggingface datasets
conda install datasets
```
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 3.0.0
### Full Conda Environment
```
$ conda env export
name: dasaset_indices
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- abseil-cpp=20210324.2=h2531618_0
- aiohttp=3.8.1=py38h7f8727e_0
- aiosignal=1.2.0=pyhd3eb1b0_0
- arrow-cpp=3.0.0=py38h6b21186_4
- attrs=21.2.0=pyhd3eb1b0_0
- aws-c-common=0.4.57=he6710b0_1
- aws-c-event-stream=0.1.6=h2531618_5
- aws-checksums=0.1.9=he6710b0_0
- aws-sdk-cpp=1.8.185=hce553d0_0
- bcj-cffi=0.5.1=py38h295c915_0
- blas=1.0=mkl
- boost-cpp=1.73.0=h27cfd23_11
- bottleneck=1.3.2=py38heb32a55_1
- brotli=1.0.9=he6710b0_2
- brotli-python=1.0.9=py38heb0550a_2
- brotlicffi=1.0.9.2=py38h295c915_0
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2021.10.26=h06a4308_2
- certifi=2021.10.8=py38h06a4308_0
- cffi=1.14.6=py38h400218f_0
- conllu=4.4.1=pyhd3eb1b0_0
- cryptography=36.0.0=py38h9ce1e76_0
- dataclasses=0.8=pyh6d0b6a4_7
- dill=0.3.4=pyhd3eb1b0_0
- double-conversion=3.1.5=he6710b0_1
- et_xmlfile=1.1.0=py38h06a4308_0
- filelock=3.4.0=pyhd3eb1b0_0
- frozenlist=1.2.0=py38h7f8727e_0
- gflags=2.2.2=he6710b0_0
- glog=0.5.0=h2531618_0
- gmp=6.2.1=h2531618_2
- grpc-cpp=1.39.0=hae934f6_5
- huggingface_hub=0.0.17=pyhd3eb1b0_0
- icu=58.2=he6710b0_3
- idna=3.3=pyhd3eb1b0_0
- importlib-metadata=4.8.2=py38h06a4308_0
- importlib_metadata=4.8.2=hd3eb1b0_0
- intel-openmp=2021.4.0=h06a4308_3561
- krb5=1.19.2=hac12032_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libboost=1.73.0=h3ff78a5_11
- libcurl=7.80.0=h0b77cf5_0
- libedit=3.1.20210910=h7f8727e_0
- libev=4.33=h7f8727e_1
- libevent=2.1.8=h1ba5d50_1
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libnghttp2=1.46.0=hce63b2e_0
- libprotobuf=3.17.2=h4ff587b_1
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libthrift=0.14.2=hcc01f38_0
- libxml2=2.9.12=h03d6c58_0
- libxslt=1.1.34=hc22bd24_0
- lxml=4.6.3=py38h9120a33_0
- lz4-c=1.9.3=h295c915_1
- mkl=2021.4.0=h06a4308_640
- mkl-service=2.4.0=py38h7f8727e_0
- mkl_fft=1.3.1=py38hd3c417c_0
- mkl_random=1.2.2=py38h51133e4_0
- multiprocess=0.70.12.2=py38h7f8727e_0
- multivolumefile=0.2.3=pyhd3eb1b0_0
- ncurses=6.3=h7f8727e_2
- numexpr=2.7.3=py38h22e1b3c_1
- numpy=1.21.2=py38h20f2e39_0
- numpy-base=1.21.2=py38h79a1101_0
- openpyxl=3.0.9=pyhd3eb1b0_0
- openssl=1.1.1l=h7f8727e_0
- orc=1.6.9=ha97a36c_3
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.4=py38h06a4308_0
- py7zr=0.16.1=pyhd3eb1b0_1
- pycparser=2.21=pyhd3eb1b0_0
- pycryptodomex=3.10.1=py38h27cfd23_1
- pyopenssl=21.0.0=pyhd3eb1b0_1
- pyparsing=3.0.4=pyhd3eb1b0_0
- pyppmd=0.16.1=py38h295c915_0
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.12=h12debd9_0
- python-dateutil=2.8.2=pyhd3eb1b0_0
- python-xxhash=2.0.2=py38h7f8727e_0
- pyzstd=0.14.4=py38h7f8727e_3
- re2=2020.11.01=h2531618_1
- readline=8.1=h27cfd23_0
- requests=2.26.0=pyhd3eb1b0_0
- setuptools=58.0.4=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- snappy=1.1.8=he6710b0_0
- sqlite=3.36.0=hc218d9a_0
- texttable=1.6.4=pyhd3eb1b0_0
- tk=8.6.11=h1ccaba5_0
- typing_extensions=3.10.0.2=pyh06a4308_0
- uriparser=0.9.3=he6710b0_1
- utf8proc=2.6.1=h27cfd23_0
- wheel=0.37.0=pyhd3eb1b0_1
- xxhash=0.8.0=h7f8727e_3
- xz=5.2.5=h7b6447c_0
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.11=h7f8727e_4
- zstd=1.4.9=haebb681_0
- pip:
- async-timeout==4.0.2
- charset-normalizer==2.0.9
- datasets==1.16.1
- fsspec==2021.11.1
- huggingface-hub==0.2.1
- multidict==5.2.0
- pandas==1.3.5
- pyarrow==6.0.1
- pytz==2021.3
- pyyaml==6.0
- tqdm==4.62.3
- typing-extensions==4.0.1
- urllib3==1.26.7
- yarl==1.7.2
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3459/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3457
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3457/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3457/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3457/events
|
https://github.com/huggingface/datasets/issues/3457
| 1,084,862,121 |
I_kwDODunzps5Aqa6p
| 3,457 |
Add CMU Graphics Lab Motion Capture dataset
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] |
open
| false | null |
[] | null |
[
"This dataset has files in ASF/AMC format. [ The skeleton file is the ASF file (Acclaim Skeleton File). The motion file is the AMC file (Acclaim Motion Capture data). ] \r\n\r\nSome questions : \r\n1. How do we go about representing these features using datasets.Features and generate examples ?\r\n2. The dataset download link for ASF/AMC files does not have metadata information, for eg : category and subcategory information. We will need to crawl the website for this information. The authors mention \"Please don't crawl this database for all motions.\" Can we mail the authors for this information ?\r\nThe dataset structure is as follows : \r\n```\r\nsubjects\r\n\t- 01\r\n\t\t- 01_01.amc\r\n\t\t- 01_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 01.asf\r\n\t- 02\r\n\t\t- 02_01.amc\r\n\t\t- 02_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 02.asf\r\n```\r\nThere is no metadata regarding the category, sub-category and motion description.\r\n\r\nNeed your inputs. @mariosasko / @lhoestq \r\nThank you.\r\n",
"Hi @dnaveenr! Thanks for working on this!\r\n\r\n1. We can use the `Sequence(Value(\"string\"))` feature type for the subject's AMC files and `Value(\"string\")` for the subject's ASF file (`Value(\"string\")` represents the file paths) + the types for categories/subcategories and descriptions.\r\n2. We can use this URL to download the motion descriptions: http://mocap.cs.cmu.edu/search.php?subjectnumber=<subject_number>&motion=%%%&maincat=%&subcat=%&subtext=yes where `subject_number` is the number between 1 and 144. And to get categories/subcategories, feel free to contact the authors (they state in the FAQ they are happy to help) and ask them if they can provide the mapping from categories/subcategories to the AMC files to avoid crawling. You can also mention that your goal is to make their dataset more accessible by adding its loading script to the Hub.\r\n\r\nThe AMC files are also available in the tvd, c3d, mpg and avi formats (the links are in the [FAQ](http://mocap.cs.cmu.edu/faqs.php) section), so it would be nice to have one config for each of these additional formats. \r\n\r\nAnd additionally, we can add a `Data Preprocessing` section to the card where we explain how to load/process the files. I can help with that.",
"Hi @mariosasko ,\r\n\r\n1. Thanks for this, so we can add the file paths.\r\n2. Yes, I had already mailed the authors a couple of days back actually, asking for the metadata details[ i.e category, sub-category and motion description] . They are yet to respond though, I will wait for a couple of days and try to follow up with them again. :) Else we can use the workaround solution.\r\n\r\nYes. Supporting all the formats would be helpful. \r\n\r\n> And additionally, we can add a Data Preprocessing section to the card where we explain how to load/process the files. I can help with that.\r\n\r\nOkay. Got it."
] | 2021-12-20T14:34:39 | 2022-03-16T16:53:09 | null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
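A possible feature schema for the loading script, sketched from the discussion in the comments above (storing file paths rather than decoded motion data); the field names here are illustrative assumptions, not final:
```python
from datasets import Features, Sequence, Value

# ASF skeleton + AMC motion files are kept as paths; category/description
# fields depend on the metadata the dataset authors can provide.
features = Features(
    {
        "subject_id": Value("string"),
        "asf_file": Value("string"),
        "amc_files": Sequence(Value("string")),
        "categories": Sequence(Value("string")),
        "descriptions": Sequence(Value("string")),
    }
)
```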
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3457/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3455
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3455/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3455/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3455/events
|
https://github.com/huggingface/datasets/issues/3455
| 1,084,599,650 |
I_kwDODunzps5Apa1i
| 3,455 |
Easier information editing
|
{
"login": "borgr",
"id": 6416600,
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borgr",
"html_url": "https://github.com/borgr",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"repos_url": "https://api.github.com/users/borgr/repos",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! I guess you are talking about the dataset cards that are in this repository on github ?\r\n\r\nI think github allows to submit a PR even for 1 line though the `Edit file` button on the page of the dataset card.\r\n\r\nMaybe let's mention this in `CONTRIBUTING.md` ?",
"We now host all the datasets on the HF Hub, where you can easily edit them through UI (for single file changes) or Git workflow (for single/multiple file changes)"
] | 2021-12-20T10:10:43 | 2023-07-25T15:36:14 | 2023-07-25T15:36:14 |
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
It requires a lot of effort to improve a datasheet.
**Describe the solution you'd like**
A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit this code directly from the site, without cloning, branching, makefiles, etc.).
**Describe alternatives you've considered**
The current UX requires the 8 contribution steps even when one just wishes to change a line, fix a typo, etc.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3455/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3453
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3453/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3453/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3453/events
|
https://github.com/huggingface/datasets/issues/3453
| 1,084,515,911 |
I_kwDODunzps5ApGZH
| 3,453 |
ValueError while iter_archive
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-12-20T08:46:18 | 2021-12-20T10:04:59 | 2021-12-20T10:04:59 |
MEMBER
| null | null | null |
## Describe the bug
After the merge of:
- #3443
the method `iter_archive` throws a ValueError:
```
ValueError: read of closed file
```
## Steps to reproduce the bug
```python
for path, file in dl_manager.iter_archive(archive_path):
pass
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3453/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3452
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3452/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3452/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3452/events
|
https://github.com/huggingface/datasets/issues/3452
| 1,083,803,178 |
I_kwDODunzps5AmYYq
| 3,452 |
why the stratify option is omitted from test_train_split function?
|
{
"login": "j-sieger",
"id": 9985334,
"node_id": "MDQ6VXNlcjk5ODUzMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9985334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-sieger",
"html_url": "https://github.com/j-sieger",
"followers_url": "https://api.github.com/users/j-sieger/followers",
"following_url": "https://api.github.com/users/j-sieger/following{/other_user}",
"gists_url": "https://api.github.com/users/j-sieger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-sieger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-sieger/subscriptions",
"organizations_url": "https://api.github.com/users/j-sieger/orgs",
"repos_url": "https://api.github.com/users/j-sieger/repos",
"events_url": "https://api.github.com/users/j-sieger/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-sieger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! It's simply not added yet :)\r\n\r\nIf someone wants to contribute to add the `stratify` parameter I'd be happy to give some pointers.\r\n\r\nIn the meantime, I guess you can use `sklearn` or other tools to do a stratified train/test split over the **indices** of your dataset and then do\r\n```\r\ntrain_dataset = dataset.select(train_indices)\r\ntest_dataset = dataset.select(test_indices)\r\n```",
"Hi @lhoestq I would like to add `stratify` parameter, can you give me some pointers for adding the same ?",
"Hi ! Sure :)\r\n\r\nThe `train_test_split` method is defined here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3253-L3253\r\n\r\nand inside `train_test_split ` we need to create the right `train_indices` and `test_indices` that are passed here to `.select()`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3450-L3464\r\n\r\nFor example if your dataset is like\r\n| | label |\r\n|---:|--------:|\r\n| 0 | 1 |\r\n| 1 | 1 |\r\n| 2 | 0 |\r\n| 3 | 0 |\r\n\r\nand the user passes `stratify=dataset[\"label\"]`, then you should get indices that look like this\r\n```\r\ntrain_indices = [0, 2]\r\ntest_indices = [1, 3]\r\n```\r\n\r\nthese indices will be passed to `.select` to return the stratified train and test splits :)\r\n\r\nFeel free to îng me if you have any question !",
"@lhoestq \r\nI just added the implementation for `stratify` option here #4322 "
] | 2021-12-18T10:37:47 | 2022-05-25T20:43:51 | 2022-05-25T20:43:51 |
NONE
| null | null | null |
Why is the stratify option omitted from the train_test_split function?
Is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider when splitting a dataset.
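A sketch of the interim workaround described in the comments above: do a stratified split over the row *indices* with scikit-learn, then materialize the splits with `Dataset.select()`:
```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

dataset = load_dataset("imdb", split="train")

# Split indices, stratifying on the label column, then select the rows
train_idx, test_idx = train_test_split(
    list(range(len(dataset))),
    test_size=0.2,
    stratify=dataset["label"],
    random_state=42,
)
train_dataset = dataset.select(train_idx)
test_dataset = dataset.select(test_idx)
```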
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3452/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3452/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3450
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3450/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3450/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3450/events
|
https://github.com/huggingface/datasets/issues/3450
| 1,083,450,158 |
I_kwDODunzps5AlCMu
| 3,450 |
Unexpected behavior doing Split + Filter
|
{
"login": "jbrachat",
"id": 26432605,
"node_id": "MDQ6VXNlcjI2NDMyNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26432605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbrachat",
"html_url": "https://github.com/jbrachat",
"followers_url": "https://api.github.com/users/jbrachat/followers",
"following_url": "https://api.github.com/users/jbrachat/following{/other_user}",
"gists_url": "https://api.github.com/users/jbrachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbrachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbrachat/subscriptions",
"organizations_url": "https://api.github.com/users/jbrachat/orgs",
"repos_url": "https://api.github.com/users/jbrachat/repos",
"events_url": "https://api.github.com/users/jbrachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbrachat/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! This is an issue with `datasets` 1.12. Sorry for the inconvenience. Can you update to `>=1.13` ?\r\nsee https://github.com/huggingface/datasets/issues/3190\r\n\r\nMaybe we should also backport the bug fix to `1.12` (in a new version `1.12.2`)"
] | 2021-12-17T17:00:39 | 2023-07-25T15:38:47 | 2023-07-25T15:38:47 |
NONE
| null | null | null |
## Describe the bug
I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on a dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter').
## Steps to reproduce the bug
```
from datasets import Dataset
import pandas as pd
dic = {'x': [1,2,3,4,5,6,7,8,9], 'y':['q','w','e','r','t','y','u','i','o']}
df = pd.DataFrame.from_dict(dic)
dataset = Dataset.from_pandas(df)
split_dataset = dataset.train_test_split(test_size=0.5, shuffle=False, seed=42)
train_dataset = split_dataset["train"]
eval_dataset = split_dataset["test"]
eval_dataset_2 = eval_dataset.filter(lambda example: example['x'] % 2 == 0)
print( eval_dataset['x'])
print(eval_dataset_2['x'])
```
One observes that elements in eval_dataset_2 actually come from the training dataset...
## Expected results
The expected results would be that the filtered eval dataset would only contain elements from the original eval dataset.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows 10
- Python version: 3.7
- PyArrow version: 5.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3450/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3449
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3449/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3449/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3449/events
|
https://github.com/huggingface/datasets/issues/3449
| 1,083,373,018 |
I_kwDODunzps5AkvXa
| 3,449 |
Add `__add__()`, `__iadd__()` and similar to `Dataset` class
|
{
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
closed
| false | null |
[] | null |
[
"I was going through the codebase, and I believe the implementation of __add__() and __iadd__() will be similar to concatenate_datasets() after the elimination of code for arguments other than the list of datasets (info, split, axis). \r\n(Assuming elimination of axis means concatenating over axis 1.)",
"Most data frame libraries (Polars, Pandas, etc.) override `__add__` to perform (mathematical) summation, so having different behavior could lead to confusion."
] | 2021-12-17T15:29:11 | 2024-02-29T16:47:56 | 2023-07-25T15:33:56 |
NONE
| null | null | null |
**Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
I would like to be able to concatenate datasets as follows:
```python
>>> dataset["train"] += dataset["validation"]
```
... instead of using `concatenate_datasets()`:
```python
>>> raw_datasets["train"] = concatenate_datasets([raw_datasets["train"], raw_datasets["validation"]])
>>> del raw_datasets["validation"]
```
**Describe alternatives you've considered**
Well, I have considered `concatenate_datasets()` 😀
**Additional context**
N.a.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3449/timeline
| null |
not_planned
|
https://api.github.com/repos/huggingface/datasets/issues/3448
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3448/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3448/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3448/events
|
https://github.com/huggingface/datasets/issues/3448
| 1,083,231,080 |
I_kwDODunzps5AkMto
| 3,448 |
JSONDecodeError with HuggingFace dataset viewer
|
{
"login": "kathrynchapman",
"id": 57716109,
"node_id": "MDQ6VXNlcjU3NzE2MTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/57716109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kathrynchapman",
"html_url": "https://github.com/kathrynchapman",
"followers_url": "https://api.github.com/users/kathrynchapman/followers",
"following_url": "https://api.github.com/users/kathrynchapman/following{/other_user}",
"gists_url": "https://api.github.com/users/kathrynchapman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kathrynchapman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kathrynchapman/subscriptions",
"organizations_url": "https://api.github.com/users/kathrynchapman/orgs",
"repos_url": "https://api.github.com/users/kathrynchapman/repos",
"events_url": "https://api.github.com/users/kathrynchapman/events{/privacy}",
"received_events_url": "https://api.github.com/users/kathrynchapman/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?",
"Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\nI checked the dataset_infos.json and pubmed_neg.py script, I don't use 'feature' anywhere as a key. Is the dataset viewer expecting that I do?",
"It seems that the `feature` key is missing from some feature type definition in your dataset_infos.json:\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t}\r\n```\r\nThey should be\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n \"feature\": {\"dtype\": \"string\", \"id\": null, \"_type\": \"Value\"}\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\",\r\n \"feature\": {\"num_classes\": 5, \"names\": [\"-\", \"S\", \"H\", \"N\", \"C\"], \"names_file\": null, \"id\": null, \"_type\": \"ClassLabel\"}\r\n\t\t\t}\r\n```\r\n\r\nNote that you can generate the dataset_infos.json automatically to avoid mistakes:\r\n```bash\r\ndatasets-cli test ./path/to/dataset --save_infos\r\n```"
] | 2021-12-17T12:52:41 | 2022-02-24T09:10:26 | 2022-02-24T09:10:26 |
NONE
| null | null | null |
## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue.
Am I the one who added this dataset ? Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3448/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3447
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3447/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3447/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3447/events
|
https://github.com/huggingface/datasets/issues/3447
| 1,082,539,790 |
I_kwDODunzps5Ahj8O
| 3,447 |
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
|
{
"login": "dunalduck0",
"id": 51274745,
"node_id": "MDQ6VXNlcjUxMjc0NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/51274745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dunalduck0",
"html_url": "https://github.com/dunalduck0",
"followers_url": "https://api.github.com/users/dunalduck0/followers",
"following_url": "https://api.github.com/users/dunalduck0/following{/other_user}",
"gists_url": "https://api.github.com/users/dunalduck0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dunalduck0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dunalduck0/subscriptions",
"organizations_url": "https://api.github.com/users/dunalduck0/orgs",
"repos_url": "https://api.github.com/users/dunalduck0/repos",
"events_url": "https://api.github.com/users/dunalduck0/events{/privacy}",
"received_events_url": "https://api.github.com/users/dunalduck0/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! Indeed it says \"downloading and preparing\" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case",
"@lhoestq Thank you for explaining. I am sorry but I was not clear about my intention. I didn't want to kill internet traffic; I wanted to kill all write activity. In other words, you can imagine that my storage has only read access but crashes on write.\r\n\r\nWhen run_clm.py is invoked with the same parameters, the hash in the cache directory \"datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/...\" doesn't change, and my job can load cached data properly. This is great.\r\n\r\nUnfortunately, when params change (which happens sometimes), the hash changes and the old cache is invalid. datasets builder would create a new cache directory with the new hash and create JSON builder there, even though every JSON builder is the same. I didn't find a way to avoid such behavior.\r\n\r\nThis problem can be resolved when using datasets.map() for tokenizing and grouping text. This function allows me to specify output filenames with --cache_file_names, so that the cached files are always valid.\r\n\r\nThis is the code that I used to freeze cache filenames for tokenization. I wish I could do the same to datasets.load_dataset()\r\n```\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n cache_file_names={k: os.path.join(model_args.cache_dir, f'{k}-tokenized') for k in raw_datasets},\r\n )\r\n```",
"Hi ! `load_dataset` may re-generate your dataset if some parameters changed indeed. If you want to freeze a dataset loaded with `load_dataset`, I think the best solution is just to save it somewhere on your disk with `.save_to_disk(my_dataset_dir)` and reload it with `load_from_disk(my_dataset_dir)`. This way you will be able to reload the dataset without having to run `load_dataset`"
] | 2021-12-16T18:51:13 | 2022-02-17T14:16:27 | 2022-02-17T14:16:27 |
NONE
| null | null | null |
## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download a "custom data configuration" for JSON, even though I had already run the program once and cached all data into the same --cache_dir.
"Downloading" is not an issue when running on local disk, but it often crashes with cloud storage because (1) multiple GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes due to storage throttling. 99% of the time, when the main process releases FileLocker, the file is not actually ready for access in cloud storage, which triggers "FileNotFound" errors for all other processes. Well, another way to resolve the problem is to investigate super-reliable cloud storage, but that's out of scope here.
## Steps to reproduce the bug
```
export HF_DATASETS_OFFLINE=1
python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2
```
## Expected results
datasets should stop all "downloading" behavior and reuse the cached JSON configuration. I think the problem here is that part of the cache directory path, "default-471372bed4b51b53", is randomly generated and could change if some parameters change. And I didn't find a way to use a fixed path to ensure datasets reuses the cached data every time.
## Actual results
The logging shows datasets are still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426".
```
12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53
12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426)
Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 17623.13it/s]
12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1206.99it/s]
12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums.
12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train
12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation
12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes.
Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data.
100%|█████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 53.54it/s]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux
- Python version: 3.8.10
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3447/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3445
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3445/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3445/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3445/events
|
https://github.com/huggingface/datasets/issues/3445
| 1,082,370,968 |
I_kwDODunzps5Ag6uY
| 3,445 |
question
|
{
"login": "BAKAYOKO0232",
"id": 38075175,
"node_id": "MDQ6VXNlcjM4MDc1MTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38075175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BAKAYOKO0232",
"html_url": "https://github.com/BAKAYOKO0232",
"followers_url": "https://api.github.com/users/BAKAYOKO0232/followers",
"following_url": "https://api.github.com/users/BAKAYOKO0232/following{/other_user}",
"gists_url": "https://api.github.com/users/BAKAYOKO0232/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BAKAYOKO0232/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BAKAYOKO0232/subscriptions",
"organizations_url": "https://api.github.com/users/BAKAYOKO0232/orgs",
"repos_url": "https://api.github.com/users/BAKAYOKO0232/repos",
"events_url": "https://api.github.com/users/BAKAYOKO0232/events{/privacy}",
"received_events_url": "https://api.github.com/users/BAKAYOKO0232/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! What's your question ?"
] | 2021-12-16T15:57:00 | 2022-01-03T10:09:00 | 2022-01-03T10:09:00 |
NONE
| null | null | null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3445/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3444
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3444/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3444/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3444/events
|
https://github.com/huggingface/datasets/issues/3444
| 1,082,078,961 |
I_kwDODunzps5Afzbx
| 3,444 |
Align the Dataset and IterableDataset processing API
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
open
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Yes I agree, these should be as aligned as possible. Maybe we can also check the feedback in the survey at http://hf.co/oss-survey and see if people mentioned related things on the API (in particular if we go the breaking change way, it would be good to be sure we are taking the right direction for the community).",
"I like this proposal.\r\n\r\n> There is also an important difference in terms of behavior:\r\nDataset.map adds new columns (with dict.update)\r\nBUT\r\nIterableDataset discards previous columns (it overwrites the dict)\r\nIMO the two methods should have the same behavior. This would be an important breaking change though.\r\n\r\n> The main breaking change would be the change of behavior of IterableDataset.map, because currently it discards all the previous columns instead of keeping them.\r\n\r\nYes, this behavior of `IterableDataset.map` was surprising to me the first time I used it because I was expecting the same behavior as `Dataset.map`, so I'm OK with the breaking change here.\r\n\r\n> IterableDataset only supports \"torch\" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs\r\n\r\n\\+ it's also missing the actual formatting code (we return unformatted tensors)\r\n> We could have a completely aligned map method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that.\r\n\r\n> For information, TFDS does lazy map by default, and has an additional .cache() method.\r\n\r\nIf I understand this part correctly, the idea would be for `Dataset.map` to behave similarly to `Dataset.with_transform` (lazy processing) and to have an option to cache processed data (with `.cache()`). This idea is really nice because it can also be applied to `IterableDataset` to fix https://github.com/huggingface/datasets/issues/3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?) \r\n> If the two APIs are more aligned it would be awesome for the examples in transformers, and it would create a satisfactory experience for users that want to switch from one mode to the other.\r\n\r\nYes, it would be amazing to have an option to easily switch between these two modes.\r\n\r\nI agree with the rest.\r\n",
"> If I understand this part correctly, the idea would be for Dataset.map to behave similarly to Dataset.with_transform (lazy processing) and to have an option to cache processed data (with .cache()). This idea is really nice because it can also be applied to IterableDataset to fix #3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?)\r\n\r\nYea this is too big of a change in my opinion. Anyway it's fine as it is right now with streaming=lazy and regular=eager.",
"Hi, IterableDataset is also missing set_format.",
"Yes indeed, thanks. I added it to the list of methods to align in the first post",
"I just encountered the problem of the missing `fn_kwargs` parameter in the `map` method. I am commenting to give a workaround in case someone has the same problem and does not find a solution.\r\nYou can wrap your function call inside a class that contains the other parameters needed by the function called by map, like this:\r\n\r\n```python\r\ndef my_func(x, y, z):\r\n # Do things\r\n\r\nclass MyFuncWrapper:\r\n def __init__(self, y, z):\r\n self.y = y\r\n self.z = z\r\n\r\n def __call__(self, x):\r\n return my_func(x, self.y, self.z)\r\n```\r\n\r\nThen, give an instance of the `MyFuncWrapper` to the map function.",
"Any update on this? It's almost 2024😂 @lhoestq ",
"The main differences have been addressed (map, formatting) but there are still a few things to implement like Dataset.take, Dataset.skip, IterableDataset.set_format, IterableDataset.formatted_as, IterableDataset.reset_format.\r\n\r\nThe rest cannot be implemented for the general case. E.g. train_test_split and select can only work on an iterable dataset if the underlying dataset format allows it (we need to know the number of rows and have some sort of random access)"
] | 2021-12-16T11:26:11 | 2023-08-16T09:28:17 | null |
MEMBER
| null | null | null |
## Intro
items marked like <s>this</s> are done already :)
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
<s>with_indices</s>, with_rank, <s>input_columns</s>, <s>drop_last_batch</s>, <s>remove_columns</s>, features, disable_nullable, fn_kwargs, num_proc
- Dataset also has additional parameters that are exclusive, due to caching:
keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, suffix_template, new_fingerprint
- <s>There is also an important difference in terms of behavior:
**Dataset.map adds new columns** (with dict.update)
BUT
**IterableDataset discards previous columns** (it overwrites the dict)
IMO the two methods should have the same behavior. This would be an important breaking change though.</s>
- Dataset.map is eager while IterableDataset.map is lazy
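To make the eager/lazy difference concrete, a small illustrative sketch (the dataset name and added column are arbitrary examples):
```python
from datasets import load_dataset

# Eager: Dataset.map runs the function on every row immediately and caches the result.
ds = load_dataset("squad", split="train")
ds = ds.map(lambda ex: {"ctx_len": len(ex["context"])})  # computed now

# Lazy: IterableDataset.map only records the function; it is applied while iterating.
ids = load_dataset("squad", split="train", streaming=True)
ids = ids.map(lambda ex: {"ctx_len": len(ex["context"])})  # nothing computed yet
first = next(iter(ids))  # the function runs here, example by example
```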
### The `.shuffle()` method
- <s>Both have an optional seed parameter, but IterableDataset requires a mandatory parameter buffer_size to control the size of the local buffer used for approximate shuffling.</s>
- <s>IterableDataset is missing the parameter generator</s>
- Also Dataset has exclusive parameters due to caching: keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint
### The `.with_format()` method
- IterableDataset only supports "torch" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs
- other methods like `set_format`, `reset_format` or `formatted_as` are also missing
### Other methods
- Both have the same `remove_columns` method
- IterableDataset is missing: <s>cast</s>, <s>cast_column</s>, <s>filter</s>, <s>rename_column</s>, <s>rename_columns</s>, class_encode_column, flatten, prepare_for_task, train_test_split, shard
- Some other methods are missing but we can discuss them: set_transform, formatted_as, with_transform
- And others don't really make sense for an iterable dataset: select, sort, add_column, add_item
- Dataset is missing skip and take, which IterableDataset implements.
## Questions
I think it would be nice to be able to switch between streaming and regular dataset easily, without changing the processing code significantly.
1. What should be aligned and what shouldn't between those two APIs ?
IMO the minimum is to align the main processing methods.
It would mean breaking the current `IterableDataset.map` to have the same behavior as `Dataset.map` (add columns with dict.update), and adding multiprocessing as well as the missing parameters. DONE ✅
It would also mean implementing the missing methods: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard. WIP 🟠
2. What are the breaking changes for IterableDataset ?
The main breaking change would be the change of behavior of `IterableDataset.map`, because currently it discards all the previous columns instead of keeping them. DONE ✅
3. Shall we also do some changes for regular datasets ?
I agree the simplest would be to have the exact same methods for both Dataset and IterableDataset. However this is probably not a good idea because it would prevent users from using the best benefits of them. That's why we can keep some aspects of regular datasets as they are:
- keep the eager Dataset.map with caching
- keep the with_transform method for lazy processing
- keep Dataset.select (it could also be added to IterableDataset even though it's not recommended)
We could have a completely aligned `map` method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that.
For information, TFDS does lazy map by default, and has an additional `.cache()` method.
## Opinions ?
I'd love to gather some opinions about this here. If the two APIs are more aligned it would be awesome for the examples in `transformers`, and it would create a satisfactory experience for users that want to switch from one mode to the other.
cc @mariosasko @albertvillanova @thomwolf @patrickvonplaten @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3444/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3444/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3441
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3441/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3441/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3441/events
|
https://github.com/huggingface/datasets/issues/3441
| 1,081,571,784 |
I_kwDODunzps5Ad3nI
| 3,441 |
Add QuALITY dataset
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false | null |
[] | null |
[
"I'll take this one if no one hasn't yet!"
] | 2021-12-15T22:26:19 | 2021-12-28T15:17:05 | null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** QuALITY
- **Description:** A challenging question answering with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20))
- **Paper:** No ArXiv link yet, but draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf)
- **Data:** GitHub repo [here](https://github.com/nyu-mll/quality)
- **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this, given its impressive performance on the Long Range Arena.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3441/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3440
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3440/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3440/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3440/events
|
https://github.com/huggingface/datasets/issues/3440
| 1,081,528,426 |
I_kwDODunzps5AdtBq
| 3,440 |
datasets keeps reading from cached files, although I disabled it
|
{
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! What version of `datasets` are you using ? Can you also provide the logs you get before it raises the error ?"
] | 2021-12-15T21:26:22 | 2022-02-24T09:12:22 | 2022-02-24T09:12:22 |
NONE
| null | null | null |
## Describe the bug
Hi,
I am trying to prevent the datasets library from using cached files, but I get the following bug when it tries to read them. I tried the following:
```
from datasets import set_caching_enabled
set_caching_enabled(False)
```
I also tried forcing a re-download:
```
download_mode='force_redownload'
```
but neither worked so far. This is on a cluster, and on some of the machines the library still reads from the cached files. I would really appreciate any idea on how to fully disable caching @lhoestq
many thanks
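One possible mitigation, sketched under the assumption that the crash comes from reading back a stale cached result of the failing `filter` call, is to bypass the cache file for that specific operation:
```python
# Mitigation sketch (assumption: the error comes from a stale cached filter result).
# Skipping the cache file for this call avoids reading back the mismatched cached table.
def get_label_samples(dataset, label):
    return dataset.filter(
        lambda example: int(example["labels"]) == label,
        load_from_cache_file=False,
    )
```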
```
File "run_clm.py", line 496, in <module>
main()
File "run_clm.py", line 419, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 943, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 172, in evaluate
output = self.eval_loop(
File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 241, in eval_loop
metrics = self.compute_pet_metrics(eval_datasets, model, self.extra_info[metric_key_prefix], task=task)
File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 268, in compute_pet_metrics
centroids = self._compute_per_token_train_centroids(model, task=task)
File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 353, in _compute_per_token_train_centroids
data = get_label_samples(self.get_per_task_train_dataset(task), label)
File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 350, in get_label_samples
return dataset.filter(lambda example: int(example['labels']) == label)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper
out = func(self, *args, **kwargs)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2519, in filter
indices = self.map(
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2036, in map
return self._map_single(
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 503, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper
out = func(self, *args, **kwargs)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2248, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 654, in from_file
return cls(
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 593, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1092, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1081, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'candidates_ids': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'labels': Value(dtype='int64', id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'extra_fields': {}, 'task': Value(dtype='string', id=None)}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: linux
- Python version: 3.8.12
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3440/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3434
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3434/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3434/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3434/events
|
https://github.com/huggingface/datasets/issues/3434
| 1,080,917,446 |
I_kwDODunzps5AbX3G
| 3,434 |
Add The People's Speech
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"This dataset is now available on the Hub here: https://huggingface.co/datasets/MLCommons/peoples_speech"
] | 2021-12-15T11:21:21 | 2023-02-28T16:22:29 | 2023-02-28T16:22:28 |
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** The People's Speech
- **Description:** a massive English-language dataset of audio transcriptions of full sentences.
- **Paper:** https://openreview.net/pdf?id=R8CwidgJ0yT
- **Data:** https://mlcommons.org/en/peoples-speech/
- **Motivation:** With over 30,000 hours of speech, this dataset is the largest and most diverse freely available English speech recognition corpus today.
[The article](https://thegradient.pub/new-datasets-to-democratize-speech-recognition-technology-2/) may be useful when working on the dataset.
cc: @anton-l
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3434/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3434/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3433
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3433/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3433/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3433/events
|
https://github.com/huggingface/datasets/issues/3433
| 1,080,910,724 |
I_kwDODunzps5AbWOE
| 3,433 |
Add Multilingual Spoken Words dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[] | 2021-12-15T11:14:44 | 2022-02-22T10:03:53 | 2022-02-22T10:03:53 |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** Multilingual Spoken Words
- **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours).
Read more: https://mlcommons.org/en/news/spoken-words-blog/
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Data:** https://mlcommons.org/en/multilingual-spoken-words/
- **Motivation:**
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3433/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3433/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3431
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3431/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3431/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3431/events
|
https://github.com/huggingface/datasets/issues/3431
| 1,079,866,083 |
I_kwDODunzps5AXXLj
| 3,431 |
Unable to resolve any data file after loading once
|
{
"login": "LzyFischer",
"id": 84694183,
"node_id": "MDQ6VXNlcjg0Njk0MTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/84694183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LzyFischer",
"html_url": "https://github.com/LzyFischer",
"followers_url": "https://api.github.com/users/LzyFischer/followers",
"following_url": "https://api.github.com/users/LzyFischer/following{/other_user}",
"gists_url": "https://api.github.com/users/LzyFischer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LzyFischer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LzyFischer/subscriptions",
"organizations_url": "https://api.github.com/users/LzyFischer/orgs",
"repos_url": "https://api.github.com/users/LzyFischer/repos",
"events_url": "https://api.github.com/users/LzyFischer/events{/privacy}",
"received_events_url": "https://api.github.com/users/LzyFischer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! `load_dataset` accepts as input either a local dataset directory or a dataset name from the Hugging Face Hub.\r\n\r\nSo here you are getting this error the second time because it tries to load the local `wiki_dpr` directory, instead of `wiki_dpr` from the Hub. It doesn't work since it's a **cache** directory, not a **dataset** directory in itself.\r\n\r\nTo fix that you can use another cache directory like `cache_dir=\"/data2/whr/lzy/open_domain_data/retrieval/cache\"`",
"thx a lot"
] | 2021-12-14T15:02:15 | 2022-12-11T10:53:04 | 2022-02-24T09:13:52 |
NONE
| null | null | null |
When I rerun my program, this error occurs:
" Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']"
How could I deal with this problem? Thanks.
And below is my code.

|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3431/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3425
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3425/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3425/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3425/events
|
https://github.com/huggingface/datasets/issues/3425
| 1,078,598,140 |
I_kwDODunzps5AShn8
| 3,425 |
Getting configs names takes too long
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"maybe related to https://github.com/huggingface/datasets/issues/2859\r\n",
"It looks like it's currently calling `HfFileSystem.ls()` ~8 times at the root and for each subdirectory:\r\n- \"\"\r\n- \"en.noblocklist\"\r\n- \"en.noclean\"\r\n- \"en\"\r\n- \"multilingual\"\r\n- \"realnewslike\"\r\n\r\nCurrently `ls` is slow because it iterates on all the files inside the repository.\r\n\r\nAn easy optimization would be to cache the result of each call to `ls`.\r\nWe can also optimize `ls` by using a tree structure per directory instead of a list of all the files.\r\n",
"ok\r\n"
] | 2021-12-13T14:27:57 | 2021-12-13T14:53:33 | null |
CONTRIBUTOR
| null | null | null |
## Steps to reproduce the bug
```python
from datasets import get_dataset_config_names
get_dataset_config_names("allenai/c4")
```
## Expected results
I would expect to get the answer quickly, at least in less than 10s
## Actual results
It takes about 45s on my environment
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3425/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3423
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3423/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3423/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3423/events
|
https://github.com/huggingface/datasets/issues/3423
| 1,078,049,638 |
I_kwDODunzps5AQbtm
| 3,423 |
data duplicate when setting num_works > 1 with streaming data
|
{
"login": "cloudyuyuyu",
"id": 16486492,
"node_id": "MDQ6VXNlcjE2NDg2NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/16486492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cloudyuyuyu",
"html_url": "https://github.com/cloudyuyuyu",
"followers_url": "https://api.github.com/users/cloudyuyuyu/followers",
"following_url": "https://api.github.com/users/cloudyuyuyu/following{/other_user}",
"gists_url": "https://api.github.com/users/cloudyuyuyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cloudyuyuyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cloudyuyuyu/subscriptions",
"organizations_url": "https://api.github.com/users/cloudyuyuyu/orgs",
"repos_url": "https://api.github.com/users/cloudyuyuyu/repos",
"events_url": "https://api.github.com/users/cloudyuyuyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cloudyuyuyu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] | null |
[
"Hi ! Thanks for reporting :)\r\n\r\nWhen using a PyTorch's data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.\r\n\r\nWe can probably fix this in `datasets` by checking `torch.utils.data.get_worker_info()` which gives the worker id if it happens.",
"> Hi ! Thanks for reporting :)\r\n> \r\n> When using a PyTorch's data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.\r\n> \r\n> We can probably fix this in `datasets` by checking `torch.utils.data.get_worker_info()` which gives the worker id if it happens.\r\nHi ! Thanks for reply\r\n\r\nDo u have some plans to fix the problem?\r\n",
"Isn’t that somehow a bug on PyTorch side? (Just asking because this behavior seems quite general and maybe not what would be intended)",
"From PyTorch's documentation [here](https://pytorch.org/docs/stable/data.html#dataset-types):\r\n\r\n> When using an IterableDataset with multi-process data loading. The same dataset object is replicated on each worker process, and thus the replicas must be configured differently to avoid duplicated data. See [IterableDataset](https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset) documentations for how to achieve this.\r\n\r\nIt looks like an intended behavior from PyTorch\r\n\r\nAs suggested in the [docstring of the IterableDataset class](https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset), we could pass a `worker_init_fn` to the DataLoader to fix this. It could be called `streaming_worker_init_fn` for example.\r\n\r\nHowever, while this solution works, I'm worried that many users simply don't know about this parameter and just start their training with duplicate data without knowing it. That's why I'm more in favor of integrating the check on the worker id directly in `datasets` in our implementation of `IterableDataset.__iter__`.",
"Fixed by https://github.com/huggingface/datasets/pull/4375",
"> Fixed by #4375\r\n\r\nThanks!",
"Hi there @lhoestq @cloudyuyuyu \r\nI met that problem recently, and #4375 is really useful because I finally found out I am training with duplicate data.\r\nHowever, in multi-GPU training, I'm using DDP mode and IterableDataset, which still yields duplicate data for each progress. And this is dangerous because users maybe not realize this behavior.",
"If the worker_info.id is unique per process it should work fine, could you check that they're unique ?\r\n\r\nThe code to get the worker_info in each worker is `torch.utils.data.get_worker_info()`",
"test.py\r\n```python\r\nimport json\r\nimport os\r\n\r\nimport torch\r\nfrom torch.utils.data import IterableDataset, DataLoader\r\nfrom transformers import PreTrainedTokenizer, TrainingArguments\r\n\r\nfrom common.arguments import DataTrainingArguments, ModelArguments\r\n\r\n\r\nclass MyIterableDataset(IterableDataset):\r\n def __iter__(self):\r\n worker_info = torch.utils.data.get_worker_info()\r\n print(worker_info)\r\n return iter(range(3))\r\n\r\n\r\nif __name__ == '__main__':\r\n dataset = MyIterableDataset()\r\n dataloader = DataLoader(dataset, num_workers=1)\r\n for i in dataloader:\r\n print(i)\r\n\r\n```\r\n\r\n\r\n```sh\r\n$ python3 -m torch.distributed.launch \\\r\n --nproc_per_node=2 test.py\r\nWorkerInfo(id=0, num_workers=1, seed=5545685212307804959, dataset=<__main__.MyIterableDataset object at 0x7f92648cf6a0>)\r\nWorkerInfo(id=0, num_workers=1, seed=3174108029709729025, dataset=<__main__.MyIterableDataset object at 0x7f19ab961670>)\r\ntensor([0])\r\ntensor([1])\r\ntensor([2])\r\ntensor([0])\r\ntensor([1])\r\ntensor([2])\r\n```\r\n\r\n@lhoestq they are not unique",
"It looks like a bug from pytorch no ? How can we know which data should go in which process when using DDP ?\r\n\r\nI guess we need to check `torch.distributed.get_world_size()` and `torch.distributed.get_rank()` as well. Not fan of the design here tbh, but that's how it is",
"> It looks like a bug from pytorch no ? How can we know which data should go in which process when using DDP ?\r\n> \r\n> I guess we need to check `torch.distributed.get_world_size()` and `torch.distributed.get_rank()` as well. Not fan of the design here tbh, but that's how it is\r\n\r\nMaybe we should document it?",
"Never mind. After reading the code, `IterableDatasetShard` has solved this problem.",
"I'm re-opening this one since I think it should be supported by `datasets` natively",
"hmm actually let me open a new issue on DDP - original post was for single node"
] | 2021-12-13T03:43:17 | 2022-12-14T16:04:22 | 2022-12-14T16:04:22 |
NONE
| null | null | null |
## Describe the bug
The data is repeated num_workers times when we call load_dataset with streaming=True and set num_workers > 1 when constructing the DataLoader.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy as np
import os
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm
import shutil
NUM_OF_USER = 1000000
NUM_OF_ACTION = 50000
NUM_OF_SEQUENCE = 10000
NUM_OF_FILES = 32
NUM_OF_WORKERS = 16
if __name__ == "__main__":
shutil.rmtree("./dataset")
for i in range(NUM_OF_FILES):
sequence_data = pd.DataFrame(
{
"imei": np.random.randint(1, NUM_OF_USER, size=NUM_OF_SEQUENCE),
"sequence": np.random.randint(1, NUM_OF_ACTION, size=NUM_OF_SEQUENCE)
}
)
if not os.path.exists("./dataset"):
os.makedirs("./dataset")
sequence_data.to_csv(f"./dataset/sequence_data_{i}.csv",
index=False)
dataset = load_dataset("csv",
data_files=[os.path.join("./dataset",file) for file in os.listdir("./dataset") if file.endswith(".csv")],
split="train",
streaming=True).with_format("torch")
data_loader = DataLoader(dataset,
batch_size=1024,
num_workers=NUM_OF_WORKERS)
result = pd.DataFrame()
for i, batch in tqdm(enumerate(data_loader)):
result = pd.concat([result,
pd.DataFrame(batch)],
axis=0)
result.to_csv(f"num_work_{NUM_OF_WORKERS}.csv", index=False)
```
## Expected results
Data should not be duplicated.
## Actual results
Data is duplicated with NUM_OF_WORKERS = 16:

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:datasets==1.14.0
- Platform:transformers==4.11.3
- Python version:3.8
- PyArrow version:
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3423/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3423/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3422
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3422/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3422/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3422/events
|
https://github.com/huggingface/datasets/issues/3422
| 1,078,022,619 |
I_kwDODunzps5AQVHb
| 3,422 |
Error about load_metric
|
{
"login": "jiacheng-ye",
"id": 30772464,
"node_id": "MDQ6VXNlcjMwNzcyNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/30772464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiacheng-ye",
"html_url": "https://github.com/jiacheng-ye",
"followers_url": "https://api.github.com/users/jiacheng-ye/followers",
"following_url": "https://api.github.com/users/jiacheng-ye/following{/other_user}",
"gists_url": "https://api.github.com/users/jiacheng-ye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiacheng-ye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiacheng-ye/subscriptions",
"organizations_url": "https://api.github.com/users/jiacheng-ye/orgs",
"repos_url": "https://api.github.com/users/jiacheng-ye/repos",
"events_url": "https://api.github.com/users/jiacheng-ye/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiacheng-ye/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! I wasn't able to reproduce your error.\r\n\r\nCan you try to clear your cache at `~/.cache/huggingface/modules` and try again ?"
] | 2021-12-13T02:49:51 | 2022-01-07T14:06:47 | 2022-01-07T14:06:47 |
NONE
| null | null | null |
## Describe the bug
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
metric = load_metric("glue", "sst2")
```
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3422/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3419
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3419/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3419/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3419/events
|
https://github.com/huggingface/datasets/issues/3419
| 1,077,350,974 |
I_kwDODunzps5ANxI-
| 3,419 |
`.to_json` is extremely slow after `.select`
|
{
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] | null |
[
"Hi ! It's slower indeed because a datasets on which `select`/`shard`/`train_test_split`/`shuffle` has been called has to do additional steps to retrieve the data of the dataset table in the right order.\r\n\r\nIndeed, if you call `dataset.select([0, 5, 10])`, the underlying table of the dataset is not altered to keep the examples at index 0, 5, and 10. Instead, an indices mapping is added on top of the table, that says that the first example is at index 0, the second at index 5 and the last one at index 10.\r\n\r\nTherefore accessing the examples of the dataset is slower because of the additional step that uses the indices mapping.\r\n\r\nThe step that takes the most time is to query the dataset table from a list of indices here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/047dc756ed20fbf06e6bcaf910464aba0e20610a/src/datasets/formatting/formatting.py#L61-L63\r\n\r\nIn your case it can be made significantly faster by checking if the indices are contiguous. If they're contiguous, we could pass a python `slice` or `range` instead of a list of integers to `_query_table`. This way `_query_table` will do only one lookup to get the queried batch instead of `batch_size` lookups.\r\n\r\nGiven that calling `select` with contiguous indices is a common use case I'm in favor of implementing such an optimization :)\r\nLet me know what you think",
"Hi, thanks for the response!\r\nI still don't understand why it is so much slower than iterating and saving:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal = load_dataset(\"squad\", split=\"train\")\r\noriginal.to_json(\"from_original.json\") # Takes 0 seconds\r\n\r\nselected_subset1 = original.select([i for i in range(len(original))])\r\nselected_subset1.to_json(\"from_select1.json\") # Takes 99 seconds\r\n\r\nselected_subset2 = original.select([i for i in range(int(len(original) / 2))])\r\nselected_subset2.to_json(\"from_select2.json\") # Takes 47 seconds\r\n\r\nselected_subset3 = original.select([i for i in range(len(original)) if i % 2 == 0])\r\nselected_subset3.to_json(\"from_select3.json\") # Takes 49 seconds\r\n\r\nimport json\r\nimport time\r\ndef fast_to_json(dataset, path):\r\n start = time.time()\r\n with open(path, mode=\"w\") as f:\r\n for example in dataset:\r\n f.write(json.dumps(example, separators=(',', ':')) + \"\\n\")\r\n end = time.time()\r\n print(f\"Saved {len(dataset)} examples to {path} in {end - start} seconds.\")\r\n\r\nfast_to_json(original, \"from_original_fast.json\")\r\nfast_to_json(selected_subset1, \"from_select1_fast.json\")\r\nfast_to_json(selected_subset2, \"from_select2_fast.json\")\r\nfast_to_json(selected_subset3, \"from_select3_fast.json\")\r\n```\r\n```\r\nSaved 87599 examples to from_original_fast.json in 8 seconds.\r\nSaved 87599 examples to from_select1_fast.json in 10 seconds.\r\nSaved 43799 examples to from_select2_fast.json in 6 seconds.\r\nSaved 43800 examples to from_select3_fast.json in 5 seconds.\r\n```",
"There are slight differences between what you're doing and what `to_json` is actually doing.\r\nIn particular `to_json` currently converts batches of rows (as an arrow table) to a pandas dataframe, and then to JSON Lines. From your benchmark it looks like it's faster if we don't use pandas.\r\n\r\nThanks for investigating, I think we can optimize `to_json` significantly thanks to your test.",
"Thanks for your observations, @eladsegal! I spent some time with this and tried different approaches. Turns out that https://github.com/huggingface/datasets/blob/bb13373637b1acc55f8a468a8927a56cf4732230/src/datasets/io/json.py#L100 is giving the problem when we use `to_json` after `select`. This is when `indices` parameter in `query_table` is not `None` (if it is `None` then `to_json` should work as expected)\r\n\r\nIn order to circumvent this problem, I found out instead of doing Arrow Table -> Pandas-> JSON we can directly go to JSON by using `to_pydict()` which is a little slower than the current approach but at least `select` works properly now. Lmk what you guys think of it @lhoestq, @eladsegal?",
"Sounds good to me ! Feel free to also share your benchmarks for reference @bhavitvyamalik ",
"Posting it in @eladsegal's format:\r\n\r\nFor `squad`:\r\nSaving examples using current `to_json` in 3.63 secs\r\nSaving examples to `from_select1_fast.json` in 5.00 secs\r\nSaving examples to `from_select2_fast.json` in 2.45 secs\r\nSaving examples to `from_select3_fast.json` in 2.50 secs\r\n\r\nFor `squad_v2`:\r\nSaving examples using current `to_json` in 5.26 secs\r\nSaving examples to `from_select1_fast.json` in 7.54 secs\r\nSaving examples to `from_select2_fast.json` in 3.80 secs\r\nSaving examples to `from_select3_fast.json` in 3.67 secs"
] | 2021-12-11T01:36:31 | 2021-12-21T15:49:07 | null |
CONTRIBUTOR
| null | null | null |
## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
selected_subset1 = original.select([i for i in range(len(original))])
selected_subset1.to_json("from_select1.json") # Takes 212 seconds
selected_subset2 = original.select([i for i in range(int(len(original) / 2))])
selected_subset2.to_json("from_select2.json") # Takes 90 seconds
```
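For comparison, a minimal sketch of the manual JSON Lines export described in the comments above (reusing `selected_subset1` from the snippet above), which avoids the slowdown:
```python
import json
import time

def fast_to_json(dataset, path):
    # Iterate over the dataset and write one JSON object per line.
    start = time.time()
    with open(path, mode="w") as f:
        for example in dataset:
            f.write(json.dumps(example, separators=(',', ':')) + "\n")
    print(f"Saved {len(dataset)} examples to {path} in {time.time() - start:.0f} seconds.")

fast_to_json(selected_subset1, "from_select1_fast.json")  # ~10 seconds vs ~212 with to_json
```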
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: master (https://github.com/huggingface/datasets/commit/6090f3cfb5c819f441dd4a4bb635e037c875b044)
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 6.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3419/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3416
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3416/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3416/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3416/events
|
https://github.com/huggingface/datasets/issues/3416
| 1,076,868,771 |
I_kwDODunzps5AL7aj
| 3,416 |
disaster_response_messages unavailable
|
{
"login": "sacdallago",
"id": 6240943,
"node_id": "MDQ6VXNlcjYyNDA5NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6240943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sacdallago",
"html_url": "https://github.com/sacdallago",
"followers_url": "https://api.github.com/users/sacdallago/followers",
"following_url": "https://api.github.com/users/sacdallago/following{/other_user}",
"gists_url": "https://api.github.com/users/sacdallago/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sacdallago/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sacdallago/subscriptions",
"organizations_url": "https://api.github.com/users/sacdallago/orgs",
"repos_url": "https://api.github.com/users/sacdallago/repos",
"events_url": "https://api.github.com/users/sacdallago/events{/privacy}",
"received_events_url": "https://api.github.com/users/sacdallago/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[
"Hi, thanks for reporting! This is a duplicate of https://github.com/huggingface/datasets/issues/3240. We are working on a fix.\r\n\r\n"
] | 2021-12-10T13:49:17 | 2021-12-14T14:38:29 | 2021-12-14T14:38:29 |
NONE
| null | null | null |
## Dataset viewer issue for '*disaster_response_messages*'
**Link:** https://huggingface.co/datasets/disaster_response_messages
Dataset unavailable. Link dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv
Am I the one who added this dataset ? No
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3416/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3415
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3415/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3415/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3415/events
|
https://github.com/huggingface/datasets/issues/3415
| 1,076,472,534 |
I_kwDODunzps5AKarW
| 3,415 |
Non-deterministic tests: CI tests randomly fail
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"I think it might come from two different issues:\r\n1. Google Drive is an unreliable host, mainly because of quota limitations\r\n2. the staging environment can sometimes raise some errors\r\n\r\nFor Google Drive tests we could set up some retries with backup URLs if necessary I guess.\r\nFor staging on the other hand, I guess we can investigate what causes this and discuss with the back-end team",
"Closed by:\r\n- #3982"
] | 2021-12-10T06:08:59 | 2022-03-31T16:38:51 | 2022-03-31T16:38:51 |
MEMBER
| null | null | null |
## Describe the bug
Some CI tests fail randomly.
1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux:
```
=========================== short test summary info ============================
FAILED tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol[https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh-zip]
FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive - Fi...
FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped
= 3 failed, 3553 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 192.79s (0:03:12) =
```
2. After re-running the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/57bfe1f342cd3c59d2510b992d5f06a0761eb147, there was only 1 failing test (one on Linux and a different one on Windows):
- On Linux:
```
=========================== short test summary info ============================
FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped
= 1 failed, 3555 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 199.76s (0:03:19) =
```
- On Windows:
```
=========================== short test summary info ===========================
FAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_without_script
= 1 failed, 3551 passed, 2954 skipped, 2 xfailed, 1 xpassed, 121 warnings in 478.58s (0:07:58) =
```
The test `tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped` passes locally.
3. After re-running again the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/39f32f2119cf91b86867216bb5c356c586503c6a, ALL the tests passed.
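A minimal sketch of the retry idea mentioned in the comments above (the helper name, backoff values, and use of `requests` are illustrative assumptions, not actual test code):
```python
import time
import requests

def download_with_retries(url, backup_urls=(), max_retries=3):
    # Try the main URL first, then any backup URLs, backing off between attempts.
    last_error = None
    for candidate in (url, *backup_urls):
        for attempt in range(max_retries):
            try:
                response = requests.get(candidate, timeout=30)
                response.raise_for_status()
                return response.content
            except requests.RequestException as error:
                last_error = error
                time.sleep(2 ** attempt)
    raise RuntimeError(f"Could not download {url}") from last_error
```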
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3415/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3411
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3411/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3411/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3411/events
|
https://github.com/huggingface/datasets/issues/3411
| 1,075,846,272 |
I_kwDODunzps5AIByA
| 3,411 |
[chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script
|
{
"login": "hyusterr",
"id": 52968111,
"node_id": "MDQ6VXNlcjUyOTY4MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/52968111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyusterr",
"html_url": "https://github.com/hyusterr",
"followers_url": "https://api.github.com/users/hyusterr/followers",
"following_url": "https://api.github.com/users/hyusterr/following{/other_user}",
"gists_url": "https://api.github.com/users/hyusterr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyusterr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyusterr/subscriptions",
"organizations_url": "https://api.github.com/users/hyusterr/orgs",
"repos_url": "https://api.github.com/users/hyusterr/repos",
"events_url": "https://api.github.com/users/hyusterr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyusterr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] | null |
[
"@LysandreJik not so sure who to @\r\nCould you help?",
"Hi @hyusterr, I believe it is @wlhgtc from https://github.com/huggingface/transformers/pull/9887"
] | 2021-12-09T17:54:35 | 2021-12-22T11:21:33 | null |
NONE
| null | null | null |
## Describe the bug
Model I am using (Bert, XLNet ...): bert-base-chinese
The problem arises when using:
* the official example script `run_mlm_wwm.py`: https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py
The task I am working on is: pretraining with whole word masking on my own dataset and ref.json file
I followed the run_mlm_wwm.py procedure to do whole word masking as a pretraining task. My file is in .txt form, where one line represents one sample, with `9,264,784` Chinese lines in total. The ref.json file also contains 9,264,784 lines of whole word masking reference data for my Chinese corpus. But when I try to adapt the run_mlm_wwm.py script, it turns out that somehow after
`datasets["train"] = load_dataset(...`
`len(datasets["train"])` returns `9,265,365`
then, after `tokenized_datasets = datasets.map(...`
`len(tokenized_datasets["train"])` returns `9,265,279`
I'm really confused; I tried to trace the code myself but still can't tell what happened after a week of trying.
I want to know what happens in the `load_dataset()` function and in `datasets.map` here, and how I ended up with more lines of data than I put in. So I'm here to ask.
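A small diagnostic sketch, assuming the corpus is a plain-text file with one example per line (the file name below is hypothetical), to compare the raw line count with what `load_dataset` produces:
```python
from datasets import load_dataset

train_file = "corpus.txt"  # hypothetical path, replace with the actual corpus file

# Count raw lines and non-empty lines in the text file.
with open(train_file, encoding="utf-8") as f:
    lines = f.read().splitlines()
print("raw lines:", len(lines), "non-empty lines:", sum(1 for line in lines if line.strip()))

# Compare with the number of rows load_dataset produces.
ds = load_dataset("text", data_files={"train": train_file})["train"]
print("dataset rows:", len(ds))
```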
## To reproduce
Sorry that I can't provide my data here since it does not belong to me, but I'm sure I removed the blank lines.
## Expected behavior
I expect the code to run as it should, but the AssertionError at line 167 keeps being raised because the number of lines in the reference JSON and in datasets['train'] differ.
Thanks for your patient reading!
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 3.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3411/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3408
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3408/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3408/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3408/events
|
https://github.com/huggingface/datasets/issues/3408
| 1,075,642,915 |
I_kwDODunzps5AHQIj
| 3,408 |
Typo in Dataset viewer error message
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false |
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Fixed, thanks\r\n<img width=\"661\" alt=\"Capture d’écran 2021-12-22 à 12 02 30\" src=\"https://user-images.githubusercontent.com/1676121/147082881-cf700e8d-0511-4431-b214-d6cf8137db10.png\">\r\n"
] | 2021-12-09T14:34:02 | 2021-12-22T11:02:53 | 2021-12-22T11:02:53 |
MEMBER
| null | null | null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource"

Am I the one who added this dataset ?
N/A
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3408/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3405
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3405/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3405/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3405/events
|
https://github.com/huggingface/datasets/issues/3405
| 1,074,360,362 |
I_kwDODunzps5ACXAq
| 3,405 |
ZIP format inference does not work when files located in a dir inside the archive
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-12-08T12:32:15 | 2021-12-08T13:03:29 | 2021-12-08T13:03:29 |
MEMBER
| null | null | null |
## Describe the bug
When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work.
It only works for files located in the root directory of the ZIP file.
## Steps to reproduce the bug
```python
infer_module_for_data_files_in_archives(["path/to/zip/file.zip"], False)
```
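A minimal sketch (not the actual `datasets` implementation) of inspecting every archived member, including files nested inside directories, when inferring the format:
```python
import os
import zipfile
from collections import Counter

def infer_formats_in_zip(archive_path):
    # Look at every archived member, regardless of how deeply it is nested.
    with zipfile.ZipFile(archive_path) as zf:
        extensions = [
            os.path.splitext(name)[1].lstrip(".").lower()
            for name in zf.namelist()
            if not name.endswith("/")  # skip directory entries
        ]
    return Counter(ext for ext in extensions if ext)

# e.g. Counter({'csv': 12}) for an archive whose CSV files sit inside a folder
print(infer_formats_in_zip("path/to/zip/file.zip"))
```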
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3405/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3404
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3404/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3404/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3404/events
|
https://github.com/huggingface/datasets/issues/3404
| 1,073,657,561 |
I_kwDODunzps4__rbZ
| 3,404 |
Optimize ZIP format inference
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-12-07T18:44:49 | 2021-12-14T17:08:41 | 2021-12-14T17:08:41 |
MEMBER
| null | null | null |
**Is your feature request related to a problem? Please describe.**
When hundreds of ZIP files are present in a dataset, format inference takes too long.
See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497
**Describe the solution you'd like**
Iterate over a maximum number of files.
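A minimal sketch of what such a cap could look like (the constant name, limit, and extension set are illustrative assumptions, not the actual `datasets` code):
```python
from itertools import islice

MAX_FILES_TO_INSPECT = 200  # assumed cap, the real value/name may differ

def infer_module_capped(data_files):
    # Only inspect the first MAX_FILES_TO_INSPECT files instead of all of them.
    for filepath in islice(data_files, MAX_FILES_TO_INSPECT):
        extension = filepath.split(".")[-1].lower()
        if extension in {"csv", "json", "jsonl", "txt", "parquet"}:
            return extension
    return None
```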
CC: @lhoestq
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3404/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3403
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3403/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3403/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3403/events
|
https://github.com/huggingface/datasets/issues/3403
| 1,073,622,120 |
I_kwDODunzps4__ixo
| 3,403 |
Cannot import name 'maybe_sync'
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! Can you try updating `fsspec` ? The minimum version is `2021.05.0`",
"hey @lhoestq. I'm using `fsspec-2021.11.1` but still getting that error.",
"Maybe this discussion can help:\r\n\r\nhttps://github.com/fsspec/filesystem_spec/issues/597#issuecomment-958646964",
"Thanks @lhoestq. Downgrading `fsspec and s3fs` to `2021.10` fixed this issue!"
] | 2021-12-07T17:57:59 | 2021-12-17T07:00:35 | 2021-12-17T07:00:35 |
CONTRIBUTOR
| null | null | null |
## Describe the bug
Cannot seem to import datasets when running the run_summarizer.py script on a VM set up on OVHcloud.
## Steps to reproduce the bug
```python
from datasets import load_dataset
```
## Expected results
No error
## Actual results
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.7/site-packages/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 48, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import (
File "/opt/conda/lib/python3.7/site-packages/datasets/features/__init__.py", line 2, in <module>
from .audio import Audio
File "/opt/conda/lib/python3.7/site-packages/datasets/features/audio.py", line 8, in <module>
from ..utils.streaming_download_manager import xopen
File "/opt/conda/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 16, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/__init__.py", line 13, in <module>
from .s3filesystem import S3FileSystem # noqa: F401
File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/s3filesystem.py", line 1, in <module>
import s3fs
File "/opt/conda/lib/python3.7/site-packages/s3fs/__init__.py", line 1, in <module>
from .core import S3FileSystem, S3File
File "/opt/conda/lib/python3.7/site-packages/s3fs/core.py", line 11, in <module>
from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync
ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/opt/conda/lib/python3.7/site-packages/fsspec/asyn.py)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.0
- Platform: OVH Cloud Tesla V100 Machine
- Python version: 3.7.9
- PyArrow version: 6.0.1
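A quick way to check the installed versions without importing `s3fs` (which fails here), assuming `pkg_resources` is available; the comments above report that pinning both libraries to their 2021.10 releases resolves the mismatch, so the exact pins below are an assumption based on that:
```python
import pkg_resources

# Importing s3fs directly would fail here (it expects fsspec.asyn.maybe_sync),
# so read the installed versions from the package metadata instead.
print("fsspec:", pkg_resources.get_distribution("fsspec").version)
print("s3fs:", pkg_resources.get_distribution("s3fs").version)
# e.g. pin matching releases: pip install "fsspec==2021.10.0" "s3fs==2021.10.0"
```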
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3403/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3401
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3401/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3401/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3401/events
|
https://github.com/huggingface/datasets/issues/3401
| 1,073,603,508 |
I_kwDODunzps4__eO0
| 3,401 |
Add Wikimedia pre-processed datasets
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"As we are planning to stop using Apache Beam (our `datasets.BeamBasedBuilder`) for the generation of some datasets (including [Wikipedia](https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py)), I have been working on [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) to:\r\n- Port the Wikipedia generation script to use `datasets.GeneratorBasedBuilder` instead and place it under the \"script\" branch: https://huggingface.co/datasets/wikimedia/wikipedia/tree/script\r\n- Improve the efficiency of the code and make it highly parellizable. See:\r\n - [Parallelize dataset generation over multistreams](https://huggingface.co/datasets/wikimedia/wikipedia/commit/610c55864586dbdad7ac5a13c21a367bb000a1d3)\r\n - [Parallelize data downloading](https://huggingface.co/datasets/wikimedia/wikipedia/commit/b35d406bd9e81f08c68e7bf95d130d2f506dfe77)\r\n\r\n With these improvements, I can generate the English Wikipedia in 5h using 16 processors in a machine without needing a huge amount of RAM (the machine had 32 GB, but I think less can be used as well):\r\n ```python\r\n ds = load_dataset(\"wikimedia/wikipedia\", revision=\"script\", date=\"20231101\", language=\"en\", host=\"https://mirror.accum.se/mirror/wikimedia.org/dumps\", split=\"train\", num_proc=16)\r\n ```\r\n- Pre-process all Wikipedia languages for the latest 2023-11-01 dump and make them available to the entire community for easy use:\r\n ```python\r\n ds = load_dataset(\"wikimedia/wikipedia\", \"20231101.en\", split=\"train\", num_proc=16)\r\n ```\r\nCC: @geohci "
] | 2021-12-07T17:33:19 | 2023-11-23T07:56:29 | null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** Add pre-processed data to:
  - *wikimedia/wikipedia*: https://huggingface.co/datasets/wikimedia/wikipedia
  - *wikimedia/wikisource*: https://huggingface.co/datasets/wikimedia/wikisource
- **Description:** Add pre-processed data to the Hub for all languages
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** This will be very useful for the NLP community, as the pre-processing has a high cost for a lot of researchers (both in computation and in knowledge)
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CC: @geohci, @yjernite
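For reference, once the pre-processed data is on the Hub it can be loaded directly, as shown in the comment above:
```python
from datasets import load_dataset

# Pre-processed dump made available under the wikimedia org (see the comment above).
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", num_proc=16)
print(ds)
```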
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3401/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3400
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3400/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3400/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3400/events
|
https://github.com/huggingface/datasets/issues/3400
| 1,073,600,382 |
I_kwDODunzps4__dd-
| 3,400 |
Improve Wikipedia loading script
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false | null |
[] | null |
[
"Thanks! See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words)",
"Closed by:\r\n- #3435"
] | 2021-12-07T17:29:25 | 2022-03-22T16:52:28 | 2022-03-22T16:52:28 |
MEMBER
| null | null | null |
As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions:
- `_extract_content(filepath)`:
  - Replace the `.startswith("#redirect")` check with a more structured approach: `if elem.find(f"./{namespace}redirect") is None: continue`
- `_parse_and_clean_wikicode(raw_content, parser)` (see the sketch below):
  - Remove rm_template from cleaning -- this is redundant with `.strip_code()` from mwparserfromhell
  - Build a language-specific list of namespace prefixes to filter out, per `get_namespace_prefixes` below
  - Optional: strip prefixes like categories -- e.g., Category:Towns in Tianjin becomes Towns in Tianjin
  - Optional: strip magic words
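A rough sketch of the suggested cleaning steps (this is not the actual `wikipedia.py` code; the namespace prefix list is illustrative and should be built per language, e.g. with a `get_namespace_prefixes` helper):
```python
import mwparserfromhell

# Illustrative prefixes only -- the real list should be built per language from the dump.
NAMESPACE_PREFIXES = ["Category", "File", "Image", "Template", "Help"]

def parse_and_clean_wikicode(raw_content):
    # strip_code() already removes templates, so a separate rm_template pass is redundant.
    text = mwparserfromhell.parse(raw_content).strip_code()
    lines = []
    for line in text.splitlines():
        # Strip namespace prefixes, e.g. "Category:Towns in Tianjin" -> "Towns in Tianjin".
        for prefix in NAMESPACE_PREFIXES:
            if line.startswith(prefix + ":"):
                line = line[len(prefix) + 1 :]
                break
        lines.append(line)
    return "\n".join(lines)
```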
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3400/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3399
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3399/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3399/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3399/events
|
https://github.com/huggingface/datasets/issues/3399
| 1,073,593,861 |
I_kwDODunzps4__b4F
| 3,399 |
Add Wikisource dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"See notebook by @geohci: https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikisource%20Processing.ipynb"
] | 2021-12-07T17:21:31 | 2021-12-10T17:26:26 | null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** *wikisource*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** Additional high quality textual data, besides Wikipedia.
Add the loading script as a "canonical" dataset (as is the case for "wikipedia").
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CC: @geohci, @yjernite
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3399/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3398
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3398/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3398/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3398/events
|
https://github.com/huggingface/datasets/issues/3398
| 1,073,590,384 |
I_kwDODunzps4__bBw
| 3,398 |
Add URL field to Wikimedia dataset instances: wikipedia,...
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false | null |
[] | null |
[
"@geohci, I think the field \"url\" does not appear in the Wikimedia dumps. Therefore I guess we should generate it, using the \"title\" field and making some transformation of it (replacing spaces with underscores) and prepending the domain (created using the language)?",
"Indeed:\r\n\r\n> To re-distribute text on Wikipedia in any form, provide credit to the authors either by including a) a [hyperlink](https://en.wikipedia.org/wiki/Hyperlink) (where possible) or [URL](https://en.wikipedia.org/wiki/URL) to the page or pages you are re-using, b) a hyperlink (where possible) or URL to an alternative, stable online copy which is freely accessible, which conforms with the license, and which provides credit to the authors in a manner equivalent to the credit given on this website, or c) a list of all authors. (Any list of authors may be filtered to exclude very small or irrelevant contributions.) This applies to text developed by the Wikipedia community. Text from external sources may attach additional attribution requirements to the work, which should be indicated on an article's face or on its talk page. For example, a page may have a banner or other notation indicating that some or all of its content was originally published somewhere else. Where such notations are visible in the page itself, they should generally be preserved by re-users.\r\n\r\nsource: https://en.wikipedia.org/wiki/Wikipedia:Copyrights\r\n\r\nI guess it's fine to add the URL field - it can be constructed easily from the title page IIRC.",
"yep, sorry forgot that that wasn't already in the dumps. specifically `f\"https://{language}.wikipedia.org/wiki/{title.replace(' ', '_')}` should do it",
"Thanks @geohci.\r\n\r\nI had already been looking for information about the conversion from title to URL and I found that apart from replacing blanks with underscores, some other special character must also be percent-encoded (e.g. `\"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL\r\n\r\nTherefore, I have finally used `urllib.parse.quote` function. This additionally percent-encodes non-ASCII characters, but Wikimedia docs say these are equivalent:\r\n> For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed.\r\n> [[%C3%80_propos_de_M%C3%A9ta]]\r\n> is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL\r\n> [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta)\r\n> while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same. ",
"Closed by:\r\n- #3789 "
] | 2021-12-07T17:17:27 | 2022-03-22T16:53:27 | 2022-03-22T16:53:27 |
MEMBER
| null | null | null |
As reported by @geohci, in order to host pre-processed data on the Hub, we should add the full URL to data instances (new field "url"), so that we conform to the proper attribution required by the license. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2
This should be done for all pre-processed datasets under "wikimedia" org in the Hub: https://huggingface.co/wikimedia
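A minimal sketch of constructing that URL from the language code and page title, following the approach discussed in the comments above (spaces become underscores and the remaining special characters are percent-encoded with `urllib.parse.quote`):
```python
from urllib.parse import quote

def wikipedia_url(language: str, title: str) -> str:
    # Spaces become underscores; quote() percent-encodes the remaining special
    # characters (e.g. '"' -> '%22'), as described in the comments above.
    return f"https://{language}.wikipedia.org/wiki/{quote(title.replace(' ', '_'))}"

print(wikipedia_url("en", "Towns in Tianjin"))
# https://en.wikipedia.org/wiki/Towns_in_Tianjin
```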
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3398/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3396
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3396/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3396/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3396/events
|
https://github.com/huggingface/datasets/issues/3396
| 1,073,467,183 |
I_kwDODunzps4_-88v
| 3,396 |
Install Audio dependencies to support audio decoding
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4027368468,
"node_id": "LA_kwDODunzps7wDMQU",
"url": "https://api.github.com/repos/huggingface/datasets/labels/audio_column",
"name": "audio_column",
"color": "F83ACF",
"default": false,
"description": ""
}
] |
closed
| false |
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"https://huggingface.co/datasets/projecte-aina/parlament_parla -> works (but we still have to show an audio player)\r\n\r\nhttps://huggingface.co/datasets/openslr -> another issue: `Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/zip:/asr_javanese/data/00/00004fe6aa.flac'`",
"Done",
"https://huggingface.co/datasets/projecte-aina/parlament_parla/viewer/clean/train works\r\n\r\n<img width=\"1535\" alt=\"Capture d’écran 2022-04-12 à 13 58 47\" src=\"https://user-images.githubusercontent.com/1676121/162957855-cb3d9e2e-4b61-488c-99ca-8065cd8fe377.png\">\r\n",
"But https://huggingface.co/datasets/openslr/viewer does not work\r\n\r\n<img width=\"678\" alt=\"Capture d’écran 2022-04-12 à 13 59 46\" src=\"https://user-images.githubusercontent.com/1676121/162958013-e31ef2ae-f886-47b7-9f27-664ed3d4b5a1.png\">\r\n\r\nSame issue as #4126:\r\n\r\n```\r\nStatus code: 400\r\nException: TypeError\r\nMessage: __init__() got an unexpected keyword argument 'audio_column'\r\n```",
"Fixed:\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-04-25 à 18 11 51\" src=\"https://user-images.githubusercontent.com/1676121/165129813-018ece9e-8b20-4544-844d-4e88148e738f.png\">\r\n"
] | 2021-12-07T15:11:36 | 2022-04-25T16:12:22 | 2022-04-25T16:12:01 |
MEMBER
| null | null | null |
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'
**Link:** *https://huggingface.co/datasets/openslr*
**Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla*
Error:
```
Status code: 400
Exception: ImportError
Message: To support decoding audio files, please install 'librosa'.
```
Am I the one who added this dataset ? Yes-No
- openslr: No
- projecte-aina/parlament_parla: Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3396/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3394
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3394/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3394/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3394/events
|
https://github.com/huggingface/datasets/issues/3394
| 1,073,396,308 |
I_kwDODunzps4_-rpU
| 3,394 |
Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"According to this [comment in the forum](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/8?u=lhoestq), using `push_to_hub` on a dataset with `ClassLabel` can also make the feature simply disappear when it's reloaded !",
"Maybe we can also fix https://github.com/huggingface/datasets/issues/3035 while working on this because, as pointed out in my initial post, `save_to_disk` also saves the `dataset_info.json` file."
] | 2021-12-07T14:08:30 | 2021-12-21T17:00:09 | 2021-12-21T17:00:09 |
CONTRIBUTOR
| null | null | null |
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parquet files in `push_to_hub`, but also the dataset `info` (stored in a JSON file).
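A minimal sketch of the behaviour and a possible stopgap (the repo id and label names below are placeholders):

```python
from datasets import Dataset, Features, Value, ClassLabel, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = Dataset.from_dict({"text": ["bad", "good"], "label": [0, 1]}, features=features)
ds.push_to_hub("username/demo")  # hypothetical repo

reloaded = load_dataset("username/demo", split="train")
print(reloaded.features["label"])  # reloads as a plain integer Value instead of ClassLabel (the reported behaviour)

# Stopgap until the dataset info is pushed alongside the parquet files:
reloaded = reloaded.cast_column("label", ClassLabel(names=["neg", "pos"]))
```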
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3394/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3394/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3393
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3393/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3393/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3393/events
|
https://github.com/huggingface/datasets/issues/3393
| 1,073,189,777 |
I_kwDODunzps4_95OR
| 3,393 |
Common Voice Belarusian Dataset
|
{
"login": "wiedymi",
"id": 42713027,
"node_id": "MDQ6VXNlcjQyNzEzMDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/42713027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wiedymi",
"html_url": "https://github.com/wiedymi",
"followers_url": "https://api.github.com/users/wiedymi/followers",
"following_url": "https://api.github.com/users/wiedymi/following{/other_user}",
"gists_url": "https://api.github.com/users/wiedymi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wiedymi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wiedymi/subscriptions",
"organizations_url": "https://api.github.com/users/wiedymi/orgs",
"repos_url": "https://api.github.com/users/wiedymi/repos",
"events_url": "https://api.github.com/users/wiedymi/events{/privacy}",
"received_events_url": "https://api.github.com/users/wiedymi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] |
open
| false | null |
[] | null |
[] | 2021-12-07T10:37:02 | 2021-12-09T15:56:03 | null |
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *Common Voice Belarusian Dataset*
- **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)*
- **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)*
- **Motivation:** *It has more than 7 GB of data, so it would be great to have it in this package so anyone can try to train something for the Belarusian language.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3393/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3393/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3392
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3392/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3392/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3392/events
|
https://github.com/huggingface/datasets/issues/3392
| 1,073,073,408 |
I_kwDODunzps4_9c0A
| 3,392 |
Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false | null |
[] | null |
[
"This issue was fixed by me calling `all_datasets.push_to_hub(\"hackernews_hiring_posts\")`.\r\n\r\nThe previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.\r\n"
] | 2021-12-07T08:41:01 | 2021-12-07T14:04:28 | 2021-12-07T14:04:28 |
CONTRIBUTOR
| null | null | null |
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
**Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts
*short description of the issue*
Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603
Am I the one who added this dataset ?
No -> @dansbecker
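For reference, a toy sketch of the upload path that ended up fixing the preview (pushing with `push_to_hub` instead of `save_to_disk` plus a manual git push); the split content below is a placeholder:

```python
from datasets import Dataset, DatasetDict

# Stand-in for the real splits; the point is the upload path, not the data.
all_datasets = DatasetDict({"train": Dataset.from_dict({"text": ["Hiring: ML engineer"]})})
all_datasets.push_to_hub("dansbecker/hackernews_hiring_posts")  # uploads the data plus the metadata the viewer needs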
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3392/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3391
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3391/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3391/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3391/events
|
https://github.com/huggingface/datasets/issues/3391
| 1,072,849,055 |
I_kwDODunzps4_8mCf
| 3,391 |
method to select columns
|
{
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"duplicate of #2655"
] | 2021-12-07T02:44:19 | 2021-12-07T02:45:27 | 2021-12-07T02:45:27 |
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
* There is currently no way to select a subset of columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets` this results in an error (a rough sketch of the current workaround is included below).
**Describe the solution you'd like**
* A new method that can be used to create a new dataset with only a list of specified columns.
**Describe alternatives you've considered**
`.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)`
Or
`.select(self, indices: Iterable = None, columns: List[str] = None)`
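A rough sketch of the workaround that is needed today, for comparison (the GLUE/MRPC dataset is used only for illustration):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")

# Column selection currently has to be expressed as the complement via remove_columns:
keep = ["sentence1", "sentence2"]
ds_subset = ds.remove_columns([c for c in ds.column_names if c not in keep])
print(ds_subset.column_names)  # ['sentence1', 'sentence2']
```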
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3391/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3390
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3390/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3390/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3390/events
|
https://github.com/huggingface/datasets/issues/3390
| 1,072,462,456 |
I_kwDODunzps4_7Hp4
| 3,390 |
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
|
{
"login": "R4ZZ3",
"id": 25264037,
"node_id": "MDQ6VXNlcjI1MjY0MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25264037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/R4ZZ3",
"html_url": "https://github.com/R4ZZ3",
"followers_url": "https://api.github.com/users/R4ZZ3/followers",
"following_url": "https://api.github.com/users/R4ZZ3/following{/other_user}",
"gists_url": "https://api.github.com/users/R4ZZ3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/R4ZZ3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/R4ZZ3/subscriptions",
"organizations_url": "https://api.github.com/users/R4ZZ3/orgs",
"repos_url": "https://api.github.com/users/R4ZZ3/repos",
"events_url": "https://api.github.com/users/R4ZZ3/events{/privacy}",
"received_events_url": "https://api.github.com/users/R4ZZ3/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Got solved it with push_to_hub, closing"
] | 2021-12-06T18:22:49 | 2021-12-06T20:22:05 | 2021-12-06T20:22:05 |
NONE
| null | null | null |
## Describe the bug
I have prepared a dataset with `datasets` and I am now trying to load it back as Finnish-NLP/voxpopuli_fi.
I get "KeyError: 'Field "builder_name" does not exist in table schema'".
My dataset folder and files should look like what @patrickvonplaten has here https://huggingface.co/datasets/flax-community/german-common-voice-processed
How my voxpopuli dataset looks like:

Part of the processing (path column is the absolute path to audio files)
```
def add_audio_column(example):
example['audio'] = example['path']
return example
voxpopuli = voxpopuli.map(add_audio_column)
voxpopuli.cast_column("audio", Audio())
voxpopuli["audio"] <-- to my knowledge this does load the local files and prepares those arrays
voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16_000)) resampling 16kHz
```
I have then saved it to disk:
`voxpopuli.save_to_disk('/asr_disk/datasets_processed_new/voxpopuli')`
and made the folder structure the same as @patrickvonplaten's.
I also get the same error while trying to `load_dataset` from his repo:

## Steps to reproduce the bug
```python
dataset = load_dataset("Finnish-NLP/voxpopuli_fi")
```
## Expected results
Dataset is loaded correctly and looks like in the first picture
## Actual results
Loading throws keyError:
KeyError: 'Field "builder_name" does not exist in table schema'
Resources I have been trying to follow:
https://huggingface.co/docs/datasets/audio_process.html
https://huggingface.co/docs/datasets/share_dataset.html
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.2.dev0
- Platform: Ubuntu 20.04.2 LTS
- Python version: 3.8.12
- PyArrow version: 6.0.1
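For reference, a sketch of the `push_to_hub` route reported in the comments as the eventual fix (it uploads the dataset info that `load_dataset` is missing here); the path is taken from above, treat the snippet as untested:

```python
from datasets import load_from_disk

voxpopuli = load_from_disk("/asr_disk/datasets_processed_new/voxpopuli")
voxpopuli.push_to_hub("Finnish-NLP/voxpopuli_fi")  # pushes the data together with the dataset metadata
```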
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3390/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3389
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3389/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3389/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3389/events
|
https://github.com/huggingface/datasets/issues/3389
| 1,072,191,865 |
I_kwDODunzps4_6Fl5
| 3,389 |
Add EDGAR
|
{
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
open
| false | null |
[] | null |
[
"cc @juliensimon ",
"Datasets are not tracked in this repository anymore. But you can make your own dataset in the huggingface hub"
] | 2021-12-06T14:06:11 | 2022-10-05T10:40:22 | null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** EDGAR Database
- **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. Containing millions of company and individual filings, EDGAR benefits investors, corporations, and the U.S. economy overall by increasing the efficiency, transparency, and fairness of the securities markets. The system processes about 3,000 filings per day, serves up 3,000 terabytes of data to the public annually, and accommodates 40,000 new filers per year on average. EDGAR® and EDGARLink® are registered trademarks of the SEC.
- **Data:** https://www.sec.gov/os/accessing-edgar-data
- **Motivation:** Enabling and improving FSI (Financial Services Industry) datasets to increase ease of use
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3389/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3385
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3385/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3385/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3385/events
|
https://github.com/huggingface/datasets/issues/3385
| 1,071,742,310 |
I_kwDODunzps4_4X1m
| 3,385 |
None batched `with_transform`, `set_transform`
|
{
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"Hi ! Thanks for the suggestion :)\r\nIt makes sense to me, and it can surely be implemented by wrapping the user's function to make it a batched function. However I'm not a big fan of the inconsistency it would create with `map`: `with_transform` is batched by default while `map` isn't.\r\n\r\nIs there something you would like to contribute ? I can give you some pointers if you want",
"Hi @lhoestq ,\r\nSorry I missed your reply.\r\n\r\nI would love to contribute. But I don't know which solution would be the best for this repo.\r\n\r\n> However I'm not a big fan of the inconsistency it would create with map: with_transform is batched by default while map isn't.\r\n\r\nI agree. What do you think about the alternative solutions?\r\n\r\n> * Convert a non-batched transform function to batched one myself.\r\n\r\nThis won't be able to use torch loader multi-worker.\r\n\r\n> * Wrap a 🤗 Dataset with torch Dataset, and add a __getitem__. 🙄\r\n\r\nThis is actually pretty simple.\r\n\r\n```python\r\nimport torch\r\n\r\nclass LazyMapTorchDataset(torch.utils.data.Dataset):\r\n def __init__(self, ds, fn):\r\n self.ds = ds\r\n self.fn = fn\r\n def __getitem__(self, i):\r\n return self.fn(self.ds[i])\r\n\r\nd = [{1:2, 2:3}, {1:3, 2:4}]\r\nds = LazyMapTorchDataset(d, lambda x:{k:v*2 for k,v in x.items()})\r\nfor i in range(2):\r\n print(f'before {d[i]}')\r\n print(f'after {ds[i]}')\r\n```\r\n```\r\nbefore {1: 2, 2: 3}\r\nafter {1: 4, 2: 6}\r\nbefore {1: 3, 2: 4}\r\nafter {1: 6, 2: 8}\r\n```\r\n\r\nBut this requires converting data to torch tensor myself. And this is really similar to `.map()`, why not just use it? So I have the next solution.\r\n\r\n> * Have lazy=False in Dataset.map, and returns a LazyDataset if lazy=True. This way the same map interface can be used, and existing code can be updated with one argument change.\r\n\r\nI think I like this solution best. Because `.with_transform` is entangled with `.with_format`, so seems more flexible to modify the `.map` than to modify `.with_transform`.\r\n\r\nThe usage looks nice, too.\r\n```python\r\n# lazy, one to one, can be parallelized via torch loader, no need to set `num_worker` beforehand.\r\ndataset = dataset.map(fn, lazy=True, batched=False)\r\n# collate_fn\r\ndataloader = Dataloader(dataset.with_format('torch'), collate_fn=collate_fn, num_workers=...) \r\n```\r\n\r\nThere are some minor decisions like whether a lazy map should be allowed before another map, but I think we can work it out later. The implementation can probably borrow from `IterableDataset`.",
"I like the idea of lazy map. On the other hand we should only have either lazy map or `with_transform` (not both). That's why I'd rather stick with `with_transform` for now (but maybe we can consider it for later major releases like `datasets` v2).\r\n\r\nI understand the issue with `with_transform` and `with_format` being exclusive, maybe we can separate them: first transform, them format.\r\n\r\nFinally I think what's also going to be important in the end will be the addition of multiprocessing to transforms"
] | 2021-12-06T05:20:54 | 2022-01-17T15:25:01 | null |
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow a non-batched transform.
**Describe the solution you'd like**
Have a `batched=True` argument in `Datasets.with_transform`
**Describe alternatives you've considered**
* Convert a non-batched transform function to a batched one myself (a rough sketch of this is included below).
* Wrap a 🤗 Dataset with torch Dataset, and add a `__getitem__`. 🙄
* Have `lazy=False` in `Dataset.map`, and returns a `LazyDataset` if `lazy=True`. This way the same `map` interface can be used, and existing code can be updated with one argument change.
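A rough sketch of the first alternative (wrapping a per-example function so it can be passed to the batched `with_transform`); the dataset and column names are only illustrative:

```python
from datasets import load_dataset

def unbatch(fn):
    # Wrap a single-example transform so it can be used with the batched with_transform.
    def batched_fn(batch):
        keys = list(batch.keys())
        examples = [dict(zip(keys, values)) for values in zip(*batch.values())]
        outputs = [fn(example) for example in examples]
        return {key: [out[key] for out in outputs] for key in outputs[0]}
    return batched_fn

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.with_transform(unbatch(lambda ex: {"text": ex["sentence1"] + " " + ex["sentence2"]}))
print(ds[0]["text"])
```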
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3385/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/3381
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3381/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3381/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3381/events
|
https://github.com/huggingface/datasets/issues/3381
| 1,071,283,879 |
I_kwDODunzps4_2n6n
| 3,381 |
Unable to load audio_features from common_voice dataset
|
{
"login": "ashu5644",
"id": 8268102,
"node_id": "MDQ6VXNlcjgyNjgxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashu5644",
"html_url": "https://github.com/ashu5644",
"followers_url": "https://api.github.com/users/ashu5644/followers",
"following_url": "https://api.github.com/users/ashu5644/following{/other_user}",
"gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions",
"organizations_url": "https://api.github.com/users/ashu5644/orgs",
"repos_url": "https://api.github.com/users/ashu5644/repos",
"events_url": "https://api.github.com/users/ashu5644/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashu5644/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! Feel free to access `batch[\"audio\"][\"array\"]` and `batch[\"audio\"][\"sampling_rate\"]` instead\r\n\r\n`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from)",
"Thanks for the information. It works.",
"Cool ! Closing this issue then"
] | 2021-12-04T19:59:11 | 2021-12-06T17:52:42 | 2021-12-06T17:52:42 |
NONE
| null | null | null |
## Describe the bug
I am not able to load audio features from the common_voice dataset.
## Steps to reproduce the bug
```
from datasets import load_dataset
import torchaudio
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
```
## Expected results
This piece of code should return test_dataset after loading audio features.
## Actual results
Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
"Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory
0%| | 0/3 [00:00<?, ?ex/s]
Traceback (most recent call last):
File "demo_file.py", line 23, in <module>
test_dataset = test_dataset.map(speech_file_to_array_fn)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map
desc=desc,
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated
result = f(decorated_item, *args, **kwargs)
File "demo_file.py", line 19, in speech_file_to_array_fn
speech_array, sampling_rate = torchaudio.load(batch["path"])
File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load
filepath, frame_offset, num_frames, normalize, channels_first, format)
RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
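For reference, a sketch of the approach suggested in the comments for `datasets` 1.16+ (reading the decoded `audio` field instead of calling `torchaudio.load` on `path`, with resampling delegated to the `Audio` feature); treat it as untested:

```python
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))  # resample on the fly

def speech_file_to_array_fn(batch):
    batch["speech"] = batch["audio"]["array"]  # already decoded; no file path handling needed
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```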
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3381/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/3380
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3380/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3380/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3380/events
|
https://github.com/huggingface/datasets/issues/3380
| 1,071,166,270 |
I_kwDODunzps4_2LM-
| 3,380 |
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2021-12-04T09:18:33 | 2022-01-11T12:29:53 | 2022-01-11T12:29:53 |
MEMBER
| null | null | null |
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week!
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://hf.co/oss-survey)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/3380/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/3380/timeline
| null |
completed
|