| column | type | lengths / values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 48–51 |
| id | int64 | 600M–2.19B |
| node_id | stringlengths | 18–24 |
| number | int64 | 2–6.73k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–4 |
| milestone | dict | |
| comments | listlengths | 0–30 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | null | |
| pull_request | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
https://api.github.com/repos/huggingface/datasets/issues/2415
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2415/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2415/comments
https://api.github.com/repos/huggingface/datasets/issues/2415/events
https://github.com/huggingface/datasets/issues/2415
903,923,097
MDU6SXNzdWU5MDM5MjMwOTc=
2,415
Cached dataset not loaded
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "It actually seems to happen all the time in above configuration:\r\n* the function `filter_by_duration` correctly loads cached processed dataset\r\n* the function `prepare_dataset` is always reexecuted\r\n\r\nI end up solving the issue by saving to disk my dataset at the end but I'm still wondering if it's a bug or limitation here.", "Hi ! The hash used for caching `map` results is the fingerprint of the resulting dataset. It is computed using three things:\r\n- the old fingerprint of the dataset\r\n- the hash of the function\r\n- the hash of the other parameters passed to `map`\r\n\r\nYou can compute the hash of your function (or any python object) with\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nmy_func = lambda x: x + 1\r\nprint(Hasher.hash(my_func))\r\n```\r\n\r\nIf `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.", "> If `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.\r\n\r\nYes I think that was the issue.\r\n\r\nFor the hash of the function:\r\n* does it consider just the name or the actual code of the function\r\n* does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)", "> does it consider just the name or the actual code of the function\r\n\r\nIt looks at the name and the actual code and all variables such as recursively. It uses `dill` to do so, which is based on `pickle`.\r\nBasically the hash is computed using the pickle bytes of your function (computed using `dill` to support most python objects).\r\n\r\n> does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)\r\n\r\nYes it does thanks to recursive pickling.", "Thanks for these explanations. I'm closing the issue." ]
2021-05-27T15:40:06
2021-06-02T13:15:47
2021-06-02T13:15:47
CONTRIBUTOR
null
null
null
## Describe the bug I have a large dataset (common_voice, english) where I use several map and filter functions. Sometimes my cached datasets after specific functions are not loaded. I always use the same arguments, same functions, no seed… ## Steps to reproduce the bug ```python def filter_by_duration(batch): return ( batch["duration"] <= 10 and batch["duration"] >= 1 and len(batch["target_text"]) > 5 ) def prepare_dataset(batch): batch["input_values"] = processor( batch["speech"], sampling_rate=batch["sampling_rate"][0] ).input_values with processor.as_target_processor(): batch["labels"] = processor(batch["target_text"]).input_ids return batch train_dataset = train_dataset.filter( filter_by_duration, remove_columns=["duration"], num_proc=data_args.preprocessing_num_workers, ) # PROBLEM HERE -> below function is reexecuted and cache is not loaded train_dataset = train_dataset.map( prepare_dataset, remove_columns=train_dataset.column_names, batch_size=training_args.per_device_train_batch_size, batched=True, num_proc=data_args.preprocessing_num_workers, ) # Later in script set_caching_enabled(False) # apply map on trained model to eval/test sets ``` ## Expected results The cached dataset should always be reloaded. ## Actual results The function is reexecuted. I have access to cached files `cache-xxxxx.arrow`. Is there a way I can somehow load manually 2 versions and see how the hash was created for debug purposes (to know if it's an issue with dataset or function)? ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2415/timeline
null
completed
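For the caching issue above (#2415), the maintainer's explanation boils down to this: the `map`/`filter` cache key is a fingerprint derived from the previous fingerprint, the hash of the mapped function, and the hashes of everything it captures, including the `processor`. Below is a minimal sketch of that debugging step using the `Hasher` utility quoted in the thread; the predicate is adapted from the issue body, and the `processor` line is left commented out because that object exists only in the reporter's script.

```python
from datasets.fingerprint import Hasher

def filter_by_duration(batch):
    # same predicate as in the issue body
    return 1 <= batch["duration"] <= 10 and len(batch["target_text"]) > 5

# Run this twice (e.g. in two separate processes): if a hash changes between
# runs, the resulting `map`/`filter` fingerprint changes too and the cached
# Arrow file is not reused.
print("filter_by_duration:", Hasher.hash(filter_by_duration))
# print("processor:", Hasher.hash(processor))  # `processor` exists only in the reporter's script
```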
https://api.github.com/repos/huggingface/datasets/issues/2413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2413/comments
https://api.github.com/repos/huggingface/datasets/issues/2413/events
https://github.com/huggingface/datasets/issues/2413
903,777,557
MDU6SXNzdWU5MDM3Nzc1NTc=
2,413
AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
{ "login": "jungwhank", "id": 53588015, "node_id": "MDQ6VXNlcjUzNTg4MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungwhank", "html_url": "https://github.com/jungwhank", "followers_url": "https://api.github.com/users/jungwhank/followers", "following_url": "https://api.github.com/users/jungwhank/following{/other_user}", "gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions", "organizations_url": "https://api.github.com/users/jungwhank/orgs", "repos_url": "https://api.github.com/users/jungwhank/repos", "events_url": "https://api.github.com/users/jungwhank/events{/privacy}", "received_events_url": "https://api.github.com/users/jungwhank/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.\r\n\r\nIdeally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. Here I think you tried to run the 1.7.0 tests with the 1.6.2 code" ]
2021-05-27T13:44:28
2021-06-01T01:05:47
2021-06-01T01:05:47
CONTRIBUTOR
null
null
null
## Describe the bug Hello, I'm trying to add a dataset and contribute, but the test keeps failing with the CLI below. ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>` ## Steps to reproduce the bug It seems like a bug, since I see the error with an existing dataset, not the dataset I'm trying to add. ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<any_dataset>` ## Expected results All tests pass. ## Actual results ``` # check that dataset is not empty self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset)) for split in dataset_builder.info.splits.keys(): # check that loaded datset is not empty self.parent.assertTrue(len(dataset[split]) > 0) # check that we can cast features for each task template > task_templates = dataset_builder.info.task_templates E AttributeError: 'DatasetInfo' object has no attribute 'task_templates' tests/test_dataset_common.py:175: AttributeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Darwin-20.4.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2413/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2412/comments
https://api.github.com/repos/huggingface/datasets/issues/2412/events
https://github.com/huggingface/datasets/issues/2412
903,769,151
MDU6SXNzdWU5MDM3NjkxNTE=
2,412
Docstring mistake: dataset vs. metric
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> I can provide a PR l8er...\r\n\r\nSee #2425 " ]
2021-05-27T13:39:11
2021-06-01T08:18:04
2021-06-01T08:18:04
CONTRIBUTOR
null
null
null
This: https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582 Should better be something like: `a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)` I can provide a PR l8er...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2412/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2407/comments
https://api.github.com/repos/huggingface/datasets/issues/2407/events
https://github.com/huggingface/datasets/issues/2407
903,111,755
MDU6SXNzdWU5MDMxMTE3NTU=
2,407
.map() function got an unexpected keyword argument 'cache_file_name'
{ "login": "cindyxinyiwang", "id": 7390482, "node_id": "MDQ6VXNlcjczOTA0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/7390482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cindyxinyiwang", "html_url": "https://github.com/cindyxinyiwang", "followers_url": "https://api.github.com/users/cindyxinyiwang/followers", "following_url": "https://api.github.com/users/cindyxinyiwang/following{/other_user}", "gists_url": "https://api.github.com/users/cindyxinyiwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/cindyxinyiwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cindyxinyiwang/subscriptions", "organizations_url": "https://api.github.com/users/cindyxinyiwang/orgs", "repos_url": "https://api.github.com/users/cindyxinyiwang/repos", "events_url": "https://api.github.com/users/cindyxinyiwang/events{/privacy}", "received_events_url": "https://api.github.com/users/cindyxinyiwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558", "Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?\r\n\r\nIf you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).\r\nIn this case, since there are several datasets in the dict, the `DatasetDict.map` method requires a `cache_file_names` argument (with an 's'), so that you can provide one file name per split.", "I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you!" ]
2021-05-27T01:54:26
2021-05-27T13:46:40
2021-05-27T13:46:40
NONE
null
null
null
## Describe the bug I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'". I believe I'm using the latest dataset 1.6.2. Also seems like the document and the actual code indicates there is an argument 'cache_file_name' for the .map() function. Here is the code I use ## Steps to reproduce the bug ```datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, cache_file_name="my_tokenized_file" ) ``` ## Actual results tokenized_datasets = datasets.map( TypeError: map() got an unexpected keyword argument 'cache_file_name' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.6.2 - Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10 - Python version:3.8.5 - PyArrow version:3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2407/timeline
null
completed
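For #2407 above, the resolution was that `cache_file_name` belongs to `Dataset.map`, while `DatasetDict.map` expects `cache_file_names`, one file per split. A hedged sketch of the confirmed fix follows, with a toy `DatasetDict` and placeholder file names standing in for the reporter's tokenizer pipeline.

```python
from datasets import Dataset, DatasetDict

dsets = DatasetDict({
    "train": Dataset.from_dict({"text": ["a", "b"]}),
    "validation": Dataset.from_dict({"text": ["c"]}),
})

def tokenize_function(examples):
    # stand-in for the reporter's tokenizer call
    return {"n_chars": [len(t) for t in examples["text"]]}

tokenized = dsets.map(
    tokenize_function,
    batched=True,
    cache_file_names={  # one cache file per split, with the `.arrow` suffix suggested in the thread
        "train": "my_tokenized_train.arrow",
        "validation": "my_tokenized_validation.arrow",
    },
)
```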
https://api.github.com/repos/huggingface/datasets/issues/2406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2406/comments
https://api.github.com/repos/huggingface/datasets/issues/2406/events
https://github.com/huggingface/datasets/issues/2406
902,643,844
MDU6SXNzdWU5MDI2NDM4NDQ=
2,406
Add guide on using task templates to documentation
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-05-26T16:28:26
2022-10-05T17:07:00
2022-10-05T17:07:00
MEMBER
null
null
null
Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2406/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2402
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2402/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2402/comments
https://api.github.com/repos/huggingface/datasets/issues/2402/events
https://github.com/huggingface/datasets/issues/2402
900,025,329
MDU6SXNzdWU5MDAwMjUzMjk=
2,402
PermissionError on Windows when using temp dir for caching
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2021-05-24T21:22:59
2021-05-26T16:39:29
2021-05-26T16:39:29
CONTRIBUTOR
null
null
null
Currently, the following code raises a PermissionError on master if working on Windows: ```python # run as a script or call exit() in REPL to initiate the temp dir cleanup from datasets import * d = load_dataset("sst", split="train", keep_in_memory=False) set_caching_enabled(False) d.map(lambda ex: ex) ``` Error stack trace: ``` Traceback (most recent call last): File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 624, in _exitfunc f() File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 548, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\tempfile.py", line 799, in _cleanup _shutil.rmtree(name) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 500, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 395, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 393, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Mario\\AppData\\Local\\Temp\\tmp20epyhmq\\cache-87a87ffb5a956e68.arrow' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2402/timeline
null
completed
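The Windows failure above (#2402) happens because `map` writes a `cache-*.arrow` file into the temporary cache directory and that file is still memory-mapped when the directory is cleaned up at exit. A possible user-side workaround, offered as an assumption rather than the fix that closed the issue, is to keep the mapped result in memory so no cache file is written:

```python
# Same repro as above, but with keep_in_memory=True on map so that no
# cache-*.arrow file is created inside the temporary cache directory.
from datasets import load_dataset, set_caching_enabled

d = load_dataset("sst", split="train", keep_in_memory=False)
set_caching_enabled(False)
d = d.map(lambda ex: ex, keep_in_memory=True)  # result stays in memory, nothing to delete at exit
```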
https://api.github.com/repos/huggingface/datasets/issues/2401
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2401/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2401/comments
https://api.github.com/repos/huggingface/datasets/issues/2401/events
https://github.com/huggingface/datasets/issues/2401
899,910,521
MDU6SXNzdWU4OTk5MTA1MjE=
2,401
load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset"
{ "login": "jonrbates", "id": 15602718, "node_id": "MDQ6VXNlcjE1NjAyNzE4", "avatar_url": "https://avatars.githubusercontent.com/u/15602718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonrbates", "html_url": "https://github.com/jonrbates", "followers_url": "https://api.github.com/users/jonrbates/followers", "following_url": "https://api.github.com/users/jonrbates/following{/other_user}", "gists_url": "https://api.github.com/users/jonrbates/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonrbates/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonrbates/subscriptions", "organizations_url": "https://api.github.com/users/jonrbates/orgs", "repos_url": "https://api.github.com/users/jonrbates/repos", "events_url": "https://api.github.com/users/jonrbates/events{/privacy}", "received_events_url": "https://api.github.com/users/jonrbates/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I faced the similar problem. Downgrading datasets to 1.5.0 fixed it.", "Thanks for reporting, I'm looking into it", "I just opened #2438 to fix this :)", "Hi ! This has been fixed in the 1.8.0 release of `datasets`" ]
2021-05-24T18:38:53
2021-06-09T09:07:25
2021-06-09T09:07:25
NONE
null
null
null
## Describe the bug load_dataset('natural_questions') throws ValueError ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset('natural_questions', split='validation[:10]') ``` ## Expected results Call to load_dataset returns data. ## Actual results ``` Using custom data configuration default Reusing dataset natural_questions (/mnt/d/huggingface/datasets/natural_questions/default/0.0.2/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-d55ab8a8cc1c> in <module> ----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='/mnt/d/huggingface/datasets') ~/miniconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 757 ) --> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) 759 if save_infos: 760 builder_instance._save_infos() ~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory) 735 736 # Create a dataset for each of the given splits --> 737 datasets = utils.map_nested( 738 partial( 739 self._build_single_dataset, ~/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types) 193 # Singleton 194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 195 return function(data_struct) 196 197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO) ~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory) 762 763 # Build base dataset --> 764 ds = self._as_dataset( 765 split=split, 766 in_memory=in_memory, ~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory) 838 in_memory=in_memory, 839 ) --> 840 return Dataset(**dataset_kwargs) 841 842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]: ~/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint) 271 assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object" 272 if self.info.features.type != inferred_features.type: --> 273 raise ValueError( 274 "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format( 275 self.info.features, self.info.features.type, inferred_features, inferred_features.type ValueError: External features info don't match the dataset: Got {'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 
'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)} with type struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<start_token: int64, end_token: int64, start_byte: int64, end_byte: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<title: string, url: string, html: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>>, id: string, question: struct<text: string, tokens: list<item: string>>> but expected something like {'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}} with type struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<end_byte: int64, end_token: int64, start_byte: int64, start_token: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<html: string, title: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>, url: string>, id: string, question: struct<text: string, tokens: list<item: string>>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2401/timeline
null
completed
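For the `natural_questions` mismatch above (#2401), the thread states the fix shipped in the 1.8.0 release of `datasets`. A minimal guard before loading, assuming the `packaging` helper is available in the environment:

```python
import datasets
from packaging import version  # assumption: packaging is installed alongside pip/setuptools

assert version.parse(datasets.__version__) >= version.parse("1.8.0"), (
    f"datasets {datasets.__version__} predates the fix mentioned in the thread"
)
ds = datasets.load_dataset("natural_questions", split="validation[:10]")
```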
https://api.github.com/repos/huggingface/datasets/issues/2400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2400/comments
https://api.github.com/repos/huggingface/datasets/issues/2400/events
https://github.com/huggingface/datasets/issues/2400
899,867,212
MDU6SXNzdWU4OTk4NjcyMTI=
2,400
Concatenate several datasets with removed columns is not working.
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\ndid you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?\r\n\r\nThis code should work without issues on 1.6.2 version (I'm working on master (1.6.2.dev0 version) and can't reproduce this error).", "@mariosasko you are right I was still on `1.5.0`. " ]
2021-05-24T17:40:15
2021-05-25T05:52:01
2021-05-25T05:51:59
MEMBER
null
null
null
## Describe the bug You can't concatenate datasets when you removed columns before. ## Steps to reproduce the bug ```python from datasets import load_dataset, concatenate_datasets wikiann= load_dataset("wikiann","en") wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"]) wikiann["test"] = wikiann["test"].remove_columns(["langs","spans"]) assert wikiann["train"].features.type == wikiann["test"].features.type concate = concatenate_datasets([wikiann["train"],wikiann["test"]]) ``` ## Expected results Merged dataset ## Actual results ```python ValueError: External features info don't match the dataset: Got {'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} with type struct<langs: list<item: string>, ner_tags: list<item: int64>, spans: list<item: string>, tokens: list<item: string>> but expected something like {'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} with type struct<ner_tags: list<item: int64>, tokens: list<item: string>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: ~1.6.2~ 1.5.0 - Platform: macos - Python version: 3.8.5 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2400/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2398/comments
https://api.github.com/repos/huggingface/datasets/issues/2398/events
https://github.com/huggingface/datasets/issues/2398
899,511,837
MDU6SXNzdWU4OTk1MTE4Mzc=
2,398
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
{ "login": "anassalamah", "id": 8571003, "node_id": "MDQ6VXNlcjg1NzEwMDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anassalamah", "html_url": "https://github.com/anassalamah", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "repos_url": "https://api.github.com/users/anassalamah/repos", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "These ranges seem to be valid English. Closing." ]
2021-05-24T10:03:34
2022-10-05T17:13:49
2022-10-05T17:13:49
NONE
null
null
null
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2398/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2396
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2396/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2396/comments
https://api.github.com/repos/huggingface/datasets/issues/2396/events
https://github.com/huggingface/datasets/issues/2396
899,016,308
MDU6SXNzdWU4OTkwMTYzMDg=
2,396
strange datasets from OSCAR corpus
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting\r\ncc @pjox is this an issue from the data ?\r\n\r\nAnyway we should at least mention that OSCAR could contain such contents in the dataset card, you're totally right @jerryIsHere ", "Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could please you open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?" ]
2021-05-23T13:06:02
2021-06-17T13:54:37
null
CONTRIBUTOR
null
null
null
![image](https://user-images.githubusercontent.com/50871412/119260850-4f876b80-bc07-11eb-8894-124302600643.png) ![image](https://user-images.githubusercontent.com/50871412/119260875-675eef80-bc07-11eb-9da4-ee27567054ac.png) From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2KB data. 7 training instances is obviously not a right number. As I can read Yue Chinese, I call tell the last instance is definitely not something that would appear on Common Crawl. And even if you don't read Yue Chinese, you can tell the first six instance are problematic. (It is embarrassing, as the 7 training instances look exactly like something from a pornographic novel or flitting messages in a chat of a dating app) It might not be the problem of the huggingface/datasets implementation, because when I tried to download the dataset from the official site, I found out that the zip file is corrupted. I will try to inform the host of OSCAR corpus later. Awy a remake about this dataset in huggingface/datasets is needed, perhaps after the host of the dataset fixes the issue. > Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could please you open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it? Thanks a lot, the new post is here: https://github.com/oscar-corpus/oscar-website/issues/11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2396/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2396/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2391/comments
https://api.github.com/repos/huggingface/datasets/issues/2391/events
https://github.com/huggingface/datasets/issues/2391
898,128,099
MDU6SXNzdWU4OTgxMjgwOTk=
2,391
Missing original answers in kilt-TriviaQA
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) ", "I can open a PR but there is 2 details to fix:\r\n- the name for the corresponding key (e.g. `original_answer`)\r\n- how to implement it: I’m not sure what happens when you map `lambda x: {'input': ...}` as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. `x['output']['original_answer']`) I implemented it with a regular function (not lambda), see below\r\n\r\n```py\r\ndef add_original_answer(x, trivia_qa, triviaqa_map):\r\n i = triviaqa_map[x['id']]\r\n x['output']['original_answer'] = trivia_qa['validation'][i]['answer']['value']\r\n return x\r\n```" ]
2021-05-21T14:57:07
2021-06-14T17:29:11
2021-06-14T17:29:11
CONTRIBUTOR
null
null
null
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets ## Describe the bug The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative answer which are accepted for the question. However it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`) ## How to fix It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place as here where one retrieves the questions https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data cc @yjernite who previously answered to an issue about KILT and TriviaQA :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2391/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2391/timeline
null
completed
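The comments on #2391 above sketch an `add_original_answer` helper but leave out how `triviaqa_map` is built. A hedged end-to-end sketch follows: the configuration names (`triviaqa_support_only`, `unfiltered.nocontext`) and the `question_id` join key are taken from the kilt_tasks dataset card referenced in the issue and should be treated as assumptions, and the original answer is stored as a new top-level column to keep the `map` call simple.

```python
from datasets import load_dataset

kilt_tqa = load_dataset("kilt_tasks", name="triviaqa_support_only", split="validation")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext", split="validation")

# map KILT example ids to row indices of the original TriviaQA split
triviaqa_map = {q_id: i for i, q_id in enumerate(trivia_qa["question_id"])}
kilt_tqa = kilt_tqa.filter(lambda x: x["id"] in triviaqa_map)

def add_original_answer(x):
    i = triviaqa_map[x["id"]]
    x["original_answer"] = trivia_qa[i]["answer"]["value"]  # top-level column instead of nesting under `output`
    return x

kilt_tqa = kilt_tqa.map(add_original_answer)
```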
https://api.github.com/repos/huggingface/datasets/issues/2388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2388/comments
https://api.github.com/repos/huggingface/datasets/issues/2388/events
https://github.com/huggingface/datasets/issues/2388
897,767,470
MDU6SXNzdWU4OTc3Njc0NzA=
2,388
Incorrect URLs for some datasets
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-05-21T07:22:35
2021-06-04T17:39:45
2021-06-04T17:39:45
MEMBER
null
null
null
## Describe the bug It seems that the URLs for the following datasets are invalid: - [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a - [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/ As a result we can no longer load these datasets using `load_dataset`. The simple fix is to rename the URL in the dataset script - will do this asap. ## Steps to reproduce the bug ```python from datasets import load_dataset # pick one of the datasets from the list above ds = load_dataset("bn_hate_speech") ``` ## Expected results Dataset loads without error. ## Actual results ``` Downloading: 3.36kB [00:00, 1.07MB/s] Downloading: 2.03kB [00:00, 678kB/s] Using custom data configuration default Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset builder_instance.download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators train_path = dl_manager.download_and_extract(_URL) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download downloaded_path_or_paths = map_nested( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path output_path = get_from_cache( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2388/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2387/comments
https://api.github.com/repos/huggingface/datasets/issues/2387/events
https://github.com/huggingface/datasets/issues/2387
897,566,666
MDU6SXNzdWU4OTc1NjY2NjY=
2,387
datasets 1.6 ignores cache
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:\r\n\r\n> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them)\r\n\r\n", "Hi ! Since `datasets` 1.6.0 we no longer keep small datasets (<250MB) on disk and load them in RAM instead by default. This makes data processing and iterating on data faster. However datasets in RAM currently have no way to reload previous results from the cache (since nothing is written on disk). We are working on making the caching work for datasets in RAM.\r\n\r\nUntil then, I'd recommend passing `keep_in_memory=False` to the calls to `load_dataset` like here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/223943872e8c9c3fc11db3c6e93da07f5177423f/examples/pytorch/language-modeling/run_clm.py#L233\r\n\r\nThis way you say explicitly that you want your dataset to stay on the disk, and it will be able to recover previously computed results from the cache.", "gotcha! thanks Quentin", "OK, It doesn't look like we can use the proposed workaround - see https://github.com/huggingface/transformers/issues/11801\r\n\r\nCould you please add an env var for us to be able to turn off this unwanted in our situation behavior? It is really problematic for dev work, when one needs to restart the training very often and needs a quick startup time. Manual editing of standard scripts is not a practical option when one uses examples.\r\n\r\nThis could also be a problem for tests, which will be slower because of lack of cache, albeit usually we use tiny datasets there. I think we want caching for tests.\r\n\r\nThank you.", "Hi @stas00, \r\n\r\nYou are right: an env variable is needed to turn off this behavior. I am adding it.\r\n\r\nFor the moment there is a config parameter to turn off this behavior: `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None`\r\n\r\nYou can find this info in the docs:\r\n- in the docstring of the parameter `keep_in_memory` of the function [`load_datasets`](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset):\r\n- in a Note in the docs about [Loading a Dataset](https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub)\r\n\r\n> The default in 🤗Datasets is to memory-map the dataset on drive if its size is larger than datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES (default 250 MiB); otherwise, the dataset is copied in-memory. This behavior can be disabled by setting datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None, and in this case the dataset is not loaded in memory.", "Yes, but this still requires one to edit the standard example scripts, so if I'm doing that already I just as well can add `keep_in_memory=False`.\r\n\r\nMay be the low hanging fruit is to add `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` env var to match the config, and if the user sets it to 0, then it'll be the same as `keep_in_memory=False` or `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0`?", "@stas00, however, for the moment, setting the value to `0` is equivalent to the opposite, i.e. `keep_in_memory=True`. 
This means the max size until which I load in memory is 0 bytes.\r\n\r\nTell me if this is logical/convenient, or I should change it.", "In my PR, to turn off current default bahavior, you should set env variable to one of: `{\"\", \"OFF\", \"NO\", \"FALSE\"}`.\r\n\r\nFor example:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=\r\n```", "IMHO, this behaviour is not very intuitive, as 0 is a normal quantity of bytes. So `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` to me reads as don't cache ever.\r\n\r\nAlso \"SIZE_IN_BYTES\" that can take one of `{\"\", \"OFF\", \"NO\", \"FALSE\"}` is also quite odd.\r\n\r\nI think supporting a very simple `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` that can accept any numerical value to match the name of the variable, requires minimal logic and is very straightforward. \r\n\r\nSo if you could adjust this logic - then `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` is all that's needed to not do in-memory datasets.\r\n\r\nDoes it make sense?", "I understand your point @stas00, as I am not very convinced with current implementation.\r\n\r\nMy concern is: which numerical value should then pass a user who wants `keep_in_memory=True` by default, independently of dataset size? Currently it is `0` for this case.", "That's a good question, and again the normal bytes can be used for that:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=1e12 # (~2**40)\r\n```\r\nSince it's unlikely that anybody will have more than 1TB RAM.\r\n\r\nIt's also silly that it uses BYTES and not MBYTES - that level of refinement doesn't seem to be of a practical use in this context.\r\n\r\nNot sure when it was added and if there are back-compat issues here, but perhaps it could be renamed `MAX_IN_MEMORY_DATASET_SIZE` and support 1M, 1G, 1T, etc. \r\n\r\nBut scientific notation is quite intuitive too, as each 000 zeros is the next M, G, T multiplier. Minus the discrepancy of 1024 vs 1000, which adds up. And it is easy to write down `1e12`, as compared to `1099511627776` (2**40). (`1.1e12` is more exact).\r\n", "Great! Thanks, @stas00.\r\n\r\nI am implementing your suggestion to turn off default value when set to `0`.\r\n\r\nFor the other suggestion (allowing different metric prefixes), I will discuss with @lhoestq to agree on its implementation.", "Awesome! Thank you, @albertvillanova!!!\r\n\r\n" ]
2021-05-21T00:12:58
2021-05-26T16:07:54
2021-05-26T16:07:54
CONTRIBUTOR
null
null
null
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612 Quoting @VictorSanh: > > I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335): > > > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}` > > while the same command with the latest version of datasets (actually starting at `1.6.0`) gives: > > `{'train': [], 'validation': []}` > I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used. to reproduce: ``` USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 \ --dataset_name "stas/openwebtext-10k" \ --output_dir output_dir \ --overwrite_output_dir \ --do_train \ --do_eval \ --max_train_samples 1000 \ --max_eval_samples 200 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --num_train_epochs 1 \ --warmup_steps 8 \ --block_size 64 \ --fp16 \ --report_to none ``` the first time the startup is slow and some 5 tqdm bars. It shouldn't do it on consequent runs. but with `datasets>1.5.0` it rebuilds on every run. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2387/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2386/comments
https://api.github.com/repos/huggingface/datasets/issues/2386/events
https://github.com/huggingface/datasets/issues/2386
897,560,049
MDU6SXNzdWU4OTc1NjAwNDk=
2,386
Accessing Arrow dataset cache_files
{ "login": "Mehrad0711", "id": 28717374, "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehrad0711", "html_url": "https://github.com/Mehrad0711", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks @bhavitvyamalik for referencing the workaround. Setting `keep_in_memory=False` is working." ]
2021-05-20T23:57:43
2021-05-21T19:18:03
2021-05-21T19:18:03
NONE
null
null
null
## Describe the bug In datasets 1.5.0 the following code snippet would have printed the cache_files: ``` train_data = load_dataset('conll2003', split='train', cache_dir='data') print(train_data.cache_files[0]['filename']) ``` However, in the newest release (1.6.1), it prints an empty list. I also tried loading the dataset with `keep_in_memory=True` argument but still `cache_files` is empty. Was wondering if this is a bug or I need to pass additional arguments so I can access the cache_files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2386/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2382/comments
https://api.github.com/repos/huggingface/datasets/issues/2382/events
https://github.com/huggingface/datasets/issues/2382
895,610,216
MDU6SXNzdWU4OTU2MTAyMTY=
2,382
DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en')
{ "login": "helloworld123-lab", "id": 75953751, "node_id": "MDQ6VXNlcjc1OTUzNzUx", "avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/helloworld123-lab", "html_url": "https://github.com/helloworld123-lab", "followers_url": "https://api.github.com/users/helloworld123-lab/followers", "following_url": "https://api.github.com/users/helloworld123-lab/following{/other_user}", "gists_url": "https://api.github.com/users/helloworld123-lab/gists{/gist_id}", "starred_url": "https://api.github.com/users/helloworld123-lab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/helloworld123-lab/subscriptions", "organizations_url": "https://api.github.com/users/helloworld123-lab/orgs", "repos_url": "https://api.github.com/users/helloworld123-lab/repos", "events_url": "https://api.github.com/users/helloworld123-lab/events{/privacy}", "received_events_url": "https://api.github.com/users/helloworld123-lab/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2021-05-19T15:49:48
2021-05-30T13:26:16
2021-05-30T13:26:16
NONE
null
null
null
Hello everyone, I try to use head_qa dataset in [https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en](url) ``` !pip install datasets from datasets import load_dataset dataset = load_dataset( 'head_qa', 'en') ``` When I write above load_dataset(.), it throws the following: ``` DuplicatedKeysError Traceback (most recent call last) <ipython-input-6-ea87002d32f0> in <module>() 2 from datasets import load_dataset 3 dataset = load_dataset( ----> 4 'head_qa', 'en') 5 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self) 347 for hash, key in self.hkey_record: 348 if hash in tmp_record: --> 349 raise DuplicatedKeysError(key) 350 else: 351 tmp_record.add(hash) DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 1 Keys should be unique and deterministic in nature ``` How can I fix the error? Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2382/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2378/comments
https://api.github.com/repos/huggingface/datasets/issues/2378/events
https://github.com/huggingface/datasets/issues/2378
895,131,774
MDU6SXNzdWU4OTUxMzE3NzQ=
2,378
Add missing dataset_infos.json files
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-05-19T08:11:12
2021-05-19T08:11:12
null
MEMBER
null
null
null
Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g. ``` [PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')] [PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')] [PosixPath('datasets/reclor/README.md'), PosixPath('datasets/reclor/reclor.py')] [PosixPath('datasets/json/README.md')] [PosixPath('datasets/csv/README.md')] [PosixPath('datasets/wikihow/wikihow.py'), PosixPath('datasets/wikihow/README.md')] [PosixPath('datasets/c4/c4.py'), PosixPath('datasets/c4/README.md')] [PosixPath('datasets/text/README.md')] [PosixPath('datasets/lm1b/README.md'), PosixPath('datasets/lm1b/lm1b.py')] [PosixPath('datasets/pandas/README.md')] ``` For `json`, `text`, csv`, and `pandas` this is expected, but not for the others which should be fixed
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2378/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2377/comments
https://api.github.com/repos/huggingface/datasets/issues/2377/events
https://github.com/huggingface/datasets/issues/2377
894,918,927
MDU6SXNzdWU4OTQ5MTg5Mjc=
2,377
ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.\r\nMore info at #1933 ", "Not sure if this was resolved, but I am getting a similar error when trying to load a dataset.arrow file directly: `ArrowInvalid: Not an Arrow file`", "Since we're using the streaming format, you need to use `open_stream`:\r\n\r\n```python\r\nimport pyarrow as pa\r\n\r\ndef in_memory_arrow_table_from_file(filename: str) -> pa.Table:\r\n in_memory_stream = pa.input_stream(filename)\r\n opened_stream = pa.ipc.open_stream(in_memory_stream)\r\n pa_table = opened_stream.read_all()\r\n return pa_table\r\n\r\ndef memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n memory_mapped_stream = pa.memory_map(filename)\r\n opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n pa_table = opened_stream.read_all()\r\n return pa_table\r\n```", "> 由于我们使用流格式,因此您需要使用`open_stream`:\r\n> \r\n> ```python\r\n> import pyarrow as pa\r\n> \r\n> def in_memory_arrow_table_from_file(filename: str) -> pa.Table:\r\n> in_memory_stream = pa.input_stream(filename)\r\n> opened_stream = pa.ipc.open_stream(in_memory_stream)\r\n> pa_table = opened_stream.read_all()\r\n> return pa_table\r\n> \r\n> def memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n> memory_mapped_stream = pa.memory_map(filename)\r\n> opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n> pa_table = opened_stream.read_all()\r\n> return pa_table\r\n> ```\r\nThank you very much for providing the code that can read arrow file to pa_table and finally to dict, but how to implement the reverse process, how to restore a dict to arrow file?\r\n" ]
2021-05-19T02:04:37
2024-01-18T08:06:15
null
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset from pyarrow import feather dataset = load_dataset('imdb', split='train') dataset.save_to_disk('dataset_dir') table = feather.read_table('dataset_dir/dataset.arrow') ``` ## Expected results I expect that the saved dataset can be read by the official Apache Arrow methods. ## Actual results ``` File "/usr/local/lib/python3.7/site-packages/pyarrow/feather.py", line 236, in read_table reader.open(source, use_memory_map=memory_map) File "pyarrow/feather.pxi", line 67, in pyarrow.lib.FeatherReader.open File "pyarrow/error.pxi", line 123, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Not a Feather V1 or Arrow IPC file ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-1.6.2 - Platform: Linux - Python version: 3.7 - PyArrow version: 0.17.1, also 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2377/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2373/comments
https://api.github.com/repos/huggingface/datasets/issues/2373/events
https://github.com/huggingface/datasets/issues/2373
894,499,909
MDU6SXNzdWU4OTQ0OTk5MDk=
2,373
Loading dataset from local path
{ "login": "kolakows", "id": 34172905, "node_id": "MDQ6VXNlcjM0MTcyOTA1", "avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kolakows", "html_url": "https://github.com/kolakows", "followers_url": "https://api.github.com/users/kolakows/followers", "following_url": "https://api.github.com/users/kolakows/following{/other_user}", "gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}", "starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kolakows/subscriptions", "organizations_url": "https://api.github.com/users/kolakows/orgs", "repos_url": "https://api.github.com/users/kolakows/repos", "events_url": "https://api.github.com/users/kolakows/events{/privacy}", "received_events_url": "https://api.github.com/users/kolakows/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Version below works, checked again in the docs, and data_files should be a path.\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='/data/dir/corpus.txt', \r\n cache_dir='.')\r\n```" ]
2021-05-18T15:20:50
2021-05-18T15:36:36
2021-05-18T15:36:35
NONE
null
null
null
I'm trying to load a local dataset with the code below ``` ds = datasets.load_dataset('my_script.py', data_files='corpus.txt', data_dir='/data/dir', cache_dir='.') ``` But internally a BuilderConfig is created, which tries to use getmtime on the data_files string, without using data_dir. Is this a bug or am I not using the load_dataset correctly? https://github.com/huggingface/datasets/blob/bc61954083f74e6460688202e9f77dde2475319c/src/datasets/builder.py#L153
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2373/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2371/comments
https://api.github.com/repos/huggingface/datasets/issues/2371/events
https://github.com/huggingface/datasets/issues/2371
894,193,403
MDU6SXNzdWU4OTQxOTM0MDM=
2,371
Align question answering tasks with sub-domains
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[ "Closing this issue as the `task_templates` API has been deprecated." ]
2021-05-18T09:47:59
2023-07-25T16:52:05
2023-07-25T16:52:04
MEMBER
null
null
null
As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains: > `question-answering` exists in two forms: abstractive and extractive question answering. > > we can keep a generic `question-answering` but then it will probably mean diferrent schema of input/output for both (abstractive will have text for both while extractive can use spans indication as well as text). > > Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance. > Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a for a completion or search in the future (detail). > Actually I see that people are more organizing in terms of general and sub-tasks, for instance on paperwithcode: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad > > Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well. > Maybe you want to check with a few QA datasets that this schema make sense. Typically NaturalQuestions, TriviaQA and can be good second datasets to compare to and be sure of the generality of the schema. > > A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https://arxiv.org/abs/2101.00178 Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2371/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2366/comments
https://api.github.com/repos/huggingface/datasets/issues/2366/events
https://github.com/huggingface/datasets/issues/2366
893,185,266
MDU6SXNzdWU4OTMxODUyNjY=
2,366
Json loader fails if user-specified features don't match the json data fields order
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-05-17T10:26:08
2021-06-16T10:47:49
2021-06-16T10:47:49
MEMBER
null
null
null
If you do ```python dataset = load_dataset("json", data_files=data_files, features=features) ``` Then depending on the order of the features in the json data field it fails: ```python [...] ~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 94 if self.config.schema: 95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT ---> 96 pa_table = pa_table.cast(self.config.schema) 97 yield i, pa_table [...] ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens'] ``` This is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast. One way to fix the `cast` would be to replace it with: ```python # reorder the arrays if necessary + cast to schema # we can't simply use .cast here because we may need to change the order of the columns pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2366/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2365/comments
https://api.github.com/repos/huggingface/datasets/issues/2365/events
https://github.com/huggingface/datasets/issues/2365
893,179,697
MDU6SXNzdWU4OTMxNzk2OTc=
2,365
Missing ClassLabel encoding in Json loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": "2021-05-31T16:13:06", "updated_at": "2021-07-12T14:12:00", "due_on": "2021-07-08T07:00:00", "closed_at": "2021-07-09T05:50:07" }
[]
2021-05-17T10:19:10
2021-06-28T15:05:34
2021-06-28T15:05:34
MEMBER
null
null
null
Currently if you want to load a json dataset this way ```python dataset = load_dataset("json", data_files=data_files, features=features) ``` Then if your features has ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would fail: ```python [...] ~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 94 if self.config.schema: 95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT ---> 96 pa_table = pa_table.cast(self.config.schema) 97 yield i, pa_table [...] ArrowInvalid: Failed to parse string: 'O' as a scalar of type int64 ``` This is because it just tries to cast the string data to integers, without applying the mapping str->int first The current workaround is to do instead ```python dataset = load_dataset("json", data_files=data_files) dataset = dataset.map(features.encode_example, features=features) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2365/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2365/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2360/comments
https://api.github.com/repos/huggingface/datasets/issues/2360/events
https://github.com/huggingface/datasets/issues/2360
891,965,964
MDU6SXNzdWU4OTE5NjU5NjQ=
2,360
Automatically detect datasets with compatible task schemas
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-05-14T14:23:40
2021-05-14T14:23:40
null
MEMBER
null
null
null
See description of #2255 for details.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2360/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2359/comments
https://api.github.com/repos/huggingface/datasets/issues/2359/events
https://github.com/huggingface/datasets/issues/2359
891,946,017
MDU6SXNzdWU4OTE5NDYwMTc=
2,359
Allow model labels to be passed during task preparation
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We now have the `align_labels_with_mapping` method in the API for this purpose." ]
2021-05-14T13:58:28
2022-10-05T17:37:22
2022-10-05T17:37:22
MEMBER
null
null
null
Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side. For example for sentiment classification on amazon reviews with you could have these labels: - "1 star", "2 stars", "3 stars", "4 stars", "5 stars" - "1", "2", "3", "4", "5" Some models may use the first set, while other models use the second set. Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible labels sets ? Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that. The label set could also be the same but not in the same order. For NLI for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the labels order of the model. Let me know what you think ! This can be done in a future PR _Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2359/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2354/comments
https://api.github.com/repos/huggingface/datasets/issues/2354/events
https://github.com/huggingface/datasets/issues/2354
890,439,523
MDU6SXNzdWU4OTA0Mzk1MjM=
2,354
Document DatasetInfo attributes
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-05-12T20:01:29
2021-05-22T09:26:14
2021-05-22T09:26:14
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2354/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2350/comments
https://api.github.com/repos/huggingface/datasets/issues/2350/events
https://github.com/huggingface/datasets/issues/2350
889,580,247
MDU6SXNzdWU4ODk1ODAyNDc=
2,350
`FaissIndex.save` throws error on GPU
{ "login": "Guitaricet", "id": 2821124, "node_id": "MDQ6VXNlcjI4MjExMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guitaricet", "html_url": "https://github.com/Guitaricet", "followers_url": "https://api.github.com/users/Guitaricet/followers", "following_url": "https://api.github.com/users/Guitaricet/following{/other_user}", "gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions", "organizations_url": "https://api.github.com/users/Guitaricet/orgs", "repos_url": "https://api.github.com/users/Guitaricet/repos", "events_url": "https://api.github.com/users/Guitaricet/events{/privacy}", "received_events_url": "https://api.github.com/users/Guitaricet/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```" ]
2021-05-12T03:41:56
2021-05-17T13:41:41
2021-05-17T13:41:41
CONTRIBUTOR
null
null
null
## Describe the bug After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error. ``` File "index_wikipedia.py", line 119, in <module> data["train"].save_faiss_index("text_emb", index_save_path) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index index.save(file) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save faiss.write_index(index, str(file)) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index return _swigfaiss.write_index(*args) RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index ``` ## Steps to reproduce the bug Any dataset will do, I just selected a familiar one. ```python import numpy as np import datasets INDEX_STR = "OPQ16_128,IVF512,PQ32" INDEX_SAVE_PATH = "will_not_save.faiss" data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]") def encode(item): return {"text_emb": np.random.randn(768).astype(np.float32)} data = data.map(encode) data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0) data.save_faiss_index("text_emb", INDEX_SAVE_PATH) ``` ## Expected results Saving the index ## Actual results Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index ## Environment info - `datasets` version: 1.6.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No I will be proposing a fix in a couple of minutes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2350/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2347/comments
https://api.github.com/repos/huggingface/datasets/issues/2347/events
https://github.com/huggingface/datasets/issues/2347
887,404,868
MDU6SXNzdWU4ODc0MDQ4Njg=
2,347
Add an API to access the language and pretty name of a dataset
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).", "That works for me!", "maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?", "What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.", "hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)", "Indeed, this info can now be fetched with `huggingface_hub.dataset_info`, so I think we can close this issue." ]
2021-05-11T14:10:08
2022-10-05T17:16:54
2022-10-05T17:16:53
CONTRIBUTOR
null
null
null
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2347/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2345/comments
https://api.github.com/repos/huggingface/datasets/issues/2345/events
https://github.com/huggingface/datasets/issues/2345
886,586,872
MDU6SXNzdWU4ODY1ODY4NzI=
2,345
[Question] How to move and reuse preprocessed dataset?
{ "login": "AtmaHou", "id": 15045402, "node_id": "MDQ6VXNlcjE1MDQ1NDAy", "avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AtmaHou", "html_url": "https://github.com/AtmaHou", "followers_url": "https://api.github.com/users/AtmaHou/followers", "following_url": "https://api.github.com/users/AtmaHou/following{/other_user}", "gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions", "organizations_url": "https://api.github.com/users/AtmaHou/orgs", "repos_url": "https://api.github.com/users/AtmaHou/repos", "events_url": "https://api.github.com/users/AtmaHou/events{/privacy}", "received_events_url": "https://api.github.com/users/AtmaHou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq @LysandreJik", "<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n", "Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same", "> Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same\r\n\r\nI only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~" ]
2021-05-11T09:09:17
2021-06-11T04:39:11
2021-06-11T04:39:11
NONE
null
null
null
Hi, I am training a gpt-2 from scratch using run_clm.py. I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess). I tried to copy path_to_cache_dir/datasets to new_cache_dir/datasets and set export HF_DATASETS_CACHE="new_cache_dir/", but the program still re-preprocesses the whole dataset without loading the cache. I also tried torch.save(lm_datasets, fw), but the saved file is only 14M. What is the proper way to do this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2345/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2344/comments
https://api.github.com/repos/huggingface/datasets/issues/2344/events
https://github.com/huggingface/datasets/issues/2344
885,331,505
MDU6SXNzdWU4ODUzMzE1MDU=
2,344
Is there a way to join multiple datasets in one?
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n", "Hi! You can use `datasets_sql` for that now. As of recently, PyArrow also supports querying tables via Substrait, so I think we can start adding these methods to the API soon." ]
2021-05-10T23:16:10
2022-10-05T17:27:05
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I need to join 2 datasets, one that is on the Hub and another I've created from my files. Is there an easy way to join these 2? **Describe the solution you'd like** I'd like to join them with a merge or join method, just like pandas DataFrames. **Additional context** If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I haven't found it in the documentation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2344/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2343/comments
https://api.github.com/repos/huggingface/datasets/issues/2343/events
https://github.com/huggingface/datasets/issues/2343
883,208,539
MDU6SXNzdWU4ODMyMDg1Mzk=
2,343
Columns are removed before or after map function applied?
{ "login": "taghizad3h", "id": 8199406, "node_id": "MDQ6VXNlcjgxOTk0MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taghizad3h", "html_url": "https://github.com/taghizad3h", "followers_url": "https://api.github.com/users/taghizad3h/followers", "following_url": "https://api.github.com/users/taghizad3h/following{/other_user}", "gists_url": "https://api.github.com/users/taghizad3h/gists{/gist_id}", "starred_url": "https://api.github.com/users/taghizad3h/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taghizad3h/subscriptions", "organizations_url": "https://api.github.com/users/taghizad3h/orgs", "repos_url": "https://api.github.com/users/taghizad3h/repos", "events_url": "https://api.github.com/users/taghizad3h/events{/privacy}", "received_events_url": "https://api.github.com/users/taghizad3h/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi! Columns are removed **after** applying the function and **before** updating the examples with the function's output (as per the docs [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.map.remove_columns)). I agree the docs on this should be more clear." ]
2021-05-10T02:36:20
2022-10-24T11:31:55
null
NONE
null
null
null
## Describe the bug According to the documentation, when applying the map function the [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying the function. I think the source code doc is more accurate, right?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2343/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2337/comments
https://api.github.com/repos/huggingface/datasets/issues/2337/events
https://github.com/huggingface/datasets/issues/2337
881,610,567
MDU6SXNzdWU4ODE2MTA1Njc=
2,337
NonMatchingChecksumError for web_of_science dataset
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! " ]
2021-05-09T02:02:02
2021-05-10T13:35:53
2021-05-10T13:35:53
NONE
null
null
null
NonMatchingChecksumError when trying to download the web_of_science dataset. >NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1'] Setting `ignore_verfications=True` results in OSError. >OSError: Cannot find data file. Original error: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt' ```python dataset = load_dataset('web_of_science', 'WOS5736') ``` There are 3 data instances and they all don't work. 'WOS5736', 'WOS11967', 'WOS46985' datasets 1.6.2 python 3.7.10 Ubuntu 18.04.5 LTS
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2337/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2335/comments
https://api.github.com/repos/huggingface/datasets/issues/2335/events
https://github.com/huggingface/datasets/issues/2335
881,291,887
MDU6SXNzdWU4ODEyOTE4ODc=
2,335
Index error in Dataset.map
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2021-05-08T20:44:57
2021-05-10T13:26:12
2021-05-10T13:26:12
CONTRIBUTOR
null
null
null
The following code, if executed on master, raises an IndexError (due to overflow): ```python >>> from datasets import * >>> d = load_dataset("bookcorpus", split="train") Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700) 2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll >>> d.map(lambda ex: ex) 0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i])) 0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map new_fingerprint=new_fingerprint, File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper out = func(self, *args, **kwargs) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single for i, example in enumerate(pbar): File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__ format_kwargs=format_kwargs, File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table pa_subtable = _query_table(table, key) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table return table.fast_slice(key % table.num_rows, 1) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice i = _interpolation_search(self._offsets, offset) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.") IndexError: Invalid query '290162' for size 74004228. ``` Tested on Windows, can run on Linux if needed. EDIT: It seems like for this to happen, the default NumPy dtype has to be np.int32.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2335/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2331/comments
https://api.github.com/repos/huggingface/datasets/issues/2331/events
https://github.com/huggingface/datasets/issues/2331
879,031,427
MDU6SXNzdWU4NzkwMzE0Mjc=
2,331
Add Topical-Chat
{ "login": "ktangri", "id": 22266659, "node_id": "MDQ6VXNlcjIyMjY2NjU5", "avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktangri", "html_url": "https://github.com/ktangri", "followers_url": "https://api.github.com/users/ktangri/followers", "following_url": "https://api.github.com/users/ktangri/following{/other_user}", "gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktangri/subscriptions", "organizations_url": "https://api.github.com/users/ktangri/orgs", "repos_url": "https://api.github.com/users/ktangri/repos", "events_url": "https://api.github.com/users/ktangri/events{/privacy}", "received_events_url": "https://api.github.com/users/ktangri/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
2021-05-07T13:43:59
2021-05-07T13:43:59
null
NONE
null
null
null
## Adding a Dataset - **Name:** Topical-Chat - **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles - **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf - **Data:** https://github.com/alexa/Topical-Chat - **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2331/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2330/comments
https://api.github.com/repos/huggingface/datasets/issues/2330/events
https://github.com/huggingface/datasets/issues/2330
878,490,927
MDU6SXNzdWU4Nzg0OTA5Mjc=
2,330
Allow passing `desc` to `tqdm` in `Dataset.map()`
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?", "I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset." ]
2021-05-07T05:52:54
2021-05-26T14:59:21
2021-05-26T14:59:21
CONTRIBUTOR
null
null
null
It's normal to have many `map()` calls, and some of them can take a few minutes, it would be nice to have a description on the progress bar. Alternative solution: Print the description before/after the `map()` call.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2330/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2327/comments
https://api.github.com/repos/huggingface/datasets/issues/2327/events
https://github.com/huggingface/datasets/issues/2327
877,565,831
MDU6SXNzdWU4Nzc1NjU4MzE=
2,327
A syntax error in example
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "cc @beurkinger but I think this has been fixed internally and will soon be updated right ?", "This issue has been fixed." ]
2021-05-06T14:34:44
2021-05-20T03:04:19
2021-05-20T03:04:19
NONE
null
null
null
![image](https://user-images.githubusercontent.com/6883957/117315905-b47a5c00-aeba-11eb-91eb-b2a4a0212a56.png) Sorry to report with an image; I can't find the template source code of this snippet.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2327/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2323/comments
https://api.github.com/repos/huggingface/datasets/issues/2323/events
https://github.com/huggingface/datasets/issues/2323
876,438,507
MDU6SXNzdWU4NzY0Mzg1MDc=
2,323
load_dataset("timit_asr") gives back duplicates of just one sample text
{ "login": "ekeleshian", "id": 33647474, "node_id": "MDQ6VXNlcjMzNjQ3NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekeleshian", "html_url": "https://github.com/ekeleshian", "followers_url": "https://api.github.com/users/ekeleshian/followers", "following_url": "https://api.github.com/users/ekeleshian/following{/other_user}", "gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions", "organizations_url": "https://api.github.com/users/ekeleshian/orgs", "repos_url": "https://api.github.com/users/ekeleshian/repos", "events_url": "https://api.github.com/users/ekeleshian/events{/privacy}", "received_events_url": "https://api.github.com/users/ekeleshian/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Upgrading datasets to version 1.6 fixes the issue", "This bug was fixed in #1995. Upgrading the `datasets` should work! ", "Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists." ]
2021-05-05T13:14:48
2021-05-07T10:32:30
2021-05-07T10:32:30
NONE
null
null
null
## Describe the bug When you look up key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly, when you look up ['test'] and then ['text'], the list is one sentence, "The bungalow was pleasantly situated near the shore.", repeated 1680 times. I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836) and removing the entire huggingface directory from ~/.cache, but I still get the same issue. ## Steps to reproduce the bug ```python from datasets import load_dataset timit = load_dataset("timit_asr") print(timit['train']['text']) print(timit['test']['text']) ``` ## Expected Result Rows of diverse text, like how it is shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) <img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png"> ## Actual results Rows of repeated text. <img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png"> ## Versions - Datasets: 1.3.0 - Python: 3.9.1 - Platform: macOS-11.2.1-x86_64-i386-64bit
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2323/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2322/comments
https://api.github.com/repos/huggingface/datasets/issues/2322/events
https://github.com/huggingface/datasets/issues/2322
876,383,853
MDU6SXNzdWU4NzYzODM4NTM=
2,322
Calls to map are not cached.
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] \r\nNo config specified, defaulting to: sst/default\r\nDownloading and preparing dataset sst/default (download: 6.83 MiB, generated: 3.73 MiB, post-processed: Unknown size, total: 10.56 MiB) to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b...\r\n Dataset sst downloaded and prepared to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b. Subsequent calls will reuse this data.\r\nexecuted [0, 1]\r\n#0: 0%| | 0/5 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/5 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\nexecuted [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]\r\nexecuted [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]\r\nexecuted [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]\r\nexecuted [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]\r\nexecuted [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]\r\nexecuted [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]\r\n#0: 100%|██████████| 5/5 [00:00<00:00, 94.83ba/s]\r\nexecuted [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]\r\n#1: 100%|██████████| 5/5 [00:00<00:00, 92.75ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]\r\n#0: 100%|██████████| 1/1 [00:00<00:00, 118.81ba/s]\r\n#1: 100%|██████████| 1/1 [00:00<00:00, 123.06ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/2 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/2 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\n#0: 100%|██████████| 2/2 [00:00<00:00, 119.42ba/s]\r\nexecuted [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]\r\n#1: 100%|██████████| 2/2 [00:00<00:00, 123.33ba/s]\r\n\r\n\r\n\r\n ############################## \r\n\r\n\r\n\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-6079777aa097c8f8.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-2dc05c46f68eda6e.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-1ca347e7430b98f1.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-c0f1a73ce3ba40cd.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at 
/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-832a1407bf1ac5b7.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-036316a259b773c4.arrow\r\n- Datasets: 1.5.0\r\n- Python: 3.8.3 (default, May 19 2020, 18:47:26) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10\r\n```", "Hi,\r\n\r\nset `keep_in_memory` to False when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):\r\n\r\nhttps://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e767ad406f9da7610df2/src/datasets/arrow_dataset.py#L1718\r\n\r\n@albertvillanova It seems like this behavior was overlooked in #2182.\r\n\r\n", "Hi @villmow, thanks for reporting. \r\n\r\nAs @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed.", "Hi ! Currently a dataset that is in memory doesn't know doesn't know in which directory it has to read/write cache files.\r\nOn the other hand, a dataset that loaded from the disk (via memory mapping) uses the directory from which the dataset is located to read/write cache files.\r\n\r\nBecause of that, currently in-memory datasets simply don't use caching.\r\n\r\nMaybe a Dataset object could have a `cache_dir` that is set to the directory where the arrow files are created during `load_dataset` ?", "Fixed once reverted the default in-memory feature:\r\nClosed by #2460 (to close issue #2458).", "Please @villmow, feel free to update to `Datasets` latest version (1.8)." ]
2021-05-05T12:11:27
2021-06-08T19:10:02
2021-06-08T19:08:21
NONE
null
null
null
## Describe the bug Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed? ## Steps to reproduce the bug ```python import datasets datasets.set_caching_enabled(True) sst = datasets.load_dataset("sst") def foo(samples, i): print("executed", i[:10]) return samples # first call x = sst.map(foo, batched=True, with_indices=True, num_proc=2) print('\n'*3, "#" * 30, '\n'*3) # second call y = sst.map(foo, batched=True, with_indices=True, num_proc=2) # print version import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` ## Actual results This code prints the following output for me: ```bash No config specified, defaulting to: sst/default Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff) #0: 0%| | 0/5 [00:00<?, ?ba/s] #1: 0%| | 0/5 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281] executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009] executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281] executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009] executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281] executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009] #0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s] executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281] #1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s] #0: 0%| | 0/1 [00:00<?, ?ba/s] #1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] #0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s] executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560] #1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s] #0: 0%| | 0/2 [00:00<?, ?ba/s] #1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] #0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s] executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114] executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114] #1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s] ############################## #0: 0%| | 0/5 [00:00<?, ?ba/s] #1: 0%| | 0/5 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281] executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009] executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281] executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009] executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009] #0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s] executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281] executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281] #1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s] #0: 0%| | 0/1 [00:00<?, ?ba/s] #1: 0%| | 0/1 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] #0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s] executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560] #1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s] 
#0: 0%| | 0/2 [00:00<?, ?ba/s] #1: 0%| | 0/2 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114] #0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s] executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114] #1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s] - Datasets: 1.6.1 - Python: 3.8.3 (default, May 19 2020, 18:47:26) [GCC 7.3.0] - Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10 ``` ## Expected results Caching should work.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2322/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2319/comments
https://api.github.com/repos/huggingface/datasets/issues/2319/events
https://github.com/huggingface/datasets/issues/2319
876,251,376
MDU6SXNzdWU4NzYyNTEzNzY=
2,319
UnicodeDecodeError for OSCAR (Afrikaans)
{ "login": "sgraaf", "id": 8904453, "node_id": "MDQ6VXNlcjg5MDQ0NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgraaf", "html_url": "https://github.com/sgraaf", "followers_url": "https://api.github.com/users/sgraaf/followers", "following_url": "https://api.github.com/users/sgraaf/following{/other_user}", "gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions", "organizations_url": "https://api.github.com/users/sgraaf/orgs", "repos_url": "https://api.github.com/users/sgraaf/repos", "events_url": "https://api.github.com/users/sgraaf/events{/privacy}", "received_events_url": "https://api.github.com/users/sgraaf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.", "Awesome, thank you. 😃 ", "@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`." ]
2021-05-05T09:22:52
2021-05-05T10:57:31
2021-05-05T10:50:55
NONE
null
null
null
## Describe the bug When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("oscar", "unshuffled_deduplicated_af") ``` ## Expected results Anything but an error, really. ## Actual results ```python >>> from datasets import load_dataset >>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af") Downloading: 14.7kB [00:00, 4.91MB/s] Downloading: 3.07MB [00:00, 32.6MB/s] Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset builder_instance.download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare self._download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split for key, record in utils.tqdm( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples for line in f: File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined> ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` - Datasets: 1.6.2 - Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] - Platform: Windows-10-10.0.19041-SP0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2319/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2318/comments
https://api.github.com/repos/huggingface/datasets/issues/2318/events
https://github.com/huggingface/datasets/issues/2318
876,212,460
MDU6SXNzdWU4NzYyMTI0NjA=
2,318
[api request] API to obtain "dataset_module" dynamic path?
{ "login": "richardliaw", "id": 4529381, "node_id": "MDQ6VXNlcjQ1MjkzODE=", "avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardliaw", "html_url": "https://github.com/richardliaw", "followers_url": "https://api.github.com/users/richardliaw/followers", "following_url": "https://api.github.com/users/richardliaw/following{/other_user}", "gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions", "organizations_url": "https://api.github.com/users/richardliaw/orgs", "repos_url": "https://api.github.com/users/richardliaw/repos", "events_url": "https://api.github.com/users/richardliaw/events{/privacy}", "received_events_url": "https://api.github.com/users/richardliaw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)\r\n```\r\n\r\nLet me know if it is OK for you this way. \r\n\r\nI could set `MODULE_NAME_FOR_DYNAMIC_MODULES` as default value, so that you could instead obtain the path with:\r\n```\r\ndynamic_modules_path = datasets.load.init_dynamic_modules()\r\n```", "Hi @albertvillanova, the default value proposal seems great :) Looking forward to this!", "I like the idea as well ! thanks @albertvillanova ", "Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks.", "awesome work @albertvillanova !" ]
2021-05-05T08:40:48
2021-05-06T08:45:45
2021-05-06T07:57:54
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** This is an awesome library. It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34 This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import. I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future proof. **Describe the solution you'd like** `datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case. By offering this API, we will be able to address the following issues (by patching the ray integration sufficiently): https://github.com/huggingface/blog/issues/106 https://github.com/huggingface/transformers/issues/11565 https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34 https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
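Based on the snippet in the comments above, a hedged sketch of how a spawned worker (e.g. a Ray actor) might obtain and register the dynamic modules path; it assumes the default-argument form of `init_dynamic_modules()` described there is available in the installed version of `datasets`, and that making the path importable is sufficient for the worker:

```python
# Sketch only: obtain the dynamic modules path and make it importable in a
# freshly spawned process. Assumes the default-argument form of
# init_dynamic_modules() mentioned in the comments above.
import sys
import datasets

dynamic_modules_path = datasets.load.init_dynamic_modules()
if dynamic_modules_path not in sys.path:
    sys.path.append(dynamic_modules_path)
```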
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2318/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2316/comments
https://api.github.com/repos/huggingface/datasets/issues/2316/events
https://github.com/huggingface/datasets/issues/2316
875,756,353
MDU6SXNzdWU4NzU3NTYzNTM=
2,316
Incorrect version specification for pyarrow
{ "login": "cemilcengiz", "id": 32267027, "node_id": "MDQ6VXNlcjMyMjY3MDI3", "avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cemilcengiz", "html_url": "https://github.com/cemilcengiz", "followers_url": "https://api.github.com/users/cemilcengiz/followers", "following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}", "gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions", "organizations_url": "https://api.github.com/users/cemilcengiz/orgs", "repos_url": "https://api.github.com/users/cemilcengiz/repos", "events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}", "received_events_url": "https://api.github.com/users/cemilcengiz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Fixed by #2317." ]
2021-05-04T19:15:11
2021-05-05T10:10:03
2021-05-05T10:10:03
CONTRIBUTOR
null
null
null
## Describe the bug The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77). Also as a snippet: ```python "pyarrow>=1.0.0<4.0.0", ``` ## Steps to reproduce the bug ```bash pip install "pyarrow>=1.0.0<4.0.0" ``` ## Expected results It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive). ## Actual results pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0. This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well: ```bash conda env export InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s) ``` ## Fix suggestion Put a comma between the version limits which means replacing the line in setup.py file with the following: ```python "pyarrow>=1.0.0,<4.0.0", ``` ## Versions Paste the output of the following code: ```python - Datasets: 1.6.2 - Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] - Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2316/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2301/comments
https://api.github.com/repos/huggingface/datasets/issues/2301/events
https://github.com/huggingface/datasets/issues/2301
873,941,266
MDU6SXNzdWU4NzM5NDEyNjY=
2,301
Unable to setup dev env on Windows
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.microsoft.com/visual-cpp-build-tools/", "Hi @albertvillanova \r\n\r\nSorry for such a trivial issue ;-; \r\n\r\nThanks a lot." ]
2021-05-02T13:20:42
2021-05-03T15:18:01
2021-05-03T15:17:34
CONTRIBUTOR
null
null
null
Hi I tried installing the `".[dev]"` version on Windows 10 after cloning. Here is the error I'm facing: ```bat (env) C:\testing\datasets>pip install -e ".[dev]" Obtaining file:///C:/testing/datasets Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5) Collecting pyarrow>=0.17.1 Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB) Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1) Collecting pandas Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB) Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1) Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0) Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2) Collecting multiprocess Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB) Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0) Collecting huggingface_hub<0.1.0 Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB) Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1) Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0) Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3) Collecting pytest-xdist Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB) Collecting apache-beam>=2.24.0 Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB) Collecting elasticsearch Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB) Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43) Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43) Collecting moto[s3]==1.3.16 Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB) Collecting rarfile>=4.0 Using cached rarfile-4.0-py3-none-any.whl (28 kB) Collecting tensorflow>=2.3 Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB) Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1) Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1) Collecting bs4 Using cached bs4-0.0.1-py3-none-any.whl Collecting conllu Using cached conllu-4.4-py2.py3-none-any.whl (15 kB) Collecting langdetect Using cached langdetect-1.0.8-py3-none-any.whl Collecting lxml Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB) Collecting mwparserfromhell Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB) Collecting nltk Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB) Collecting openpyxl Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB) Collecting py7zr Using cached py7zr-0.15.2-py3-none-any.whl (66 kB) Collecting tldextract Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB) Collecting zstandard Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 
kB) Collecting bert_score>=0.3.6 Using cached bert_score-0.3.9-py3-none-any.whl (59 kB) Collecting rouge_score Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB) Collecting sacrebleu Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB) Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3) Collecting seqeval Using cached seqeval-1.2.2-py3-none-any.whl Collecting sklearn Using cached sklearn-0.0-py2.py3-none-any.whl Collecting jiwer Using cached jiwer-2.2.0-py3-none-any.whl (13 kB) Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2) Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1) Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3) Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2) Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1) Collecting black Using cached black-21.4b2-py3-none-any.whl (130 kB) Collecting isort Using cached isort-5.8.0-py3-none-any.whl (103 kB) Collecting flake8==3.7.9 Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7) Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1) Collecting entrypoints<0.4.0,>=0.3.0 Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB) Collecting pyflakes<2.2.0,>=2.1.0 Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB) Collecting pycodestyle<2.6.0,>=2.5.0 Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB) Collecting mccabe<0.7.0,>=0.6.0 Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB) Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0) Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1) Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3) Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1) Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0) Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0) Requirement already satisfied: cryptography>=2.3.0 in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7) Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0) Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1) Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0) Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10) Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1) Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3) Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0) Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125) Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3) Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1) Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0) Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1) Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0) Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0) Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0) Collecting hdfs<3.0.0,>=2.1.0 Using cached hdfs-2.6.0-py3-none-any.whl (33 kB) Collecting pyarrow>=0.17.1 Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB) Collecting fastavro<2,>=0.21.4 Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB) Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4) Collecting pymongo<4.0.0,>=3.8.0 Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB) Collecting crcmod<2.0,>=1.7 Using cached crcmod-1.7-py3-none-any.whl Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1 Using cached avro_python3-1.9.2.1-py3-none-any.whl Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3) Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2) Collecting oauth2client<5,>=2.0.1 Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB) 
Collecting pydot<2,>=1.2.0 Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB) Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8) Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1) Collecting matplotlib Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB) Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9) Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32) Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0) Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1) Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0) Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5) Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20) Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227) Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0) Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2) Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12) Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3) Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0) Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2) Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2) Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8) Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8) Requirement already satisfied: 
pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7) Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5) Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0) Collecting keras-preprocessing~=1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0) Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0) Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2) Collecting opt-einsum~=3.3.0 Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB) Collecting gast==0.3.3 Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB) Collecting google-pasta~=0.2 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0) Collecting astunparse~=1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting flatbuffers~=1.12.0 Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting h5py~=2.10.0 Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB) Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4) Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0) Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4) Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0) Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0) Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2) Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0) Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0) Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2) Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45) Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9) Collecting pathspec<1,>=0.8.1 Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB) Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2) Collecting appdirs Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB) Collecting mypy-extensions>=0.4.3 Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB) Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3) Collecting beautifulsoup4 Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB) Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1) Collecting python-Levenshtein Using cached python-Levenshtein-0.12.2.tar.gz (50 kB) Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1) Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0) Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1) Collecting multiprocess Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB) Using cached multiprocess-0.70.10.zip (2.4 MB) Using cached multiprocess-0.70.9-py3-none-any.whl Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1) Collecting et-xmlfile Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB) Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4) Collecting pyppmd<0.13.0,>=0.12.1 Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB) Collecting pycryptodome>=3.6.6 Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB) Collecting bcj-cffi<0.6.0,>=0.5.1 Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB) Collecting multivolumefile<0.3.0,>=0.2.0 Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB) Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1) Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0) Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1) Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0) Requirement already satisfied: colorama in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4) Collecting pytest-forked Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB) Collecting execnet>=1.1 Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB) Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5) Collecting portalocker==2.0.0 Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB) Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2) Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0) Building wheels for collected packages: python-Levenshtein Building wheel for python-Levenshtein (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18' cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\ Complete output (27 lines): running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.7 creating build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein running egg_info writing python_Levenshtein.egg-info\PKG-INFO writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt writing entry points to python_Levenshtein.egg-info\entry_points.txt writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt writing requirements to python_Levenshtein.egg-info\requires.txt writing top-level names to python_Levenshtein.egg-info\top_level.txt reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '*so' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt' copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein running build_ext building 'Levenshtein._levenshtein' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for python-Levenshtein Running setup.py clean for python-Levenshtein Failed to build python-Levenshtein Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam Running setup.py install for python-Levenshtein ... error ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\ Complete output (27 lines): running install running build running build_py creating build creating build\lib.win-amd64-3.7 creating build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein running egg_info writing python_Levenshtein.egg-info\PKG-INFO writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt writing entry points to python_Levenshtein.egg-info\entry_points.txt writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt writing requirements to python_Levenshtein.egg-info\requires.txt writing top-level names to python_Levenshtein.egg-info\top_level.txt reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '*so' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt' copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein running build_ext building 'Levenshtein._levenshtein' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output. ``` Here are conda and python versions: ```bat (env) C:\testing\datasets>conda --version conda 4.9.2 (env) C:\testing\datasets>python --version Python 3.7.10 ``` Please help me out. Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2301/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2300/comments
https://api.github.com/repos/huggingface/datasets/issues/2300/events
https://github.com/huggingface/datasets/issues/2300
873,928,169
MDU6SXNzdWU4NzM5MjgxNjk=
2,300
Add VoxPopuli
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternative could be to provide the segments start and end times as a Sequence and then it's up to the user to perform the segmentation on-the-fly if they wish?", "Hey @jfainberg,\r\n\r\nThis sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:\r\n\r\n```python\r\ndataset = load_dataset(\"voxpopuli\", \"french\")\r\n```\r\n\r\n=> so as a start I think your option 2 is the way to go!", "@polinaeterna VoxPopuli is available [here](https://huggingface.co/datasets/facebook/voxpopuli), so we can close this issue, right?\r\n", "@mariosasko yes, sure, closing it" ]
2021-05-02T12:17:40
2023-02-28T17:43:52
2023-02-28T17:43:51
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** Voxpopuli - **Description:** VoxPopuli is raw data collected from 2009-2020 European Parliament event recordings - **Paper:** https://arxiv.org/abs/2101.00390 - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** the biggest unlabeled speech dataset **Note**: Since the dataset is so huge, we should only add the `10k` config at the beginning. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2300/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2300/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2299/comments
https://api.github.com/repos/huggingface/datasets/issues/2299/events
https://github.com/huggingface/datasets/issues/2299
873,914,717
MDU6SXNzdWU4NzM5MTQ3MTc=
2,299
My iPhone
{ "login": "Jasonbuchanan1983", "id": 82856229, "node_id": "MDQ6VXNlcjgyODU2MjI5", "avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jasonbuchanan1983", "html_url": "https://github.com/Jasonbuchanan1983", "followers_url": "https://api.github.com/users/Jasonbuchanan1983/followers", "following_url": "https://api.github.com/users/Jasonbuchanan1983/following{/other_user}", "gists_url": "https://api.github.com/users/Jasonbuchanan1983/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jasonbuchanan1983/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jasonbuchanan1983/subscriptions", "organizations_url": "https://api.github.com/users/Jasonbuchanan1983/orgs", "repos_url": "https://api.github.com/users/Jasonbuchanan1983/repos", "events_url": "https://api.github.com/users/Jasonbuchanan1983/events{/privacy}", "received_events_url": "https://api.github.com/users/Jasonbuchanan1983/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2021-05-02T11:11:11
2021-07-23T09:24:16
2021-05-03T08:17:38
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2299/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2296/comments
https://api.github.com/repos/huggingface/datasets/issues/2296/events
https://github.com/huggingface/datasets/issues/2296
872,974,907
MDU6SXNzdWU4NzI5NzQ5MDc=
2,296
1
{ "login": "zinnyi", "id": 82880142, "node_id": "MDQ6VXNlcjgyODgwMTQy", "avatar_url": "https://avatars.githubusercontent.com/u/82880142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zinnyi", "html_url": "https://github.com/zinnyi", "followers_url": "https://api.github.com/users/zinnyi/followers", "following_url": "https://api.github.com/users/zinnyi/following{/other_user}", "gists_url": "https://api.github.com/users/zinnyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/zinnyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zinnyi/subscriptions", "organizations_url": "https://api.github.com/users/zinnyi/orgs", "repos_url": "https://api.github.com/users/zinnyi/repos", "events_url": "https://api.github.com/users/zinnyi/events{/privacy}", "received_events_url": "https://api.github.com/users/zinnyi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
2021-04-30T17:53:49
2021-05-03T08:17:31
2021-05-03T08:17:31
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2296/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2294/comments
https://api.github.com/repos/huggingface/datasets/issues/2294/events
https://github.com/huggingface/datasets/issues/2294
872,136,075
MDU6SXNzdWU4NzIxMzYwNzU=
2,294
Slow #0 when using map to tokenize.
{ "login": "VerdureChen", "id": 31714566, "node_id": "MDQ6VXNlcjMxNzE0NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/31714566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VerdureChen", "html_url": "https://github.com/VerdureChen", "followers_url": "https://api.github.com/users/VerdureChen/followers", "following_url": "https://api.github.com/users/VerdureChen/following{/other_user}", "gists_url": "https://api.github.com/users/VerdureChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/VerdureChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VerdureChen/subscriptions", "organizations_url": "https://api.github.com/users/VerdureChen/orgs", "repos_url": "https://api.github.com/users/VerdureChen/repos", "events_url": "https://api.github.com/users/VerdureChen/events{/privacy}", "received_events_url": "https://api.github.com/users/VerdureChen/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?\r\nThere are no difference between process 0 and the others except that it processes the first shard of the dataset.", "Hi, I have found the reason of it. Before using the map function to tokenize the data, I concatenate the wikipedia and bookcorpus first, like this:\r\n```if args.dataset_name1 is not None:\r\n dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split=\"train\")\r\n dataset1 = dataset1.remove_columns('title')\r\n if args.dataset_name2 is not None:\r\n dataset2 = load_dataset(args.dataset_name2, args.dataset_config_name2,split=\"train\")\r\n assert dataset1.features.type == dataset2.features.type, str(dataset1.features.type)+';'+str(dataset2.features.type)\r\n datasets12 = concatenate_datasets([dataset1, dataset2], split='train')\r\n```\r\nWhen I just use one datasets, e.g. wikipedia, the problem seems no longer exist:\r\n![image](https://user-images.githubusercontent.com/31714566/116967059-13d24380-ace4-11eb-8d14-b7b9c9a275cc.png)\r\n\r\nBookcorpus has more row numbers than Wikipedia, however, it takes much more time to process each batch of wiki than that of bookcorpus. When we first concatenate two datasets and then use _map_ to process the concatenated datasets, e.g. `num_proc=5`, process 0 has to process all of the wikipedia data, causing the problem that #0 takes a longer time to finish the job. \r\n\r\nThe problem is caused by the different characteristic of different datasets. One solution might be using _map_ first to process two datasets seperately, then concatenate the tokenized and processed datasets before input to the `Dataloader`.\r\n\r\n", "That makes sense ! You can indeed use `map` on both datasets separately and then concatenate.\r\nAnother option is to concatenate, then shuffle, and then `map`." ]
2021-04-30T08:00:33
2021-05-04T11:00:11
null
NONE
null
null
null
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, )` to tokenize with multiprocessing. However, I have found that when `num_proc` > 1, process _#0_ is much slower than the others. It looks like this: ![image](https://user-images.githubusercontent.com/31714566/116665555-81246280-a9cc-11eb-8a37-6e608ab310d0.png) It takes more than 12 hours for #0, while the others finish in about half an hour. Could anyone tell me whether this is normal, and are there any methods to speed it up?
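A sketch of the workaround that emerges from the comments above: tokenize each corpus separately (so one worker is not handed the entire slower Wikipedia shard) and concatenate the already-processed datasets afterwards. The dataset names, config string and tokenizer checkpoint below are placeholders for the user's actual setup:

```python
# Hedged sketch: map each corpus on its own, then concatenate the tokenized
# results. Names and configs below are placeholders for the user's setup.
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True)

wiki = load_dataset("wikipedia", "20200501.en", split="train").remove_columns("title")
books = load_dataset("bookcorpus", split="train")

wiki_tok = wiki.map(tokenize_function, batched=True, num_proc=5, remove_columns=wiki.column_names)
books_tok = books.map(tokenize_function, batched=True, num_proc=5, remove_columns=books.column_names)

tokenized_datasets = concatenate_datasets([wiki_tok, books_tok])
```

The alternative mentioned in the thread — concatenate first, shuffle, then map — spreads the two corpora across all worker shards and avoids the imbalance as well.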
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2294/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2288/comments
https://api.github.com/repos/huggingface/datasets/issues/2288/events
https://github.com/huggingface/datasets/issues/2288
871,111,235
MDU6SXNzdWU4NzExMTEyMzU=
2,288
Load_dataset for local CSV files
{ "login": "sstojanoska", "id": 17052700, "node_id": "MDQ6VXNlcjE3MDUyNzAw", "avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sstojanoska", "html_url": "https://github.com/sstojanoska", "followers_url": "https://api.github.com/users/sstojanoska/followers", "following_url": "https://api.github.com/users/sstojanoska/following{/other_user}", "gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}", "starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions", "organizations_url": "https://api.github.com/users/sstojanoska/orgs", "repos_url": "https://api.github.com/users/sstojanoska/repos", "events_url": "https://api.github.com/users/sstojanoska/events{/privacy}", "received_events_url": "https://api.github.com/users/sstojanoska/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# load the dataset and copy the features\r\ndef process(ex):\r\n return {\"tokens\": ast.literal_eval(ex[\"tokens\"]), \"labels\": ast.literal_eval(ex[\"labels\"])}\r\ndataset = dataset.map(process, features=new_features)\r\n```\r\n", "Hi,\r\n\r\nThanks for the reply.\r\nI have already used ```ast.literal_eval``` to evaluate the string into list, but I was getting another error:\r\n```\r\nArrowInvalid: Could not convert X with type str: tried to convert to int\r\n```\r\nWhy this happens ? Should labels be mapped to their ids and use int instead of str ?", "Yes, just map the labels to their ids." ]
2021-04-29T15:01:10
2021-06-15T13:49:26
2021-06-15T13:49:26
NONE
null
null
null
The method load_dataset fails to correctly load a dataset from CSV. Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each holding a list of strings. Row example: ```tokens | labels ['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ] ``` The method loads each list as a string (e.g. "['I' , 'am', 'John']"). To solve this issue, I copied the Datasets.Features, created Sequence types (instead of Value) and tried to cast the feature types ``` new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None)) new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags))) dataset = dataset.cast(new_features) ``` but I got the following error ``` ArrowNotImplementedError: Unsupported cast from string to list using function cast_list ``` Moreover, I tried to set the features parameter of the load_dataset method to my new_features, but this fails as well. How can this be solved?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2288/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2285/comments
https://api.github.com/repos/huggingface/datasets/issues/2285/events
https://github.com/huggingface/datasets/issues/2285
871,005,236
MDU6SXNzdWU4NzEwMDUyMzY=
2,285
Help understanding how to build a dataset for language modeling as with the old TextDataset
{ "login": "danieldiezmallo", "id": 46021411, "node_id": "MDQ6VXNlcjQ2MDIxNDEx", "avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danieldiezmallo", "html_url": "https://github.com/danieldiezmallo", "followers_url": "https://api.github.com/users/danieldiezmallo/followers", "following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}", "gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}", "starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions", "organizations_url": "https://api.github.com/users/danieldiezmallo/orgs", "repos_url": "https://api.github.com/users/danieldiezmallo/repos", "events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}", "received_events_url": "https://api.github.com/users/danieldiezmallo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\nreturn tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=max_seq_length,\r\n)\r\n\r\ntokenized_dataset = dataset.map(\r\ntokenize_function,\r\nbatched=True,\r\nnum_proc=num_proc,\r\nremove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n# Concatenate all texts.\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\ntotal_length = len(concatenated_examples[list(examples.keys())[0]])\r\n# We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n# you can customize this part to your needs.\r\ntotal_length = (total_length // max_seq_length) * max_seq_length\r\n# Split by chunks of max_len.\r\nresult = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n}\r\nreturn result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\ngroup_texts,\r\nbatched=True,\r\nnum_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n", "Resolved" ]
2021-04-29T13:16:45
2021-05-19T07:22:45
2021-05-19T07:22:39
NONE
null
null
null
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens limit of most tokenizers. I would like to understand what is the process to build a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator: ``` model_checkpoint = 'distilbert-base-uncased' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="path/to/text_file.txt", block_size=512, ) ``` For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer: ``` import datasets dataset = datasets.load_dataset('path/to/text_file.txt') model_checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) tokenized_datasets ``` So what would be the "standard" way of creating a dataset in the way it was done before? Thank you very much for the help :))
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2285/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2279/comments
https://api.github.com/repos/huggingface/datasets/issues/2279/events
https://github.com/huggingface/datasets/issues/2279
870,431,662
MDU6SXNzdWU4NzA0MzE2NjI=
2,279
Compatibility with Ubuntu 18 and GLIBC 2.27?
{ "login": "tginart", "id": 11379648, "node_id": "MDQ6VXNlcjExMzc5NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tginart", "html_url": "https://github.com/tginart", "followers_url": "https://api.github.com/users/tginart/followers", "following_url": "https://api.github.com/users/tginart/following{/other_user}", "gists_url": "https://api.github.com/users/tginart/gists{/gist_id}", "starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tginart/subscriptions", "organizations_url": "https://api.github.com/users/tginart/orgs", "repos_url": "https://api.github.com/users/tginart/repos", "events_url": "https://api.github.com/users/tginart/events{/privacy}", "received_events_url": "https://api.github.com/users/tginart/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?", "Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685" ]
2021-04-28T22:08:07
2021-04-29T07:42:42
2021-04-29T07:42:42
NONE
null
null
null
## Describe the bug For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04). I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC. ## Steps to reproduce the bug 1. clone the transformers repo 2. move to examples/pytorch/language-modeling 3. run example command: ```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm``` ## Expected results As described in the transformers repo. ## Actual results ```Traceback (most recent call last): File "run_clm.py", line 34, in <module> from transformers import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__ return super().__getattr__(name) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module> from . import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module> from .tokenization_layoutlm import LayoutLMTokenizer File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module> from ..bert.tokenization_bert import BertTokenizer File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module> from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module> from .tokenization_utils_base import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module> from tokenizers import AddedToken File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module> from .tokenizers import ( ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so) ``` ## Versions Paste the output of the following code: ``` - Datasets: 1.6.1 - Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] - Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid ```
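Before hand-rolling a newer GLIBC, it can help to confirm which version the system actually provides. A small diagnostic sketch (not taken from the issue) using only standard-library calls; on Ubuntu 18.04 it is expected to report 2.27, below the 2.29 the traceback above requires.

```python
import os
import platform

# C library the interpreter is linked against, e.g. ('glibc', '2.27')
print(platform.libc_ver())

# On glibc-based Linux systems this returns a string such as 'glibc 2.27'
print(os.confstr("CS_GNU_LIBC_VERSION"))
```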
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2279/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2278/comments
https://api.github.com/repos/huggingface/datasets/issues/2278/events
https://github.com/huggingface/datasets/issues/2278
870,088,059
MDU6SXNzdWU4NzAwODgwNTk=
2,278
Loss result in GPTNeoForCausalLM
{ "login": "Yossillamm", "id": 51174606, "node_id": "MDQ6VXNlcjUxMTc0NjA2", "avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yossillamm", "html_url": "https://github.com/Yossillamm", "followers_url": "https://api.github.com/users/Yossillamm/followers", "following_url": "https://api.github.com/users/Yossillamm/following{/other_user}", "gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions", "organizations_url": "https://api.github.com/users/Yossillamm/orgs", "repos_url": "https://api.github.com/users/Yossillamm/repos", "events_url": "https://api.github.com/users/Yossillamm/events{/privacy}", "received_events_url": "https://api.github.com/users/Yossillamm/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library" ]
2021-04-28T15:39:52
2021-05-06T16:14:23
2021-05-06T16:14:23
NONE
null
null
null
Is there any way to get the "loss" and "logits" results in the GPT-Neo API?
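The maintainers redirected this question to the `transformers` repo. For context, a hedged sketch of how causal-LM heads in `transformers` generally expose both `loss` and `logits` when `labels` are supplied; the checkpoint name is an assumption and this is not an answer taken from the issue itself.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed GPT-Neo checkpoint, used only for illustration
model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello world", return_tensors="pt")
# Passing labels makes the forward pass return a language-modeling loss alongside the logits
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss, outputs.logits.shape)
```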
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2278/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2276/comments
https://api.github.com/repos/huggingface/datasets/issues/2276/events
https://github.com/huggingface/datasets/issues/2276
870,010,511
MDU6SXNzdWU4NzAwMTA1MTE=
2,276
concatenate_datasets loads all the data into memory
{ "login": "chbensch", "id": 7063207, "node_id": "MDQ6VXNlcjcwNjMyMDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chbensch", "html_url": "https://github.com/chbensch", "followers_url": "https://api.github.com/users/chbensch/followers", "following_url": "https://api.github.com/users/chbensch/following{/other_user}", "gists_url": "https://api.github.com/users/chbensch/gists{/gist_id}", "starred_url": "https://api.github.com/users/chbensch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chbensch/subscriptions", "organizations_url": "https://api.github.com/users/chbensch/orgs", "repos_url": "https://api.github.com/users/chbensch/repos", "events_url": "https://api.github.com/users/chbensch/events{/privacy}", "received_events_url": "https://api.github.com/users/chbensch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\n<ipython-input-6-9766d77530b9> in <module>\r\n 20 print(file_name)\r\n 21 cv_batch = load_from_disk(file_name)\r\n---> 22 cv_sampled_train = concatenate_datasets([cv_sampled_train, cv_batch])\r\n 23 \r\n 24 print(\"Saving to disk!\")\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in concatenate_datasets(dsets, info, split, axis)\r\n 2891 \r\n 2892 # Concatenate tables\r\n-> 2893 table = concat_tables([dset._data for dset in dsets if len(dset._data) > 0], axis=axis)\r\n 2894 table = update_metadata_with_features(table, None)\r\n 2895 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in concat_tables(tables, axis)\r\n 837 if len(tables) == 1:\r\n 838 return tables[0]\r\n--> 839 return ConcatenationTable.from_tables(tables, axis=axis)\r\n 840 \r\n 841 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in from_tables(cls, tables, axis)\r\n 697 return result\r\n 698 \r\n--> 699 blocks = to_blocks(tables[0])\r\n 700 for table in tables[1:]:\r\n 701 table_blocks = to_blocks(table)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in to_blocks(table)\r\n 669 return [[InMemoryTable(table)]]\r\n 670 elif isinstance(table, ConcatenationTable):\r\n--> 671 return copy.deepcopy(table.blocks)\r\n 672 else:\r\n 673 return [[table]]\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in __deepcopy__(self, memo)\r\n 143 # by adding it to the memo, self.table won't be copied\r\n 144 memo[id(self.table)] = self.table\r\n--> 145 return _deepcopy(self, memo)\r\n 146 \r\n 147 def __getstate__(self):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in _deepcopy(x, memo)\r\n 62 memo[id(x)] = result\r\n 63 for k, v in x.__dict__.items():\r\n---> 64 setattr(result, k, copy.deepcopy(v, memo))\r\n 65 return result\r\n 66 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if 
issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = 
_deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 159 reductor = getattr(x, \"__reduce_ex__\", None)\r\n 160 if reductor is not None:\r\n--> 161 rv = reductor(4)\r\n 162 else:\r\n 163 reductor = getattr(x, \"__reduce__\", None)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.__reduce_ex__()\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.to_pybytes()\r\n\r\nMemoryError: \r\n\r\n```", "Hi ! this looks like an important issue. Let me try to reproduce this.\r\nCc @samsontmr this might be related to the memory issue you have in #2134 ", "@lhoestq Just went to open a similar issue.\r\n\r\nIt seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.\r\n\r\nTo find the bug, I modified the `_deepcopy` function in `table.py` as follows:\r\n```python\r\ndef _deepcopy(x, memo: dict):\r\n \"\"\"deepcopy a regular class instance\"\"\"\r\n import psutil # pip install this package\r\n import time\r\n cls = x.__class__\r\n result = cls.__new__(cls)\r\n memo[id(x)] = result\r\n for k, v in x.__dict__.items():\r\n print(\"=\"* 50)\r\n print(\"Current memory:\", psutil.virtual_memory().percent)\r\n print(f\"Saving object {k} with value {v}\")\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n time.sleep(5)\r\n print(\"Memory after copy:\", psutil.virtual_memory().percent)\r\n return result\r\n```\r\nTest script:\r\n```python\r\nimport copy\r\nfrom datasets import load_dataset\r\nbk = load_dataset(\"bookcorpus\", split=\"train\")\r\nbk_copy = copy.deepcopy(bk)\r\n```", "Thanks for the insights @mariosasko ! I'm working on a fix.\r\nSince this is a big issue I'll make a patch release as soon as this is fixed", "Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues", "We just released `datasets` 1.6.2 that includes the fix :)", "thanks it works like a charm! :)" ]
2021-04-28T14:27:21
2021-05-03T08:41:55
2021-05-03T08:41:55
NONE
null
null
null
## Describe the bug When I try to concatenate 2 datasets (10 GB each), the entire data is loaded into memory instead of being written directly to disk. Interestingly, this happens when trying to save the new dataset to disk or when concatenating it again. ![image](https://user-images.githubusercontent.com/7063207/116420321-2b21b480-a83e-11eb-9006-8f6ca729fb6f.png) ## Steps to reproduce the bug ```python from datasets import concatenate_datasets, load_from_disk test_sampled_pro = load_from_disk("test_sampled_pro") val_sampled_pro = load_from_disk("val_sampled_pro") big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro]) # Loaded to memory big_set.save_to_disk("big_set") # Loaded to memory big_set = concatenate_datasets([big_set, val_sampled_pro]) ``` ## Expected results The data should be loaded into memory in batches and then saved directly to disk. ## Actual results The entire dataset is loaded into memory and then saved to the hard disk. ## Versions Paste the output of the following code: ```python - Datasets: 1.6.1 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10 ```
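According to the maintainers' replies above, the deep-copy behaviour was fixed in `datasets` 1.6.2, so one pragmatic guard (an assumption on my part, not code from the issue) is to check the installed version before running the concatenation:

```python
import datasets
from datasets import concatenate_datasets, load_from_disk
from packaging import version

# The fix for the deepcopy-into-RAM behaviour shipped in 1.6.2, per the maintainers' comments
assert version.parse(datasets.__version__) >= version.parse("1.6.2"), (
    "Upgrade `datasets` to >= 1.6.2 before concatenating large on-disk datasets"
)

test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
big_set.save_to_disk("big_set")
```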
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2276/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2275/comments
https://api.github.com/repos/huggingface/datasets/issues/2275/events
https://github.com/huggingface/datasets/issues/2275
869,378,311
MDU6SXNzdWU4NjkzNzgzMTE=
2,275
SNLI dataset has labels of -1
{ "login": "puzzler10", "id": 17426779, "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/puzzler10", "html_url": "https://github.com/puzzler10", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "repos_url": "https://api.github.com/users/puzzler10/repos", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!" ]
2021-04-28T00:32:25
2021-05-17T13:34:18
2021-05-17T13:34:18
NONE
null
null
null
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set. It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to put them in, but it is still unclear why they are there. The current workaround is to just drop those rows before training any model. Perhaps the documentation should be updated.
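Building on the filter shown in the comment above, a short sketch that drops the unlabeled (-1) rows from every split rather than only the test set; the -1 convention is the one described in this issue.

```python
from datasets import load_dataset

snli = load_dataset("snli")
# Rows whose gold_label was empty carry label -1; drop them from every split before training
snli = snli.filter(lambda example: example["label"] != -1)
print({split: ds.num_rows for split, ds in snli.items()})
```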
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2275/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2272/comments
https://api.github.com/repos/huggingface/datasets/issues/2272/events
https://github.com/huggingface/datasets/issues/2272
869,017,977
MDU6SXNzdWU4NjkwMTc5Nzc=
2,272
Bug in Dataset.class_encode_column
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore" ]
2021-04-27T16:13:18
2021-04-30T12:54:27
2021-04-30T12:54:27
MEMBER
null
null
null
## Describe the bug All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded. ## Expected results All the original columns should be kept. This needs regression tests.
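Since the issue explicitly asks for regression tests, here is a hedged sketch of what such a check could look like: a toy two-column dataset where the non-encoded column must survive the call. Column names and values are invented for illustration; on the buggy version described above, the first assertion would fail.

```python
from datasets import Dataset, ClassLabel

ds = Dataset.from_dict({"label": ["pos", "neg", "pos"], "text": ["a", "b", "c"]})
encoded = ds.class_encode_column("label")

# The untouched column must still be present after encoding...
assert set(encoded.column_names) == {"label", "text"}
# ...and the encoded column must now be a ClassLabel feature
assert isinstance(encoded.features["label"], ClassLabel)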
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2272/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2271/comments
https://api.github.com/repos/huggingface/datasets/issues/2271/events
https://github.com/huggingface/datasets/issues/2271
869,002,141
MDU6SXNzdWU4NjkwMDIxNDE=
2,271
Synchronize table metadata with features
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "See PR #2274 " ]
2021-04-27T15:55:13
2022-06-01T17:13:21
2022-06-01T17:13:21
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767): > Metadata stored in the schema is just redundant information regarding the feature types. It is used when calling Dataset.from_file to know which feature types to use. These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`. However this is something that's almost never tested properly. **Describe the solution you'd like** We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`).
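A rough sketch of how such a consistency check might be written (an assumption on my part, not an existing test): decode the feature types back out of the Arrow schema with `Features.from_arrow_schema` and compare them with the dataset's declared features.

```python
from datasets import Dataset, Features

ds = Dataset.from_dict({"a": [1, 2, 3]})

# The features inferred from the pyarrow schema should always agree with ds.features;
# a desync between the two is exactly what this issue asks to prevent.
schema_features = Features.from_arrow_schema(ds.data.schema)
assert schema_features.type == ds.features.type
```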
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2271/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2267/comments
https://api.github.com/repos/huggingface/datasets/issues/2267/events
https://github.com/huggingface/datasets/issues/2267
868,291,129
MDU6SXNzdWU4NjgyOTExMjk=
2,267
DatasetDict save load Failing test in 1.6 not in 1.5
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Thanks for reporting ! We're looking into it", "I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?", "Hi, I just ran into a similar error. Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load_dataset, DatasetDict\r\nds = load_dataset('super_glue', 'multirc')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\n```bash\r\nReusing dataset super_glue (/home/idahl/.cache/huggingface/datasets/super_glue/multirc/1.0.2/2fb163bca9085c1deb906aff20f00c242227ff704a4e8c9cfdfe820be3abfc83)\r\nTraceback (most recent call last):\r\n File \"/home/idahl/eval-util-expl/multirc/tmp.py\", line 7, in <module>\r\n ds = DatasetDict.load_from_disk('tempds')\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 710, in load_from_disk\r\n dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 687, in load_from_disk\r\n return Dataset(\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 274, in __init__\r\n raise ValueError(\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'answer': Value(dtype='string', id=None), 'idx': {'answer': Value(dtype='int32', id=None), 'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None)}, 'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct<answer: string, idx: struct<answer: int32, paragraph: int32, question: int32>, label: int64, paragraph: string, question: string>\r\n\r\nbut expected something like\r\n{'answer': Value(dtype='string', id=None), 'idx': {'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None), 'answer': Value(dtype='int32', id=None)}, 'label': Value(dtype='int64', id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct<answer: string, idx: struct<paragraph: int32, question: int32, answer: int32>, label: int64, paragraph: string, question: string>\r\n\r\n```\r\n\r\nThe non-matching part seems to be\r\n`'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None),`\r\nvs \r\n`'label': Value(dtype='int64', id=None),`\r\n\r\nAnd the order in the `<struct...` being different, which might cause the [features.type != inferred_features.type](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L274) condition to become true and raise this ValueError.\r\n\r\n\r\nI am using datasets version 1.6.2.\r\n\r\nEdit: can confirm, this works without error in version 1.5.0", "My current workaround is to remove the idx feature:\r\n\r\n```\r\n\r\nfrom datasets import load_dataset, DatasetDict, Value\r\nds = load_dataset('super_glue', 'multirc')\r\nds = ds.remove_columns('idx')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\nworks.", "It looks like this issue comes from the order of the fields in the 'idx' struct that is different for some reason.\r\nI'm looking into it. 
Note that as a workaround you can also flatten the nested features with `ds = ds.flatten()`", "I just pushed a fix on `master`. We'll do a new release soon !\r\n\r\nThanks for reporting" ]
2021-04-27T00:03:25
2021-05-28T15:27:34
null
NONE
null
null
null
## Describe the bug We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema. Downgrading to `>1.6` -- fixes the problem. ## Steps to reproduce the bug ```python ### Load a dataset dict from jsonl path = '/test/foo' ds_dict.save_to_disk(path) ds_from_disk = DatasetDict.load_from_disk(path). ## <-- this is where I see the error on 1.6 ``` ## Expected results Upgrading to 1.6 shouldn't break that test. We should be able to serialize to and from disk. ## Actual results ``` # Infer features if None inferred_features = Features.from_arrow_schema(arrow_table.schema) if self.info.features is None: self.info.features = inferred_features # Infer fingerprint if None if self._fingerprint is None: self._fingerprint = generate_fingerprint(self) # Sanity checks assert self.features is not None, "Features can't be None in a Dataset object" assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object" if self.info.features.type != inferred_features.type: > raise ValueError( "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format( self.info.features, self.info.features.type, inferred_features, inferred_features.type ) ) E ValueError: External features info don't match the dataset: E Got E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: 
list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<child: int64, child_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, color: string, head: int64, head_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, label: string>>, spans: list<item: struct<end: int64, label: string, start: int64, text: string, token_end: int64, token_start: int64, type: string>>, text: string, tokens: list<item: struct<disabled: bool, end: int64, id: int64, start: int64, text: string, ws: bool>>> E E but expected something like E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<head: int64, child: int64, head_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, child_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, color: string, label: string>>, spans: list<item: struct<text: string, start: int64, token_start: int64, token_end: int64, end: 
int64, type: string, label: string>>, text: string, tokens: list<item: struct<text: string, start: int64, end: int64, id: int64, ws: bool, disabled: bool>>> ../../../../../.virtualenvs/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:274: ValueError ``` ## Versions - Datasets: 1.6.1 - Python: 3.8.5 (default, Jan 26 2021, 10:01:04) [Clang 12.0.0 (clang-1200.0.32.2)] - Platform: macOS-10.15.7-x86_64-i386-64bit ```
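Pulling the workarounds from the comments above into one sketch: flatten (or drop) the nested `idx` struct before saving so the save/load round trip succeeds on 1.6.x. The `super_glue/multirc` example is the public reproduction given by another commenter, not this reporter's private data.

```python
from datasets import load_dataset, DatasetDict

ds = load_dataset("super_glue", "multirc")

# Either flatten the nested struct columns...
ds = ds.flatten()
# ...or drop the offending struct entirely: ds = ds.remove_columns("idx")

ds.save_to_disk("tempds")
reloaded = DatasetDict.load_from_disk("tempds")
```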
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2267/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2262/comments
https://api.github.com/repos/huggingface/datasets/issues/2262/events
https://github.com/huggingface/datasets/issues/2262
867,325,351
MDU6SXNzdWU4NjczMjUzNTE=
2,262
NewsPH NLI dataset script fails to access test data.
{ "login": "jinmang2", "id": 37775784, "node_id": "MDQ6VXNlcjM3Nzc1Nzg0", "avatar_url": "https://avatars.githubusercontent.com/u/37775784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinmang2", "html_url": "https://github.com/jinmang2", "followers_url": "https://api.github.com/users/jinmang2/followers", "following_url": "https://api.github.com/users/jinmang2/following{/other_user}", "gists_url": "https://api.github.com/users/jinmang2/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinmang2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinmang2/subscriptions", "organizations_url": "https://api.github.com/users/jinmang2/orgs", "repos_url": "https://api.github.com/users/jinmang2/repos", "events_url": "https://api.github.com/users/jinmang2/events{/privacy}", "received_events_url": "https://api.github.com/users/jinmang2/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset." ]
2021-04-26T06:44:41
2021-04-29T09:32:03
2021-04-29T09:30:20
NONE
null
null
null
In Newsph-NLI Dataset (#1192), it fails to access test data. According to the script below, the download manager will download the train data when trying to download the test data. https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71 If you download it according to the script above, you can see that train and test receive the same data as shown below. ```python >>> from datasets import load_dataset >>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py") >>> newsph_nli DatasetDict({ train: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) test: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) validation: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 90000 }) }) >>> newsph_nli["train"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} >>> newsph_nli["test"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} ``` In local, I modified the code of the source as below and got the correct result. ```python 71 test_path = os.path.join(download_path, "test.csv") ``` ```python >>> from datasets import load_dataset >>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py") >>> newsph_nli DatasetDict({ train: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) test: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 9000 }) validation: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 90000 }) }) >>> newsph_nli["train"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} >>> newsph_nli["test"][0] {'hypothesis': '-- JAI (@JaiPaller) September 13, 2019', 'label': 1, 'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'} ``` I don't have experience with open source pull requests, so I suggest that you reflect them in the source. Thank you for reading :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2262/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2256/comments
https://api.github.com/repos/huggingface/datasets/issues/2256/events
https://github.com/huggingface/datasets/issues/2256
866,708,609
MDU6SXNzdWU4NjY3MDg2MDk=
2,256
Running `dataset.map` with `num_proc > 1` uses a lot of memory
{ "login": "roskoN", "id": 8143425, "node_id": "MDQ6VXNlcjgxNDM0MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roskoN", "html_url": "https://github.com/roskoN", "followers_url": "https://api.github.com/users/roskoN/followers", "following_url": "https://api.github.com/users/roskoN/following{/other_user}", "gists_url": "https://api.github.com/users/roskoN/gists{/gist_id}", "starred_url": "https://api.github.com/users/roskoN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roskoN/subscriptions", "organizations_url": "https://api.github.com/users/roskoN/orgs", "repos_url": "https://api.github.com/users/roskoN/repos", "events_url": "https://api.github.com/users/roskoN/events{/privacy}", "received_events_url": "https://api.github.com/users/roskoN/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! We are working on this and we'll do a patch release very soon.", "We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)" ]
2021-04-24T09:56:20
2021-04-26T17:12:15
2021-04-26T17:12:15
NONE
null
null
null
## Describe the bug Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping on disk and becomes very slow. ## Steps to reproduce the bug ```python from datasets import load_dataset dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False) def _prepare_sample(batch): return {"input_ids": list(), "attention_mask": list()} for split_name, dataset_split in list(dstc8_datset.items()): print(f"Processing {split_name}") encoded_dataset_split = dataset_split.map( function=_prepare_sample, batched=True, num_proc=4, remove_columns=dataset_split.column_names, batch_size=10, writer_batch_size=10, keep_in_memory=False, ) print(encoded_dataset_split) path = f"./data/encoded_{split_name}" encoded_dataset_split.save_to_disk(path) ``` ## Expected results Memory usage should stay within reasonable boundaries. ## Actual results This is the htop output from running the provided script. ![image](https://user-images.githubusercontent.com/8143425/115954836-66954980-a4f3-11eb-8340-0153bdc3a475.png) ## Versions ``` - Datasets: 1.6.0 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10 ``` Running on WSL2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2256/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2252/comments
https://api.github.com/repos/huggingface/datasets/issues/2252/events
https://github.com/huggingface/datasets/issues/2252
865,870,710
MDU6SXNzdWU4NjU4NzA3MTA=
2,252
Slow dataloading with big datasets issue persists
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Sorry to hear that. This may come from another issue then.\r\n\r\nFirst can we check if this latency comes from the dataset itself ?\r\nYou can try to load your dataset and benchmark the speed of querying random examples inside it ?\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(...) # or from load_dataset...\r\n\r\n_start = time.time()\r\nn = 100\r\nfor i in np.random.default_rng(42).integers(0, len(dataset), size=n):\r\n _ = dataset[i]\r\nprint(time.time() - _start)\r\n```\r\n\r\nIf we see a significant speed difference between your two datasets then it would mean that there's an issue somewhere", "Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:\r\n* 60GB\r\n```\r\nloading took: 22.618776321411133\r\nramdom indexing 100 times took: 0.10214924812316895\r\n```\r\n\r\n* 600GB\r\n```\r\nloading took: 1176.1764674186707\r\nramdom indexing 100 times took: 2.853600025177002\r\n```\r\n\r\nHmm.. I double checked that it's version 1.6.0. The difference seems quite big, could it be related to the running environment? \r\n", "I'm surprised by the speed change. Can you give more details about your dataset ?\r\nThe speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.\r\nYou can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory).\r\n\r\nAlso can you explain what parameters you used if you used `map` calls ?\r\nAlso if you have some code that reproduces the issue I'd be happy to investigate it.", "Also if you could give us more info about your env like your OS, version of pyarrow and if you're using an HDD or a SSD", "Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeling).\r\n```\r\nlen(batches):\r\n492763\r\n\r\nbatches[0]: \r\npyarrow.RecordBatch\r\nattention_mask: list<item: uint8>\r\n child 0, item: uint8\r\ninput_ids: list<item: int16>\r\n child 0, item: int16\r\nspecial_tokens_mask: list<item: uint8>\r\n child 0, item: uint8\r\ntoken_type_ids: list<item: uint8>\r\n child 0, item: uint8\r\n```\r\n\r\nHere the some parameters to `map` function just in case it is relevant:\r\n```\r\nnum_proc=1 # as multi processing is slower in my case\r\nload_from_cache_file=False\r\n```\r\n", "Regarding the environment, I am running the code on a cloud server. Here are some info:\r\n```\r\nUbuntu 18.04.5 LTS # cat /etc/issue\r\npyarrow 3.0.0 # pip list | grep pyarrow\r\n```\r\nThe data is stored in SSD and it is mounted to the machine via Network File System.\r\n\r\nIf you could point me to some of the commands to check the details of the environment, I would be happy to provide relevant information @lhoestq !", "I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. 
Feel free to ask me for more info.\r\n\r\n```python\r\nclass MyModel(pytorch_lightning.LightningModule)\r\n def setup(self, stage):\r\n self.dataset = datasets.load_from_disk(path)\r\n self.dataset.set_format(\"torch\")\r\n\r\n def train_dataloader(self):\r\n collate_fn = transformers.DataCollatorForLanguageModeling(\r\n tokenizer=transformers.ElectraTokenizerFast.from_pretrained(tok_path)\r\n )\r\n dataloader = torch.utils.DataLoader(\r\n self.dataset,\r\n batch_size=32,\r\n collate_fn=collate_fn,\r\n num_workers=8,\r\n pin_memory=True,\r\n )\r\n```", "Hi ! Sorry for the delay I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?\r\nI'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have lead to slow downs", "Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x.", "@lhoestq and @hwijeen\r\n\r\nDespite upgrading to datasets 1.6.2, still experiencing extremely slow (2h00) loading for a 300Gb local dataset shard size 1.1Gb on local HDD (40Mb/s read speed). This corresponds almost exactly to total data divided by reading speed implying that it reads the entire dataset at each load.\r\n\r\nStack details:\r\n=========\r\n\r\n> GCC version: Could not collect\r\n> Clang version: Could not collect\r\n> CMake version: Could not collect\r\n> \r\n> Python version: 3.7 (64-bit runtime)\r\n> Is CUDA available: True\r\n> CUDA runtime version: 10.2.89\r\n> GPU models and configuration: GPU 0: GeForce GTX 1050\r\n> Nvidia driver version: 457.63\r\n> cuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\bin\\cudnn64_7.dll\r\n> HIP runtime version: N/A\r\n> MIOpen runtime version: N/A\r\n> \r\n> Versions of relevant libraries:\r\n> [pip3] datasets==1.6.2\r\n> [pip3] transformers==4.5.1\r\n> [pip3] numpy==1.19.1\r\n> [pip3] numpydoc==1.1.0\r\n> [pip3] pytorch-metric-learning==0.9.98\r\n> [pip3] torch==1.8.1\r\n> [pip3] torchaudio==0.8.1\r\n> [pip3] torchvision==0.2.2\r\n> [conda] blas 2.16 mkl conda-forge\r\n> [conda] cudatoolkit 10.2.89 hb195166_8 conda-forge\r\n> [conda] libblas 3.8.0 16_mkl conda-forge\r\n> [conda] libcblas 3.8.0 16_mkl conda-forge\r\n> [conda] liblapack 3.8.0 16_mkl conda-forge\r\n> [conda] liblapacke 3.8.0 16_mkl conda-forge\r\n> [conda] mkl 2020.1 216\r\n> [conda] numpy 1.19.1 py37hae9e721_0 conda-forge\r\n> [conda] numpydoc 1.1.0 py_1 conda-forge\r\n> [conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7_0 pytorch\r\n> [conda] pytorch-metric-learning 0.9.98 pyh39e3cac_0 metric-learning\r\n> [conda] torchaudio 0.8.1 py37 pytorch\r\n> [conda] torchvision 0.2.2 py_3 pytorch", "Hi @BenoitDalFerro how do your load your dataset ?", "Hi @lhoestq thanks for the quick turn-around, actually the plain vanilla way, without an particular knack or fashion, I tried to look into the documentation for some alternative but couldn't find any\r\n\r\n> dataset = load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir))", "I’m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M/s.", "@tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up iterator\r\n\r\n@lhoestq perhaps solution to detect bug location in code is to track its signature via HD read usage monitoring, option is to add tracking decorator on top each function and sequentially close all hatches from top to 
bottom, suggest PySmart https://pypi.org/project/pySMART/ a Smartmontools implementation", "I wasn't able to reproduce this on a toy dataset of around 300GB:\r\n\r\n```python\r\nimport datasets as ds\r\n\r\ns = ds.load_dataset(\"squad\", split=\"train\")\r\ns4000 = ds.concatenate_datasets([s] * 4000)\r\nprint(ds.utils.size_str(s4000.data.nbytes)) # '295.48 GiB'\r\n\r\ns4000.save_to_disk(\"tmp/squad_4000\")\r\n```\r\n\r\n```python\r\nimport psutil\r\nimport time\r\nfrom datasets import load_from_disk\r\n\r\ndisk = \"disk0\" # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\n\r\ns4000_reloaded = load_from_disk(\"tmp/squad_4000\")\r\n\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\n\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```\r\n\r\nCould you run this on your side and tell me if how much time it takes ? Please run this when your machine is idle so that other processes don't interfere.\r\n\r\nI got these results on my macbook pro on datasets 1.6.2", "@lhoestq thanks, test running as we speak, bear with me", "Just tried on google colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory mapped file. This might come a virtual disk management issue. I'm trying to see if I can still speed it up on colab.", "@lhoestq what is Google Colab's HD read speed, is it possible to introspect incl. make like SSD or HDD ?", "@lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds.", "Okay, here’s the ouput:\r\nBlocks read 158396\r\nElapsed time: 529.10s\r\n\r\nAlso using datasets 1.6.2. Do you have any ideas, how to pinpoint the problem?", "@lhoestq, @tsproisl mmmh still writing on my side about 1h to go, thinking on it are your large datasets all monoblock unsharded ? mine is 335 times 1.18Gb shards.", "The 529.10s was a bit too optimistic. 
I cancelled the reading process once before running it completely, therefore the harddrive cache probably did its work.\r\n\r\nHere are three consecutive runs\r\nFirst run (freshly written to disk):\r\nBlocks read 309702\r\nElapsed time: 1267.74s\r\nSecond run (immediately after):\r\nBlocks read 113944\r\nElapsed time: 417.55s\r\nThird run (immediately after):\r\nBlocks read 42518\r\nElapsed time: 199.19s\r\n", "@lhoestq \r\nFirst test\r\n> elapsed time: 11219.05s\r\n\r\nSecond test running bear with me, for Windows users slight trick to modify original \"disk0\" string:\r\n\r\nFirst find physical unit relevant key in dictionnary\r\n```\r\nimport psutil\r\npsutil.disk_io_counters(perdisk=True)\r\n```\r\n\r\n> {'PhysicalDrive0': sdiskio(read_count=18453286, write_count=4075333, read_bytes=479546467840, write_bytes=161590275072, read_time=20659, write_time=2464),\r\n> 'PhysicalDrive1': sdiskio(read_count=1495778, write_count=388781, read_bytes=548628622336, write_bytes=318234849280, read_time=426066, write_time=19085)}\r\n\r\nIn my case it's _PhysicalDrive1_\r\n\r\nThen insert relevant key's string as _disk_ variable\r\n\r\n```\r\npsutil.disk_io_counters()\r\ndisk = 'PhysicalDrive1' # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\ns4000_reloaded = load_from_disk(\"your path here\")\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```", "@lhoestq\r\nSecond test\r\n\r\n> Blocks read 1265609\r\n> Elapsed time: 11216.55s", "@lhoestq any luck ?", "Unfortunately no. Thanks for running the benchmark though, it shows that you machine does a lot of read operations. This is not expected: in other machines it does almost no read operations which enables a very fast loading.\r\n\r\nI did some tests on google colab and have the same issue. The first time the dataset arrow file is memory mapped takes always a lot of time (time seems linear with respect to the dataset size). Reloading the dataset is then instantaneous since the arrow file has already been memory mapped.\r\n\r\nI also tried using the Arrow IPC file format (see #1933) instead of the current streaming format that we use but it didn't help.\r\n\r\nMemory mapping is handled by the OS and depends on the disk you're using, so I'm not sure we can do much about it. I'll continue to investigate anyway, because I still don't know why in some cases it would go through the entire file (high `Blocks read ` as in your tests) and in other cases it would do almost no reading.", "@lhoestq thanks for the effort, let's stay in touch", "Just want to say that I am seeing the same issue. Dataset size if 268GB and it takes **3 hours** to load `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre` ", "Hi @lhoestq, confirmed Windows issue, exact same code running on Linux OS total loading time about 3 minutes.", "Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue." ]
2021-04-23T08:18:20
2024-01-26T15:10:28
2024-01-26T15:10:28
NONE
null
null
null
Hi, I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122). However, the problem seems to persist. Here is the profiled results: 1) Running with 60GB ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 517.96 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ model_backward | 0.26144 |100 | 26.144 | 5.0475 | model_forward | 0.11123 |100 | 11.123 | 2.1474 | get_train_batch | 0.097121 |100 | 9.7121 | 1.8751 | ``` 3) Running with 600GB, datasets==1.6.0 ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 4563.2 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ get_train_batch | 5.1279 |100 | 512.79 | 11.237 | model_backward | 4.8394 |100 | 483.94 | 10.605 | model_forward | 0.12162 |100 | 12.162 | 0.26652 | ``` I see that `get_train_batch` lags when data is large. Could this be related to different issues? I would be happy to provide necessary information to investigate.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2252/reactions", "total_count": 9, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 4 }
https://api.github.com/repos/huggingface/datasets/issues/2252/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2251/comments
https://api.github.com/repos/huggingface/datasets/issues/2251/events
https://github.com/huggingface/datasets/issues/2251
865,848,705
MDU6SXNzdWU4NjU4NDg3MDU=
2,251
While running run_qa.py, ran into a ValueError
{ "login": "nlee0212", "id": 44570724, "node_id": "MDQ6VXNlcjQ0NTcwNzI0", "avatar_url": "https://avatars.githubusercontent.com/u/44570724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nlee0212", "html_url": "https://github.com/nlee0212", "followers_url": "https://api.github.com/users/nlee0212/followers", "following_url": "https://api.github.com/users/nlee0212/following{/other_user}", "gists_url": "https://api.github.com/users/nlee0212/gists{/gist_id}", "starred_url": "https://api.github.com/users/nlee0212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nlee0212/subscriptions", "organizations_url": "https://api.github.com/users/nlee0212/orgs", "repos_url": "https://api.github.com/users/nlee0212/repos", "events_url": "https://api.github.com/users/nlee0212/events{/privacy}", "received_events_url": "https://api.github.com/users/nlee0212/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2021-04-23T07:51:03
2021-04-23T07:51:03
null
NONE
null
null
null
command: python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/ error: ValueError: External features info don't match the dataset: Got {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)} with type struct<answer: struct<text: string, answer_start: int32, html_answer_start: int32>, context: string, id: string, question: string, raw_html: string, title: string, url: string> but expected something like {'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)} with type struct<answer: struct<answer_start: int32, html_answer_start: int32, text: string>, context: string, id: string, question: string, raw_html: string, title: string, url: string> I didn't encounter this error 4 hours ago. any solutions for this kind of issue? looks like gained dataset format refers to 'Data Fields', while expected refers to 'Data Instances'.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2251/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2250/comments
https://api.github.com/repos/huggingface/datasets/issues/2250/events
https://github.com/huggingface/datasets/issues/2250
865,402,449
MDU6SXNzdWU4NjU0MDI0NDk=
2,250
Some issues loading local txt files as a Dataset for run_mlm.py
{ "login": "alighofrani95", "id": 14968123, "node_id": "MDQ6VXNlcjE0OTY4MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alighofrani95", "html_url": "https://github.com/alighofrani95", "followers_url": "https://api.github.com/users/alighofrani95/followers", "following_url": "https://api.github.com/users/alighofrani95/following{/other_user}", "gists_url": "https://api.github.com/users/alighofrani95/gists{/gist_id}", "starred_url": "https://api.github.com/users/alighofrani95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alighofrani95/subscriptions", "organizations_url": "https://api.github.com/users/alighofrani95/orgs", "repos_url": "https://api.github.com/users/alighofrani95/repos", "events_url": "https://api.github.com/users/alighofrani95/events{/privacy}", "received_events_url": "https://api.github.com/users/alighofrani95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\n1. try\r\n ```python\r\n dataset = load_dataset(\"text\", data_files={\"train\": [\"a1.txt\", \"b1.txt\"], \"test\": [\"c1.txt\"]})\r\n ```\r\n instead.\r\n\r\n Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the \r\n newest version (`pip install datasets --upgrade`).\r\n\r\n2. https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/examples/pytorch/language-modeling/run_mlm.py#L258-L259\r\nThis is the original code. You'll have to modify the example source to work with multiple train files. To make it easier, let's say \"|\" will act as a delimiter between files:\r\n ```python\r\n if data_args.train_file is not None:\r\n data_files[\"train\"] = data_args.train_file.split(\"|\") # + .split(\"|\")\r\n ```\r\n Then call the script as follows (**dataset_name must be None**):\r\n ```bash\r\n python run_mlm.py [... other args] --train_file a1.txt|b1.txt\r\n ```", "i meet the same error with datasets 1.11.0, is there any insight about this?" ]
2021-04-22T19:39:13
2022-03-30T08:29:47
2022-03-30T08:29:47
NONE
null
null
null
![image](https://user-images.githubusercontent.com/14968123/115773877-18cef300-a3c6-11eb-8e58-a9cbfd1001ec.png) first of all, I tried to load 3 .txt files as a dataset (sure that the directory and permission is OK.), I face with the below error. > FileNotFoundError: [Errno 2] No such file or directory: 'c' by removing one of the training .txt files It's fixed and although if I put all file as training it's ok ![image](https://user-images.githubusercontent.com/14968123/115774207-867b1f00-a3c6-11eb-953b-905cfb112d25.png) ![image](https://user-images.githubusercontent.com/14968123/115774264-9b57b280-a3c6-11eb-9f36-7b109f0e5a31.png) after this, my question is how could I use this defined Dataset for run_mlm.py for from scratch pretraining. by using --train_file path_to_train_file just can use one .txt , .csv or, .json file. I tried to set my defined Dataset as --dataset_name but the below issue occurs. > Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py > During handling of the above exception, another exception occurred: > Traceback (most recent call last): File "run_mlm.py", line 486, in <module> main() File "run_mlm.py", line 242, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module combined_path, github_file_path FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py. The file is also not present on the master branch on github.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2250/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2243/comments
https://api.github.com/repos/huggingface/datasets/issues/2243/events
https://github.com/huggingface/datasets/issues/2243
862,909,389
MDU6SXNzdWU4NjI5MDkzODk=
2,243
Map is slow and processes batches one after another
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.", "Hi @albertvillanova, thanks for the reply. I just tried the new version and the problem still persists. \r\n\r\nDo I need to rebuild the saved dataset (which I load from disk) with the 1.6.0 version of datasets? My script loads this dataset and creates new datasets from it. I tried it without rebuilding.\r\n\r\nSee this short video of what happens. It does not create all processes at the same time:\r\n\r\nhttps://user-images.githubusercontent.com/2743060/115720139-0da3a500-a37d-11eb-833a-9bbacc70868d.mp4\r\n\r\n", "There can be a bit of delay between the creations of the processes but this delay should be the same for both your `map` calls. We should look into this.\r\nAlso if you hav some code that reproduces this issue on google colab that'd be really useful !\r\n\r\nRegarding the speed differences:\r\nThis looks like a similar issue as https://github.com/huggingface/datasets/issues/1992 who is experiencing the same speed differences between processes.\r\nThis is a known bug that we are investigating. As of now I've never managed to reproduce it on my machine so it's pretty hard for me to find where this issue comes from.\r\n", "Upgrade to 1.6.1 solved my problem somehow. I did not change any of my code, but now it starts all processes around the same time.", "Nice ! I'm glad this works now.\r\nClosing for now, but feel free to re-open if you experience this issue again." ]
2021-04-20T14:58:20
2021-05-03T17:54:33
2021-05-03T17:54:32
NONE
null
null
null
## Describe the bug I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. Thats why I can't give exact steps to reproduce, I'm sorry. I process a large dataset in a two step process. I first call map on a dataset I load from disk and create a new dataset from it. This works like expected and `map` uses all workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow and starting only one or two process at a time. Number of processes is the same for both steps. pseudo code: ```python ds = datasets.load_from_disk("path") new_dataset = ds.map(work, batched=True, ...) # fast uses all processes final_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another ``` ## Expected results Second stage should be as fast as the first stage. ## Versions Paste the output of the following code: - Datasets: 1.5.0 - Python: 3.8.8 (default, Feb 24 2021, 21:46:12) - Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10 Do you guys have any idea? Thanks a lot!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2243/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2242/comments
https://api.github.com/repos/huggingface/datasets/issues/2242/events
https://github.com/huggingface/datasets/issues/2242
862,870,205
MDU6SXNzdWU4NjI4NzAyMDU=
2,242
Link to datasets viewer on Quick Tour page returns "502 Bad Gateway"
{ "login": "martavillegas", "id": 6735707, "node_id": "MDQ6VXNlcjY3MzU3MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/6735707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/martavillegas", "html_url": "https://github.com/martavillegas", "followers_url": "https://api.github.com/users/martavillegas/followers", "following_url": "https://api.github.com/users/martavillegas/following{/other_user}", "gists_url": "https://api.github.com/users/martavillegas/gists{/gist_id}", "starred_url": "https://api.github.com/users/martavillegas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/martavillegas/subscriptions", "organizations_url": "https://api.github.com/users/martavillegas/orgs", "repos_url": "https://api.github.com/users/martavillegas/repos", "events_url": "https://api.github.com/users/martavillegas/events{/privacy}", "received_events_url": "https://api.github.com/users/martavillegas/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This should be fixed now!\r\n\r\ncc @srush " ]
2021-04-20T14:19:51
2021-04-20T15:02:45
2021-04-20T15:02:45
NONE
null
null
null
Link to the datasets viewer (https://huggingface.co/datasets/viewer/) on the Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway". The same error occurs with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2242/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2239/comments
https://api.github.com/repos/huggingface/datasets/issues/2239/events
https://github.com/huggingface/datasets/issues/2239
861,904,306
MDU6SXNzdWU4NjE5MDQzMDY=
2,239
Error loading wikihow dataset
{ "login": "odellus", "id": 4686956, "node_id": "MDQ6VXNlcjQ2ODY5NTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/odellus", "html_url": "https://github.com/odellus", "followers_url": "https://api.github.com/users/odellus/followers", "following_url": "https://api.github.com/users/odellus/following{/other_user}", "gists_url": "https://api.github.com/users/odellus/gists{/gist_id}", "starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/odellus/subscriptions", "organizations_url": "https://api.github.com/users/odellus/orgs", "repos_url": "https://api.github.com/users/odellus/repos", "events_url": "https://api.github.com/users/odellus/events{/privacy}", "received_events_url": "https://api.github.com/users/odellus/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @odellus, thanks for reporting.\r\n\r\nThe `wikihow` dataset has 2 versions:\r\n- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.\r\n- `sep`: Consisting of each paragraph and its summary.\r\n\r\nTherefore, in order to load it, you have to specify which version you would like, for example:\r\n```python\r\ndataset = load_dataset('wikihow', 'all')\r\n```\r\n\r\nPlease, tell me if this solves your problem.", "Good call out. I did try that and that's when it told me to download the\ndataset. Don't believe I have tried it with local files. Will try first\nthing in the morning and get back to you.\n\nOn Mon, Apr 19, 2021, 11:17 PM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hi @odellus <https://github.com/odellus>, thanks for reporting.\n>\n> The wikihow dataset has 2 versions:\n>\n> - all: Consisting of the concatenation of all paragraphs as the\n> articles and the bold lines as the reference summaries.\n> - sep: Consisting of each paragraph and its summary.\n>\n> Therefore, in order to load it, you have to specify which version you\n> would like, for example:\n>\n> dataset = load_dataset('wikihow', 'all')\n>\n> Please, tell me if this solves your problem.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2239#issuecomment-823004146>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABDYI3HVRTBI2QT3BOG262DTJUL57ANCNFSM43GV5BZQ>\n> .\n>\n", "Hi @odellus, yes you are right.\r\n\r\nDue to the server where the `wikihow` dataset is hosted, the dataset can't be downloaded automatically by `huggingface` and you have to download it manually as you did.\r\n\r\nNevertheless, you have to specify which dataset version you would like to load anyway:\r\n```python\r\ndataset = load_dataset('wikihow', 'all', data_dir='./wikihow')\r\n```\r\nor\r\n```python\r\ndataset = load_dataset('wikihow', 'sep', data_dir='./wikihow')\r\n```\r\nI find that the instructions given by `huggingface` are not clear enough: I am going to fix this.\r\nPlease tell me if this eventually works for you.", "That was it. Thank you Albert!" ]
2021-04-19T21:02:31
2021-04-20T16:33:11
2021-04-20T16:33:11
CONTRIBUTOR
null
null
null
## Describe the bug When attempting to load wikihow into a dataset with ```python from datasets import load_dataset dataset = load_dataset('wikihow', data_dir='./wikihow') ``` I get the message: ``` AttributeError: 'BuilderConfig' object has no attribute 'filename' ``` at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2). ## Steps to reproduce the bug I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use ```python from datasets import load_dataset dataset = load_dataset('wikihow') ``` to load the dataset. I do so and I get the message ``` AssertionError: The dataset wikihow with config all requires manual data. Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset. You need to download the following two files manually: 1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv 2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv The <path/to/folder> can e.g. be "~/manual_wikihow_data". Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`. . Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>') ``` So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory. Then I run ```python from datasets import load_dataset dataset = load_dataset('wikihow', data_dir='./wikihow') ``` that's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2) ## Expected results I expected it to load the downloaded files into a dataset. ## Actual results ```python Using custom data configuration default-data_dir=.%2Fwikihow Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2... 
--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-9-5e4d40142f30> in <module> ----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow') ~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 745 try_from_hf_gcs=try_from_hf_gcs, 746 base_path=base_path,--> 747 use_auth_token=use_auth_token, 748 ) 749 ~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 577 if not downloaded_from_gcs: 578 self._download_and_prepare( --> 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) 581 # Sync info ~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 632 split_dict = SplitDict(dataset_name=self.name) 633 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 634 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 635 636 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager) 132 133 path_to_manual_file = os.path.join( --> 134 os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename 135 ) 136 AttributeError: 'BuilderConfig' object has no attribute 'filename' ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` ``` - Datasets: 1.5.0 - Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] - Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2239/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2237/comments
https://api.github.com/repos/huggingface/datasets/issues/2237/events
https://github.com/huggingface/datasets/issues/2237
861,427,439
MDU6SXNzdWU4NjE0Mjc0Mzk=
2,237
Update Dataset.dataset_size after transforming with map
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!" ]
2021-04-19T15:19:38
2021-04-20T14:22:05
null
MEMBER
null
null
null
After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated.
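A minimal sketch of how one might observe this behaviour, using `squad` purely for illustration (the exact numbers are not the point):

```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")
print(ds.dataset_size)      # size recorded in the dataset info

ds2 = ds.map(lambda x: x)   # identity transform, enough to produce a new cache file
print(ds2.dataset_size)     # still reports the original value; not recomputed after map
print(ds2.data.nbytes)      # the underlying Arrow table size, by contrast, reflects the new data
```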
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2237/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2236/comments
https://api.github.com/repos/huggingface/datasets/issues/2236/events
https://github.com/huggingface/datasets/issues/2236
861,388,145
MDU6SXNzdWU4NjEzODgxNDU=
2,236
Request to add StrategyQA dataset
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
2021-04-19T14:46:26
2021-04-19T14:46:26
null
NONE
null
null
null
## Request to add StrategyQA dataset - **Name:** StrategyQA - **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa) - **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf) - **Data:** [here](https://allenai.org/data/strategyqa) - **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2236/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2236/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2230/comments
https://api.github.com/repos/huggingface/datasets/issues/2230/events
https://github.com/huggingface/datasets/issues/2230
859,817,159
MDU6SXNzdWU4NTk4MTcxNTk=
2,230
Keys yielded while generating a dataset are not being checked
{ "login": "NikhilBartwal", "id": 42388668, "node_id": "MDQ6VXNlcjQyMzg4NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikhilBartwal", "html_url": "https://github.com/NikhilBartwal", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?", "Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:\r\n\r\n1. First, we would have to update the `ArrowWriter.write()` function here:\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L296\r\nso that it accepts an additional argument `key` which would be appended along with the example here after hashing.\r\n\r\n2. Then, we would need to create a `Hasher` class which will take the key as its input and return a hash for it (We might need to use some hash salt which can be passed to the ArrowWriter.writer() with value equal to the `split_name` for differentiating between same keys of different splits)\r\n\r\n We can use the `hashlib.md5` function for hashing which will conert each key to its byte code before hashing (depending on the data type of the key) **Thus, the `key` type will be verified here**.\r\n\r\n3. Now, we would have to edit this\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L257\r\n so that it iterates over each `(hash, example)` pair (sorted according to hash). We can then simply **check whether each hash is different from the previous hash** (since they will be sorted)\r\n\r\nHowever, since I'm not very familiar with how the data is being written on disk in the form of a table, I might need some guidance for Step 3. \r\nPlease let me know your thought on this. Thanks!", "Interesting !\r\nWe keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.\r\nOther that that, I really like the idea of checking for keys duplicates in `write_examples_on_file` :)\r\n\r\nThis looks like a great plan ! Feel free to open a PR and ping me if you have questions or if I can help\r\n", "@lhoestq I'm glad you liked the idea!\r\nI think that since the keys will be unique and deterministic in the nature themselves, so even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated). \r\nAnd since, we are not dealing with time series data (which would require the data to be in original order), I don't think the order of examples would matter much, as long as the order is deterministic and constant for all users.\r\n\r\nI think that this is also what was originally envisioned as mentioned in the documentation here:\r\nhttps://github.com/huggingface/datasets/blob/6775661b19d2ec339784f3d84553a3996a1d86c3/src/datasets/builder.py#L973\r\n\r\nAlso, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\nLet me know your thoughts in it! I would be opening a PR soon :)", "When users load their own data, they expect the order to stay the same. 
I think that shuffling the data can make things inconvenient.\r\n\r\n> I think that this is also what was originally envisioned as mentioned in the documentation here:\r\n\r\nThis part was originally developed by tensorflow datasets, and tensorflow datasets indeed does the shuffling. However in this library this is probably not what we want in the general case. But if @albertvillanova and @thomwolf you have opinions on this please let us know.\r\n\r\n> Also, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\n\r\nMaybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch, but there might still be duplicates across batches. For 10 000 examples the hashes can just be stored as a python `set`.\r\n\r\nOtherwise if we want full deduplication, we need an extra tool that allows to temporarily save and query hashes that may need to use disk space rather than memory.", "Yes I think we want to keep the original order by default and only shuffle when the user ask for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally.", "Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.\r\n\r\nIn my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the same hash on every system) so that the same dataset is generated for each user, irrespective of the order the examples are yielded by the dataset builder on different user systems.\r\n\r\nOtherwise, if we are not shuffling, then while yielding and writing the data, after getting the key and hashing it for an example, I can't quite see the use of the hash or the key. The hash will simply be generated for each example but not actually used anywhere?\r\n\r\n@lhoestq @thomwolf It would be great if you could explain a bit more about the usage of keys. Thanks!\r\n", "In `datasets` the keys are currently ignored.\r\nFor shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.\r\nWe can use it to:\r\n1. detect duplicates\r\n2. verify that the generation order is indeed deterministic\r\n3. maybe more ?", "Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.\r\n\r\n> Maybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch,\r\n\r\nI think that checking for duplicates in every batch independently would be sufficient as the probability of collisions using something like `MD5` is very low. I would be opening a draft PR soon. It would be great to have your guidance. Thanks!" ]
2021-04-16T13:29:47
2021-05-10T17:31:21
2021-05-10T17:31:21
CONTRIBUTOR
null
null
null
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not. Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Even after having a tuple as key, the dataset is generated without any warning. Also, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example): ``` >>> import datasets >>> nik = datasets.load_dataset('anli') Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299... 0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''} 2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll 1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''} 1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''} 1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''} 1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. 
Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''} ``` Here also, the dataset was generated successfully even though it had the same keys, without any warning. The reason appears to stem from here: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988 Here, although it has access to every key, it is not being checked and the example is written directly: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992 I would like to take this issue if you allow me. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2230/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
https://api.github.com/repos/huggingface/datasets/issues/2229/events
https://github.com/huggingface/datasets/issues/2229
859,810,602
MDU6SXNzdWU4NTk4MTA2MDI=
2,229
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
{ "login": "NikhilBartwal", "id": 42388668, "node_id": "MDQ6VXNlcjQyMzg4NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikhilBartwal", "html_url": "https://github.com/NikhilBartwal", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)", "@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!" ]
2021-04-16T13:21:53
2021-04-19T08:56:42
2021-04-19T08:56:42
CONTRIBUTOR
null
null
null
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code while yielding examples, which yields a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Since community datasets in Tensorflow Datasets also use HF datasets, this causes a tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2226/comments
https://api.github.com/repos/huggingface/datasets/issues/2226/events
https://github.com/huggingface/datasets/issues/2226
859,720,302
MDU6SXNzdWU4NTk3MjAzMDI=
2,226
Batched map fails when removing all columns
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n\r\n# crashes\r\nds.map(\r\n lambda x: {\"a\": list(range(20))},\r\n remove_columns=ds.column_names,\r\n load_from_cache_file=False,\r\n num_proc=1,\r\n batched=True,\r\n)\r\n```", "Thanks for reporting and for providing this code to reproduce the issue, this is really helpful !", "I merged a fix, it should work on `master` now :)\r\nWe'll do a new release soon !" ]
2021-04-16T11:17:01
2022-10-05T17:32:15
2022-10-05T17:32:15
NONE
null
null
null
Hi @lhoestq , I'm hijacking this issue, because I'm currently trying to do the approach you recommend: > Currently the optimal setup for single-column computations is probably to do something like > > ```python > result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names) > ``` Here is my code: (see edit, in which I added a simplified version ``` This is the error: ```bash pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000 ``` I wonder why this error occurs, when I delete every column? Can you give me a hint? ### Edit: I preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the complete dataset and print every sample before calling map. There seems to be no other problem with the dataset. I tried to simplify the code that crashes: ```python # works log.debug(dataset.column_names) log.debug(dataset) for i, sample in enumerate(dataset): log.debug(i, sample) # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, ) ``` ``` pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000 ``` Edit2: May this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error: ```python # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, features=datasets.Features( { "a": datasets.Sequence(datasets.Value("int32")) } ) ) ``` ``` File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single writer.write_batch(batch) File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch col_type = schema.field(col).type if schema is not None else None File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field KeyError: 'Column tokens does not exist in schema' ``` _Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2226/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2224/comments
https://api.github.com/repos/huggingface/datasets/issues/2224/events
https://github.com/huggingface/datasets/issues/2224
857,983,361
MDU6SXNzdWU4NTc5ODMzNjE=
2,224
Raise error if Windows max path length is not disabled
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2021-04-14T14:57:20
2021-04-14T14:59:13
null
MEMBER
null
null
null
On startup, raise an error if Windows max path length is not disabled; ask the user to disable it. Linked to discussion in #2220.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2224/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2218/comments
https://api.github.com/repos/huggingface/datasets/issues/2218/events
https://github.com/huggingface/datasets/issues/2218
857,238,435
MDU6SXNzdWU4NTcyMzg0MzU=
2,218
Duplicates in the LAMA dataset
{ "login": "amarasovic", "id": 7276193, "node_id": "MDQ6VXNlcjcyNzYxOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amarasovic", "html_url": "https://github.com/amarasovic", "followers_url": "https://api.github.com/users/amarasovic/followers", "following_url": "https://api.github.com/users/amarasovic/following{/other_user}", "gists_url": "https://api.github.com/users/amarasovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/amarasovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarasovic/subscriptions", "organizations_url": "https://api.github.com/users/amarasovic/orgs", "repos_url": "https://api.github.com/users/amarasovic/repos", "events_url": "https://api.github.com/users/amarasovic/events{/privacy}", "received_events_url": "https://api.github.com/users/amarasovic/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', split='train')\r\n>>> dataset = Dataset.from_pandas(dataset.to_pandas().drop_duplicates(subset=...)) # specify a subset of the columns to consider in a list or use all of the columns if None\r\n```\r\n\r\nNote that the same can be achieved with the `Dataset.filter` method but this would requrie some extra work (filter function, speed?).", "Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper/repository? In other words, will I get the same result if evaluate on the de-duplicated dataset loaded from HF's `datasets` as the results I'd get if I use the original data format and data processing script in https://github.com/facebookresearch/LAMA? ", "So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation\r\n\r\nIf I understand correctly, to reproduce reported results, you would have to aggregate predictions for the several pieces of evidence provided for each relation (each unique `uuid`), but the original authors will know better \r\n\r\ncc @fabiopetroni " ]
2021-04-13T18:59:49
2021-04-14T21:42:27
null
NONE
null
null
null
I observed duplicates in the LAMA probing dataset, see a minimal code below. ``` >>> import datasets >>> dataset = datasets.load_dataset('lama') No config specified, defaulting to: lama/trex Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc) >>> train_dataset = dataset['train'] >>> train_dataset[0] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} >>> train_dataset[1] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} ``` I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicated comes from: ``` {"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]} ``` What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2218/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2214/comments
https://api.github.com/repos/huggingface/datasets/issues/2214/events
https://github.com/huggingface/datasets/issues/2214
856,333,657
MDU6SXNzdWU4NTYzMzM2NTc=
2,214
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
{ "login": "nsaphra", "id": 414788, "node_id": "MDQ6VXNlcjQxNDc4OA==", "avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nsaphra", "html_url": "https://github.com/nsaphra", "followers_url": "https://api.github.com/users/nsaphra/followers", "following_url": "https://api.github.com/users/nsaphra/following{/other_user}", "gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}", "starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions", "organizations_url": "https://api.github.com/users/nsaphra/orgs", "repos_url": "https://api.github.com/users/nsaphra/repos", "events_url": "https://api.github.com/users/nsaphra/events{/privacy}", "received_events_url": "https://api.github.com/users/nsaphra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```", "There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.\r\n\r\nYou can try setting the env var `HF_SCRIPTS_VERSION=\"1.2.1\"` as a workaround. Let me know if that helps.", "I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip installs in the same environments but this will have to do for now, until 1.5.0 is made available through conda.", "Yep, seems to have fixed things! The conda package could really do with an update. Thanks!" ]
2021-04-12T20:26:01
2021-04-23T15:20:02
2021-04-23T15:20:02
NONE
null
null
null
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package. ```python >>> from datasets import load_metric >>> metric = load_metric("glue", "sst2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module> @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2214/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2212/comments
https://api.github.com/repos/huggingface/datasets/issues/2212/events
https://github.com/huggingface/datasets/issues/2212
855,999,133
MDU6SXNzdWU4NTU5OTkxMzM=
2,212
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
{ "login": "hanss0n", "id": 21348833, "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hanss0n", "html_url": "https://github.com/hanss0n", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "repos_url": "https://api.github.com/users/hanss0n/repos", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available", "I saw this on their website when we request to download the dataset:\r\n![image](https://user-images.githubusercontent.com/19718818/114879600-fa458680-9e1e-11eb-9e05-f0963d68ff0f.png)\r\n\r\nCan we still request them link for the dataset and make a PR? @lhoestq @yjernite ", "I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon !", "They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ...", "The script has been adopted to support manual download from the website, so I'm closing this issue." ]
2021-04-12T13:49:56
2023-10-03T16:09:19
2023-10-03T16:09:18
NONE
null
null
null
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running: ```Python fquad = load_dataset("fquad") ``` which produces the following error: ``` Using custom data configuration default Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-48-a2721797e23b> in <module>() ----> 1 fquad = load_dataset("fquad") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 616 raise ConnectionError("Couldn't reach {}".format(url)) 617 618 # Try a second time ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip ``` Does anyone know why that is and how to fix it?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2212/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2211/comments
https://api.github.com/repos/huggingface/datasets/issues/2211/events
https://github.com/huggingface/datasets/issues/2211
855,988,410
MDU6SXNzdWU4NTU5ODg0MTA=
2,211
Getting checksum error when trying to load lc_quad dataset
{ "login": "hanss0n", "id": 21348833, "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hanss0n", "html_url": "https://github.com/hanss0n", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "repos_url": "https://api.github.com/users/hanss0n/repos", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n", "Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you! " ]
2021-04-12T13:38:58
2021-04-14T13:42:25
2021-04-14T13:42:25
NONE
null
null
null
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running: ```Python lc_quad = load_dataset("lc_quad") ``` which is giving me the following error: ``` Using custom data configuration default Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7... --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-42-404ace83f73c> in <module>() ----> 1 lc_quad = load_dataset("lc_quad") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip'] ``` Does anyone know why this could be and how I fix it?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2211/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2210/comments
https://api.github.com/repos/huggingface/datasets/issues/2210/events
https://github.com/huggingface/datasets/issues/2210
855,709,400
MDU6SXNzdWU4NTU3MDk0MDA=
2,210
dataloading slow when using HUGE dataset
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.", "Hi, thank you for your answer. I did not realize that my issue stems from the same problem. " ]
2021-04-12T08:33:02
2021-04-13T02:03:05
2021-04-13T02:03:05
NONE
null
null
null
Hi, When I use datasets with 600GB data, the dataloading speed increases significantly. I am experimenting with two datasets, and one is about 60GB and the other 600GB. Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle ddp training. When looking at the pytorch-lightning supported profile of two different runs, I see that fetching a batch(`get_train_batch`) consumes an unreasonable amount of time when data is large. What could be the cause? * 60GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 200.33 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 71.994 |1 | 71.994 | 35.937 | run_training_batch | 0.64373 |100 | 64.373 | 32.133 | optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 | training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 | model_backward | 0.37552 |100 | 37.552 | 18.745 | model_forward | 0.22813 |100 | 22.813 | 11.387 | training_step | 0.22759 |100 | 22.759 | 11.361 | get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 | ``` * 600GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 3285.6 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 | run_training_batch | 7.2596 |100 | 725.96 | 22.095 | optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 | training_step_and_backward | 7.223 |100 | 722.3 | 21.984 | model_backward | 6.9662 |100 | 696.62 | 21.202 | get_train_batch | 6.322 |100 | 632.2 | 19.241 | model_forward | 0.24902 |100 | 24.902 | 0.75789 | training_step | 0.2485 |100 | 24.85 | 0.75633 | ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2210/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2207/comments
https://api.github.com/repos/huggingface/datasets/issues/2207/events
https://github.com/huggingface/datasets/issues/2207
855,267,383
MDU6SXNzdWU4NTUyNjczODM=
2,207
making labels consistent across the datasets
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features['label'].int2str(i)`.\r\n", "Hi! You can also easily reorder the label with the [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/en/process#align) method." ]
2021-04-11T10:03:56
2022-06-01T16:23:08
2022-06-01T16:21:10
NONE
null
null
null
Hi, for accessing the labels one can type ``` >>> a.features['label'] ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None) ``` The labels, however, are sometimes not consistent with the actual labels: for instance in the case of XNLI, the actual labels are 0, 1, 2, but if one tries to access them as above they are entailment, neutral, contradiction. It would be great to have the labels consistent. Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2207/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2206/comments
https://api.github.com/repos/huggingface/datasets/issues/2206/events
https://github.com/huggingface/datasets/issues/2206
855,252,415
MDU6SXNzdWU4NTUyNTI0MTU=
2,206
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
{ "login": "yana-xuyan", "id": 38536635, "node_id": "MDQ6VXNlcjM4NTM2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yana-xuyan", "html_url": "https://github.com/yana-xuyan", "followers_url": "https://api.github.com/users/yana-xuyan/followers", "following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}", "gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions", "organizations_url": "https://api.github.com/users/yana-xuyan/orgs", "repos_url": "https://api.github.com/users/yana-xuyan/repos", "events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}", "received_events_url": "https://api.github.com/users/yana-xuyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?", "Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger.", "I'm facing same issue @mariosasko @albertvillanova \r\n\r\n```\r\nArrowInvalid: Integer value 50260 not in range: -128 to 127\r\n```\r\n\r\nTo reproduce:\r\n```python\r\nSPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']\r\nATTR_TO_SPECIAL_TOKEN = {\r\n 'bos_token': '<bos>', \r\n 'eos_token': '<eos>', \r\n 'pad_token': '<pad>',\r\n 'additional_special_tokens': ['<speaker1>', '<speaker2>']\r\n }\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=False)\r\nnum_added_tokens =tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)\r\nvocab_size = len(self.tokenizer.encoder) + num_added_tokens\r\nvocab =tokenizer.get_vocab()\r\n\r\npad_index = tokenizer.pad_token_id\r\neos_index = tokenizer.eos_token_id\r\nbos_index = tokenizer.bos_token_id\r\nspeaker1_index = vocab[\"<speaker1>\"]\r\nspeaker2_index = vocab[\"<speaker2>\"]\r\n```\r\n\r\n```python\r\ntokenizer.decode(['50260'])\r\n'<speaker1>'\r\n```", "@mariosasko \r\nI am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?\r\nWhat I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you?\r\n\r\n", "Hi @gregg-ADP, \r\n\r\nThis is still a bug.\r\n\r\nAs @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.\r\n\r\nIn the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:\r\n```python\r\nfrom datasets import *\r\n... # dataset init\r\nds.map(process_example, features=Features({\"special_tokens_mask\": Sequence(Value(\"int32\")), ... rest of the features}) \r\n```\r\n\r\ncc @lhoestq so he is also aware of this issue", "Thanks for the quick reply @mariosasko. What I did was to changed the optimizer to use int32 instead of int8. \r\nWhat you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:\r\n```\r\n File \"/Users/ccccc/PycharmProjects/aaaa-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'pos'\r\n\r\n```\r\nWhere 'pos' is the name of a new feature we added. 
Do you agree that your way of fixing the optimizer issue will not fix our new issue? If not, I will continue with this optimizer fix until we resolve our other issue.\r\n", "Hi @gwc4github,\r\n\r\nthe fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand let us know if it works for your use case! " ]
2021-04-11T08:40:09
2021-11-10T12:18:30
2021-11-10T12:04:28
NONE
null
null
null
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below: Traceback (most recent call last): File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single writer.write(example) File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write self.write_on_file() File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__ out = out.cast(pa.list_(self.optimized_int_type)) File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127 Do you have any idea about it?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2206/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2200/comments
https://api.github.com/repos/huggingface/datasets/issues/2200/events
https://github.com/huggingface/datasets/issues/2200
854,449,656
MDU6SXNzdWU4NTQ0NDk2NTY=
2,200
_prepare_split will overwrite DatasetBuilder.info.features
{ "login": "Gforky", "id": 4157614, "node_id": "MDQ6VXNlcjQxNTc2MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gforky", "html_url": "https://github.com/Gforky", "followers_url": "https://api.github.com/users/Gforky/followers", "following_url": "https://api.github.com/users/Gforky/following{/other_user}", "gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gforky/subscriptions", "organizations_url": "https://api.github.com/users/Gforky/orgs", "repos_url": "https://api.github.com/users/Gforky/repos", "events_url": "https://api.github.com/users/Gforky/events{/privacy}", "received_events_url": "https://api.github.com/users/Gforky/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201", "> Hi ! This might be related to #2153\r\n> \r\n> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\n> I'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n> \r\n> EDIT: opened #2201\r\n\r\nGlad to hear that! Thank you for your fix, I'm new to huggingface, it's a fantastic project 😁" ]
2021-04-09T11:47:13
2021-06-04T10:37:35
2021-06-04T10:37:35
NONE
null
null
null
Hi, here is my issue: I initialized a Csv dataset builder with specific features: ``` def get_dataset_features(data_args): features = {} if data_args.text_features: features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")}) if data_args.num_features: features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")}) if data_args.label_classes: features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(",")) else: features["label"] = hf_features.Value("float32") return hf_features.Features(features) datasets = load_dataset(extension, data_files=data_files, sep=data_args.delimiter, header=data_args.header, column_names=data_args.column_names.split(",") if data_args.column_names else None, features=get_dataset_features(data_args=data_args)) ``` The `features` is printed out as below before `builder_instance.as_dataset` is called: ``` {'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` is changed to: ``` {'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info's features will be overwritten by `ArrowWriter`'s `_features`. But `ArrowWriter` is initialized without passing `features`. So my concern is: is this overwrite necessary, or should there be an option to pass `features` in the `_prepare_split` function?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2200/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2196/comments
https://api.github.com/repos/huggingface/datasets/issues/2196/events
https://github.com/huggingface/datasets/issues/2196
854,126,114
MDU6SXNzdWU4NTQxMjYxMTQ=
2,196
`load_dataset` caches two arrow files?
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to load the dataset in RAM, even after many transforms", "Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G\tcache-ed205e500a7dc44c.arrow`\r\n\r\nTo my observation, both `load_dataset` and `map` creates `cache-*` files, and I wonder what the `cache-*` file from `load_dataset` is for (as I believe the same information is stored in `json-train.arrow`.", "This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. " ]
2021-04-09T03:49:19
2021-04-12T05:25:29
2021-04-12T05:25:29
NONE
null
null
null
Hi, I am using datasets to load large json file of 587G. I checked the cached folder and found that there are two arrow files created: * `cache-ed205e500a7dc44c.arrow` - 355G * `json-train.arrow` - 582G Why is the first file created? If I delete it, would I still be able to `load_from_disk`?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2196/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
https://api.github.com/repos/huggingface/datasets/issues/2195/events
https://github.com/huggingface/datasets/issues/2195
854,070,194
MDU6SXNzdWU4NTQwNzAxOTQ=
2,195
KeyError: '_indices_files' in `arrow_dataset.py`
{ "login": "samsontmr", "id": 15007950, "node_id": "MDQ6VXNlcjE1MDA3OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samsontmr", "html_url": "https://github.com/samsontmr", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "repos_url": "https://api.github.com/users/samsontmr/repos", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...", "Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues" ]
2021-04-09T01:37:12
2021-04-09T09:55:09
2021-04-09T09:54:39
NONE
null
null
null
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset. Trace: ``` Traceback (most recent call last): File "load_data.py", line 11, in <module> dataset = load_from_disk(SRC) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk if state["_indices_files"]: KeyError: '_indices_files' ``` I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634 May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2194/comments
https://api.github.com/repos/huggingface/datasets/issues/2194/events
https://github.com/huggingface/datasets/issues/2194
853,909,452
MDU6SXNzdWU4NTM5MDk0NTI=
2,194
py3.7: TypeError: can't pickle _LazyModule objects
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n" ]
2021-04-08T21:02:48
2021-04-09T16:56:50
2021-04-09T01:52:57
CONTRIBUTOR
null
null
null
While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install: ``` git clone https://github.com/huggingface/transformers cd transformers pip install -e .[testing] export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \ examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \ --per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \ --fp16 ``` ``` Traceback (most recent call last): File "examples/language-modeling/run_clm.py", line 453, in <module> main() File "examples/language-modeling/run_clm.py", line 336, in main load_from_cache_file=not data_args.overwrite_cache, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps dump(obj, file) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump Pickler(file, recurse=True).dump(obj) File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function obj=obj, File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: can't pickle _LazyModule objects ``` ``` $ python --version Python 3.7.4 $ python -m torch.utils.collect_env Collecting environment information... PyTorch version: 1.8.0.dev20210110+cu110 Is debug build: False CUDA used to build PyTorch: 11.0 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 ``` Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2194/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2193/comments
https://api.github.com/repos/huggingface/datasets/issues/2193/events
https://github.com/huggingface/datasets/issues/2193
853,725,707
MDU6SXNzdWU4NTM3MjU3MDc=
2,193
Filtering/mapping on one column is very slow
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arrow<->python conversions especially during writing.\r\n\r\nI'll let you know how it goes !", "@lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for single column operations?\r\n\r\nIf that's not a priority for the maintainers right now, I could try my hand at adding the feature, but I can't guarantee I would do a good job given my lack of familiarity with pyarrow.", "Currently the optimal setup for single-column computations is probably to do something like\r\n```python\r\nresult = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n```\r\nThis has two advantages:\r\n- input_columns=\"my_col\" allows to only read the column \"my_col\"\r\n- remove_columns=dataset.column_names makes `map` only keep the output of your function `f`, and it drops the other columns of the dataset instead of keeping them.\r\n\r\nLet me know if it improves speed on your side.\r\n\r\nYou can also get more speed by using `batched=True` and setting `num_proc=` for multiprocessing", "Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code: (see edit, in which I added a simplified version\r\n\r\n```\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs, when I delete every column? Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMay this be a problem with a schema I set when preprocessing the dataset before? 
I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```", "Hi ! Can you open a separate issue for that ?\r\nAlso if you could provide a google colab or a sample code to reproduce this issue that would be helpful.\r\nOn my side I was not able to reproduce this error.", "@lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and then discarding everything except the selected column, instead of exploiting the columnar data format to only load the selected column.\r\n\r\nMy code is like this:\r\n```\r\n self.dataset = self.dataset.sort('num_tokens')\r\n batch_dataset = self.dataset.map(\r\n\tcompute_uniform_sized_batches,\r\n\tbatched=True, batch_size=10_000, num_proc=10, input_columns=['num_tokens'],\r\n\tremove_columns=get_columns_all_equal(self.dataset),\r\n\twith_indices=True,\r\n\tfn_kwargs=dict(max_size=tokens_per_batch)\r\n)\r\nself.batches = {\r\n\tname: list(zip(split['start'], split['length']))\r\n\tfor name, split in batch_dataset.items()\r\n}\r\n```\r\nI find that the processes with higher IDs take significantly longer to complete, presumably because the dataset is sorted by article length and they're loading the entire article text into memory, instead of just the 'num_tokens' column.\r\n\r\nI should note that my batching procedure would work best if I just used `batch_size=None` and loaded the whole column into memory at once, but I found that this was intolerably slow and gave me no progress information, so I'm using the less than ideal `batch_size=10_000`.", "Hi @norabelrose ! I'm glad you managed to make this work on your side.\r\nRegarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.\r\n\r\nIn the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the batch as a python dict, then it updates it using the output of your mapping function, and finally it removes columns from `remove_columns`. 
Therefore for a moment some columns are loaded in memory even if you remove them or don't use them for your mapping function.\r\n\r\nIt would be nice to have a way to optimize memory for cases such as yours !", "@lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:\r\n- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`\r\n- change `Dataset._getitem()` so that it passes `self._data.drop(drop_columns)` to the `query_table()` function whenever `format_columns` is non-None and `output_all_columns` is False, instead of `self._data` itself", "Looks like a great direction :)\r\nNote that `query_table` doesn't bring data into memory. Only `format_table` does.\r\nAlso the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:\r\n```python\r\n# before the `map` main for loop\r\ninput_columns = input_columns if input_columns is not None else self.column_names\r\nif not self._output_all_columns:\r\n columns = [col for col in input_columns if self._format_columns is None or col in self._format_columns]\r\n input_dataset = self.with_format(\r\n type=self._format_type,\r\n columns=columns\r\n )\r\nelse:\r\n # in this case we could find a way to filter both format_columns and unformatted columns eventually\r\n input_dataset = self\r\n# then input_dataset can be used in the main for loop of `map`\r\n```\r\n\r\nEDIT: oh and regarding streaming format versus file format for arrow, we plan to start using the file format #1933 at one point (though I'm not sure if it would improve performance)", "Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpolation search was implemented, so it may have had more to do with the slow ChunkedArray slice implementation than anything else.\r\n\r\nIf `query_table` is I/O free then the fix may be as simple as just adding this to line 1779 of arrow_dataset.py:\r\n```python\r\n# Only load the columns we actually need\r\nif input_columns:\r\n stack.enter_context(self.formatted_as(\r\n self._format_type,\r\n columns=input_columns,\r\n output_all_columns=False,\r\n **self._format_kwargs\r\n ))\r\n```\r\nIt's not clear to me why the `[col for col in input_columns if self._format_columns is None or col in self._format_columns]` check would be necessary— it seems like either `input_columns` should simply temporarily override the `_format_columns` within the `map` operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within `map`, but maybe I'm just missing it.", "`query_table` simply slices/concatenates parts of the table. 
The actual data inside the table is not brought in memory.\r\nAlso I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.\r\n\r\n> It's not clear to me why the [col for col in input_columns if self._format_columns is None or col in self._format_columns] check would be necessary— it seems like either input_columns should simply temporarily override the _format_columns within the map operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within map, but maybe I'm just missing it.\r\n\r\nActually yes we can just use input_columns. And we do need to add a check to make sure there are not conflicts or this could lead to confusing errors.", "That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take somewhere on the order of 10 minutes." ]
2021-04-08T18:16:14
2021-04-26T16:13:59
2021-04-26T16:13:59
CONTRIBUTOR
null
null
null
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation. I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API. I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset. PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2193/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
https://api.github.com/repos/huggingface/datasets/issues/2190/events
https://github.com/huggingface/datasets/issues/2190
853,181,564
MDU6SXNzdWU4NTMxODE1NjQ=
2,190
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
{ "login": "anassalamah", "id": 8571003, "node_id": "MDQ6VXNlcjg1NzEwMDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anassalamah", "html_url": "https://github.com/anassalamah", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "repos_url": "https://api.github.com/users/anassalamah/repos", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```", "Hello @albertvillanova, \r\n\r\nThanks for the suggestion. I didn't know you could do that. however, it didn't resolve the issue\r\n\r\n![image](https://user-images.githubusercontent.com/8571003/114169966-ec819400-993a-11eb-8a67-930f9a9b2290.png)\r\n" ]
2021-04-08T07:53:43
2021-05-24T10:03:55
2021-05-24T10:03:55
NONE
null
null
null
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
https://api.github.com/repos/huggingface/datasets/issues/2189/events
https://github.com/huggingface/datasets/issues/2189
853,052,891
MDU6SXNzdWU4NTMwNTI4OTE=
2,189
save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon" ]
2021-04-08T04:42:53
2022-06-01T16:32:15
2022-06-01T16:32:15
NONE
null
null
null
As you can see, it saves the entire dataset. @lhoestq You can check by going through the following example, ``` from datasets import load_from_disk,concatenate_datasets loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset') n=20 kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)] final_dataset=concatenate_datasets([kb_list[1],kb_list[2]]) final_dataset.save_to_disk('/home/gsir059/haha/k.arrow') ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2188/comments
https://api.github.com/repos/huggingface/datasets/issues/2188/events
https://github.com/huggingface/datasets/issues/2188
853,044,166
MDU6SXNzdWU4NTMwNDQxNjY=
2,188
Duplicate data in Timit dataset
{ "login": "thanh-p", "id": 78190188, "node_id": "MDQ6VXNlcjc4MTkwMTg4", "avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thanh-p", "html_url": "https://github.com/thanh-p", "followers_url": "https://api.github.com/users/thanh-p/followers", "following_url": "https://api.github.com/users/thanh-p/following{/other_user}", "gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}", "starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions", "organizations_url": "https://api.github.com/users/thanh-p/orgs", "repos_url": "https://api.github.com/users/thanh-p/repos", "events_url": "https://api.github.com/users/thanh-p/events{/privacy}", "received_events_url": "https://api.github.com/users/thanh-p/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```", "Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n" ]
2021-04-08T04:21:54
2021-04-08T12:13:19
2021-04-08T12:13:19
NONE
null
null
null
I ran a simple code to list all texts in Timit dataset and the texts were all the same. Is this dataset corrupted? **Code:** timit = load_dataset("timit_asr") print(*timit['train']['text'], sep='\n') **Result:** Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? ... ... Would such an act of refusal be useful?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2188/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2187/comments
https://api.github.com/repos/huggingface/datasets/issues/2187/events
https://github.com/huggingface/datasets/issues/2187
852,939,736
MDU6SXNzdWU4NTI5Mzk3MzY=
2,187
Question (potential issue?) related to datasets caching
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out loud here...)\r\n\r\nIf this is the case, it may be ok for my use case (have to think about it more), still a bit surprising given that datasets caching is disabled (or so I hope) by the lines I pasted above. ", "Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. ignore the cache: it doesn't write nor read processed cache files.\r\nHowever `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.\r\n\r\nIndeed from the documentation:\r\n> datasets.set_caching_enabled(boolean: bool)\r\n\r\n> When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if it’s already been computed.\r\n> Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.\r\n> If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:\r\n> - cache files are always recreated\r\n> - cache files are written to a temporary directory that is deleted when session closes\r\n> - cache files are named using a random hash instead of the dataset fingerprint - use datasets.Dataset.save_to_disk() to save a transformed dataset or it will be deleted when session closes\r\n> - caching doesn’t affect datasets.load_dataset(). If you want to regenerate a dataset from scratch you should use the download_mode parameter in datasets.load_dataset().", "Thank you for the clarification. \r\n\r\nThis is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.GenerateMode) – select the download/generate mode - Default to REUSE_DATASET_IF_EXISTS` => it almost sounds that it could reload prepared dataset files. Where are these files stored? I guess not in the temporary directory that is removed... \r\n\r\nI find this type of api design error-prone. When I see as a programmer `datasets.set_caching_enabled(False)` I expect no reuse of anything in the cache. ", "It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`. \r\nThis info here:\r\n```\r\n \"\"\"`Enum` for how to treat pre-existing downloads and data.\r\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\r\n raw downloads and the prepared dataset if they exist.\r\n The generations modes:\r\n | | Downloads | Dataset |\r\n | -----------------------------------|-----------|---------|\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n```", "I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. 
I'm interested in the situation where I'm loading a (custom) dataset from local disk. What information is used to create the directory/filenames where the files are stored?\r\n\r\nI'm concerned about the following scenario: if I have a file, let's say `train.csv` at path `the_path`, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate `train.csv` at the same path `the_path`. Is there enough information in the temporary name/hash to *not* reload the *old* prepared dataset (e.g., timestamp of the file)? Or is it going to reload the *old* prepared file? ", "Thanks for the feedback, we'll work in improving this aspect of the documentation.\r\n\r\n> Where are these files stored? I guess not in the temporary directory that is removed...\r\n\r\nWe're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By default the file is located in the ~/.cache/huggingface/datasets/<dataset_name>/<config_id>/<version> directory.\r\n\r\n> What information is used to create the directory/filenames where the files are stored?\r\n\r\nThe config_id contains a hash that takes into account:\r\n- the dataset loader used and its source code (e.g. the \"csv\" loader)\r\n- the arguments passed to the loader (e.g. the csv delimiter)\r\n- metadata of the local data files if any (e.g. their timestamps)\r\n\r\n> I'm concerned about the following scenario: if I have a file, let's say train.csv at path the_path, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate train.csv at the same path the_path. Is there enough information in the temporary name/hash to not reload the old prepared dataset (e.g., timestamp of the file)? Or is it going to reload the old prepared file?\r\n\r\nYes the timestamp of the local csv file is taken into account. If you edit your csv file, the config_id will change and loading the dataset will create a new arrow file.", "Thank you for all your clarifications, really helpful! \r\n\r\nIf you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. ", "That makes total sense indeed !\r\nI think we can do the change", "I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in the naming convention and/or file access/locking that you're using prevent race conditions between the concurrent jobs on the caching of the local dataset they all use?\r\n\r\nI noticed some errors (can provide more details if helpful) in load_dataset/prepare_split that lead to my question above. \r\n\r\nLet me know if my question is clear, I can elaborate more if needed @lhoestq Thank you!", "I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same dataset and forcing redownload, they may step on each other foot/caching of the dataset. 
", "We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.\r\nAlso directories that are being written use a suffix \".incomplete\" so that reading is not possible on a dataset being written.\r\n\r\nDo you think you could provide a simple code to reproduce the race condition you experienced ?", "I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownloading of the dataset, I've been running hundreds of experiments before and didn't have a problem before I forced the redownload). I also can provide samples of the different stack errors I get and some details about the level of concurrency of jobs I was running. I can also try to imagine how the race manifests (I'm fairly sure that it's a combo of one job cleaning up and another job being in the middle of the run).\r\n\r\nHowever, I have to cleanup all this to make sure I'm no spilling any info I shouldn't be spilling. I'll try to do it by the end of the week, if you think all this is helpful. \r\n\r\nFor now, I have a workaround. Don't use forcing redownloading. And to be ultra careful (although I don't think this is a problem), I run a series of jobs that will prepare the datasets and I know there is no concurrency wrt the dataset. Once that's done (and I believe even having multiple jobs loading the datasets at the same time doesn't create problems, as long as REUSE_DATASET_IF_EXISTS is the policy for loading the dataset, so the filelock mechanism you're using is working in that scenario), the prepared datasets will be reused, no race possible in any way. \r\n\r\nThanks for all the details you provided, it helped me understand the underlying implementation and coming up with workarounds when I ran into issues. ", "Hi! I have the same challenge with caching, where the **.cache** folder is required even though it isn't possible for me.\r\n\r\nI'd like to run transformers in Snowflake, using Snowpark for Python, this would mean I could provide configurable transformers in real-time for business users without having data leave an environment (for security reasons). With no need for data transfer,n the compute is faster. It is a large use case - is it possible to entirely disable caching in certain scenarios?\r\n@lhoestq ?\r\n", "You can try to change the location of the cache folder using the `HF_CACHE_HOME` environment variable, and set a location where you have read/write access.", "Thanks @lhoestq \r\n\r\nI wanted to do that, however, snowflake does not allow it to write at all. I'm asking around to see if they can help me out with that issue 😅" ]
2021-04-08T00:16:28
2023-01-03T18:30:38
null
NONE
null
null
null
I thought I had disabled datasets caching in my code, as follows: ``` from datasets import set_caching_enabled ... def main(): # disable caching in datasets set_caching_enabled(False) ``` However, in my log files I see messages like the following: ``` 04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877 04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93 ``` Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2187/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2185/comments
https://api.github.com/repos/huggingface/datasets/issues/2185/events
https://github.com/huggingface/datasets/issues/2185
852,684,395
MDU6SXNzdWU4NTI2ODQzOTU=
2,185
.map() and distributed training
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seems to be slower at the moment (#1992), hope this helps you.", "Thanks @hwijeen for the workaround, feels a bit prototypical but it works! (it seems files are written twice then though)\r\n\r\n(I haven't observed slowness using multiprocessed map function but I could be wrong)", "To my understanding, files are written twice anyhow(one after load_dataset, another aftet map). It's just that you now have it at the location where you can see, whereas it was secretlely saved at caching folder(.cache/huggingface/datasets by default)! Correct me if I'm wrong!", "Slowness in multiprocessing has been observed in certain environments but not others. We're investigating ;)", "So to answer my initial question, I was just doing something stupid as I was not re-giving the `preprocessing_num_workers` arguments when launching the distributed training (and it was then set to `None`). I initially thought the hash was computed only with the `tokenize_function` but it's all arguments. Thanks @lhoestq for clarifying!", "This cache process isn't really consistent. I just changed `per_device_train_batch_size` of training script and now it rebuilding the dataset cache!!!! Why?", "Hi ! A `map` function is recomputed if the code changes or if any of the variables it uses changes. Can you check that your function doesn't use `per_device_train_batch_size` or any variable that contains `per_device_train_batch_size` ?", "My code is actually a transformer's example for training t5, I modified a bit:\r\n\r\nhttps://github.com/puraminy/transformers/blob/4b40877132eedb566043f83de8f1d29a84d71430/examples/flax/language-modeling/run_t5_mlm_flax.py#L614\r\n\r\nNo, it doesn't use `per_device_train_batch_size`. I remember it worked for several times and then for no reason or various reasons like the above it started to build the cache again, as if it had an expiration date (maybe), or maybe I had changed the code! \r\n\r\nSo, to get rid of these problems I saved cache with a name (was forced to not use multiple_processes, because otherwise it generates multiple files) and then I load it from this cache file. " ]
2021-04-07T18:22:14
2021-10-23T07:11:15
2021-04-09T15:38:31
MEMBER
null
null
null
Hi, I have a question regarding distributed training and the `.map` call on a dataset. I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`. `dataset` is then tokenized: ```python datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, ) ``` I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split). When I relaunch the script, the map is tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect. Everything so far was done by launching a **single process script**. I now launch the same training script in **distributed mode** (`pytorch -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files. I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it. **My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training. - I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case) - I am using 1.5.0 version of datasets if that matters.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2185/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2181/comments
https://api.github.com/repos/huggingface/datasets/issues/2181/events
https://github.com/huggingface/datasets/issues/2181
852,261,607
MDU6SXNzdWU4NTIyNjE2MDc=
2,181
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well as the size of individual chunks in the dataset.\r\n\r\nYou can also try with bigger block sizes if needed", "Hi @lhoestq! Thank you for your prompt reply.\r\nI have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or processed killed because of OOM.\r\n\r\nCould you give me a bit of background on why block size needs to be exactly calibrated?\r\nTo my understanding, small block sized should run just fine despite its slowness..\r\n\r\n\r\n", "We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.\r\nThis issue happens when there's no delimiter in one chunk of data. For json line, the delimiter is the end of line.\r\nSo with a big value for chunk_size this should have worked unless you have one extremely long line in your file.\r\n\r\nAlso what version of pyarrow are you using ?\r\n\r\nFInally I wonder if it could be an issue on pyarrow's side when using big json files. (I haven't tested big json files like yours)", "I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.\r\n\r\nYour point totally makes sense. I will check if my jsonl file contains an extremely long file and let you know. \r\n\r\nHere are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it would be wonderful if datasesets could give a clear guide on how to play with large datasets! (I am suddenly experiencing various issue when working with large datasets.. e.g. #1992 )\r\n```python\r\n return paj.ReadOptions(use_threads=self.use_threads, block_size=self.block_size)\r\n File \"pyarrow/_json.pyx\", line 56, in pyarrow._json.ReadOptions.__init__\r\n File \"pyarrow/_json.pyx\", line 81, in pyarrow._json.ReadOptions.block_size.__set__\r\nOverflowError: value too large to convert to int32_t\r\n```\r\n\r\n```python\r\n\r\nline 83, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```", "I am getting the same error. When I tweak the block_size, I also find:\r\n`OverflowError: value too large to convert to int32_t`\r\nand \r\n`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`\r\n", "I made more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists with the json file. 
I had the following data format:\r\n```python\r\n[\r\n {'key': \"a\", 'value': ['one', 'two', 'three']},\r\n {'key': \"b\", 'value': ['four', 'five', 'six']}\r\n]\r\n```\r\nI changed to:\r\n\r\n```python\r\n {'key': \"a\", 'value': 'one\\ntwo\\nthree'},\r\n {'key': \"b\", 'value': 'four\\nfive\\nsix']}\r\n```\r\nand that worked!\r\n\r\nI used the following to reformat my json file:\r\n```python\r\nwith open(file_name, \"w\", encoding=\"utf-8\") as f:\r\n for item in list_:\r\n f.write(json.dumps(item) + \"\\n\")\r\n```\r\nThis works with `block_size_10MB = 10 << 20` or without specifying `block_size`.", "Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.\r\n\r\nIndeed, those are different JSON-like formats:\r\n- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brackets `[...]`)\r\n- the second one is called **JSON Lines**: the entire file content is not JSON-valid, but only every line (newline-delimited) is JSON-valid\r\n\r\nCurrently PyArrow only supports **JSON Lines** format: \r\n- https://arrow.apache.org/docs/python/generated/pyarrow.json.read_json.html\r\n > Currently only the line-delimited JSON format is supported.\r\n- https://arrow.apache.org/docs/python/json.html\r\n > Arrow supports reading columnar data from line-delimited JSON files.", "Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!\r\nHowever, the problem I described above happened when I was dealing with jsonl files 😿\r\nAlthough I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case.", "I see... I guess there is another problem going one then, related to the size." ]
2021-04-07T10:26:46
2021-04-12T07:15:55
2021-04-12T07:15:55
NONE
null
null
null
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project. When loading a huge json file of 500GB, pyarrow complains as follows: ``` Traceback (most recent call last): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir yield tmp_dir File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` When using only a small portion of the sample file, say first 100 lines, it works perfectly well.. I see that it is the error from pyarrow, but could you give me a hint or possible solutions? #369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2181/timeline
null
completed
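A minimal sketch of the workaround discussed in the comments of the issue above: convert a standard JSON array file to JSON Lines and load it with a larger `block_size` so pyarrow can always find a newline delimiter in each chunk. The file names and the 10 MB block size are assumptions for illustration, not values from the original report.

```python
import json

from datasets import load_dataset

# Convert a standard JSON array file ([{...}, {...}]) to JSON Lines,
# writing one JSON object per line (for very large files this loop would
# need to stream records instead of json.load-ing everything at once).
with open("data.json", encoding="utf-8") as f:
    records = json.load(f)
with open("data.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# A bigger block size gives pyarrow more room per chunk to find a line delimiter.
block_size_10MB = 10 << 20
dataset = load_dataset("json", data_files="data.jsonl", block_size=block_size_10MB)
```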
https://api.github.com/repos/huggingface/datasets/issues/2179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2179/comments
https://api.github.com/repos/huggingface/datasets/issues/2179/events
https://github.com/huggingface/datasets/issues/2179
852,237,957
MDU6SXNzdWU4NTIyMzc5NTc=
2,179
Load small datasets in-memory instead of using memory map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2021-04-07T09:58:16
2021-04-20T10:04:04
2021-04-20T10:04:03
MEMBER
null
null
null
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and: - its memory footprint would be small so it's ok - in-memory computations/queries would be faster - the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk) - but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed. Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2179/timeline
null
completed
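After this issue was addressed, later releases of the library expose a `keep_in_memory` flag on `load_dataset`; a minimal sketch (the dataset name is just an example, and exact version availability is an assumption):

```python
from datasets import load_dataset

# Load a small dataset fully in memory instead of memory-mapping it from disk.
dset = load_dataset("sst", split="train", keep_in_memory=True)

# Leaving keep_in_memory unset keeps the default memory-mapped behaviour.
```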
https://api.github.com/repos/huggingface/datasets/issues/2176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2176/comments
https://api.github.com/repos/huggingface/datasets/issues/2176/events
https://github.com/huggingface/datasets/issues/2176
851,865,795
MDU6SXNzdWU4NTE4NjU3OTU=
2,176
Converting a Value to a ClassLabel
{ "login": "nelson-liu", "id": 7272031, "node_id": "MDQ6VXNlcjcyNzIwMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nelson-liu", "html_url": "https://github.com/nelson-liu", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "repos_url": "https://api.github.com/users/nelson-liu/repos", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class_names))\r\ndset = dset.map(lambda str_value: {col_name: class_feature.str2int(str_value)}, input_columns=col_name)\r\n\r\ndset = dset.cast(features.Features({\r\n ...\r\n col_name: class_feature\r\n})\r\n```\r\n", "Hi! You can use `Dataset.class_encode_column` for this. And in the next release of `datasets` (this feature is only available on `master`), you'll also be able to use `cast` to do the conversion. \r\n\r\nAn example of conversion via `cast`: \r\n```python\r\nfrom datasets import Dataset, Features, ClassLabel\r\nd = Dataset.from_dict({\"a\": [\"no\", \"yes\", \"no\"]})\r\nd = d.cast(Features({\"a\": ClassLabel(names=[\"yes\", \"no\"])}))\r\n```" ]
2021-04-06T22:54:16
2022-06-01T16:31:49
2022-06-01T16:31:49
NONE
null
null
null
Hi! In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.` Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2176/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2175/comments
https://api.github.com/repos/huggingface/datasets/issues/2175/events
https://github.com/huggingface/datasets/issues/2175
851,836,096
MDU6SXNzdWU4NTE4MzYwOTY=
2,175
dataset.search_batch() function outputs all -1 indices sometime.
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.", "@lhoestq @patrickvonplaten \r\n\r\nI also found another short bug in the retrieval part. Especially, when retrieving documents. If Faiss returns the -1 as the index, the retriever will always use the last element in the dataset.\r\n\r\nplease check [def get_doc_dicts function](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L222)\r\n\r\n\r\nDoes the use of the HNSW guarantee to retrieve valid indexes always? \r\n\r\n", "Hi !\r\nNo it happens sometimes to return -1, especially if your dataset is small.\r\nIf your dataset is big enough it shouldn't happen in my experience.\r\n\r\nIdeally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code ", "I also checked with some indexes it returns more -1s. Specially with IVF\nwhen nprobr is very low. It doesn't happen when using HNSW though. But at\nthe moment if it happens, dataset will always return the last element.\nMaybe we should change it to repeat the most last valid retrieved doc id.\nWhat do you think?\n\nOn Wed, Apr 7, 2021, 21:09 Quentin Lhoest ***@***.***> wrote:\n\n> Hi !\n> No it happens sometimes to return -1, especially if your dataset is small.\n> If your dataset is big enough it shouldn't happen.\n>\n> Ideally we should ignore all the -1 that are returned. It should be\n> possible to change that in RAG's code\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814746509>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTENOTLBEZTXEO2RS3THQOMPANCNFSM42PRVYDA>\n> .\n>\n", "That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :)", "Sure. Will push everything together with RAG end to end. :) thanks a lot.\n\nOn Wed, Apr 7, 2021, 21:16 Quentin Lhoest ***@***.***> wrote:\n\n> That would be an easy way to workaround this issue. Feel free to open a PR\n> on transformers and ping me ! :)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814752589>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWLROCGARKN7WOJYSTTHQPH5ANCNFSM42PRVYDA>\n> .\n>\n" ]
2021-04-06T21:50:49
2021-04-16T12:21:16
2021-04-16T12:21:15
NONE
null
null
null
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**. During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker. ![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png) Here, my retrieval batch size is 2 and n_docs is 5. I can solve this by working around np.stack, but I want to ask why we get an output index of -1. Do you have any idea :) ? Is this a problem of the index, where faiss can't find any similar vector? Is there documentation on the output index being -1? @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2175/timeline
null
completed
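A minimal sketch of the workaround discussed in the issue above: drop the -1 ids that FAISS returns when it cannot find enough neighbours, before fetching documents. The dataset object, index name, and query embeddings are hypothetical placeholders, not the original RAG code.

```python
import numpy as np

# Placeholder queries; `dset` is assumed to be a datasets.Dataset with a FAISS
# index already added on its "embeddings" column.
query_embeddings = np.random.rand(2, 768).astype("float32")
scores, indices = dset.search_batch("embeddings", query_embeddings, k=5)

# FAISS returns -1 for "no neighbour found" (see the FAISS FAQ); keep only
# valid ids so the lookup never falls back to the last row by accident.
valid_ids = [[int(i) for i in row if i != -1] for row in indices]
retrieved_docs = [dset[ids] for ids in valid_ids]
```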
https://api.github.com/repos/huggingface/datasets/issues/2170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2170/comments
https://api.github.com/repos/huggingface/datasets/issues/2170/events
https://github.com/huggingface/datasets/issues/2170
850,913,228
MDU6SXNzdWU4NTA5MTMyMjg=
2,170
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
{ "login": "leezu", "id": 946903, "node_id": "MDQ6VXNlcjk0NjkwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leezu", "html_url": "https://github.com/leezu", "followers_url": "https://api.github.com/users/leezu/followers", "following_url": "https://api.github.com/users/leezu/following{/other_user}", "gists_url": "https://api.github.com/users/leezu/gists{/gist_id}", "starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leezu/subscriptions", "organizations_url": "https://api.github.com/users/leezu/orgs", "repos_url": "https://api.github.com/users/leezu/repos", "events_url": "https://api.github.com/users/leezu/events{/privacy}", "received_events_url": "https://api.github.com/users/leezu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the files will still have '20200501' in their file names." ]
2021-04-06T03:13:18
2021-06-16T01:10:50
null
NONE
null
null
null
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ 21-Feb-2021 01:26 - 20210120/ 02-Mar-2021 01:25 - 20210201/ 21-Mar-2021 01:26 - 20210220/ 02-Apr-2021 01:26 - 20210301/ 03-Mar-2021 08:10 - 20210320/ 21-Mar-2021 18:13 - 20210401/ 03-Apr-2021 10:08 - latest/ 03-Apr-2021 10:08 - ``` However, the wikipedia dataset provided in the library, only supports the following configs, none of which are applicable anymore when disregarding the cached datasets: ``` ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', 
'20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` The cached datasets: ``` % aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/ PRE 20200501.de/ PRE 20200501.en/ PRE 20200501.fr/ PRE 20200501.frr/ PRE 20200501.it/ PRE 20200501.simple/ ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2170/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/2167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2167/comments
https://api.github.com/repos/huggingface/datasets/issues/2167/events
https://github.com/huggingface/datasets/issues/2167
849,944,891
MDU6SXNzdWU4NDk5NDQ4OTE=
2,167
Split type not preserved when reloading the dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2021-04-04T19:29:54
2021-04-19T09:08:55
2021-04-19T09:08:55
CONTRIBUTOR
null
null
null
A minimal reproducible example: ```python >>> from datasets import load_dataset, Dataset >>> dset = load_dataset("sst", split="train") >>> dset.save_to_disk("sst") >>> type(dset.split) <class 'datasets.splits.NamedSplit'> >>> dset = Dataset.load_from_disk("sst") >>> type(dset.split) # NamedSplit expected <class 'str'> ``` It seems like this bug was introduced in #2025.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2167/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2166/comments
https://api.github.com/repos/huggingface/datasets/issues/2166/events
https://github.com/huggingface/datasets/issues/2166
849,778,545
MDU6SXNzdWU4NDk3Nzg1NDU=
2,166
Regarding Test Sets for the GEM datasets
{ "login": "vyraun", "id": 17217068, "node_id": "MDQ6VXNlcjE3MjE3MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vyraun", "html_url": "https://github.com/vyraun", "followers_url": "https://api.github.com/users/vyraun/followers", "following_url": "https://api.github.com/users/vyraun/following{/other_user}", "gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}", "starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyraun/subscriptions", "organizations_url": "https://api.github.com/users/vyraun/orgs", "repos_url": "https://api.github.com/users/vyraun/repos", "events_url": "https://api.github.com/users/vyraun/events{/privacy}", "received_events_url": "https://api.github.com/users/vyraun/received_events", "type": "User", "site_admin": false }
[ { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
closed
false
null
[]
null
[ "Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the test sets but shouldn't really be used for benchmark submissions)\r\n\r\ncc @sebastiangehrmann", "Oh okay, thanks @yjernite ! " ]
2021-04-04T02:02:45
2021-04-06T08:13:12
2021-04-06T08:13:12
NONE
null
null
null
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g. ``` from datasets import load_dataset DATASET_NAME="common_gen" data = load_dataset("gem", DATASET_NAME) ``` The test set doesn't have the target or references. ``` data['test'][0] {'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2166/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2165/comments
https://api.github.com/repos/huggingface/datasets/issues/2165/events
https://github.com/huggingface/datasets/issues/2165
849,771,665
MDU6SXNzdWU4NDk3NzE2NjU=
2,165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
{ "login": "y-rokutan", "id": 24562381, "node_id": "MDQ6VXNlcjI0NTYyMzgx", "avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/y-rokutan", "html_url": "https://github.com/y-rokutan", "followers_url": "https://api.github.com/users/y-rokutan/followers", "following_url": "https://api.github.com/users/y-rokutan/following{/other_user}", "gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}", "starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions", "organizations_url": "https://api.github.com/users/y-rokutan/orgs", "repos_url": "https://api.github.com/users/y-rokutan/repos", "events_url": "https://api.github.com/users/y-rokutan/events{/privacy}", "received_events_url": "https://api.github.com/users/y-rokutan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r\n\r\n def __len__(self):\r\n return len(self.dset)\r\n\r\ntrain_ds = HFDataset(train_ds)\r\n```\r\n@lhoestq Since the Arrow Dataset already provides `__getitem__` and `__len__`, I think we could use the [virtual subclass](https://docs.python.org/3/library/abc.html#abc.ABCMeta.register) mechanism from the `abc` module to elegantly solve this issue. This mechanism would allow the Arrow Dataset to be used in place of the Torch Dataset because the `isinstance(instance of Arrow Dataset, TorchDataset)` check would return True (DeepSpeed has this check [here](https://github.com/microsoft/DeepSpeed/blob/ab5534fc4c0f8ca21ada321f9730d723aa31288b/deepspeed/runtime/engine.py#L823)).\r\n\r\nAnd it requires a minimal change in the `arrow_dataset.py` file:\r\n```python\r\nif config.TORCH_AVAILABLE:\r\n from torch.utils.data import Dataset as TorchDataset\r\n TorchDataset.register(Dataset)\r\n```", "Interesting ! Thanks for sharing this @mariosasko . I like the idea\r\nThis looks like something we should add IMO", "@mariosasko \r\nThx for your code!\r\nIt perfectly works with a small modification for HF NLP dataset:\r\n```\r\noriginal_ds = nlp.load_dataset('scientific_papers', 'arxiv')\r\ntrain_ds = HFDataset(train_ds['train']) # needs splitting\r\n```", "@lhoestq Sadly, from Python 3.7 onwards `torch.utils.data.Dataset` doesn't support the virtual subclass mechanism due to `typing.Generic` type no longer having `abc.ABCMeta` as its metaclass.\r\n\r\nWith that in mind, another option is to remove a direct type check (`isinstance(dataset, torch.utils.data.Dataset)`) in `deepspeed.initalize` and to rewrite the checks in a manner similar to `torch.utils.data.DataLoader` ([link](https://github.com/pytorch/pytorch/blob/b80c6f863f2327c712c478f67c248b94d66b65ac/torch/utils/data/dataloader.py#L197-L239)). This is exactly why the `DataLoader` works with arbitrary objects that provide `__getitem__` and `__len__` (and in our case, the `ArrowDataset`). By doing so, their code wouldn't be any stricter in comparison to the `DataLoader`.\r\n\r\nSo if you agree, I can open an issue in their repo and fix this if they like the idea.", "That makes sense ! Feel free to open an issue on their repo and discuss this idea", "@y-rokutan Hi, now if you install `deepspeed` from master (this feature will be available in the next official release), the code should work without subclassing. Let us know if you still have any issues.", "Worth mentioning that any function that expects a `torch..Dataset` (like `torch..DataLoader`) will fail a mypy-esque typecheck if a `datasets.Dataset` is passed, even though it implements the interface correctly (I think). The virtual subclass idea was a good one- I wonder if there's another workaround given the Generic issue. What we're really talking about is something similar to the structural subtyping semantics that `typing.Protocol` defines. If `torch..DataLoader` accepted anything that supports `__getitem__` and `__len__` methods this would be much easier. Not sure if there's a way to do this without the wrapper from the perspective of `datasets`." ]
2021-04-04T01:01:48
2021-08-24T15:55:35
2021-04-07T15:06:04
NONE
null
null
null
Hi, I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( args=args, model=model, model_parameters=[p for p in model.parameters() if p.requires_grad], training_data=train_ds) ``` but deepspeed.initialize accepts torch.utils.data.Dataset only. How can I convert an HF-style dataset to a torch-style dataset?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2165/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2162/comments
https://api.github.com/repos/huggingface/datasets/issues/2162/events
https://github.com/huggingface/datasets/issues/2162
849,129,201
MDU6SXNzdWU4NDkxMjkyMDE=
2,162
visualization for cc100 is broken
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?", "Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself but not sure\n> Did you try loading cc100 on your machine ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2162#issuecomment-814793809>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMRUO33JSOYGT6RETWLTHQWNLANCNFSM42IUOR6Q>\n> .\n>\n", "Hi! This visualization tool is deprecated now. The viewer at https://huggingface.co/datasets/cc100 works fine, so I'm closing this issue." ]
2021-04-02T10:11:13
2022-10-05T13:20:24
2022-10-05T13:20:24
NONE
null
null
null
Hi, visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ thanks a lot
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2162/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2161/comments
https://api.github.com/repos/huggingface/datasets/issues/2161/events
https://github.com/huggingface/datasets/issues/2161
849,127,041
MDU6SXNzdWU4NDkxMjcwNDE=
2,161
any possibility to download part of large datasets only?
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Not yet but it’s on the short/mid-term roadmap (requested by many indeed).", "oh, great, really awesome feature to have, thank you very much for the great, fabulous work", "We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)", "thanks a lot Quentin, this would be really really a great feature to have\n\nOn Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> We'll work on dataset streaming soon. This should allow you to only load\n> the examples you need ;)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2161#issuecomment-814791922>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMROD62QAKIJMAKWISTTHQWBVANCNFSM42IUI5JQ>\n> .\n>\n", "Is streaming completed? On the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error:\r\n\r\n```\r\n>>> dataset2 = load_dataset(\"amazon_us_reviews\", \"Pet_Products_v1_00\", split='train', streaming=True)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-21-1eedab26cff1> in <module>()\r\n----> 1 en_dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n\r\n3 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 339 if value is not None:\r\n 340 if not hasattr(builder_config, key):\r\n--> 341 raise ValueError(f\"BuilderConfig {builder_config} doesn't have a '{key}' key.\")\r\n 342 setattr(builder_config, key, value)\r\n 343 \r\n\r\nValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key.\r\n```\r\n\r\nUPDATE: Managed to get streaming working by building from source and installing the additional `datasets[streaming]` package:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets.git\r\n!pip install datasets[streaming]\r\n```", "Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :)" ]
2021-04-02T10:06:46
2022-10-05T13:26:51
2022-10-05T13:26:51
NONE
null
null
null
Hi Some of the datasets I need, like cc100, are very large, so I wonder whether I can download only the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2161/timeline
null
completed
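A minimal sketch of what this request turned into once streaming landed (datasets>=1.9): stream the corpus and take only the first N examples without downloading the full dump. The language config and the number of examples are placeholders.

```python
from itertools import islice

from datasets import load_dataset

# Stream cc100 instead of downloading the whole corpus first.
streamed = load_dataset("cc100", lang="en", split="train", streaming=True)

# Take only the first 1,000 examples from the stream.
first_1000 = list(islice(iter(streamed), 1000))
```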
https://api.github.com/repos/huggingface/datasets/issues/2160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2160/comments
https://api.github.com/repos/huggingface/datasets/issues/2160/events
https://github.com/huggingface/datasets/issues/2160
849,052,921
MDU6SXNzdWU4NDkwNTI5MjE=
2,160
data_args.preprocessing_num_workers almost freezes
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ | 172/1583 [00:46<06:21, 3.70ba/s]\r\n#4: 9%|█████████████▏ | 143/1583 [00:46<07:46, 3.09ba/s]\r\n#7: 6%|█████████ | 98/1583 [00:45<11:34, 2.14ba/s]\r\n#5: 8%|███████████▍ | 124/1583 [00:46<09:03, 2.68ba/s]\r\n#6: 7%|██████████▏ \r\n```", "closing since I cannot reproduce it again, thanks " ]
2021-04-02T07:56:13
2021-04-02T10:14:32
2021-04-02T10:14:31
NONE
null
null
null
Hi @lhoestq I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py. To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus, but processing advances to a point, then almost freezes for some time during the tokenization steps, and then resumes; overall it takes more time than the normal case. I would appreciate your advice on how to use this option properly to speed things up. thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2160/timeline
null
completed
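For context on the issue above: `preprocessing_num_workers` in run_mlm.py is forwarded to `Dataset.map(num_proc=...)`, which shards the dataset and shows one progress bar per worker process. A minimal sketch of multiprocessing tokenization on opus100; the tokenizer choice and batch handling are assumptions, not the script's exact code.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = load_dataset("opus100", "de-en", split="train")

def tokenize(batch):
    # opus100 stores each example as {"translation": {"de": ..., "en": ...}}.
    texts = [pair["en"] for pair in batch["translation"]]
    return tokenizer(texts, truncation=True)

# num_proc=4 runs four worker processes over disjoint shards of the dataset.
tokenized = raw.map(tokenize, batched=True, num_proc=4, remove_columns=["translation"])
```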
https://api.github.com/repos/huggingface/datasets/issues/2159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2159/comments
https://api.github.com/repos/huggingface/datasets/issues/2159/events
https://github.com/huggingface/datasets/issues/2159
848,851,962
MDU6SXNzdWU4NDg4NTE5NjI=
2,159
adding ccnet dataset
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "closing since I think this is cc100, just the name has been changed. thanks " ]
2021-04-01T23:28:36
2021-04-02T10:05:19
2021-04-02T10:05:19
NONE
null
null
null
## Adding a Dataset - **Name:** ccnet - **Description:** Common Crawl - **Paper:** https://arxiv.org/abs/1911.00359 - **Data:** https://github.com/facebookresearch/cc_net - **Motivation:** this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual research. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2159/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2158/comments
https://api.github.com/repos/huggingface/datasets/issues/2158/events
https://github.com/huggingface/datasets/issues/2158
848,506,746
MDU6SXNzdWU4NDg1MDY3NDY=
2,158
viewer "fake_news_english" error
{ "login": "emanuelevivoli", "id": 9447991, "node_id": "MDQ6VXNlcjk0NDc5OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emanuelevivoli", "html_url": "https://github.com/emanuelevivoli", "followers_url": "https://api.github.com/users/emanuelevivoli/followers", "following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}", "gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}", "starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions", "organizations_url": "https://api.github.com/users/emanuelevivoli/orgs", "repos_url": "https://api.github.com/users/emanuelevivoli/repos", "events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}", "received_events_url": "https://api.github.com/users/emanuelevivoli/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly", "This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue" ]
2021-04-01T14:13:20
2022-10-05T13:22:02
2022-10-05T13:22:02
NONE
null
null
null
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance' as well as the error Traceback.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2158/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
https://api.github.com/repos/huggingface/datasets/issues/2153/events
https://github.com/huggingface/datasets/issues/2153
846,181,502
MDU6SXNzdWU4NDYxODE1MDI=
2,153
load_dataset ignoring features
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201", "Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.", "Hi :) We're indeed working on tutorials that we will add to the docs !" ]
2021-03-31T08:30:09
2022-10-05T13:29:12
2022-10-05T13:29:12
NONE
null
null
null
First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can see, when I load the dataset, the ClassLabels are ignored, I have to cast the dataset in order to make it work. Code to reproduce: ```python import datasets data_location = "/data/prueba_multiclase" features = datasets.Features( {"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])} ) dataset = datasets.load_dataset( "csv", data_files=data_location, delimiter="\t", features=features ) ``` Dataset I used: [prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped) Thank you! ❤️
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2149/comments
https://api.github.com/repos/huggingface/datasets/issues/2149/events
https://github.com/huggingface/datasets/issues/2149
844,734,076
MDU6SXNzdWU4NDQ3MzQwNzY=
2,149
Telugu subset missing for xtreme tatoeba dataset
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this", "Fixed in #2180" ]
2021-03-30T15:26:34
2022-10-05T13:28:30
2022-10-05T13:28:30
CONTRIBUTOR
null
null
null
```python
from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
```

```
ValueError: BuilderConfig tatoeba.tel not found.
```

But language `tel` is actually included in xtreme: https://github.com/google-research/xtreme/blob/master/utils_preprocess.py

```python
def tatoeba_preprocess(args):
    lang3_dict = {
        'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',
        'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',
        'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',
        'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',
        'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',
        'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',
        'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',
        'tam':'ta', 'tel':'te', 'tha':'th', 'tgl':'tl',  # <---- 'tel' is here
        'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh',
        'eng':'en',
    }
```
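A quick way to confirm which tatoeba configs the builder actually exposes — a diagnostic sketch only; `get_dataset_config_names` exists in recent `datasets` releases, not necessarily in the version used above:

```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("xtreme")
print(sorted(c for c in configs if c.startswith("tatoeba")))
# If "tatoeba.tel" is absent from this list, the builder lacks the Telugu config
# even though the upstream xtreme preprocessing script maps 'tel' -> 'te'.
```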
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2149/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/2148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2148/comments
https://api.github.com/repos/huggingface/datasets/issues/2148/events
https://github.com/huggingface/datasets/issues/2148
844,700,910
MDU6SXNzdWU4NDQ3MDA5MTA=
2,148
Add configurable options to `seqeval` metric
{ "login": "marrodion", "id": 44571847, "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marrodion", "html_url": "https://github.com/marrodion", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "organizations_url": "https://api.github.com/users/marrodion/orgs", "repos_url": "https://api.github.com/users/marrodion/repos", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "received_events_url": "https://api.github.com/users/marrodion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `importlib`:\r\n```python\r\nif scheme:\r\n scheme = importlib.import_module(f\"seqeval.scheme.{scheme}\")\r\n```\r\n\r\nFeel free to create a Pull Request to make this contribution." ]
2021-03-30T15:04:06
2021-04-15T13:49:46
2021-04-15T13:49:46
CONTRIBUTOR
null
null
null
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute`: https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109 Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only full entity matches as true positives and omit partial matches. The only problem I see is that the spirit of `metrics` seems to be to not require additional imports from the user, and `seqeval` only accepts schemes as objects, without any string aliases. It could be solved naively with a mapping like `{"IOB2": seqeval.scheme.IOB2}`, or just left as is, requiring the user to explicitly import the scheme from `seqeval` if they want to configure it past the default implementation. If that makes sense, I am happy to implement the change.
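For illustration, a hedged sketch of the kind of wrapper this request describes — the alias mapping and the function name are assumptions for this example, not the actual `datasets` metric code:

```python
from seqeval.metrics import classification_report
from seqeval.scheme import IOB1, IOB2, IOE2, IOBES, BILOU

# Naive string-alias mapping, as suggested above.
_SCHEMES = {"IOB1": IOB1, "IOB2": IOB2, "IOE2": IOE2, "IOBES": IOBES, "BILOU": BILOU}

def compute_seqeval(predictions, references, mode=None, scheme=None):
    # Resolve a string alias such as "IOB2" to the seqeval scheme class.
    scheme_cls = _SCHEMES[scheme] if isinstance(scheme, str) else scheme
    return classification_report(
        y_true=references,
        y_pred=predictions,
        mode=mode,          # e.g. "strict" counts only exact entity matches
        scheme=scheme_cls,  # e.g. IOB2
        output_dict=True,
    )
```

With a wrapper along these lines, a caller could request strict IOB2 evaluation via `compute_seqeval(preds, refs, mode="strict", scheme="IOB2")` without importing anything from `seqeval` directly.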
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2148/timeline
null
completed