| Column | Type | Stats |
|---|---|---|
| url | string | lengths 58 to 61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72 to 75 |
| comments_url | string | lengths 67 to 70 |
| events_url | string | lengths 65 to 68 |
| html_url | string | lengths 48 to 51 |
| id | int64 | 600M to 2.19B |
| node_id | string | lengths 18 to 24 |
| number | int64 | 2 to 6.73k |
| title | string | lengths 1 to 290 |
| user | dict | |
| labels | list | lengths 0 to 4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0 to 4 |
| milestone | dict | |
| comments | list | lengths 0 to 30 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| draft | null | |
| pull_request | null | |
| body | string | lengths 0 to 228k |
| reactions | dict | |
| timeline_url | string | lengths 67 to 70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
https://api.github.com/repos/huggingface/datasets/issues/5123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5123/comments
https://api.github.com/repos/huggingface/datasets/issues/5123/events
https://github.com/huggingface/datasets/issues/5123
1,410,828,756
I_kwDODunzps5UF4nU
5,123
datasets freezes with streaming mode on multiple GPUs
{ "login": "jackfeinmann5", "id": 59409879, "node_id": "MDQ6VXNlcjU5NDA5ODc5", "avatar_url": "https://avatars.githubusercontent.com/u/59409879?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackfeinmann5", "html_url": "https://github.com/jackfeinmann5", "followers_url": "https://api.github.com/users/jackfeinmann5/followers", "following_url": "https://api.github.com/users/jackfeinmann5/following{/other_user}", "gists_url": "https://api.github.com/users/jackfeinmann5/gists{/gist_id}", "starred_url": "https://api.github.com/users/jackfeinmann5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jackfeinmann5/subscriptions", "organizations_url": "https://api.github.com/users/jackfeinmann5/orgs", "repos_url": "https://api.github.com/users/jackfeinmann5/repos", "events_url": "https://api.github.com/users/jackfeinmann5/events{/privacy}", "received_events_url": "https://api.github.com/users/jackfeinmann5/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "@lhoestq I tested the script without accelerator, and I confirm this is due to datasets part as this gets similar results without accelerator.", "Hi ! You said it works on 1 GPU but doesn't wortk without accelerator - what's the difference between running on 1 GPU and running without accelerator in your case ?", "Hi @lhoestq \r\nthanks for coming back to me. Sorry for the confusion I made. I meant this works fine on 1 GPU, but on multi-gpu it is freezing. \"accelerator\" is not an issue as if you adapt the code without accelerator this still gets the same issue.\r\nIn order to test it. Please run \"accelerate config\", then use the setup for multi-gpu in one node.\r\nAfter that run \"accelerate launch code.py\" and then you would see the freezing occurs.", "Hi @lhoestq \r\ncould you have the chance to reproduce the error by running the minimal example shared?\r\nthanks", "I think you need to do `train_dataset = train_dataset.with_format(\"torch\")` to work with the DataLoader in a multiprocessing setup :)\r\n\r\nThe hang is probably caused by our streamign lib `fsspec` which doesn't work in multiprocessing out of the box - but we made it work with the PyTorch DataLoader when the dataset format is set to \"torch\"", "Hi @lhoestq \r\nthanks for the response. I added the line suggested right before calling `with accelerator.main_process_first():` in the code above and I confirm this also freezes. to reproduce it please run \"accelerate launch code.py\". I was wondering if you could have more suggestions for me? I do not have an idea how to fix this or debug this freezing. many thanks.", "Maybe the `fsspec` stuff need to be clearer even before - can you try to run this function at the very beginning of your script ?\r\n```python\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n_set_fsspec_for_multiprocess()\r\n```", "Hi @lhoestq \r\nthank you. I tried it, I am getting `AttributeError: module 'fsspec' has no attribute 'asyn'`. which version of fsspect do you use?\r\nI am using \r\n```fsspec 2022.8.2 pypi_0 pypi```\r\nthank you.", "Hi @lhoestq \r\nI solved `fsspec` error with this hack for now https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 but this is still freezing, I greatly appreciate if you could run this script on your side. 
Many thanks.\r\n\r\n```\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n\r\n_set_fsspec_for_multiprocess()\r\n\r\nfrom accelerate import Accelerator\r\nfrom accelerate.logging import get_logger\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data.dataloader import DataLoader\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\nimport torch\r\nfrom accelerate.logging import get_logger\r\nfrom torch.utils.data import IterableDataset\r\nfrom torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe\r\n\r\n\r\nlogger = get_logger(__name__)\r\n\r\n\r\nclass ConstantLengthDataset(IterableDataset):\r\n \"\"\"\r\n Iterable dataset that returns constant length chunks of tokens from stream of text files.\r\n Args:\r\n tokenizer (Tokenizer): The processor used for proccessing the data.\r\n dataset (dataset.Dataset): Dataset with text files.\r\n infinite (bool): If True the iterator is reset after dataset reaches end else stops.\r\n max_seq_length (int): Length of token sequences to return.\r\n num_of_sequences (int): Number of token sequences to keep in buffer.\r\n chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\r\n \"\"\"\r\n\r\n def __init__(\r\n self,\r\n tokenizer,\r\n dataset,\r\n infinite=False,\r\n max_seq_length=1024,\r\n num_of_sequences=1024,\r\n chars_per_token=3.6,\r\n ):\r\n self.tokenizer = tokenizer\r\n # self.concat_token_id = tokenizer.bos_token_id\r\n self.dataset = dataset\r\n self.max_seq_length = max_seq_length\r\n self.epoch = 0\r\n self.infinite = infinite\r\n self.current_size = 0\r\n self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences\r\n self.content_field = \"text\"\r\n\r\n def __iter__(self):\r\n iterator = iter(self.dataset)\r\n more_examples = True\r\n while more_examples:\r\n buffer, buffer_len = [], 0\r\n while True:\r\n if buffer_len >= self.max_buffer_size:\r\n break\r\n try:\r\n buffer.append(next(iterator)[self.content_field])\r\n buffer_len += len(buffer[-1])\r\n except StopIteration:\r\n if self.infinite:\r\n iterator = iter(self.dataset)\r\n self.epoch += 1\r\n logger.info(f\"Dataset epoch: {self.epoch}\")\r\n else:\r\n more_examples = False\r\n break\r\n tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\r\n all_token_ids = []\r\n for tokenized_input in tokenized_inputs:\r\n all_token_ids.extend(tokenized_input)\r\n for i in range(0, len(all_token_ids), self.max_seq_length):\r\n input_ids = all_token_ids[i : i + self.max_seq_length]\r\n if len(input_ids) == self.max_seq_length:\r\n self.current_size += 1\r\n yield torch.tensor(input_ids)\r\n\r\n def shuffle(self, buffer_size=1000):\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n\r\n\r\ndef create_dataloaders(tokenizer, accelerator):\r\n ds_kwargs = {\"streaming\": True}\r\n # In distributed training, the load_dataset function gaurantees that only one process\r\n # can concurrently download the dataset.\r\n datasets = load_dataset(\r\n \"c4\",\r\n \"en\",\r\n cache_dir=\"cache_dir\",\r\n **ds_kwargs,\r\n )\r\n train_data, valid_data = datasets[\"train\"], datasets[\"validation\"]\r\n with 
accelerator.main_process_first():\r\n train_data = train_data.shuffle(buffer_size=10000, seed=None)\r\n train_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n train_data,\r\n infinite=True,\r\n max_seq_length=256,\r\n )\r\n valid_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n valid_data,\r\n infinite=False,\r\n max_seq_length=256,\r\n )\r\n train_dataset = train_dataset.shuffle(buffer_size=10000)\r\n train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)\r\n eval_dataloader = DataLoader(valid_dataset, batch_size=160)\r\n return train_dataloader, eval_dataloader\r\n\r\n\r\ndef main():\r\n # Accelerator.\r\n logging_dir = \"data_save_dir/log\"\r\n accelerator = Accelerator(\r\n gradient_accumulation_steps=1,\r\n mixed_precision=\"bf16\",\r\n log_with=\"tensorboard\",\r\n logging_dir=logging_dir,\r\n )\r\n # We need to initialize the trackers we use, and also store our configuration.\r\n # The trackers initializes automatically on the main process.\r\n if accelerator.is_main_process:\r\n accelerator.init_trackers(\"test\")\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n # Load datasets and create dataloaders.\r\n train_dataloader, _ = create_dataloaders(tokenizer, accelerator)\r\n\r\n train_dataloader = accelerator.prepare(train_dataloader)\r\n for step, batch in enumerate(train_dataloader, start=1):\r\n print(step)\r\n accelerator.end_training()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```", "Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line: \r\n```\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n```\r\n`ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.", "> Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line:\r\n> \r\n> ```\r\n> return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n> ```\r\n> \r\n> `ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.\r\n\r\nI met the same issue for pytorch 1.12 and 1.13, is there a way to work around for this function for newer pytorch versions?" ]
2022-10-17T03:28:16
2023-05-14T06:55:20
null
NONE
null
null
null
## Describe the bug Hi. I am using this dataloader, which is for processing large datasets in streaming mode mentioned in one of examples of huggingface. I am using it to read c4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L22 During using multi-gpu in accelerator in one node, the code freezes, but works for 1 GPU: ``` 10/16/2022 14:18:46 - INFO - datasets.info - Loading Dataset Infos from /home/jack/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 Steps: 0%| | 0/400000 [00:00<?, ?it/s]10/16/2022 14:18:47 - INFO - torch.utils.data.dataloader - Shared seed (135290893754684706) sent to store on rank 0 ``` # Code to reproduce please run this code with `accelerate launch code.py` ``` from accelerate import Accelerator from accelerate.logging import get_logger from datasets import load_dataset from torch.utils.data.dataloader import DataLoader import torch from datasets import load_dataset from transformers import AutoTokenizer import torch from accelerate.logging import get_logger from torch.utils.data import IterableDataset from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe logger = get_logger(__name__) class ConstantLengthDataset(IterableDataset): """ Iterable dataset that returns constant length chunks of tokens from stream of text files. Args: tokenizer (Tokenizer): The processor used for proccessing the data. dataset (dataset.Dataset): Dataset with text files. infinite (bool): If True the iterator is reset after dataset reaches end else stops. max_seq_length (int): Length of token sequences to return. num_of_sequences (int): Number of token sequences to keep in buffer. chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer. """ def __init__( self, tokenizer, dataset, infinite=False, max_seq_length=1024, num_of_sequences=1024, chars_per_token=3.6, ): self.tokenizer = tokenizer # self.concat_token_id = tokenizer.bos_token_id self.dataset = dataset self.max_seq_length = max_seq_length self.epoch = 0 self.infinite = infinite self.current_size = 0 self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences self.content_field = "text" def __iter__(self): iterator = iter(self.dataset) more_examples = True while more_examples: buffer, buffer_len = [], 0 while True: if buffer_len >= self.max_buffer_size: break try: buffer.append(next(iterator)[self.content_field]) buffer_len += len(buffer[-1]) except StopIteration: if self.infinite: iterator = iter(self.dataset) self.epoch += 1 logger.info(f"Dataset epoch: {self.epoch}") else: more_examples = False break tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"] all_token_ids = [] for tokenized_input in tokenized_inputs: all_token_ids.extend(tokenized_input) for i in range(0, len(all_token_ids), self.max_seq_length): input_ids = all_token_ids[i : i + self.max_seq_length] if len(input_ids) == self.max_seq_length: self.current_size += 1 yield torch.tensor(input_ids) def shuffle(self, buffer_size=1000): return ShufflerIterDataPipe(self, buffer_size=buffer_size) def create_dataloaders(tokenizer, accelerator): ds_kwargs = {"streaming": True} # In distributed training, the load_dataset function gaurantees that only one process # can concurrently download the dataset. 
datasets = load_dataset( "c4", "en", cache_dir="cache_dir", **ds_kwargs, ) train_data, valid_data = datasets["train"], datasets["validation"] with accelerator.main_process_first(): train_data = train_data.shuffle(buffer_size=10000, seed=None) train_dataset = ConstantLengthDataset( tokenizer, train_data, infinite=True, max_seq_length=256, ) valid_dataset = ConstantLengthDataset( tokenizer, valid_data, infinite=False, max_seq_length=256, ) train_dataset = train_dataset.shuffle(buffer_size=10000) train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True) eval_dataloader = DataLoader(valid_dataset, batch_size=160) return train_dataloader, eval_dataloader def main(): # Accelerator. logging_dir = "data_save_dir/log" accelerator = Accelerator( gradient_accumulation_steps=1, mixed_precision="bf16", log_with="tensorboard", logging_dir=logging_dir, ) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: accelerator.init_trackers("test") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") # Load datasets and create dataloaders. train_dataloader, _ = create_dataloaders(tokenizer, accelerator) train_dataloader = accelerator.prepare(train_dataloader) for step, batch in enumerate(train_dataloader, start=1): print(step) accelerator.end_training() if __name__ == "__main__": main() ``` ## Results expected Being able to run the code for streamining datasets with multi-gpu ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: linux - Python version: 3.9.12 - PyArrow version: 9.0.0 @lhoestq I do not have any idea why this freezing happens, and I removed the streaming mode and this was working fine, so I know this is caused by streaming mode of the dataloader part not working well with multi-gpu setting. Since datasets are large, I hope to keep the streamining mode. I very much appreciate your help.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5123/timeline
null
null
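The thread above converges on two workarounds: resetting `fsspec`'s cached event loop before any worker processes start, and calling `with_format("torch")` on the streaming dataset. The sketch below only condenses those suggestions from the comments (reusing the c4 setup from the reproduction script); it is not a confirmed fix for the multi-GPU freeze.

```python
import fsspec.asyn
from datasets import load_dataset
from torch.utils.data import DataLoader


def _set_fsspec_for_multiprocess() -> None:
    # Clear the cached event loop and IO thread so HTTPFileSystem does not
    # hang once the DataLoader (or accelerate) spawns worker processes.
    # See https://github.com/fsspec/gcsfs/issues/379
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None


_set_fsspec_for_multiprocess()

# Stream the dataset and set the "torch" format, as suggested in the thread,
# so the PyTorch DataLoader can consume it in a multiprocessing setup.
train_data = load_dataset("c4", "en", split="train", streaming=True)
train_data = train_data.shuffle(buffer_size=10_000, seed=42)
train_data = train_data.with_format("torch")

train_loader = DataLoader(train_data, batch_size=8)
print(next(iter(train_loader))["text"][:2])
```

Importing `fsspec.asyn` explicitly (rather than relying on `import fsspec`) also avoids the `AttributeError: module 'fsspec' has no attribute 'asyn'` that the reporter hit with fsspec 2022.8.2.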
https://api.github.com/repos/huggingface/datasets/issues/5118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5118/comments
https://api.github.com/repos/huggingface/datasets/issues/5118/events
https://github.com/huggingface/datasets/issues/5118
1,410,547,373
I_kwDODunzps5UEz6t
5,118
Installing `datasets` on M1 computers
{ "login": "david1542", "id": 9879252, "node_id": "MDQ6VXNlcjk4NzkyNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david1542", "html_url": "https://github.com/david1542", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "organizations_url": "https://api.github.com/users/david1542/orgs", "repos_url": "https://api.github.com/users/david1542/repos", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "received_events_url": "https://api.github.com/users/david1542/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @david1542." ]
2022-10-16T16:50:08
2022-10-19T09:10:08
2022-10-19T09:10:08
CONTRIBUTOR
null
null
null
## Describe the bug I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`. On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1? ## Steps to reproduce the bug Fresh clone this project (on m1), create a virtualenv and run this: ```python pip install -e ".[dev]" ``` ## Expected results Installation should be smooth, and all the dependencies should be installed on M1. ## Actual results You should receive an error, saying pip couldn't find a version that matches this pattern: ``` tensorflow>=2.3,!=2.6.0,!=2.6.1 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.2.dev0 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5118/timeline
null
completed
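The conditional requirement requested in this issue can be expressed with standard PEP 508 environment markers. The snippet below only illustrates that mechanism; the package name and extra are hypothetical, and the version pins merely mirror the pattern from the error message above rather than the actual pins in the `datasets` setup.py.

```python
# Illustrative setup.py fragment: select tensorflow-macos on Apple Silicon
# and plain tensorflow everywhere else via environment markers.
from setuptools import setup

TF_REQUIRE = [
    "tensorflow>=2.3,!=2.6.0,!=2.6.1; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos>=2.3; sys_platform == 'darwin' and platform_machine == 'arm64'",
]

setup(
    name="example-package",   # hypothetical package name
    version="0.0.1",
    extras_require={"tensorflow": TF_REQUIRE},
)
```

With markers like these, `pip install -e ".[dev]"` on an M1 machine would resolve `tensorflow-macos` instead of failing on the `tensorflow>=2.3,!=2.6.0,!=2.6.1` pin.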
https://api.github.com/repos/huggingface/datasets/issues/5117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5117/comments
https://api.github.com/repos/huggingface/datasets/issues/5117/events
https://github.com/huggingface/datasets/issues/5117
1,409,571,346
I_kwDODunzps5UBFoS
5,117
Progress bars turn red and never complete to 100%
{ "login": "echatzikyriakidis", "id": 63857529, "node_id": "MDQ6VXNlcjYzODU3NTI5", "avatar_url": "https://avatars.githubusercontent.com/u/63857529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/echatzikyriakidis", "html_url": "https://github.com/echatzikyriakidis", "followers_url": "https://api.github.com/users/echatzikyriakidis/followers", "following_url": "https://api.github.com/users/echatzikyriakidis/following{/other_user}", "gists_url": "https://api.github.com/users/echatzikyriakidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/echatzikyriakidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/echatzikyriakidis/subscriptions", "organizations_url": "https://api.github.com/users/echatzikyriakidis/orgs", "repos_url": "https://api.github.com/users/echatzikyriakidis/repos", "events_url": "https://api.github.com/users/echatzikyriakidis/events{/privacy}", "received_events_url": "https://api.github.com/users/echatzikyriakidis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "david1542", "id": 9879252, "node_id": "MDQ6VXNlcjk4NzkyNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david1542", "html_url": "https://github.com/david1542", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "organizations_url": "https://api.github.com/users/david1542/orgs", "repos_url": "https://api.github.com/users/david1542/repos", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "received_events_url": "https://api.github.com/users/david1542/received_events", "type": "User", "site_admin": false }
[ { "login": "david1542", "id": 9879252, "node_id": "MDQ6VXNlcjk4NzkyNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david1542", "html_url": "https://github.com/david1542", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "organizations_url": "https://api.github.com/users/david1542/orgs", "repos_url": "https://api.github.com/users/david1542/repos", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "received_events_url": "https://api.github.com/users/david1542/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @echatzikyriakidis, thanks for submitting the issue.\r\nWhich shell are you using exactly? I tried to run the command you sent, but I don't see colors at all 🧐\r\n\r\nI tried from bash and zsh as well.", "Hi @david1542 ,\r\n\r\nI use Google Colab.\r\n", "Got it. I [created a PR](https://github.com/huggingface/datasets/pull/5120) that fixes this issue. Turns out that the wrapping logic for the inner loop was slightly incorrect.", "Thank you!" ]
2022-10-14T16:12:30
2022-10-23T12:58:41
2022-10-23T12:58:41
NONE
null
null
null
## Describe the bug Progress bars after transformative operations turn red and never complete to 100% ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('rotten_tomatoes', split='test').filter(lambda o: True) ``` ## Expected results The progress bar should reach 100% and be green ## Actual results The progress bar turns red and never completes to 100% ## Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.14 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5117/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5114/comments
https://api.github.com/repos/huggingface/datasets/issues/5114/events
https://github.com/huggingface/datasets/issues/5114
1,409,236,738
I_kwDODunzps5T_z8C
5,114
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.", "What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```" ]
2022-10-14T11:54:53
2022-11-19T07:13:10
null
CONTRIBUTOR
null
null
null
## Describe the bug The function load_from_disk fails when using a remote filesystem because of a wrong temporary path generation in the load_from_disk method of arrow_dataset.py: ```python if is_remote_filesystem(fs): src_dataset_path = extract_path_from_uri(dataset_path) dataset_path = Dataset._build_local_temp_path(src_dataset_path) fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True) ``` If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train` Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice we have train twice) Instead of downloading the remote folder we should be downloading all the files in the folder for the path to be right: ```python fs.download(os.path.join(src_dataset_path,*), dataset_path.as_posix(), recursive=True) ``` ## Steps to reproduce the bug ```python fs = gcsfs.GCSFileSystem(**storage_options) dataset = load_from_disk("common_voice_processed") # loading local dataset previously saved locally, works fine dataset.save_to_disk(output_dir, fs=fs) #works fine dataset = load_from_disk(output_dir, fs=fs) # crashes ``` ## Expected results The dataset is loaded ## Actual results FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-2.6.1.dev0 - Platform: mac os monterey 12.5.1 - Python version: 3.8.13 - PyArrow version:pyarrow==9.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5114/timeline
null
null
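Building on the comment above, one interim way to avoid the duplicated `train/train` path is to download the remote folder manually into the parent of the target directory and then load it locally. The sketch below is untested, uses a placeholder bucket path and default credentials, and simply mirrors the `fs.download(..., parent, recursive=True)` suggestion; the proper fix belongs inside `load_from_disk` itself.

```python
import tempfile
from pathlib import Path

import gcsfs
from datasets import load_from_disk

fs = gcsfs.GCSFileSystem()                 # assumes default GCS credentials
remote_path = "my-bucket/mydataset/train"  # hypothetical dataset path

local_dir = Path(tempfile.mkdtemp()) / "train"
# Downloading into the parent keeps the folder name from being repeated,
# so state.json ends up at .../train/state.json instead of .../train/train/.
fs.download(remote_path, local_dir.parent.as_posix(), recursive=True)

ds = load_from_disk(local_dir.as_posix())
print(ds)
```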
https://api.github.com/repos/huggingface/datasets/issues/5112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5112/comments
https://api.github.com/repos/huggingface/datasets/issues/5112/events
https://github.com/huggingface/datasets/issues/5112
1,409,143,409
I_kwDODunzps5T_dJx
5,112
Bug with filtered indices
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The issue is here:\r\nhttps://github.com/huggingface/datasets/blob/3ad9644b9a2e4558dd1d0f1e43c67658674e6228/src/datasets/arrow_dataset.py#L2964", "@PartiallyTyped, @Muennighoff: the issue is fixed.\r\n\r\nWe are planning to make a patch release today.", "Thanks a lot for the swift response! For a brief moment yesterday I thought I had gone insane 🤣On 14 Oct 2022, at 15:44, Albert Villanova del Moral ***@***.***> wrote:\n@PartiallyTyped, @Muennighoff: the issue is fixed.\nWe are planning to make a patch release today.\n\n—Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you were mentioned.Message ID: ***@***.***>" ]
2022-10-14T10:35:47
2022-10-14T13:55:03
2022-10-14T12:11:45
MEMBER
null
null
null
## Describe the bug As reported by @PartiallyTyped (and by @Muennighoff): - https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524 There is an issue with the indices of a filtered dataset. ## Steps to reproduce the bug ```python ds = Dataset.from_dict({"num": [0, 1, 2, 3]}) ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2) assert all(item["num"] % 2 == 0 for item in ds) ``` ## Expected results The indices of the filtered dataset should correspond only to the examples that pass the filter. ## Actual results Indices of examples that do not pass the filter are included in the filtered dataset indices. ## Preliminary investigation It seems to be a bug introduced by: - #5030
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5112/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5112/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5111/comments
https://api.github.com/repos/huggingface/datasets/issues/5111/events
https://github.com/huggingface/datasets/issues/5111
1,408,143,170
I_kwDODunzps5T7o9C
5,111
map and filter not working properly in multiprocessing with the new release 2.6.0
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Same bug exists with `num_proc=1` on colab. `3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]` ", "Thanks for reporting, @loubnabnl and for the additional information, @PartiallyTyped.\r\n\r\nHowever, I'm not able to reproduce this issue, neither locally nor on Colab:\r\n```\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\n```\r\nCC: @huggingface/datasets can anybody reproduce this?", "This is the minimum reproducible example. I ran this on the premium instances of colab.\r\n\r\n```\r\n# !pip install datasets\r\nimport datasets\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"copenlu/answerable_tydiqa\").filter(\"english\".__eq__, input_columns=\"language\")\r\nassert all(map(\"english\".__eq__, ds[\"train\"][\"language\"]))\r\n```\r\n\r\nIn my case, the number of samples is correct, however, the samples selected when indexing are wrong.\r\n\r\n```python\r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 990\r\n })\r\n train: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 7389\r\n })\r\n})\r\n```\r\n\r\nThe number of rows is indeed correct, and i have checked it with a version that works.", "I can reproduce the issue on my mac too \r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: macOS-12.2.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3\r\n```\r\nBut not on Colab with python 3.7, maybe related to python version? (didn't manage to install python 3.9)\r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.14\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.3.5\r\n```", "I have the same issue, here's a simple notebook to reproduce: https://colab.research.google.com/drive/1Lvo9fg5DSpGUUgXW5JAutZ0bFsR-WV--?usp=sharing\r\n\r\n\r\n\r\n", "I think there are 2 different issues here:\r\n- the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? 
To be checked.\r\n- the issue reported by @PartiallyTyped is related just to \"filter\" (without multiprocessing) and I can reproduce it.", "Could you create another issue for the @PartiallyTyped one please ?\r\n\r\nRegarding the OP issue, I also tried on colab or locally on py3.7 or py3.10 but didn't reproduce", "I have created another issue for the one reported by @PartiallyTyped: \r\n- #5112 ", "I managed to reproduce your issue @loubnabnl on colab by upgrading pyarrow to 9.0.0 instead of 6.0.1", "I managed to have a _super_ minimal reproducible example:\r\n```python\r\n\r\nfrom datasets import Dataset, concatenate_datasets\r\n\r\nds = concatenate_datasets([Dataset.from_dict({\"a\": [i]}) for i in range(10)])\r\nds2 = ds.map(lambda _: {}, batched=True)\r\nassert list(ds2) == list(ds)\r\n```\r\n(filter uses a batched `map` under the hood)", "> the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n\r\nSo finally it was related to PyArrow version! :+1: ", "Doing a patch release asap :)", "Did the patch release yesterday, lmk if you still have issues", "It works now, thanks!\r\n" ]
2022-10-13T17:00:55
2022-10-17T08:26:59
2022-10-14T14:59:59
NONE
null
null
null
## Describe the bug When mapping is used on a dataset with more than one process, there is a weird behavior when trying to use `filter` , it's like only the samples from one worker are retrieved, one needs to specify the same `num_proc` in filter for it to work properly. This doesn't happen with `datasets` version 2.5.2 In the code below the data is filtered differently when we increase `num_proc` used in `map` although the datsets before and after mapping have identical elements. ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset def preprocess(example): return example ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)]) ds1 = ds.map(preprocess, num_proc=2) ds2 = ds.map(preprocess) # the datasets elements are the same for i in range(len(ds1)): assert ds1[i]==ds2[i] print(f'Target column before filtering {ds1["autogenerated"]}') print(f'Target column before filtering {ds2["autogenerated"]}') print(f"datasets version {datasets.__version__}") ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"]) ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"]) # all elements in Target column are false so they should all be kept, but for ds2 only the first 5=num_samples/num_proc are kept print(ds_filtered_1) print(ds_filtered_2) ``` ``` Target column before filtering [False, False, False, False, False, False, False, False, False, False] Target column before filtering [False, False, False, False, False, False, False, False, False, False] Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 5 }) Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 10 }) ``` ## Expected results Increasing `num_proc` in mapping shouldn't alter filtering. With the previous version 2.5.2 this doesn't happen ## Actual results Filtering doesn't work properly when we increase `num_proc` in mapping but not when calling `filter` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.0 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5111/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5111/timeline
null
completed
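Until upgrading to the patch release mentioned in the comments, the workaround the reporter describes (passing the same `num_proc` to `filter` as was used for `map`) looks like the sketch below, reusing the dataset and column from the reproduction in the issue body.

```python
from datasets import load_dataset

ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select(range(10))

# Map with multiple processes, as in the reproduction.
ds_mapped = ds.map(lambda example: example, num_proc=2)

# Workaround noted by the reporter: use the same num_proc for filter so all
# shards are taken into account; the proper fix is the patched datasets release.
ds_filtered = ds_mapped.filter(lambda x: not x["autogenerated"], num_proc=2)
print(ds_filtered)
```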
https://api.github.com/repos/huggingface/datasets/issues/5109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5109/comments
https://api.github.com/repos/huggingface/datasets/issues/5109/events
https://github.com/huggingface/datasets/issues/5109
1,407,434,706
I_kwDODunzps5T47_S
5,109
Map caching not working for some class methods
{ "login": "Mouhanedg56", "id": 23029765, "node_id": "MDQ6VXNlcjIzMDI5NzY1", "avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mouhanedg56", "html_url": "https://github.com/Mouhanedg56", "followers_url": "https://api.github.com/users/Mouhanedg56/followers", "following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}", "gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions", "organizations_url": "https://api.github.com/users/Mouhanedg56/orgs", "repos_url": "https://api.github.com/users/Mouhanedg56/repos", "events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}", "received_events_url": "https://api.github.com/users/Mouhanedg56/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "The hash used for caching is computed by pickling recursively the function passed to `map`. Maybe some objects don't have the same hash across sessions. In particular you can check the hash of your model using\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nobj = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nprint(Hasher.hash(obj))\r\n```\r\n\r\nYou can find mode info here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument", "Indeed, the hash is changing. The `dumps` function serialize the model object in different ways because the model object is not deterministic\r\n```python\r\nfrom datasets.utils.py_utils import dumps\r\nobj1 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nobj2 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\n\r\ndumps(bert) == dumps(bert2). # False\r\n```\r\n\r\n> You can find mode info here: https://huggingface.co/docs/datasets/about_cache\r\n> \r\n> You can also provide your own unique hash in map if you want, with the new_fingerprint argument\r\n\r\n\r\nThanks, the doc is so helpful. Indeed, we can fix the hash and get cache hit using `new_fingerprint`. Closing the issue." ]
2022-10-13T09:12:58
2022-10-17T10:38:45
2022-10-17T10:38:45
CONTRIBUTOR
null
null
null
## Describe the bug The cache loading is not working as expected for some class methods with a model stored in an attribute. The new fingerprint for `_map_single` is not the same at each run. The hasher generate a different hash for the class method. This comes from `dumps` function in `datasets.utils.py_utils` which generates a different dump at each run. ## Steps to reproduce the bug ```python from datasets import load_dataset from transformers import AutoConfig, AutoModel, AutoTokenizer dataset = load_dataset("ethos", "binary") BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2" class Object: def __init__(self): config = AutoConfig.from_pretrained(BASE_MODELNAME) self.bert = AutoModel.from_config(config=config, add_pooling_layer=False) self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME) def tokenize(self, examples): tokenized_texts = self.tok( examples["text"], padding="max_length", truncation=True, max_length=256, ) return tokenized_texts instance = Object() result = dict() for phase in ["train"]: result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2) ``` ## Expected results Load cache instead of recompute result. ## Actual results Result recomputed from scratch at each run. The cache works fine when deleting `bert` attribute. ## Environment info - `datasets` version: 2.5.3.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.13 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5109/timeline
null
completed
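As the comments point out, the cache misses come from the model attribute pickling differently on every run, which changes the fingerprint `map` computes. Passing an explicit `new_fingerprint` sidesteps the non-deterministic hash. The sketch below adapts the reproduction above; the fingerprint string is an arbitrary, user-chosen stable identifier.

```python
from datasets import load_dataset
from datasets.fingerprint import Hasher
from transformers import AutoConfig, AutoModel, AutoTokenizer

BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2"


class Object:
    def __init__(self):
        config = AutoConfig.from_pretrained(BASE_MODELNAME)
        self.bert = AutoModel.from_config(config=config, add_pooling_layer=False)
        self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME)

    def tokenize(self, examples):
        return self.tok(examples["text"], padding="max_length", truncation=True, max_length=256)


instance = Object()
# The hash of the bound method changes between runs because of self.bert.
print(Hasher.hash(instance.tokenize))

dataset = load_dataset("ethos", "binary")
train = dataset["train"].map(
    instance.tokenize,
    batched=True,
    load_from_cache_file=True,
    new_fingerprint="ethos-train-tokenized-minilm-256",  # stable, user-chosen
)
```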
https://api.github.com/repos/huggingface/datasets/issues/5105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5105/comments
https://api.github.com/repos/huggingface/datasets/issues/5105/events
https://github.com/huggingface/datasets/issues/5105
1,406,078,357
I_kwDODunzps5Tzw2V
5,105
Specifying an existing folder in download_and_prepare deletes everything in it
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "cc @lhoestq ", "Thanks for reporting, @cakiki.\r\n\r\nI would say the deletion of the dir is an expected behavior though...", "`dask.to_parquet` has an \"overwrite\" parameter and default is `False`, we could also have something similar", "Thank you both for your feedback!\r\n\r\n@albertvillanova I think I might have have the wrong mental model of what the function was meant to do. I thought it would be an API similar to the pandas `to_XX` write methods (Like the one @lhoestq mentions) so I just assumed it would download the dataframe to whichever folder I specififed (`\"./\"` in my case) so I could load it into a dask dataframe. I absolutely did not expect it to delete everything in my local directory, including the script where I called it from :smile: \r\n\r\nI think Quentin's proposed solution sounds like a reasonable feature!", "actually there's already a `download_mode` parameter that defaults to `REUSE_DATASET_IF_EXISTS` - so I guess it's just a matter of not deleting files unrelated to the dataset, and to overwrite existing dataset files if the download mode is `REUSE_CACHE_IF_EXISTS` or `FORCE_REDOWNLOAD`" ]
2022-10-12T11:53:33
2022-10-20T11:53:59
null
CONTRIBUTOR
null
null
null
## Describe the bug The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current dir but also leads to **another bug** whose traceback is the following: ``` Traceback (most recent call last) Input In [11], in <cell line: 1>() ----> 1 rotten_tomatoes_builder.download_and_prepare(output_dir=".", max_shard_size="200MB", file_format="parquet") File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:818, in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) File /usr/lib/python3.9/contextlib.py:124, in _GeneratorContextManager.__exit__(self, type, value, traceback) 122 if type is None: 123 try: --> 124 next(self.gen) 125 except StopIteration: 126 return False File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:760, in incomplete_dir(dirname) File /usr/lib/python3.9/shutil.py:722, in rmtree(path, ignore_errors, onerror) 720 os.rmdir(path) 721 except OSError: --> 722 onerror(os.rmdir, path, sys.exc_info()) 723 else: 724 try: 725 # symlinks to directories are forbidden, see bug #1669 File /usr/lib/python3.9/shutil.py:720, in rmtree(path, ignore_errors, onerror) 718 _rmtree_safe_fd(fd, path, onerror) 719 try: --> 720 os.rmdir(path) 721 except OSError: 722 onerror(os.rmdir, path, sys.exc_info()) OSError: [Errno 22] Invalid argument: '/home/christopher/BIGSCIENCE/.' ``` ## Steps to reproduce the bug ```python rotten_tomatoes_builder = load_dataset_builder("rotten_tomatoes") rotten_tomatoes_builder.download_and_prepare(output_dir="./test_folder", max_shard_size="200MB", file_format="parquet") ``` If `test_folder` contains any files they will all be deleted ## Expected results Either a warning that all files will be deleted, but preferably that they not be deleted at all. ## Actual results N/A ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5105/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5102/comments
https://api.github.com/repos/huggingface/datasets/issues/5102/events
https://github.com/huggingface/datasets/issues/5102
1,404,746,554
I_kwDODunzps5Turs6
5,102
Error in creating a dataset from a Python generator
{ "login": "yangxuhui", "id": 9004682, "node_id": "MDQ6VXNlcjkwMDQ2ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/9004682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangxuhui", "html_url": "https://github.com/yangxuhui", "followers_url": "https://api.github.com/users/yangxuhui/followers", "following_url": "https://api.github.com/users/yangxuhui/following{/other_user}", "gists_url": "https://api.github.com/users/yangxuhui/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangxuhui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangxuhui/subscriptions", "organizations_url": "https://api.github.com/users/yangxuhui/orgs", "repos_url": "https://api.github.com/users/yangxuhui/repos", "events_url": "https://api.github.com/users/yangxuhui/events{/privacy}", "received_events_url": "https://api.github.com/users/yangxuhui/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.", "Can I work on this one?" ]
2022-10-11T14:28:58
2022-10-12T11:31:56
2022-10-12T11:31:56
NONE
null
null
null
## Describe the bug In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in. ```Python >>> from datasets import Dataset >>> def my_gen(): ... for i in range(1, 4): ... yield {"a": i} >>> dataset = Dataset.from_generator(my_dict) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5102/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5100/comments
https://api.github.com/repos/huggingface/datasets/issues/5100/events
https://github.com/huggingface/datasets/issues/5100
1,404,458,586
I_kwDODunzps5TtlZa
5,100
datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method
{ "login": "jagochi", "id": 115545475, "node_id": "U_kgDOBuMVgw", "avatar_url": "https://avatars.githubusercontent.com/u/115545475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jagochi", "html_url": "https://github.com/jagochi", "followers_url": "https://api.github.com/users/jagochi/followers", "following_url": "https://api.github.com/users/jagochi/following{/other_user}", "gists_url": "https://api.github.com/users/jagochi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jagochi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jagochi/subscriptions", "organizations_url": "https://api.github.com/users/jagochi/orgs", "repos_url": "https://api.github.com/users/jagochi/repos", "events_url": "https://api.github.com/users/jagochi/events{/privacy}", "received_events_url": "https://api.github.com/users/jagochi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2022-10-11T11:16:31
2022-10-11T13:48:26
2022-10-11T13:48:26
NONE
null
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5100/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5099/comments
https://api.github.com/repos/huggingface/datasets/issues/5099/events
https://github.com/huggingface/datasets/issues/5099
1,404,370,191
I_kwDODunzps5TtP0P
5,099
datasets doesn't support # in data paths
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
[ "`datasets` doesn't seem to urlencode the directory names here\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/file_utils.py#L109-L111\r\n\r\nfor example we should have\r\n```python\r\nfrom datasets.utils.file_utils import hf_hub_url\r\n\r\nurl = hf_hub_url(\"loubnabnl/bigcode_csharp\", \"data/c#/data_0003.jsonl\")\r\nprint(url)\r\n# Currently returns\r\n# https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c#/data_0003.jsonl\r\n# while it should be \r\n# https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c%23/data_0003.jsonl\r\n```", "I'll work on this :)", "@loubnabnl The dataset you linked in the description of the bug does not work and returns a 404. Where can I find the dataset to reproduce the bug?", "I think you can create a dataset repository on the Hub with a dummy file containing a `#`", "Ah sorry it was private I just made it public, I can also help with this if needed", "@lhoestq Should I url encode also repo_id and revision parameters? I'm not sure what are the valid characters there.\r\n\r\nPersonally, I would be cautious and only url encode the path parameter.", "These are possible solutions (assuming `from urllib.parse import quote`):\r\n\r\n1) url encode only the path parameter:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=repo_id, path=quote(path), revision=revision)\r\n```\r\n2) url encode all parameters:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=quote(repo_id), path=quote(path), revision=quote(revision))\r\n```\r\n3) url encode the whole url:\r\n```\r\n# src/datasets/config.py\r\nHUB_DATASETS_PATH = \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nHUB_DATASETS_URL = HF_ENDPOINT + HUB_DATASETS_PATH\r\n```\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HF_ENDPOINT + quote(config.HUB_DATASETS_PATH.format(repo_id=repo_id, path=path, revision=revision))\r\n```", "repo_id can only contain alphanumeric characters and _- so it doesn't need to be encoded.\r\n\r\nHowever I agree it's a good idea to also apply `quote` to the revision as well as in 2. !", "Should be fixed by https://github.com/huggingface/datasets/issues/5099 - we'll do a release later today" ]
2022-10-11T10:05:32
2022-10-13T13:14:20
2022-10-13T13:14:20
NONE
null
null
null
## Describe the bug Paths of dataset files containing the `#` symbol aren't read correctly. ## Steps to reproduce the bug The data in the folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data is loaded properly. ```python ds = load_dataset('loubnabnl/bigcode_csharp', split="train", data_files=["data/c#/*"]) ``` ``` FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/27a3166cff4bb18e11919cafa6f169c0f57483de/data/c#/data_0003.jsonl ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3 cc @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5099/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5098/comments
https://api.github.com/repos/huggingface/datasets/issues/5098/events
https://github.com/huggingface/datasets/issues/5098
1,404,058,518
I_kwDODunzps5TsDuW
5,098
Class label error when loading symbolic links using imagefolder
{ "login": "horizon86", "id": 49552732, "node_id": "MDQ6VXNlcjQ5NTUyNzMy", "avatar_url": "https://avatars.githubusercontent.com/u/49552732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/horizon86", "html_url": "https://github.com/horizon86", "followers_url": "https://api.github.com/users/horizon86/followers", "following_url": "https://api.github.com/users/horizon86/following{/other_user}", "gists_url": "https://api.github.com/users/horizon86/gists{/gist_id}", "starred_url": "https://api.github.com/users/horizon86/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/horizon86/subscriptions", "organizations_url": "https://api.github.com/users/horizon86/orgs", "repos_url": "https://api.github.com/users/horizon86/repos", "events_url": "https://api.github.com/users/horizon86/events{/privacy}", "received_events_url": "https://api.github.com/users/horizon86/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
[ "It can be solved temporarily by remove `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278", "Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.", "> Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.\r\n\r\nThanks for your reply!" ]
2022-10-11T06:10:58
2022-11-14T14:40:20
2022-11-14T14:40:20
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Like this: #4015 When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Can you give an option to decide whether to enable symbolic link tracking? This is inconsistent with the `torchvision.datasets.ImageFolder` behavior. For example: ![image](https://user-images.githubusercontent.com/49552732/195008591-3cce644e-aabe-4f39-90b9-832861cadb3d.png) ![image](https://user-images.githubusercontent.com/49552732/195008841-0b0c2289-eb7f-411a-977b-37426f23a277.png) It uses `others` (in the green circle) as the class label instead of `abnormal`; I wish `load_dataset` would not use the real file's parent folder as the label. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context about the feature request here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5098/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5097/comments
https://api.github.com/repos/huggingface/datasets/issues/5097/events
https://github.com/huggingface/datasets/issues/5097
1,403,679,353
I_kwDODunzps5TqnJ5
5,097
Fatal error with pyarrow/libarrow.so
{ "login": "catalys1", "id": 11340846, "node_id": "MDQ6VXNlcjExMzQwODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11340846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/catalys1", "html_url": "https://github.com/catalys1", "followers_url": "https://api.github.com/users/catalys1/followers", "following_url": "https://api.github.com/users/catalys1/following{/other_user}", "gists_url": "https://api.github.com/users/catalys1/gists{/gist_id}", "starred_url": "https://api.github.com/users/catalys1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/catalys1/subscriptions", "organizations_url": "https://api.github.com/users/catalys1/orgs", "repos_url": "https://api.github.com/users/catalys1/repos", "events_url": "https://api.github.com/users/catalys1/events{/privacy}", "received_events_url": "https://api.github.com/users/catalys1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting, @catalys1.\r\n\r\nThis seems a duplicate of:\r\n- #3310 \r\n\r\nThe source of the problem is in PyArrow:\r\n- [ARROW-15141: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-15141)\r\n- [ARROW-17501: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-17501)\r\n\r\nThe bug in their dependency is still unresolved:\r\n- https://github.com/aws/aws-sdk-cpp/issues/1809\r\n\r\nApparently, the `aws-sdk-cpp` PyArrow dependency needs to be pinned at version `1.8.186` if using conda. Have you updated it after installing PyArrow?\r\n```shell\r\nconda list aws-sdk-cpp\r\n```\r\n\r\nMaybe you should try to downgrade it to that version:\r\n```shell\r\nconda install -c conda-forge aws-sdk-cpp=1.8.186\r\n```" ]
2022-10-10T20:29:04
2022-10-11T06:56:01
2022-10-11T06:56:00
NONE
null
null
null
## Describe the bug When using datasets, at the very end of my jobs the program crashes (see trace below). It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error. ## Steps to reproduce the bug This is sufficient to reproduce the problem: ```bash python -c "import datasets" ``` ## Expected results Program should run to completion without an error. ## Actual results ```bash Fatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS Exiting Application ################################################################################ Stack trace: ################################################################################ /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x150dff547f06] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x150dff53f8e5] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x150dff464e09] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x150dff462948] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x150dff41db46] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x150dfee8246a] /lib64/libc.so.6(+0x39b0c) [0x150e15eadb0c] /lib64/libc.so.6(on_exit+0) [0x150e15eadc40] /u/user/miniconda3/envs/env/bin/python(+0x28db18) [0x560ae370eb18] /u/user/miniconda3/envs/env/bin/python(+0x28db4b) [0x560ae370eb4b] /u/user/miniconda3/envs/env/bin/python(+0x28db90) [0x560ae370eb90] /u/user/miniconda3/envs/env/bin/python(_PyRun_SimpleFileObject+0x1e6) [0x560ae37123e6] /u/user/miniconda3/envs/env/bin/python(_PyRun_AnyFileObject+0x44) [0x560ae37124c4] /u/user/miniconda3/envs/env/bin/python(Py_RunMain+0x35d) [0x560ae37135bd] /u/user/miniconda3/envs/env/bin/python(Py_BytesMain+0x39) [0x560ae37137d9] /lib64/libc.so.6(__libc_start_main+0xf3) [0x150e15e97493] /u/user/miniconda3/envs/env/bin/python(+0x2125d4) [0x560ae36935d4] Aborted (core dumped) ``` ## Environment info - `datasets` version: 2.5.1 - Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5097/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5096/comments
https://api.github.com/repos/huggingface/datasets/issues/5096/events
https://github.com/huggingface/datasets/issues/5096
1,403,379,816
I_kwDODunzps5TpeBo
5,096
Transfer some canonical datasets under an organization namespace
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 2.01MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default (download: 411 bytes, generated: 385 bytes, post-processed: Unknown size, total: 796 bytes) to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 293kB/s]\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 304.16it/s]\r\nOut[1]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"dummy-canonical-org/dummy_canonical_dataset\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 1.57MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 362.48it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n```", "Cool ! 🚀 ", "Maybe we should be a bit more proactive with these transfers. There are only ≈70 canonical models, so reaching that number with datasets would be great, too. 
It's not easy considering the current number of ≈750 canonical datasets, but doable.\r\n\r\nFor instance, it shouldn't be too hard to transfer these datasets (partial list; all of them have more than > 1k downloads):\r\n\r\n<details>\r\n\r\n<summary> Datasets to transfer </summary>\r\n\r\n```\r\nquickdraw -> google\r\nopenai_humaneval -> openai\r\nc4 -> allenai/c4 (the canonical version reads data from the org version)\r\nmbpp -> google (ask jaaustin (author) where to transfer the dataset)\r\ncompetition_math -> hendrycks (author)\r\ngsm8k -> openai\r\nai2_arc -> allenai\r\nimdb -> stanfordai\r\ngreek_legal_code -> chrispap (author)\r\nspider -> Yale-LILY\r\nsquad and squad_v2 -> rajpurkarlab (or rajpurkar, a member of the org and one of the authors)\r\ncppe-5 -> rishitdagli\r\nnews_commentary -> Helsinki-NLP\r\njfleg -> keisks (author)\r\npubmed_qa -> qiaojin (author)\r\nmedmcqa -> infinitylogesh (author)\r\ncifar10 and cifar100 -> UniversityofToronto\r\ncc100 -> gwenzek (author)\r\nasset -> facebook\r\nblbooks -> BritishLibraryLabs\r\ncapes -> FLSRDS (maybe the author?)\r\ncc_news -> fhamborg (author)\r\nclue -> CLUE benchmark\r\ncoqa -> stanfordnlp\r\nlambada -> germank (author)\r\nlibrispeech_asr -> openslr\r\ndrop -> allenai\r\nduorc -> salesforce (ask amritasaha87 (author) where to transfer)\r\nglue -> nyu-mll ?\r\ngo_emotions -> google\r\ncommonsense_qa -> tau\r\ndbpedia_14 -> JensLehmann (author?)\r\ndiscofuse -> google\r\nmc4 -> allenai/c4\r\nopenbookqa -> allenai\r\nropes -> allene\r\ntrivia_qa -> mandarjoshi (author)\r\nwikiann -> afshinrahimi (author)\r\nxtreme -> google\r\nxscr -> INK-USC\r\nyelp_review_full -> Yelp\r\ntruthful_qa -> jacobhilton22 (author)\r\nbigbench -> google\r\nxnli -> facebook\r\nsciq -> allenai\r\nsst2 -> stanfordnlp\r\nblimp -> alexwarstadt (author)\r\ntweet_eval -> cardiffnlp\r\nbeans -> AI-Lab-Makerere\r\nlex_glue -> coastalcph\r\namericas_nli -> abteen (author)\r\nopus_euconst -> tiedeman (author)\r\nmedical_questions_pairs -> curaihealth\r\nweb_questions -> joberant (author)\r\nanli -> facebook\r\nrace -> CarnegieMellonCS\r\nklue -> klue\r\nwino_bias -> uclanlp\r\nwiki_qa -> microsoft\r\nxcopa -> cambridgeltl\r\nindic_glue -> ai4bharat\r\nboolq -> google\r\nadversarial_qa -> mbartolo (author)\r\nnq_open -> google\r\nsnli -> stanfordnlp\r\nstsb_multi_mt -> PhilipMay (author)\r\nmulti_nli -> sleepinyourhat (author)\r\npaws -> google\r\npaws-x -> google\r\nms_marco - microsoft\r\nxquad -> deepmind\r\nnarrativeqa -> deepmind\r\nkilt_tasks -> facebook\r\nhate_speech_offensive -> tdavidson (author)\r\nwiki40b -> google\r\ncovost2 -> facebook\r\ncommon_gen -> INKLAB\r\nmulti_eurlex -> kiddothe2b (author)\r\nexams -> mhardalov (author)\r\ntiny_shakespeare -> karpathy (author)\r\nblbooksgenre -> BritishLibraryLabs ?\r\nfood101 -> ethz ?\r\nscitail -> allenai\r\nbillsum -> FiscalNote\r\nimppres -> facebook\r\nquartz -> allenai\r\nqasc -> allenai\r\nquail -> textmachinelab\r\nwiki_lingua -> esdurmus\r\ncos_e -> salesforce ?\r\ncivil_comments -> google ? 
(create a “jigsaw” org) \r\nxquad_r -> google\r\nwikitext-> metamind (or salesforce)\r\n\r\n// deprecate c4 and mc4 in favor of allenai/c4 (add a dataset script to the org version to make it easier to use?)\r\n```\r\n</details>\r\n\r\nAlso, a space that allows users to claim the existing canonical datasets (for themselves or their organizations) could be nice.\r\n\r\nWDYT?", "Next week I can take care of some of them :) In most cases we just need to send an email to ask them if they're ok with it.\r\nLet's coordinate on slack ?", "Yup, sounds good to me!", "I can also continuing working on this if we agree this has become a priority now.", "cool stuff! \r\n\r\nthis morning on my side i moved huggingface.co/ctrl (a not very used model) to its rightful entity", "As a previous step before transferring the datasets, we decided we should convert them to Parquet, so that the viewer does not stop working (the viewer does not support datasets with scripts). \r\n\r\nDatasets converted to Parquet:\r\n- [x] adversarial_qa\r\n- [x] ai2_arc\r\n- [x] americas_nli\r\n- [x] anli\r\n- [x] asset\r\n- [x] beans\r\n- [ ] bigbench\r\n- [x] billsum\r\n- [ ] blbooks: it was already transferred to: TheBritishLibrary/blbooks\r\n- [ ] blbooksgenre: it was already transferred to: TheBritishLibrary/blbooksgenre\r\n- [x] blimp\r\n- [x] boolq\r\n- [ ] c4\r\n- [x] capes\r\n- [ ] cc100\r\n- [x] cc_news\r\n- [x] cifar10\r\n- [x] cifar100\r\n- [x] civil_comments\r\n- [x] clue\r\n- [x] common_gen\r\n- [x] commonsense_qa\r\n- [ ] competition_math: it was already transferred to: hendrycks/competition_math\r\n- [x] coqa\r\n- [x] cos_e\r\n- [ ] covost2: it requires manual download\r\n- [x] cppe-5\r\n- [x] dbpedia_14\r\n- [x] discofuse\r\n- [x] drop\r\n- [x] duorc\r\n- [x] exams\r\n- [x] food101\r\n- [x] glue\r\n- [x] go_emotions\r\n- [x] greek_legal_code\r\n- [x] gsm8k\r\n- [x] hate_speech_offensive\r\n- [x] imdb\r\n- [x] imppres\r\n- [x] indic_glue\r\n- [x] jfleg\r\n- [x] kilt_tasks\r\n- [x] klue\r\n- [x] lambada\r\n- [x] lex_glue\r\n- [ ] librispeech_asr\r\n- [x] mbpp\r\n- [ ] mc4\r\n- [x] medical_questions_pairs\r\n- [x] medmcqa\r\n- [x] ms_marco\r\n- [ ] multi_eurlex\r\n- [x] multi_nli\r\n- [ ] narrativeqa\r\n- [ ] news_commentary\r\n- [x] nq_open\r\n- [x] openai_humaneval\r\n- [x] openbookqa\r\n- [ ] opus_euconst\r\n- [x] paws\r\n- [x] paws-x\r\n- [x] pubmed_qa\r\n- [x] qasc\r\n- [x] quail\r\n- [x] quartz\r\n- [ ] quickdraw\r\n- [x] race\r\n- [x] ropes\r\n- [x] sciq\r\n- [x] scitail\r\n- [ ] snli\r\n- [x] spider\r\n- [x] squad\r\n- [x] squad_v2\r\n- [x] sst2\r\n- [x] stsb_multi_mt\r\n- [x] tiny_shakespeare\r\n- [x] trivia_qa\r\n- [x] truthful_qa\r\n- [x] tweet_eval\r\n- [x] web_questions\r\n- [ ] wiki40b\r\n- [x] wiki_lingua\r\n- [x] wiki_qa\r\n- [ ] wikiann\r\n- [x] wikitext\r\n- [x] wino_bias\r\n- [x] xcopa\r\n- [x] xcsr\r\n- [x] xnli\r\n- [x] xquad\r\n- [x] xquad_r\r\n- [ ] xtreme\r\n- [x] yelp_review_full\r\n", "For `c4` and `mc4` I was thinking of adding the corresponding configs to `allenai/c4` and redirect `c4` and `mc4` to `allenai/c4`. I'll open a PR on `allenai/c4` if it's good for you", "@davanstrien and @lhoestq, I have shared with you this spreadsheet: https://docs.google.com/spreadsheets/d/1GvNTd1UxmtTvEFOK-Eq6E3Str4FUWQuWZsEN0WVFirs/edit?usp=sharing\r\n\r\nThis way we can take datasets by batches to contact the authors and transfer to the organizations." ]
2022-10-10T15:44:31
2024-01-11T10:03:35
null
MEMBER
null
null
null
As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist). On the contrary, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and eventually delete it). First, we should test it using a dummy dataset/organization. TODO: - [x] Test with a dummy dataset - [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset - [x] Create dummy organization: https://huggingface.co/dummy-canonical-org - [x] Transfer dummy canonical dataset to dummy organization - [ ] Transfer datasets - [x] babi_qa => facebook - [x] blbooks => TheBritishLibrary/blbooks - [x] blbooksgenre => TheBritishLibrary/blbooksgenre - [x] common_gen => allenai - [x] commonsense_qa => tau - [x] competition_math => hendrycks/competition_math - [x] cord19 => allenai - [x] emotion => dair-ai - [ ] gem => GEM - [x] hendrycks_test => cais/mmlu - [x] indonlu => indonlp - [ ] multilingual_librispeech => facebook - It already exists "facebook/multilingual_librispeech" - [ ] oscar => oscar-corpus - [x] peer_read => allenai - [x] qasper => allenai - [x] reddit => webis/tldr-17 - [x] russian_super_glue => russiannlp - [x] rvl_cdip => aharley - [x] s2orc => allenai - [x] scicite => allenai - [x] scifact => allenai - [x] scitldr => allenai - [x] swiss_judgment_prediction => rcds - [x] the_pile => EleutherAI - [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt - [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist - [x] banking77 => PolyAI - [x] common_voice => mozilla-foundation - [x] german_legal_entity_recognition => elenanereiss - ... EDIT: the list above is continuously being updated
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5096/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5094/comments
https://api.github.com/repos/huggingface/datasets/issues/5094/events
https://github.com/huggingface/datasets/issues/5094
1,403,214,950
I_kwDODunzps5To1xm
5,094
Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock
{ "login": "RR-28023", "id": 36822895, "node_id": "MDQ6VXNlcjM2ODIyODk1", "avatar_url": "https://avatars.githubusercontent.com/u/36822895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RR-28023", "html_url": "https://github.com/RR-28023", "followers_url": "https://api.github.com/users/RR-28023/followers", "following_url": "https://api.github.com/users/RR-28023/following{/other_user}", "gists_url": "https://api.github.com/users/RR-28023/gists{/gist_id}", "starred_url": "https://api.github.com/users/RR-28023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RR-28023/subscriptions", "organizations_url": "https://api.github.com/users/RR-28023/orgs", "repos_url": "https://api.github.com/users/RR-28023/repos", "events_url": "https://api.github.com/users/RR-28023/events{/privacy}", "received_events_url": "https://api.github.com/users/RR-28023/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Could it be an Out of Memory issue that could have killed one of the processes ? can you check your memory ?", "Hi! I don't think it is a memory issue. I'm monitoring the main and spawn python processes and threads with `htop` and the memory does not peak. Besides, the example I've posted above should not be that demanding in terms of memory, right? (I have 32GB of RAM). ", "Indeed it should be fine. I couldn't reproduce the error though - I ran your script on my side and it works fine. What version of pytorch are you using ?", "Interesting.. I'm using `torch 1.12.1`", "I also tried on colab and it works fine 🤔 \r\nMaybe something is wrong with your installation of pytorch ?", "Oh actually I just saw that you're using python 3.9\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/4113\r\n\r\nWe'll fix that as soon as we can, in the meantime you can try to use use single process, or use an older version of python maybe ?", "I tried with python 3.7 and the issue persists. In collab, which also uses 3.7 I don't get the issue, so yes I guess is something on mu side... will post it here if I manage to fix it", "Hi! Which version of transformers are you using? I test the code on Colab (so python 3.7) with transformers 4.23.1, torch 1.12.1 and pyarrow 9.0.0 (also 6.x), it worked without stuck.", "Hi, I have the same problem in use **datasets.IterableDatasetDict.map()**\r\nmy pytorch is 2.0.0a0+gitc263bd4\r\nmy python is 3.8.16(default, Jun 12 2023, 17:37:21)\r\nwork on aarch64 in 16 node, each node with 4*nVidia-A100-40G\r\nevery node have 4 process execute code as ↓\r\n\r\n```\r\nfrom datasets import load_dataset, interleave_datasets, IterableDatasetDict, concatenate_datasets\r\n```\r\n...\r\n```\r\n model_args.cache_dir = '/home/scx/.cache'\r\n for dataset_name in data_args.datasets_name:\r\n train_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='train'\r\n ).select_columns('text')\r\n )\r\n valid_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='validation'\r\n ).select_columns('text')\r\n )\r\n train_dataset = interleave_datasets(train_datasets,\r\n probabilities=data_args.datasets_probabilities, \r\n seed=training_args.seed,\r\n stopping_strategy='all_exhausted')\r\n raw_datasets = IterableDatasetDict({'train': train_dataset, 'validation': valid_dataset})\r\n```\r\n...\r\n\r\n```\r\n tokenized_datasets = None\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n if not data_args.streaming:\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n remove_columns=column_names,\r\n )\r\n else:\r\n #TODO 20230722\r\n logger.info('{}: {}'.format(__file__, 'tokenized_datasets = raw_datasets.map('))\r\n logger.info('len raw_datasets: {}'.format(len(raw_datasets.items())))\r\n logger.info('raw_datasets:{}'.format(raw_datasets.items()))\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n batch_size=1000,\r\n remove_columns=column_names\r\n )\r\n logger.info('map ok!')\r\n logger.info('show train: 
{}'.format(next(iter(tokenized_datasets['train']))))\r\n logger.info('ok')\r\n # ### RAW CODE ###\r\n # tokenized_datasets = raw_datasets.map(\r\n # tokenize_function,\r\n # batched=True,\r\n # batch_size=1000,\r\n # remove_columns=column_names\r\n # )\r\n #TODO 20230722\r\n logger.info(\"Finish tokenization\")\r\n```\r\nthe output of my code is\r\n```\r\n07/22/2023 21:57:09 - INFO - __main__ - /demo/run_blue_space.py: tokenized_datasets = raw_datasets.map(\r\n07/22/2023 21:57:09 - INFO - __main__ - len raw_datasets: 2\r\n07/22/2023 21:57:09 - INFO - __main__ - raw_datasets:dict_items([('train', <datasets.iterable_dataset.IterableDataset object at 0x4005ee301190>), ('validation', <datasets.iterable_dataset.IterableDataset object at 0x4005ee5427f0>)])\r\n07/22/2023 21:57:09 - INFO - __main__ - map ok!\r\n07/22/2023 22:01:07 - INFO - __main__ - show train: {'input_ids': [14608, 26797, 31891, 34260, 12227, 33207, 5, 5, 31632, 26797, 31891, 34260, 12227, 33207, 7398, 28561, 31236, 31177, 31253, 33558, 31556, 31377, 72, 20732, 32383, 32295, 14027, 31178, 53, 61, 53, 55, 31189, 31146, 31321, 31235, 53, 61, 56, 58, 31189, 31145, 72, 53, 61, 58, 54, 31189, 54, 31245, 53, 60, 31224, 31896, 31178, 28561, 29331, 20732, 31888, 32637, 4426, 2824, 72, 53, 61, 60, 55, 31189, 53, 54, 31245, 53, 31224, 31896, 31178, 28561, 29331, 26137, 20732, 4426, 2824, 73, 54, 52, 52, 52, 31189, 61, 31245, 59, 31224, 31896, 31178, 29331, 28561, 20732, 4426, 2824, 73, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n07/22/2023 22:01:07 - INFO - __main__ - ok\r\n```\r\n\r\n", "@bio-punk `IterableDatasetDict.map` does not support multiprocessing (only `DatasetDict.map` and `Dataset.map` do), so please open a new issue as this doesn't seem to be related to the original issue. ", "Closing as this issue doesn't seem to be related to `datasets`." ]
2022-10-10T13:50:56
2023-07-24T15:29:13
2023-07-24T15:29:13
NONE
null
null
null
## Describe the bug There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datasets/blob/1b935dab9d2f171a8c6294269421fe967eb55e34/src/datasets/arrow_dataset.py#L2663) go into wait mode forever. ## Steps to reproduce the bug The below code goes into deadlock when `NUMBER_OF_PROCESSES` is greater than one. ```python NUMBER_OF_PROCESSES = 2 from transformers import AutoTokenizer, AutoModel from datasets import load_dataset dataset = load_dataset("glue", "mrpc", split="train") tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2") model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2") model.to("cpu") def cls_pooling(model_output): return model_output.last_hidden_state[:, 0] def generate_embeddings_batched(examples): sentences_batch = list(examples['sentence1']) encoded_input = tokenizer( sentences_batch, padding=True, truncation=True, return_tensors="pt" ) encoded_input = {k: v.to("cpu") for k, v in encoded_input.items()} model_output = model(**encoded_input) embeddings = cls_pooling(model_output) examples['embeddings'] = embeddings.detach().cpu().numpy() # 64, 384 return examples embeddings_dataset = dataset.map( generate_embeddings_batched, batched=True, batch_size=10, num_proc=NUMBER_OF_PROCESSES ) ``` While debugging it I've seen that it gets "stuck" when calling `torch.nn.Embedding.forward` but some testing shows that the same happens with other functions from `torch.nn`. ## Environment info - Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.31 - Python version: 3.9.14 - PyArrow version: 9.0.0 - Pandas version: 1.5.0 Not sure if this is a HF problem, a PyTorch problem or something I'm doing wrong.. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5094/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5093/comments
https://api.github.com/repos/huggingface/datasets/issues/5093/events
https://github.com/huggingface/datasets/issues/5093
1,402,939,660
I_kwDODunzps5TnykM
5,093
Mismatch between tutorial and docs
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).", "Can I work on this?", "Fixed in https://github.com/huggingface/datasets/pull/5095" ]
2022-10-10T10:23:53
2022-10-10T17:51:15
2022-10-10T17:51:14
MEMBER
null
null
null
## Describe the bug In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor to work. ## Steps to reproduce the bug MWE: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") from datasets import load_dataset dataset = load_dataset("lhoestq/demo1", split="train") dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt") ``` ## Expected results return_tensors to be a valid kwarg :smiley: ## Actual results ```python >> TypeError: map() got an unexpected keyword argument 'return_tensors' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5093/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5090/comments
https://api.github.com/repos/huggingface/datasets/issues/5090/events
https://github.com/huggingface/datasets/issues/5090
1,401,102,407
I_kwDODunzps5TgyBH
5,090
Review sync issues from GitHub to Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Nice!!" ]
2022-10-07T12:31:56
2022-10-08T07:07:36
2022-10-08T07:07:36
MEMBER
null
null
null
## Describe the bug We have discovered that sometimes there were sync issues between GitHub and Hub datasets, after a merge commit to main branch. For example: - this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b - was not properly synced with the Hub: https://github.com/huggingface/datasets/actions/runs/3002495269/jobs/4819769684 ``` [main 9e641de] Add Papers with Code ID to scifact dataset (#4941) Author: Albert Villanova del Moral <[email protected]> 1 file changed, 42 insertions(+), 14 deletions(-) push failed ! GitCommandError(['git', 'push'], 1, b'remote: ---------------------------------------------------------- \nremote: Sorry, your push was rejected during YAML metadata verification: \nremote: - Error: "license" does not match any of the allowed types \nremote: ---------------------------------------------------------- \nremote: Please find the documentation at: \nremote: https://huggingface.co/docs/hub/models-cards#model-card-metadata \nremote: ---------------------------------------------------------- \nTo [https://huggingface.co/datasets/scifact.git\n](https://huggingface.co/datasets/scifact.git/n) ! [remote rejected] main -> main (pre-receive hook declined)\nerror: failed to push some refs to \'[https://huggingface.co/datasets/scifact.git\](https://huggingface.co/datasets/scifact.git/)'', b'') ``` We are reviewing sync issues in previous commits to recover them and repushing to the Hub. TODO: Review - [x] #4941 - scifact - [x] #4931 - scifact - [x] #4753 - wikipedia - [x] #4554 - wmt17, wmt19, wmt_t2t - Fixed with "Release 2.4.0" commit: https://github.com/huggingface/datasets/commit/401d4c4f9b9594cb6527c599c0e7a72ce1a0ea49 - https://huggingface.co/datasets/wmt17/commit/5c0afa83fbbd3508ff7627c07f1b27756d1379ea - https://huggingface.co/datasets/wmt19/commit/b8ad5bf1960208a376a0ab20bc8eac9638f7b400 - https://huggingface.co/datasets/wmt_t2t/commit/b6d67191804dd0933476fede36754a436b48d1fc - [x] #4607 - [x] #4416 - lccc - Fixed with "Release 2.3.0" commit: https://huggingface.co/datasets/lccc/commit/8b1f8cf425b5653a0a4357a53205aac82ce038d1 - [x] #4367
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5090/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5089/comments
https://api.github.com/repos/huggingface/datasets/issues/5089/events
https://github.com/huggingface/datasets/issues/5089
1,400,788,486
I_kwDODunzps5TflYG
5,089
Resume failed process
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2022-10-07T08:07:03
2022-10-07T08:07:03
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress. **Describe the solution you'd like** It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart where it left off. **Describe alternatives you've considered** Doing processing outside of `datasets`, by writing the dataset to json files and building a restart mechanism myself. **Additional context** N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5089/timeline
null
null
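The workaround mentioned above under "alternatives you've considered" can be sketched with `datasets` itself: process the dataset shard by shard and persist each finished shard, so a crashed run only repeats the shards that never completed. This is a user-side pattern, not the requested built-in feature; the dataset name, shard count, cache directory and `process` function below are placeholders for illustration.

```python
import os

from datasets import concatenate_datasets, load_dataset, load_from_disk

NUM_SHARDS = 8                  # arbitrary; smaller shards mean less lost work per crash
CACHE_DIR = "processed_shards"  # hypothetical directory for partial results

def process(batch):
    # Placeholder for the real (possibly crash-prone) transformation.
    batch["text"] = [t.lower() for t in batch["text"]]
    return batch

ds = load_dataset("imdb", split="train")

for i in range(NUM_SHARDS):
    shard_path = os.path.join(CACHE_DIR, f"shard-{i:05d}")
    if os.path.isdir(shard_path):
        continue  # finished in a previous run (a half-written shard dir would need deleting by hand)
    shard = ds.shard(num_shards=NUM_SHARDS, index=i, contiguous=True)
    shard.map(process, batched=True).save_to_disk(shard_path)

# Reassemble once every shard exists on disk.
processed = concatenate_datasets(
    [load_from_disk(os.path.join(CACHE_DIR, f"shard-{i:05d}")) for i in range(NUM_SHARDS)]
)
```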
https://api.github.com/repos/huggingface/datasets/issues/5088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5088/comments
https://api.github.com/repos/huggingface/datasets/issues/5088/events
https://github.com/huggingface/datasets/issues/5088
1,400,530,412
I_kwDODunzps5TemXs
5,088
load_datasets("json", ...) don't read local .json.gz properly
{ "login": "junwang-wish", "id": 112650299, "node_id": "U_kgDOBrboOw", "avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junwang-wish", "html_url": "https://github.com/junwang-wish", "followers_url": "https://api.github.com/users/junwang-wish/followers", "following_url": "https://api.github.com/users/junwang-wish/following{/other_user}", "gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}", "starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions", "organizations_url": "https://api.github.com/users/junwang-wish/orgs", "repos_url": "https://api.github.com/users/junwang-wish/repos", "events_url": "https://api.github.com/users/junwang-wish/events{/privacy}", "received_events_url": "https://api.github.com/users/junwang-wish/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @junwang-wish, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce the bug. Which version of `datasets` are you using? Does the problem persist if you update `datasets`?\r\n```shell\r\npip install -U datasets\r\n``` ", "Thanks @albertvillanova I updated `datasets` from `2.5.1` to `2.5.2` and tested copying the `json.gz` to a different directory and my mind was blown:\r\n\r\n```python\r\nfpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nproduces \r\n```python\r\nUsing custom data configuration default-0e6cf24134163e8b\r\nFound cached dataset json (/data/junwang/.cache/huggingface/datasets/json/default-0e6cf24134163e8b/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab)\r\n(1, 0)\r\n```\r\nbut then I ran below command to see if the same file in a different directory leads to same discrepancy\r\n```shell\r\ncp /data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz tmp_test.json.gz\r\n```\r\nand so I ran\r\n```python\r\nfpath = 'tmp_test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nand behold, I get \r\n```python\r\nUsing custom data configuration default-f679b32ab0008520\r\nDownloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n(1, 1)\r\n```\r\nThey match now !\r\n\r\nThis problem happens regardless of the shell I use (VScode jupyter extension or plain old Python REPL). \r\n\r\nI attached the `json.gz` here for reference: [test.json.gz](https://github.com/huggingface/datasets/files/9734843/test.json.gz)\r\n\r\n" ]
2022-10-07T02:16:58
2022-10-07T14:43:16
null
NONE
null
null
null
## Describe the bug I have a local file `*.json.gz` and it can be read by `pandas.read_json(lines=True)`, but cannot be read by `load_datasets("json")` (resulting in 0 lines) ## Steps to reproduce the bug ```python fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz' ds_panda = DatasetDict( test=Dataset.from_pandas( pd.read_json(fpath, lines=True) ) ) ds_direct = load_dataset( 'json', data_files={ 'test': fpath }, features=Features( text_input=Value(dtype="string", id=None), text_output=Value(dtype="string", id=None) ) ) len(ds_panda['test']), len(ds_direct['test']) ``` ## Expected results Lines of `ds_panda['test']` and `ds_direct['test']` should match. ## Actual results ``` Using custom data configuration default-c0ef2598760968aa Downloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab... Dataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data. (62087, 0) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 18.04.4 LTS - Python version: 3.8.13 - PyArrow version: 9.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5088/timeline
null
null
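The two runs shown in the comments above differ only in whether a previously prepared Arrow conversion is reused (`Found cached dataset json ...` versus `Downloading and preparing dataset json/default ...`), which points at a stale cache entry rather than at the file itself. One hedged way to rule that out, without copying the file to a new location, is to force the JSON builder to re-prepare the data; the path below is shortened and should be replaced with the real local path.

```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={"test": "test.json.gz"},  # placeholder path
    download_mode="force_redownload",     # ignore any cached Arrow conversion
)
print(ds["test"].num_rows)
```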
https://api.github.com/repos/huggingface/datasets/issues/5086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5086/comments
https://api.github.com/repos/huggingface/datasets/issues/5086/events
https://github.com/huggingface/datasets/issues/5086
1,400,216,975
I_kwDODunzps5TdZ2P
5,086
HTTPError: 404 Client Error: Not Found for url
{ "login": "km5ar", "id": 54015474, "node_id": "MDQ6VXNlcjU0MDE1NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/km5ar", "html_url": "https://github.com/km5ar", "followers_url": "https://api.github.com/users/km5ar/followers", "following_url": "https://api.github.com/users/km5ar/following{/other_user}", "gists_url": "https://api.github.com/users/km5ar/gists{/gist_id}", "starred_url": "https://api.github.com/users/km5ar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/km5ar/subscriptions", "organizations_url": "https://api.github.com/users/km5ar/orgs", "repos_url": "https://api.github.com/users/km5ar/repos", "events_url": "https://api.github.com/users/km5ar/events{/privacy}", "received_events_url": "https://api.github.com/users/km5ar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "FYI @lewtun ", "Hi @km5ar, thanks for reporting.\r\n\r\nThis should be fixed in the notebook:\r\n- the filename `datasets-issues-with-hf-doc-builder.jsonl` no longer exists on the repo; instead, current filename is `datasets-issues-with-comments.jsonl`\r\n- see: https://huggingface.co/datasets/lewtun/github-issues/tree/main\r\n\r\nAnyway, depending on your version of `datasets`, you can now use:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"lewtun/github-issues\")\r\nissues_dataset\r\n```\r\ninstead of:\r\n```python\r\nfrom huggingface_hub import hf_hub_url\r\n\r\ndata_files = hf_hub_url(\r\n repo_id=\"lewtun/github-issues\",\r\n filename=\"datasets-issues-with-hf-doc-builder.jsonl\",\r\n repo_type=\"dataset\",\r\n)\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\nissues_dataset\r\n```\r\n\r\nOutput:\r\n```python\r\nIn [25]: ds = load_dataset(\"lewtun/github-issues\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.5k/10.5k [00:00<00:00, 5.75MB/s]\r\nUsing custom data configuration lewtun--github-issues-cff5093ecc410ea2\r\nDownloading and preparing dataset json/lewtun--github-issues to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.2M/12.2M [00:00<00:00, 26.5MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.70s/it]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1589.96it/s]\r\nDataset json downloaded and prepared to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 133.95it/s]\r\n\r\nIn [26]: ds\r\nOut[26]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app', 'is_pull_request'],\r\n num_rows: 3019\r\n })\r\n})\r\n```", "Thanks for reporting @km5ar and thank you @albertvillanova for the quick solution! I'll post a fix on the source too" ]
2022-10-06T19:48:58
2022-10-07T15:12:01
2022-10-07T15:12:01
NONE
null
null
null
## Describe the bug I was following chapter 5 of the Hugging Face course: https://huggingface.co/course/chapter5/6?fw=tf However, I'm not able to download the datasets, with a 404 error <img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-4686-8631-13d879a0edee.png"> ## Steps to reproduce the bug ```python from huggingface_hub import hf_hub_url data_files = hf_hub_url( repo_id="lewtun/github-issues", filename="datasets-issues-with-hf-doc-builder.jsonl", repo_type="dataset", ) from datasets import load_dataset issues_dataset = load_dataset("json", data_files=data_files, split="train") issues_dataset ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.12 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5086/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5085/comments
https://api.github.com/repos/huggingface/datasets/issues/5085/events
https://github.com/huggingface/datasets/issues/5085
1,400,113,569
I_kwDODunzps5TdAmh
5,085
Filtering on an empty dataset returns a corrupted dataset.
{ "login": "gabegma", "id": 36087158, "node_id": "MDQ6VXNlcjM2MDg3MTU4", "avatar_url": "https://avatars.githubusercontent.com/u/36087158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabegma", "html_url": "https://github.com/gabegma", "followers_url": "https://api.github.com/users/gabegma/followers", "following_url": "https://api.github.com/users/gabegma/following{/other_user}", "gists_url": "https://api.github.com/users/gabegma/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabegma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabegma/subscriptions", "organizations_url": "https://api.github.com/users/gabegma/orgs", "repos_url": "https://api.github.com/users/gabegma/repos", "events_url": "https://api.github.com/users/gabegma/events{/privacy}", "received_events_url": "https://api.github.com/users/gabegma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "Mouhanedg56", "id": 23029765, "node_id": "MDQ6VXNlcjIzMDI5NzY1", "avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mouhanedg56", "html_url": "https://github.com/Mouhanedg56", "followers_url": "https://api.github.com/users/Mouhanedg56/followers", "following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}", "gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions", "organizations_url": "https://api.github.com/users/Mouhanedg56/orgs", "repos_url": "https://api.github.com/users/Mouhanedg56/repos", "events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}", "received_events_url": "https://api.github.com/users/Mouhanedg56/received_events", "type": "User", "site_admin": false }
[ { "login": "Mouhanedg56", "id": 23029765, "node_id": "MDQ6VXNlcjIzMDI5NzY1", "avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mouhanedg56", "html_url": "https://github.com/Mouhanedg56", "followers_url": "https://api.github.com/users/Mouhanedg56/followers", "following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}", "gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions", "organizations_url": "https://api.github.com/users/Mouhanedg56/orgs", "repos_url": "https://api.github.com/users/Mouhanedg56/repos", "events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}", "received_events_url": "https://api.github.com/users/Mouhanedg56/received_events", "type": "User", "site_admin": false } ]
null
[ "~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are returned without going through partial function on `map` method, which will not work to get indices for `filter`: we need to run `get_indices_from_mask_function` partial function on the dataset to get output = `{\"indices\": []}`. But this is complicated since functions used in args, in particular `get_indices_from_mask_function`, do not support empty datasets.\r\nWe can just handle empty datasets aside on filter method.", "#self-assign", "Thank you for solving this amazingly quickly!" ]
2022-10-06T18:18:49
2022-10-07T19:06:02
2022-10-07T18:40:26
NONE
null
null
null
## Describe the bug When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted. ## Steps to reproduce the bug ```python datasets = load_dataset("glue", "sst2") dataset_split = datasets['validation'] ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset assert ds_filter_1.num_rows == 0 sentences = ds_filter_1['sentence'] assert len(sentences) == 0 ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition assert ds_filter_2.num_rows == 0 assert 'sentence' in ds_filter_2.column_names sentences = ds_filter_2['sentence'] ``` ## Expected results The last line should be returning an empty list, same as 4 lines above. ## Actual results The last line currently raises `IndexError: index out of bounds`. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: macOS-11.6.6-x86_64-i386-64bit - Python version: 3.9.11 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5085/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5085/timeline
null
completed
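Purely as an illustration of the approach discussed in the comments above ("handle empty datasets aside on filter"), a user-level guard can sidestep the bug on affected versions by returning an already-empty dataset unchanged instead of filtering it again; the actual fix lives inside `Dataset.filter` itself, not in a wrapper like this.

```python
from datasets import Dataset

def filter_with_empty_guard(ds: Dataset, function):
    # If there is nothing left to filter, keep the (empty but well-formed) dataset as-is.
    if ds.num_rows == 0:
        return ds
    return ds.filter(function)

ds = Dataset.from_dict({"sentence": ["a", "b"]})
empty = filter_with_empty_guard(ds, lambda x: False)
still_empty = filter_with_empty_guard(empty, lambda x: False)
assert still_empty.num_rows == 0 and "sentence" in still_empty.column_names
```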
https://api.github.com/repos/huggingface/datasets/issues/5083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5083/comments
https://api.github.com/repos/huggingface/datasets/issues/5083/events
https://github.com/huggingface/datasets/issues/5083
1,399,842,514
I_kwDODunzps5Tb-bS
5,083
Support numpy/torch/tf/jax formatting for IterableDataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "hii @lhoestq, can you assign this issue to me? Though i am new to open source still I would love to put my best foot forward. I can see there isn't anyone right now assigned to this issue.", "Hi @zutarich ! This issue was fixed by #5852 - sorry I forgot to close it\r\n\r\nFeel free to look for other issues and ping me or @mariosasko if you have questions :)\r\nAlso let us know if we can help find an issue that can correspond to what you're looking for" ]
2022-10-06T15:14:58
2023-10-09T12:42:15
2023-10-09T12:42:15
MEMBER
null
null
null
Right now `IterableDataset` doesn't do any formatting. In particular this code should return a numpy array: ```python from datasets import load_dataset ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np") print(next(iter(ds))["image"]) ``` Right now it returns a PIL.Image. Setting `streaming=False` does return a numpy array after #5072
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5083/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5081/comments
https://api.github.com/repos/huggingface/datasets/issues/5081/events
https://github.com/huggingface/datasets/issues/5081
1,399,340,050
I_kwDODunzps5TaDwS
5,081
Bug loading `sentence-transformers/parallel-sentences`
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "tagging @nreimers ", "The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.", "Since the dataset is a bunch of TSVs we should not need a dataset script I think.\r\n\r\nBy default it tries to load all the TSVs at once, which fails here because they don't all have the same columns (pd.read_csv uses the first line as header by default). But those files have no header ! So, to properly load any TSV file in this repo, one has to pass `names=[...]` for pd.read_csv to know which column names to use.\r\n\r\nTo fix this situation, we can either do\r\n1. replace the TSVs by TSV with column names\r\n2. OR specify the pd.read_csv kwargs as YAML in the dataset card - and `datasets` would use that by default\r\n\r\nWDTY ?", "There are more issues in the dataset.\r\nTo load OpenSubtitles I have to provide this (see `skiprows`):\r\n\r\n```python\r\ndf_os = pd.read_csv(\r\n \"./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz\", \r\n sep=\"\\t\", \r\n quoting=csv.QUOTE_NONE,\r\n header=None,\r\n names=[\"en\", \"de\"],\r\n skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],\r\n)\r\n```", "What's wrong with those lines exactly ?\r\nMaybe passing `error_bad_lines=False` (and maybe `warn_bad_lines=True`) can be helpful", "> What's wrong with those lines exactly ? \r\n\r\nStuff like this: `ParserError: Error tokenizing data. C error: Expected 2 fields in line 540345, saw 3`\r\n\r\n", "> Maybe passing error_bad_lines=False (and maybe warn_bad_lines=True) can be helpful\r\n\r\nYes. That would hide the issue but not solve it.", "@nreimers WDYT about the two options mentioned above ?" ]
2022-10-06T10:47:51
2022-10-11T10:00:48
null
CONTRIBUTOR
null
null
null
## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sentence-transformers/parallel-sentences") ``` raises this: ``` /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [4], line 1 ----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train") File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1692 # Download and prepare data -> 1693 builder_instance.download_and_prepare( 1694 download_config=download_config, 1695 download_mode=download_mode, 1696 ignore_verifications=ignore_verifications, 1697 try_from_hf_gcs=try_from_hf_gcs, 1698 use_auth_token=use_auth_token, 1699 ) 1701 # Build dataset for splits 1702 keep_in_memory = ( 1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1704 ) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) 801 if not downloaded_from_gcs: 802 prepare_split_kwargs = { 803 "file_format": file_format, 804 "max_shard_size": max_shard_size, 805 **download_and_prepare_kwargs, 806 } --> 807 self._download_and_prepare( 808 dl_manager=dl_manager, 809 verify_infos=verify_infos, 810 **prepare_split_kwargs, 811 **download_and_prepare_kwargs, 812 ) 813 # Sync info 814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 894 split_dict.add(split_generator.split_info) 896 try: 897 # Prepare split will record examples associated to the split --> 898 self._prepare_split(split_generator, **prepare_split_kwargs) 899 except OSError as e: 900 raise OSError( 901 "Cannot find data file. 
" 902 + (self.manual_download_instructions or "") 903 + "\nOriginal error:\n" 904 + str(e) 905 ) from None File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size) 1506 shard_id += 1 1507 writer = writer_class( 1508 features=writer._features, 1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"), 1510 storage_options=self._fs.storage_options, 1511 embed_local_files=embed_local_files, 1512 ) -> 1513 writer.write_table(table) 1514 finally: 1515 num_shards = shard_id + 1 File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size) 538 if self.pa_writer is None: 539 self._build_writer(inferred_schema=pa_table.schema) --> 540 pa_table = table_cast(pa_table, self._schema) 541 if self.embed_local_files: 542 pa_table = embed_table_storage(pa_table) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema) 2032 """Improved version of pa.Table.cast. 2033 2034 It supports casting to feature types stored in the schema metadata. (...) 2041 table (:obj:`pyarrow.Table`): the casted table 2042 """ 2043 if table.schema != schema: -> 2044 return cast_table_to_schema(table, schema) 2045 elif table.schema.metadata != schema.metadata: 2046 return table.replace_schema_metadata(schema.metadata) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema) 2003 features = Features.from_arrow_schema(schema) 2004 if sorted(table.column_names) != sorted(features): -> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] 2007 return pa.Table.from_arrays(arrays, schema=schema) ValueError: Couldn't cast Action taken on Parliament's resolutions: see Minutes: string Následný postup na základě usnesení Parlamentu: viz zápis: string -- schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742 to {'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Състав на Парламента: вж. протоколи': Value(dtype='string', id=None)} because column names don't match ``` ## Expected results no error ## Actual results error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.13 - PyArrow version: pyarrow 9.0.0 - transformers 4.22.2 - datasets 2.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5081/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5080/comments
https://api.github.com/repos/huggingface/datasets/issues/5080/events
https://github.com/huggingface/datasets/issues/5080
1,398,849,565
I_kwDODunzps5TYMAd
5,080
Use hfh for caching
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)" ]
2022-10-06T05:51:58
2022-10-06T14:26:05
null
MEMBER
null
null
null
## Is your feature request related to a problem? As previously discussed in our meeting with @Wauplin and agreed on our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching. ## Describe the solution you'd like Due to the peculiarities of the `datasets` cache, I would propose adopting `hfh` caching system in stages. First, we could easily start using `hfh` caching for: - dataset Python scripts - dataset READMEs - dataset infos JSON files (now deprecated) Second, we could also use `hfh` caching for data files downloaded from the Hub. Further investigation is needed for: - files downloaded from non-Hub hosts - extracted files from downloaded archive/compressed files - generated Arrow files ## Additional context Docs about the `hfh` caching system: - [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache) - [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache) The `transformers` library has already adopted `hfh` for caching. See: - huggingface/transformers#18438 - huggingface/transformers#18857 - huggingface/transformers#18966
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5080/timeline
null
null
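For reference, a small sketch of what leaning on the `huggingface_hub` cache looks like from user code when fetching a single Hub-hosted dataset file: files fetched this way land in the shared `huggingface_hub` cache tree (organized per repo and revision) rather than in the separate `datasets` downloads cache. The repo and filename are taken from the issues above purely as an example; the proposal itself is about `datasets` adopting this cache internally, in stages.

```python
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="lewtun/github-issues",                 # example dataset repo mentioned earlier
    filename="datasets-issues-with-comments.jsonl",
    repo_type="dataset",
)
print(local_path)  # resolved path inside the huggingface_hub cache
```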
https://api.github.com/repos/huggingface/datasets/issues/5075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5075/comments
https://api.github.com/repos/huggingface/datasets/issues/5075/events
https://github.com/huggingface/datasets/issues/5075
1,397,865,501
I_kwDODunzps5TUbwd
5,075
Throw EnvironmentError when token is not present
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks." ]
2022-10-05T14:14:18
2022-10-07T14:33:28
2022-10-07T14:33:28
CONTRIBUTOR
null
null
null
Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5075/timeline
null
completed
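A minimal sketch of the requested change, shown out of context: the real code sits inline in `Dataset.push_to_hub` rather than in a helper like this, and the exact wording of the message is an assumption. Worth noting that on Python 3, `EnvironmentError` is an alias of `OSError`, so the change is mainly about raising it explicitly with an actionable message.

```python
def _require_token(token):
    # Hypothetical helper for illustration only.
    if token is None:
        raise EnvironmentError(  # alias of OSError on Python 3
            "You need to provide a `token` or be logged in to Hugging Face with "
            "`huggingface-cli login`."
        )
    return token
```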
https://api.github.com/repos/huggingface/datasets/issues/5074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5074/comments
https://api.github.com/repos/huggingface/datasets/issues/5074/events
https://github.com/huggingface/datasets/issues/5074
1,397,850,352
I_kwDODunzps5TUYDw
5,074
Replace AssertionErrors with more meaningful errors
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "galbwe", "id": 20004072, "node_id": "MDQ6VXNlcjIwMDA0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/galbwe", "html_url": "https://github.com/galbwe", "followers_url": "https://api.github.com/users/galbwe/followers", "following_url": "https://api.github.com/users/galbwe/following{/other_user}", "gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}", "starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galbwe/subscriptions", "organizations_url": "https://api.github.com/users/galbwe/orgs", "repos_url": "https://api.github.com/users/galbwe/repos", "events_url": "https://api.github.com/users/galbwe/events{/privacy}", "received_events_url": "https://api.github.com/users/galbwe/received_events", "type": "User", "site_admin": false }
[ { "login": "galbwe", "id": 20004072, "node_id": "MDQ6VXNlcjIwMDA0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/galbwe", "html_url": "https://github.com/galbwe", "followers_url": "https://api.github.com/users/galbwe/followers", "following_url": "https://api.github.com/users/galbwe/following{/other_user}", "gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}", "starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galbwe/subscriptions", "organizations_url": "https://api.github.com/users/galbwe/orgs", "repos_url": "https://api.github.com/users/galbwe/repos", "events_url": "https://api.github.com/users/galbwe/events{/privacy}", "received_events_url": "https://api.github.com/users/galbwe/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, can I pick up this issue?", "#self-assign", "Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix." ]
2022-10-05T14:03:55
2022-10-07T14:33:11
2022-10-07T14:33:11
CONTRIBUTOR
null
null
null
Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc. The files with AssertionErrors that need to be replaced: ``` src/datasets/arrow_reader.py src/datasets/builder.py src/datasets/utils/version.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5074/timeline
null
completed
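The pattern behind the issue above, shown on a made-up example rather than on the actual lines in `arrow_reader.py` / `builder.py` / `version.py`: `assert` statements vanish under `python -O` and raise an `AssertionError` that callers rarely expect, while the replacement should name the most fitting built-in exception and explain what went wrong.

```python
# Before: silently skipped under `python -O`, and uninformative when it does fire.
def pick_split_before(split_name, available_splits):
    assert split_name in available_splits, f"unknown split {split_name}"
    return split_name

# After: always checked, with an error type callers can reasonably catch.
def pick_split_after(split_name, available_splits):
    if split_name not in available_splits:
        raise ValueError(
            f"Unknown split {split_name!r}. Should be one of {sorted(available_splits)}."
        )
    return split_name
```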
https://api.github.com/repos/huggingface/datasets/issues/5070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5070/comments
https://api.github.com/repos/huggingface/datasets/issues/5070/events
https://github.com/huggingface/datasets/issues/5070
1,396,765,647
I_kwDODunzps5TQPPP
5,070
Support default config name when no builder configs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/discussions/2/files\r\n\r\nbut which I had to revert since providing no split breaks at run time.\r\n" ]
2022-10-04T19:49:35
2022-10-06T14:40:26
2022-10-06T14:40:26
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined. **Additional context** In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set. However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5070/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5070/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5061/comments
https://api.github.com/repos/huggingface/datasets/issues/5061/events
https://github.com/huggingface/datasets/issues/5061
1,395,476,770
I_kwDODunzps5TLUki
5,061
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI", "I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess==0.70.12.2, dill==0.3.4`: works\r\n- `multiprocess==0.70.12.2, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.4`: can't test, `multiprocess==0.70.13` requires `dill>=0.3.5.1`\r\n\r\nI will pin their versions on my end. I don't have enough knowledge of how python multiprocessing works to debug this, but ideally there could be a fix. It's also possible that I'm doing something wrong in my code, but again the `.name` of the logger that failed to pickle is `datasets.fingerprint`, which I'm not using directly.", "Do you know which logger fails at being pickled ?", "I'm not 100% sure how to figure it out -- the stack trace above doesn't clearly give me a place where I can print out who owns the logger, etc. I only found out its `.name` is `datasets.fingerprint` by printing right before\r\n```\r\n File \".../logging/__init__.py\", line 1774, in __reduce__\r\n raise pickle.PicklingError('logger cannot be pickled')\r\n```\r\nIf you have any idea on how to find it out, please let me know.", "Ok I see, not sure why it triggers this error though, in `logging.py` the code is\r\n\r\nhttps://github.com/python/cpython/blob/c9da063e32725a66495e4047b8a5ed13e72d9e8e/Lib/logging/__init__.py#L1769-L1775\r\n\r\nand on my side it works on 3.10 with dill 0.3.5.1 and multiprocess 0.70.13\r\n```python\r\n>>> datasets.fingerprint.logger.__reduce__() \r\n(<function logging.getLogger(name=None)>, ('datasets.fingerprint',))\r\n```\r\nCould you try to run this code ?\r\n\r\nAre you in an environment where the loggers are instantiated differently ? Can you check the source code of `logging.Logger.__reduce__` in `\".../logging/__init__.py\", line 1774` ?", "Closing due to inactivity." ]
2022-10-03T23:51:38
2023-07-21T14:43:35
2023-07-21T14:43:34
NONE
null
null
null
## Describe the bug When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`. ``` File "~/project/dataset.py", line 204, in <dictcomp> split: dataset.map( File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map transformed_shards[index] = async_result.get() File ".../site-packages/multiprocess/pool.py", line 771, in get raise self._value File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks put(task) File ".../site-packages/multiprocess/connection.py", line 214, in send self._send_bytes(_ForkingPickler.dumps(obj)) File ".../site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File ".../site-packages/dill/_dill.py", line 620, in dump StockPickler.dump(self, obj) File ".../pickle.py", line 487, in dump self.save(obj) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../pickle.py", line 902, in save_tuple save(element) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../site-packages/dill/_dill.py", line 1963, in save_function _save_with_postproc(pickler, (_create_function, ( File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc pickler.save_reduce(*reduction, obj=obj) File ".../pickle.py", line 717, in save_reduce save(state) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../pickle.py", line 887, in save_tuple save(element) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict StockPickler.save_dict(pickler, obj) File ".../pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File ".../pickle.py", line 998, in _batch_setitems save(v) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../site-packages/dill/_dill.py", line 1963, in save_function _save_with_postproc(pickler, (_create_function, ( File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc pickler.save_reduce(*reduction, obj=obj) File ".../pickle.py", line 717, in save_reduce save(state) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../pickle.py", line 887, in save_tuple save(element) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict StockPickler.save_dict(pickler, obj) File ".../pickle.py", line 972, in save_dict self._batch_setitems(obj.items()) File ".../pickle.py", line 998, in _batch_setitems save(v) File ".../pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File ".../site-packages/dill/_dill.py", line 1963, in save_function _save_with_postproc(pickler, (_create_function, ( File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc pickler._batch_setitems(iter(source.items())) File ".../pickle.py", line 998, in _batch_setitems save(v) File ".../pickle.py", line 578, in save rv = reduce(self.proto) File ".../logging/__init__.py", line 1774, in __reduce__ raise pickle.PicklingError('logger cannot be pickled') _pickle.PicklingError: logger cannot be pickled ``` ## Steps to reproduce the bug Sorry I failed to have a minimal reproducible example, but the offending line on my end is ```python dataset.map( lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda batched=True, num_proc=4, ) ``` This does work when `num_proc=1`, so it's likely a multiprocessing thing. ## Expected results `map` succeeds ## Actual results The error trace above. ## Environment info - `datasets` version: 1.16.1 and 2.5.1 both failed - Platform: Ubuntu 20.04.4 LTS - Python version: 3.10.4 - PyArrow version: 9.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5061/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5060/comments
https://api.github.com/repos/huggingface/datasets/issues/5060/events
https://github.com/huggingface/datasets/issues/5060
1,395,382,940
I_kwDODunzps5TK9qc
5,060
Unable to Use Custom Dataset Locally
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly", "Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read from a URL? Any links to code/documentation would be greatly appreciated, I'd love to learn more", "`datasets` extends `open` in dataset scripts to work with URLs. The builtin `open` from python only works with local files.\r\n\r\nYou can find the extension here: https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/download/streaming_download_manager.py#L435-L451\r\n\r\nI think we can create a docs section dedicated to streaming to explain how this works", "Closing this one - feel free to reopen if you have more questions" ]
2022-10-03T21:55:16
2022-10-06T14:29:18
2022-10-06T14:29:17
CONTRIBUTOR
null
null
null
## Describe the bug I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says ``` If the data files live in the same folder or repository of the dataset script, you can just pass the relative paths to the files instead of URLs. ``` Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs` However, if I try to load the data using `load_dataset`, I get the following error ``` with gzip.open(filepath, mode="rt") as f: File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open binary_file = GzipFile(filename, gz_mode, compresslevel) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz' ``` ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True) >>> t = dataset["train"] >>> for item in t: ...... print(item) ...... break Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__ for key, example in self._iter(): File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter yield from ex_iterable File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples with gzip.open(filepath, mode="rt") as f: File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open binary_file = GzipFile(filename, gz_mode, compresslevel) File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__ fileobj = self.myfileobj = builtins.open(filename, mode or 'rb') FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz' ```` ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.1 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.9.7 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5060/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5053/comments
https://api.github.com/repos/huggingface/datasets/issues/5053/events
https://github.com/huggingface/datasets/issues/5053
1,393,739,882
I_kwDODunzps5TEshq
5,053
Intermittent JSON parse error when streaming the Pile
{ "login": "neelnanda-io", "id": 77788841, "node_id": "MDQ6VXNlcjc3Nzg4ODQx", "avatar_url": "https://avatars.githubusercontent.com/u/77788841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neelnanda-io", "html_url": "https://github.com/neelnanda-io", "followers_url": "https://api.github.com/users/neelnanda-io/followers", "following_url": "https://api.github.com/users/neelnanda-io/following{/other_user}", "gists_url": "https://api.github.com/users/neelnanda-io/gists{/gist_id}", "starred_url": "https://api.github.com/users/neelnanda-io/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neelnanda-io/subscriptions", "organizations_url": "https://api.github.com/users/neelnanda-io/orgs", "repos_url": "https://api.github.com/users/neelnanda-io/repos", "events_url": "https://api.github.com/users/neelnanda-io/events{/privacy}", "received_events_url": "https://api.github.com/users/neelnanda-io/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co/datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n```", "Ah, thanks! I did get errors like that. Sad that PR wasn't merged in! \r\n\r\nI'm currently just downloading 200GB of the Pile locally to avoid streaming (I have space and it's faster anyway), but that's really useful! I can probably apply the dumb patch of just commenting out the bits that raise the JSON Parse Error lol, based on your code - if I continue the loop should it be fine?", "Yup you can get some inspiration from this PR. It simply ignores the bad chunks (a chunk is ~a few MBs of data).\r\nWe'll try to merge this PR soon" ]
2022-10-02T11:56:46
2022-10-04T17:59:03
null
NONE
null
null
null
## Describe the bug I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash. This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tokens and 4 days into a training run, and now just happened 2 minutes into one, but I can't reliably reproduce it. I'm using a remote machine with 8 A6000 GPUs via runpod.io ## Expected results I have a DataLoader which can iterate through the whole Pile ## Actual results Stack trace: ``` Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0 ``` I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation ``` Traceback (most recent call last): File "ddp_script.py", line 1258, in <module> main() File "ddp_script.py", line 1143, in main for c, batch in tqdm.tqdm(enumerate(data_iter)): File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__ next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator) File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches broadcast_object_list(batch_info) File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list torch.distributed.broadcast_object_list(object_list, src=from_process) File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list object_list[i] = _tensor_to_object(obj_view, obj_size) File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object return _unpickler(io.BytesIO(buf)).load() _pickle.UnpicklingError: invalid load key, '@'. ``` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset( cfg["dataset_name"], streaming=True, split="train") dataset = dataset.remove_columns("meta") dataset = dataset.map(tokenize_and_concatenate, batched=True) dataset = dataset.with_format(type="torch") train_data_loader = DataLoader( dataset, batch_size=cfg["batch_size"], num_workers=3) for batch in train_data_loader: continue ``` `tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text, separated by endoftext tokens, and reshape to have length batch_size, I don't think this is related to tokenization: ``` import numpy as np import einops import torch def tokenize_and_concatenate(examples): texts = examples["text"] full_text = tokenizer.eos_token.join(texts) div = 20 length = len(full_text) // div text_list = [full_text[i * length: (i + 1) * length] for i in range(div)] tokens = tokenizer(text_list, return_tensors="np", padding=True)[ "input_ids" ].flatten() tokens = tokens[tokens != tokenizer.pad_token_id] n = len(tokens) curr_batch_size = n // (seq_len - 1) tokens = tokens[: (seq_len - 1) * curr_batch_size] tokens = einops.rearrange( tokens, "(batch_size seq) -> batch_size seq", batch_size=curr_batch_size, seq=seq_len - 1, ) prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * \ tokenizer.bos_token_id return { "text": np.concatenate([prefix, tokens], axis=1) } ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid - Python version: 3.7.13 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 ZStandard data: Version: 0.18.0 Summary: Zstandard bindings for Python Home-page: https://github.com/indygreg/python-zstandard Author: Gregory Szorc Author-email: [email protected] License: BSD Location: /opt/conda/lib/python3.7/site-packages Requires: Required-by:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5053/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5050/comments
https://api.github.com/repos/huggingface/datasets/issues/5050/events
https://github.com/huggingface/datasets/issues/5050
1,392,381,882
I_kwDODunzps5S_g-6
5,050
Restore saved format state in `load_from_disk`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false }
[ { "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, can I work on this?", "Hi, sure! Let us know if you need some pointers/help." ]
2022-09-30T12:40:07
2022-10-11T16:49:24
2022-10-11T16:49:24
CONTRIBUTOR
null
null
null
Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that. Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5050/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5046/comments
https://api.github.com/repos/huggingface/datasets/issues/5046/events
https://github.com/huggingface/datasets/issues/5046
1,391,372,519
I_kwDODunzps5S7qjn
5,046
Audiofolder creates empty Dataset if files same level as metadata
{ "login": "msis", "id": 577139, "node_id": "MDQ6VXNlcjU3NzEzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/msis", "html_url": "https://github.com/msis", "followers_url": "https://api.github.com/users/msis/followers", "following_url": "https://api.github.com/users/msis/following{/other_user}", "gists_url": "https://api.github.com/users/msis/gists{/gist_id}", "starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/msis/subscriptions", "organizations_url": "https://api.github.com/users/msis/orgs", "repos_url": "https://api.github.com/users/msis/repos", "events_url": "https://api.github.com/users/msis/events{/privacy}", "received_events_url": "https://api.github.com/users/msis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: https://colab.research.google.com/drive/1IhQzULYi0Van1xLrN_SddBX1JF7mLZZK?usp=sharing)", "I think we can make the file name matching part more robust by replacing `file_name` with `os.path.normpath(file_name)`, to ignore \"./\" among other things, in these two places:\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L319\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L388", "@mariosasko Some tests failed (see my PR). Any thoughts on that?", "Yes, I mentioned the solution in my review.", "I realized what I was doing wrong.\r\n\r\nThe documentation puts the files in a subfolder.\r\nOnce I have done that, it worked.\r\n\r\nBut l agree that this should be handled better if possible." ]
2022-09-29T19:17:23
2022-10-28T13:05:07
2022-10-28T13:05:07
NONE
null
null
null
## Describe the bug When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl` ), the `load_dataset` returns a `DatasetDict` with no rows but the correct columns. https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88 ## Steps to reproduce the bug `metadata.csv`: ```csv file_name,duration,transcription ./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello ``` ```python >>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/") >>> audio_dataset DatasetDict({ train: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) validation: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) }) ``` I've tried, with no success,: - setting `split` to something else so I don't get a `DatasetDict`, - removing the `./`, - using `.jsonl`. ## Expected results ``` Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 1 }) ``` ## Actual results ``` DatasetDict({ train: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) validation: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) }) ``` ## Environment info - `datasets` version: 2.5.1 - Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5046/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5045/comments
https://api.github.com/repos/huggingface/datasets/issues/5045/events
https://github.com/huggingface/datasets/issues/5045
1,391,287,609
I_kwDODunzps5S7V05
5,045
Automatically revert to last successful commit to hub when a push_to_hub is interrupted
{ "login": "jorahn", "id": 13120204, "node_id": "MDQ6VXNlcjEzMTIwMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/13120204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jorahn", "html_url": "https://github.com/jorahn", "followers_url": "https://api.github.com/users/jorahn/followers", "following_url": "https://api.github.com/users/jorahn/following{/other_user}", "gists_url": "https://api.github.com/users/jorahn/gists{/gist_id}", "starred_url": "https://api.github.com/users/jorahn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jorahn/subscriptions", "organizations_url": "https://api.github.com/users/jorahn/orgs", "repos_url": "https://api.github.com/users/jorahn/repos", "events_url": "https://api.github.com/users/jorahn/events{/privacy}", "received_events_url": "https://api.github.com/users/jorahn/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.", "> Maybe push_to_hub be implemented as a single commit ? \r\n\r\nI think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with `huggingface_hub` but if there was another reason, please let me know.\r\nAbout pushing all at once, it seems to be a more and more requested feature. I have created this issue https://github.com/huggingface/huggingface_hub/issues/1085 recently but other discussions already happened in the past. The `moon-landing` team is working on it (cc @coyotte508). The `huggingface_hub` integration will come afterwards.\r\n\r\nFor now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n", "> I think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with huggingface_hub but if there was another reason, please let me know.\r\n\r\nIdeally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. When we implemented `push_to_hub`, using `upload_file` for each shard was the only option.\r\n\r\nFor more context: for each shard to upload we do:\r\n1. load the arrow shard in memory\r\n2. convert to parquet\r\n3. upload\r\n\r\nSo to avoid OOM we need to upload the files iteratively.\r\n\r\n> For now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n\r\nLet us know if we can help !", "> Ideally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. \r\n\r\nOh I see. So maybe this has to be done in an implementation specific to `datasets/` as it is not a very common case (upload a bunch of files on the fly).\r\n\r\nYou can maybe have a look at how `huggingface_hub` is implemented for LFS files (arrow shards are LFS anyway, right?).\r\nIn [`upload_lfs_files`](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/_commit_api.py#L164) LFS files are uploaded 1 by 1 (multithreaded) and then [the commit is pushed](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/hf_api.py#L1926) to the Hub once all files have been uploaded. This is pretty much what you need, right ?\r\n\r\nI can help you if you have questions how to do it in `datasets`. If that makes sense we could then move the implementation from `datasets` to `huggingface_hub` once it's mature. Next week I'm on holidays but feel free to start without my input.\r\n\r\n(also cc @coyotte508 and @SBrandeis who implemented LFS upload in `hfh`)", "> Could you share the error you got please ? 
Maybe the full stack trace if you have it ?\r\n\r\nHere’s part of the stack trace, that I can reproduce at the moment from a photo I took (potential typos from OCR):\r\n```\r\nValueError\r\nTraceback (most recent call last)\r\n<ipython-input-4-274613b7d3f5> in <module>\r\nfrom datasets import load dataset\r\nds = load_dataset('jrahn/chessv6', use_auth_token-True)\r\n\r\n/us/local/1ib/python3.7/dist-packages/datasets/table.py in cast_table _to_schema (table, schema)\r\nLine 2005 raise ValueError()\r\n\r\nValueError: Couldn't cast \r\nfen: string \r\nmove: string \r\nres: string \r\neco: string \r\nmove_id: int64\r\nres_num: int64 to\r\n{ 'fen': Value(dtype='string', id=None), \r\n'move': Value(dtype=' string', id=None),\r\n'res': Value(dtype='string', id=None),\r\n'eco': Value(dtype='string', id=None), \r\n'hc': Value(dtype='string', id=None), \r\n'move_ id': Value(dtype='int64', id=None),\r\n'res_num': Value(dtype= 'int64' , id=None) }\r\nbecause column names don't match \r\n```\r\n\r\nThe column 'hc' was removed before the interrupted push_to_hub(). It appears in the column list in curly brackets but not in the column list above.\r\n\r\nLet me know, if I can be of any help." ]
2022-09-29T18:08:12
2023-10-16T13:30:49
2023-10-16T13:30:49
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I pushed a modification of a large dataset (remove a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset to raise an error on load_dataset() (ValueError couldn’t cast … because column names don’t match). Only by specifying the previous (complete) commit as revision=commit_hash in load_data(), I was able to repair this and after a successful, complete push, the dataset loads without error again. **Describe the solution you'd like** Would it make sense to detect an incomplete push_to_hub() and automatically revert to the previous commit/revision? **Describe alternatives you've considered** Leave everything as is, the revision parameter in load_dataset() allows to manually fix this problem. **Additional context** Provide useful defaults
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5045/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5045/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5044/comments
https://api.github.com/repos/huggingface/datasets/issues/5044/events
https://github.com/huggingface/datasets/issues/5044
1,391,242,908
I_kwDODunzps5S7K6c
5,044
integrate `load_from_disk` into `load_dataset`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it creates a cache directory to store the arrow data and the subsequent cache files for `map`.\r\n\r\n- `load_from_disk` directly returns a memory mapped dataset from the arrow file (similar to `Dataset.from_file`). It doesn't create a cache diretory, instead all the subsequent `map` calls write in the same directory as the original data. \r\n\r\nIf we want to keep the download_and_prepare step for consistency, it would unnecessarily copy the arrow data into the datasets cache. On the other hand if we don't do this step, the cache directory doesn't exist which is inconsistent.\r\n\r\nI'm curious, what would you expect to happen in this situation ?", "Thank you for the detailed breakdown, @lhoestq \r\n\r\n> I'm curious, what would you expect to happen in this situation ?\r\n\r\n1. the simplest solution is to add a flag to the dataset saved by `save_to_disk` and have `load_dataset` check that flag - if it's set simply switch control to `load_from_disk` behind the scenes. So `load_dataset` detects it's a local filesystem, looks inside to see whether it's something it can cache or whether it should use it directly as is and continues accordingly with one of the 2 dataset-type specific APIs.\r\n\r\n2. the more evolved solution is to look at a dataset produced by `save_to_disk` as a remote resource like hub. So the first time `load_dataset` sees it, it'll take a fingerprint and create a normal cached dataset. On subsequent uses it'll again discover it as a remote resource, validate that it has it cached via the fingerprint and serve as a normal dataset. \r\n\r\nAs you said the cons of approach 2 is that if the dataset is huge it'll make 2 copies on the same machine. So it's possible that both approaches can be integrated. Say if `save_to_disc(do_not_cache=True)` is passed it'll use solution 1, otherwise solution 2. or could even symlink the huge arrow files to the cache instead? or perhaps it's more intuitive to use `load_dataset(do_not_cache=True)` instead. So that one can choose whether to make a cached copy or not for the locally saved dataset. i.e. a simple at use point user control.\r\n\r\nSurely there are other ways to handle it, this is just one possibility.\r\n", "I think the simplest is to always memory map the local file without copy, but still have a cached directory in the cache at `~/.cache/huggingface` instead of saving `map` results next to the original data.\r\n\r\nIn practice we can even use symlinks if it makes the implementation simpler", "Yes, so that you always have the cached entry for any dataset, but the \"payload\" doesn't have to be physically in the cache if it's already on the local filesystem. As you said a symlink will do. ", "Any updates?", "We haven't had the bandwidth to implement this so far. Let me know if you'd be interested in contributing this feature :)", "@lhoestq I can jump into that. What I don't like is having functions with many parameters input. Even though they are optional, it's always harder to reason about and test such cases.\r\nIf there are more features worth to work on, feel free to ping me. 
It's a lot of fun to help :smile: ", "Thanks a lot for your help @mariusz-jachimowicz-83 :)\r\n\r\nI think as a first step we could implement an Arrow dataset builder to be able to load and stream Arrow datasets locally or from Hugging Face. Maybe something similar to the Parquet builder at [src/datasets/packaged_modules/parquet/parquet.py](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py) ?\r\n\r\nAnd we can deal with the disk space optimization as a second step. What do you think ?\r\n\r\n(this issue is also related to https://github.com/huggingface/datasets/issues/3035)", "@lhoestq I made a PR based on suggestion https://github.com/huggingface/datasets/pull/5944. Could you please review it?", "@lhoestq Let me know if you have further recommendations or anything that you would like to add but you don't have bandwith for. ", "Any update on this issue? It makes existing scripts and examples fall flat when provided with a customized/preprocessed dataset saved to disk." ]
2022-09-29T17:37:12
2024-02-12T15:03:27
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Is it possible to make `load_dataset` more universal similar to `from_pretrained` in `transformers` so that it can handle the hub, and the local path datasets of all supported types? Currently one has to choose a different loader depending on how the dataset has been created. e.g. this won't work: ``` $ git clone https://huggingface.co/datasets/severo/test-parquet $ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \ ds.save_to_disk("my_dataset"); load_dataset("my_dataset")' [...] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split writer.write_table(table) File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table pa_table = table_cast(pa_table, self._schema) File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast return cast_table_to_schema(table, schema) File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string ``` both times the dataset is being loaded from disk. Why does it fail the second time? Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`? e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset, that tells `load_dataset` to internally call `load_from_disk`. like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) and `load_dataset` will support that feature from saved datasets from new `datasets` versions. The old ones will still need to use `load_from_disk` explicitly. Unless the flag is not needed and one can immediately tell by looking at the saved dataset that it was saved via `save_to_disk` and thus use `load_from_disk` internally. The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other which works but it's not smooth. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5044/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5044/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5039/comments
https://api.github.com/repos/huggingface/datasets/issues/5039/events
https://github.com/huggingface/datasets/issues/5039
1,390,353,315
I_kwDODunzps5S3xuj
5,039
Hendrycks Checksum
{ "login": "DanielHesslow", "id": 9974388, "node_id": "MDQ6VXNlcjk5NzQzODg=", "avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DanielHesslow", "html_url": "https://github.com/DanielHesslow", "followers_url": "https://api.github.com/users/DanielHesslow/followers", "following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}", "gists_url": "https://api.github.com/users/DanielHesslow/gists{/gist_id}", "starred_url": "https://api.github.com/users/DanielHesslow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DanielHesslow/subscriptions", "organizations_url": "https://api.github.com/users/DanielHesslow/orgs", "repos_url": "https://api.github.com/users/DanielHesslow/repos", "events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}", "received_events_url": "https://api.github.com/users/DanielHesslow/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @DanielHesslow. We are fixing it. ", "@albertvillanova thanks for taking care of this so quickly!", "The dataset metadata is fixed. You can download it normally." ]
2022-09-29T06:56:20
2022-09-29T10:23:30
2022-09-29T10:04:20
NONE
null
null
null
Hi, the checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not match; I guess the file has been updated on the remote. ``` datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://people.eecs.berkeley.edu/~hendrycks/data.tar'] ```
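As noted in the comments above, the underlying fix was to update the dataset metadata; until such a fix is released, a possible interim workaround (sketch only — the `abstract_algebra` config is just an example, and skipping verification means the downloaded data is not checked) is to disable checksum verification:

```python
from datasets import load_dataset

# Workaround sketch: skip checksum verification so stale metadata does not block the download.
ds = load_dataset("hendrycks_test", "abstract_algebra", ignore_verifications=True)
```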
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5039/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5038/comments
https://api.github.com/repos/huggingface/datasets/issues/5038/events
https://github.com/huggingface/datasets/issues/5038
1,389,631,122
I_kwDODunzps5S1BaS
5,038
`Dataset.unique` showing wrong output after filtering
{ "login": "mxschmdt", "id": 4904985, "node_id": "MDQ6VXNlcjQ5MDQ5ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxschmdt", "html_url": "https://github.com/mxschmdt", "followers_url": "https://api.github.com/users/mxschmdt/followers", "following_url": "https://api.github.com/users/mxschmdt/following{/other_user}", "gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions", "organizations_url": "https://api.github.com/users/mxschmdt/orgs", "repos_url": "https://api.github.com/users/mxschmdt/repos", "events_url": "https://api.github.com/users/mxschmdt/events{/privacy}", "received_events_url": "https://api.github.com/users/mxschmdt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.", "Thanks, that was fast!" ]
2022-09-28T16:20:35
2022-09-30T15:44:25
2022-09-30T15:44:25
CONTRIBUTOR
null
null
null
## Describe the bug After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset. ## Steps to reproduce the bug ```python from datasets import Dataset dataset = Dataset.from_dict({'id': [0]}) dataset = dataset.filter(lambda _: False) print(dataset.unique('id')) ``` ## Expected results The above code should return an empty list since the dataset is empty. ## Actual results ```bash [0] ``` ## Environment info - `datasets` version: 2.5.1 - Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.14 - PyArrow version: 7.0.0 - Pandas version: 1.3.5
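Until the fix mentioned in the comments above is released, a small defensive wrapper (purely illustrative, not part of the library) avoids the stale result by short-circuiting on empty datasets:

```python
from datasets import Dataset

def safe_unique(dataset: Dataset, column: str):
    # Guard against the reported bug: a dataset with no rows left after
    # filtering should yield no unique values at all.
    if dataset.num_rows == 0:
        return []
    return dataset.unique(column)

dataset = Dataset.from_dict({"id": [0]}).filter(lambda _: False)
print(safe_unique(dataset, "id"))  # []
```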
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5038/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5032/comments
https://api.github.com/repos/huggingface/datasets/issues/5032/events
https://github.com/huggingface/datasets/issues/5032
1,388,270,935
I_kwDODunzps5Sv1VX
5,032
new dataset type: single-label and multi-label video classification
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type needs\r\n\r\nalso cc @nateraw who also took a look at what we can do for video", "@lhoestq @nateraw is there any progress on adding video classification datasets? ", "Hi ! I think we just missing which lib we're going to use to decode the videos + which parameters must go in the `Video` type", "Hmm. `decord` could be nice but it's no longer maintained [it seems](https://github.com/dmlc/decord/issues/214). ", "pytorchvideo uses [pyav](https://github.com/PyAV-Org/PyAV) as the default decoder: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L37\r\n\r\nAlso it would be great if `optionally` audio can also be decoded from the video as in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L35\r\n\r\nHere are the other decoders supported in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/encoded_video.py#L17\r\n", "@sayakpaul I did do quite a bit of work on [this PR](https://github.com/huggingface/datasets/pull/4532) a while back to add a video feature. It's outdated, but uses my `encoded_video` [package](https://github.com/nateraw/encoded-video) under the hood, which is basically a wrapper around PyAV stolen from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo/) that gets rid of the `torch` dependency. \r\n\r\nwould be really great to get something like this in...it's just a really tricky and time consuming feature to add. " ]
2022-09-27T19:40:11
2022-11-02T19:10:13
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset. **Describe the solution you'd like** Assume I have video files having single/multiple labels. I want to train a single/multi-label video classification model. I want datasets to support generating multi-modal batches (audio+frame sequence) from video files. Audio waveform and frame sequence can be extracted from each video clip then I can use any audio, image and video model from transformers library to extract features which will be fed into my model. **Describe alternatives you've considered** Currently, I am using https://github.com/facebookresearch/pytorchvideo dataloaders. There seems to be not much alternative. **Additional context** I am wiling to open a PR but don't know where to start.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5032/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5032/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/5028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5028/comments
https://api.github.com/repos/huggingface/datasets/issues/5028/events
https://github.com/huggingface/datasets/issues/5028
1,386,272,533
I_kwDODunzps5SoNcV
5,028
passing parameters to the method passed to Dataset.from_generator()
{ "login": "Basir-mahmood", "id": 64276129, "node_id": "MDQ6VXNlcjY0Mjc2MTI5", "avatar_url": "https://avatars.githubusercontent.com/u/64276129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Basir-mahmood", "html_url": "https://github.com/Basir-mahmood", "followers_url": "https://api.github.com/users/Basir-mahmood/followers", "following_url": "https://api.github.com/users/Basir-mahmood/following{/other_user}", "gists_url": "https://api.github.com/users/Basir-mahmood/gists{/gist_id}", "starred_url": "https://api.github.com/users/Basir-mahmood/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Basir-mahmood/subscriptions", "organizations_url": "https://api.github.com/users/Basir-mahmood/orgs", "repos_url": "https://api.github.com/users/Basir-mahmood/repos", "events_url": "https://api.github.com/users/Basir-mahmood/events{/privacy}", "received_events_url": "https://api.github.com/users/Basir-mahmood/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n" ]
2022-09-26T15:20:06
2022-10-03T13:00:00
2022-10-03T13:00:00
NONE
null
null
null
Big thanks for providing dataset creation via a generator. I want to ask whether there is any way that parameters can be passed to the Dataset.from_generator() method, as follows. ``` from datasets import Dataset def gen(param1): for idx in range(len(custom_dataset)): yield custom_dataset[idx] + param1 ds = Dataset.from_generator(gen(param1)) ```
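As the accepted answer in the comments above points out, parameters can already be forwarded either through `gen_kwargs` or by pre-binding them with `functools.partial`; here is a self-contained sketch of both options (the toy `custom_dataset` list is an illustrative assumption):

```python
import functools
from datasets import Dataset

custom_dataset = [1, 2, 3]  # toy stand-in for the user's data

def gen(param1):
    # from_generator expects the generator to yield example dicts
    for idx in range(len(custom_dataset)):
        yield {"value": custom_dataset[idx] + param1}

# Option 1: pass the parameter via gen_kwargs
ds1 = Dataset.from_generator(gen, gen_kwargs={"param1": 10})

# Option 2: bind the parameter with functools.partial
ds2 = Dataset.from_generator(functools.partial(gen, param1=10))

print(ds1["value"], ds2["value"])  # [11, 12, 13] [11, 12, 13]
```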
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5028/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5025/comments
https://api.github.com/repos/huggingface/datasets/issues/5025/events
https://github.com/huggingface/datasets/issues/5025
1,386,011,239
I_kwDODunzps5SnNpn
5,025
Custom Json Dataset Throwing Error when batch is False
{ "login": "jmandivarapu1", "id": 21245519, "node_id": "MDQ6VXNlcjIxMjQ1NTE5", "avatar_url": "https://avatars.githubusercontent.com/u/21245519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmandivarapu1", "html_url": "https://github.com/jmandivarapu1", "followers_url": "https://api.github.com/users/jmandivarapu1/followers", "following_url": "https://api.github.com/users/jmandivarapu1/following{/other_user}", "gists_url": "https://api.github.com/users/jmandivarapu1/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmandivarapu1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmandivarapu1/subscriptions", "organizations_url": "https://api.github.com/users/jmandivarapu1/orgs", "repos_url": "https://api.github.com/users/jmandivarapu1/repos", "events_url": "https://api.github.com/users/jmandivarapu1/events{/privacy}", "received_events_url": "https://api.github.com/users/jmandivarapu1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessing for each image and text as all my data saved in cloud\r\n #For this reason I couldn't set the batch to True. \r\n encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n # drop extra dim\r\n for k in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n return encoding\r\n```", "> Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n> \r\n> ```python\r\n> def prepare_examples(examples):\r\n> #Some preporcessing for each image and text as all my data saved in cloud\r\n> #For this reason I couldn't set the batch to True. \r\n> encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n> truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n> # drop extra dim\r\n> for k in encoding.items():\r\n> encoding[k]=encoding[k][0]\r\n> return encoding\r\n> ```\r\n\r\nThank you it did work\r\n\r\n```\r\nfor k,v in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n```" ]
2022-09-26T12:38:39
2022-09-27T19:50:00
2022-09-27T19:50:00
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. I tried to create my custom dataset using below code ``` from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D from torchvision import transforms from transformers import AutoProcessor # we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes, # based on the checkpoint we provide from the hub from datasets import load_dataset def prepare_examples(examples): #Some preporcessing for each image and text as all my data saved in cloud #For this reason I couldn't set the batch to True. encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels, truncation=True, padding="max_length") # encoding['pixel_values']=np.array(encoding['pixel_values']) return encoding dataset = load_dataset("json", data_files='issues.jsonl') processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) features = dataset["train"].features column_names = dataset["train"].column_names # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) train_dataset = dataset["train"].map( prepare_examples, batched=False, remove_columns=column_names, features=features ) ``` It throws below error. ``` /opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 172 storage = to_pyarrow_listarray(data, pa_type) --> 173 return pa.ExtensionArray.from_storage(pa_type, storage) 174 /opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage() TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>> ``` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` rom datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D from torchvision import transforms from transformers import AutoProcessor # we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes, # based on the checkpoint we provide from the hub from datasets import load_dataset def prepare_examples(examples): #Some preporcessing for each image and text as all my data saved in cloud encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels, truncation=True, padding="max_length") # encoding['pixel_values']=np.array(encoding['pixel_values']) return encoding dataset = load_dataset("json", data_files='issues.jsonl') processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) features = dataset["train"].features column_names = dataset["train"].column_names # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) train_dataset = dataset["train"].map( prepare_examples, batched=False, remove_columns=column_names, features=features ) ## Expected results A clear and concise description of the expected results. 
Expected would be similar to all the other datasets, with no error. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Unix - Python version: 3.9 - PyArrow version: 9.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5025/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5023/comments
https://api.github.com/repos/huggingface/datasets/issues/5023/events
https://github.com/huggingface/datasets/issues/5023
1,385,881,112
I_kwDODunzps5Smt4Y
5,023
Text strings are split into lists of characters in xcsr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-09-26T11:11:50
2022-09-28T07:54:20
2022-09-28T07:54:20
MEMBER
null
null
null
## Describe the bug Text strings are split into lists of characters. Example for "X-CSQA-en": ``` {'id': 'd3845adc08414fda', 'lang': 'en', 'question': {'stem': ['T', 'h', 'e', ' ', 'd', 'e', 'n', 't', 'a', 'l', ' ', 'o', 'f', 'f', 'i', 'c', 'e', ' ', 'h', 'a', 'n', 'd', 'l', 'e', 'd', ' ', 'a', ' ', 'l', 'o', 't', ' ', 'o', 'f', ' ', 'p', 'a', 't', 'i', 'e', 'n', 't', 's', ' ', 'w', 'h', 'o', ' ', 'e', 'x', 'p', 'e', 'r', 'i', 'e', 'n', 'c', 'e', 'd', ' ', 't', 'r', 'a', 'u', 'm', 'a', 't', 'i', 'c', ' ', 'm', 'o', 'u', 't', 'h', ' ', 'i', 'n', 'j', 'u', 'r', 'y', ',', ' ', 'w', 'h', 'e', 'r', 'e', ' ', 'w', 'e', 'r', 'e', ' ', 't', 'h', 'e', 's', 'e', ' ', 'p', 'a', 't', 'i', 'e', 'n', 't', 's', ' ', 'c', 'o', 'm', 'i', 'n', 'g', ' ', 'f', 'r', 'o', 'm', '?'], 'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']}, {'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']}, {'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']}, {'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']}, {'label': ['E'], 'text': ['o', 'f', 'f', 'i', 'c', 'e', ' ', 'b', 'u', 'i', 'l', 'd', 'i', 'n', 'g']}]}, 'answerKey': 'C'} ## Steps to reproduce the bug ```python ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True) item = next(iter(ds)) item ``` ## Expected results ``` {'id': 'd3845adc08414fda', 'lang': 'en', 'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?', 'choices': {'label': ['A', 'B', 'C', 'D', 'E'], 'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}}, 'answerKey': 'C'} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5023/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5021/comments
https://api.github.com/repos/huggingface/datasets/issues/5021/events
https://github.com/huggingface/datasets/issues/5021
1,385,351,250
I_kwDODunzps5SkshS
5,021
Split is inferred from filename and overrides metadata.jsonl
{ "login": "float-trip", "id": 102226344, "node_id": "U_kgDOBhfZqA", "avatar_url": "https://avatars.githubusercontent.com/u/102226344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/float-trip", "html_url": "https://github.com/float-trip", "followers_url": "https://api.github.com/users/float-trip/followers", "following_url": "https://api.github.com/users/float-trip/following{/other_user}", "gists_url": "https://api.github.com/users/float-trip/gists{/gist_id}", "starred_url": "https://api.github.com/users/float-trip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/float-trip/subscriptions", "organizations_url": "https://api.github.com/users/float-trip/orgs", "repos_url": "https://api.github.com/users/float-trip/repos", "events_url": "https://api.github.com/users/float-trip/events{/privacy}", "received_events_url": "https://api.github.com/users/float-trip/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files=\"dataset/**\")\r\n```", "Thanks! Specifying `data_files` worked for that case.\r\n\r\nI'm new to the library, so let me try rephrasing the issue. If there's no actual bug here, sorry for the trouble.\r\n\r\nI've uploaded an example [here](https://files.catbox.moe/nfj2pd.zip) with the following files: \r\n\r\n```\r\n.\r\n├── bug.py\r\n└── imagefolder\r\n ├── test\r\n │ ├── metadata.jsonl\r\n │ ├── dog.jpg\r\n │ └── personal trainer.jpg\r\n └── train\r\n ├── metadata.jsonl\r\n ├── cat.jpg\r\n └── testing center.jpg\r\n```\r\n\r\n`bug.py`\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\")\r\n\r\nprint(dataset)\r\n# DatasetDict({\r\n# test: Dataset({\r\n# features: ['image', 'text'],\r\n# num_rows: 1\r\n# })\r\n# })\r\n\r\nfor split in dataset:\r\n print(\"Split:\", split)\r\n for n in dataset[split]:\r\n print(n['text'])\r\n\r\n\r\n# Split: test\r\n# testing center\r\n```\r\n\r\nAs far as I can tell, this conforms with the example given here: https://huggingface.co/docs/datasets/image_dataset#imagefolder. It appears to me that, even though `metadata.jsonl` is present, the inferred labels from the path are taking precedent. Does this sound like a bug/undocumented behavior?", "This looks like a duplicate of https://github.com/huggingface/datasets/issues/4895 (the problem is explained in this comment: https://github.com/huggingface/datasets/issues/4895#issuecomment-1248269550).\r\n\r\nIn the meantime, you can do the following to fetch all the splits:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files={\"train\": \"imagefolder/train/**\", \"test\": \"imagefolder/test/**\"})\r\n```\r\n" ]
2022-09-26T03:22:14
2022-09-29T08:07:50
2022-09-29T08:07:50
NONE
null
null
null
## Describe the bug Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files. This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder ## Steps to reproduce the bug `metadata.jsonl` ```json {"file_name": "photo of a cat.jpg", "text": "a photo of a cat"} {"file_name": "photo of a dog.jpg", "text": "a photo of a dog"} {"file_name": "photo of a train.jpg", "text": "a photo of a train"} {"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"} ``` `bug.py` ```python from datasets import load_dataset dataset = load_dataset("dataset") print(dataset) # DatasetDict({ # train: Dataset({ # features: ['image', 'text'], # num_rows: 1 # }) # test: Dataset({ # features: ['image', 'text'], # num_rows: 1 # }) # }) for split in dataset: for n in dataset[split]: print(n['text']) # a photo of a train # a photo of test tubes ``` ## Expected results One single dataset with all four images / a warning for unused files / documentation of this behavior ## Actual results Only the images with "test" or "train" in the name are loaded ## Environment info - `datasets` version: 2.5.1 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5021/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5017/comments
https://api.github.com/repos/huggingface/datasets/issues/5017/events
https://github.com/huggingface/datasets/issues/5017
1,384,022,463
I_kwDODunzps5SfoG_
5,017
xcsr: X-CSQA simply uses english for all alleged non-english data
{ "login": "thesofakillers", "id": 26286291, "node_id": "MDQ6VXNlcjI2Mjg2Mjkx", "avatar_url": "https://avatars.githubusercontent.com/u/26286291?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thesofakillers", "html_url": "https://github.com/thesofakillers", "followers_url": "https://api.github.com/users/thesofakillers/followers", "following_url": "https://api.github.com/users/thesofakillers/following{/other_user}", "gists_url": "https://api.github.com/users/thesofakillers/gists{/gist_id}", "starred_url": "https://api.github.com/users/thesofakillers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thesofakillers/subscriptions", "organizations_url": "https://api.github.com/users/thesofakillers/orgs", "repos_url": "https://api.github.com/users/thesofakillers/repos", "events_url": "https://api.github.com/users/thesofakillers/events{/privacy}", "received_events_url": "https://api.github.com/users/thesofakillers/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @thesofakillers. Good catch. We are fixing this. " ]
2022-09-23T16:11:54
2022-09-26T10:57:31
2022-09-26T10:57:31
NONE
null
null
null
## Describe the bug All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description: > we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR ## Steps to reproduce the bug ```python # let's say you want to load the french X-CSQA subcollection french = datasets.load_dataset("xcsr", "X-CSQA-fr") # for good measure, let's load english too english = datasets.load_dataset("xcsr", "X-CSQA-en") # let's inspect "".join(english['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' "".join(french['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' # what? Why are they both in english? # I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset # maybe i need to look better? french['test'].unique('lang') # output: ['en'] # no, it's all english ``` ## Expected results Accessing a subcollection in language X should return a subcollection containg samples in language X ## Actual results Accessing a subcollection in language X returns a subcollection containing samples in English. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5017/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5015/comments
https://api.github.com/repos/huggingface/datasets/issues/5015/events
https://github.com/huggingface/datasets/issues/5015
1,383,485,558
I_kwDODunzps5SdlB2
5,015
Transfer dataset scripts to Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Sounds good ! Can I help with anything ?" ]
2022-09-23T08:48:10
2022-10-05T07:15:57
2022-10-05T07:15:57
MEMBER
null
null
null
Before merging: - #4974 TODO: - [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22) - [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/) - [x] PRs: - [x] Add dataset: we should recommend transfer all additions of datasets to the Hub, under the appropriate namespace; no more additions of datasets on GitHub - [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub - [ ] Issues Finally: - [x] #4974 Let me know what you think! :hugs:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5015/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5015/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5014/comments
https://api.github.com/repos/huggingface/datasets/issues/5014/events
https://github.com/huggingface/datasets/issues/5014
1,383,422,639
I_kwDODunzps5SdVqv
5,014
I need to read the custom dataset in conll format
{ "login": "shell-nlp", "id": 39985245, "node_id": "MDQ6VXNlcjM5OTg1MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shell-nlp", "html_url": "https://github.com/shell-nlp", "followers_url": "https://api.github.com/users/shell-nlp/followers", "following_url": "https://api.github.com/users/shell-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions", "organizations_url": "https://api.github.com/users/shell-nlp/orgs", "repos_url": "https://api.github.com/users/shell-nlp/repos", "events_url": "https://api.github.com/users/shell-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/shell-nlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r\n\r\nIn the meantime, you can use `Dataset.from_generator` to create a dataset as follows:\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# 2009 version\r\nINPUT_COLUMNS = \"ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL\".split()\r\n\r\ndef read_conll(file):\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n idx = 0\r\n with open(file) as f:\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\\n\" or not line:\r\n if example[next(iter(example))]:\r\n yield idx, example\r\n idx += 1\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n else:\r\n row_cols = line.split()\r\n for i, col in enumerate(example):\r\n example[col] = row_cols[i].rstrip()\r\n\r\n# (optional) pass custom features with `features=Features(...)`\r\ndset = Dataset.from_generator(read_conll, gen_kwargs={\"file\": \"path/to/conll/file\"}) \r\n``` ", "I think we could add a dedicated builder if you think this format is general enough.", "\r\n\r\n\r\n> I think we could add a dedicated builder if you think this format is general enough.\r\n\r\nI think its functions are incomplete. It should have to_ Conll and from_ There are two methods of conll." ]
2022-09-23T07:49:42
2022-11-02T11:57:15
null
NONE
null
null
null
I need to read the custom dataset in conll format
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5014/timeline
null
reopened
https://api.github.com/repos/huggingface/datasets/issues/5013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5013/comments
https://api.github.com/repos/huggingface/datasets/issues/5013/events
https://github.com/huggingface/datasets/issues/5013
1,383,415,971
I_kwDODunzps5SdUCj
5,013
would huggingface like publish cpp binding for datasets package ?
{ "login": "mullerhai", "id": 6143404, "node_id": "MDQ6VXNlcjYxNDM0MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mullerhai", "html_url": "https://github.com/mullerhai", "followers_url": "https://api.github.com/users/mullerhai/followers", "following_url": "https://api.github.com/users/mullerhai/following{/other_user}", "gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions", "organizations_url": "https://api.github.com/users/mullerhai/orgs", "repos_url": "https://api.github.com/users/mullerhai/repos", "events_url": "https://api.github.com/users/mullerhai/events{/privacy}", "received_events_url": "https://api.github.com/users/mullerhai/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
[ "Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?", "> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingface load_model() and load_dataset() can execute in cpp env", "If it's a viable option for you, you can check [tch-rs](https://github.com/LaurentMazare/tch-rs) to load models in Rust. Regarding datasets, you can first download them in python and then use Arrow C++ or Rust to load them", "If you are more adventurous, another option is to embed python calls inside c++ e.g. with `pybind11`.", "> pybind11\r\n\r\nI think it is not the best solution" ]
2022-09-23T07:42:49
2023-02-24T16:20:57
2023-02-24T16:20:57
NONE
null
null
null
Hi: I use a C++ environment with libtorch. I would like to use Hugging Face, but Hugging Face has no C++ binding. Would you consider publishing a C++ binding for it? Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5013/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5012/comments
https://api.github.com/repos/huggingface/datasets/issues/5012/events
https://github.com/huggingface/datasets/issues/5012
1,382,851,096
I_kwDODunzps5SbKIY
5,012
Force JSON format regardless of file naming on S3
{ "login": "junwang-wish", "id": 112650299, "node_id": "U_kgDOBrboOw", "avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junwang-wish", "html_url": "https://github.com/junwang-wish", "followers_url": "https://api.github.com/users/junwang-wish/followers", "following_url": "https://api.github.com/users/junwang-wish/following{/other_user}", "gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}", "starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions", "organizations_url": "https://api.github.com/users/junwang-wish/orgs", "repos_url": "https://api.github.com/users/junwang-wish/repos", "events_url": "https://api.github.com/users/junwang-wish/events{/privacy}", "received_events_url": "https://api.github.com/users/junwang-wish/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime", "Hi,\r\nI want to make sure I understand this response. I have a set of files on S3 that are private for security reasons. Because they are not public files I cannot read those files (many are parquet) into my hf notebooks in Kaggle? That can't be correct, can it? ", "Hi ! There is a discussion at https://github.com/huggingface/datasets/issues/5281\r\n\r\nUsing the latest `datasets` 2.11 you can try passing fsspec URLs to private buckets to `data_files` in `load_dataset()`. Though this is still experimental and undocumented, so feedback is welcome. You may not have the best experience though, since anything related to performance and caching hasn't been tested properly yet.", "closing this one since data_files supports fsspec (still experimental/untested/undocumented for s3 though)" ]
2022-09-22T18:28:15
2023-08-16T09:58:36
2023-08-16T09:58:36
NONE
null
null
null
I have a file on S3 created by Data Version Control; it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a JSON file. If I run ```python dataset = load_dataset( "json", data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2' ) ``` it gives me ``` InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2' ``` However, I cannot go ahead and change the name of the S3 file. Is there a way to "force" loading an S3 URL with a certain decoder (JSON, CSV, etc.) regardless of the S3 file naming?
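Since `s3://` URIs were not yet supported in `data_files` at the time (see the comments above), one workaround sketch is to read the object through `s3fs` and build the dataset from the decoded records; the object key is the one from the report, while the JSON Lines format and the credentials handling are assumptions:

```python
import json
import pandas as pd
import s3fs
from datasets import Dataset

fs = s3fs.S3FileSystem()  # assumes AWS credentials are already configured

# Read the DVC-managed object and decode it as JSON Lines despite the opaque file name.
with fs.open("s3://dvc/ac/badff5b134382a0f25248f1b45d7b2", "r") as f:
    records = [json.loads(line) for line in f]

dataset = Dataset.from_pandas(pd.DataFrame(records))
```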
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5012/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5011/comments
https://api.github.com/repos/huggingface/datasets/issues/5011/events
https://github.com/huggingface/datasets/issues/5011
1,382,609,587
I_kwDODunzps5SaPKz
5,011
Audio: `encode_example` fails with IndexError
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Sorry bug on my part 😅 Closing " ]
2022-09-22T15:07:27
2022-09-23T09:05:18
2022-09-23T09:05:18
CONTRIBUTOR
null
null
null
## Describe the bug Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally. Don't think it's a sound file bug as the version matches what worked previously. Update: the bug appeared for me on a GPU, mysteriously on a TPU I can't repro and it downloads correctly... ## Steps to reproduce the bug ```python from datasets import load_dataset earnings22 = load_dataset("sanchit-gandhi/earnings22_split") ``` ## Expected results ``` >>> earnings22 DatasetDict({ validation: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2650 }) train: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 52006 }) test: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2735 }) }) ``` ## Actual results ``` Traceback (most recent call last): File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single writer.write(example) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write self.write_examples_on_file() File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 231, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature return feature.cast_storage(array) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp> storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example sf.write(buffer, value["array"], value["sampling_rate"], format="wav") File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write channels = data.shape[1] IndexError: tuple index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3 Plus: - SoundFile version: 0.10.3.post1 cc @lhoestq @polinaeterna
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5011/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5009/comments
https://api.github.com/repos/huggingface/datasets/issues/5009/events
https://github.com/huggingface/datasets/issues/5009
1,381,194,067
I_kwDODunzps5SU1lT
5,009
Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly
{ "login": "ykl7", "id": 4996184, "node_id": "MDQ6VXNlcjQ5OTYxODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ykl7", "html_url": "https://github.com/ykl7", "followers_url": "https://api.github.com/users/ykl7/followers", "following_url": "https://api.github.com/users/ykl7/following{/other_user}", "gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}", "starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ykl7/subscriptions", "organizations_url": "https://api.github.com/users/ykl7/orgs", "repos_url": "https://api.github.com/users/ykl7/repos", "events_url": "https://api.github.com/users/ykl7/events{/privacy}", "received_events_url": "https://api.github.com/users/ykl7/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the features types explicitly.\r\nThen you can save the feature types inside the dataset repository, so that you won't need to specify the features in subsequent calls:\r\n```python\r\nfrom datasets import load_dataset, Features, Sequence, Value\r\nfrom datasets.info import DatasetInfosDict\r\n\r\nfeatures = Features({\r\n 'narrative': Value('string'),\r\n 'question': Value('string'),\r\n 'original_sentence_for_question': Value('string'),\r\n 'narrative_lexical_overlap': Value('float64'),\r\n 'is_ques_answerable': Value('string'),\r\n 'answer': Value('string'),\r\n 'is_ques_answerable_annotator': Value('string'),\r\n 'original_narrative_form': Sequence(Value('string')),\r\n 'question_meta': Value('string'),\r\n 'helpful_sentences': Sequence(Value('int64')),\r\n 'human_eval': Value('bool'),\r\n 'val_ann': Sequence(Value('int64')),\r\n 'gram_ann': Sequence(Value('int64'))\r\n})\r\nds = load_dataset('StonyBrookNLP/tellmewhy', features=features)\r\nDatasetInfosDict({\"default\": ds[\"train\"].info}).write_to_directory(\"path/to/local/tellmewhy\")\r\n```\r\nand then after pushing the change to the dataset repository on the Hub, `load_dataset(\"StonyBrookNLP/tellmewhy\")` will work directly`", "(Note that specifying explicit types will be made easier with https://github.com/huggingface/datasets/pull/4926)", "`gram_ann` and `val_ann` are annotations that only exist for part of the test set. I wanted to keep all the columns consistent across all files, so I added them to train and validation as well. I'll check if removing them from those files is still compliant with this repo. Otherwise, I will do as you suggested. Thanks @lhoestq !", "@lhoestq I followed the exact steps you described but it seems like I'm getting the same error unfortunately. Any other ideas? Thanks in advance", "Hi ! If you move `dataset_infos.json` from `data/` to the root of your dataset repository if should work :)", "I tried that and pushed to the [hub](https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/tree/main). Now, there is a new error.\r\n```\r\n File \"/home/yklal95/tellmewhy/src/prepare_data.py\", line 67, in main\r\n dataset = load_dataset('StonyBrookNLP/tellmewhy')\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 775, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 33, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'/home/yklal95/tellmewhy/data/test.json', '/home/yklal95/tellmewhy/data/validation.json', '/home/yklal95/tellmewhy/data/train.json'}\r\n```\r\nNo changes were made to any of the other files and they are still on the hub. 
Let me know if you have any ideas @lhoestq Thanks!", "Oh I see - the code I gave you returns local paths instead of URLs to store metadata about files to download.\r\nI opened a PR in your repo here to remove this: https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/discussions/1\r\nsorry for the inconvenience !", "It works now! Thanks a lot @lhoestq " ]
2022-09-21T16:23:06
2022-09-29T13:07:29
2022-09-29T13:07:29
NONE
null
null
null
## Describe the bug I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files using my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error. ## Steps to reproduce the bug ```python dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy') ``` ## Expected results Successfully load the `StonyBrookNLP/tellmewhy` dataset. ## Actual results ``` Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253... Downloading data files: 100%|██████████████████████████████| 3/3 [00:00<00:00, 957.46it/s] Extracting data files: 100%|███████████████████████████████| 3/3 [00:00<00:00, 299.14it/s] Traceback (most recent call last): File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module> main(args) File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main dataset = datasets.load_dataset(args.dataset_name) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split writer.write_table(table) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table pa_table = table_cast(pa_table, self._schema) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast return cast_table_to_schema(table, schema) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature casted_values = _c(array.values, feature.feature) File 
"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type int64 to null ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5009/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5005/comments
https://api.github.com/repos/huggingface/datasets/issues/5005/events
https://github.com/huggingface/datasets/issues/5005
1,380,952,960
I_kwDODunzps5ST6uA
5,005
Release 2.5.0 breaks transformers CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later" ]
2022-09-21T13:39:19
2022-09-21T14:11:57
2022-09-21T14:11:57
MEMBER
null
null
null
## Describe the bug As reported by @lhoestq: > see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563 this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5005/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5002/comments
https://api.github.com/repos/huggingface/datasets/issues/5002/events
https://github.com/huggingface/datasets/issues/5002
1,380,589,402
I_kwDODunzps5SSh9a
5,002
Dataset Viewer issue for loubnabnl/humaneval-x
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "It's a bug! Thanks for reporting, I'm looking at it", "Fixed." ]
2022-09-21T09:06:17
2022-09-21T11:49:49
2022-09-21T11:49:49
NONE
null
null
null
### Link https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/ ### Description The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine) ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5002/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5002/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/5000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5000/comments
https://api.github.com/repos/huggingface/datasets/issues/5000/events
https://github.com/huggingface/datasets/issues/5000
1,379,709,398
I_kwDODunzps5SPLHW
5,000
Dataset Viewer issue for asapp/slue
{ "login": "fwu-asapp", "id": 56092571, "node_id": "MDQ6VXNlcjU2MDkyNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/56092571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fwu-asapp", "html_url": "https://github.com/fwu-asapp", "followers_url": "https://api.github.com/users/fwu-asapp/followers", "following_url": "https://api.github.com/users/fwu-asapp/following{/other_user}", "gists_url": "https://api.github.com/users/fwu-asapp/gists{/gist_id}", "starred_url": "https://api.github.com/users/fwu-asapp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fwu-asapp/subscriptions", "organizations_url": "https://api.github.com/users/fwu-asapp/orgs", "repos_url": "https://api.github.com/users/fwu-asapp/repos", "events_url": "https://api.github.com/users/fwu-asapp/events{/privacy}", "received_events_url": "https://api.github.com/users/fwu-asapp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```", "I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?", "The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture d’écran 2022-09-20 à 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n", "OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 
'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```", "Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n", "Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492", "Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.", "FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!", "Great! And thank you for sharing that interesting dataset!" ]
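Editor's note: following the suggestions in the comments above, here is a rough sketch of the `dl_manager.iter_archive` pattern for streaming TAR archives in a loading script. The URL, features, file extensions, and TSV layout are hypothetical placeholders; the real SLUE archives may differ.

```python
# Rough sketch of streaming a TAR archive with dl_manager.iter_archive in a
# loading script. URL, features, and file layout are hypothetical placeholders.
import datasets

_URL = "https://example.com/slue-voxpopuli.tar.gz"  # placeholder, not the real link


class SlueSketch(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"audio": datasets.Audio(sampling_rate=16_000), "text": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)  # no extraction -> works in streaming mode
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # Assumes the .tsv metadata precedes the audio files inside the TAR,
        # as discussed in the comments above.
        text_by_name = {}
        for path, f in files:
            if path.endswith(".tsv"):
                for line in f.read().decode("utf-8").splitlines()[1:]:
                    name, text = line.split("\t")[:2]
                    text_by_name[name] = text
            else:
                name = path.split("/")[-1]
                if name in text_by_name:
                    yield name, {
                        "audio": {"path": path, "bytes": f.read()},
                        "text": text_by_name[name],
                    }
```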
2022-09-20T16:45:45
2022-09-27T07:04:03
2022-09-21T07:24:07
NONE
null
null
null
### Link https://huggingface.co/datasets/asapp/slue/viewer/ ### Description Hi, I wonder how to get the dataset viewer of our slue dataset to work. Best, Felix ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5000/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4996/comments
https://api.github.com/repos/huggingface/datasets/issues/4996/events
https://github.com/huggingface/datasets/issues/4996
1,379,345,161
I_kwDODunzps5SNyMJ
4,996
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub", "I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this." ]
2022-09-20T12:32:07
2022-09-27T12:35:44
2022-09-27T12:35:44
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr ### Description ``` Error code: StreamingRowsError Exception: FileNotFoundError Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token) File "/src/services/worker/src/worker/utils.py", line 123, in decorator return func(*args, **kwargs) File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows rows_plus_one = list(itertools.islice(ds, rows_max_number + 1)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__ for key, example in self._iter(): File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter yield from ex_iterable File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples dataset = Dataset.load_from_disk(filepath) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file: FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' ``` Is it an error with the dataset script, or the data itself, @huggingface/datasets? https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4996/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4995/comments
https://api.github.com/repos/huggingface/datasets/issues/4995/events
https://github.com/huggingface/datasets/issues/4995
1,379,108,482
I_kwDODunzps5SM4aC
4,995
Get a specific Exception when the dataset has no data
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-09-20T09:31:59
2022-09-21T12:21:25
2022-09-21T12:21:25
CONTRIBUTOR
null
null
null
In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files. In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data. To do that, it would be very helpful to know for sure that the repository is missing any (supported) data files. It could be done by raising a custom exception, for example, `NoDataError`.
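Editor's note: a hedged sketch of what the requested behavior could look like. `NoDataError` is only the example name from the issue, and the helper below is hypothetical — it is not part of the `datasets` API.

```python
# Hypothetical sketch: raise a dedicated exception when no supported data files
# are found, so the viewer can map it to a clear call to action.
class NoDataError(FileNotFoundError):
    """The repository does not contain any supported data files."""


SUPPORTED_EXTENSIONS = (".csv", ".json", ".jsonl", ".parquet", ".txt")


def resolve_data_files(filenames):
    """Hypothetical helper: keep supported files or raise NoDataError."""
    data_files = [name for name in filenames if name.lower().endswith(SUPPORTED_EXTENSIONS)]
    if not data_files:
        raise NoDataError("No supported data files found in the repository.")
    return data_files
```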
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4995/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4994/comments
https://api.github.com/repos/huggingface/datasets/issues/4994/events
https://github.com/huggingface/datasets/issues/4994
1,379,084,015
I_kwDODunzps5SMybv
4,994
delete the hardcoded license list in `datasets`
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2022-09-20T09:14:41
2022-09-22T11:45:47
2022-09-22T11:45:47
MEMBER
null
null
null
> Feel free to delete the license list in `datasets` [...] > > Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.) _Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_ > [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? _Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4994/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4990/comments
https://api.github.com/repos/huggingface/datasets/issues/4990/events
https://github.com/huggingface/datasets/issues/4990
1,378,120,806
I_kwDODunzps5SJHRm
4,990
"no-token" is passed to `huggingface_hub` when token is `None`
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.", "Hi @albertvillanova , thanks for finding the original issue :+1: \r\n\r\nAs of next release of `huggingface_hub`, the `token` argument will be deprecated in favor of the `use_auth_token` argument in `dataset_info` method. This change as been done by @SBrandeis in https://github.com/huggingface/huggingface_hub/pull/928. `use_auth_token` is a bit different and allow the case \"don't sent the cached token by default\".\r\n\r\nIf you want to strictly avoid sending the cached token from `datasets`, you can use:\r\n```py\r\n# token=token if token else \"no-token\", <- will fail because token is not valid\r\n\r\nuse_auth_token=token if token else False, # using the new `use_auth_token` parameter\r\n```\r\n\r\nAnd as a note, I am currently updating the \"don't send the cached token by default\"-rule to \"don't send the cached token on public repos by default but use it in private ones\" in https://github.com/huggingface/huggingface_hub/pull/1064. This will not change the fact that `use_auth_token=False` doesn't send the token at all.\r\n", "What is current strategy in term of updating `huggingface_hub` version in `datasets` ? I don't want to break stuff in the next release so let's find a proper solution :) ", "As soon as `token` is deprecated and hfh has a new release, we'll update `datasets` to use the new argument instead. Does it sound good to you ?", "Perfect :ok_hand: ", "Hi @Wauplin, thanks for the warning about the deprecation of `token` in favor of `use_auth_token`.\r\n\r\nIndeed, in datasets we use internally `use_auth_token`, which in this case was transformed to `token` to call `HfApi.dataset_info`:\r\nhttps://github.com/huggingface/datasets/blob/1a9385d7cc8a3241b44015145ef56a230fdadc51/src/datasets/load.py#L747\r\n\r\nTherefore, for the new hfh release, the fix will be trivial: we will pass directly `use_auth_token`.\r\n\r\nAs discussed during our meeting yesterday, due to the fact that at datasets we support multiple hfh versions, I think we should handle passing `token` or `use_auth_token` depending on the hfh version." ]
2022-09-19T15:14:40
2022-09-30T09:16:00
2022-09-30T09:16:00
CONTRIBUTOR
null
null
null
## Describe the bug In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is the purpose of it? If there is no real purpose, I would prefer that the `None` value be sent directly and handled by `huggingface_hub`. I feel that this only works because we assume the token will never be validated. https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753 https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121 ## Expected results Pass `token=None` to `huggingface_hub`. ## Actual results `token="no-token"` is passed. ## Environment info `huggingface_hub v0.10.0dev`
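Editor's note: a rough compatibility sketch (not the actual `datasets` code) of the approach discussed in the comments — prefer `use_auth_token` on newer `huggingface_hub` releases and fall back to the old `token` argument otherwise. The 0.10.0 cut-off and the helper name are assumptions.

```python
# Rough sketch, not the real implementation: choose between the deprecated
# `token` argument and the newer `use_auth_token` depending on the installed
# huggingface_hub version. The 0.10.0 cut-off is an assumption.
import huggingface_hub
from huggingface_hub import HfApi
from packaging import version


def dataset_info_compat(repo_id, use_auth_token=None):
    api = HfApi()
    if version.parse(huggingface_hub.__version__) >= version.parse("0.10.0"):
        # use_auth_token=False explicitly avoids sending the locally cached token
        return api.dataset_info(repo_id, use_auth_token=use_auth_token or False)
    # older releases: the "no-token" placeholder avoided using the cached token
    return api.dataset_info(repo_id, token=use_auth_token if use_auth_token else "no-token")
```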
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4990/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4989/comments
https://api.github.com/repos/huggingface/datasets/issues/4989/events
https://github.com/huggingface/datasets/issues/4989
1,376,832,233
I_kwDODunzps5SEMrp
4,989
Running add_column() seems to corrupt existing sequence-type column info
{ "login": "derek-rocheleau", "id": 93728165, "node_id": "U_kgDOBZYtpQ", "avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/derek-rocheleau", "html_url": "https://github.com/derek-rocheleau", "followers_url": "https://api.github.com/users/derek-rocheleau/followers", "following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}", "gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}", "starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions", "organizations_url": "https://api.github.com/users/derek-rocheleau/orgs", "repos_url": "https://api.github.com/users/derek-rocheleau/repos", "events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}", "received_events_url": "https://api.github.com/users/derek-rocheleau/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Nevermind, I was incorrect." ]
2022-09-17T17:42:05
2022-09-19T12:54:54
2022-09-19T12:54:54
NONE
null
null
null
I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like: ds = load_dataset(...) df = ds.to_pandas() df: foo_0 | foo_1 | foo_2 | foo_3 0.0 | 1.0 | 2.0 | 3.0 If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be: ds = load_dataset(...) new_ds = ds.add_column("new_col", data) df = new_ds.to_pandas() df: foo | new_col [0.0, 1.0, 2.0, 3.0] | new_val I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different.
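Editor's note: a short sketch one could use to check the reported behavior (the reporter later noted in the comments that the report was mistaken). The column names and values below are made up for illustration.

```python
# Quick check of to_pandas() before and after add_column(); illustrative data only.
from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict(
    {"foo": [[0.0, 1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0]]},
    features=Features({"foo": Sequence(Value("float64"), length=4)}),
)
print(ds.to_pandas())   # "foo" is a single list-valued column

ds2 = ds.add_column("new_col", ["a", "b"])
print(ds2.to_pandas())  # same "foo" column plus "new_col"
```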
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4989/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4988/comments
https://api.github.com/repos/huggingface/datasets/issues/4988/events
https://github.com/huggingface/datasets/issues/4988
1,376,096,584
I_kwDODunzps5SBZFI
4,988
Add `IterableDataset.from_generator` to the API
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "hamid-vakilzadeh", "id": 56002455, "node_id": "MDQ6VXNlcjU2MDAyNDU1", "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamid-vakilzadeh", "html_url": "https://github.com/hamid-vakilzadeh", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "type": "User", "site_admin": false }
[ { "login": "hamid-vakilzadeh", "id": 56002455, "node_id": "MDQ6VXNlcjU2MDAyNDU1", "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamid-vakilzadeh", "html_url": "https://github.com/hamid-vakilzadeh", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "type": "User", "site_admin": false } ]
null
[ "#take", "Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help", "Thank you! I certainly will reach out if I need any help." ]
2022-09-16T15:19:41
2022-10-05T12:10:49
2022-10-05T12:10:49
CONTRIBUTOR
null
null
null
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator. cc @lhoestq
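Editor's note: for reference, a small usage sketch of the requested method, assuming it mirrors the calling convention of `Dataset.from_generator` (the issue was later closed as completed). The generator is a toy example.

```python
# Toy usage sketch of IterableDataset.from_generator, assuming the same calling
# convention as Dataset.from_generator.
from datasets import IterableDataset


def gen():
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}


ids = IterableDataset.from_generator(gen)
for example in ids:
    print(example)  # {"id": 0, "text": "example 0"}, ...
```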
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4988/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4983/comments
https://api.github.com/repos/huggingface/datasets/issues/4983/events
https://github.com/huggingface/datasets/issues/4983
1,375,667,654
I_kwDODunzps5R_wXG
4,983
How to convert torch.utils.data.Dataset to huggingface dataset?
{ "login": "DEROOCE", "id": 77595952, "node_id": "MDQ6VXNlcjc3NTk1OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DEROOCE", "html_url": "https://github.com/DEROOCE", "followers_url": "https://api.github.com/users/DEROOCE/followers", "following_url": "https://api.github.com/users/DEROOCE/following{/other_user}", "gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}", "starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions", "organizations_url": "https://api.github.com/users/DEROOCE/orgs", "repos_url": "https://api.github.com/users/DEROOCE/repos", "events_url": "https://api.github.com/users/DEROOCE/events{/privacy}", "received_events_url": "https://api.github.com/users/DEROOCE/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```", "Maybe `Dataset.from_list` can work as well no ?\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndset = Dataset.from_list(torch_dataset)\r\n```", "> ```python\r\n> from datasets import Dataset\r\n> \r\n> def gen():\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> ## or if it's an IterableDataset\r\n> # for ex in torch_dataset:\r\n> # yield ex\r\n> \r\n> dset = Dataset.from_generator(gen)\r\n> ```\r\n\r\nI try to use `Dataset.from_generator()` method, and it returns an error:\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_generator'\r\n```\r\nAnd I think it maybe the version of my datasets package is out-of-date, so I update it\r\n```bash\r\npip install --upgrade datasets\r\n```\r\nBut after that, the code still return the above Error. ", "> ```python\r\n> dset = Dataset.from_list(torch_dataset)\r\n> ```\r\n\r\nIt seems that Dataset also has no `from_list` method 😂\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_list'\r\n```", "> I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> \r\n> ```python\r\n> from datasets import Dataset\r\n> data = [[1, 2],[3, 4]]\r\n> ds = Dataset.from_dict({\"data\": data})\r\n> ds = ds.with_format(\"torch\")\r\n> ds[0]\r\n> ds[:2]\r\n> ```\r\n> \r\n> So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n\r\nMy dummy code is like:\r\n```python\r\nimport os\r\nimport json\r\nfrom torch.utils import data\r\nimport datasets\r\n\r\ndef gen(torch_dataset):\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n\r\nclass MyDataset(data.Dataset):\r\n def __init__(self, path):\r\n self.dict = []\r\n for line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n self.dict.append(j_dict['context'])\r\n \r\n def __getitem__(self, idx):\r\n return self.dict[idx]\r\n\r\n def __len__(self):\r\n return len(self.dict)\r\n\r\nroot_path = os.path.dirname(os.path.abspath(__file__))\r\npath = os.path.join(root_path, 'dataset', 'train.json')\r\ntorch_dataset = MyDataset(path)\r\n\r\ndit = []\r\nfor line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n dit.append(j_dict['context'])\r\ndset1 = datasets.Dataset.from_list(dit)\r\nprint(dset1)\r\ndset2 = datasets.Dataset.from_generator(gen)\r\nprint(dset2)\r\n```", "We're releasing `from_generator` and `from_list` today :)\r\nIn the meantime you can play with them by installing `datasets` from source", "> We're releasing `from_generator` and `from_list` today :) In the meantime you can play with them by installing `datasets` from source\r\n\r\nThanks a lot for your work!", "> > I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. 
However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> > ```python\r\n> > from datasets import Dataset\r\n> > data = [[1, 2],[3, 4]]\r\n> > ds = Dataset.from_dict({\"data\": data})\r\n> > ds = ds.with_format(\"torch\")\r\n> > ds[0]\r\n> > ds[:2]\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n> \r\n> My dummy code is like:\r\n> \r\n> ```python\r\n> import os\r\n> import json\r\n> from torch.utils import data\r\n> import datasets\r\n> \r\n> def gen(torch_dataset):\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> \r\n> class MyDataset(data.Dataset):\r\n> def __init__(self, path):\r\n> self.dict = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> self.dict.append(j_dict['context'])\r\n> \r\n> def __getitem__(self, idx):\r\n> return self.dict[idx]\r\n> \r\n> def __len__(self):\r\n> return len(self.dict)\r\n> \r\n> root_path = os.path.dirname(os.path.abspath(__file__))\r\n> path = os.path.join(root_path, 'dataset', 'train.json')\r\n> torch_dataset = MyDataset(path)\r\n> \r\n> dit = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> dit.append(j_dict['context'])\r\n> dset1 = datasets.Dataset.from_list(dit)\r\n> print(dset1)\r\n> dset2 = datasets.Dataset.from_generator(gen)\r\n> print(dset2)\r\n> ```\r\nHi, when I am using this code to build my own dataset, ` datasets.Dataset.from_generator(gen)` report `TypeError: cannot pickle generator object` whre MyDataset returns a dict like {'image': bytes, 'text': string}. How can I resolve this? Thanks a lot!", "Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n\r\nIn the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n```python\r\nwith open(...) as f:\r\n\r\n def gen():\r\n for x in f:\r\n yield json.loads(x)\r\n\r\n ds = Dataset.from_generator(gen)\r\n```\r\nbut this does work:\r\n```python\r\ndef gen():\r\n with open(...) as f:\r\n for x in f:\r\n yield json.loads(x)\r\n\r\nds = Dataset.from_generator(gen)\r\n```", "> Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n> \r\n> In the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n> \r\n> ```python\r\n> with open(...) as f:\r\n> \r\n> def gen():\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n> \r\n> but this does work:\r\n> \r\n> ```python\r\n> def gen():\r\n> with open(...) as f:\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n\r\nThanks a lot! That's the reason why I have encountered this issue. 
Sorry for bothering you again with another problem, since my dataset is large and I use IterableDataset.from_generator which has no attribute with_transform, how can I equip it with some customed preprocessings like Dataset.from_generator? Should I move the preprocessing to the my torch Dataset?", "Iterable datasets are lazy: exactly like `with_transform` they apply processing on the fly when accessing the examples.\r\n\r\nTherefore you can use `my_iterable_dataset.map()` instead :)", "@lhoestq thanks a lot and I have successfully made it work~", "@lhoestq I am having a similar issue. Can you help me understand which kinds of generators are picklable? I previously thought that no generators are picklable so I'm intrigued to hear this.", "Generator functions are generally picklable. E.g.\r\n```python\r\nimport dill as pickle\r\n\r\ndef generator_fn():\r\n for i in range(10):\r\n yield i\r\n\r\npickle.dumps(generator_fn)\r\n```\r\n\r\nhowever generators are not picklable\r\n```python\r\ngenerator = generator_fn()\r\npickle.dumps(generator)\r\n# TypeError: cannot pickle 'generator' object\r\n```\r\n\r\nThough it can happen that some generator functions are not recursively picklable if they use global objects that are not picklable:\r\n```python\r\ndef generator_fn_not_picklable():\r\n for i in generator:\r\n yield i\r\n\r\npickle.dumps(generator_fn_not_picklable, recurse=True)\r\n# TypeError: cannot pickle 'generator' object\r\n````", "I'm trying to create an IterableDataset from a generator but I get this error:\r\n`PicklingError: Can't pickle <built-in function input>: it's not the same object as builtins.input`\r\n\r\nWhat can I do?" ]
2022-09-16T09:15:10
2023-12-14T20:54:15
2022-09-20T11:23:43
NONE
null
null
null
I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below: ```python from datasets import Dataset data = [[1, 2],[3, 4]] ds = Dataset.from_dict({"data": data}) ds = ds.with_format("torch") ds[0] ds[:2] ``` So is there something I missed, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this conversion? Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4983/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4983/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4982/comments
https://api.github.com/repos/huggingface/datasets/issues/4982/events
https://github.com/huggingface/datasets/issues/4982
1,375,604,693
I_kwDODunzps5R_g_V
4,982
Create dataset_infos.json with VALIDATION and TEST splits
{ "login": "skalinin", "id": 26695348, "node_id": "MDQ6VXNlcjI2Njk1MzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/26695348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skalinin", "html_url": "https://github.com/skalinin", "followers_url": "https://api.github.com/users/skalinin/followers", "following_url": "https://api.github.com/users/skalinin/following{/other_user}", "gists_url": "https://api.github.com/users/skalinin/gists{/gist_id}", "starred_url": "https://api.github.com/users/skalinin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skalinin/subscriptions", "organizations_url": "https://api.github.com/users/skalinin/orgs", "repos_url": "https://api.github.com/users/skalinin/repos", "events_url": "https://api.github.com/users/skalinin/events{/privacy}", "received_events_url": "https://api.github.com/users/skalinin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)", "Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?", "Yes, it worked! thanks a lot" ]
2022-09-16T08:21:19
2022-09-28T07:59:39
2022-09-28T07:59:39
NONE
null
null
null
The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569). > When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error: > ValueError: Unknown split "test". Should be one of ['train']. > > The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN > > You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch) I tried to clear the cache folder, than I got an another error. I run: ``` git clone https://huggingface.co/datasets/sberbank-ai/Peter cd Peter git checkout add_splits # switch to a add_splits branch rm dataset_infos.json # remove local dataset_infos.json rm -r ~/.cache/huggingface # remove cached dataset_infos.json datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json ``` The error message: ``` Using custom data configuration default Testing builder 'default' (1/1) Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d... Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s] Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run builder.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators data_files = dl_manager.download_and_extract(_URLS) File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract extracted_paths = map_nested( File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested mapped = [ File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path output_path = ExtractManager(cache_dir=download_config.cache_dir).extract( File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract self.extractor.extract(input_path, output_path, extractor_format) File 
"/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract with FileLock(lock_path): File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__ max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax FileNotFoundError: [Errno 2] No such file or directory: '' Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10> Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__ self.release(force=True) File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release with self._thread_lock: AttributeError: 'UnixFileLock' object has no attribute '_thread_lock' Extracting data files: 0%| | 0/4 [00:00<?, ?it/s] ``` Can you help me please? ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4982/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4982/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4981/comments
https://api.github.com/repos/huggingface/datasets/issues/4981/events
https://github.com/huggingface/datasets/issues/4981
1,375,086,773
I_kwDODunzps5R9ii1
4,981
Can't create a dataset with `float16` features
{ "login": "dconathan", "id": 15098095, "node_id": "MDQ6VXNlcjE1MDk4MDk1", "avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dconathan", "html_url": "https://github.com/dconathan", "followers_url": "https://api.github.com/users/dconathan/followers", "following_url": "https://api.github.com/users/dconathan/following{/other_user}", "gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconathan/subscriptions", "organizations_url": "https://api.github.com/users/dconathan/orgs", "repos_url": "https://api.github.com/users/dconathan/repos", "events_url": "https://api.github.com/users/dconathan/events{/privacy}", "received_events_url": "https://api.github.com/users/dconathan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types", "Thanks for the link…. didn’t realize arrow didn’t support it yet. Should it be removed from https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Value until Arrow supports it?", "Yes, you are right: maybe we should either remove it from our docs or add a comment explaining the issue.\r\n\r\nThe thing is that in Arrow it is partially supported: you can create `float16` values, but you can't cast them from/to other types. And current implementation of `Value` always tries to perform a cast from `float64` to `float16`.", "Maybe we can just add a note in the `Value` documentation ?", "Would you accept a PR to fix this? @lhoestq Do you have an idea of how hard it would be to fix?", "I think the issue comes mostly from pyarrow not supporting `float16` completely.\r\n\r\nFor example you stil can't cast from/to `float16`\r\n```python\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\npa.array(range(5)).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float\r\npa.array(range(5), pa.float32()).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from float to halffloat using function cast_half_float\r\npa.array(range(5), pa.float16())\r\n# ArrowTypeError: Expected np.float16 instance\r\npa.array(np.arange(5, dtype=np.float16())).cast(pa.float32())\r\n# ArrowNotImplementedError: Unsupported cast from halffloat to float using function cast_float\r\n```", "Hmm it seems like we can either:\r\n1. try to fix pyarrow upstream\r\n2. half-support float16 with some workaround to make sure we don't ever do casting internally\r\n" ]
2022-09-15T21:03:24
2023-03-22T21:40:09
null
CONTRIBUTOR
null
null
null
## Describe the bug I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error. The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases? Thanks! ## Steps to reproduce the bug All of the following raise the following error with the same exact (as far as I can tell) traceback: ```python ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ```python from datasets import Dataset, Features, Value Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16"))) import numpy as np Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16"))) import torch Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16"))) ``` ## Expected results A dataset with `float16` features is successfully created. ## Actual results ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) Cell In [14], line 1 ----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16"))) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split) 865 mapping = features.encode_batch(mapping) 866 mapping = { 867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col) 868 for col, data in mapping.items() 869 } --> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping) 871 if info.features is None: 872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()}) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs) 734 @classmethod 735 def from_pydict(cls, *args, **kwargs): 736 """ 737 Construct a Table from Arrow arrays or columns 738 (...) 748 :class:`datasets.table.Table`: 749 """ --> 750 return cls(pa.Table.from_pydict(*args, **kwargs)) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type) 192 # otherwise we can finally use the user's type 193 elif type is not None: 194 # We use cast_array_to_feature to support casting to custom types like Audio and Image 195 # Also, when trying type "string", we don't want to convert integers or floats to "string". 196 # We only do it if trying_type is False - since this is what the user asks for. 
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 198 return out 199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str) 1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str) 1852 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) 1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str) 1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type): 1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") -> 1762 return array.cast(pa_type) 1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options) 387 else: 388 options = CastOptions.safe(target_type) --> 389 return call_function("cast", [arr], options) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status() ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4981/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4980/comments
https://api.github.com/repos/huggingface/datasets/issues/4980/events
https://github.com/huggingface/datasets/issues/4980
1,374,868,083
I_kwDODunzps5R8tJz
4,980
Make `pyarrow` optional
{ "login": "KOLANICH", "id": 240344, "node_id": "MDQ6VXNlcjI0MDM0NA==", "avatar_url": "https://avatars.githubusercontent.com/u/240344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KOLANICH", "html_url": "https://github.com/KOLANICH", "followers_url": "https://api.github.com/users/KOLANICH/followers", "following_url": "https://api.github.com/users/KOLANICH/following{/other_user}", "gists_url": "https://api.github.com/users/KOLANICH/gists{/gist_id}", "starred_url": "https://api.github.com/users/KOLANICH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KOLANICH/subscriptions", "organizations_url": "https://api.github.com/users/KOLANICH/orgs", "repos_url": "https://api.github.com/users/KOLANICH/repos", "events_url": "https://api.github.com/users/KOLANICH/events{/privacy}", "received_events_url": "https://api.github.com/users/KOLANICH/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite / a different library with minimal functionality (datasets-lite ?)", "Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ", "Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n" ]
2022-09-15T17:38:03
2022-09-16T17:23:47
2022-09-16T17:23:47
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Is `pyarrow` really needed for every dataset? **Describe the solution you'd like** It is made optional. **Describe alternatives you've considered** Likely, no.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4980/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4977/comments
https://api.github.com/repos/huggingface/datasets/issues/4977/events
https://github.com/huggingface/datasets/issues/4977
1,372,962,157
I_kwDODunzps5R1b1t
4,977
Providing dataset size
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926", "Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API", "Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests/additional coding) would be really useful :hugs: " ]
2022-09-14T13:09:27
2022-09-15T16:03:58
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded). **Describe the solution you'd like** Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some). **Describe alternatives you've considered** People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face: **Additional context** Mentioned to @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4977/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4977/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4976/comments
https://api.github.com/repos/huggingface/datasets/issues/4976/events
https://github.com/huggingface/datasets/issues/4976
1,372,322,382
I_kwDODunzps5Ry_pO
4,976
Hope to adapt to Python 3.9 as soon as possible
{ "login": "RedHeartSecretMan", "id": 74012141, "node_id": "MDQ6VXNlcjc0MDEyMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/74012141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RedHeartSecretMan", "html_url": "https://github.com/RedHeartSecretMan", "followers_url": "https://api.github.com/users/RedHeartSecretMan/followers", "following_url": "https://api.github.com/users/RedHeartSecretMan/following{/other_user}", "gists_url": "https://api.github.com/users/RedHeartSecretMan/gists{/gist_id}", "starred_url": "https://api.github.com/users/RedHeartSecretMan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RedHeartSecretMan/subscriptions", "organizations_url": "https://api.github.com/users/RedHeartSecretMan/orgs", "repos_url": "https://api.github.com/users/RedHeartSecretMan/repos", "events_url": "https://api.github.com/users/RedHeartSecretMan/events{/privacy}", "received_events_url": "https://api.github.com/users/RedHeartSecretMan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?", "There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^", "Perhaps we should report this issue in the `filelock` repo?" ]
2022-09-14T04:42:22
2022-09-26T16:32:35
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context about the feature request here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4976/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4965/comments
https://api.github.com/repos/huggingface/datasets/issues/4965/events
https://github.com/huggingface/datasets/issues/4965
1,368,661,002
I_kwDODunzps5RlBwK
4,965
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
{ "login": "hoangtnm", "id": 35718590, "node_id": "MDQ6VXNlcjM1NzE4NTkw", "avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoangtnm", "html_url": "https://github.com/hoangtnm", "followers_url": "https://api.github.com/users/hoangtnm/followers", "following_url": "https://api.github.com/users/hoangtnm/following{/other_user}", "gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}", "starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions", "organizations_url": "https://api.github.com/users/hoangtnm/orgs", "repos_url": "https://api.github.com/users/hoangtnm/repos", "events_url": "https://api.github.com/users/hoangtnm/events{/privacy}", "received_events_url": "https://api.github.com/users/hoangtnm/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.", "Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?", "Hi @hoangtnm - I upgraded to python 3.10 and it fixed the problem for me. I was also running 3.8 on an M1 mac.", "Same here, upgrade python didn't work for me \r\n\r\nMemoryError: Cannot allocate write+execute memory for ffi.callback()\r\n\r\nany idea?", "This is a `soundfile` issue, so there isn't much we can do about it. Hopefully, it gets fixed soon." ]
2022-09-10T15:55:49
2024-01-12T14:37:32
2023-07-21T14:45:50
NONE
null
null
null
## Describe the bug I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work. ## Steps to reproduce the bug ```python import datasets dataset = load_dataset("csv", data_files="./train.csv")["train"] dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])}) dataset = dataset.cast_column("audio", Audio()) dataset[0] ``` ## Expected results ``` {'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'}, 'english_transcription': 'I would like to set up a joint account with my partner', 'intent_class': 11, 'lang_id': 4, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'transcription': 'I would like to set up a joint account with my partner'} ``` ## Actual results ````--------------------------------------------------------------------------- MemoryError Traceback (most recent call last) Input In [6], in <cell line: 1>() ----> 1 dataset[0] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key) 2163 def __getitem__(self, key): # noqa: F811 2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2165 return self._getitem( 2166 key, 2167 ) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs) 2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2150 formatted_output = format_table( 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2152 ) 2153 return formatted_output File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id) 
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ -> 1647 return { 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0) 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ 1647 return { -> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id) 1257 # Object with special decoding: 1258 elif isinstance(schema, (Audio, Image)): 1259 # we pass the token to read and decode files from private repositories in streaming mode -> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None 1261 return obj File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id) 154 array, sampling_rate = self._decode_non_mp3_file_like(file) 155 else: --> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) 157 return {"path": path, "array": array, "sampling_rate": sampling_rate} File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id) 254 use_auth_token = None 256 with xopen(path, "rb", use_auth_token=use_auth_token) as f: --> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 258 return array, sampling_rate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs) 86 extra_args = len(args) - len(all_args) 87 if extra_args <= 0: ---> 88 return f(*args, **kwargs) 90 # extra_args > 0 91 args_msg = [ 92 "{}={}".format(name, arg) 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:]) 94 ] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type) 161 else: 162 # Otherwise try soundfile first, and then fall back if necessary 163 try: --> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype) 166 except RuntimeError as exc: 167 # If soundfile failed, try audioread instead 168 if isinstance(path, (str, pathlib.PurePath)): File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype) 192 context = path 193 else: 194 # Otherwise, create the soundfile object --> 195 context = sf.SoundFile(path) 197 with context 
as sf_desc: 198 sr_native = sf_desc.samplerate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 626 self._mode = mode 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) --> 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) 632 self.seek(0) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd) 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd) 1178 elif _has_virtual_io_attrs(file, mode_int): -> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file), 1180 mode_int, self._info, _ffi.NULL) 1181 else: 1182 raise TypeError("Invalid file: {0!r}".format(self.name)) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file) 1194 def _init_virtual_io(self, file): 1195 """Initialize callback functions for sf_open_virtual().""" 1196 @_ffi.callback("sf_vio_get_filelen") -> 1197 def vio_get_filelen(user_data): 1198 curr = file.tell() 1199 file.seek(0, SEEK_END) MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4965/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4964/comments
https://api.github.com/repos/huggingface/datasets/issues/4964/events
https://github.com/huggingface/datasets/issues/4964
1,368,617,322
I_kwDODunzps5Rk3Fq
4,964
Column of arrays (2D+) is using unreasonably high memory
{ "login": "vigsterkr", "id": 30353, "node_id": "MDQ6VXNlcjMwMzUz", "avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vigsterkr", "html_url": "https://github.com/vigsterkr", "followers_url": "https://api.github.com/users/vigsterkr/followers", "following_url": "https://api.github.com/users/vigsterkr/following{/other_user}", "gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}", "starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions", "organizations_url": "https://api.github.com/users/vigsterkr/orgs", "repos_url": "https://api.github.com/users/vigsterkr/repos", "events_url": "https://api.github.com/users/vigsterkr/events{/privacy}", "received_events_url": "https://api.github.com/users/vigsterkr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.", "Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.", "Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them with\r\n```python\r\ndataset.save_to_disk(\"path/to/local\")\r\ndataset = load_from_disk(\"path/to/local\")\r\n```\r\nthis way you'll end up with a dataset loaded from your disk using memory mapping, and it won't fill up your RAM :)\r\n\r\nrelated to https://github.com/huggingface/datasets/issues/4861", "@lhoestq thnx for getting back to me! i've tested the suggested method, but unfortunately the memory consumption is the very same:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Array2D, Array3D, load_from_disk\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\ndataset.save_to_disk(\"foo\")\r\n\r\nfoo_db = load_from_disk(\"foo\")\r\ncolum_value = foo_db[column_name]\r\n```\r\n\r\nthe very same happens when you create the dataset, but dont specify the feature type.\r\n\r\ni've tried running this on different envs (macOS, linux) and it's behaving the very same way.", "When you call `colum_value = foo_db[column_name]`, you load the full column in memory.\r\n\r\nIf you want to avoid filling up your memory, you can access chunks of data instead\r\n```python\r\nembeddings = dataset[i:i + chunk_size][\"embeddings\"]\r\n```", "@lhoestq yeah that's intentional, i.e. i really want to load the whole column into the memory. but as said above there's an unreasonable amount of overhead for the memory. the np array itself is using about 1G of memory:\r\n```\r\n>>> getsizeof(data)/1024/1024\r\n937.5001525878906\r\n```\r\nthat accessing of column above is using 10x memory compared to the original numpy array.", "The dataset must be twice as big because we use regular arrow ListArray under the hood and not FixedSizeListArray. Basically we store unnecessary offsets.\r\n\r\nAnd this should affect performance as well. When we developed this, FixedSizeListArray still had some issues but they should be resolved on the PyArrow side now", "A doubling would be fine. My very basic understanding of PyArrow is that using ListArray is probably related to the issue though. Using a multi-dimensional array in datasets is storing everything as strange nested 1d object arrays, which I imagine is creating the massive overhead.\r\n\r\nI think it should be a PyArrow Tensor, no?", "PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694", "That's... unfortunate. I didn't realize that." ]
2022-09-10T13:07:22
2022-09-22T18:29:22
null
CONTRIBUTOR
null
null
null
## Describe the bug When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating depending on how you create it, see code below) will cause more than 10 fold of memory usage. ## Steps to reproduce the bug ```python from datasets import Dataset, Features, Array2D, Array3D import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")})) ``` the code above will use about 10Gb of RAM while constructing the `dataset` object. The code below will use roughly the same amount of memory (and time) when trying to actually access the data itself of that column. ```python from datasets import Dataset import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}) dataset[column_name] ``` ## Expected results Some memory overhead, but not like as it is now and certainly not an overhead of such runtime that is currently happening. ## Actual results Enormous memory- and runtime overhead. ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4964/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4963/comments
https://api.github.com/repos/huggingface/datasets/issues/4963/events
https://github.com/huggingface/datasets/issues/4963
1,368,201,188
I_kwDODunzps5RjRfk
4,963
Dataset without script does not support regular JSON data file
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. " ]
2022-09-09T18:45:33
2022-09-20T15:40:07
2022-09-20T15:40:07
MEMBER
null
null
null
### Link https://huggingface.co/datasets/julien-c/label-studio-my-dogs ### Description <img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png"> ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4963/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4961/comments
https://api.github.com/repos/huggingface/datasets/issues/4961/events
https://github.com/huggingface/datasets/issues/4961
1,368,124,033
I_kwDODunzps5Ri-qB
4,961
fsspec 2022.8.2 breaks xopen in streaming mode
{ "login": "DCNemesis", "id": 3616964, "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DCNemesis", "html_url": "https://github.com/DCNemesis", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "repos_url": "https://api.github.com/users/DCNemesis/repos", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.", "Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.", "Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n", "@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ", "Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.", "Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010" ]
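A quick way to confirm which `fsspec` build is actually loaded (the discussion above shows Colab was still on a yanked release) is to print the version at runtime; this is a small sketch, not part of the original report.

```python
# Sketch: verify the installed fsspec release; 2022.8.0 and 2022.8.1 were yanked,
# and 2022.8.2 contains the fix mentioned above.
import fsspec

print(fsspec.__version__)
if fsspec.__version__ in ("2022.8.0", "2022.8.1"):
    raise RuntimeError("Broken fsspec release detected; run `pip install -U fsspec`.")
```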
2022-09-09T17:26:55
2022-09-12T17:45:50
2022-09-12T14:32:05
NONE
null
null
null
## Describe the bug When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable. ## Steps to reproduce the bug ```python import datasets data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True) ``` ## Expected results Dataset should load as iterator. ## Actual results ``` [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1737 # Return iterable dataset in case of streaming 1738 if streaming: -> 1739 return builder_instance.as_streaming_dataset(split=split) 1740 1741 # Some datasets are already processed on the HF google storage [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path) 1023 ) 1024 self._check_manual_download(dl_manager) -> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 1026 # By default, return all splits 1027 if split is None: [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split) 267 # for streaming case 268 def _download_audio_archives(dl_manager, lang, format, split): --> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split) 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths] [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split) 251 n_files_path = dl_manager.download(n_files_url) 252 --> 253 with open(n_files_path, "r", encoding="utf-8") as file: 254 n_files = int(file.read().strip()) # the file contains a number of archives 255 ValueError: I/O operation on closed file. ``` ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4961/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4960/comments
https://api.github.com/repos/huggingface/datasets/issues/4960/events
https://github.com/huggingface/datasets/issues/4960
1,368,035,159
I_kwDODunzps5Rio9X
4,960
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'
{ "login": "DSLituiev", "id": 8426290, "node_id": "MDQ6VXNlcjg0MjYyOTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8426290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DSLituiev", "html_url": "https://github.com/DSLituiev", "followers_url": "https://api.github.com/users/DSLituiev/followers", "following_url": "https://api.github.com/users/DSLituiev/following{/other_user}", "gists_url": "https://api.github.com/users/DSLituiev/gists{/gist_id}", "starred_url": "https://api.github.com/users/DSLituiev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DSLituiev/subscriptions", "organizations_url": "https://api.github.com/users/DSLituiev/orgs", "repos_url": "https://api.github.com/users/DSLituiev/repos", "events_url": "https://api.github.com/users/DSLituiev/events{/privacy}", "received_events_url": "https://api.github.com/users/DSLituiev/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps/bioasq_task_b\"`), before it would not even mention that it requires `name` argument", "Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https://huggingface.co/datasets/aps/bioasq_task_b/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error" ]
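For the request above about discovering valid `name` values, `datasets` exposes a helper that lists a dataset's configurations; whether it returns the full BigBio config list here depends on how the dataset script defines its configs, so this is only a sketch.

```python
# Sketch: list the available configuration names for the BioASQ loader,
# e.g. to find "bioasq_9b_source" before calling load_dataset.
from datasets import get_dataset_config_names

config_names = get_dataset_config_names("aps/bioasq_task_b")
print(config_names)
```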
2022-09-09T16:06:43
2022-09-13T08:51:03
null
NONE
null
null
null
## Describe the bug I am trying to load a dataset from drive and running into an error. ## Steps to reproduce the bug ```python data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) ``` ## Actual results `AttributeError: 'BuilderConfig' object has no attribute 'schema'` <details> ``` Using custom data configuration default-a1ca3e05be5abf2f --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [8], in <cell line: 2>() 1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" ----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1720 ignore_verifications = ignore_verifications or save_infos 1722 # Create a dataset builder -> 1723 builder_instance = load_dataset_builder( 1724 path=path, 1725 name=name, 1726 data_dir=data_dir, 1727 data_files=data_files, 1728 cache_dir=cache_dir, 1729 features=features, 1730 download_config=download_config, 1731 download_mode=download_mode, 1732 revision=revision, 1733 use_auth_token=use_auth_token, 1734 **config_kwargs, 1735 ) 1737 # Return iterable dataset in case of streaming 1738 if streaming: File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1523 raise ValueError(error_msg) 1525 # Instantiate the dataset builder -> 1526 builder_instance: DatasetBuilder = builder_cls( 1527 cache_dir=cache_dir, 1528 config_name=config_name, 1529 data_dir=data_dir, 1530 data_files=data_files, 1531 hash=hash, 1532 features=features, 1533 use_auth_token=use_auth_token, 1534 **builder_kwargs, 1535 **config_kwargs, 1536 ) 1538 return builder_instance File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1153 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1154 super().__init__(*args, **kwargs) 1155 # Batch size used by the ArrowWriter 1156 # It defines the number of samples that are kept in memory before writing them 1157 # and also the length of the arrow chunks 1158 # None means that the ArrowWriter will use its default value 1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 305 if info is None: 306 info = self.get_exported_dataset_info() --> 307 info.update(self._info()) 308 info.builder_name = self.name 309 info.config_name = self.config.name File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self) 474 def _info(self): 475 476 # BioASQ Task B source schema --> 477 if self.config.schema == "source": 478 features = datasets.Features( 479 { 480 "id": 
datasets.Value("string"), (...) 504 } 505 ) 506 # simplified schema for QA tasks AttributeError: 'BuilderConfig' object has no attribute 'schema' ``` </details> ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4960/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4958/comments
https://api.github.com/repos/huggingface/datasets/issues/4958/events
https://github.com/huggingface/datasets/issues/4958
1,367,695,376
I_kwDODunzps5RhWAQ
4,958
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
{ "login": "hasakikiki", "id": 66322047, "node_id": "MDQ6VXNlcjY2MzIyMDQ3", "avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasakikiki", "html_url": "https://github.com/hasakikiki", "followers_url": "https://api.github.com/users/hasakikiki/followers", "following_url": "https://api.github.com/users/hasakikiki/following{/other_user}", "gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions", "organizations_url": "https://api.github.com/users/hasakikiki/orgs", "repos_url": "https://api.github.com/users/hasakikiki/repos", "events_url": "https://api.github.com/users/hasakikiki/events{/privacy}", "received_events_url": "https://api.github.com/users/hasakikiki/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I have solved this problem... The extension of the file should be `.json` not `.jsonl`" ]
2022-09-09T11:29:55
2022-09-09T11:38:44
2022-09-09T11:38:44
NONE
null
null
null
Hi, When I use load_dataset with local jsonl files, the error below happens, and typing the link into the browser returns `404: Not Found`. I can download the other `.py` files using the same method and they work. It seems that the server is missing the appropriate file, or it is a problem with the code version. ``` ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"))) ```
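The 404 above comes from trying to fetch a `jsonl` loader script that does not exist at that location; local JSON Lines files can instead be passed to the packaged `json` builder. A minimal sketch (file paths are placeholders):

```python
# Sketch: load local JSON Lines data with the built-in "json" builder,
# so nothing needs to be fetched from raw.githubusercontent.com.
from datasets import load_dataset

dataset = load_dataset("json", data_files={"train": "train.jsonl"})
print(dataset)
```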
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4958/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4955/comments
https://api.github.com/repos/huggingface/datasets/issues/4955/events
https://github.com/huggingface/datasets/issues/4955
1,366,382,314
I_kwDODunzps5RcVbq
4,955
Raise a more precise error when the URL is unreachable in streaming mode
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2022-09-08T13:52:37
2022-09-08T13:53:36
null
CONTRIBUTOR
null
null
null
See for example: - https://github.com/huggingface/datasets/issues/3191 - https://github.com/huggingface/datasets/issues/3186 It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently: - https://huggingface.co/datasets/compguesswhat <img width="1029" alt="Capture d’écran 2022-09-08 à 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png"> - https://huggingface.co/datasets/nli_tr <img width="1032" alt="Capture d’écran 2022-09-08 à 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png"> cc @albertvillanova
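As a rough illustration of the kind of pre-check that could yield a clearer message (this is not the library's implementation; the function name and exception wording are assumptions):

```python
# Sketch: probe a URL before streaming and raise a precise, user-facing error.
import requests

def check_url_reachable(url: str, timeout: float = 10.0) -> None:
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
    except requests.RequestException as err:
        raise ConnectionError(f"Host for {url} is unreachable; the dataset's hosting may have moved.") from err
    if response.status_code >= 400:
        raise ConnectionError(f"{url} returned HTTP {response.status_code}; the file may have been removed.")
```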
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4955/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/4953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4953/comments
https://api.github.com/repos/huggingface/datasets/issues/4953/events
https://github.com/huggingface/datasets/issues/4953
1,366,356,514
I_kwDODunzps5RcPIi
4,953
CI test of TensorFlow is failing
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2022-09-08T13:39:29
2022-09-08T15:14:45
2022-09-08T15:14:45
MEMBER
null
null
null
## Describe the bug The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError: ``` Details: ``` _________________________ TempSeedTest.test_tensorflow _________________________ [gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow> @require_tf def test_tensorflow(self): import tensorflow as tf from tensorflow.keras import layers def gen_random_output(): model = layers.Dense(2) x = tf.random.uniform((1, 3)) return model(x).numpy() with temp_seed(42, set_tensorflow=True): out1 = gen_random_output() with temp_seed(42, set_tensorflow=True): out2 = gen_random_output() out3 = gen_random_output() > np.testing.assert_equal(out1, out2) E AssertionError: E Arrays are not equal E E Mismatched elements: 2 / 2 (100%) E Max absolute difference: 0.84619296 E Max relative difference: 16.083529 E x: array([[-0.793581, 0.333286]], dtype=float32) E y: array([[0.052612, 0.539708]], dtype=float32) tests/test_py_utils.py:149: AssertionError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4953/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4945/comments
https://api.github.com/repos/huggingface/datasets/issues/4945/events
https://github.com/huggingface/datasets/issues/4945
1,364,691,096
I_kwDODunzps5RV4iY
4,945
Push to hub can push splits that do not respect the regex
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2022-09-07T13:45:17
2022-09-13T10:16:35
2022-09-13T10:16:35
MEMBER
null
null
null
## Describe the bug The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing. ## Steps to reproduce the bug ```python >>> from datasets import Dataset, DatasetDict, load_dataset >>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]}) >>> di = DatasetDict() >>> di['identifier-with-column'] = d >>> di.push_to_hub('open-source-metrics/test') Pushing split identifier-with-column to the Hub. Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00, 4.40s/it] ``` Loading it afterwards: ```python >>> load_dataset('open-source-metrics/test') Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s] Using custom data configuration open-source-metrics--test-28b63ec7cde80488 Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec... Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s] Traceback (most recent call last): File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files})) File "<string>", line 5, in __init__ File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__ NamedSplit(self.name) # check that it's a valid split name File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__ raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.") ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'. ``` ## Expected results I would expect `push_to_hub` to stop me in my tracks if trying to upload a split that will not be working afterwards. ## Actual results See above ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36 - Python version: 3.10.6 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
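A sketch of the kind of client-side validation the issue asks for, reusing the split-name pattern shown in the traceback (`^\w+(\.\w+)*$`); the helper name is an assumption:

```python
# Sketch: reject invalid split names before pushing, mirroring the check that
# load_dataset applies on download.
import re

_split_re = r"^\w+(\.\w+)*$"

def validate_split_names(dataset_dict) -> None:
    for split_name in dataset_dict:
        if not re.match(_split_re, str(split_name)):
            raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")

# validate_split_names(di)  # would raise for 'identifier-with-column'
```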
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4945/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4944/comments
https://api.github.com/repos/huggingface/datasets/issues/4944/events
https://github.com/huggingface/datasets/issues/4944
1,364,313,569
I_kwDODunzps5RUcXh
4,944
larger dataset, larger GPU memory in the training phase? Is that correct?
{ "login": "debby1103", "id": 38886373, "node_id": "MDQ6VXNlcjM4ODg2Mzcz", "avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/debby1103", "html_url": "https://github.com/debby1103", "followers_url": "https://api.github.com/users/debby1103/followers", "following_url": "https://api.github.com/users/debby1103/following{/other_user}", "gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}", "starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/debby1103/subscriptions", "organizations_url": "https://api.github.com/users/debby1103/orgs", "repos_url": "https://api.github.com/users/debby1103/repos", "events_url": "https://api.github.com/users/debby1103/events{/privacy}", "received_events_url": "https://api.github.com/users/debby1103/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "does the trainer save it in GPU? sooo curious... how to fix it", "It's my bad. didn't limit the input length" ]
2022-09-07T08:46:30
2022-09-07T12:34:58
2022-09-07T12:34:58
NONE
null
null
null
from datasets import set_caching_enabled set_caching_enabled(False) for ds_name in ["squad","newsqa","nqopen","narrativeqa"]: train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name)) break train_ds = concatenate_datasets([train_ds,train_ds,train_ds,train_ds]) #operation 1 trainer = QuestionAnsweringTrainer( #huggingface trainer model=model, args=training_args, train_dataset=train_ds, eval_dataset= None, eval_examples=None, answer_column_name=answer_column, dataset_name="squad", tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, ) with operation 1, the GPU memory increases from 16G to 23G
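The resolution in the comments above was that the input length had not been limited; below is a small sketch of capping the tokenized length so memory does not scale with the longest examples (the checkpoint name and max length are illustrative assumptions).

```python
# Sketch: truncate inputs so GPU memory does not grow with very long examples.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
encoded = tokenizer(
    ["an example question about a long document"],
    truncation=True,
    max_length=384,
    padding="max_length",
)
print(len(encoded["input_ids"][0]))  # 384
```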
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4944/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4942/comments
https://api.github.com/repos/huggingface/datasets/issues/4942/events
https://github.com/huggingface/datasets/issues/4942
1,363,869,421
I_kwDODunzps5RSv7t
4,942
Trec Dataset has incorrect labels
{ "login": "wmpauli", "id": 6539145, "node_id": "MDQ6VXNlcjY1MzkxNDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6539145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wmpauli", "html_url": "https://github.com/wmpauli", "followers_url": "https://api.github.com/users/wmpauli/followers", "following_url": "https://api.github.com/users/wmpauli/following{/other_user}", "gists_url": "https://api.github.com/users/wmpauli/gists{/gist_id}", "starred_url": "https://api.github.com/users/wmpauli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmpauli/subscriptions", "organizations_url": "https://api.github.com/users/wmpauli/orgs", "repos_url": "https://api.github.com/users/wmpauli/repos", "events_url": "https://api.github.com/users/wmpauli/events{/privacy}", "received_events_url": "https://api.github.com/users/wmpauli/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`." ]
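A short sketch of the workaround quoted above, loading the dataset script from the main branch until the fixed release is out:

```python
# Sketch: pick up the corrected TREC labels by pinning the script revision.
from datasets import load_dataset

raw_datasets = load_dataset("trec", revision="main")
print(raw_datasets["test"][0])
```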
2022-09-06T22:13:40
2022-09-08T11:12:03
2022-09-08T11:12:03
NONE
null
null
null
## Describe the bug Both coarse and fine labels seem to be out of line. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = "trec" raw_datasets = load_dataset(dataset) df = pd.DataFrame(raw_datasets["test"]) df.head() ``` ## Expected results text (string) | coarse_label (class label) | fine_label (class label) -- | -- | -- How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist) What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city) Who was Galileo ? | 3 (HUM) | 31 (HUM:desc) What is an atom ? | 2 (DESC) | 24 (DESC:def) When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date) ## Actual results index | label-coarse |label-fine | text -- |-- | -- | -- 0 | 4 | 40 | How far is it from Denver to Aspen ? 1 | 5 | 21 | What county is Modesto , California in ? 2 | 3 | 12 | Who was Galileo ? 3 | 0 | 7 | What is an atom ? 4 | 4 | 8 | When did Hawaii become a state ? ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4942/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4936/comments
https://api.github.com/repos/huggingface/datasets/issues/4936/events
https://github.com/huggingface/datasets/issues/4936
1,363,274,907
I_kwDODunzps5RQeyb
4,936
vivos (Vietnamese speech corpus) dataset not accessible
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)", "@cahya-wirawan omg this is awesome!! thank you! ", "We have contacted the authors to ask them." ]
2022-09-06T13:17:55
2022-09-21T06:06:02
2022-09-12T07:14:20
CONTRIBUTOR
null
null
null
## Describe the bug VIVOS data is not accessible anymore, neither of these links work (at least from France): * https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data) * https://ailab.hcmus.edu.vn/vivos (dataset page) Therefore `load_dataset` doesn't work. ## Steps to reproduce the bug ```python ds = load_dataset("vivos") ``` ## Expected results dataset loaded ## Actual results ``` ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))"))) ``` Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4936/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4935/comments
https://api.github.com/repos/huggingface/datasets/issues/4935/events
https://github.com/huggingface/datasets/issues/4935
1,363,226,736
I_kwDODunzps5RQTBw
4,935
Dataset Viewer issue for ubuntu_dialogs_corpus
{ "login": "CibinQuadance", "id": 87330568, "node_id": "MDQ6VXNlcjg3MzMwNTY4", "avatar_url": "https://avatars.githubusercontent.com/u/87330568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CibinQuadance", "html_url": "https://github.com/CibinQuadance", "followers_url": "https://api.github.com/users/CibinQuadance/followers", "following_url": "https://api.github.com/users/CibinQuadance/following{/other_user}", "gists_url": "https://api.github.com/users/CibinQuadance/gists{/gist_id}", "starred_url": "https://api.github.com/users/CibinQuadance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CibinQuadance/subscriptions", "organizations_url": "https://api.github.com/users/CibinQuadance/orgs", "repos_url": "https://api.github.com/users/CibinQuadance/repos", "events_url": "https://api.github.com/users/CibinQuadance/events{/privacy}", "received_events_url": "https://api.github.com/users/CibinQuadance/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "The dataset maintainers (https://huggingface.co/datasets/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download), and the dataset viewer respects this.\r\nWe will try to improve the error display though. Thanks for reporting." ]
2022-09-06T12:41:50
2022-09-06T12:51:25
2022-09-06T12:51:25
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4935/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4934/comments
https://api.github.com/repos/huggingface/datasets/issues/4934/events
https://github.com/huggingface/datasets/issues/4934
1,363,034,253
I_kwDODunzps5RPkCN
4,934
Dataset Viewer issue for indonesian-nlp/librivox-indonesia
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The error is not related to the dataset viewer. I'm having a look...", "Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp/librivox-indonesia\")\r\nNo config specified, defaulting to: librivox-indonesia/all\r\nReusing dataset librivox-indonesia (/root/.cache/huggingface/datasets/indonesian-nlp___librivox-indonesia/all/1.0.0/9a934a42bfb53dc103003d191618443b8a786bea2bd7bb0bc2d9454b8494521e)\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 500.87it/s]\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['path', 'language', 'reader', 'sentence', 'audio'],\r\n num_rows: 7815\r\n })\r\n})\r\n>>> ds[\"train\"][0]\r\n{'path': '/root/.cache/huggingface/datasets/downloads/extracted/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3/librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3/librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'array': array([ 0. , 0. , 0. , ..., -0.02419001,\r\n -0.01957154, -0.01502833], dtype=float32), 'sampling_rate': 44100}}\r\n\r\n```\r\nIt would be just nice if I also can see it using dataset viewer.", "Yes, the issue arises when streaming (that is used by the viewer): your script does not support streaming and to support it in this case there are some subtleties that we are explaining better in our docs in a work-in progress pull request:\r\n- #4872\r\n\r\nJust note that when streaming, `local_extracted_archive` is None, and this code line generates the error:\r\n```python\r\nfilepath = local_extracted_archive + \"/librivox-indonesia/audio_transcription.csv\"\r\n```\r\n\r\nFor a proper implementation, you could have a look at: https://huggingface.co/datasets/common_voice/blob/main/common_voice.py\r\n\r\nYou can test your script locally by passing `streaming=True` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\n```", "Great, I will have a look and update the script. Thanks.", "Hi @albertvillanova , I just add the streaming functionality and it works in the first try :-) Thanks a lot!", "Awesome!!! :hugs: " ]
2022-09-06T10:03:23
2022-09-06T12:46:40
2022-09-06T12:46:40
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia ### Description I created a new speech dataset, https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work, with the following error message: ``` Server error Status code: 400 Exception: TypeError Message: unsupported operand type(s) for +: 'NoneType' and 'str' ``` Please help, I am not sure what the problem is here. Thanks a lot. ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4934/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4933/comments
https://api.github.com/repos/huggingface/datasets/issues/4933/events
https://github.com/huggingface/datasets/issues/4933
1,363,013,023
I_kwDODunzps5RPe2f
4,933
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda batch: [timestamp[:4] == \"2020\" for timestamp in batch[\"timestamp\"]],\r\n batched=True,\r\n)\r\n```\r\n\r\nLet me know if it helps !", "> Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n> [...]\r\n> Let me know if it helps !\r\n\r\nHi @lhoestq,\r\n\r\nAh, my bad, I totally forgot that part...\r\nSorry for the trouble and thank you for the kind help!" ]
2022-09-06T09:47:48
2022-09-06T11:44:27
2022-09-06T11:44:27
CONTRIBUTOR
null
null
null
## Describe the bug `Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. ## Steps to reproduce the bug (In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.) ```python from datasets import load_dataset ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead? ds_mc4_ja_2020 = ds_mc4_ja.filter( lambda example: example["timestamp"][:4] == "2020", batched=True, ) ``` ## Expected results No error ## Actual results ```python --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single offset=offset, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function indices_array = [i for i, to_keep in zip(indices, mask) if to_keep] TypeError: zip argument #2 must support iteration """ The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) /tmp/ipykernel_51348/2345782281.py in <module> 7 batched=True, 8 # batch_size=10_000, ----> 9 num_proc=111, 10 ) 11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter( /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 522 } 523 # apply actual function --> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 526 # re-apply format to the output /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 478 # Call actual function 479 --> 480 out = func(self, *args, **kwargs) 481 482 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, 
fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2920 new_fingerprint=new_fingerprint, 2921 input_columns=input_columns, -> 2922 desc=desc, 2923 ) 2924 new_dataset = copy.deepcopy(self) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2498 2499 for index, async_result in results.items(): -> 2500 transformed_shards[index] = async_result.get() 2501 2502 assert ( /opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout) 655 return self._value 656 else: --> 657 raise self._value 658 659 def _set(self, i, obj): TypeError: zip argument #2 must support iteration ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.)
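The traceback above points at `get_indices_from_mask_function` zipping the batch indices with the value returned by the filter function: with `batched=True` the function receives whole columns as lists, so `example["timestamp"][:4] == "2020"` evaluates to a single boolean rather than one boolean per example, which is what makes the `zip` call fail. A minimal sketch of a batched filter that returns a list of booleans, using a tiny in-memory dataset instead of mc4 (the timestamp values below are illustrative):

```python
from datasets import Dataset

# Toy stand-in for the mc4 "timestamp" column (illustrative values, not real mc4 data).
ds = Dataset.from_dict(
    {"timestamp": ["2019-12-31T23:59:59Z", "2020-01-01T00:00:00Z", "2020-06-15T12:00:00Z"]}
)

# With batched=True, batch["timestamp"] is a list of strings for the whole batch,
# so the function must return one boolean per example in the batch.
ds_2020 = ds.filter(
    lambda batch: [ts[:4] == "2020" for ts in batch["timestamp"]],
    batched=True,
)

print(ds_2020["timestamp"])  # only the 2020 entries remain
```

The same lambda should carry over to the real mc4 split, since it only relies on `timestamp` arriving as a list of strings per batch.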
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4933/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4932/comments
https://api.github.com/repos/huggingface/datasets/issues/4932/events
https://github.com/huggingface/datasets/issues/4932
1,362,522,423
I_kwDODunzps5RNnE3
4,932
Dataset Viewer issue for bigscience-biomedical/biosses
{ "login": "galtay", "id": 663051, "node_id": "MDQ6VXNlcjY2MzA1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/galtay", "html_url": "https://github.com/galtay", "followers_url": "https://api.github.com/users/galtay/followers", "following_url": "https://api.github.com/users/galtay/following{/other_user}", "gists_url": "https://api.github.com/users/galtay/gists{/gist_id}", "starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galtay/subscriptions", "organizations_url": "https://api.github.com/users/galtay/orgs", "repos_url": "https://api.github.com/users/galtay/repos", "events_url": "https://api.github.com/users/galtay/events{/privacy}", "received_events_url": "https://api.github.com/users/galtay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Possibly not related to the dataset viewer in itself. cc @huggingface/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https://huggingface.co/datasets/bigscience-biomedical/biosses/blob/main/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n>>> get_dataset_config_names('bigscience-biomedical/biosses')\r\nDownloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.00k/8.00k [00:00<00:00, 7.47MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 289, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1247, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1220, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 931, in get_module\r\n local_imports = _download_additional_modules(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 215, in _download_additional_modules\r\n raise ImportError(\r\nImportError: To be able to use bigscience-biomedical/biosses, you need to install the following dependency: bigbiohub.\r\nPlease install it using 'pip install bigbiohub' for instance'\r\n```", "Opened a PR here to (hopefully) fix the dataset script: https://huggingface.co/datasets/bigscience-biomedical/biosses/discussions/1/files", "thanks for taking a look @severo . agree this isn't related to dataset viewer (sorry just clicked on the auto issue creator). also thanks @lhoestq , I see the format to use for relative imports. was a bit confused b/c it seems to be working here \r\n\r\nhttps://huggingface.co/datasets/bigscience-biomedical/scitail/blob/main/scitail.py#L31\r\n\r\nI'll try this PR a see what happens. ", "closing as I think the issue is relative imports and attempting to read json files directly in the repo (thanks again @lhoestq ) " ]
2022-09-05T22:40:32
2022-09-06T14:24:56
2022-09-06T14:24:56
NONE
null
null
null
### Link https://huggingface.co/datasets/bigscience-biomedical/biosses ### Description I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be). ``` Status code: 400 Exception: ModuleNotFoundError Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub' ``` ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4932/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4924/comments
https://api.github.com/repos/huggingface/datasets/issues/4924/events
https://github.com/huggingface/datasets/issues/4924
1,358,611,513
I_kwDODunzps5Q-sQ5
4,924
Concatenate_datasets loads everything into RAM
{ "login": "louisdeneve", "id": 39416047, "node_id": "MDQ6VXNlcjM5NDE2MDQ3", "avatar_url": "https://avatars.githubusercontent.com/u/39416047?v=4", "gravatar_id": "", "url": "https://api.github.com/users/louisdeneve", "html_url": "https://github.com/louisdeneve", "followers_url": "https://api.github.com/users/louisdeneve/followers", "following_url": "https://api.github.com/users/louisdeneve/following{/other_user}", "gists_url": "https://api.github.com/users/louisdeneve/gists{/gist_id}", "starred_url": "https://api.github.com/users/louisdeneve/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/louisdeneve/subscriptions", "organizations_url": "https://api.github.com/users/louisdeneve/orgs", "repos_url": "https://api.github.com/users/louisdeneve/repos", "events_url": "https://api.github.com/users/louisdeneve/events{/privacy}", "received_events_url": "https://api.github.com/users/louisdeneve/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2022-09-01T10:25:17
2022-09-01T11:50:54
2022-09-01T11:50:54
NONE
null
null
null
## Describe the bug When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening, or is this intended behaviour? Thanks in advance ## Steps to reproduce the bug ```python gcs = gcsfs.GCSFileSystem(project='project') datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)] dataset = concatenate_datasets(datasets) ``` ## Expected results A concatenated dataset which is stored on my disk. ## Actual results The concatenated dataset gets loaded into RAM and overflows it, which gets the process killed. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.1 - Pandas version: 1.4.3
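`concatenate_datasets` concatenates memory-mapped Arrow tables without copying them into RAM, so the growth here most likely comes from loading the slices through `gcsfs`, which has to materialize the remote Arrow data. A sketch of a possible workaround under that assumption: copy the saved slices to local disk first (the local paths below are illustrative), load them memory-mapped, then concatenate and save:

```python
import gcsfs
from datasets import load_from_disk, concatenate_datasets

gcs = gcsfs.GCSFileSystem(project="project")

local_dirs = []
for i in range(10):
    local_dir = f"/tmp/slices/{i}"  # illustrative local path
    # Copy the saved dataset directory from GCS to local disk.
    gcs.get(f"path/to/slice/of/data/{i}", local_dir, recursive=True)
    local_dirs.append(local_dir)

# load_from_disk on a local path memory-maps the Arrow files,
# so the concatenation below does not pull the data into RAM.
datasets = [load_from_disk(d) for d in local_dirs]
dataset = concatenate_datasets(datasets)
dataset.save_to_disk("/tmp/concatenated")  # illustrative output path
```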
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4924/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4922/comments
https://api.github.com/repos/huggingface/datasets/issues/4922/events
https://github.com/huggingface/datasets/issues/4922
1,357,684,018
I_kwDODunzps5Q7J0y
4,922
I/O error on Google Colab in streaming mode
{ "login": "jotterbach", "id": 5595043, "node_id": "MDQ6VXNlcjU1OTUwNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/5595043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jotterbach", "html_url": "https://github.com/jotterbach", "followers_url": "https://api.github.com/users/jotterbach/followers", "following_url": "https://api.github.com/users/jotterbach/following{/other_user}", "gists_url": "https://api.github.com/users/jotterbach/gists{/gist_id}", "starred_url": "https://api.github.com/users/jotterbach/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jotterbach/subscriptions", "organizations_url": "https://api.github.com/users/jotterbach/orgs", "repos_url": "https://api.github.com/users/jotterbach/repos", "events_url": "https://api.github.com/users/jotterbach/events{/privacy}", "received_events_url": "https://api.github.com/users/jotterbach/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2022-08-31T18:08:26
2022-08-31T18:15:48
2022-08-31T18:15:48
NONE
null
null
null
## Describe the bug When trying to load a streaming dataset in Google Colab the loading fails with an I/O error ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) list(hf_ds.take(5)) ``` ## Expected results It should load five data points ## Actual results ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module> 2 from datasets import load_dataset 3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) ----> 4 list(hf_ds.take(5)) 6 frames [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 716 717 def __iter__(self): --> 718 for key, example in self._iter(): 719 if self.features: 720 # `IterableDataset` automatically fills missing columns with None. [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self) 706 else: 707 ex_iterable = self._ex_iterable --> 708 yield from ex_iterable 709 710 def _iter_shard(self, shard_idx: int): [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 582 583 def __iter__(self): --> 584 yield from islice(self.ex_iterable, self.n) 585 586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable": [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 110 111 def __iter__(self): --> 112 yield from self.generate_examples_fn(**self.kwargs) 113 114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable": [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation) 845 raise ValueError("Invalid number of files: %d" % len(files)) 846 --> 847 for sub_key, ex in sub_generator(*sub_generator_args): 848 if not all(ex.values()): 849 continue [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2) 923 l2_sentences, l2 = parse_file(f2_i, filename2) 924 --> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)): 926 key = f"{f_id}/{line_id}" 927 yield key, {l1: s1, l2: s2} [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen() 895 896 def gen(): --> 897 with open(path, encoding="utf-8") as f: 898 for line in f: 899 seg_match = re.match(seg_re, line) ValueError: I/O operation on closed file. ``` ## Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 9.0.0. (the same error happened with PyArrow version 6.0.0) - Pandas version: 1.3.5
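For completeness, a workaround sketch while streaming is broken for this config: load the validation split without `streaming=True`, at the cost of downloading and preparing the data instead of streaming it:

```python
from datasets import load_dataset

# Non-streaming load of the cs-en validation split; this downloads and prepares
# the files rather than streaming them, which avoids the closed-file error above.
ds = load_dataset("wmt19", "cs-en", split="validation")
print(ds[0])
```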
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4922/timeline
null
not_planned
https://api.github.com/repos/huggingface/datasets/issues/4920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4920/comments
https://api.github.com/repos/huggingface/datasets/issues/4920/events
https://github.com/huggingface/datasets/issues/4920
1,357,564,589
I_kwDODunzps5Q6sqt
4,920
Unable to load local tsv files through load_dataset method
{ "login": "DataNoob0723", "id": 44038517, "node_id": "MDQ6VXNlcjQ0MDM4NTE3", "avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DataNoob0723", "html_url": "https://github.com/DataNoob0723", "followers_url": "https://api.github.com/users/DataNoob0723/followers", "following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}", "gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}", "starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions", "organizations_url": "https://api.github.com/users/DataNoob0723/orgs", "repos_url": "https://api.github.com/users/DataNoob0723/repos", "events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}", "received_events_url": "https://api.github.com/users/DataNoob0723/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` " ]
2022-08-31T16:13:39
2022-09-01T05:31:30
2022-09-01T05:31:30
NONE
null
null
null
## Describe the bug Unable to load local tsv files through load_dataset method. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug data_files = { 'train': 'train.tsv', 'test': 'test.tsv' } raw_datasets = load_dataset('tsv', data_files=data_files) ## Expected results I am pretty sure the data files exist in the current directory. The above code should load them as Datasets, but threw exceptions. ## Actual results --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module> ----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv') 2 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1246 ) from None 1247 raise e1 from None 1248 else: FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4920/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4918/comments
https://api.github.com/repos/huggingface/datasets/issues/4918/events
https://github.com/huggingface/datasets/issues/4918
1,357,242,757
I_kwDODunzps5Q5eGF
4,918
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
{ "login": "finiteautomata", "id": 167943, "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finiteautomata", "html_url": "https://github.com/finiteautomata", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "repos_url": "https://api.github.com/users/finiteautomata/repos", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n<img width=\"1508\" alt=\"Capture d’écran 2022-09-05 à 18 31 22\" src=\"https://user-images.githubusercontent.com/1676121/188489762-0ed86a7e-dfb3-46e8-a125-43b815a2c6f4.png\">\r\n", "Thanks @severo! " ]
2022-08-31T12:09:07
2022-09-05T21:36:34
2022-09-05T16:32:44
NONE
null
null
null
### Link https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines ### Description After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist. ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4918/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4917/comments
https://api.github.com/repos/huggingface/datasets/issues/4917/events
https://github.com/huggingface/datasets/issues/4917
1,357,193,841
I_kwDODunzps5Q5SJx
4,917
Keys mismatch: make error message more informative
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?", "Is this open to work on? I'd love to take on this as my first issue.", "Hi @daspartho I’ve opened a PR #4919 \r\nI don’t think there’s much left to do", "ok : )" ]
2022-08-31T11:24:34
2022-09-05T08:43:38
2022-09-05T08:43:38
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like: `ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}` Which is fine when you have only a few features like in the example but it gets very hard to read when you have a lot of features in your dataset. **Describe the solution you'd like** The error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`. Willing to help :)
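A minimal sketch of the kind of diff the improved message could report, written with plain Python sets and independent of the actual implementation in `Features.reorder_fields_as()` (which side corresponds to `dataset_info.json` versus the Arrow data is only illustrative here):

```python
def describe_keys_mismatch(expected: dict, actual: dict) -> str:
    """Build a keys-mismatch message that lists what is missing on each side."""
    expected_keys, actual_keys = set(expected), set(actual)
    missing = sorted(expected_keys - actual_keys)  # e.g. declared in dataset_info.json but absent from the data
    extra = sorted(actual_keys - expected_keys)    # e.g. present in the data but not declared in dataset_info.json
    return (
        f"Keys mismatch: missing from the data: {missing}; "
        f"unexpected in the data: {extra}"
    )

# Toy example mirroring the one in the issue.
print(describe_keys_mismatch({"bar": "int64"}, {"foo": "int64"}))
# Keys mismatch: missing from the data: ['bar']; unexpected in the data: ['foo']
```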
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4917/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4916/comments
https://api.github.com/repos/huggingface/datasets/issues/4916/events
https://github.com/huggingface/datasets/issues/4916
1,357,076,940
I_kwDODunzps5Q41nM
4,916
Apache Beam unable to write the downloaded wikipedia dataset
{ "login": "Shilpac20", "id": 71849081, "node_id": "MDQ6VXNlcjcxODQ5MDgx", "avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shilpac20", "html_url": "https://github.com/Shilpac20", "followers_url": "https://api.github.com/users/Shilpac20/followers", "following_url": "https://api.github.com/users/Shilpac20/following{/other_user}", "gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions", "organizations_url": "https://api.github.com/users/Shilpac20/orgs", "repos_url": "https://api.github.com/users/Shilpac20/repos", "events_url": "https://api.github.com/users/Shilpac20/events{/privacy}", "received_events_url": "https://api.github.com/users/Shilpac20/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "See:\r\n- #4915" ]
2022-08-31T09:39:25
2022-08-31T10:53:19
2022-08-31T10:53:19
NONE
null
null
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. It downloads the file but while saving it in hugging face cache it fails to write. This happens for any available date of any language in wikipedia dump. I had raised another issue earlier #4915 but probably was not that clear and the solution provider misunderstood my problem. Hence raising one more issue. Any help is appreciated. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in 
create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in 
apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File 
"apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", 
line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4916/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4915/comments
https://api.github.com/repos/huggingface/datasets/issues/4915/events
https://github.com/huggingface/datasets/issues/4915
1,356,009,042
I_kwDODunzps5Q0w5S
4,915
FileNotFoundError while downloading wikipedia dataset for any language
{ "login": "Shilpac20", "id": 71849081, "node_id": "MDQ6VXNlcjcxODQ5MDgx", "avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shilpac20", "html_url": "https://github.com/Shilpac20", "followers_url": "https://api.github.com/users/Shilpac20/followers", "following_url": "https://api.github.com/users/Shilpac20/following{/other_user}", "gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions", "organizations_url": "https://api.github.com/users/Shilpac20/orgs", "repos_url": "https://api.github.com/users/Shilpac20/repos", "events_url": "https://api.github.com/users/Shilpac20/events{/privacy}", "received_events_url": "https://api.github.com/users/Shilpac20/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Shilpac20,\r\n\r\nAs explained in the Wikipedia dataset card: https://huggingface.co/datasets/wikipedia\r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nThis means that, before passing a specific date, you should first make sure it is available online, as Wikimedia only keeps last X months (depending on the size of the corresponding language dump)): e.g. to see which dates \"aa\" Wikipedia is available online, see https://dumps.wikimedia.org/aawiki/ (as of today 2022-08-31, the available dates are from [20220401](https://dumps.wikimedia.org/aawiki/20220401/) to [20220820](https://dumps.wikimedia.org/aawiki/20220820/)).", "Hi, the date that I have specified \"20220401\" is available for the language \"aa\". The error persists for any other available dates as present in https://dumps.wikimedia.org/aawiki/. The error is mainly due to apache beam not able to write the downloaded files. Any help on this?", "I see, sorry, I misread your issue.\r\n\r\nWe are investigating this.", "I am struggling with basically the same issue. I am trying to download the German Wikipedia dump.\r\n\r\nAs per the [documentation](https://huggingface.co/datasets/wikipedia), `\"20220301.de\"` should be available as a pre-processed dataset.\r\n\r\nIssuing the command mentioned in the documentation cited above\r\n\r\n from datasets import load_dataset\r\n load_dataset(\"wikipedia\", \"20220301.de\")\r\n\r\nraises the following `FileNotFound` error\r\n\r\n FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/dewiki/20220301/dumpstatus.json\r\n\r\nUsing the ([undocumented](https://huggingface.co/docs/datasets/v1.2.1/package_reference/loading_methods.html#datasets.load_dataset)?) call to `load_dataset()` with `language` and `date` parameters\r\n\r\n load_dataset(\"wikipedia\", language=\"de\", date=\"20220301\", beam_runner=\"DirectRunner\")\r\n\r\nproduces the same error.\r\n\r\nEDIT: as I am using `datasets` v2.7.1, I should be looking at [that version's documentation](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/loading_methods#datasets.load_dataset)! It is mentioned there, that additional `kwargs` are \"passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/builder_classes#datasets.DatasetBuilder)\". So I guess that is how `language` and `date` are used.\r\n\r\nAs I can see a folder `20221130` on `https://dumps.wikimedia.org/dewiki/`, I also tried\r\n\r\n from datasets import load_dataset\r\n load_dataset(\"wikipedia\", \"20221130.de\")\r\n\r\nwhich throws another error:\r\n\r\n ValueError: BuilderConfig 20221120.de not found. Available: ['20220301.aa', ... '20220301.de', ...\r\n\r\nbasically telling me that the dataset I originally requested (`'20220301.de'`) is available...\r\n\r\nIt seems that `load_dataset` is not handling the vanishing older dumps for Wikipedia correctly?", "I am able to start downloading the dataset when trying anything with the recent dumps for 20221201. But obviously, those are the big wiki dumps and I need the smaller preloaded version.\r\n\r\nI am now getting some error when the files show up in my cache but it will say FileNotFoundError at the end of the download for some reason. The cache directory to the datasets\\wikipedia\\date.bn\\ had something in it, then when the error came up it disappeared. 
\r\n\r\nIt is easy to test with the language \"bn\" because the number of files is low.\r\n\r\ndataset = load_dataset('wikipedia', date=\"20221201\", language=\"bn\", split='train', beam_runner='DirectRunner')" ]
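The Wikipedia dataset card lists a handful of pre-processed configs that are meant to be downloaded directly, without running Apache Beam locally. A sketch using one of the small documented configs (`20220301.simple`), assuming a `datasets` version for which these prepared files are still resolvable:

```python
from datasets import load_dataset

# Pre-processed config: the already-prepared files are downloaded directly,
# so no beam_runner is needed and nothing is written by Apache Beam locally.
wiki = load_dataset("wikipedia", "20220301.simple", split="train")
print(wiki[0]["title"])
```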
2022-08-30T16:15:46
2022-12-04T22:20:33
null
NONE
null
null
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. Environment: ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__ self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 
'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in <module> beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in 
apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__ self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4915/timeline
null
reopened
https://api.github.com/repos/huggingface/datasets/issues/4912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4912/comments
https://api.github.com/repos/huggingface/datasets/issues/4912/events
https://github.com/huggingface/datasets/issues/4912
1,355,078,864
I_kwDODunzps5QxNzQ
4,912
datasets map() handles all data at a stroke and takes long time
{ "login": "BruceStayHungry", "id": 40711748, "node_id": "MDQ6VXNlcjQwNzExNzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/40711748?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceStayHungry", "html_url": "https://github.com/BruceStayHungry", "followers_url": "https://api.github.com/users/BruceStayHungry/followers", "following_url": "https://api.github.com/users/BruceStayHungry/following{/other_user}", "gists_url": "https://api.github.com/users/BruceStayHungry/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceStayHungry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceStayHungry/subscriptions", "organizations_url": "https://api.github.com/users/BruceStayHungry/orgs", "repos_url": "https://api.github.com/users/BruceStayHungry/repos", "events_url": "https://api.github.com/users/BruceStayHungry/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceStayHungry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both options are great and really depend on your case.\r\n\r\nTo choose between the two, here are IMO the main caveats of each approach:\r\n- if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n- on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\n> Why huggingface advises map() function? There should be some advantages to using map()\r\n\r\nTo get the best throughput when training a model, it is often recommended to preprocess your dataset before training. Note that preprocessing may include other steps before tokenization such as data filtering, cleaning, chunking etc. which are often done before training.", "Thanks for your clear explanation @lhoestq ! \r\n> * if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n> * on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\nI really agree with you. There should be some trade-off between processing before and during the train loop.\r\nBesides, I find `map()` function can cache the results once it has been executed. Very useful!", "I'm closing this issue if you don't mind, feel free to reopen if needed ;)", "@lhoestq How to preprocess on-the-fly during training?my data is about 1w hours, when I use map to preprocess, and It's not finished yet, but all disk space(2T) is full.", "Hi ! You can do that using `set_transform`, see https://huggingface.co/docs/datasets/process#format-transform for more info :)", "unfortunately , it not work.", "Could you share more details ?" ]
2022-08-30T02:25:56
2023-04-06T09:43:58
2022-09-06T09:23:35
NONE
null
null
null
**1. Background** The Hugging Face datasets package advises using `map()` to process data in batches. In the example code for pretraining a masked language model, they use `map()` to tokenize all data in one go before the train loop. The corresponding code: ``` with accelerator.main_process_first(): tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, desc="Running tokenizer on every text in dataset" ) ``` **2. The problem** When I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize. Alternatively, we can tokenize in the `data-collator`: that way the program only tokenizes one batch at the next training step and avoids getting stuck in tokenization. **3. My question** As described above, my questions are: * **Which is better? Processing in `map()` or in the `data-collator`?** * **Why does Hugging Face advise the `map()` function?** There should be some advantages to using `map()`. Thanks for your answers!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4912/timeline
null
completed
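The thread above (issue 4912) contrasts eager preprocessing with `map()` against on-the-fly preprocessing via a data collator or `set_transform`, as suggested in the comments. Below is a minimal sketch of the two options; the dataset and tokenizer names are placeholders chosen for illustration, not taken from the issue.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = load_dataset("rotten_tomatoes", split="train")

# Option 1: tokenize everything up front with map(); results are cached to disk.
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=["text"],
)

# Option 2: tokenize lazily when examples are fetched; nothing extra is written to disk.
raw.set_transform(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding=True, return_tensors="pt"
    )
)
print(raw[:2])  # tokenized tensors produced on the fly
```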
https://api.github.com/repos/huggingface/datasets/issues/4911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4911/comments
https://api.github.com/repos/huggingface/datasets/issues/4911/events
https://github.com/huggingface/datasets/issues/4911
1,354,426,978
I_kwDODunzps5Quupi
4,911
[Tests] Ensure `datasets` supports renamed repositories
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
[ "You could also switch to using `huggingface_hub` more directly, where such a guarantee is already tested =)\r\n\r\ncc @Wauplin " ]
2022-08-29T14:46:14
2022-08-29T15:31:03
null
MEMBER
null
null
null
On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and, AFAIK, `datasets` does as well. However, it would be nice to have an integration test to make sure we don't break support for renamed datasets. To implement this, we can use the /api/repos/move endpoint on hub-ci to rename/move a repo (it is documented at https://huggingface.co/docs/hub/api).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4911/timeline
null
null
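Issue 4911 above proposes an integration test that renames a repo and then checks that `datasets` still follows the redirect. A hedged sketch of such a test is shown below, using `huggingface_hub`'s `move_repo` wrapper as suggested in the comments; the repo ids are placeholders, and a real test would target the hub-ci endpoint rather than the production Hub.

```python
from datasets import load_dataset
from huggingface_hub import HfApi

api = HfApi()  # the real test would pass the hub-ci endpoint and a test token here

# Rename the repo, then check that the old id still resolves via the server-side redirect.
api.move_repo(from_id="my-user/old-name", to_id="my-user/new-name", repo_type="dataset")

ds_old = load_dataset("my-user/old-name", split="train")
ds_new = load_dataset("my-user/new-name", split="train")
assert ds_old.num_rows == ds_new.num_rows
```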
https://api.github.com/repos/huggingface/datasets/issues/4910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4910/comments
https://api.github.com/repos/huggingface/datasets/issues/4910/events
https://github.com/huggingface/datasets/issues/4910
1,354,374,328
I_kwDODunzps5Quhy4
4,910
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()
{ "login": "bablf", "id": 57184353, "node_id": "MDQ6VXNlcjU3MTg0MzUz", "avatar_url": "https://avatars.githubusercontent.com/u/57184353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bablf", "html_url": "https://github.com/bablf", "followers_url": "https://api.github.com/users/bablf/followers", "following_url": "https://api.github.com/users/bablf/following{/other_user}", "gists_url": "https://api.github.com/users/bablf/gists{/gist_id}", "starred_url": "https://api.github.com/users/bablf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bablf/subscriptions", "organizations_url": "https://api.github.com/users/bablf/orgs", "repos_url": "https://api.github.com/users/bablf/repos", "events_url": "https://api.github.com/users/bablf/events{/privacy}", "received_events_url": "https://api.github.com/users/bablf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "thepurpleowl", "id": 21123710, "node_id": "MDQ6VXNlcjIxMTIzNzEw", "avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thepurpleowl", "html_url": "https://github.com/thepurpleowl", "followers_url": "https://api.github.com/users/thepurpleowl/followers", "following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}", "gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}", "starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions", "organizations_url": "https://api.github.com/users/thepurpleowl/orgs", "repos_url": "https://api.github.com/users/thepurpleowl/repos", "events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}", "received_events_url": "https://api.github.com/users/thepurpleowl/received_events", "type": "User", "site_admin": false }
[ { "login": "thepurpleowl", "id": 21123710, "node_id": "MDQ6VXNlcjIxMTIzNzEw", "avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thepurpleowl", "html_url": "https://github.com/thepurpleowl", "followers_url": "https://api.github.com/users/thepurpleowl/followers", "following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}", "gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}", "starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions", "organizations_url": "https://api.github.com/users/thepurpleowl/orgs", "repos_url": "https://api.github.com/users/thepurpleowl/repos", "events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}", "received_events_url": "https://api.github.com/users/thepurpleowl/received_events", "type": "User", "site_admin": false } ]
null
[ "I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0", "In my case, this was happening because I defined multiple `BuilderConfig` for multiple types, but didn't had all the data files that are requierd by those configs. \r\n\r\nI think this is different than the original issue by @bablf .", "Hi ! I think this can be fixed by letting the config_kwargs take over the builder kwargs here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/load.py#L1533-L1534\r\n\r\nmaybe something like this ?\r\n```python\r\n **{**builder_kwargs, **config_kwargs}\r\n```\r\n\r\nLet me know if you'd like to contribute and fix this bug, so I can assign you :)\r\n\r\n> In my case, this was happening because I defined multiple BuilderConfig for multiple types, but didn't had all the data files that are requierd by those configs.\r\n> \r\n> I think this is different than the original issue by @bablf .\r\n\r\nFeel free to to open an new issue, I'd be happy to help\r\n", "@lhoestq Yeah, I want to, please assign.", "Cool thank you ! Let me know if you have questions or if I can help", "@lhoestq On second thoughts, I think this might be expected behavior; although a better error message might help.\r\n\r\nReasoning: Given n configs, if no data file is provided for any config, then it should be an error. Then why it should not be the case if out of n configs, for some data files are provided but not for others. Also, I was using `--all_configs` flag with `dataset-cli test`.", "Ok I see - maybe we should check the values of builder_kwargs raise an error if any key in config_kwargs tries to overwrite it ? The builder kwargs are determined from the builder's type and location (in some cases it forces the base_path, data_files and config name for example)" ]
2022-08-29T14:11:48
2022-09-13T11:58:46
null
NONE
null
null
null
## Describe the bug In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords leading to a TypeError("type object got multiple values for keyword argument "xyz"). I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be ```python builder_cls = import_main_class(dataset_module.module_path) builder_kwargs = dataset_module.builder_kwargs data_files = builder_kwargs.pop("data_files", data_files) config_name = builder_kwargs.pop("config_name", name) hash = builder_kwargs.pop("hash") base_path = builder_kwargs.pop("base_path") ``` and then pass base_path into `builder_cls`. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("rotten_tomatoes", base_path="./sample_data") ``` ## Expected results The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder). So I would expect to be able to pass the base_path into `load_dataset()`. ## Actual results TypeError("type object got multiple values for keyword argument "base_path"). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.8.9 - PyArrow version: 9.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4910/timeline
null
null
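Issue 4910 and its thread suggest merging the two kwargs dicts (e.g. `**{**builder_kwargs, **config_kwargs}`) so one side takes precedence instead of raising. The toy function below is not the actual `datasets` source; it only illustrates why the duplicate keyword raises a `TypeError` and how the merge resolves it.

```python
# Placeholder function standing in for the builder call; values are illustrative.
def build(name, base_path=None, **extra):
    return {"name": name, "base_path": base_path, **extra}

builder_kwargs = {"base_path": "hub://rotten_tomatoes"}   # derived internally
config_kwargs = {"base_path": "./sample_data"}            # passed by the user

try:
    build("demo", **builder_kwargs, **config_kwargs)
except TypeError as err:
    print(err)  # got multiple values for keyword argument 'base_path'

# Merging first removes the clash; the right-hand dict (config_kwargs) wins.
print(build("demo", **{**builder_kwargs, **config_kwargs}))
```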
https://api.github.com/repos/huggingface/datasets/issues/4907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4907/comments
https://api.github.com/repos/huggingface/datasets/issues/4907/events
https://github.com/huggingface/datasets/issues/4907
1,353,808,348
I_kwDODunzps5QsXnc
4,907
None Type error for swda datasets
{ "login": "hannan72", "id": 8229163, "node_id": "MDQ6VXNlcjgyMjkxNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hannan72", "html_url": "https://github.com/hannan72", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "organizations_url": "https://api.github.com/users/hannan72/orgs", "repos_url": "https://api.github.com/users/hannan72/repos", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "received_events_url": "https://api.github.com/users/hannan72/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?", "Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.", "Ok, let us know if you encounter the issue again ;)" ]
2022-08-29T07:05:20
2022-08-30T14:43:41
2022-08-30T14:43:41
NONE
null
null
null
## Describe the bug I got `'NoneType' object is not callable` error while calling the swda datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("swda") ``` ## Expected results Run without error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Python version: 3.8.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4907/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4906/comments
https://api.github.com/repos/huggingface/datasets/issues/4906/events
https://github.com/huggingface/datasets/issues/4906
1,353,223,925
I_kwDODunzps5QqI71
4,906
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
{ "login": "OPterminator", "id": 63536981, "node_id": "MDQ6VXNlcjYzNTM2OTgx", "avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OPterminator", "html_url": "https://github.com/OPterminator", "followers_url": "https://api.github.com/users/OPterminator/followers", "following_url": "https://api.github.com/users/OPterminator/following{/other_user}", "gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}", "starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions", "organizations_url": "https://api.github.com/users/OPterminator/orgs", "repos_url": "https://api.github.com/users/OPterminator/repos", "events_url": "https://api.github.com/users/OPterminator/events{/privacy}", "received_events_url": "https://api.github.com/users/OPterminator/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (`nlp` name is no longer in use and is out of date).", "i am also facing this issue\r\n\r\n\r\n```\r\n----> 1 import datasets\r\n 3 dataset = datasets.load_dataset(\"ucberkeley-dlab/measuring-hate-speech\", \"binary\")\r\n 4 df = dataset[\"train\"].to_pandas()\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/__init__.py:52\r\n 50 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled\r\n 51 from .info import DatasetInfo, MetricInfo\r\n---> 52 from .inspect import (\r\n 53 get_dataset_config_info,\r\n 54 get_dataset_config_names,\r\n 55 get_dataset_infos,\r\n 56 get_dataset_split_names,\r\n 57 inspect_dataset,\r\n 58 inspect_metric,\r\n 59 list_datasets,\r\n 60 list_metrics,\r\n 61 )\r\n 62 from .iterable_dataset import IterableDataset\r\n 63 from .load import load_dataset, load_dataset_builder, load_from_disk, load_metric\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/inspect.py:30\r\n 28 from .download.streaming_download_manager import StreamingDownloadManager\r\n...\r\n---> 16 logger = datasets.utils.logging.get_logger(__name__)\r\n 19 if datasets.config.PYARROW_VERSION.major >= 7:\r\n 21 def pa_table_to_pylist(table):\r\n```", "I am facing the same question. And this happens when i installing `evaluate` package while `jupyter notebook` running. I'm not sure if the error occured because of trying to import the package installed when the notebook is running. Surpringly when i stop the notebook and rerun, the issue has been solved itself. Hope this will be helpful : )", "I also got this error.\r\nIt helped me to find the python process and kill it, then restart the kernel and the error disappeared.", "> I also got this error. It helped me to find the python process and kill it, then restart the kernel and the error disappeared.\r\n\r\nYes!", "> I am facing the same question. And this happens when i installing `evaluate` package while `jupyter notebook` running. I'm not sure if the error occured because of trying to import the package installed when the notebook is running. Surpringly when i stop the notebook and rerun, the issue has been solved itself. Hope this will be helpful : )\r\n\r\nThank you! :)" ]
2022-08-28T02:23:24
2023-10-27T20:08:28
2022-10-03T12:22:50
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. Not able to import datasets ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import os os.environ["WANDB_API_KEY"] = "0" ## to silence warning import numpy as np import random import sklearn import matplotlib.pyplot as plt import pandas as pd import sys import tensorflow as tf import plotly.express as px import transformers import tokenizers import nlp as nlp import utils import datasets ``` ## Expected results A clear and concise description of the expected results. import should work normal ## Actual results Specify the actual results or traceback. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-21-b3b5b0b62103> in <module> 13 import nlp as nlp 14 import utils ---> 15 import datasets ~\anaconda3\lib\site-packages\datasets\__init__.py in <module> 44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled 45 from .info import DatasetInfo, MetricInfo ---> 46 from .inspect import ( 47 get_dataset_config_info, 48 get_dataset_config_names, ~\anaconda3\lib\site-packages\datasets\inspect.py in <module> 28 from .download.streaming_download_manager import StreamingDownloadManager 29 from .info import DatasetInfo ---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory 31 from .utils.file_utils import relative_to_absolute_path 32 from .utils.logging import get_logger ~\anaconda3\lib\site-packages\datasets\load.py in <module> 53 from .iterable_dataset import IterableDataset 54 from .metric import Metric ---> 55 from .packaged_modules import ( 56 _EXTENSION_TO_MODULE, 57 _MODULE_SUPPORTS_METADATA, ~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module> 4 from typing import List 5 ----> 6 from .csv import csv 7 from .imagefolder import imagefolder 8 from .json import json ~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module> 13 14 ---> 15 logger = datasets.utils.logging.get_logger(__name__) 16 17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"] AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.8.8 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4906/timeline
null
completed
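As the comments on issue 4906 note, this "partially initialized module" error usually means a local file named `datasets.py` (or a stale notebook kernel) is shadowing the installed library. The snippet below is an assumed diagnostic helper for checking which module actually gets imported; it is not part of `datasets` itself.

```python
import importlib.util
import os

# Locate the module Python would import without fully executing it.
spec = importlib.util.find_spec("datasets")
print(spec.origin)  # should point into site-packages/datasets/__init__.py

parent = os.path.basename(os.path.dirname(spec.origin))
if parent != "datasets":
    # The resolved path is a loose datasets.py file, not the installed package.
    print("A local file named datasets.py is shadowing the Hugging Face library; rename it.")
```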
https://api.github.com/repos/huggingface/datasets/issues/4902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4902/comments
https://api.github.com/repos/huggingface/datasets/issues/4902/events
https://github.com/huggingface/datasets/issues/4902
1,352,469,196
I_kwDODunzps5QnQrM
4,902
Name the default config `default`
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Addressed in #5331." ]
2022-08-26T16:16:22
2023-07-24T21:15:31
2023-07-24T21:15:31
CONTRIBUTOR
null
null
null
Currently, if a dataset has no configuration, a default configuration is created from the dataset name. For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`. It might be easier to handle to set it to `default`, or another reserved word.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4902/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/4902/timeline
null
completed
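For reference, a small sketch of the naming behaviour discussed in issue 4902; the repo id is a placeholder for any Hub dataset without explicit configurations, and the printed name depends on the `datasets` version (older releases derive `user--dataset`, newer ones use `default` per #5331).

```python
from datasets import load_dataset_builder

# "user/dataset" is a hypothetical repo id for a dataset that defines no configurations.
builder = load_dataset_builder("user/dataset")
print(builder.config.name)  # "user--dataset" on older releases, "default" after #5331
```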
https://api.github.com/repos/huggingface/datasets/issues/4900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4900/comments
https://api.github.com/repos/huggingface/datasets/issues/4900/events
https://github.com/huggingface/datasets/issues/4900
1,352,405,855
I_kwDODunzps5QnBNf
4,900
Dataset Viewer issue for asaxena1990/Dummy_dataset
{ "login": "ankurcl", "id": 56627657, "node_id": "MDQ6VXNlcjU2NjI3NjU3", "avatar_url": "https://avatars.githubusercontent.com/u/56627657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankurcl", "html_url": "https://github.com/ankurcl", "followers_url": "https://api.github.com/users/ankurcl/followers", "following_url": "https://api.github.com/users/ankurcl/following{/other_user}", "gists_url": "https://api.github.com/users/ankurcl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankurcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankurcl/subscriptions", "organizations_url": "https://api.github.com/users/ankurcl/orgs", "repos_url": "https://api.github.com/users/ankurcl/repos", "events_url": "https://api.github.com/users/ankurcl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankurcl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data configuration asaxena1990--Dummy_dataset-4a704ed7e5627563\r\n>>> dataset._resolve_features()\r\nFailed to read file 'https://huggingface.co/datasets/asaxena1990/Dummy_dataset/resolve/06885879a8bdd767d2d27695484fc6c83244617a/dummy_dataset_train.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column() changed from object to array in row 0\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 109, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1261, in _resolve_features\r\n features = _infer_features_from_batch(self._head())\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 686, in _head\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 686, in <listcomp>\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 708, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 112, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 651, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 137, in _generate_tables\r\n f\"This JSON file contain the following fields: {str(list(dataset.keys()))}. \"\r\nAttributeError: 'list' object has no attribute 'keys'\r\n```\r\n\r\nPinging @huggingface/datasets", "Hi ! JSON files containing a list of object are not supported yet, you can use JSON Lines files instead in the meantime\r\n```json\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n...\r\n```", "A JSON list of objects is supported as of version 2.5.0." ]
2022-08-26T15:15:44
2023-07-24T15:42:09
2023-07-24T15:42:09
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4900/timeline
null
completed
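The workaround suggested in the issue 4900 thread is to convert a file holding one big JSON array into JSON Lines, which the `json` builder handles. A minimal sketch, assuming the file name taken from the reported traceback:

```python
import json

from datasets import load_dataset

# Rewrite the array-of-objects file as one JSON object per line.
with open("dummy_dataset_train.json") as f:
    records = json.load(f)  # a list of {"text": ..., "intent": ..., "type": ...}

with open("dummy_dataset_train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

ds = load_dataset("json", data_files="dummy_dataset_train.jsonl", split="train")
print(ds[0])
```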
https://api.github.com/repos/huggingface/datasets/issues/4898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4898/comments
https://api.github.com/repos/huggingface/datasets/issues/4898/events
https://github.com/huggingface/datasets/issues/4898
1,351,851,254
I_kwDODunzps5Qk5z2
4,898
Dataset Viewer issue for timit_asr
{ "login": "InayatUllah932", "id": 91126978, "node_id": "MDQ6VXNlcjkxMTI2OTc4", "avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/InayatUllah932", "html_url": "https://github.com/InayatUllah932", "followers_url": "https://api.github.com/users/InayatUllah932/followers", "following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}", "gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}", "starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions", "organizations_url": "https://api.github.com/users/InayatUllah932/orgs", "repos_url": "https://api.github.com/users/InayatUllah932/repos", "events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}", "received_events_url": "https://api.github.com/users/InayatUllah932/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface/datasets ", "Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https://huggingface.co/datasets/timit_asr\r\n> The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1", "Maybe a better error message for datasets that need manual downloading? @severo \r\n\r\nMaybe we can raise a specific excpetion as done from `load_dataset`...", "Yes, ideally something like https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L81\r\n", "The preview is now disabled (and a descriptive warning is displayed) for datasets requiring manual download. See:\r\n\r\n![timit_asr-manual-download](https://user-images.githubusercontent.com/8515462/193578572-3d21b790-f848-4257-9e9b-7cab3d76a269.png)\r\n" ]
2022-08-26T07:12:05
2022-10-03T12:40:28
2022-10-03T12:40:27
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4898/timeline
null
completed
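For issue 4898, the dataset requires manually downloading the LDC archive before loading, which is why the viewer cannot preview it. A hedged sketch of the expected usage, with the local path as a placeholder:

```python
from datasets import load_dataset

# The TIMIT archive must be obtained from the LDC and extracted locally first.
timit = load_dataset("timit_asr", data_dir="/path/to/TIMIT")
print(timit["train"][0]["text"])
```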
https://api.github.com/repos/huggingface/datasets/issues/4897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4897/comments
https://api.github.com/repos/huggingface/datasets/issues/4897/events
https://github.com/huggingface/datasets/issues/4897
1,351,784,727
I_kwDODunzps5QkpkX
4,897
datasets generate large arrow file
{ "login": "jax11235", "id": 18533904, "node_id": "MDQ6VXNlcjE4NTMzOTA0", "avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jax11235", "html_url": "https://github.com/jax11235", "followers_url": "https://api.github.com/users/jax11235/followers", "following_url": "https://api.github.com/users/jax11235/following{/other_user}", "gists_url": "https://api.github.com/users/jax11235/gists{/gist_id}", "starred_url": "https://api.github.com/users/jax11235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jax11235/subscriptions", "organizations_url": "https://api.github.com/users/jax11235/orgs", "repos_url": "https://api.github.com/users/jax11235/repos", "events_url": "https://api.github.com/users/jax11235/events{/privacy}", "received_events_url": "https://api.github.com/users/jax11235/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?", "@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 times in size. \r\nI think maybe it doesn' matter, it's just cache after all." ]
2022-08-26T05:51:16
2022-09-18T05:07:52
2022-09-18T05:07:52
NONE
null
null
null
While checking for large files on disk, I found a large cache file in the cifar10 data directory: ![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png) As we know, the cifar10 dataset is only ~130MB, but the cache file is almost 30GB, so there may be some problem here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4897/timeline
null
completed
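The comments on issue 4897 attribute the oversized Arrow files to cached `map()` results. A short sketch of how one might inspect and clean them up with the public `cache_files` / `cleanup_cache_files()` API:

```python
from datasets import load_dataset

cifar = load_dataset("cifar10", split="train")
print(cifar.cache_files)  # paths of the Arrow cache files backing this dataset

# Delete cache files left behind by previous map() calls that are no longer in use.
removed = cifar.cleanup_cache_files()
print(f"removed {removed} cache file(s)")
```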
https://api.github.com/repos/huggingface/datasets/issues/4895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4895/comments
https://api.github.com/repos/huggingface/datasets/issues/4895/events
https://github.com/huggingface/datasets/issues/4895
1,350,798,527
I_kwDODunzps5Qg4y_
4,895
load_dataset method returns Unknown split "validation" even if this dir exists
{ "login": "SamSamhuns", "id": 13418507, "node_id": "MDQ6VXNlcjEzNDE4NTA3", "avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamSamhuns", "html_url": "https://github.com/SamSamhuns", "followers_url": "https://api.github.com/users/SamSamhuns/followers", "following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}", "gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions", "organizations_url": "https://api.github.com/users/SamSamhuns/orgs", "repos_url": "https://api.github.com/users/SamSamhuns/repos", "events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}", "received_events_url": "https://api.github.com/users/SamSamhuns/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n", "@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get splits names from filenames in this case, ignoring directories names, but `val` wasn't in keywords at that time, but it was fixed recently in this PR https://github.com/huggingface/datasets/pull/4844. ", "I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)", "@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~/.cache/huggingface/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.", "This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ", "> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !", "Looks like the `val/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. 
Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.", "Thanks for the reply.\r\n\r\nI've created a separate [issue](https://github.com/huggingface/datasets/issues/4982#issue-1375604693) for my problem.", "> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! opened a PR: https://github.com/huggingface/datasets/pull/4985", "Hi there @polinaeterna @mariosasko ! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!", "hi @shaneacton ! could you please show your dataset structure?", "Hi there @polinaeterna . My local CSV files are stored as follows:\r\nbinding:\r\n---------- tune.csv\r\n---------- public_data:\r\n--------------------------- train.csv\r\n\r\n`self.list_shards(split)` successfully finds the relevant data files", "@shaneacton do you have a `validation.csv`/`val.csv`/`valid.csv`/`dev.csv` file in your data folder? I can't find it in the structure you provided", "@polinaeterna no, does the name of the split need to match the name of the file exactly?\r\n\r\nBut my train file is not actually named 'train.csv'; it's called 'XXXXXXXXX_train_XXXXXXXX.csv'\r\nAnd the code works fine for train, but fails for validation.\r\nDoes the file name need to _contain_ the split name?", "@shaneacton what files do you expect to be included in the \"validation\" split? Yes, you should somehow indicate that a file belongs to a certain split - either by including the split name in the filename or by putting it into a folder with the split name, you can also check out [this documentation page](https://huggingface.co/docs/datasets/main/en/repository_structure) :)\r\nby default all the data goes to a single `train` split", "@polinaeterna I have specified my train/test/tune files via the `split_to_filepattern` argument when initialising my `FileDataSource` class. This is how `list_shards` is able to find the right files.\r\nAfter your last message, I have tried renaming my data files to simply `train.csv` and `validation.csv`; however, I am still getting the same error: `Unknown split \"validation\". Should be one of ['train']`", "@polinaeterna I have solved the issue. The solution was to call:\r\n`load_dataset(\"csv\", data_files={split: files}, split=split)`" ]
2022-08-25T12:11:00
2022-10-06T17:49:28
2022-09-29T08:07:50
NONE
null
null
null
## Describe the bug The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path. The data directories are as follows and attached to this issue: ``` test_data1 |_ train |_ 1012.png |_ metadata.jsonl ... |_ test ... |_ validation |_ 234.png |_ metadata.jsonl ... test_data2 |_ train |_ train_1012.png |_ metadata.jsonl ... |_ test ... |_ validation |_ val_234.png |_ metadata.jsonl ... ``` They contain the same image files and `metadata.jsonl` but the images in `test_data2` have the split names prepended i.e. `train_1012.png, val_234.png` and the images in `test_data1` do not have the split names prepended to the image names i.e. `1012.png, 234.png` I actually saw in another issue `val` was not recognized as a split name but here I would expect the files to take the split from the parent directory name i.e. val should become part of the validation split? ## Steps to reproduce the bug ```python import datasets datasets.logging.set_verbosity_error() from datasets import load_dataset, get_dataset_split_names # the following only finds train, validation and test splits correctly path = "./test_data1" print("######################", get_dataset_split_names(path), "######################") dataset_list = [] for spt in ["train", "test", "validation"]: dataset = load_dataset(path, split=spt) dataset_list.append(dataset) # the following only finds train and test splits path = "./test_data2" print("######################", get_dataset_split_names(path), "######################") dataset_list = [] for spt in ["train", "test", "validation"]: dataset = load_dataset(path, split=spt) dataset_list.append(dataset) ``` ## Expected results ``` ###################### ['train', 'test', 'validation'] ###################### ###################### ['train', 'test', 'validation'] ###################### ``` ## Actual results ``` Traceback (most recent call last): File "test_data_loader.py", line 11, in <module> dataset = load_dataset(path, split=spt) File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset datasets = map_nested( File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested return function(data_struct) File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset ds = self._as_dataset( File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset dataset_kwargs = ArrowReader(self._cache_dir, self.info).read( File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read files = self.get_file_instructions(name, instructions, split_infos) File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions file_instructions = make_file_instructions( File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions absolute_instructions = instruction.to_absolute(name2len) File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions] File 
"/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp> return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions] File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.') ValueError: Unknown split "validation". Should be one of ['train', 'test']. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux Ubuntu 18.04 - Python version: 3.8.12 - PyArrow version: 9.0.0 Data files [test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip) [test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4895/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4893/comments
https://api.github.com/repos/huggingface/datasets/issues/4893/events
https://github.com/huggingface/datasets/issues/4893
1,350,655,674
I_kwDODunzps5QgV66
4,893
Oversampling strategy for iterable datasets in `interleave_datasets`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
closed
false
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[ { "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n", "Great @ylacombe thanks ! I'm assigning you this issue", "Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)", "Hi @lhoestq,\r\n\r\nI actually have already wrote the code last time [on this commit](https://github.com/ylacombe/datasets/commit/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had change the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied with an `IterableDataset` because one only knows an iterable dataset is out of sample when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right know, the behavior is not consistent with an `IterableDataset` list or a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoid having too many samples, so I would recommand keeping that as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ", "Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`", "Hi @ylacombe let us know if we can help with anything :)", "Hi @lhoestq, I've finally made some advances in the matter. I've modified the `IterableDataset` behavior so that it aligns with the `Dataset` behavior as we have discussed. The documentation has been dealt with too. \r\nIt works as expected on my examples. However I'm having trouble figuring out how to test `interleave_datasets` on `test_iterable_datasets.py` as I have never worked with pytest. 
Could you help me with that or give me some pointers? \r\n", "Thanks @ylacombe :)\r\n\r\nUsing the `pytest` command, you can run all the functions in a Python file that start with \"test_*\" and make sure they raise no errors:\r\n```\r\npytest tests/test_iterable_dataset.py\r\n```\r\n\r\nIn our case it would be nice to define a `test_interleave_datasets_with_oversampling` function. This function can contain the code example that we mentioned earlier in this GitHub issue to make sure it works as expected.", "Resolved via #5036." ]
2022-08-25T10:06:55
2022-10-03T12:37:46
2022-10-03T12:37:46
MEMBER
null
null
null
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects. It would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy ```python >>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable >>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {})) >>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {})) >>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {})) >>> dataset = interleave_datasets([d1, d2, d3]) # is supported >>> [x["a"] for x in dataset] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet >>> [x["a"] for x in dataset] [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24] ``` This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py` I would be happy to share some guidance if anyone would like to give it a shot :)
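For illustration, a minimal sketch of the "all_exhausted" idea over plain Python lists. This is not the `datasets` implementation (the classes named above), just the bare cycling logic with made-up names; exhaustion is detected lazily via `StopIteration`, so the output is longer than the `interleave_datasets` example values shown above.

```python
from typing import Iterable, Iterator, List

def cycle_all_exhausted(sources: List[Iterable]) -> Iterator:
    """Round-robin over re-iterable sources (e.g. lists), restarting any exhausted
    source, until every source has been exhausted at least once.
    Assumes each source yields at least one item."""
    iterators = [iter(s) for s in sources]
    exhausted = [False] * len(sources)
    while not all(exhausted):
        for i in range(len(iterators)):
            try:
                yield next(iterators[i])
            except StopIteration:
                exhausted[i] = True
                iterators[i] = iter(sources[i])  # restart this source and keep going
                yield next(iterators[i])

print(list(cycle_all_exhausted([[0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24]])))
# [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24, 2, 11, 20]
```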
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4893/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4889/comments
https://api.github.com/repos/huggingface/datasets/issues/4889/events
https://github.com/huggingface/datasets/issues/4889
1,349,758,525
I_kwDODunzps5Qc649
4,889
torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.", "torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. (https://github.com/pytorch/audio/pull/2419, https://github.com/pytorch/audio/pull/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors.", "Do we have a solution for this now? Should we just upgrade to `torchaudio 0.12.0` then? ", "`datasets` supports `torchaudio` 0.12 if you have an environment that supports reading MP3 with `torchaudio`, i.e. if you have `ffmpeg>=4`", "Closing as we no longer use `torchaudio` for decoding." ]
2022-08-24T16:54:43
2023-03-02T15:33:05
2023-03-02T15:33:04
CONTRIBUTOR
null
null
null
## Describe the bug When loading Common Voice with torchaudio 0.11.0, the results are different from those with 0.12.1, which leads to problems in transformers; see: https://github.com/huggingface/transformers/pull/18749 ## Steps to reproduce the bug If you run the following code once with `torchaudio==0.11.0+cu102` and once with `torchaudio==0.12.1+cu102`, you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers. ```python #!/usr/bin/env python3 from datasets import load_dataset import datasets import numpy as np import torch import torchaudio print("torch vesion", torch.__version__) print("torchaudio vesion", torchaudio.__version__) save_audio = True load_audios = False if save_audio: ds = load_dataset("common_voice", "en", split="train", streaming=True) ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000)) ds_iter = iter(ds) sample = next(ds_iter) np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"]) print(sample["audio"]["array"]) if load_audios: array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy") print("Array 11 Shape", array_torch_11.shape) print("Array 11 abs sum", np.sum(np.abs(array_torch_11))) array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy") print("Array 12 Shape", array_torch_12.shape) print("Array 12 abs sum", np.sum(np.abs(array_torch_12))) ``` Having saved the tensors, the print output yields: ``` torch vesion 1.12.1+cu102 torchaudio vesion 0.12.1+cu102 Array 11 Shape (122880,) Array 11 abs sum 1396.4988 Array 12 Shape (123264,) Array 12 abs sum 1396.5193 ``` ## Expected results torchaudio 0.11.0 and 0.12.1 should yield the same results. ## Actual results See above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.1.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4889/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4888/comments
https://api.github.com/repos/huggingface/datasets/issues/4888/events
https://github.com/huggingface/datasets/issues/4888
1,349,447,521
I_kwDODunzps5Qbu9h
4,888
Dataset Viewer issue for subjqa
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it.", "Fixed \r\n\r\nhttps://huggingface.co/datasets/subjqa\r\n\r\n<img width=\"1040\" alt=\"Capture d’écran 2022-09-08 à 10 23 26\" src=\"https://user-images.githubusercontent.com/1676121/189073210-2a57ff88-8bb1-44bd-851e-0e75473cea3f.png\">\r\n" ]
2022-08-24T13:26:20
2022-09-08T08:23:42
2022-09-08T08:23:42
MEMBER
null
null
null
### Link https://huggingface.co/datasets/subjqa ### Description Getting the following error for this dataset: ``` Status code: 500 Exception: Status500Error Message: 2 or more items returned, instead of 1 ``` Not sure what's causing it though 🤔 ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4888/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/4886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4886/comments
https://api.github.com/repos/huggingface/datasets/issues/4886/events
https://github.com/huggingface/datasets/issues/4886
1,349,285,569
I_kwDODunzps5QbHbB
4,886
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
{ "login": "JeanKaddour", "id": 11850255, "node_id": "MDQ6VXNlcjExODUwMjU1", "avatar_url": "https://avatars.githubusercontent.com/u/11850255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JeanKaddour", "html_url": "https://github.com/JeanKaddour", "followers_url": "https://api.github.com/users/JeanKaddour/followers", "following_url": "https://api.github.com/users/JeanKaddour/following{/other_user}", "gists_url": "https://api.github.com/users/JeanKaddour/gists{/gist_id}", "starred_url": "https://api.github.com/users/JeanKaddour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JeanKaddour/subscriptions", "organizations_url": "https://api.github.com/users/JeanKaddour/orgs", "repos_url": "https://api.github.com/users/JeanKaddour/repos", "events_url": "https://api.github.com/users/JeanKaddour/events{/privacy}", "received_events_url": "https://api.github.com/users/JeanKaddour/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?", "Could you put something in place to catch these problems? I'm seeing this on another dataset consistently too and I guess I can't fix it in code?", "Hey,\r\n\r\nYes the notebook I used to upload this dataset can be found here: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing.\r\n\r\nIf you have time to regenerate the dataset, would be great.", "Sorry, maybe I wasn't clear enough that it's a different dataset `laion2B-multi-joined-translated-to-en`. I think there should be checks in the upload, tests on the server, or validation after download (hashes) to catch these problems.\r\n\r\nLots of bandwidth wasted otherwise! /cc @mariosasko", "Yes @alexjc sorry was more a reply to @JeanKaddour.\r\n\r\nAnd indeed it'd be great to have additional checks to avoid these errors. ", "cc @severo since such checks should probably be implemented on the datasets-server side.", "Hi,\r\n\r\nIt seems the problem is still persist. I have encountered the exact same problem using just 2 line of code above. \r\n\r\nThe error code is as follows:\r\n\r\n```\r\n發生例外狀況: DatasetGenerationError\r\nAn error occurred while generating the dataset\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\n File \"/code/ddpm_learn/train.py\", line 65, in <module>\r\n dataset = load_dataset(\"huggan/CelebA-HQ\", cache_dir=\"./CelebA-HQ\"\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```", "Yes for the moment refer to the notebook linked above if you want to create a HF dataset yourself", "Hi @NielsRogge ,\r\nI can help to push the dataset to the cloud. However, I cannot locate the situation so far. I wonder if \r\n1. the downloaded files so far has corruption s.t. the file cannot generate properly, or\r\n2. the downloaded files has no bug, the bug is caused by buggy upload program so that I can use what I have just downloaded to re-upload to cloud\r\n\r\nThank, \r\nAllan" ]
2022-08-24T11:24:21
2023-02-02T02:40:53
null
NONE
null
null
null
## Describe the bug Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('huggan/CelebA-HQ') ``` ## Expected results See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd ## Actual results ``` File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module> dataset = load_dataset('huggan/CelebA-HQ') File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset builder_instance.download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split for key, table in logging.tqdm( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables parquet_file = pq.ParquetFile(f) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__ self.reader.open( File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-2.4.1.dev0 - Platform: Ubuntu 18.04 - Python version: 3.10 - PyArrow version: pyarrow 9.0.0
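A minimal sketch of the kind of post-download integrity check discussed in the comments above. It is not part of `datasets`; it only relies on the public Parquet file layout (valid files begin and end with the 4-byte magic `PAR1`), and the function name and example path are hypothetical.

```python
import os

PARQUET_MAGIC = b"PAR1"  # valid Parquet files start and end with these 4 bytes

def looks_like_parquet(path: str) -> bool:
    """Cheap sanity check that a downloaded shard has a Parquet header and footer."""
    if os.path.getsize(path) < 12:  # 4-byte magic + 4-byte footer length + 4-byte magic
        return False
    with open(path, "rb") as f:
        header = f.read(4)
        f.seek(-4, os.SEEK_END)
        footer = f.read(4)
    return header == PARQUET_MAGIC and footer == PARQUET_MAGIC

# Example (hypothetical path): run over cached shards before retrying load_dataset.
# print(looks_like_parquet("/path/to/cached/shard.parquet"))
```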
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4886/timeline
null
null