url (string, 58-61 chars) | repository_url (string, 1 value) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 48-51 chars) | id (int64, 600M-2.19B) | node_id (string, 18-24 chars) | number (int64, 2-6.73k) | title (string, 1-290 chars) | user (dict) | labels (list, 0-4 items) | state (string, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list, 0-4 items) | milestone (dict) | comments (list, 0-30 items) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 3 values) | active_lock_reason (null) | draft (null) | pull_request (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 values)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5717
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5717/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5717/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5717/events
|
https://github.com/huggingface/datasets/issues/5717
| 1,658,729,866 |
I_kwDODunzps5i3jWK
| 5,717 |
Error when saving to disk a dataset of images
|
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately if it can help to better debug.",
"Hi! I didn't manage to reproduce this behavior, so sharing the dataset with us would help a lot. \r\n\r\n> My dataset is around 50K images, is this error might be due to a bad image?\r\n\r\nThis shouldn't be the case as we save raw data to disk without decoding it.",
"OK, thanks! The dataset is currently hosted on a gcs bucket. How would you like to proceed for sharing the link? ",
"You could follow [this](https://cloud.google.com/storage/docs/collaboration#browser) procedure or upload the dataset to Google Drive (50K images is not that much unless high-res) and send me an email with the link.",
"Thanks @mariosasko. I just sent you the GDrive link.",
"Thanks @jplu! I managed to reproduce the `TypeError` - it stems from [this](https://github.com/huggingface/datasets/blob/e3f4f124a1b118a5bfff5bae76b25a68aedbebbc/src/datasets/features/image.py#L258-L264) line, which can return a `ChunkedArray` (its mask is also chunked then, which Arrow does not allow) when the embedded data is too big to fit in a standard `Array`.\r\n\r\nI'm working on a fix.",
"@yairl-dn You should be able to bypass this issue by reducing `datasets.config.DEFAULT_MAX_BATCH_SIZE` (1000 by default)\r\n\r\nIn Datasets 3.0, the Image storage format will be simplified, so this should be easier to fix then.",
"The same error occurs with my save_to_disk() of Audio() items. I still get it with:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE=35\r\nfrom datasets import Features, Array2D, Value, Dataset, Sequence, Audio\r\n```\r\n\r\n```\r\nSaving the dataset (41/47 shards): 88%|██████████████████████████████████████████▉ | 297/339 [01:21<00:11, 3.65 examples/s]\r\nTraceback (most recent call last):\r\nFile \"/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py\", line 155, in <module>\r\ncreate_dataset(args)\r\nFile \"/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py\", line 137, in create_dataset\r\nhf_dataset.save_to_disk(args.outds)\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_dataset.py\", line 1532, in save_to_disk\r\nfor job_id, done, content in Dataset._save_to_disk_single(**kwargs):\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_dataset.py\", line 1563, in _save_to_disk_single\r\nwriter.write_table(pa_table)\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_writer.py\", line 574, in write_table\r\npa_table = embed_table_storage(pa_table)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2307, in embed_table_storage\r\narrays = [\r\n^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2308, in <listcomp>\r\nembed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 1831, in wrapper\r\nreturn pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 1831, in <listcomp>\r\nreturn pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2177, in embed_array_storage\r\nreturn feature.embed_storage(array)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/features/audio.py\", line 276, in embed_storage\r\nstorage = pa.StructArray.from_arrays([bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null())\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"pyarrow/array.pxi\", line 2850, in pyarrow.lib.StructArray.from_arrays\r\nFile \"pyarrow/array.pxi\", line 3290, in pyarrow.lib.c_mask_inverted_from_obj\r\nTypeError: Mask must be a pyarrow.Array of type boolean\r\n```",
"Similar to @jaggzh, setting `datasets.config.DEFAULT_MAX_BATCH_SIZE` did not help in my case (same error here but for different dataset: https://github.com/Stanford-AIMI/RRG24/issues/2).\r\n\r\nThis is also reproducible with this open dataset: https://huggingface.co/datasets/nlphuji/winogavil/discussions/1\r\n\r\nHere's some code to do so:\r\n```python\r\nimport datasets\r\n\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = 1\r\n\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"nlphuji/winogavil\")\r\n\r\nds.save_to_disk(\"temp\")\r\n```\r\n\r\nI've done some more debugging with `datasets==2.18.0` (which incorporates PR #6283 as suggested by @lhoestq in the above dataset discussion), and it seems like the culprit might now be these lines: https://github.com/huggingface/datasets/blob/ca8409a8bec4508255b9c3e808d0751eb1005260/src/datasets/table.py#L2111-L2115\r\n\r\nFrom what I understand (and apologies I'm new to pyarrow), for an Image or Audio feature, these lines recursively call `embed_array_storage` for a list of either feature, ending up in the feature's `embed_storage` function. For all values in the list, `embed_storage` reads the bytes if they're not already loaded. The issue is the list being passed to the first recursive call is `array.values` which are the underlying values of `array` regardless of `array`'s slicing (as influenced by parameters such as `datasets.config.DEFAULT_MAX_BATCH_SIZE`). This results in the same overflowing list of bytes that result in the ChunkedArray being returned in `embed_storage`. Even if the array weren't to overflow and this code ran without throwing an exception, it still seems incorrect to load all values if you ultimately only want some subset with `ListArray.from_arrays(offsets, values)`; it seems some wasted effort if those values thrown out will get loaded again in the next batch and vice versa for the current batch of values during later batches.\r\n\r\nMaybe there's a fix where you could pass a mask to `embed_storage` such that it only loads the values you ultimately want for the current batch? Curious to see if you agree with this diagnosis of the problem and if you think this fix is viable @mariosasko?",
"Would be nice if they have something similar to Dagshub's S3 sync; it worked like a charm for my bigger datasets.",
"I guess also the proposed masking solution simply enables `datasets.config.DEFAULT_MAX_BATCH_SIZE` by reducing the number of elements loaded, it does not address the underlying problem of trying to load all the images as bytes into a pyarrow array.\r\n\r\nI'm happy to turn this into an actual PR but here's what I've implemented locally at `tables.py:embed_array_storage` to fix the above test case (`nlphuji/winogavil`) and my own use case:\r\n```python\r\n elif pa.types.is_list(array.type):\r\n # feature must be either [subfeature] or Sequence(subfeature)\r\n # Merge offsets with the null bitmap to avoid the \"Null bitmap with offsets slice not supported\" ArrowNotImplementedError\r\n array_offsets = _combine_list_array_offsets_with_mask(array)\r\n\r\n # mask underlying struct array so array_values.to_pylist()\r\n # fills None (see feature.embed_storage)\r\n idxs = np.arange(len(array.values))\r\n idxs = pa.ListArray.from_arrays(array_offsets, idxs).flatten()\r\n mask = np.ones(len(array.values)).astype(bool)\r\n mask[idxs] = False\r\n mask = pa.array(mask)\r\n # indexing 0 might be problematic but not sure\r\n # how else to get arbitrary keys from a struct array\r\n array_keys = array.values[0].keys()\r\n # is array.values always a struct array?\r\n array_values = pa.StructArray.from_arrays(\r\n arrays=[array.values.field(k) for k in array_keys],\r\n names=array_keys,\r\n mask=mask,\r\n )\r\n if isinstance(feature, list):\r\n return pa.ListArray.from_arrays(array_offsets, _e(array_values, feature[0]))\r\n if isinstance(feature, Sequence) and feature.length == -1:\r\n return pa.ListArray.from_arrays(array_offsets, _e(array_values, feature.feature))\r\n```\r\n\r\nAgain though I'm new to pyarrow so this might not be the cleanest implementation, also I'm really not sure if there are other cases where this solution doesn't work. Would love to get some feedback from the hf folks!"
] | 2023-04-07T11:59:17 | 2024-03-12T14:15:59 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Hello!
I have an issue when I try to save my dataset of images to disk. The error I get is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_to_disk
for job_id, done, content in Dataset._save_to_disk_single(**kwargs):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1473, in _save_to_disk_single
writer.write_table(pa_table)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table
pa_table = embed_table_storage(pa_table)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2268, in embed_table_storage
arrays = [
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2269, in <listcomp>
embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2142, in embed_array_storage
return feature.embed_storage(array)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/features/image.py", line 269, in embed_storage
storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null())
File "pyarrow/array.pxi", line 2766, in pyarrow.lib.StructArray.from_arrays
File "pyarrow/array.pxi", line 2961, in pyarrow.lib.c_mask_inverted_from_obj
TypeError: Mask must be a pyarrow.Array of type boolean
```
My dataset is around 50K images; might this error be due to a bad image?
Thanks for the help.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset["train"].save_to_disk("./myds", num_shards=40)
```
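For reference, a sketch combining the two workarounds discussed in the comments above (more shards plus a smaller write batch size); the exact values are illustrative, not a confirmed fix:
```python
import datasets
from datasets import load_dataset

# Lower the Arrow write batch size (1000 by default), as suggested in the comments
datasets.config.DEFAULT_MAX_BATCH_SIZE = 100

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
# More shards means fewer images per written batch, which avoids the oversized
# (chunked) Arrow array that triggers the TypeError above
dataset["train"].save_to_disk("./myds", num_shards=50)
```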
### Expected behavior
Having my dataset properly saved to disk.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5717/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5716
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5716/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5716/events
|
https://github.com/huggingface/datasets/issues/5716
| 1,658,613,092 |
I_kwDODunzps5i3G1k
| 5,716 |
Handle empty audio
|
{
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example({\"path\": \"empty.wav\", \"bytes\": None})\r\n```\r\nBut without success.\r\n\r\nAlso, what version of `librosa` is installed in your env? (You can get this info with `python -c \"import librosa; print(librosa.__version__)`)\r\n\r\n",
"I'm closing this issue as the reproducer hasn't been provided."
] | 2023-04-07T09:51:40 | 2023-09-27T17:47:08 | 2023-09-27T17:47:08 |
NONE
| null | null | null |
Some audio paths exist but point to empty files, and an error is raised when reading them. How can the filter function be used to skip these empty audio paths?
When an audio file is empty, resampling breaks at:
`array, sampling_rate = sf.read(f)` followed by `array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
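A possible way to skip empty files with `filter` before decoding (a sketch; the "audiofolder" source, split, and column name are assumptions for illustration):
```python
import os
from datasets import load_dataset, Audio

ds = load_dataset("audiofolder", data_dir="/path/to/audio", split="train")

# Turn off decoding so filtering only inspects paths, not audio content
ds = ds.cast_column("audio", Audio(decode=False))
ds = ds.filter(
    lambda ex: ex["audio"]["path"] is not None
    and os.path.getsize(ex["audio"]["path"]) > 0
)

# Re-enable decoding (with resampling) once the empty files are gone
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
```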
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5716/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5715
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5715/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5715/events
|
https://github.com/huggingface/datasets/issues/5715
| 1,657,479,788 |
I_kwDODunzps5iyyJs
| 5,715 |
Return NumPy array (fixed length) mode in `__getitem__` instead of list
|
{
"login": "jungbaepark",
"id": 34066771,
"node_id": "MDQ6VXNlcjM0MDY2Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungbaepark",
"html_url": "https://github.com/jungbaepark",
"followers_url": "https://api.github.com/users/jungbaepark/followers",
"following_url": "https://api.github.com/users/jungbaepark/following{/other_user}",
"gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions",
"organizations_url": "https://api.github.com/users/jungbaepark/orgs",
"repos_url": "https://api.github.com/users/jungbaepark/repos",
"events_url": "https://api.github.com/users/jungbaepark/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungbaepark/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] | 2023-04-06T13:57:48 | 2023-04-20T17:16:26 | 2023-04-20T17:16:26 |
NONE
| null | null | null |
### Feature request
There is an old, well-known but easily forgotten problem in multiprocessing with the PyTorch DataLoader:
excessive RAM or shared-memory usage in PyTorch when num_workers > 1 and the dataset or dataloader returns "List" or "Dict" objects.
https://github.com/pytorch/pytorch/issues/13246
With Hugging Face datasets, unfortunately, the default return type is a list, so the problem comes up often unless something is configured to avoid it.
However, the issue goes away when the returned output has a fixed length.
Therefore, I request a mode that returns fixed-length outputs (e.g. NumPy arrays) rather than lists.
The API could look like this when loading a dataset:
```python
load_dataset(..., with_return_as_fixed_tensor=True)
```
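For comparison, the formatting API that the comment above points to already makes `__getitem__` return NumPy arrays (the dataset name here is just an example):
```python
from datasets import load_dataset

ds = load_dataset("mnist", split="train")
ds.set_format("np")  # __getitem__ now returns NumPy arrays instead of Python lists

sample = ds[0]   # dict of NumPy values
batch = ds[:8]   # columns come back as arrays rather than nested lists
```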
### Motivation
The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662
NumPy and Pandas do not seem to have this problem, even though both support string types.
(I'm not sure whether the Sequence feature of Hugging Face datasets can solve this problem as well.)
### Your contribution
I'll read it! Thanks.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5715/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5713
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5713/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5713/events
|
https://github.com/huggingface/datasets/issues/5713
| 1,657,141,251 |
I_kwDODunzps5ixfgD
| 5,713 |
ArrowNotImplementedError when loading dataset from the hub
|
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. The estimation is currently done using the first samples of the dataset (which can surely be improved). We should probably open an issue to fix this once and for all.\r\n\r\nAnyway for your specific dataset I'd suggest you to pass `num_shards` instead of `max_shard_size` for now, and make sure to have enough shards to end up with shards smaller than 2GB",
"Hi Quentin! Thanks a lot! Using `num_shards` instead of `max_shard_size` works as expected.\r\n\r\nIndeed the way you describe how the size is computed cannot really work with the dataset I'm building as all the image doesn't have the same resolution and then size. Opening an issue on this might be a good idea."
] | 2023-04-06T10:27:22 | 2023-04-06T13:06:22 | 2023-04-06T13:06:21 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Hello,
I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
Create the dataset and push it to the hub:
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB")
```
Then use it:
```python
from datasets import load_dataset
dataset = load_dataset("org/dataset-name")
```
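Following the suggestion in the comments, a sketch of the same push using an explicit shard count instead of `max_shard_size` (the shard count of 20 is illustrative):
```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
# Pick enough shards so each one stays well under the 2 GB limit
dataset.push_to_hub("org/dataset-name", private=True, num_shards={"train": 20})
```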
### Expected behavior
To properly download and use the pushed dataset.
Something else to note: I specified a maximum shard size of 1GB, but in the end the train split was pushed as a single file of almost 7GB.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5713/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5712
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5712/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5712/events
|
https://github.com/huggingface/datasets/issues/5712
| 1,655,972,106 |
I_kwDODunzps5itCEK
| 5,712 |
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
|
{
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/followers",
"following_url": "https://api.github.com/users/rcasero/following{/other_user}",
"gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcasero/subscriptions",
"organizations_url": "https://api.github.com/users/rcasero/orgs",
"repos_url": "https://api.github.com/users/rcasero/repos",
"events_url": "https://api.github.com/users/rcasero/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcasero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Closing since this is a duplicate of #5711",
"> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate"
] | 2023-04-05T16:47:10 | 2023-04-06T08:32:37 | 2023-04-05T17:17:44 |
NONE
| null | null | null |
### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
cache_dir=cache_dir,
aux_dir=aux_dir,
# download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
num_proc=18)
```
After upgrading datasets to 2.11.0, it fails with the following error:
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(filename) as fp:
x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
### Expected behavior
I would expect `load_dataset` to work with the custom dataset generation script in v2.11.0 the same way it does in 2.10.1, without `np.load()` raising a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5712/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5711
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5711/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5711/events
|
https://github.com/huggingface/datasets/issues/5711
| 1,655,971,647 |
I_kwDODunzps5itB8_
| 5,711 |
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
|
{
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/followers",
"following_url": "https://api.github.com/users/rcasero/following{/other_user}",
"gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcasero/subscriptions",
"organizations_url": "https://api.github.com/users/rcasero/orgs",
"repos_url": "https://api.github.com/users/rcasero/repos",
"events_url": "https://api.github.com/users/rcasero/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcasero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```python\r\nreturn np.load(xopen(filepath_or_buffer, \"rb\", use_auth_token=use_auth_token), *args, **kwargs)\r\n```\r\nshould fix the issue.\r\n\r\n(Maybe this is also worth doing a patch release afterward)",
"Thanks for reporting, @rcasero.\r\n\r\nI can have a look..."
] | 2023-04-05T16:46:49 | 2023-04-07T09:16:59 | 2023-04-07T09:16:59 |
NONE
| null | null | null |
### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
cache_dir=cache_dir,
aux_dir=aux_dir,
# download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
num_proc=18)
```
After upgrading datasets to 2.11.0, it fails with the following error:
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(embedding_filename) as fp:
x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
### Expected behavior
I would expect `load_dataset` to work with the custom dataset generation script in v2.11.0 the same way it does in 2.10.1, without `np.load()` raising a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5711/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5710
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5710/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5710/events
|
https://github.com/huggingface/datasets/issues/5710
| 1,655,703,534 |
I_kwDODunzps5isAfu
| 5,710 |
OSError: Memory mapping file failed: Cannot allocate memory
|
{
"login": "Saibo-creator",
"id": 53392976,
"node_id": "MDQ6VXNlcjUzMzkyOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saibo-creator",
"html_url": "https://github.com/Saibo-creator",
"followers_url": "https://api.github.com/users/Saibo-creator/followers",
"following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions",
"organizations_url": "https://api.github.com/users/Saibo-creator/orgs",
"repos_url": "https://api.github.com/users/Saibo-creator/repos",
"events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saibo-creator/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they are more experienced on this matter). Also, googling \"mmap cannot allocate memory\" returns some approaches to solving this problem."
] | 2023-04-05T14:11:26 | 2023-04-20T17:16:40 | 2023-04-20T17:16:40 |
NONE
| null | null | null |
### Describe the bug
Hello, I have a series of datasets of 5 GB each, 600 datasets in total, so together this makes about 3 TB.
When I try to load all 600 datasets into memory, I get the above error message.
Is this expected because I'm hitting the OS limit on memory mappings?
Thank you
```terminal
0_21/cache-e9c42499f65b1881.arrow
load_hf_datasets_from_disk: 82%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 494/600 [07:26<01:35, 1.11it/s]
Traceback (most recent call last):
File "example_load_genkalm_dataset.py", line 35, in <module>
multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay)
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process
genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length,
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset
hf_ds = load_from_disk(path_or_name)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk
arrow_table = concat_tables(
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables
tables = list(tables)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr>
table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix())
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file
memory_mapped_stream = pa.memory_map(filename)
File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
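Not a confirmed diagnosis, but one OS limit worth checking when memory-mapping hundreds of Arrow files (a Linux-specific sketch):
```python
# Each memory-mapped Arrow file counts against the kernel's per-process mapping
# limit, so the limit is worth checking when "Cannot allocate memory" appears
# despite free RAM:
with open("/proc/sys/vm/max_map_count") as f:
    print("vm.max_map_count =", f.read().strip())
# Raising it, e.g. `sudo sysctl -w vm.max_map_count=1048576`, is a commonly
# suggested remedy, though it is not a confirmed fix for this particular dataset.
```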
### Steps to reproduce the bug
Sorry, I cannot provide reproducible code, as the data is stored on my server and is too large to share.
### Expected behavior
I expect the 3 TB of data to be fully memory-mapped.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyArrow version: 11.0.0
- Pandas version: 1.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5710/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5709
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5709/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5709/events
|
https://github.com/huggingface/datasets/issues/5709
| 1,655,423,503 |
I_kwDODunzps5iq8IP
| 5,709 |
Manually made dataset info not taken into account
|
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually weird that when you push your dataset to the Hub, a `dataset_infos.json` file is created, because this file is deprecated and it should create `README.md` with the `dataset_info` field instead. Some keys are also deprecated, like \"supervised_keys\" and \"task_templates\".\r\n\r\nCan you please provide a toy reproducible example of how you create and push the dataset? And also why do you want to change this file, especially the number of bytes and examples?",
"Hi @polinaeterna Yes I have created the dataset with `Dataset.from_dict` applied some updates afterward and when I pushed to the hub I had a `dataset_infos.json` file and there was a `README.md` file as well.\r\n\r\nI didn't know that the JSON file was deprecated. So I have built my dataset with `ImageBuilder` instead and now it works like a charm without having to touch anything.\r\n\r\nI haven't succeed to reproduce the creation of the JSON file with a toy example, hence, I certainly did some mistakes when I have manipulated my dataset manually at first. My bad."
] | 2023-04-05T11:15:17 | 2023-04-06T08:52:20 | 2023-04-06T08:52:19 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Hello,
I'm manually building an image dataset with the `from_dict` approach, and I also build the features with the `cast_features` method. Once the dataset is created I push it to the Hub, and a default `dataset_infos.json` file seems to be added to the repo automatically at the same time. I then update it manually with all the missing info, but when I download the dataset the info is never updated.
Former `dataset_infos.json` file:
```
{"default": {
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"_type": "Image"
},
"labels": {
"names": [
"Fake",
"Real"
],
"_type": "ClassLabel"
}
},
"splits": {
"validation": {
"name": "validation",
"num_bytes": 901010094.0,
"num_examples": 3200,
"dataset_name": null
},
"train": {
"name": "train",
"num_bytes": 901010094.0,
"num_examples": 3200,
"dataset_name": null
}
},
"download_size": 1802008414,
"dataset_size": 1802020188.0,
"size_in_bytes": 3604028602.0
}}
```
After I update it manually it looks like:
```
{
"bstrai--deepfake-detection":{
"description":"",
"citation":"",
"homepage":"",
"license":"",
"features":{
"image":{
"decode":true,
"id":null,
"_type":"Image"
},
"labels":{
"num_classes":2,
"names":[
"Fake",
"Real"
],
"id":null,
"_type":"ClassLabel"
}
},
"supervised_keys":{
"input":"image",
"output":"labels"
},
"task_templates":[
{
"task":"image-classification",
"image_column":"image",
"label_column":"labels"
}
],
"config_name":null,
"splits":{
"validation":{
"name":"validation",
"num_bytes":36627822,
"num_examples":123,
"dataset_name":"deepfake-detection"
},
"train":{
"name":"train",
"num_bytes":901023694,
"num_examples":3200,
"dataset_name":"deepfake-detection"
}
},
"download_checksums":null,
"download_size":937562209,
"dataset_size":937651516,
"size_in_bytes":1875213725
}
}
```
Is there anything I should do to have the new info in `dataset_infos.json` taken into account? Or is it not possible yet?
Thanks!
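For what it's worth, a sketch of declaring the features up front so the metadata pushed to the Hub is generated rather than hand-edited (the label names match this dataset, but the tiny placeholder images and repo name are illustrative):
```python
import numpy as np
from PIL import Image as PILImage
from datasets import ClassLabel, Dataset, Features, Image

features = Features({
    "image": Image(),
    "labels": ClassLabel(names=["Fake", "Real"]),
})
# Tiny placeholder images just to keep the sketch self-contained
imgs = [PILImage.fromarray(np.zeros((8, 8, 3), dtype=np.uint8)) for _ in range(2)]
ds = Dataset.from_dict({"image": imgs, "labels": [0, 1]}, features=features)
print(ds.info.features)
# ds.push_to_hub("org/dataset-name")  # would regenerate split sizes/examples in the repo metadata
```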
### Steps to reproduce the bug
-
### Expected behavior
-
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5709/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5708
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5708/events
|
https://github.com/huggingface/datasets/issues/5708
| 1,655,023,642 |
I_kwDODunzps5ipaga
| 5,708 |
Dataset sizes are in MiB instead of MB in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.",
"yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n",
"I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files",
"Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example",
"First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.",
"The bulk edit parsed 751 canonical datasets and updated 166.",
"Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 à 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n",
"I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [x] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [x] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [x] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6",
"should we force merge the PR and close this issue?",
"I merged the PRs for \"scicite\" and \"scifact\"."
] | 2023-04-05T06:36:03 | 2023-12-21T10:20:28 | 2023-12-21T10:20:27 |
MEMBER
| null | null | null |
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932)
<img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png">
TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:`
- [x] Bulk edit on the Hub to fix this in all canonical datasets
- [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
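For reference, a minimal sketch of the MiB → MB conversion the bulk edit needs to apply (the example value is illustrative, not taken from a specific dataset card):
```python
# A value taken from a dataset card, labeled "MB" but actually mebibytes (MiB)
size_mib = 28.21

# Convert mebibytes (2**20 bytes) to megabytes (10**6 bytes)
size_mb = size_mib * 2**20 / 10**6  # ~29.58 MB

# Switch to GB for large values, as done in the bulk edit
print(f"{size_mb:.2f} MB" if size_mb < 1000 else f"{size_mb / 1000:.2f} GB")
```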
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5706
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5706/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5706/events
|
https://github.com/huggingface/datasets/issues/5706
| 1,653,545,835 |
I_kwDODunzps5ijxtr
| 5,706 |
Support categorical data types for Parquet
|
{
"login": "kklemon",
"id": 1430243,
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kklemon",
"html_url": "https://github.com/kklemon",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"repos_url": "https://api.github.com/users/kklemon/repos",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false |
{
"login": "mhattingpete",
"id": 22622299,
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhattingpete",
"html_url": "https://github.com/mhattingpete",
"followers_url": "https://api.github.com/users/mhattingpete/followers",
"following_url": "https://api.github.com/users/mhattingpete/following{/other_user}",
"gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions",
"organizations_url": "https://api.github.com/users/mhattingpete/orgs",
"repos_url": "https://api.github.com/users/mhattingpete/repos",
"events_url": "https://api.github.com/users/mhattingpete/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhattingpete/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mhattingpete",
"id": 22622299,
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhattingpete",
"html_url": "https://github.com/mhattingpete",
"followers_url": "https://api.github.com/users/mhattingpete/followers",
"following_url": "https://api.github.com/users/mhattingpete/following{/other_user}",
"gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions",
"organizations_url": "https://api.github.com/users/mhattingpete/orgs",
"repos_url": "https://api.github.com/users/mhattingpete/repos",
"events_url": "https://api.github.com/users/mhattingpete/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhattingpete/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do",
"@kklemon did you implement this? Otherwise I would like to give it a try",
"@mhattingpete no, I hadn't time for this so far. Feel free to work on this.",
"#self-assign",
"This would be super useful, so +1. \r\n\r\nAlso, these prior issues/PRs seem relevant: \r\nhttps://github.com/huggingface/datasets/issues/1906\r\nhttps://github.com/huggingface/datasets/pull/1936",
"Hi, this is a really useful feature, has this been implemented yet? ",
"Hey folks -- I'm thinking about trying a PR for this. As far as I can tell the only sticky point is that auto-generation of features from a pyarrow schema will fail under the current `generate_from_arrow_type` function because there is no encoding of the categorical string label -> int map in the pa.dictionary type itself; that is stored with the full array. \r\n\r\nI see two ways to solve this. Option 1 is to require datasets with categorical types to use pyarrow schema metadata to encode the entire HF feature dictionary, that way categorical types don't ever need to be inferred from the pa type alone. The downside to this is that it means that these datasets will be a bit brittle, as if the feature encoding API ever changes, they will suddenly be unloadable. \r\n\r\nThe other option is to modify `generate_from_arrow_type` to take per-field metadata, and include just that metadata (the category labels) in the schema metadata. \r\n\r\nDoes anyone at HF have any preference on these two (or alternate) approaches?",
"Maybe we don't need to store the string label -> int map in the categorical for the corresponding `datasets` feature ?",
"I think that does need to be stored in the Feature object. Similar to how\r\n`ClassLabel` needs the class names for some of the provided default\r\nfunctionality (e.g., encoding or decoding values) here, a categorical\r\nfeature needs the same. Without storing that information, would you suggest\r\nthat categorical features just be stored internally as integer arrays?\r\n\r\nOn Fri, Sep 8, 2023, 5:37 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Maybe we don't need to store the string label -> int map in the\r\n> categorical for the corresponding datasets feature ?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711375652>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5XZV3RA4GBRVBLJN72LXZLROZANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Well IIRC you can concatenate two Arrow arrays with different dictionaries together. But for `datasets` would mean updating the `datasets` features when concatenating two arrays of the same type, which is not supported right now. That's why if there is a way to have it without storing the mapping in the feature object it would be nice.\r\n\r\nFor decoding we do have the string<->integer mapping from the array `dictionary` attribute so we're fine. For encoding I think it can work if we only encode when converting python objects to pyarrow in `TypedSequence.__arrow_array__` in `arow_writer.py`. It can work by converting the python objects to a pyarrow array and then use the `dictionary_encode` method.\r\n\r\nAnother concern about concatenation: I noticed **pyarrow creates the new dictionary and indices in memory** when concatenating two dictionary encoded arrays. This can be a problem for big datastets, and we should probably use ChunkedArray objects instead. This can surely be taken care of in `array_concat` in `table.py`\r\n\r\ncc @mariosasko in case you have other ideas\r\n\r\n",
"Hmm, that is a good point. What if we implemented this feature first in a\r\nmanner that didn't allow concatenation of arrays with different index to\r\ncategory maps? Then concatenation would be very straightforward, and I\r\nthink this is reasonable if the index to category map is stored in the\r\nschema as well. Obviously, this is limited in how folks could use the\r\nfeature, but they can always fall back to raw strings if needed, and as\r\nusage increases we'll have more data to see what the right solution here\r\nis.\r\n\r\nOn Fri, Sep 8, 2023, 6:49 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Well IIRC you can concatenate two Arrow arrays with different dictionaries\r\n> together. But for datasets would mean updating the datasets features when\r\n> concatenating two arrays of the same type, which is not supported right\r\n> now. That's why if there is a way to have it without storing the mapping in\r\n> the feature object it would be nice.\r\n>\r\n> For decoding we do have the string<->integer mapping from the array\r\n> dictionary attribute so we're fine. For encoding I think it can work if\r\n> we only encode when converting python objects to pyarrow in\r\n> TypedSequence.__arrow_array__ in arow_writer.py. It can work by\r\n> converting the python objects to a pyarrow array and then use the\r\n> dictionary_encode method.\r\n>\r\n> Another concern about concatenation: I noticed *pyarrow creates the new\r\n> dictionary and indices in memory* when concatenating two dictionary\r\n> encoded arrays. This can be a problem for big datastets, and we should\r\n> probably use ChunkedArray objects instead. This can surely be taken care of\r\n> in array_concat in table.py\r\n>\r\n> cc @mariosasko <https://github.com/mariosasko> in case you have other\r\n> ideas\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711468806>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X4E2KC2IXLDPYR3XZLXZLZ2FANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@lhoestq @mariosasko just re-pinging on this so I can push forward further here. What are your thoughts on disallowing concatenation of categorical arrays for now such that the index to category map can be stored in the schema metadata? And/or other approaches that should be taken here?\r\n",
"I think the easiest for now would be to add a `dictionary_decode` argument to the parquet loaders that would convert the dictionary type back to strings when set to `True`, and make `dictionary_decode=False` raise `NotImplementedError` for now if there are dictionary type columns. Would that be ok as a first step ?",
"I mean, that would certainly be easiest but I don't think it really solves this issue in a meaningful way. This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types. Given that those savings are what is of real interest here, I think keeping it explicit that it is not supported (and forcing the user to do the conversion) might actually be better that way this problem stays top of mind.\r\n\r\nIs there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?",
"> This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types.\r\n\r\nThere's already a ClassLabel type that does pretty much the same thing (store as integer instead of string) if it can help\r\n\r\n> Is there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?\r\n\r\nYea we do concatenation quite often (e.g. in `map`) so I don't think this is a viable option",
"But how often in the cases where concatenation is done now would the\r\ncategorical label vocabulary actually change? I think it would be in\r\nbasically none of them. And in such cases, concatenation remains very easy,\r\nno?\r\n\r\nOn Fri, Sep 22, 2023, 12:02 PM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> This just changes the burden from string conversion from the user to HF\r\n> Datasets, but doesn't actually enable HF Datasets to take advantage of the\r\n> (very significant) storage and associated speed/memory savings offered by\r\n> using categorical types.\r\n>\r\n> There's already a ClassLabel type that does pretty much the same thing\r\n> (store as integer instead of string) if it can help\r\n>\r\n> Is there an objection with supporting categorical types explicitly through\r\n> the medium I outlined above, where the error is raised if you try to concat\r\n> two differently typed categorical columns?\r\n>\r\n> Yea we do concatenation quite often (e.g. in map) so I don't think this\r\n> is a viable option\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1731667012>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X5CGWFXDCML6UKCWYLX3WZBXANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Arrow IPC seems to require unified dictionaries anyway so actually we could surely focus only on this use case indeed @mmcdermott \r\n\r\nSo defining a new Feature type in `datasets` that contains the dictionary mapping should be fine (and concatenation would work out of the box), and it should also take care of checking that the data it encodes/decodes has the right dictionary. Do you think it can be done without impacting iterating speed for the other types @mariosasko ?\r\n\r\nRight now we have little bandwidth to work in this kind of things though"
] | 2023-04-04T09:45:35 | 2023-09-22T16:53:37 | null |
NONE
| null | null | null |
### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns:
```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import load_dataset
# Create categorical sample DataFrame
df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category')
df.to_parquet('data.parquet')
# Read back as pyarrow table
table = pq.read_table('data.parquet')
print(table.schema)
# type: dictionary<values=string, indices=int32, ordered=0>
# Load with huggingface datasets
load_dataset('parquet', data_files='data.parquet')
```
Error:
```
Traceback (most recent call last):
File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single
writer.write_table(table)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
NotImplementedError
```
### Motivation
Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow`, can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. The lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature.
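Until then, a possible workaround (just a sketch, not part of the proposal) is to decode the dictionary-encoded columns back to plain strings before handing the data to `datasets`:
```python
import pyarrow.parquet as pq
from datasets import Dataset

# Read the Parquet file and cast the categorical (dictionary-encoded) column back to strings
df = pq.read_table("data.parquet").to_pandas()
df["type"] = df["type"].astype(str)  # drops the categorical dtype

# Build the dataset from the plain-string DataFrame (loses the storage savings, but loads fine)
ds = Dataset.from_pandas(df)
print(ds.features)
```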
### Your contribution
I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5706/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5705
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5705/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5705/events
|
https://github.com/huggingface/datasets/issues/5705
| 1,653,500,383 |
I_kwDODunzps5ijmnf
| 5,705 |
Getting next item from IterableDataset took forever.
|
{
"login": "HongtaoYang",
"id": 16588434,
"node_id": "MDQ6VXNlcjE2NTg4NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HongtaoYang",
"html_url": "https://github.com/HongtaoYang",
"followers_url": "https://api.github.com/users/HongtaoYang/followers",
"following_url": "https://api.github.com/users/HongtaoYang/following{/other_user}",
"gists_url": "https://api.github.com/users/HongtaoYang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HongtaoYang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HongtaoYang/subscriptions",
"organizations_url": "https://api.github.com/users/HongtaoYang/orgs",
"repos_url": "https://api.github.com/users/HongtaoYang/repos",
"events_url": "https://api.github.com/users/HongtaoYang/events{/privacy}",
"received_events_url": "https://api.github.com/users/HongtaoYang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...",
"Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beheaviour."
] | 2023-04-04T09:16:17 | 2023-04-05T23:35:41 | 2023-04-05T23:35:41 |
NONE
| null | null | null |
### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda example: example['text'].startswith('Ar'))
print(next(iter(dataset)))
```
However, this function never finishes. I waited ~10 minutes; the function was still running, so I killed the process. I'm now using `line_profiler` to profile how long it takes to return one item. I'll be patient and wait for as long as it needs.
I suspect the filter operation is the reason it takes so long. Can I get some possible explanations for this?
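For context, `filter` on a streaming dataset is lazy: nothing is read until you iterate, and the first `next()` call has to download and scan rows until one matches the predicate. A small sketch (the data path and predicate come from the snippet above) to separate download speed from filter selectivity:
```python
import time
from datasets import load_dataset

ds = load_dataset("path/to/datafiles", split="train", streaming=True)

# First, check how quickly raw examples stream in (no filter involved)
start = time.time()
next(iter(ds))
print(f"first raw example after {time.time() - start:.1f}s")

# The filtered stream scans rows until one matches the predicate, so the time to
# the first item depends on how rare matches are and on download/decoding speed
filtered = ds.filter(lambda example: example["text"].startswith("Ar"))
start = time.time()
next(iter(filtered))
print(f"first matching example after {time.time() - start:.1f}s")
```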
### Steps to reproduce the bug
Unfortunately without my data files, there is no way to reproduce this bug.
### Expected behavior
With `IterableDataset`, I expect the first item to be returned instantly.
### Environment info
- datasets version: 2.11.0
- python: 3.7.12
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5705/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5702
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5702/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5702/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5702/events
|
https://github.com/huggingface/datasets/issues/5702
| 1,653,104,720 |
I_kwDODunzps5iiGBQ
| 5,702 |
Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None?
|
{
"login": "gitforziio",
"id": 10508116,
"node_id": "MDQ6VXNlcjEwNTA4MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/10508116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gitforziio",
"html_url": "https://github.com/gitforziio",
"followers_url": "https://api.github.com/users/gitforziio/followers",
"following_url": "https://api.github.com/users/gitforziio/following{/other_user}",
"gists_url": "https://api.github.com/users/gitforziio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gitforziio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gitforziio/subscriptions",
"organizations_url": "https://api.github.com/users/gitforziio/orgs",
"repos_url": "https://api.github.com/users/gitforziio/repos",
"events_url": "https://api.github.com/users/gitforziio/events{/privacy}",
"received_events_url": "https://api.github.com/users/gitforziio/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"string\"),\r\n \"text_with_idxes\": {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": Value(\"int64\")\r\n }\r\n})\r\n```\r\n\r\nbut you'd have to reformat your data fiels or define a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) to apply the appropriate parsing.\r\n\r\nAlternatively we could explore supporting the Arrow [Union](https://arrow.apache.org/docs/python/generated/pyarrow.UnionType.html) type which could solve this issue, but I don't know if it's well supported in python and with the rest of the ecosystem like Parquet",
"@lhoestq Thank you! I further wonder if it's possible to use list subscripts as keys of a feature? Like\r\n```python\r\nfeatures = Features({\r\n 0: Value(\"string\"),\r\n 1: {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": [Value(\"int64\")]\r\n },\r\n 2: Value(\"string\"),\r\n # ...\r\n})\r\n```",
"Column names need to be strings, so you could use \"1\", \"2\", etc. or give appropriate column names",
"@lhoestq Got it. Thank you!"
] | 2023-04-04T03:20:43 | 2023-04-05T14:15:18 | 2023-04-05T14:15:17 |
NONE
| null | null | null |
### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it's possible (and how one would go about it) to define a `datasets.Sequence` element in `datasets.Features` that could potentially be either a dict, a str, or None.
Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below:
```json
[
[
{"text":"老妇人","idxes":[0,1,2]},null,{"text":"跪","idxes":[3]},null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,null,null,null,null,null,null,null,null,null],
[
{"text":"那些水","idxes":[13,14,15]},null,{"text":"舀","idxes":[11]},null,null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,{"text":"出","idxes":[12]},null,null,null,null,null,null,null],
[
{"text":"水","idxes":[38]},
null,
{"text":"舀","idxes":[40]},
"假", // note this is just a standalone string
null,null,null,{"text":"坑里","idxes":[35,36]},null,null,null,null,null,null,null,null,null,null]]
```
### Motivation
I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features).
```json
{"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]}
```
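For what it's worth, a sketch of one way this could be expressed today with nullable fields instead of a union type (the field names other than "text"/"idxes" are my own invention, and the data files would need to be reformatted to fill the unused fields with None):
```python
from datasets import Features, Sequence, Value

# Each of the 18 slots is a struct whose fields are all nullable; whichever field
# does not apply for a given element is simply left as None when writing the data.
slot = {
    "text": Value("string"),           # None when the slot holds a bare string or null
    "idxes": Sequence(Value("int64")),
    "literal": Value("string"),        # holds the standalone-string case (e.g. "假"), else None
}

features = Features({
    "qid": Value("string"),
    "context": Value("string"),
    "outputs": [[slot]],  # a list of rows, each row being a list of 18 slots
})
```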
### Your contribution
I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 .
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5702/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5699
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5699/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5699/events
|
https://github.com/huggingface/datasets/issues/5699
| 1,652,437,419 |
I_kwDODunzps5ifjGr
| 5,699 |
Issue when wanting to split in memory a cached dataset
|
{
"login": "FrancoisNoyez",
"id": 47528215,
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancoisNoyez",
"html_url": "https://github.com/FrancoisNoyez",
"followers_url": "https://api.github.com/users/FrancoisNoyez/followers",
"following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions",
"organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs",
"repos_url": "https://api.github.com/users/FrancoisNoyez/repos",
"events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)"
] | 2023-04-03T17:00:07 | 2023-04-04T16:52:42 | null |
NONE
| null | null | null |
### Describe the bug
**In the 'train_test_split' method of the Dataset class** (defined in datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, in order to check whether we can simply return / work from cached data. But if we can't use cached data, we carry on with the rest of the method, except those two values are not None anymore, which conflicts with the use of the 'keep_in_memory' parameter down the line.
Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".**
Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached, which is inconvenient when one is just running experiments with no intention of caching the result.
Aside from this being inconvenient, **the code that leads up to that situation seems simply wrong** to me: the input variables should not be modified in a way that overrides the user's intention just to perform a check, when that check can fail and the user's original intention is needed to proceed in that case.
To fix this, I suggest using other variables / other variable names to hold the value(s) needed to perform the check, so as not to change the originally input values needed by the rest of the method's code.
Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: shouldn't 'keep_in_memory' prevail anyway, since it specifies that the user does not want caching, making the value of 'indices_cache_file_name' irrelevant? This is indeed what happens further down in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, the value of 'indices_cache_file_name' does not matter; the data will be written to a stream buffer anyway.
Hence I suggest removing the raising of this exception in those circumstances, notably in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods.
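To make the suggestion concrete, here is a small self-contained sketch of the idea (the helper and its parameters are hypothetical, not the actual code of 'train_test_split'):
```python
import os

def resolve_split_cache(train_name, test_name, candidate_train, candidate_test, load_from_cache_file=True):
    """Hypothetical helper: check for cached split indices without mutating the caller's arguments.

    The lookup uses local 'candidate_*' names (e.g. built from fingerprints); if the cache
    misses, the original (possibly None) names are returned unchanged, so a later call to
    .select(keep_in_memory=True, indices_cache_file_name=None) stays valid.
    """
    lookup_train = train_name or candidate_train
    lookup_test = test_name or candidate_test
    cache_hit = (
        load_from_cache_file
        and os.path.exists(lookup_train)
        and os.path.exists(lookup_test)
    )
    return cache_hit, train_name, test_name
```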
### Steps to reproduce the bug
```python
import datasets
def generate_examples():
for i in range(10):
yield {"id": i}
dataset_ = datasets.Dataset.from_generator(
generate_examples,
keep_in_memory=False,
)
dataset_.train_test_split(
test_size=3,
shuffle=False,
keep_in_memory=True,
train_indices_cache_file_name=None,
test_indices_cache_file_name=None,
)
```
### Expected behavior
The result of the above code should be a DatasetDict instance.
Instead, we get the following exception stack:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset_.train_test_split(
2 test_size=3,
3 shuffle=False,
4 keep_in_memory=True,
5 train_indices_cache_file_name=None,
6 test_indices_cache_file_name=None,
7 )
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs)
521 self_format = {
522 "type": self._format_type,
523 "format_kwargs": self._format_kwargs,
524 "columns": self._format_columns,
525 "output_all_columns": self._output_all_columns,
526 }
527 # apply actual function
--> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
530 # re-apply format to the output
File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
507 validate_fingerprint(kwargs[fingerprint_name])
509 # Call actual function
--> 511 out = func(dataset, *args, **kwargs)
513 # Update fingerprint of in-place transforms + update in-place history of transforms
515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint)
4425 test_indices = permutation[:n_test]
4426 train_indices = permutation[n_test : (n_test + n_train)]
-> 4428 train_split = self.select(
4429 indices=train_indices,
4430 keep_in_memory=keep_in_memory,
4431 indices_cache_file_name=train_indices_cache_file_name,
4432 writer_batch_size=writer_batch_size,
4433 new_fingerprint=train_new_fingerprint,
4434 )
4435 test_split = self.select(
4436 indices=test_indices,
4437 keep_in_memory=keep_in_memory,
(...)
4440 new_fingerprint=test_new_fingerprint,
4441 )
4443 return DatasetDict({"train": train_split, "test": test_split})
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs)
521 self_format = {
522 "type": self._format_type,
523 "format_kwargs": self._format_kwargs,
524 "columns": self._format_columns,
525 "output_all_columns": self._output_all_columns,
526 }
527 # apply actual function
--> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
530 # re-apply format to the output
File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
507 validate_fingerprint(kwargs[fingerprint_name])
509 # Call actual function
--> 511 out = func(dataset, *args, **kwargs)
513 # Update fingerprint of in-place transforms + update in-place history of transforms
515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3645 """Create a new dataset with rows selected following the list/array of indices.
3646
3647 Args:
(...)
3676 ```
3677 """
3678 if keep_in_memory and indices_cache_file_name is not None:
-> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.")
3681 if len(self.list_indexes()) > 0:
3682 raise DatasetTransformationNotAllowedError(
3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it."
3684 )
ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both.
```
### Environment info
- `datasets` version: 2.11.1.dev0
- Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
***
***
EDIT:
Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5699/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5698
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5698/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5698/events
|
https://github.com/huggingface/datasets/issues/5698
| 1,652,183,611 |
I_kwDODunzps5ielI7
| 5,698 |
Add Qdrant as another search index
|
{
"login": "kacperlukawski",
"id": 2649301,
"node_id": "MDQ6VXNlcjI2NDkzMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kacperlukawski",
"html_url": "https://github.com/kacperlukawski",
"followers_url": "https://api.github.com/users/kacperlukawski/followers",
"following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}",
"gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions",
"organizations_url": "https://api.github.com/users/kacperlukawski/orgs",
"repos_url": "https://api.github.com/users/kacperlukawski/repos",
"events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}",
"received_events_url": "https://api.github.com/users/kacperlukawski/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"@mariosasko I'd appreciate your feedback on this. "
] | 2023-04-03T14:25:19 | 2023-04-11T10:28:40 | null |
CONTRIBUTOR
| null | null | null |
### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another available search index, so users can directly build an index from a dataset. Currently, only FAISS and ElasticSearch are supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database, such as Qdrant, is a different kind of tool: it is based on similarity (like FAISS) but is not limited to a single machine. That makes a vector database well-suited for bigger datasets and for collaboration when several people want to access a particular dataset.
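For illustration, a rough sketch of what such an integration could enable, written here against the standalone `qdrant-client` API (the collection name, vector size, and column names are assumptions, and exact client parameters may differ between versions):
```python
from datasets import Dataset
from qdrant_client import QdrantClient
from qdrant_client.http import models

# Toy dataset with an embeddings column (in practice these would come from a model)
ds = Dataset.from_dict({"text": ["first doc", "second doc"], "embeddings": [[0.1] * 4, [0.9] * 4]})

client = QdrantClient(":memory:")  # or a remote server shared by several users
client.recreate_collection(
    collection_name="my_dataset",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)
client.upsert(
    collection_name="my_dataset",
    points=[
        models.PointStruct(id=i, vector=row["embeddings"], payload={"text": row["text"]})
        for i, row in enumerate(ds)
    ],
)

hits = client.search(collection_name="my_dataset", query_vector=[0.1] * 4, limit=1)
print(hits[0].payload["text"])
```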
### Your contribution
I can provide a PR implementing that functionality on my own.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5698/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5696
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5696/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5696/events
|
https://github.com/huggingface/datasets/issues/5696
| 1,651,707,008 |
I_kwDODunzps5icwyA
| 5,696 |
Shuffle a sharded iterable dataset without seed can lead to duplicate data
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-04-03T09:40:03 | 2023-04-04T14:58:18 | 2023-04-04T14:58:18 |
MEMBER
| null | null | null |
As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one.
This can happen only when you have a number of shards that is a factor of the number of nodes.
The current workaround is to always set a `seed` in `.shuffle()`
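A minimal example of the workaround (the dataset name is just illustrative):
```python
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)

# Passing an explicit seed makes every node shuffle the shard list identically,
# so each shard ends up assigned to exactly one node
ds = ds.shuffle(seed=42, buffer_size=10_000)
```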
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5696/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5695
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5695/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5695/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5695/events
|
https://github.com/huggingface/datasets/issues/5695
| 1,650,974,156 |
I_kwDODunzps5iZ93M
| 5,695 |
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError
|
{
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid these issues.\r\n\r\nNote that currently the row group size is simply defined by the number of rows `datasets.config.DEFAULT_MAX_BATCH_SIZE`, so reducing this value could let you have parquet files bigger than 2GB and with row groups lower than 2GB.\r\n\r\nWould it be possible for you to re-upload the dataset with the default shard size 500MB ?",
"Hey, thanks for the reply! I've since switched to working with the locally-saved dataset (which works).\r\nMaybe it makes sense to show a warning for uploads with large shard sizes? Since the functionality completely breaks (due to the PyArrow bug).",
"Just tried uploading the same dataset with 500MB shards, I get an errors 4 hours in:\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 25%|██▍ | 358/1453 [4:40:31<14:18:00, 47.01s/it]\r\nTraceback (most recent call last):\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 344, in _inner_upload_lfs_object\r\n return _upload_lfs_object(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _upload_lfs_object\r\n lfs_upload(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 254, in lfs_upload\r\n _upload_multi_part(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 374, in _upload_multi_part\r\n hf_raise_for_status(part_upload_res)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 301, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 46, in __init__\r\n server_data = response.json()\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/requests/models.py\", line 899, in json\r\n return complexjson.loads(\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"process_wit.py\", line 146, in <module>\r\n dataset.push_to_hub(FINAL_PATH, max_shard_size=\"500MB\", private=False)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1534, in push_to_hub\r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 4804, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 281, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 2593, in upload_file\r\n commit_info = self.create_commit(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 2411, in create_commit\r\n upload_lfs_files(\r\n File 
\"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 351, in upload_lfs_files\r\n thread_map(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/std.py\", line 1178, in __iter__\r\n for obj in iterable:\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield fs.pop().result()\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 444, in result\r\n return self.__get_result()\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 389, in __get_result\r\n raise self._exception\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 346, in _inner_upload_lfs_object\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'data/train-00358-of-01453-22a5cc8b3eb12be3.parquet' to the Hub.\r\n```\r\nLocal saves do work, however.",
"Hmmm that was probably an intermitent bug, you can resume the upload by re-running push_to_hub",
"Leaving this other error here for the record, which occurs when I load the +700GB dataset from the hub with shard sizes of 500MB:\r\n\r\n```\r\n Traceback (most recent call last): \r\n File \"/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py\", line 1860, in _prepare_split_single\r\n for _, table in generator:\r\n File \"/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 69, in _generate_tables\r\n for batch_idx, record_batch in enumerate(\r\n File \"pyarrow/_parquet.pyx\", line 1323, in iter_batches\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\nOSError: Corrupt snappy compressed data.\r\n```\r\nI will probably switch back to the local big dataset or shrink it."
] | 2023-04-02T14:42:44 | 2023-04-11T09:17:54 | 2023-04-10T08:04:04 |
NONE
| null | null | null |
### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the error (it might take a while as the dataset has ~170GB):
```python
from datasets import load_dataset
dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True)
```
Stack trace:
```
(torch-multimodal) bash-4.2$ python test.py
Downloading and preparing dataset None/None to /cluster/work/cotterell/tamariucai/HuggingfaceDatasets/theodor1289___parquet/theodor1289--wit-7a3e984414a86a0f/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 491.68it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 16.93it/s]
Traceback (most recent call last):
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/cluster/work/cotterell/tamariucai/multimodal-mirror/examples/test.py", line 2, in <module>
dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True)
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
The dataset is loaded in variable `dataset`.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.4
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5695/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5694
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5694/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5694/events
|
https://github.com/huggingface/datasets/issues/5694
| 1,650,467,793 |
I_kwDODunzps5iYCPR
| 5,694 |
Dataset configuration
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] |
open
| false | null |
[] | null |
[
"Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to modify it in the UI on HF, and for validation on commit",
"From internal discussions we agreed to go with the YAML approach, since it's the one that seems more appropriate to be modified by a human on the Hub or locally (while JSON e.g. for models are usually created programmatically).",
"Current format:\r\n```yaml\r\nbuilder_config:\r\n data_files:\r\n - split: train\r\n pattern: data/train-*\r\n```"
] | 2023-04-01T13:08:05 | 2023-04-04T14:54:37 | null |
MEMBER
| null | null | null |
Following discussions from https://github.com/huggingface/datasets/pull/5331
We could have something like `config.json` to define the configuration of a dataset.
```json
{
"data_dir": "data"
"data_files": {
"train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
}
}
```
We could also support a list for several configs with a 'config_name' field.
The alternative was to use YAML in the README.md.
I think it could also support a `dataset_type` field to specify which dataset builder class to use, and the other parameters would be the builder's parameters. Some parameters exist for all builders like `data_files` and `data_dir`, but some parameters are builder specific like `sep` for csv.
This format would be used in `push_to_hub` to be able to push multiple configs.
cc @huggingface/datasets
EDIT: actually we're going for the YAML approach in README.md
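For illustration, the kind of multi-config `push_to_hub` call this configuration format is meant to enable might look like the sketch below. This is hypothetical: the `config_name` argument and the repo id are assumptions for illustration, not something defined in this issue.
```python
from datasets import Dataset

ds_en = Dataset.from_dict({"text": ["hello"]})
ds_fr = Dataset.from_dict({"text": ["bonjour"]})

# Hypothetical: push two configs to the same dataset repo; each config would get
# its own entry in the configuration (JSON or YAML) described above.
ds_en.push_to_hub("username/my_dataset", config_name="en")
ds_fr.push_to_hub("username/my_dataset", config_name="fr")
```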
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5694/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5692
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5692/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5692/events
|
https://github.com/huggingface/datasets/issues/5692
| 1,649,818,644 |
I_kwDODunzps5iVjwU
| 5,692 |
pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types
|
{
"login": "cyanic-selkie",
"id": 32219669,
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyanic-selkie",
"html_url": "https://github.com/cyanic-selkie",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?",
"> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n",
"@cyanic-selkie could you explain how you fixed it? I met the same error in loading other datasets, is it due to the version of the library enviroment? ",
"@MingsYang I never fixed it. If you're referring to my comment above, I only meant I fixed the link to my code.\r\n\r\nAnyway, I managed to work around the issue by using `streaming` when loading the dataset.",
"@cyanic-selkie Emm, I get it. I just tried to use a new version python enviroment, and it show no errors anymore.",
"Upgrade pyarrow to the latest version solves this problem in my case."
] | 2023-03-31T18:19:40 | 2024-01-14T07:24:21 | null |
NONE
| null | null | null |
### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module>
(dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding)
File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en")
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset
datasets = map_nested(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested
mapped = [
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
return function(data_struct)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset
ds = self._as_dataset(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files
pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0]
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables
return cls.from_blocks(blocks)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks
table = cls._concat_blocks(blocks, axis=0)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks
return pa.concat_tables(pa_tables, promote=True)
File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>>
```
This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en", split="train")
```
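As a reference, one of the comments on this issue mentions working around the error by streaming the dataset instead of loading it fully; a minimal sketch of that workaround (it avoids the failing Arrow concatenation step but does not fix the underlying schema mismatch):
```python
from datasets import load_dataset

# Streaming yields examples directly instead of concatenating cached Arrow shards,
# which is where the incompatible `paragraph_anchors` schemas clash.
streamed = load_dataset("cyanic-selkie/wikianc-en", split="train", streaming=True)
for example in streamed.take(3):
    print(list(example.keys()))
```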
### Expected behavior
The dataset should load normally without any errors.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5692/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5690
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5690/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5690/events
|
https://github.com/huggingface/datasets/issues/5690
| 1,649,289,883 |
I_kwDODunzps5iTiqb
| 5,690 |
raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
|
{
"login": "wccccp",
"id": 55964850,
"node_id": "MDQ6VXNlcjU1OTY0ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wccccp",
"html_url": "https://github.com/wccccp",
"followers_url": "https://api.github.com/users/wccccp/followers",
"following_url": "https://api.github.com/users/wccccp/following{/other_user}",
"gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wccccp/subscriptions",
"organizations_url": "https://api.github.com/users/wccccp/orgs",
"repos_url": "https://api.github.com/users/wccccp/repos",
"events_url": "https://api.github.com/users/wccccp/events{/privacy}",
"received_events_url": "https://api.github.com/users/wccccp/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of `datasets`?)\r\n\r\n~@wccccp what I can suggest you is to uninstall and reinstall completely huggingface_hub and datasets? My first guess is that there is a discrepancy somewhere in your setup 😕~",
"@wccccp Actually I have also been able to reproduce the error so it's not an issue with your setup.\r\n\r\n@huggingface/datasets I found this issue quite weird. Is this a module that is not used very often?\r\nThe problematic line is [this one](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L476) where `huggingface_hub.hf_api.DatasetInfo` is used. `huggingface_hub` is imported [here](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L6) as `import huggingface_hub`. However since modules are lazy-loaded in `hfh` you need to explicitly import them (i.e. `import huggingface_hub.hf_api`).\r\n\r\nWhat's weird is that nothing has changed for months. Datasets code seems that it didn't change for 2 years when I git-blame this part. And lazy-loading was introduced 1 year ago in `huggingface_hub`. Could it be that `data_files.py` is a file almost never used?\r\n",
"For context, I tried to run `import huggingface_hub; huggingface_hub.hf_api.DatasetInfo` in the terminal with different versions of `hfh` and I need to go back to `huggingface_hub==0.7.0` to make it work (latest is 0.13.3).",
"Before the error happens at line 120 in `data_files.py`, `datasets.filesystems.hffilesystem` is imported at the top of `data_files.py` and this file does `from huggingface_hub.hf_api import DatasetInfo` - so `huggingface_hub.hf_api` is imported. Not sure how the error could happen, what version of `datasets` are you using @wccccp ?",
"Closing due to inactivity."
] | 2023-03-31T08:22:22 | 2023-07-21T14:21:57 | 2023-07-21T14:21:57 |
NONE
| null | null | null |
### Describe the bug
rta.sh
```
Traceback (most recent call last):
  File "run.py", line 7, in <module>
    import datasets
  File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
    from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
  File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
    from .data_files import DataFilesDict, _sanitize_patterns
  File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
    dataset_info: huggingface_hub.hf_api.DatasetInfo,
  File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__
    raise AttributeError(f"No {package_name} attribute {name}")
AttributeError: No huggingface_hub attribute hf_api
```
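Based on the maintainers' comments on this issue, `huggingface_hub` lazy-loads its submodules, so explicitly importing the submodule before `datasets` may sidestep the attribute lookup; this is a workaround sketch, not an official fix.
```python
# Importing `hf_api` explicitly registers it on the huggingface_hub package,
# so `huggingface_hub.hf_api.DatasetInfo` resolves without going through the
# lazy-loading __getattr__ that raises the AttributeError.
import huggingface_hub.hf_api  # noqa: F401
import datasets

print(datasets.__version__)
```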
### Reproduction
_No response_
### Logs
```shell
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__
raise AttributeError(f"No {package_name} attribute {name}")
AttributeError: No huggingface_hub attribute hf_api
```
### System info
```shell
- huggingface_hub version: 0.13.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/appuser/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 1.7.1
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.3.0
- hf_transfer: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5690/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5688
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5688/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5688/events
|
https://github.com/huggingface/datasets/issues/5688
| 1,648,463,504 |
I_kwDODunzps5iQY6Q
| 5,688 |
Wikipedia download_and_prepare for GCS
|
{
"login": "adrianfagerland",
"id": 25522531,
"node_id": "MDQ6VXNlcjI1NTIyNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adrianfagerland",
"html_url": "https://github.com/adrianfagerland",
"followers_url": "https://api.github.com/users/adrianfagerland/followers",
"following_url": "https://api.github.com/users/adrianfagerland/following{/other_user}",
"gists_url": "https://api.github.com/users/adrianfagerland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adrianfagerland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adrianfagerland/subscriptions",
"organizations_url": "https://api.github.com/users/adrianfagerland/orgs",
"repos_url": "https://api.github.com/users/adrianfagerland/repos",
"events_url": "https://api.github.com/users/adrianfagerland/events{/privacy}",
"received_events_url": "https://api.github.com/users/adrianfagerland/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processing, using one of the distributed back-ends supported by Apache Beam: https://beam.apache.org/get-started/beam-overview/#apache-beam-pipeline-runners\r\n\r\nThat is, you are trying to process the source wikipedia data on your machine (not distributed) when passing `beam_runner=\"DirectRunner\"`.\r\n\r\nAs documented in the wikipedia dataset page (https://huggingface.co/datasets/wikipedia):\r\n\r\n Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n \r\n from datasets import load_dataset\r\n \r\n load_dataset(\"wikipedia\", \"20220301.en\")\r\n\r\n The list of pre-processed subsets is:\r\n - \"20220301.de\"\r\n - \"20220301.en\"\r\n - \"20220301.fr\"\r\n - \"20220301.frr\"\r\n - \"20220301.it\"\r\n - \"20220301.simple\"\r\n\r\nTo download the available processed data (in Arrow format):\r\n```python\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(your_path)\r\n```",
"When running this using :\r\n```\r\nimport datasets\r\nfrom apache_beam.options.pipeline_options import PipelineOptions\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbeam_options = PipelineOptions(\r\n region=\"europe-west4\",\r\n project=\"tdt4310\",\r\n temp_location=output_dir+\"tmp/\")\r\n\r\n\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\", beam_runner=\"dataflow\", beam_options=beam_options)\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\nI now get this error:\r\n```\r\nraise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\nDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\r\n```\r\n\r\nI get the same error for this:\r\n```\r\nimport datasets\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\n\r\n\r\n\r\n",
"`wikipedia` is no longer a Beam dataset, so the above code should work now.\r\n\r\nPS: You can use [these files](https://huggingface.co/datasets/wikipedia/tree/main/data/20220301.en) (or a newer dump at https://huggingface.co/datasets/wikimedia/wikipedia/tree/main/20231101.en) instead of generating the Parquet version yourself"
] | 2023-03-30T23:43:22 | 2024-03-15T15:59:18 | 2024-03-15T15:59:18 |
NONE
| null | null | null |
### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the provided script, the memory first gets eaten up, then it crashes.
I tried running this on a VM with 128GB RAM and all I got were two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_
I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage.
### Steps to reproduce the bug
Run this and insert a path:
```
import datasets
builder = datasets.load_dataset_builder(
"wikipedia", language="en", date="20230320", beam_runner="DirectRunner")
builder.download_and_prepare({path}, file_format="parquet")
```
This is where the problem of it eating RAM occurs.
I have also tried several versions of this, based on the docs:
```
import gcsfs
import datasets
storage_options = {"project": "tdt4310", "token": "cloud"}
fs = gcsfs.GCSFileSystem(**storage_options)
output_dir = "gcs://wikipediadata/"
builder = datasets.load_dataset_builder(
"wikipedia", date="20230320", language="en", beam_runner="DirectRunner")
builder.download_and_prepare(
output_dir, storage_options=storage_options, file_format="parquet")
```
The error message that is received here is:
> ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
I have run `pip install apache-beam[gcp]`.
### Expected behavior
The wikipedia data loaded into GCS
Everything worked when testing with a smaller demo dataset found somewhere in the docs
### Environment info
Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5688/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5687
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5687/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5687/events
|
https://github.com/huggingface/datasets/issues/5687
| 1,647,009,018 |
I_kwDODunzps5iK1z6
| 5,687 |
Document to compress data files before uploading
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false | null |
[] | null |
[
"Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`, `.json`, `.jsonl`, and `txt`, we recommend compressing them before uploading to the Hub. These file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of supported file extensions.",
"Hi @stevhliu, thanks for your suggestion.\r\n\r\nI agree it is a good opportunity to mention that audio/image file formats are also supported.\r\n\r\nNit:\r\nI would not mention .zip, .rar after \"text, audio, and image data extensions\". Those are \"compression\" extensions and not \"text, audio, and image data extensions\".\r\n\r\nWhat about something similar to:\r\n> We support many text, audio, and image data extensions such as `.csv`, `.mp3`, and `.jpg` among many others. For text data extensions like `.csv`, `.json`, `.jsonl`, and `.txt`, we recommend compressing them before uploading to the Hub (to `.zip` or `.gz` file extension for example). \r\n>\r\n> Note that text file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of tracked file extensions by default.\r\n\r\nNote that for compressions I have mentioned:\r\n- gz, to compress individual files\r\n- zip, to compress and archive multiple files; zip is preferred rather than tar because it supports streaming out of the box",
"Perfect, thanks for making the distinction between compression and data extensions!"
] | 2023-03-30T06:41:07 | 2023-04-19T07:25:59 | 2023-04-19T07:25:59 |
MEMBER
| null | null | null |
In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text, etc. However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are too large, Git will fail to commit/upload them.
I think for those file extensions (.csv, .json, .jsonl, .txt), we should instead recommend **compressing** their data files (using ZIP for example) before uploading them to the Hub (see the sketch below).
- Compressed files are tracked by Git LFS in our default `.gitattributes` file
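For example, a minimal sketch of gz-compressing a JSON-Lines file before uploading it (the file names here are placeholders, not from any existing dataset):
```python
import gzip
import shutil

# Compress train.jsonl into train.jsonl.gz; compressed files are tracked by
# Git LFS in the default .gitattributes, unlike plain .jsonl files.
with open("train.jsonl", "rb") as src, gzip.open("train.jsonl.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```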
What do you think?
CC: @stevhliu
See related issue:
- https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5687/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5685
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5685/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5685/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5685/events
|
https://github.com/huggingface/datasets/issues/5685
| 1,646,048,667 |
I_kwDODunzps5iHLWb
| 5,685 |
Broken Image render on the hub website
|
{
"login": "FrancescoSaverioZuppichini",
"id": 15908060,
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini",
"html_url": "https://github.com/FrancescoSaverioZuppichini",
"followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers",
"following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions",
"organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs",
"repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos",
"events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard](https://huggingface.co/docs/huggingface_hub/package_reference/cards) API to update the dataset card instead of `upload_file`:\r\n```python\r\nfrom huggingface_hub import DatasetCard\r\n# Load card\r\ncard = DatasetCard.load(\"<namespace>/<repo_id>\")\r\n# Modify card content\r\ncard.content = ...\r\n# Push card to the Hub\r\ncard.push_to_hub(\"<namespace>/<repo_id>\")\r\n```\r\n\r\nHowever, the best solution would be to use the features info stored in the header of the Parquet shards generated with `push_to_hub` on the viewer side to avoid unexpected issues such as this one. This shouldn't be too hard to address.",
"Thanks for reporting @FrancescoSaverioZuppichini.\r\n\r\nFor future issues with your specific dataset, you can use its \"Community\" tab to start a conversation: https://huggingface.co/datasets/Francesco/cell-towers/discussions/new",
"Thanks @albertvillanova , @mariosasko I was not aware of this requirement from the doc (must have skipped :sweat_smile: )\r\n\r\nConfirmed, adding back `dataset_info` fixed the issu"
] | 2023-03-29T15:25:30 | 2023-03-30T07:54:25 | 2023-03-30T07:54:25 |
NONE
| null | null | null |
### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to load a large number of datasets onto the Hub (:partying_face:) and I am facing a little issue with the `image` type.

See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers): for some reason the first image has numerical bytes inside. Not sure if that is okay, but the image render feature **doesn't work**.
The dataset is stored in the following way:
```python
builder.download_and_prepare(output_dir=str(output_dir))
ds = builder.as_dataset(split="train")
# [NOTE] no idea how to push it from the builder folder
ds.push_to_hub(repo_id=repo_id)
builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id)
ds = builder.as_dataset(split="test")
ds.push_to_hub(repo_id=repo_id)
```
The build is this class
```python
class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
def _info(self):
features = datasets.Features(
{
"image_id": datasets.Value("int64"),
"image": datasets.Image(),
"width": datasets.Value("int32"),
"height": datasets.Value("int32"),
"objects": datasets.Sequence(
{
"id": datasets.Value("int64"),
"area": datasets.Value("int64"),
"bbox": datasets.Sequence(
datasets.Value("float32"), length=4
),
"category": datasets.ClassLabel(names=categories),
}
),
}
)
return datasets.DatasetInfo(
description=description,
features=features,
homepage=homepage,
license=license,
citation=citation,
)
def _split_generators(self, dl_manager):
archive = dl_manager.download(url)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"annotation_file_path": "train/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"annotation_file_path": "test/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"annotation_file_path": "valid/_annotations.coco.json",
"files": dl_manager.iter_archive(archive),
},
),
]
def _generate_examples(self, annotation_file_path, files):
def process_annot(annot, category_id_to_category):
return {
"id": annot["id"],
"area": annot["area"],
"bbox": annot["bbox"],
"category": category_id_to_category[annot["category_id"]],
}
image_id_to_image = {}
idx = 0
# This loop relies on the ordering of the files in the archive:
# Annotation files come first, then the images.
for path, f in files:
file_name = os.path.basename(path)
if annotation_file_path in path:
annotations = json.load(f)
category_id_to_category = {
category["id"]: category["name"]
for category in annotations["categories"]
}
print(category_id_to_category)
image_id_to_annotations = collections.defaultdict(list)
for annot in annotations["annotations"]:
image_id_to_annotations[annot["image_id"]].append(annot)
image_id_to_image = {
annot["file_name"]: annot for annot in annotations["images"]
}
elif file_name in image_id_to_image:
image = image_id_to_image[file_name]
objects = [
process_annot(annot, category_id_to_category)
for annot in image_id_to_annotations[image["id"]]
]
print(file_name)
yield idx, {
"image_id": image["id"],
"image": {"path": path, "bytes": f.read()},
"width": image["width"],
"height": image["height"],
"objects": objects,
}
idx += 1
```
Basically, I want to add to the Hub every dataset I come across in COCO format.
Thanks
Fra
### Steps to reproduce the bug
In this case, you can just navigate to the [dataset](https://huggingface.co/datasets/Francesco/cell-towers).
### Expected behavior
I was expecting the image rendering feature to work
### Environment info
Not a lot to share, I am using `datasets` from a fresh venv
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5685/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5682
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5682/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5682/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5682/events
|
https://github.com/huggingface/datasets/issues/5682
| 1,646,000,571 |
I_kwDODunzps5iG_m7
| 5,682 |
ValueError when passing ignore_verifications
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-03-29T15:00:30 | 2023-03-29T17:28:58 | 2023-03-29T17:28:58 |
MEMBER
| null | null | null |
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError:
```
ValueError: 'none' is not a valid VerificationMode
```
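For reference, a minimal reproduction sketch; the dataset name is arbitrary (not taken from a specific report), since any `load_dataset` call with the deprecated flag goes through the same kwarg-mapping code path.
```python
from datasets import load_dataset

# "squad" is only an example dataset; the deprecated flag itself triggers the error.
ds = load_dataset("squad", ignore_verifications=True)
# ValueError: 'none' is not a valid VerificationMode
```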
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5682/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5681
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5681/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5681/events
|
https://github.com/huggingface/datasets/issues/5681
| 1,645,630,784 |
I_kwDODunzps5iFlVA
| 5,681 |
Add information about patterns search order to the doc about structuring repo
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false |
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)",
"Closed in #5693 "
] | 2023-03-29T11:44:49 | 2023-04-03T18:31:11 | 2023-04-03T18:31:11 |
CONTRIBUTOR
| null | null | null |
Following [this](https://github.com/huggingface/datasets/issues/5650) issue, I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). We should also reference this page in the pages about packaged loaders.
I have a déjà vu that it had already been discussed at some point, but I don't remember....
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5681/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5679
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5679/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5679/events
|
https://github.com/huggingface/datasets/issues/5679
| 1,645,184,622 |
I_kwDODunzps5iD4Zu
| 5,679 |
Allow load_dataset to take a working dir for intermediate data
|
{
"login": "lu-wang-dl",
"id": 38018689,
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-dl",
"html_url": "https://github.com/lu-wang-dl",
"followers_url": "https://api.github.com/users/lu-wang-dl/followers",
"following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions",
"organizations_url": "https://api.github.com/users/lu-wang-dl/orgs",
"repos_url": "https://api.github.com/users/lu-wang-dl/repos",
"events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}",
"received_events_url": "https://api.github.com/users/lu-wang-dl/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud storage with:\r\n```python\r\nbuilder = load_dataset_builder(..., cache_dir=\"/temp/dir\")\r\nbuilder.download_and_prepare(\"/cloud_dir\")\r\n```\r\n\r\nbut then \r\n```python\r\nds = builder.as_dataset()\r\n```\r\nwould fail if \"/cloud_dir\" is not a local directory.",
"In my use case, I am trying to mount the S3 bucket as local system with S3FS-FUSE / [goofys](https://github.com/kahing/goofys). I want to use S3 to save the download data and save checkpoint for training for persistent. Setting the s3 location as cache directory is not fast enough. That is why I want to set a work directory for temp data for memory map and only save the final result to s3 cache. ",
"You can try setting `HF_DATASETS_DOWNLOADED_DATASETS_PATH` and `HF_DATASETS_EXTRACTED_DATASETS_PATH` to S3, and `HF_DATASETS_CACHE` to your local disk.\r\n\r\nThis way all your downloaded and extracted data are on your mounted S3, but the datasets Arrow files are on your local disk",
"If we hope to also persist the Arrow files on the mounted S3 but work with the efficiency of local disk, is there any recommended way to do this, other than copying the Arrow files from local disk to S3?"
] | 2023-03-29T07:21:09 | 2023-04-12T22:30:25 | null |
NONE
| null | null | null |
### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(…, working_dir="/temp/dir", cache_dir="/cloud_dir")
```
### Motivation
This will help the use case of using `datasets` with cloud storage as the cache, and it will help boost performance.
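As a point of comparison, one of the comments on this issue suggests splitting the existing cache locations with environment variables; a rough sketch of that setup is below (the paths and the dataset name are placeholders).
```python
import os

# Keep downloaded/extracted files on the mounted cloud storage and the Arrow
# cache on local disk; these must be set before importing `datasets`.
os.environ["HF_DATASETS_DOWNLOADED_DATASETS_PATH"] = "/mnt/s3/downloads"
os.environ["HF_DATASETS_EXTRACTED_DATASETS_PATH"] = "/mnt/s3/extracted"
os.environ["HF_DATASETS_CACHE"] = "/local/ssd/hf_datasets_cache"

from datasets import load_dataset

ds = load_dataset("imdb")  # "imdb" is just a placeholder dataset
```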
### Your contribution
I can provide a PR to fix this if the proposal seems reasonable.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5679/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5678
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5678/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5678/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5678/events
|
https://github.com/huggingface/datasets/issues/5678
| 1,645,018,359 |
I_kwDODunzps5iDPz3
| 5,678 |
Add support to create a Dataset from spark dataframe
|
{
"login": "lu-wang-dl",
"id": 38018689,
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-dl",
"html_url": "https://github.com/lu-wang-dl",
"followers_url": "https://api.github.com/users/lu-wang-dl/followers",
"following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions",
"organizations_url": "https://api.github.com/users/lu-wang-dl/orgs",
"repos_url": "https://api.github.com/users/lu-wang-dl/repos",
"events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}",
"received_events_url": "https://api.github.com/users/lu-wang-dl/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n\r\n",
"How to perform predictions on Dataset object in Spark with multi-node cluster parallelism?",
"Addressed in #5701"
] | 2023-03-29T04:36:28 | 2023-07-21T14:15:38 | 2023-07-21T14:15:38 |
NONE
| null | null | null |
### Feature request
Add a new API `Dataset.from_spark` to create a Dataset from Spark DataFrame.
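A sketch of how the proposed API could be used; the exact signature is part of the proposal rather than final (the comments point to the PR where it was eventually addressed), and the example data here is made up.
```python
from datasets import Dataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame(
    [("the first text", 0), ("the second text", 1)],
    schema="text string, label int",
)

# Proposed API: build a Hugging Face Dataset directly from the Spark DataFrame.
ds = Dataset.from_spark(df)
print(ds)
```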
### Motivation
Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process the data in parallel.
By providing a seamless integration between these two frameworks, we make it easier for data scientists and developers to work with both Spark and Hugging Face in the same workflow.
### Your contribution
We can discuss the ideas, and I can help prepare a PR for this feature.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5678/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5678/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5677
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5677/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5677/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5677/events
|
https://github.com/huggingface/datasets/issues/5677
| 1,644,828,606 |
I_kwDODunzps5iChe-
| 5,677 |
Dataset.map() crashes when any column contains more than 1000 empty dictionaries
|
{
"login": "mtoles",
"id": 7139344,
"node_id": "MDQ6VXNlcjcxMzkzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mtoles",
"html_url": "https://github.com/mtoles",
"followers_url": "https://api.github.com/users/mtoles/followers",
"following_url": "https://api.github.com/users/mtoles/following{/other_user}",
"gists_url": "https://api.github.com/users/mtoles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mtoles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtoles/subscriptions",
"organizations_url": "https://api.github.com/users/mtoles/orgs",
"repos_url": "https://api.github.com/users/mtoles/repos",
"events_url": "https://api.github.com/users/mtoles/events{/privacy}",
"received_events_url": "https://api.github.com/users/mtoles/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-03-29T00:01:31 | 2023-07-07T14:01:14 | 2023-07-07T14:01:14 |
NONE
| null | null | null |
### Describe the bug
`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.
### Steps to reproduce the bug
Example:
```
import datasets
def add_one(example):
example["col2"] += 1
return example
n = 1001 # crashes
# n = 999 # works
ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n})
ds = ds.map(add_one, writer_batch_size=1000)
```
### Expected behavior
The above code should not crash.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5677/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5675
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5675/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5675/events
|
https://github.com/huggingface/datasets/issues/5675
| 1,641,763,478 |
I_kwDODunzps5h21KW
| 5,675 |
Filter datasets by language code
|
{
"login": "named-entity",
"id": 5658496,
"node_id": "MDQ6VXNlcjU2NTg0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5658496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/named-entity",
"html_url": "https://github.com/named-entity",
"followers_url": "https://api.github.com/users/named-entity/followers",
"following_url": "https://api.github.com/users/named-entity/following{/other_user}",
"gists_url": "https://api.github.com/users/named-entity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/named-entity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/named-entity/subscriptions",
"organizations_url": "https://api.github.com/users/named-entity/orgs",
"repos_url": "https://api.github.com/users/named-entity/repos",
"events_url": "https://api.github.com/users/named-entity/events{/privacy}",
"received_events_url": "https://api.github.com/users/named-entity/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missing from the code=>language mapping) would be much more convenient!",
"Hi! I've opened a PR to make these languages searchable on the Hub.",
"Thanks @mariosasko!\r\nDo you think it is possible to turn this into a more scalable pipeline? Such as:\r\n1. Looping through all the datasets on the hub and collecting the set of all their language codes;\r\n2. Selecting the codes not covered yet in `Language.ts`\r\n3. Looking up their codes at https://iso639-3.sil.org/code_tables/639/data\r\n4. Adding all the newly found language codes to `Language.ts`",
"@avidale This has been discussed in https://github.com/huggingface/datasets/issues/4881, so also feel free to share your opinion there."
] | 2023-03-27T09:42:28 | 2023-03-30T08:08:15 | 2023-03-30T08:08:15 |
NONE
| null | null | null |
Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag, but it is not included in the Languages search form.
I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5675/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5674
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5674/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5674/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5674/events
|
https://github.com/huggingface/datasets/issues/5674
| 1,641,084,105 |
I_kwDODunzps5h0PTJ
| 5,674 |
Stored XSS
|
{
"login": "Fadavvi",
"id": 21213484,
"node_id": "MDQ6VXNlcjIxMjEzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/21213484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fadavvi",
"html_url": "https://github.com/Fadavvi",
"followers_url": "https://api.github.com/users/Fadavvi/followers",
"following_url": "https://api.github.com/users/Fadavvi/following{/other_user}",
"gists_url": "https://api.github.com/users/Fadavvi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fadavvi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fadavvi/subscriptions",
"organizations_url": "https://api.github.com/users/Fadavvi/orgs",
"repos_url": "https://api.github.com/users/Fadavvi/repos",
"events_url": "https://api.github.com/users/Fadavvi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fadavvi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! You can contact `[email protected]` to report this vulnerability."
] | 2023-03-26T20:55:58 | 2023-03-27T21:01:55 | 2023-03-27T21:01:55 |
NONE
| null | null | null |
### Describe the bug
I found a stored XSS vulnerability on a page that is publicly accessible to all visitors, but I didn't find a suitable place to report it.
Please guide me on this.
### Steps to reproduce the bug
Due to security restrictions, I don't want to publish it publicly.
### Expected behavior
User inputs must be sanitized before rendering.
### Environment info
https://huggingface.co/ Web UI
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5674/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5672/events
|
https://github.com/huggingface/datasets/issues/5672
| 1,641,005,322 |
I_kwDODunzps5hz8EK
| 5,672 |
Pushing dataset to hub crash
|
{
"login": "tzvc",
"id": 14275989,
"node_id": "MDQ6VXNlcjE0Mjc1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/14275989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tzvc",
"html_url": "https://github.com/tzvc",
"followers_url": "https://api.github.com/users/tzvc/followers",
"following_url": "https://api.github.com/users/tzvc/following{/other_user}",
"gists_url": "https://api.github.com/users/tzvc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tzvc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tzvc/subscriptions",
"organizations_url": "https://api.github.com/users/tzvc/orgs",
"repos_url": "https://api.github.com/users/tzvc/repos",
"events_url": "https://api.github.com/users/tzvc/events{/privacy}",
"received_events_url": "https://api.github.com/users/tzvc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\n\r\nIn the meantime you can install datasets from source",
"Hi @lhoestq ,\r\n\r\nWhat version of datasets library fix this case? I am using the last `v2.10.1` and I get the same error.",
"We just released 2.11 which includes a fix :)"
] | 2023-03-26T17:42:13 | 2023-03-30T08:11:05 | 2023-03-30T08:11:05 |
NONE
| null | null | null |
### Describe the bug
Uploading a dataset with `push_to_hub()` fails without error description.
### Steps to reproduce the bug
Hey there,
I've built an image dataset of 100k image + text pairs as described here: https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the hub, but I'm running into issues. First, I tried doing it via git directly: I added all the files with git lfs and pushed, but I got an error saying Hugging Face only accepts up to 10k files in a folder.
So I'm now trying with the `push_to_hub()` function as follows:
```python
from datasets import load_dataset
import os
dataset = load_dataset("imagefolder", data_dir="./data", split="train")
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
```
But again, this produces an error:
```
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100212/100212 [00:00<00:00, 439108.61it/s]
Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 100211/100211 [00:00<00:00, 149323.73it/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15947.92it/s]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2245.34it/s]
Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:31<00:00, 2.24s/it]
Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 225kB/s]
Traceback (most recent call last):
File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module>
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub
repo_info = dataset_infos[next(iter(dataset_infos))]
StopIteration
```
What could be happening here?
### Expected behavior
The dataset is pushed to the hub
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5672/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5671/events
|
https://github.com/huggingface/datasets/issues/5671
| 1,640,840,012 |
I_kwDODunzps5hzTtM
| 5,671 |
How to use `load_dataset('glue', 'cola')`
|
{
"login": "makinzm",
"id": 40193664,
"node_id": "MDQ6VXNlcjQwMTkzNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makinzm",
"html_url": "https://github.com/makinzm",
"followers_url": "https://api.github.com/users/makinzm/followers",
"following_url": "https://api.github.com/users/makinzm/following{/other_user}",
"gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makinzm/subscriptions",
"organizations_url": "https://api.github.com/users/makinzm/orgs",
"repos_url": "https://api.github.com/users/makinzm/repos",
"events_url": "https://api.github.com/users/makinzm/events{/privacy}",
"received_events_url": "https://api.github.com/users/makinzm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to import `transformers` but it's no longer the case, so you could also simply update `datasets` and `transformers` won't be imported",
"Thank you for advising me to update these libraries versions.\r\n\r\nI can implement codes using `datasets==2.10.1` and `transformers==4.27.3`"
] | 2023-03-26T09:40:34 | 2023-03-28T07:43:44 | 2023-03-28T07:43:43 |
NONE
| null | null | null |
### Describe the bug
I'm new to using HuggingFace datasets, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
---------------------------------------------------------------------------
InvalidVersion Traceback (most recent call last)
File <timed exec>:1
(Omit because of long error message)
File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version)
195 match = self._regex.search(version)
196 if not match:
--> 197 raise InvalidVersion(f"Invalid version: '{version}'")
199 # Store the parsed out pieces of the version
200 self._version = _Version(
201 epoch=int(match.group("epoch")) if match.group("epoch") else 0,
202 release=tuple(int(i) for i in match.group("release").split(".")),
(...)
208 local=_parse_local_version(match.group("local")),
209 )
InvalidVersion: Invalid version: '0.10.1,<0.11'
```
- You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb)
### Steps to reproduce the bug
- This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup)
1. cd `/DockerImage` and command `docker build . -t week0`
2. cd `/` and command `docker-compose up`
3. Run `experimental_notebooks/data_exploration.ipynb`
----
Just to be sure, here are the Dockerfile and requirements.txt I used:
- Dockerfile
```Dockerfile
FROM python:3.8
WORKDIR /root/working
RUN apt-get update && \
apt-get install -y python3-dev python3-pip python3-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt
CMD ["bash"]
```
- requirements.txt
```txt
pytorch-lightning==1.2.10
datasets==1.6.2
transformers==4.5.1
scikit-learn==0.24.2
```
### Expected behavior
`load_dataset('glue', 'cola')` runs without raising an error.
### Environment info
I already wrote it.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5671/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5670
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5670/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5670/events
|
https://github.com/huggingface/datasets/issues/5670
| 1,640,607,045 |
I_kwDODunzps5hya1F
| 5,670 |
Unable to load multi class classification datasets
|
{
"login": "ysahil97",
"id": 19690506,
"node_id": "MDQ6VXNlcjE5NjkwNTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysahil97",
"html_url": "https://github.com/ysahil97",
"followers_url": "https://api.github.com/users/ysahil97/followers",
"following_url": "https://api.github.com/users/ysahil97/following{/other_user}",
"gists_url": "https://api.github.com/users/ysahil97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysahil97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysahil97/subscriptions",
"organizations_url": "https://api.github.com/users/ysahil97/orgs",
"repos_url": "https://api.github.com/users/ysahil97/repos",
"events_url": "https://api.github.com/users/ysahil97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysahil97/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)",
"Thanks @lhoestq!\r\n\r\nI'll close this issue now."
] | 2023-03-25T18:06:15 | 2023-03-27T22:54:56 | 2023-03-27T22:54:56 |
NONE
| null | null | null |
### Describe the bug
I've been playing around with the Hugging Face libraries, mostly with `datasets`, and wanted to download the multi-class classification datasets to fine-tune BERT on this task ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)).
While loading the dataset, I'm getting the following error snippet.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[44], line 3
1 from datasets import load_dataset
----> 3 imdb_dataset = load_dataset("yelp_review_full")
4 imdb_dataset
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1716 ignore_verifications = ignore_verifications or save_infos
1718 # Create a dataset builder
-> 1719 builder_instance = load_dataset_builder(
1720 path=path,
1721 name=name,
1722 data_dir=data_dir,
1723 data_files=data_files,
1724 cache_dir=cache_dir,
1725 features=features,
1726 download_config=download_config,
1727 download_mode=download_mode,
1728 revision=revision,
1729 use_auth_token=use_auth_token,
1730 **config_kwargs,
1731 )
1733 # Return iterable dataset in case of streaming
1734 if streaming:
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1520 raise ValueError(error_msg)
1522 # Instantiate the dataset builder
-> 1523 builder_instance: DatasetBuilder = builder_cls(
1524 cache_dir=cache_dir,
1525 config_name=config_name,
1526 data_dir=data_dir,
1527 data_files=data_files,
1528 hash=hash,
1529 features=features,
1530 use_auth_token=use_auth_token,
1531 **builder_kwargs,
1532 **config_kwargs,
1533 )
1535 return builder_instance
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1291 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1292 super().__init__(*args, **kwargs)
1293 # Batch size used by the ArrowWriter
1294 # It defines the number of samples that are kept in memory before writing them
1295 # and also the length of the arrow chunks
1296 # None means that the ArrowWriter will use its default value
1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
309 # prepare info: DatasetInfo are a standardized dataclass across all datasets
310 # Prefill datasetinfo
311 if info is None:
--> 312 info = self.get_exported_dataset_info()
313 info.update(self._info())
314 info.builder_name = self.name
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self)
400 def get_exported_dataset_info(self) -> DatasetInfo:
401 """Empty DatasetInfo if doesn't exist
402
403 Example:
(...)
410 ```
411 """
--> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls)
385 @classmethod
386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict:
387 """Empty dict if doesn't exist
388
389 Example:
(...)
396 ```
397 """
--> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir)
368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md")
369 if "dataset_info" in dataset_metadata:
--> 370 return cls.from_metadata(dataset_metadata)
371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)):
372 # this is just to have backward compatibility with dataset_infos.json files
373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f:
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata)
387 return cls(
388 {
389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict(
(...)
393 }
394 )
395 else:
--> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"])
397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default")
398 return cls({dataset_info.config_name: dataset_info})
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data)
330 yaml_data = copy.deepcopy(yaml_data)
331 if yaml_data.get("features") is not None:
--> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
333 if yaml_data.get("splits") is not None:
334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data)
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
-> 1745 return cls.from_dict(from_yaml_inner(yaml_data))
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1734 return {"_type": snakecase_to_camelcase(obj["dtype"])}
1735 else:
-> 1736 return from_yaml_inner(obj["dtype"])
1737 else:
1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1736 return from_yaml_inner(obj["dtype"])
1737 else:
-> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature)
1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict):
1705 label_ids = sorted(feature["class_label"]["names"])
-> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)):
1707 raise ValueError(
1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing."
1709 )
1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids]
TypeError: can only concatenate str (not "int") to str
```
The same issue happens when I try to load the `go-emotions` multi-class classification dataset. Could somebody guide me on how to fix this issue?
### Steps to reproduce the bug
Run the following code snippet in a python script/ notebook cell:
```
from datasets import load_dataset
yelp_dataset = load_dataset("yelp_review_full")
yelp_dataset
```
### Expected behavior
The dataset should load correctly, showing the train, test, and unsupervised splits with basic data statistics.
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5670/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5669/events
|
https://github.com/huggingface/datasets/issues/5669
| 1,638,070,046 |
I_kwDODunzps5hovce
| 5,669 |
Almost identical datasets, huge performance difference
|
{
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Do I miss something here?",
"Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `arrow bytes -> NumPy -> decoding as PIL.Image -> PyTorch` is faster than going from `arrow list -> NumPy -> PyTorch`. \r\n\r\nTo store image bytes in the second example, you can do the following:\r\n\r\n```python\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"])\r\n return example\r\n\r\nfeatures = dataset.features.copy()\r\ndel features[\"image\"]\r\nfeatures[\"image2\"] = datasets.Image()\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"], features=features)\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```",
"Thanks, @mariosasko. I could not understand why a (decoded) sequence should be MUCH slower than an encoded image (that must be decoded every time). At any rate, I tried you suggestion. It made the `map` step to run extremely slow (consumes all the 16GB of memory and starts swapping)\r\n\r\nI tried also the easiest (as I see it) scenario, where images are kept as bytes, but it made things even worse: not only it was extremely slow, but also crashes\r\n\r\n```python\r\n\r\ndef transform(example):\r\n example[\"image2\"] = cv2.imread(example[\"image_file_path\"]).tobytes()\r\n return example\r\n\r\ndataset2 = dataset.map(transform, remove_columns=[\"image\"])\r\n\r\nfor x in DataLoader(dataset2.with_format(\"torch\"), batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n\r\n\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nOutput exceeds the size limit. Open the full output data in a text editor\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nFile ~/virtenvs/py310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:1133, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)\r\n 1132 try:\r\n-> 1133 data = self._data_queue.get(timeout=timeout)\r\n 1134 return (True, data)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/queues.py:113, in Queue.get(self, block, timeout)\r\n 112 timeout = deadline - time.monotonic()\r\n--> 113 if not self._poll(timeout):\r\n 114 raise Empty\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:257, in _ConnectionBase.poll(self, timeout)\r\n 256 self._check_readable()\r\n--> 257 return self._poll(timeout)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:424, in Connection._poll(self, timeout)\r\n 423 def _poll(self, timeout):\r\n--> 424 r = wait([self], timeout)\r\n 425 return bool(r)\r\n\r\nFile ~/virtenvs/py310/lib/python3.10/multiprocessing/connection.py:931, in wait(object_list, timeout)\r\n 930 while True:\r\n--> 931 ready = selector.select(timeout)\r\n 932 if ready:\r\n...\r\n-> 1146 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e\r\n 1147 if isinstance(e, queue.Empty):\r\n 1148 return (False, None)\r\n\r\nRuntimeError: DataLoader worker (pid(s) 195393) exited unexpectedly\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\nResource temporarily unavailable (src/thread.cpp:269)\r\n```\r\n",
"Correction: the `beans` dataset stores the image file paths, not the bytes.\r\n\r\nFor your use case, I think it makes more sense to use `with_tranform` than `map` and lazily decode images with `cv2.imread` when indexing an example/batch:\r\n```python\r\nimport cv2\r\n\r\ndef transform(batch):\r\n batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\r\n return batch\r\n\r\ndataset = dataset.with_transform(transform)\r\n```\r\n",
"This is incorrect.\n\nDid you try to run it? dataset[0] returns a tensor of numbers. dataset2[0]\nreturns the same tensor, but after a few long seconds. Looping over a\nthousand of images cannot take 15 minutes.\n\nOn Fri, 24 Mar 2023 at 19:28 Mario Šaško ***@***.***> wrote:\n\n> Correction: the beans dataset stores the image file paths, not the bytes.\n>\n> For your use case, I think it makes more sense to use with_tranform than\n> map and lazily decode images with cv2.imread when accessing an\n> example/batch:\n>\n> import cv2\n> def transform(batch):\n> batch[\"image2\"] = np.stack([cv2.imread(image_file_path) for image_file_path in batch[\"image_file_path\"]])\n> return batch\n> dataset = dataset.with_transform(transform)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5669#issuecomment-1483084347>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73SHRWXIQX6SCYCJ7ITW5XDUDANCNFSM6AAAAAAWFSHWEM>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"I updated the transform with the NumPy -> PyTorch conversion.\r\n\r\nI'm sharing the entire code:\r\n```python\r\nimport cv2\r\nimport numpy as np\r\nimport datasets\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = load_dataset(\"beans\", split=\"train\")\r\n\r\ndef transform(batch):\r\n # # Pillow decodes as RGB\r\n # batch[\"image\"] = torch.stack([torch.from_numpy(cv2.cvtColor(cv2.imread(image_file_path), cv2.COLOR_BGR2RGB)) for image_file_path in batch[\"image_file_path\"]])\r\n batch[\"image\"] = torch.stack([torch.from_numpy(cv2.imread(image_file_path)) for image_file_path in batch[\"image_file_path\"]])\r\n batch[\"labels\"] = torch.tensor(batch[\"labels\"])\r\n return batch\r\n\r\ndataset2 = dataset.cast_column(\"image\", datasets.Image(decode=False)).with_transform(transform)\r\n\r\nfor x in DataLoader(dataset2, batch_size=16, shuffle=True, num_workers=8):\r\n pass\r\n```\r\n\r\nThis code is ≈ 10% faster on my machine than the default decoding with Pillow and `.with_format(\"torch\")`.",
"Thanks, @mariosasko \r\nMy question remain unanswered though. Why is the `map`ed dataset so slow? My understanding is that a dataset of numpy arrays should be must faster than a dataset that has to decode images into numpy arrays every time one accesses an item. "
] | 2023-03-23T18:20:20 | 2023-04-09T18:56:23 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("beans", split="train")
for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
pass
```
The above pass over the dataset takes about 1.5 seconds on my computer.
However, if I re-create (almost) the same dataset, the sweep takes a HUGE amount of time: 15 minutes. Steps to reproduce:
```python
def transform(example):
example["image2"] = cv2.imread(example["image_file_path"])
return example
dataset2 = dataset.map(transform, remove_columns=["image"])
for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
pass
```
### Expected behavior
Same timings
### Environment info
python==3.10.9
datasets==2.10.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5669/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5666
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5666/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5666/events
|
https://github.com/huggingface/datasets/issues/5666
| 1,637,675,062 |
I_kwDODunzps5hnPA2
| 5,666 |
Support tensorflow 2.12.0 in CI
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-03-23T14:37:51 | 2023-03-23T16:14:54 | 2023-03-23T16:14:54 |
MEMBER
| null | null | null |
Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5666/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5665
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5665/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5665/events
|
https://github.com/huggingface/datasets/issues/5665
| 1,637,193,648 |
I_kwDODunzps5hlZew
| 5,665 |
Feature request: IterableDataset.push_to_hub
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[] | 2023-03-23T09:53:04 | 2023-03-23T09:53:16 | null |
CONTRIBUTOR
| null | null | null |
### Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.
Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but since LAION doesn't fit on your disk, you'd like to leverage streaming:
```
from datasets import load_dataset
dataset = load_dataset("laion/laion400m", streaming=True, split="train")
```
Then you could filter the dataset based on certain conditions:
```
filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400)
```
In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push:
```
from datasets import Dataset
Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...)
```
It would be great if we could instead lazily push the data to the hub (basically stream the data to the hub), without being limited by our disk size:
```
filtered_dataset.push_to_hub("my-filtered-dataset")
```
### Motivation
This feature would be very useful for people that want to filter huge datasets without having to load the entire dataset or a filtered version thereof on their local disk.
### Your contribution
Happy to test out a PR :)
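In the meantime, a rough sketch of one possible workaround, assuming shard-by-shard uploads are acceptable: stream the filtered examples, write bounded parquet shards to disk, and upload each shard with `huggingface_hub` as soon as it is complete, so only one shard ever sits on local disk. The repo id, shard size and file layout below are made up:
```python
import itertools
import os

import pyarrow as pa
import pyarrow.parquet as pq
from huggingface_hub import HfApi

api = HfApi()
repo_id = "my-user/my-filtered-dataset"  # hypothetical target repo
api.create_repo(repo_id, repo_type="dataset", exist_ok=True)

shard_size = 50_000  # max number of examples held in memory at once
examples = iter(filtered_dataset)  # the streaming, filtered IterableDataset from above

for shard_idx in itertools.count():
    rows = list(itertools.islice(examples, shard_size))
    if not rows:
        break
    local_path = f"shard-{shard_idx:05d}.parquet"
    pq.write_table(pa.Table.from_pylist(rows), local_path)
    api.upload_file(
        path_or_fileobj=local_path,
        path_in_repo=f"data/{local_path}",
        repo_id=repo_id,
        repo_type="dataset",
    )
    os.remove(local_path)  # keep at most one shard on disk at a time
```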
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5665/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5665/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5663
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5663/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5663/events
|
https://github.com/huggingface/datasets/issues/5663
| 1,637,173,248 |
I_kwDODunzps5hlUgA
| 5,663 |
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-03-23T09:39:43 | 2023-03-23T10:09:55 | 2023-03-23T10:09:55 |
MEMBER
| null | null | null |
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ======
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5663/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5661
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5661/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5661/events
|
https://github.com/huggingface/datasets/issues/5661
| 1,637,129,445 |
I_kwDODunzps5hlJzl
| 5,661 |
CI is broken: Unnecessary `dict` comprehension
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2023-03-23T09:13:01 | 2023-03-23T09:37:51 | 2023-03-23T09:37:51 |
MEMBER
| null | null | null |
CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
```
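For reference, ruff's C416 flags comprehensions that only copy key/value pairs verbatim; a minimal illustration of the kind of rewrite it asks for (the variable names here are made up, not the actual code at `arrow_dataset.py:3267`):
```python
# Illustrative only: a dict comprehension that merely copies pairs triggers C416.
column_types = {"text": "string", "label": "int64"}

# Flagged pattern:
copied = {key: value for key, value in column_types.items()}

# Suggested rewrite: build the dict directly.
copied = dict(column_types.items())
```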
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5661/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5660
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5660/events
|
https://github.com/huggingface/datasets/issues/5660
| 1,635,543,646 |
I_kwDODunzps5hfGpe
| 5,660 |
integration with imbalanced-learn
|
{
"login": "tansaku",
"id": 30216,
"node_id": "MDQ6VXNlcjMwMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tansaku",
"html_url": "https://github.com/tansaku",
"followers_url": "https://api.github.com/users/tansaku/followers",
"following_url": "https://api.github.com/users/tansaku/following{/other_user}",
"gists_url": "https://api.github.com/users/tansaku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tansaku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tansaku/subscriptions",
"organizations_url": "https://api.github.com/users/tansaku/orgs",
"repos_url": "https://api.github.com/users/tansaku/repos",
"events_url": "https://api.github.com/users/tansaku/events{/privacy}",
"received_events_url": "https://api.github.com/users/tansaku/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] |
closed
| false | null |
[] | null |
[
"You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), you just need to pass the list of rows ids and labels:\r\n\r\n```python\r\nrow_indices = list(range(len(dataset)))\r\nresampled_row_indices, _ = make_imbalance(\r\n row_indices,\r\n dataset[\"label\"],\r\n sampling_strategy={0: 25, 1: 50, 2: 50},\r\n random_state=RANDOM_STATE,\r\n)\r\n\r\nresampled_dataset = dataset.select(resampled_row_indices)\r\n```"
] | 2023-03-22T11:05:17 | 2023-07-06T18:10:15 | 2023-07-06T18:10:15 |
NONE
| null | null | null |
### Feature request
Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two libraries to interoperate - some examples would be great. I've looked online and asked GPT-4, but so far I'm not making much progress.
### Your contribution
If I can get this working myself I can submit a PR with example code to go in the docs
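A minimal sketch of one possible interop path, assuming a dataset with a `label` column (the column names, sampler choice, and toy data below are illustrative): convert to pandas with `.to_pandas()`, resample, then rebuild a `Dataset`.
```python
from datasets import Dataset
from imblearn.under_sampling import RandomUnderSampler

# Toy imbalanced dataset; "text" and "label" are illustrative column names.
ds = Dataset.from_dict({"text": ["a"] * 90 + ["b"] * 10, "label": [0] * 90 + [1] * 10})

df = ds.to_pandas()
X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(df[["text"]], df["label"])

balanced_df = X_res.copy()
balanced_df["label"] = y_res.to_numpy()
balanced = Dataset.from_pandas(balanced_df, preserve_index=False)
print(balanced)
```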
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5659
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5659/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5659/events
|
https://github.com/huggingface/datasets/issues/5659
| 1,635,447,540 |
I_kwDODunzps5hevL0
| 5,659 |
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"cc @polinaeterna @lhoestq ",
"@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. \r\nThe only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n\r\n```bash\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\ncd libsndfile/\r\nautoreconf -vif\r\n./configure --enable-werror \r\nmake\r\nmake install\r\n```\r\nfor this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n```bash\r\napt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\nbut for other Linux distributions it might be different.\r\n\r\nWhen the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n\r\n```bash\r\ncp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\ncp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n```\r\n\r\nAnother solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`. ",
"Not sure if it may help, but you could also try updating `pip` before installing soundfile",
"@lhoestq @sanchit-gandhi. I encountered the same error (also on the TPU v4) when trying to run `datasets` from source.\r\n\r\nDowngrading soundfile with `pip install soundfile==0.12.0` seems to fix the issue for me.",
"Maybe let's open an issue at https://github.com/bastibe/python-soundfile/issues in case they might know why you get `OSError: cannot load library 'libsndfile.so'` ?",
"> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n\r\nThis fixed the issue for me. After installing libsndfile as described above, I had to uninstall soundfile and re-install it with this command. `pip install \"soundfile>=0.12.1\"`",
"Thank you so much for the comprehensive instructions @polinaeterna! Also confirming that they worked for me 🤗 In my case, I had to run several of these commands under \"sudo\" for privileges, but otherwise this workaround gave a successful `libsndfile` install:\r\n\r\n1. Grab source code:\r\n```\r\ngit clone https://github.com/libsndfile/libsndfile.git\r\n```\r\n\r\n2. Set up a build environment:\r\n```\r\nsudo apt install autoconf autogen automake build-essential libasound2-dev \\\r\n libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n libmpg123-dev pkg-config python\r\n```\r\n\r\n3. Build and test `libsndfile`:\r\n\r\n```\r\nautoreconf -vif\r\n./configure --enable-werror\r\nsudo make\r\nsudo make check\r\n```\r\n\r\n4. Create `_soundfile_data` submodule (if it does not exist already):\r\n```\r\nsudo mkdir /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```\r\n\r\n5. Copy `libsndfile` files into submodule:\r\n```\r\nsudo cp /usr/local/lib/libsndfile.* /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n```",
"On a different machine, I also tried separately by first upgrading pip, then installing soundfile. This worked too! Thanks @lhoestq 🙌",
"> @sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). Required `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume it **might not** be the case for some non standard Linux distributions. The only solution for using `soundfile` here is to build [`libsndfile`](https://github.com/libsndfile/libsndfile) from source:\r\n> \r\n> ```shell\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> cd libsndfile/\r\n> autoreconf -vif\r\n> ./configure --enable-werror \r\n> make\r\n> make install\r\n> ```\r\n> \r\n> for this, some building libraries should be installed, for Debian/Ubuntu it's like:\r\n> \r\n> ```shell\r\n> apt install autoconf autogen automake build-essential libasound2-dev \\\r\n> libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n> libmpg123-dev pkg-config python\r\n> ```\r\n> \r\n> but for other Linux distributions it might be different.\r\n> \r\n> When the binary is compiled, it should be put into location where `soundfile` would search for it (the directory is named `_soundfile_data`), it depends on where`libsdfile` (from the previous step) and `soundfile` were installed, might be something like this:\r\n> \r\n> ```shell\r\n> cp /usr/local/lib/libsndfile.so /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> cp /usr/local/lib/libsndfile.la /usr/local/lib/python3.7/dist-packages/_soundfile_data/\r\n> ```\r\n> \r\n> Another solution is to not use `soundfile` and apply custom processing function with `torchaudio` while setting `decode=False` in `Audio` feature and passing custom function to `.map`.\r\n\r\nThanks, the solution solved my problem. \r\n\r\n1. Purge uninstall libsndfile, uninstall python-soundfile.\r\n2. Build libsndfile from source code and install.\r\n3. Build python-soundfile from source code and install\r\n4. Well done.",
"> Thank you so much for the comprehensive instructions @polinaeterna! Also confirming that they worked for me 🤗 In my case, I had to run several of these commands under \"sudo\" for privileges, but otherwise this workaround gave a successful `libsndfile` install:\r\n> \r\n> 1. Grab source code:\r\n> \r\n> ```\r\n> git clone https://github.com/libsndfile/libsndfile.git\r\n> ```\r\n> \r\n> 2. Set up a build environment:\r\n> \r\n> ```\r\n> sudo apt install autoconf autogen automake build-essential libasound2-dev \\\r\n> libflac-dev libogg-dev libtool libvorbis-dev libopus-dev libmp3lame-dev \\\r\n> libmpg123-dev pkg-config python\r\n> ```\r\n> \r\n> 3. Build and test `libsndfile`:\r\n> \r\n> ```\r\n> autoreconf -vif\r\n> ./configure --enable-werror\r\n> sudo make\r\n> sudo make check\r\n> ```\r\n> \r\n> 4. Create `_soundfile_data` submodule (if it does not exist already):\r\n> \r\n> ```\r\n> sudo mkdir /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n> ```\r\n> \r\n> 5. Copy `libsndfile` files into submodule:\r\n> \r\n> ```\r\n> sudo cp /usr/local/lib/libsndfile.* /usr/local/lib/python3.8/dist-packages/_soundfile_data/\r\n> ```\r\n\r\nI had to run 'make install' or the `/usr/local/lib/libsndfile.*` files didn't exist.\r\n\r\nIt's working though!",
"I had the same issue but it is working now! Thanks for all of your comments!",
"I had the same issue on SageMaker but not on Colab;\r\nThe `soundfile` versioning was fine.\r\n\r\n my approach to solve it was to match {\"numpy\", \"numba\"} exact versions\r\n\r\n```\r\n! pip install \"numpy==1.23.5\"\r\n! pip install \"numpy==0.58.1\"\r\n\r\n```\r\nthe numbers are from Colab where successfully I could do the job.\r\n\r\n"
] | 2023-03-22T10:07:33 | 2024-01-17T13:59:22 | 2023-04-07T08:51:28 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type.
The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71
However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing:
```
pip install soundfile==0.12.1
```
Then:
```python
>>> import soundfile
>>> soundfile.__libsndfile_version__
```
<details>
<summary> Traceback (most recent call last): </summary>
```
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module>
import _soundfile_data # ImportError if this doesn't exist
ModuleNotFoundError: No module named '_soundfile_data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module>
raise OSError('sndfile library not found using ctypes.util.find_library')
OSError: sndfile library not found using ctypes.util.find_library
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module>
_snd = _ffi.dlopen(_explicit_libname)
OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory
```
</details>
Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as:
```
pip install --upgrade soundfile
sudo apt install libsndfile1
```
We can now import `soundfile`:
```python
>>> import soundfile
>>> soundfile.__version__
'0.12.1'
>>> soundfile.__libsndfile_version__
'1.0.28'
```
We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147
But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files:
https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138
Updating/upgrading `libsndfile` doesn't change this:
```
sudo apt-get update
sudo apt-get upgrade
```
Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files.
Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues.
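As a quick way to check whether a given environment will hit this, one can inspect the bundled `libsndfile` version and its supported formats (a small sketch, assuming `soundfile` imports successfully; mp3 only shows up with `libsndfile` >= 1.1.0):
```python
import soundfile as sf

print("soundfile:", sf.__version__)
print("libsndfile:", sf.__libsndfile_version__)
# MP3 appears in the format list only when the bundled libsndfile supports it.
print("mp3 supported:", "MP3" in sf.available_formats())
```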
### Steps to reproduce the bug
Environment described above. Loading mp3 files:
```python
from datasets import load_dataset
common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
print(next(iter(common_voice_es)))
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[4], line 2
1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
----> 2 print(next(iter(common_voice_es)))
File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self)
937 for key, example in ex_iterable:
938 if self.features:
939 # `IterableDataset` automatically fills missing columns with None.
940 # This is done with `_apply_feature_types_on_example`.
--> 941 yield _apply_feature_types_on_example(
942 example, self.features, token_per_repo_id=self._token_per_repo_id
943 )
944 else:
945 yield example
File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id)
698 encoded_example = features.encode_example(example)
699 # Decode example for Audio feature, e.g.
--> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
701 return decoded_example
File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
-> 1864 return {
1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0)
1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1851 """Decode example with custom feature decoding.
1852
1853 Args:
(...)
1861 `dict[str, Any]`
1862 """
1864 return {
-> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1866 if self._column_requires_decoding[column_name]
1867 else value
1868 for column_name, (feature, value) in zip_dict(
1869 {key: value for key, value in self.items() if key in example}, example
1870 )
1871 }
File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id)
1305 elif isinstance(schema, (Audio, Image)):
1306 # we pass the token to read and decode files from private repositories in streaming mode
1307 if obj is not None and schema.decode:
-> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1309 return obj
File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id)
162 raise RuntimeError(
163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, "
164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
165 )
166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3":
--> 167 raise RuntimeError(
168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, "
169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
170 )
172 if file is None:
173 token_per_repo_id = token_per_repo_id or {}
RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`.
```
### Expected behavior
Load mp3 files!
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Soundfile version: 0.12.1
- Libsndfile version: 1.0.28
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5659/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5654
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5654/events
|
https://github.com/huggingface/datasets/issues/5654
| 1,633,523,705 |
I_kwDODunzps5hXZf5
| 5,654 |
Offset overflow when executing Dataset.map
|
{
"login": "jan-pair",
"id": 118280608,
"node_id": "U_kgDOBwzRoA",
"avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jan-pair",
"html_url": "https://github.com/jan-pair",
"followers_url": "https://api.github.com/users/jan-pair/followers",
"following_url": "https://api.github.com/users/jan-pair/following{/other_user}",
"gists_url": "https://api.github.com/users/jan-pair/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jan-pair/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jan-pair/subscriptions",
"organizations_url": "https://api.github.com/users/jan-pair/orgs",
"repos_url": "https://api.github.com/users/jan-pair/repos",
"events_url": "https://api.github.com/users/jan-pair/events{/privacy}",
"received_events_url": "https://api.github.com/users/jan-pair/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n",
"As a workaround, one can replace\r\n`return {\"hr\": torch.stack([crop_transf(tensor) for _ in range(25)])}`\r\nwith\r\n`return {f\"hr_crop_{i}\": crop_transf(tensor) for i in range(25)}`\r\nand then choose appropriate crop randomly in further processing, but I still don't understand why the original approach doesn't work(\r\n"
] | 2023-03-21T09:33:27 | 2023-03-21T10:32:07 | null |
NONE
| null | null | null |
### Describe the bug
Hi, I'm trying to use the `.map` method to cache multiple random crops of each image to speed up data processing during training, since the images are too large.
The map function runs through all iterations and then raises the following error:
```bash
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single
writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize
self.write_examples_on_file()
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch
self.write_table(pa_table, writer_batch_size)
File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images and can be replaced by any suitable one):
### Steps to reproduce the bug
```python
from glob import glob
import torch
from datasets import Dataset, Image
from torchvision.transforms import PILToTensor, RandomCrop
file_paths = glob("/home/datasets/DIV2K_train_HR/*")
to_tensor = PILToTensor()
crop_transf = RandomCrop(size=256)
def prepare_data(example):
tensor = to_tensor(example["image"].convert("RGB"))
return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])}
train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(
prepare_data,
cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
desc="Caching multiple random crops of image",
remove_columns="image",
)
print(train_data[0].keys(), train_data[0]["hr"].shape)
```
### Expected behavior
Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])`
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Pytorch version: 2.0.0+cu117
- torchvision version: 0.15.1+cu117
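One possible mitigation, assuming the overflow comes from the large per-batch arrays produced by the 25 stacked crops, is to make the Arrow writer flush smaller batches via `writer_batch_size` (a sketch, not a confirmed fix for this case):
```python
train_data = train_data.map(
    prepare_data,
    writer_batch_size=10,  # flush smaller Arrow batches to stay under Arrow's 32-bit offset limit
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
)
```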
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5653
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5653/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5653/events
|
https://github.com/huggingface/datasets/issues/5653
| 1,633,254,159 |
I_kwDODunzps5hWXsP
| 5,653 |
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
|
{
"login": "RmZeta2718",
"id": 42400165,
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RmZeta2718",
"html_url": "https://github.com/RmZeta2718",
"followers_url": "https://api.github.com/users/RmZeta2718/followers",
"following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}",
"gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions",
"organizations_url": "https://api.github.com/users/RmZeta2718/orgs",
"repos_url": "https://api.github.com/users/RmZeta2718/repos",
"events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}",
"received_events_url": "https://api.github.com/users/RmZeta2718/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false | null |
[] | null |
[
"I agree this should be documented"
] | 2023-03-21T05:25:35 | 2023-03-24T16:36:23 | 2023-03-24T16:36:23 |
NONE
| null | null | null |
### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
The [documentation of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`.
### Environment info
datasets main document
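A small sketch of the behaviour in question (path and values are illustrative): with multiprocessing each worker writes its own shard, so `num_shards` ends up being at least `num_proc` when it is not set explicitly.
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})

# No num_shards / max_shard_size given: the 4 workers produce 4 shard files.
ds.save_to_disk("tmp_out", num_proc=4)
```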
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5653/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5651
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5651/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5651/events
|
https://github.com/huggingface/datasets/issues/5651
| 1,631,967,509 |
I_kwDODunzps5hRdkV
| 5,651 |
expanduser in save_to_disk
|
{
"login": "RmZeta2718",
"id": 42400165,
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RmZeta2718",
"html_url": "https://github.com/RmZeta2718",
"followers_url": "https://api.github.com/users/RmZeta2718/followers",
"following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}",
"gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions",
"organizations_url": "https://api.github.com/users/RmZeta2718/orgs",
"repos_url": "https://api.github.com/users/RmZeta2718/repos",
"events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}",
"received_events_url": "https://api.github.com/users/RmZeta2718/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false |
{
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"`save_to_disk` should indeed expand `~`. Marking it as a \"good first issue\".",
"#self-assign\r\n\r\nFile path to code: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_dataset.py#L1364\r\n\r\n@RmZeta2718 I created a pull request for this issue. ",
"Hello, \r\nIt says `save_to_disk` is deprecated in 2.8.0, so the alternative to this will be `storage_options`? \r\n\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.save_to_disk",
"@ashikshafi08 I think you misunderstood the warning. The method `save_to_disk` is not deprecated only the optional parameter `fs`.\r\nAlso @benjaminbrown038 as I cannot find your PR I would like to work on this if you don't mind.",
"@mariosasko It's been several months and the PR is not reviewed. Could you please take a look? I assume this is not complicated and could be merged fairly soon."
] | 2023-03-20T12:02:18 | 2023-10-27T14:04:37 | 2023-10-27T14:04:37 |
NONE
| null | null | null |
### Describe the bug
save_to_disk() does not expand `~`
1. `dataset = load_dataset("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. a folder named "~" is created in the current folder
4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`)
related issue https://github.com/huggingface/transformers/issues/10628
### Steps to reproduce the bug
As described above.
### Expected behavior
expanduser correctly
### Environment info
- datasets 2.10.1
- python 3.10
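Until `~` is expanded inside the library, a workaround sketch is to expand the path on the caller side (the dataset name below is just a placeholder):
```python
import os
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes")  # placeholder dataset
dataset.save_to_disk(os.path.expanduser("~/data"))  # expands to /home/<user>/data
```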
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5651/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5650/events
|
https://github.com/huggingface/datasets/issues/5650
| 1,630,336,919 |
I_kwDODunzps5hLPeX
| 5,650 |
load_dataset can't work correct with my image data
|
{
"login": "WiNE-iNEFF",
"id": 41611046,
"node_id": "MDQ6VXNlcjQxNjExMDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WiNE-iNEFF",
"html_url": "https://github.com/WiNE-iNEFF",
"followers_url": "https://api.github.com/users/WiNE-iNEFF/followers",
"following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}",
"gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions",
"organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs",
"repos_url": "https://api.github.com/users/WiNE-iNEFF/repos",
"events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}",
"received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Can you post a reproducible code snippet of what you tried to do?\r\n\r\n",
"> Can you post a reproducible code snippet of what you tried to do?\n> \n> \n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\n```",
"hi @WiNE-iNEFF ! can you please also tell a bit more about how your data is structured (directory structure and filenames patterns)?",
"> hi @WiNE-iNEFF ! can you please also tell a bit more about how your data is structured (directory structure and filenames patterns)?\n\nAll file have format .png converted in RGBA. \nIn main folder \"MyData\" contain 4 folder with images. In function load_dataset i use folder \"MyData\"",
"@WiNE-iNEFF I'm sorry there is still not enough information to answer your question :( For now I can only assume that your [filenames contain split names](https://huggingface.co/docs/datasets/repository_structure#splits-and-file-names) which are somehow incorrectly parsed. \r\nWhat would be the output if you omit `split` while loading? Like just\r\n```python\r\nds = load_dataset(\"MyData\")\r\nprint(ds)\r\n```\r\n\r\n",
"> @WiNE-iNEFF I'm sorry there is still not enough information to answer your question :( For now I can only assume that your [filenames contain split names](https://huggingface.co/docs/datasets/repository_structure#splits-and-file-names) which are somehow incorrectly parsed. \n> What would be the output if you omit `split` while loading? Like just\n> ```python\n> ds = load_dataset(\"MyData\")\n> print(ds)\n> ```\n> \n> \n\n```python\nDataset({\n features: ['image', 'label'],\n num_rows: 4\n})\n```",
"@WiNE-iNEFF My only guess is that 4 images in your data have `\"train\"` string in their names (something like `\"train_image_0.png\"`) and others do not and the loader ignores all the files that do not contain split name in filename. If it's true, please try to remove \"train\" from filenames. Or maybe they are inside a directory named \"train\", then the directory should be renamed (unless you want to put only these 4 specific images to the train but apparently you do not).\r\n\r\nIf there is a bug I cannot investigate it unfortunately because I cannot reproduce your case without some data samples. ",
"> @WiNE-iNEFF My only guess is that 4 images in your data have `\"train\"` string in their names (something like `\"train_image_0.png\"`) and others do not and the loader ignores all the files that do not contain split name in filename. If it's true, please try to remove \"train\" from filenames. Or maybe they are inside a directory named \"train\", then the directory should be renamed (unless you want to put only these 4 specific images to the train but apparently you do not).\n> \n> If there is a bug I cannot investigate it unfortunately because I cannot reproduce your case without some data samples. \n\nI checked my files and some of them do have the words train, valid and test in their names, but the number of such images is more than 500, not 4.",
"@WiNE-iNEFF Probably they are named inconsistently so that the correct pattern for which files should correspond to which split cannot be inferred. You can make it clearer to the loader by removing split names from filenames and putting files in separate folder for each split (you can take a look at the [documentation for imagefolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)):\r\n```\r\n Fuaimeanna2/\r\n├─ test\r\n│ ├─ label_0\r\n│ │ ├── filename_0.jpg\r\n│ │ └── filename_1.jpg\r\n│ │ └── ...\r\n│ ├─ label_1\r\n│ │ └── ...\r\n│ ├─ label_2\r\n│ │ └── ...\r\n│ └─ label_3\r\n│ └── ...\r\n├─ train\r\n│ ├─ label_0\r\n│ │ └── ...\r\n│ ├─ label_1\r\n│ │ └── ...\r\n│ ├─ label_2\r\n│ │ └── ...\r\n│ └─ label_3\r\n│ └── ...\r\n└── validation\r\n ├─ label_0\r\n │ └── ...\r\n ├─ label_1\r\n │ └── ...\r\n ├─ label_2\r\n │ └── ...\r\n └─ label_3\r\n └── ...\r\n```",
"> @WiNE-iNEFF Probably they are named inconsistently so that the correct pattern for which files should correspond to which split cannot be inferred. You can make it clearer to the loader by removing split names from filenames and putting files in separate folder for each split (you can take a look at the [documentation for imagefolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder)):\n> ```\n> Fuaimeanna2/\n> ├─ test\n> │ ├─ label_0\n> │ │ ├── filename_0.jpg\n> │ │ └── filename_1.jpg\n> │ │ └── ...\n> │ ├─ label_1\n> │ │ └── ...\n> │ ├─ label_2\n> │ │ └── ...\n> │ └─ label_3\n> │ └── ...\n> ├─ train\n> │ ├─ label_0\n> │ │ └── ...\n> │ ├─ label_1\n> │ │ └── ...\n> │ ├─ label_2\n> │ │ └── ...\n> │ └─ label_3\n> │ └── ...\n> └── validation\n> ├─ label_0\n> │ └── ...\n> ├─ label_1\n> │ └── ...\n> ├─ label_2\n> │ └── ...\n> └─ label_3\n> └── ...\n> ```\n\nI have read this documentation more than once. It just wasn't a problem before.",
"Hi,\r\n\r\nYou need to use:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\", split=\"train\", data_dir=\"path_to_your_folder\")\r\n```\r\ninstead of \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\r\n```\r\nTo create an image dataset from your local folders.",
"> Hi,\r\n> \r\n> You need to use:\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"imagefolder\", split=\"train\", data_dir=\"path_to_your_folder\")\r\n> ```\r\n> \r\n> instead of\r\n> \r\n> ```\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"my_folder_name\", split=\"train\")\r\n> ```\r\n> \r\n> To create an image dataset from your local folders.\r\n\r\nThank you, but even using the method that you wrote above absolutely nothing changes, especially without using data_dir on my other data everything works fine",
"@WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \r\n\r\n\r\n> even using the method that you wrote above absolutely nothing changes\r\n\r\nfyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.",
"Oh I didn't know that! OK but in any case, not sure why the image builder isn't working for @WiNE-iNEFF. But it's hard for us to help if we can't reproduce. I'd just check the structure of the folders, see if the splits are correctly set up, etc.",
"> @WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \n> \n> \n> > even using the method that you wrote above absolutely nothing changes\n> \n> fyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.\n\nI'll try to try your method over the next few days, then I'll write it turned out ",
"> @WiNE-iNEFF have you tried the suggestion I posted above? with removing split names from filenames and structuring files in folders? \n> \n> \n> > even using the method that you wrote above absolutely nothing changes\n> \n> fyi - nothing changed because these two approaches are basically the same. it's just that when you pass your data directory as a dataset name (`load_dataset(\"my_folder_name\"`), not as `data_dir` (`load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`), `datasets` infers what module to use (`imagefolder` in your case) automatically, by file extensions.\n\nI tried creating a `train` folder and put my image folders in it. As a result, all 18,000 images were loaded. ",
"@WiNE-iNEFF great! So to explain what happened according to my assumptions:\r\n\r\nWhen you use a standard packaged loader (like `imagefolder`, `csv`, `jsonl`, and so on) and load your data like `load_dataset(\"my_folder_name\")` or `load_dataset(\"imagefolder\", data_dir=\"my_folder_name\"`, the library searches for patterns to divide files into splits. This is described a bit in [this doc](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#splits-and-file-names). And the order to search for patterns is the following:\r\n1. first it checks for [pattern like `data/<split_name>-xxxxx-of-xxxxx`](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#custom-split-names) (which allows to pass custom split names)\r\n2. then for directories named as splits (if you have directories named `train`, `test` etc.)\r\n3. then for [splits in filenames](https://huggingface.co/docs/datasets/v2.10.0/en/repository_structure#splits-and-file-names) (like if you have files named `train-image.jpg`, `test_0.jpg`, ...)\r\n4. then if no pattern was found, it treats all files as belonging to a single `train` split\r\n\r\nThe code is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L215).\r\nSo I assume that in your case, since you didn't have directories for splits (pattern 2), some files that included split keywords (pattern 3) were included and others were ignored as not matching the pattern. And when you added `train` directory, the pattern for directories (pattern 2) was triggered first and everything worked as expected. Everything worked in your previous cases probably because you didn't have split names keywords in filenames, so all the files ended up being a part of a single train split (pattern 4).\r\n\r\nAnother way to mitigate this apart from structuring your data according to the patterns is to explicitly state with files belong to which splits by passing them with `data_files` parameter:\r\n```python\r\nload_dataset(\"my_folder_name\", data_files={\"train\": \"**\"}) # to tell that all files should be included \r\n```\r\n\r\nNow I see that this order should be explained in documentation and also referenced in sections for packaged modules like `imagefolder`, thank you for pointing this out. \r\n\r\n \r\n",
"@NielsRogge @polinaeterna I have a similar problem when reading my dataset. I want to use DETR for object detection, but my data is in YOLO format. With a dataset of 10k images, yolo format involves having 10k labels. As far as I read regarding [COCO format](https://auto.gluon.ai/stable/tutorials/multimodal/object_detection/data_preparation/convert_data_to_coco_format.html), there must be one JSON per split. However, as I post in the [Hugging Face forum](https://discuss.huggingface.co/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894), when it is read, the number of rows is 1, which does not make sense. \r\nThe instruction to read the train-val-test splits are: \r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n\t\"train\": './train_labels.json',\r\n\t\"validation\": './val_labels.json',\r\n\t\"test\": './test_labels.json'\r\n}\r\ndataset = load_dataset(\"json\", data_files=data_files)\r\n```\r\nAn example of the short version of the json file I read, to reproduce my error, is the following: \r\n\r\n``` json\r\n{\r\n \"info\": {},\r\n \"licenses\": [],\r\n \"images\": [\r\n {\r\n \"id\": 1,\r\n \"file_name\": \"aceca_100.mp4frame21.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null\r\n },\r\n {\r\n \"id\": 2,\r\n \"file_name\": \"aceca_100.mp4frame24.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null\r\n },\r\n {\r\n \"id\": 3,\r\n \"file_name\": \"aceca_100.mp4frame25.png\",\r\n \"width\": 1280,\r\n \"height\": 720,\r\n \"pixel_values\": null,\r\n \"pixel_mask\": null}],\r\n \"annotations\": [\r\n {\r\n \"id\": 1,\r\n \"image_id\": 1,\r\n \"category_id\": 0,\r\n \"bbox\": [0.0, 278.21896388398557, 86.94096523844935, 156.0293445072134],\r\n \"area\": 13565.341816979679,\r\n \"iscrowd\": 0\r\n },\r\n {\r\n \"id\": 2,\r\n \"image_id\": 2,\r\n \"category_id\": 0,\r\n \"bbox\": [149.28851295721816, 297.6359759754418, 34.76802347007475, 98.03908698442889],\r\n \"area\": 3408.625277259324,\r\n \"iscrowd\": 0\r\n },\r\n {\r\n \"id\": 3,\r\n \"image_id\": 3,\r\n \"category_id\": 0,\r\n \"bbox\": [153.3817197549372, 300.168969412891, 31.787555842913775, 89.69583163436312],\r\n \"area\": 2851.2112569539095,\r\n \"iscrowd\": 0\r\n }\r\n ],\r\n \"categories\": [\r\n {\r\n \"id\": 0, \"name\": \"person\"\r\n }\r\n ]\r\n }\r\n```\r\nIf full files required, my email is [email protected]",
"Hi @Alberto1404, to load an object detection dataset it's recommended to make use of the metadata feature as explained [here](https://huggingface.co/docs/datasets/image_dataset#object-detection). ",
"Thank you @NielsRogge! It works!!!",
"You can now refer to https://huggingface.co/docs/datasets/repository_structure to learn about the `datasets`' data files inference, so I'm closing this issue."
] | 2023-03-18T13:59:13 | 2023-07-24T14:13:02 | 2023-07-24T14:13:01 |
NONE
| null | null | null |
I have about 20000 images in my folder, divided into 4 subfolders named after the classes.
When I use `load_dataset("my_folder_name", split="train")`, the resulting dataset contains only 4 images; the remaining 19000 images are not added. I don't understand what the problem is. I tried converting the images and similar fixes, but absolutely nothing worked.
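Summarising the workarounds from the discussion as a sketch (folder name as in the report; `**` marks every file as part of the train split):
```python
from datasets import load_dataset

# Either move the four class folders under MyData/train/ ...
ds = load_dataset("imagefolder", data_dir="MyData", split="train")

# ... or keep the current layout and mark every file as train explicitly.
ds = load_dataset("MyData", data_files={"train": "**"}, split="train")
```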
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5650/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5649
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5649/events
|
https://github.com/huggingface/datasets/issues/5649
| 1,630,173,460 |
I_kwDODunzps5hKnkU
| 5,649 |
The index column created with .to_sql() is dependent on the batch_size when writing
|
{
"login": "lsb",
"id": 45281,
"node_id": "MDQ6VXNlcjQ1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsb",
"html_url": "https://github.com/lsb",
"followers_url": "https://api.github.com/users/lsb/followers",
"following_url": "https://api.github.com/users/lsb/following{/other_user}",
"gists_url": "https://api.github.com/users/lsb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsb/subscriptions",
"organizations_url": "https://api.github.com/users/lsb/orgs",
"repos_url": "https://api.github.com/users/lsb/repos",
"events_url": "https://api.github.com/users/lsb/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ",
"I think this is low enough priority for me to close this as Won't Fix. If I need any primary keys I can generate them beforehand. Feel free to reopen."
] | 2023-03-18T05:25:17 | 2023-06-17T07:01:57 | 2023-06-17T07:01:57 |
NONE
| null | null | null |
### Describe the bug
It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index.
This can be a problem, for instance, when building a FAISS index on a dataset and then trying to match up IDs with an SQL export.
### Steps to reproduce the bug
```
from datasets import Dataset
import sqlite3
db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101,106)})
nice_numbers.to_sql("nice1", db, batch_size=1)
nice_numbers.to_sql("nice2", db, batch_size=2)
print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)]
print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)]
```
### Expected behavior
I expected the "index" column to be unique
### Environment info
```
% datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
zsh: segmentation fault datasets-cli env
```
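A workaround in the spirit of the closing comment: generate the key before exporting and suppress the pandas index (a sketch, assuming the extra keyword is forwarded to `pandas.DataFrame.to_sql`):
```python
from datasets import Dataset
import sqlite3

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101, 106)})

# Add a globally unique key ourselves and skip the per-batch pandas index.
nice_numbers = nice_numbers.add_column("id", list(range(len(nice_numbers))))
nice_numbers.to_sql("nice3", db, batch_size=2, index=False)
print(db.execute("select * from nice3").fetchall())  # [(101, 0), (102, 1), ...]
```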
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
| null |
not_planned
|
https://api.github.com/repos/huggingface/datasets/issues/5648
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5648/events
|
https://github.com/huggingface/datasets/issues/5648
| 1,629,253,719 |
I_kwDODunzps5hHHBX
| 5,648 |
flatten_indices doesn't work with pandas format
|
{
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indices mapping, so I'll incorporate the fix in that PR."
] | 2023-03-17T12:44:25 | 2023-03-21T13:12:03 | null |
NONE
| null | null | null |
### Describe the bug
Hi,
I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably because `flatten_indices` uses `map` internally, which doesn't accept dataframes as the transformation function output.
### Steps to reproduce the bug
import numpy as np
import pandas as pd
import datasets
tabular_data = pd.DataFrame(np.random.randn(10, 10))
tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data)
tabular_data.with_format("pandas").select([0, 1, 2, 3]).flatten_indices()
### Expected behavior
No error thrown
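A hedged workaround sketch in the meantime, based on the fix described in the comments: drop the pandas format before flattening, then restore it afterwards.
```python
import numpy as np
import pandas as pd
import datasets

ds = datasets.Dataset.from_pandas(pd.DataFrame(np.random.randn(10, 10)))
subset = ds.with_format("pandas").select([0, 1, 2, 3])

# flatten with the default (python objects) formatting, then re-apply pandas
flattened = subset.with_format(None).flatten_indices().with_format("pandas")
```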
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5647/events
|
https://github.com/huggingface/datasets/issues/5647
| 1,628,225,544 |
I_kwDODunzps5hDMAI
| 5,647 |
Make all print statements optional
|
{
"login": "gagan3012",
"id": 49101362,
"node_id": "MDQ6VXNlcjQ5MTAxMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gagan3012",
"html_url": "https://github.com/gagan3012",
"followers_url": "https://api.github.com/users/gagan3012/followers",
"following_url": "https://api.github.com/users/gagan3012/following{/other_user}",
"gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions",
"organizations_url": "https://api.github.com/users/gagan3012/orgs",
"repos_url": "https://api.github.com/users/gagan3012/repos",
"events_url": "https://api.github.com/users/gagan3012/events{/privacy}",
"received_events_url": "https://api.github.com/users/gagan3012/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"related to #5444 ",
"We now log these messages instead of printing them (addressed in #6019), so I'm closing this issue."
] | 2023-03-16T20:30:07 | 2023-07-21T14:20:25 | 2023-07-21T14:20:24 |
NONE
| null | null | null |
### Feature request
Make all print statements optional to speed up the development
### Motivation
I'm loading multiple tiny datasets, and all the print statements make the loading slower
### Your contribution
I can help contribute
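A minimal sketch of the existing knobs (assuming a recent `datasets` version where these helpers are exposed at the top level and under `datasets.logging`; exact import paths may vary between releases):
```python
import datasets

datasets.logging.set_verbosity_error()  # silence info-level messages routed through logging
datasets.disable_progress_bar()         # silence tqdm progress bars

ds = datasets.load_dataset("rotten_tomatoes", split="train")
```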
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5647/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5645
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5645/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5645/events
|
https://github.com/huggingface/datasets/issues/5645
| 1,627,108,278 |
I_kwDODunzps5g-7O2
| 5,645 |
Datasets map and select(range()) is giving dill error
|
{
"login": "Tanya-11",
"id": 90728105,
"node_id": "MDQ6VXNlcjkwNzI4MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tanya-11",
"html_url": "https://github.com/Tanya-11",
"followers_url": "https://api.github.com/users/Tanya-11/followers",
"following_url": "https://api.github.com/users/Tanya-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions",
"organizations_url": "https://api.github.com/users/Tanya-11/orgs",
"repos_url": "https://api.github.com/users/Tanya-11/repos",
"events_url": "https://api.github.com/users/Tanya-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tanya-11/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-beam` ?",
"@lhoestq That fixed the problem, Thanks :)"
] | 2023-03-16T10:01:28 | 2023-03-17T04:24:51 | 2023-03-17T04:24:51 |
NONE
| null | null | null |
### Describe the bug
I'm using the Hugging Face Datasets library to load the dataset in Google Colab
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
I get the following error: `module 'dill._dill' has no attribute 'log'`
I've tried downgrading the dill version from latest to 0.2.8, but no luck.
Stack trace:
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
> 367 try:
> --> 368 import transformers as tr
> 369
>
> ModuleNotFoundError: No module named 'transformers'
>
> During handling of the above exception, another exception occurred:
>
> AttributeError Traceback (most recent call last)
> 17 frames
> <ipython-input-13-dd14813880a6> in <module>
> ----> 1 test = train_dataset.select(range(10))
>
> /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
> 155 }
> 156 # apply actual function
> --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
> 159 # re-apply format to the output
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
> 155 if kwargs.get(fingerprint_name) is None:
> 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
> --> 157 kwargs[fingerprint_name] = update_fingerprint(
> 158 self._fingerprint, transform, kwargs_for_fingerprint
> 159 )
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
> 103 for key in sorted(transform_args):
> 104 hasher.update(key)
> --> 105 hasher.update(transform_args[key])
> 106 return hasher.hexdigest()
> 107
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
> 55 def update(self, value):
> 56 self.m.update(f"=={type(value)}==".encode("utf8"))
> ---> 57 self.m.update(self.hash(value).encode("utf-8"))
> 58
> 59 def hexdigest(self):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
> 51 return cls.dispatch[type(value)](cls, value)
> 52 else:
> ---> 53 return cls.hash_default(value)
> 54
> 55 def update(self, value):
>
> /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
> 44 @classmethod
> 45 def hash_default(cls, value):
> ---> 46 return cls.hash_bytes(dumps(value))
> 47
> 48 @classmethod
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
> 387 file = StringIO()
> 388 with _no_cache_fields(obj):
> --> 389 dump(obj, file)
> 390 return file.getvalue()
> 391
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
> 359 def dump(obj, file):
> 360 """pickle an object to a file"""
> --> 361 Pickler(file, recurse=True).dump(obj)
> 362 return
> 363
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
> 392 return
> 393
> --> 394 def load_session(filename='/tmp/session.pkl', main=None):
> 395 """update the __main__ module with the state from the session file"""
> 396 if main is None: main = _main_module
>
> /usr/lib/python3.9/pickle.py in dump(self, obj)
> 485 if self.proto >= 4:
> 486 self.framer.start_framing()
> --> 487 self.save(obj)
> 488 self.write(STOP)
> 489 self.framer.end_framing()
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)
>
> /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
> 689 write(NEWOBJ)
> 690 else:
> --> 691 save(func)
> 692 save(args)
> 693 write(REDUCE)
>
> /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
> 386 pickler._byref = False # disable pickling by name reference
> 387 pickler._recurse = False # disable pickling recursion for globals
> --> 388 pickler._session = True # is best indicator of when pickling a session
> 389 pickler.dump(main)
> 390 finally:
>
> /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
> 558 f = self.dispatch.get(t)
> 559 if f is not None:
> --> 560 f(self, obj) # Call unbound method with explicit self
> 561 return
> 562
>
> /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
> 583 dill._dill.log.info("# F1")
> 584 else:
> --> 585 dill._dill.log.info("F2: %s" % obj)
> 586 name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
> 587 dill._dill.StockPickler.save_global(pickler, obj, name=name)
>
> AttributeError: module 'dill._dill' has no attribute 'log'
### Steps to reproduce the bug
After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab,
do either
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns=["article", "abstract"],
> )
### Expected behavior
The map and select function should work
### Environment info
dataset: https://huggingface.co/datasets/scientific_papers
dill = 0.3.6
python= 3.9.16
transformer = 4.2.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5645/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5641
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5641/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5641/events
|
https://github.com/huggingface/datasets/issues/5641
| 1,625,942,730 |
I_kwDODunzps5g6erK
| 5,641 |
Features cannot be named "self"
|
{
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2023-03-15T17:16:40 | 2023-03-16T17:14:51 | 2023-03-16T17:14:51 |
NONE
| null | null | null |
### Describe the bug
Hi,
I noticed that we cannot create a Hugging Face dataset from a Pandas DataFrame with a column named `self`.
The error seems to be coming from argument validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
import pandas as pd

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])
datasets.arrow_dataset.Dataset.from_pandas(dummy_pandas)
```
### Expected behavior
No error thrown
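A hedged workaround sketch until this is fixed: rename the offending column before building the dataset and keep the safe name (renaming back to `self` afterwards would likely hit the same validation).
```python
import pandas as pd
import datasets

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])
ds = datasets.Dataset.from_pandas(dummy_pandas.rename(columns={"self": "self_"}))
print(ds.column_names)  # ['self_']
```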
### Environment info
- `datasets` version: 2.8.0
- Python version: 3.9.5
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5641/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5639
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5639/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5639/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5639/events
|
https://github.com/huggingface/datasets/issues/5639
| 1,625,737,098 |
I_kwDODunzps5g5seK
| 5,639 |
Parquet file wrongly recognized as zip prevents loading a dataset
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2023-03-15T15:20:45 | 2023-03-16T13:40:14 | 2023-03-16T13:40:14 |
MEMBER
| null | null | null |
### Describe the bug
When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails because the parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data/devops-00000-of-00001-22fe902fd8702892.parquet) is wrongly identified by Python as a zip rather than a parquet file.
(Full thread on [Slack](https://huggingface.slack.com/archives/C02V51Q3800/p1678890880803599))
### Steps to reproduce the bug
```python
from datasets import load_dataset_builder
ds = load_dataset_builder("HuggingFaceGECLM/StackExchange_Mar2023")
```
### Expected behavior
Loading the file normally.
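A small diagnostic sketch (not part of `datasets`, and the local path below is hypothetical): a valid Parquet file starts and ends with the magic bytes `PAR1`, while a zip starts with `PK\x03\x04`, so inspecting the raw bytes shows what the downloaded file really is.
```python
path = "devops-00000-of-00001-22fe902fd8702892.parquet"  # hypothetical local copy of the file

with open(path, "rb") as f:
    head = f.read(4)
    f.seek(-4, 2)  # 4 bytes before the end of the file
    tail = f.read(4)

print(head, tail)  # b'PAR1' b'PAR1' for Parquet; b'PK\x03\x04' at the start would indicate a zip
```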
### Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5639/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5638
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5638/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5638/events
|
https://github.com/huggingface/datasets/issues/5638
| 1,625,564,471 |
I_kwDODunzps5g5CU3
| 5,638 |
xPath to implement all operations for Path
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
" I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).",
"Right is there a difference between UPath and xPath? Typically is xPath less well implemented compared to Upath, ie missing some implementations of some methods? Or are there methods in xPath that are not implemented with UPath?",
"`xPath` is an internal component (it doesn't have a leading underscore in the name, but it should) not meant to be used outside of `datasets`, and it's only tested on HTTP URLs, not S3.\r\n\r\n",
"Okay I understand that xPath won't support my usecase. What I was perhaps getting to is why not use UPath in `datasets` instead of `xPath` if UPath seems to have strictly more robust implementations.",
"It seems like `universal_pathlib` does not support `fsspec` URL chaining (`::` is the chaining symbol) and \"compression\" filesystems (e.g., `zip`), but this is what we need to access and stream files from within an archive (e.g., we want to stream URLs such as this one: `zip://data.parquet::https://www.dummyurl.com/archive.zip`)"
] | 2023-03-15T13:47:11 | 2023-03-17T13:21:12 | 2023-03-17T13:21:12 |
CONTRIBUTOR
| null | null | null |
### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly: they should rely on `fsspec` methods instead of defaulting to `Path` methods, which only work locally.
### Motivation
I'm using xPath to interact with remote objects.
### Your contribution
I could try to make a PR. I'm a bit unfamiliar with chaining right now.
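For illustration only, a rough sketch of what "rely on `fsspec`" could look like (an assumption on my side, not the actual `datasets` implementation):
```python
import fsspec

def mkdir_anywhere(url: str) -> None:
    # resolve the filesystem implementation from the URL (local path, s3://, gs://, ...)
    fs, path = fsspec.core.url_to_fs(url)
    fs.makedirs(path, exist_ok=True)

mkdir_anywhere("s3://my-bucket/some/prefix")  # hypothetical bucket; requires s3fs and credentials
```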
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5638/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5637
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5637/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5637/events
|
https://github.com/huggingface/datasets/issues/5637
| 1,625,295,691 |
I_kwDODunzps5g4AtL
| 5,637 |
IterableDataset with_format does not support 'device' keyword for jax
|
{
"login": "Lime-Cakes",
"id": 91322985,
"node_id": "MDQ6VXNlcjkxMzIyOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lime-Cakes",
"html_url": "https://github.com/Lime-Cakes",
"followers_url": "https://api.github.com/users/Lime-Cakes/followers",
"following_url": "https://api.github.com/users/Lime-Cakes/following{/other_user}",
"gists_url": "https://api.github.com/users/Lime-Cakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lime-Cakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lime-Cakes/subscriptions",
"organizations_url": "https://api.github.com/users/Lime-Cakes/orgs",
"repos_url": "https://api.github.com/users/Lime-Cakes/repos",
"events_url": "https://api.github.com/users/Lime-Cakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lime-Cakes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is not trivial, so this is still a to-do.",
"Any plans to support it in the future? Or would streaming dataset be left without support for jax and tensorflow?"
] | 2023-03-15T11:04:12 | 2023-03-16T18:30:59 | null |
NONE
| null | null | null |
### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'device'`
Looking over the code, it seems IterableDataset supports only PyTorch, with no support for the jax `device` keyword?
https://github.com/huggingface/datasets/blob/fc5c84f36684343bff3e424cb0fd1ac5ecdd66da/src/datasets/iterable_dataset.py#L1029
### Steps to reproduce the bug
1. Load an IterableDataset (tested in streaming mode)
2. Call with_format('jax',device=device)
### Expected behavior
I expect to call `with_format('jax', device=device)` as per [documentation](https://huggingface.co/docs/datasets/use_with_jax) without error
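A hedged interim sketch (my assumption, not the documented API): keep the default format while streaming and move each example to the target device manually.
```python
import jax
import jax.numpy as jnp
from datasets import load_dataset

device = jax.devices()[0]
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)

for example in ds.take(2):
    # "label" is a numeric field in this dataset; convert and place it on the device by hand
    label = jax.device_put(jnp.asarray(example["label"]), device)
```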
### Environment info
Tested with installing newest (dev) and also pip release (2.10.1).
- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5637/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5634
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5634/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5634/events
|
https://github.com/huggingface/datasets/issues/5634
| 1,622,424,174 |
I_kwDODunzps5gtDpu
| 5,634 |
Not all progress bars are showing up when they should for downloading dataset
|
{
"login": "garlandz-db",
"id": 110427462,
"node_id": "U_kgDOBpT9Rg",
"avatar_url": "https://avatars.githubusercontent.com/u/110427462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garlandz-db",
"html_url": "https://github.com/garlandz-db",
"followers_url": "https://api.github.com/users/garlandz-db/followers",
"following_url": "https://api.github.com/users/garlandz-db/following{/other_user}",
"gists_url": "https://api.github.com/users/garlandz-db/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garlandz-db/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garlandz-db/subscriptions",
"organizations_url": "https://api.github.com/users/garlandz-db/orgs",
"repos_url": "https://api.github.com/users/garlandz-db/repos",
"events_url": "https://api.github.com/users/garlandz-db/events{/privacy}",
"received_events_url": "https://api.github.com/users/garlandz-db/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\nBy default, tqdm has `leave=True` to \"keep all traces of the progress bar upon the termination of iteration\". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.\r\n\r\nI feel like our TQDM bars are noisy, so I think we should always set `leave=False` and also use the `delay` parameter to display progress bars only for tasks that take time (e.g., more than 3s). What do you think about this? Do you find these bars useful (after the dataset generation is over)?\r\n",
"Hi sorry for the late update. I think the problem still exists despite the `leave` flag\r\n\r\n<img width=\"1105\" alt=\"image\" src=\"https://user-images.githubusercontent.com/110427462/226501615-5b02fb02-fd5f-4eda-b1f7-a7ed6570892d.png\">\r\n\r\n\r\n```\r\nPackage Version\r\n------------------------ ---------\r\naiofiles 22.1.0\r\naiohttp 3.8.4\r\naiosignal 1.3.1\r\naiosqlite 0.18.0\r\nanyio 3.6.2\r\nappnope 0.1.3\r\nargon2-cffi 21.3.0\r\nargon2-cffi-bindings 21.2.0\r\narrow 1.2.3\r\nasttokens 2.2.1\r\nasync-generator 1.10\r\nasync-timeout 4.0.2\r\nattrs 22.2.0\r\nBabel 2.12.1\r\nbackcall 0.2.0\r\nbeautifulsoup4 4.11.2\r\nbleach 6.0.0\r\nbrotlipy 0.7.0\r\ncertifi 2022.12.7\r\ncffi 1.15.1\r\ncfgv 3.3.1\r\ncharset-normalizer 2.1.1\r\ncomm 0.1.2\r\nconda 22.9.0\r\nconda-package-handling 2.0.2\r\nconda_package_streaming 0.7.0\r\ncoverage 7.2.1\r\ncryptography 38.0.4\r\ndatasets 2.8.0\r\ndebugpy 1.6.6\r\ndecorator 5.1.1\r\ndefusedxml 0.7.1\r\ndill 0.3.6\r\ndistlib 0.3.6\r\ndistro 1.4.0\r\nentrypoints 0.4\r\nexceptiongroup 1.1.0\r\nexecuting 1.2.0\r\nfastjsonschema 2.16.3\r\nfilelock 3.9.0\r\nflaky 3.7.0\r\nfqdn 1.5.1\r\nfrozenlist 1.3.3\r\nfsspec 2023.3.0\r\nhuggingface-hub 0.10.1\r\nidentify 2.5.18\r\nidna 3.4\r\niniconfig 2.0.0\r\nipykernel 6.12.1\r\nipyparallel 8.4.1\r\nipython 7.32.0\r\nipython-genutils 0.2.0\r\nipywidgets 8.0.4\r\nisoduration 20.11.0\r\njedi 0.18.2\r\nJinja2 3.1.2\r\njson5 0.9.11\r\njsonpointer 2.3\r\njsonschema 4.17.3\r\njupyter_client 8.0.3\r\njupyter_core 5.2.0\r\njupyter-events 0.6.3\r\njupyter_server 2.4.0\r\njupyter_server_fileid 0.8.0\r\njupyter_server_terminals 0.4.4\r\njupyter_server_ydoc 0.6.1\r\njupyter-ydoc 0.2.2\r\njupyterlab 3.6.1\r\njupyterlab-pygments 0.2.2\r\njupyterlab_server 2.20.0\r\njupyterlab-widgets 3.0.5\r\nlibmambapy 1.1.0\r\nmamba 1.1.0\r\nMarkupSafe 2.1.2\r\nmatplotlib-inline 0.1.6\r\nmistune 2.0.5\r\nmultidict 6.0.4\r\nmultiprocess 0.70.14\r\nnbclassic 0.5.3\r\nnbclient 0.7.2\r\nnbconvert 7.2.9\r\nnbformat 5.7.3\r\nnest-asyncio 1.5.6\r\nnodeenv 1.7.0\r\nnotebook 6.5.3\r\nnotebook_shim 0.2.2\r\nnumpy 1.24.2\r\noutcome 1.2.0\r\npackaging 23.0\r\npandas 1.5.3\r\npandocfilters 1.5.0\r\nparso 0.8.3\r\npexpect 4.8.0\r\npickleshare 0.7.5\r\npip 22.3.1\r\nplatformdirs 3.0.0\r\nplotly 5.13.1\r\npluggy 1.0.0\r\npre-commit 3.1.0\r\nprometheus-client 0.16.0\r\nprompt-toolkit 3.0.38\r\npsutil 5.9.4\r\nptyprocess 0.7.0\r\npure-eval 0.2.2\r\npyarrow 11.0.0\r\npycosat 0.6.4\r\npycparser 2.21\r\nPygments 2.14.0\r\npyOpenSSL 22.1.0\r\npyrsistent 0.19.3\r\nPySocks 1.7.1\r\npytest 7.2.1\r\npytest-asyncio 0.20.3\r\npytest-cov 4.0.0\r\npytest-timeout 2.1.0\r\npython-dateutil 2.8.2\r\npython-json-logger 2.0.7\r\npytz 2022.7.1\r\nPyYAML 6.0\r\npyzmq 25.0.0\r\nrequests 2.28.1\r\nresponses 0.18.0\r\nrfc3339-validator 0.1.4\r\nrfc3986-validator 0.1.1\r\nruamel-yaml-conda 0.15.80\r\nSend2Trash 1.8.0\r\nsetuptools 65.6.3\r\nsimplegeneric 0.8.1\r\nsix 1.16.0\r\nsniffio 1.3.0\r\nsortedcontainers 2.4.0\r\nsoupsieve 2.4\r\nstack-data 0.6.2\r\ntenacity 8.2.2\r\nterminado 0.17.1\r\ntinycss2 1.2.1\r\ntomli 2.0.1\r\ntoolz 0.12.0\r\ntornado 6.2\r\ntqdm 4.65.0\r\ntraitlets 5.8.1\r\ntrio 0.22.0\r\ntyping_extensions 4.5.0\r\nuri-template 1.2.0\r\nurllib3 1.26.13\r\nvirtualenv 20.19.0\r\nwcwidth 0.2.6\r\nwebcolors 1.12\r\nwebencodings 0.5.1\r\nwebsocket-client 1.5.1\r\nwheel 0.38.4\r\nwidgetsnbextension 4.0.5\r\nxxhash 3.2.0\r\ny-py 0.5.9\r\nyarl 1.8.2\r\nypy-websocket 0.8.2\r\nzstandard 0.19.0\r\n```\r\n\r\nAny idea why this is happening? 
I debugged this to know the tqdm.pbar value is not being updated properly and its not the kernel not sending the comm messages to the IProgress bar"
] | 2023-03-13T23:04:18 | 2023-10-11T16:30:16 | 2023-10-11T16:30:16 |
NONE
| null | null | null |
### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), which raised the same concern, but it's not clear whether that fix solves this issue too.
ipywidgets
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/110427462/224851138-13fee5b7-ab51-4883-b96f-1b9808782e3b.png">
tqdm
<img width="1251" alt="Screen Shot 2023-03-13 at 3 58 59 PM" src="https://user-images.githubusercontent.com/110427462/224851180-5feb7825-9250-4b1e-ad0c-f3172ac1eb78.png">
### Steps to reproduce the bug
1. Run this line
```
from datasets import load_dataset
rotten_tomatoes = load_dataset("rotten_tomatoes", split="train")
```
### Expected behavior
all progress bars for builder script, metadata, readme, training, validation, and test set
### Environment info
requirements.txt
```
aiofiles==22.1.0
aiohttp==3.8.4
aiosignal==1.3.1
aiosqlite==0.18.0
anyio==3.6.2
appnope==0.1.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-generator==1.10
async-timeout==4.0.2
attrs==22.2.0
Babel==2.12.1
backcall==0.2.0
beautifulsoup4==4.11.2
bleach==6.0.0
brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1666764961872/work
certifi==2022.12.7
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1671179414629/work
cfgv==3.3.1
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work
comm==0.1.2
conda==22.9.0
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1669907009957/work
conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1669733752472/work
coverage==7.2.1
cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1669592251328/work
datasets==2.1.0
debugpy==1.6.6
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.6
distlib==0.3.6
distro==1.4.0
entrypoints==0.4
exceptiongroup==1.1.0
executing==1.2.0
fastjsonschema==2.16.3
filelock==3.9.0
flaky==3.7.0
fqdn==1.5.1
frozenlist==1.3.3
fsspec==2023.3.0
huggingface-hub==0.10.1
identify==2.5.18
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work
iniconfig==2.0.0
ipykernel==6.12.1
ipyparallel==8.4.1
ipython==7.32.0
ipython-genutils==0.2.0
ipywidgets==8.0.4
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
json5==0.9.11
jsonpointer==2.3
jsonschema==4.17.3
jupyter-events==0.6.3
jupyter-ydoc==0.2.2
jupyter_client==8.0.3
jupyter_core==5.2.0
jupyter_server==2.4.0
jupyter_server_fileid==0.8.0
jupyter_server_terminals==0.4.4
jupyter_server_ydoc==0.6.1
jupyterlab==3.6.1
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.5
jupyterlab_server==2.20.0
libmambapy @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/libmambapy
mamba @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/mamba
MarkupSafe==2.1.2
matplotlib-inline==0.1.6
mistune==2.0.5
multidict==6.0.4
multiprocess==0.70.14
nbclassic==0.5.3
nbclient==0.7.2
nbconvert==7.2.9
nbformat==5.7.3
nest-asyncio==1.5.6
nodeenv==1.7.0
notebook==6.5.3
notebook_shim==0.2.2
numpy==1.24.2
outcome==1.2.0
packaging==23.0
pandas==1.5.3
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
platformdirs==3.0.0
plotly==5.13.1
pluggy==1.0.0
pre-commit==3.1.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==11.0.0
pycosat @ file:///Users/runner/miniforge3/conda-bld/pycosat_1666836580084/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
Pygments==2.14.0
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1665350324128/work
pyrsistent==0.19.3
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
pytest==7.2.1
pytest-asyncio==0.20.3
pytest-cov==4.0.0
pytest-timeout==2.1.0
python-dateutil==2.8.2
python-json-logger==2.0.7
pytz==2022.7.1
PyYAML==6.0
pyzmq==25.0.0
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1661872987712/work
responses==0.18.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
ruamel-yaml-conda @ file:///Users/runner/miniforge3/conda-bld/ruamel_yaml_1666819760545/work
Send2Trash==1.8.0
simplegeneric==0.8.1
six==1.16.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.4
stack-data==0.6.2
tenacity==8.2.2
terminado==0.17.1
tinycss2==1.2.1
tomli==2.0.1
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
tornado==6.2
tqdm==4.64.1
traitlets==5.8.1
trio==0.22.0
typing_extensions==4.5.0
uri-template==1.2.0
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1669259737463/work
virtualenv==20.19.0
wcwidth==0.2.6
webcolors==1.12
webencodings==0.5.1
websocket-client==1.5.1
widgetsnbextension==4.0.5
xxhash==3.2.0
y-py==0.5.9
yarl==1.8.2
ypy-websocket==0.8.2
zstandard==0.19.0
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5634/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5633
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5633/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5633/events
|
https://github.com/huggingface/datasets/issues/5633
| 1,621,469,970 |
I_kwDODunzps5gpasS
| 5,633 |
Cannot import datasets
|
{
"login": "eerio",
"id": 11250555,
"node_id": "MDQ6VXNlcjExMjUwNTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/11250555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eerio",
"html_url": "https://github.com/eerio",
"followers_url": "https://api.github.com/users/eerio/followers",
"following_url": "https://api.github.com/users/eerio/following{/other_user}",
"gists_url": "https://api.github.com/users/eerio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eerio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eerio/subscriptions",
"organizations_url": "https://api.github.com/users/eerio/orgs",
"repos_url": "https://api.github.com/users/eerio/repos",
"events_url": "https://api.github.com/users/eerio/events{/privacy}",
"received_events_url": "https://api.github.com/users/eerio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem."
] | 2023-03-13T13:14:44 | 2023-03-13T17:54:19 | 2023-03-13T17:54:19 |
NONE
| null | null | null |
### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Please see 'steps to reproduce the bug' for the specific error, as the steps to reproduce are just importing the library.
### Steps to reproduce the bug
```
$ python3
Python 3.8.15 (default, Nov 24 2022, 15:19:38)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 59, in <module>
from .arrow_reader import ArrowReader
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_reader.py", line 27, in <module>
import pyarrow.parquet as pq
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/core.py", line 37, in <module>
from pyarrow._parquet import (ParquetReader, Statistics, # noqa
ImportError: cannot import name 'FileEncryptionProperties' from 'pyarrow._parquet' (/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/_parquet.cpython-38-x86_64-linux-gnu.so)
```
### Expected behavior
I would expect for the statement `import datasets` to cause no error
### Environment info
Output of `conda list`:
```
# packages in environment at /home/jack/.conda/envs/pbalawender_zpp:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
abseil-cpp 20210324.2 h2531618_0
advertools 0.13.2 pypi_0 pypi
aiofiles 0.8.0 pypi_0 pypi
aiohttp 3.8.3 py38h5eee18b_0
aiosignal 1.2.0 pyhd3eb1b0_0
aiosqlite 0.17.0 pypi_0 pypi
anyio 3.6.2 pypi_0 pypi
aquirdturtle-collapsible-headings 3.1.0 pypi_0 pypi
argon2-cffi 21.3.0 pypi_0 pypi
argon2-cffi-bindings 21.2.0 pypi_0 pypi
arrow 1.2.3 pypi_0 pypi
arrow-cpp 3.0.0 py38h6b21186_4
asttokens 2.2.0 pypi_0 pypi
async-timeout 4.0.2 py38h06a4308_0
attrs 22.1.0 py38h06a4308_0
automat 22.10.0 pypi_0 pypi
aws-c-common 0.4.57 he6710b0_1
aws-c-event-stream 0.1.6 h2531618_5
aws-checksums 0.1.9 he6710b0_0
aws-sdk-cpp 1.8.185 hce553d0_0
babel 2.11.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
beautifulsoup4 4.11.1 pypi_0 pypi
blas 1.0 mkl
bleach 5.0.1 pypi_0 pypi
boost-cpp 1.73.0 h27cfd23_11
bottleneck 1.3.5 py38h7deecbd_0
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotlipy 0.7.0 py38h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
c-ares 1.18.1 h7f8727e_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.9.24 pypi_0 pypi
cffi 1.15.1 py38h5eee18b_3
charset-normalizer 2.1.1 pypi_0 pypi
click 8.1.3 pypi_0 pypi
constantly 15.1.0 pypi_0 pypi
contourpy 1.0.6 pypi_0 pypi
cryptography 38.0.4 pypi_0 pypi
cssselect 1.2.0 pypi_0 pypi
cudatoolkit 10.1.243 h8cb64d8_10 conda-forge
cycler 0.11.0 pypi_0 pypi
dacite 1.6.0 pypi_0 pypi
dataclasses 0.8 pyh6d0b6a4_7
datasets 1.18.4 py_0 huggingface
datetime 4.7 pypi_0 pypi
debugpy 1.6.4 pypi_0 pypi
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pypi_0 pypi
dill 0.3.6 py38h06a4308_0
docker-pycreds 0.4.0 pypi_0 pypi
double-conversion 3.1.5 he6710b0_1
entrypoints 0.4 py38h06a4308_0
executing 0.8.3 pyhd3eb1b0_0
filelock 3.8.0 pypi_0 pypi
flake8 6.0.0 pypi_0 pypi
flask 2.1.3 py38h06a4308_0
flit-core 3.6.0 pyhd3eb1b0_0
fonttools 4.38.0 pypi_0 pypi
fqdn 1.5.1 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.3.3 py38h5eee18b_0
fsspec 2022.11.0 py38h06a4308_0
gensim 4.2.0 pypi_0 pypi
gflags 2.2.2 he6710b0_0
giflib 5.2.1 h5eee18b_3
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.30 pypi_0 pypi
glog 0.5.0 h2531618_0
grpc-cpp 1.39.0 hae934f6_5
huggingface-hub 0.11.1 pypi_0 pypi
huggingface_hub 0.13.1 py_0 huggingface
hyperlink 21.0.0 pypi_0 pypi
icu 58.2 he6710b0_3
idna 3.4 py38h06a4308_0
importlib-metadata 5.1.0 pypi_0 pypi
importlib_metadata 4.11.3 hd3eb1b0_0
importlib_resources 5.2.0 pyhd3eb1b0_1
incremental 22.10.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.17.1 pyh210e3f2_0 conda-forge
ipython 8.7.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge
isoduration 20.11.0 pypi_0 pypi
itemadapter 0.7.0 pypi_0 pypi
itemloaders 1.0.6 pypi_0 pypi
itsdangerous 2.0.1 pyhd3eb1b0_0
jedi 0.18.2 pypi_0 pypi
jinja2 3.1.2 py38h06a4308_0
jmespath 1.0.1 pypi_0 pypi
joblib 1.2.0 pypi_0 pypi
jpeg 9b h024ee3a_2
json5 0.9.10 pypi_0 pypi
jsonpickle 3.0.0 pypi_0 pypi
jsonpointer 2.3 pypi_0 pypi
jsonschema 4.17.3 py38h06a4308_0
jupyter-core 5.1.0 pypi_0 pypi
jupyter-events 0.5.0 pypi_0 pypi
jupyter-server 1.23.3 pypi_0 pypi
jupyter-server-fileid 0.6.0 pypi_0 pypi
jupyter-server-ydoc 0.4.0 pypi_0 pypi
jupyter-ydoc 0.2.2 pypi_0 pypi
jupyter_client 7.4.9 py38h06a4308_0
jupyter_core 5.2.0 py38h06a4308_0
jupyterlab 3.6.0a4 pypi_0 pypi
jupyterlab-pygments 0.2.2 pypi_0 pypi
jupyterlab-server 2.16.3 pypi_0 pypi
jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 pypi_0 pypi
krb5 1.19.4 h568e23c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
libboost 1.73.0 h3ff78a5_11
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libcurl 7.88.1 h91b91d3_0
libedit 3.1.20221030 h5eee18b_0
libev 4.33 h7f8727e_1
libevent 2.1.12 h8f2d780_0
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libnghttp2 1.46.0 hce63b2e_0
libpng 1.6.39 h5eee18b_0
libprotobuf 3.17.2 h4ff587b_1
libsodium 1.0.18 h7b6447c_0
libssh2 1.10.0 h8f2d780_0
libstdcxx-ng 11.2.0 h1234567_1
libthrift 0.14.2 hcc01f38_0
libtiff 4.1.0 h2733197_1
libuv 1.44.2 h5eee18b_0
libwebp 1.2.0 h89dd481_0
lz4-c 1.9.4 h6a678d5_0
markupsafe 2.1.1 py38h7f8727e_0
matplotlib 3.6.2 pypi_0 pypi
matplotlib-inline 0.1.6 py38h06a4308_0
mccabe 0.7.0 pypi_0 pypi
mistune 2.0.4 pypi_0 pypi
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
morfeusz2 1.99.6 pypi_0 pypi
multidict 6.0.2 py38h5eee18b_0
multiprocess 0.70.14 py38h06a4308_0
nbclassic 0.4.8 pypi_0 pypi
nbclient 0.7.2 pypi_0 pypi
nbconvert 7.2.5 pypi_0 pypi
nbformat 5.7.0 py38h06a4308_0
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py38h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
notebook 6.5.2 pypi_0 pypi
notebook-shim 0.2.2 pypi_0 pypi
numexpr 2.8.4 py38he184ba9_0
numpy 1.23.5 py38h14f4228_0
numpy-base 1.23.5 py38h31eccc5_0
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openssl 1.1.1t h7f8727e_0
orc 1.6.9 ha97a36c_3
packaging 22.0 py38h06a4308_0
pandas 1.5.2 pypi_0 pypi
pandocfilters 1.5.0 pypi_0 pypi
parsel 1.7.0 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathlib 1.0.1 pypi_0 pypi
pathtools 0.1.2 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.3.0 pypi_0 pypi
pip 22.2.2 py38h06a4308_0
pkgutil-resolve-name 1.3.10 py38h06a4308_0
platformdirs 2.5.4 pypi_0 pypi
prometheus-client 0.15.0 pypi_0 pypi
promise 2.3 pypi_0 pypi
prompt-toolkit 3.0.33 pypi_0 pypi
protego 0.2.1 pypi_0 pypi
protobuf 4.21.12 pypi_0 pypi
psutil 5.9.0 py38h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 10.0.1 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycodestyle 2.10.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pydispatcher 2.0.6 pypi_0 pypi
pyflakes 3.0.1 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pyopenssl 22.1.0 pypi_0 pypi
pyrsistent 0.18.0 py38heee7806_0
pysocks 1.7.1 py38h06a4308_0
python 3.8.15 h7a1cb2a_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python-dotenv 0.21.0 pypi_0 pypi
python-fastjsonschema 2.16.2 py38h06a4308_0
python-json-logger 2.0.4 pypi_0 pypi
python-xxhash 2.0.2 py38h5eee18b_1
pytorch 1.7.1 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
pytz 2022.6 pypi_0 pypi
pyyaml 6.0 py38h5eee18b_1
pyzmq 23.2.0 py38h6a678d5_0
queuelib 1.6.2 pypi_0 pypi
re2 2022.04.01 h295c915_0
readline 8.2 h5eee18b_0
regex 2022.10.31 pypi_0 pypi
requests 2.28.1 py38h06a4308_0
requests-file 1.5.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rfc3339-validator 0.1.4 pypi_0 pypi
rfc3986-validator 0.1.1 pypi_0 pypi
scikit-learn 1.1.3 pypi_0 pypi
scipy 1.9.3 pypi_0 pypi
scrapy 2.7.1 pypi_0 pypi
seaborn 0.12.1 pypi_0 pypi
send2trash 1.8.0 pypi_0 pypi
sentry-sdk 1.12.1 pypi_0 pypi
service-identity 21.1.0 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 65.6.3 pypi_0 pypi
shortuuid 1.0.11 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
smart-open 6.2.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
snappy 1.1.9 h295c915_0
sniffio 1.3.0 pypi_0 pypi
soupsieve 2.3.2.post1 pypi_0 pypi
sqlite 3.40.1 h5082296_0
stack-data 0.6.2 pypi_0 pypi
stack_data 0.2.0 pyhd3eb1b0_0
terminado 0.17.0 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tinycss2 1.2.1 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tldextract 3.4.0 pypi_0 pypi
tokenizers 0.13.2 pypi_0 pypi
tomli 2.0.1 pypi_0 pypi
torchvision 0.8.2 py38_cu101 pytorch
tornado 6.2 py38h5eee18b_0
tqdm 4.64.1 py38h06a4308_0
traitlets 5.6.0 pypi_0 pypi
transformers 4.25.1 pypi_0 pypi
tweepy 4.12.1 pypi_0 pypi
twisted 22.10.0 pypi_0 pypi
twython 3.9.1 pypi_0 pypi
typing-extensions 4.4.0 py38h06a4308_0
typing_extensions 4.4.0 py38h06a4308_0
uri-template 1.2.0 pypi_0 pypi
uriparser 0.9.3 he6710b0_1
urllib3 1.26.13 pypi_0 pypi
utf8proc 2.6.1 h27cfd23_0
w3lib 2.1.0 pypi_0 pypi
wandb 0.13.7 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
webcolors 1.12 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
websocket-client 1.4.2 pypi_0 pypi
werkzeug 2.2.2 py38h06a4308_0
wheel 0.38.4 py38h06a4308_0
widgetsnbextension 4.0.3 py38h06a4308_0
xxhash 0.8.0 h7f8727e_3
xz 5.2.10 h5eee18b_1
y-py 0.5.4 pypi_0 pypi
yaml 0.2.5 h7b6447c_0
yarl 1.8.1 py38h5eee18b_0
ypy-websocket 0.5.0 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.11.0 py38h06a4308_0
zlib 1.2.13 h5eee18b_0
zope-interface 5.5.2 pypi_0 pypi
zstd 1.4.9 haebb681_0
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5633/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5632
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5632/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5632/events
|
https://github.com/huggingface/datasets/issues/5632
| 1,621,177,391 |
I_kwDODunzps5goTQv
| 5,632 |
Dataset cannot convert too large dictionary
|
{
"login": "MaraLac",
"id": 108518627,
"node_id": "U_kgDOBnfc4w",
"avatar_url": "https://avatars.githubusercontent.com/u/108518627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaraLac",
"html_url": "https://github.com/MaraLac",
"followers_url": "https://api.github.com/users/MaraLac/followers",
"following_url": "https://api.github.com/users/MaraLac/following{/other_user}",
"gists_url": "https://api.github.com/users/MaraLac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaraLac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaraLac/subscriptions",
"organizations_url": "https://api.github.com/users/MaraLac/orgs",
"repos_url": "https://api.github.com/users/MaraLac/repos",
"events_url": "https://api.github.com/users/MaraLac/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaraLac/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Answered on the forum:\r\n\r\n> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww · Pull Request #4800 · huggingface/datasets · GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up with a cleaner API for large lists. I hope to find some time to address this before Datasets 3.0."
] | 2023-03-13T10:14:40 | 2023-03-16T15:28:57 | null |
NONE
| null | null | null |
### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB) and it seems that `datasets` cannot handle this.
Indeed, I can create the dataset up to a certain dictionary size, beyond which I get the error "OverflowError: Python int too large to convert to C long".
Do you know how to solve this problem?
Unfortunately I cannot give reproducible code because I cannot share such a large file, but you can find the code below (it's a test on only part of the validation data, ~10 GB, and the error already occurs there).
Thank you!
### Steps to reproduce the bug
SAVE_DIR = './data/'
features = h5py.File(SAVE_DIR+'features.hdf5','r')
valid_data = features["validation"]["data/features"]
v_array_values = [np.float32(item[()]) for item in valid_data.values()]
for i in range(len(v_array_values)):
    v_array_values[i] = v_array_values[i].round(decimals=5)
dict_valid = datasets.Dataset.from_dict({'input_values': v_array_values})
### Expected behavior
The code is expected to give me a Huggingface dataset.
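A hedged alternative sketch (requires `datasets`>=2.4 for `Dataset.from_generator`; the HDF5 layout is taken from the snippet above): stream the examples out of the file instead of materializing one huge dictionary in memory.
```python
import h5py
import numpy as np
import datasets

SAVE_DIR = "./data/"

def gen():
    with h5py.File(SAVE_DIR + "features.hdf5", "r") as features:
        valid_data = features["validation"]["data/features"]
        for item in valid_data.values():
            yield {"input_values": np.float32(item[()]).round(decimals=5)}

dict_valid = datasets.Dataset.from_generator(gen)
```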
### Environment info
python: 3.8.15
numpy: 1.22.3
datasets: 2.3.2
pyarrow: 8.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5632/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5631
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5631/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5631/events
|
https://github.com/huggingface/datasets/issues/5631
| 1,620,442,854 |
I_kwDODunzps5glf7m
| 5,631 |
Custom split names
|
{
"login": "ErfanMoosaviMonazzah",
"id": 79091831,
"node_id": "MDQ6VXNlcjc5MDkxODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErfanMoosaviMonazzah",
"html_url": "https://github.com/ErfanMoosaviMonazzah",
"followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers",
"following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}",
"gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions",
"organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs",
"repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos",
"events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[
"Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. "
] | 2023-03-12T17:21:43 | 2023-03-24T14:13:00 | 2023-03-24T14:13:00 |
NONE
| null | null | null |
### Feature request
Hi,
I have participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the Hub. (Currently I can have more splits when I am loading datasets from URLs, but not from the Hub.)
### Motivation
Easier access to more splits
### Your contribution
No
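A minimal sketch of what already works (the file names are hypothetical): split names passed via `data_files` are not restricted to train/validation/test, and they are preserved when pushing to the Hub.
```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={
        "train": "train.csv",                  # hypothetical local files
        "validation_seen": "valid_seen.csv",
        "validation_unseen": "valid_unseen.csv",
        "test_hard": "test_hard.csv",
    },
)
# ds.push_to_hub("username/my-dataset")  # the custom split names are kept
```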
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5631/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5629
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5629/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5629/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5629/events
|
https://github.com/huggingface/datasets/issues/5629
| 1,619,921,247 |
I_kwDODunzps5gjglf
| 5,629 |
load_dataset gives "403" error when using Financial phrasebank
|
{
"login": "Jimchoo91",
"id": 67709789,
"node_id": "MDQ6VXNlcjY3NzA5Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/67709789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jimchoo91",
"html_url": "https://github.com/Jimchoo91",
"followers_url": "https://api.github.com/users/Jimchoo91/followers",
"following_url": "https://api.github.com/users/Jimchoo91/following{/other_user}",
"gists_url": "https://api.github.com/users/Jimchoo91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jimchoo91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jimchoo91/subscriptions",
"organizations_url": "https://api.github.com/users/Jimchoo91/orgs",
"repos_url": "https://api.github.com/users/Jimchoo91/repos",
"events_url": "https://api.github.com/users/Jimchoo91/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jimchoo91/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi! You seem to be using an outdated version of `datasets` that downloads the older script version. To avoid the error, you can either pass `revision=\"main\"` to `load_dataset` (this can fail if a script uses newer features of the lib) or update your installation with `pip install -U datasets` (better solution)."
] | 2023-03-11T07:46:39 | 2023-03-13T18:27:26 | null |
NONE
| null | null | null |
When I try to load this dataset, I receive the following error:
`ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)`
Has this been seen before? Thanks. The website loads when I try to access it manually.
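Per the comment above, the 403 comes from an outdated copy of the loading script; a sketch of the suggested fixes (the config name below is one of the dataset's standard configs, not something from this report):
```python
from datasets import load_dataset

# either upgrade first with `pip install -U datasets`, or pin the script revision:
ds = load_dataset("financial_phrasebank", "sentences_allagree", revision="main")
```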
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5629/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5627
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5627/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5627/events
|
https://github.com/huggingface/datasets/issues/5627
| 1,619,336,609 |
I_kwDODunzps5ghR2h
| 5,627 |
Unable to load AutoTrain-generated dataset from the hub
|
{
"login": "ijmiller2",
"id": 8560151,
"node_id": "MDQ6VXNlcjg1NjAxNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8560151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ijmiller2",
"html_url": "https://github.com/ijmiller2",
"followers_url": "https://api.github.com/users/ijmiller2/followers",
"following_url": "https://api.github.com/users/ijmiller2/following{/other_user}",
"gists_url": "https://api.github.com/users/ijmiller2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ijmiller2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ijmiller2/subscriptions",
"organizations_url": "https://api.github.com/users/ijmiller2/orgs",
"repos_url": "https://api.github.com/users/ijmiller2/repos",
"events_url": "https://api.github.com/users/ijmiller2/events{/privacy}",
"received_events_url": "https://api.github.com/users/ijmiller2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder",
"Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I can’t find any way of\npulling out file indices or file names from the autogenerated split. The\nfile names field of the image dataset (loaded directly from arrow file) is\nmissing, just fyi (for anyone else this might be relevant too).\n\nOn Fri, Mar 10, 2023 at 7:02 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The AutoTrain format is not supported right now. I think it would require\n> a dedicated dataset builder\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5627#issuecomment-1464734308>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACBJ4F5A353MCZ76OGRJ6CTW3PFI7ANCNFSM6AAAAAAVWXNUTE>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] | 2023-03-10T17:25:58 | 2023-03-11T15:44:42 | null |
NONE
| null | null | null |
### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
```
### Steps to reproduce the bug
Steps to reproduce:
1. `pip install datasets==2.10.1`
2. Attempt to load (private dataset). Note that I'm authenticated via `huggingface-cli login`
```
from datasets import load_dataset
# load dataset
dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
dataset = load_dataset(dataset)
```
Here's the full traceback:
```
Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2383.80it/s]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 505.95it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 writer = writer_class(
1869 features=writer._features,
1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1871 storage_options=self._fs.storage_options,
1872 embed_local_files=embed_local_files,
1873 )
-> 1874 writer.write_table(table)
1875 num_examples_progress_update += len(table)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
567 pa_table = pa_table.combine_chunks()
--> 568 pa_table = table_cast(pa_table, self._schema)
569 if self.embed_local_files:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema)
2311 if table.schema != schema:
-> 2312 return cast_table_to_schema(table, schema)
2313 elif table.schema.metadata != schema.metadata:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in cast_table_to_schema(table, schema)
2269 if sorted(table.column_names) != sorted(features):
-> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Input In [8], in <cell line: 6>()
4 # load dataset
5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
----> 6 dataset = load_dataset(dataset)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1781 # Download and prepare data
-> 1782 builder_instance.download_and_prepare(
1783 download_config=download_config,
1784 download_mode=download_mode,
1785 verification_mode=verification_mode,
1786 try_from_hf_gcs=try_from_hf_gcs,
1787 num_proc=num_proc,
1788 )
1790 # Build dataset for splits
1791 keep_in_memory = (
1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1793 )
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
870 if num_proc is not None:
871 prepare_split_kwargs["num_proc"] = num_proc
--> 872 self._download_and_prepare(
873 dl_manager=dl_manager,
874 verification_mode=verification_mode,
875 **prepare_split_kwargs,
876 **download_and_prepare_kwargs,
877 )
878 # Sync info
879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
963 split_dict.add(split_generator.split_info)
965 try:
966 # Prepare split will record examples associated to the split
--> 967 self._prepare_split(split_generator, **prepare_split_kwargs)
968 except OSError as e:
969 raise OSError(
970 "Cannot find data file. "
971 + (self.manual_download_instructions or "")
972 + "\nOriginal error:\n"
973 + str(e)
974 ) from None
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1747 job_id = 0
1748 with pbar:
-> 1749 for job_id, done, content in self._prepare_split_single(
1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1751 ):
1752 if done:
1753 result = content
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub.
I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub):
```python
dataset = load_dataset(
"lhoestq/custom_squad",
revision="main" # tag name, or branch name, or commit hash
)
```
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5627/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5625
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5625/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5625/events
|
https://github.com/huggingface/datasets/issues/5625
| 1,618,971,855 |
I_kwDODunzps5gf4zP
| 5,625 |
Allow "jsonl" data type signifier
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"You can use \"json\" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. \"text\", \"imagefolder\", etc. I don't think the example in `transformers` is correct because of that",
"Yes, I understand the reasoning but this issue is to propose that the example in transformers (while incorrect) \"makes sense\" in terms of user expectation. So the question is whether it would be possible to add \"aliases\" for common types (like \"json\" and \"text\") based on common extensions (like jsonl and txt)?"
] | 2023-03-10T13:21:48 | 2023-03-11T10:35:39 | null |
CONTRIBUTOR
| null | null | null |
### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset script at jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
The reason is that the script has these lines to extract the data type from the file extension. Therefore, the derived type is `jsonl`, which is not recognized by `datasets`, as the error above shows.
https://github.com/huggingface/transformers/blob/ade26bf9912f69e2110137443e4406d7dbe253e7/examples/pytorch/translation/run_translation.py#L342-L356
I suppose you could argue that this is the script's fault (in which case I'll do a PR over at `transformers`) but it makes sense to me to add `jsonl` as an alias to `json` in `datasets`.
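For reference (as the comment above points out), the call that works today uses the `json` builder name even for `.jsonl` files; a minimal sketch with a hypothetical file name:
```python
from datasets import load_dataset

# "json" is the builder name; the .jsonl extension itself is not a valid builder name
ds = load_dataset("json", data_files={"train": "train.jsonl"})
```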
### Your contribution
At the moment I cannot work on this. I think it can be as "easy" as having an alias for json, namely jsonl.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5625/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5624/events
|
https://github.com/huggingface/datasets/issues/5624
| 1,617,400,192 |
I_kwDODunzps5gZ5GA
| 5,624 |
glue datasets returning -1 for test split
|
{
"login": "lithafnium",
"id": 8939967,
"node_id": "MDQ6VXNlcjg5Mzk5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lithafnium",
"html_url": "https://github.com/lithafnium",
"followers_url": "https://api.github.com/users/lithafnium/followers",
"following_url": "https://api.github.com/users/lithafnium/following{/other_user}",
"gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions",
"organizations_url": "https://api.github.com/users/lithafnium/orgs",
"repos_url": "https://api.github.com/users/lithafnium/repos",
"events_url": "https://api.github.com/users/lithafnium/events{/privacy}",
"received_events_url": "https://api.github.com/users/lithafnium/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answered: https://huggingface.co/datasets/glue/discussions/5#63907885937867f0cb3cde31\r\n> The test labels are not public.\r\n>\r\n> Note this dataset belongs to a benchmark: people send their predictions for the test split to GLUE (https://gluebenchmark.com/) and then they get a score in their leaderboard...\r\n"
] | 2023-03-09T14:47:18 | 2023-03-09T16:49:29 | 2023-03-09T16:49:29 |
NONE
| null | null | null |
### Describe the bug
Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
print(d["label"]
```
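As the maintainer comment above explains, the test labels are withheld on purpose for the GLUE leaderboard. A quick sanity check (a sketch, not part of the original report) showing that only the test split is affected:
```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
print(dataset["train"][0]["label"])       # 0 or 1
print(dataset["validation"][0]["label"])  # 0 or 1
print(dataset["test"][0]["label"])        # -1, labels are not public
```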
### Expected behavior
Expected behavior should be 0/1 instead of -1.
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5618/events
|
https://github.com/huggingface/datasets/issues/5618
| 1,612,977,934 |
I_kwDODunzps5gJBcO
| 5,618 |
Unpin fsspec < 2023.3.0 once issue fixed
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2023-03-07T08:41:51 | 2023-03-07T13:39:03 | 2023-03-07T13:39:03 |
MEMBER
| null | null | null |
Unpin `fsspec` upper version once root cause of our CI break is fixed.
See:
- #5614
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5618/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5616
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5616/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5616/events
|
https://github.com/huggingface/datasets/issues/5616
| 1,612,932,508 |
I_kwDODunzps5gI2Wc
| 5,616 |
CI is broken after fsspec-2023.3.0 release
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[] | 2023-03-07T08:06:39 | 2023-03-07T08:37:29 | 2023-03-07T08:37:29 |
MEMBER
| null | null | null |
As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677.1887748, 'islink': False, 'mode': 33188, 'uid': 1001, 'gid': 123, 'mtime': 1678175677.1887748, 'ino': 286957, 'nlink': 1} != 'file.txt'
Full diff:
[
- 'file.txt',
+ {'created': 1678175677.1887748,
+ 'gid': 123,
+ 'ino': 286957,
+ 'islink': False,
+ 'mode': 33188,
+ 'mtime': 1678175677.1887748,
+ 'name': 'file.txt',
+ 'nlink': 1,
+ 'size': 70,
+ 'type': 'file',
+ 'uid': 1001},
]
```
Also:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[GzipFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[Lz4FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[XzFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
FAILED tests/test_filesystem.py::test_compression_filesystems[ZstdFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
===== 5 failed, 2134 passed, 18 skipped, 38 warnings in 157.21s (0:02:37) ======
```
See:
- fsspec/filesystem_spec#1205
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5616/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5615/events
|
https://github.com/huggingface/datasets/issues/5615
| 1,612,552,653 |
I_kwDODunzps5gHZnN
| 5,615 |
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
|
{
"login": "zsaladin",
"id": 6466389,
"node_id": "MDQ6VXNlcjY0NjYzODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsaladin",
"html_url": "https://github.com/zsaladin",
"followers_url": "https://api.github.com/users/zsaladin/followers",
"following_url": "https://api.github.com/users/zsaladin/following{/other_user}",
"gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions",
"organizations_url": "https://api.github.com/users/zsaladin/orgs",
"repos_url": "https://api.github.com/users/zsaladin/repos",
"events_url": "https://api.github.com/users/zsaladin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsaladin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] |
closed
| false | null |
[] | null |
[
"Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this."
] | 2023-03-07T01:52:00 | 2023-03-09T15:24:05 | 2023-03-09T15:23:54 |
NONE
| null | null | null |
### Describe the bug
`IterableDataset.add_column` raises an exception when another `IterableDataset` is passed as a parameter.
The method seems to accept only eagerly evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
I wrote the code below to work around it.
```py
def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset:
iter_add_dataset = iter(add_dataset)
def add_column_fn(example):
if name in example:
raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
return {name: next(iter_add_dataset)[key]}
return dataset.map(add_column_fn)
```
Is there another way to do it? Or is this behavior intended?
### Steps to reproduce the bug
The code below raises `NotImplementedError`:
```py
from datasets import IterableDataset
def gen(num):
yield {f"col{num}": 1}
yield {f"col{num}": 2}
yield {f"col{num}": 3}
ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1})
ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2})
new_ids = ids1.add_column("new_col", ids2)  # raises NotImplementedError
for row in new_ids:
print(row)
```
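The maintainer comment above points to column-wise concatenation as the intended way to do this; a minimal sketch (not part of the original report) reusing the generators above:
```py
from datasets import IterableDataset, concatenate_datasets

def gen(num):
    yield {f"col{num}": 1}
    yield {f"col{num}": 2}
    yield {f"col{num}": 3}

ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1})
ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2})

# axis=1 concatenates columns and keeps everything lazily evaluated
new_ids = concatenate_datasets([ids1, ids2], axis=1)
for row in new_ids:
    print(row)  # {'col1': ..., 'col2': ...}
```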
### Expected behavior
`IterableDataset.add_column` should be able to take an `IterableDataset` and other lazily evaluated values as a parameter, since `IterableDataset` is lazily evaluated.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5613
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5613/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5613/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5613/events
|
https://github.com/huggingface/datasets/issues/5613
| 1,611,875,473 |
I_kwDODunzps5gE0SR
| 5,613 |
Version mismatch with multiprocess and dill on Python 3.10
|
{
"login": "adampauls",
"id": 1243668,
"node_id": "MDQ6VXNlcjEyNDM2Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adampauls",
"html_url": "https://github.com/adampauls",
"followers_url": "https://api.github.com/users/adampauls/followers",
"following_url": "https://api.github.com/users/adampauls/following{/other_user}",
"gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adampauls/subscriptions",
"organizations_url": "https://api.github.com/users/adampauls/orgs",
"repos_url": "https://api.github.com/users/adampauls/repos",
"events_url": "https://api.github.com/users/adampauls/events{/privacy}",
"received_events_url": "https://api.github.com/users/adampauls/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Sorry, I just found https://github.com/apache/beam/issues/24458. It seems this issue is being worked on. ",
"Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says \r\n> Datasets is tested on Python 3.7+.\r\n\r\nbut it should probably say that Beam Datasets do not work with Python 3.10 (or link to a known issues page). ",
"Same problem on Colab using a vanilla setup running :\r\nPython 3.10.11 \r\napache-beam 2.47.0\r\ndatasets 2.12.0",
"Same problem, \r\npy 3.10.11\r\napache-beam==2.47.0\r\ndatasets==2.12.0",
"I have made a workaround by forcing an install of the version of `multiprocess` version `0.70.15` (after installing `datasets` and `apache-beam`). I can confirm that (on Python 3.10 in [this colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing)) `datasets` can download pre-processed Wikipedia dumps and can download non-pre-processed dumps using `beam_runner=\"DirectRunner\"`. I don't know if/how other `beam_runner`s can be made compatible."
] | 2023-03-06T17:14:41 | 2023-09-01T18:30:08 | null |
NONE
| null | null | null |
### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/download/download_manager.py", line 35, in <module>
from ..utils.py_utils import NestedDataStructure, map_nested, size_str
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 40, in <module>
import multiprocess.pool
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 609, in <module>
class ThreadPool(Pool):
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/pool.py", line 611, in ThreadPool
from .dummy import Process
File "/Users/adpauls/Library/Caches/pypoetry/virtualenvs/yyy-oPbZ7mKM-py3.10/lib/python3.10/site-packages/multiprocess/dummy/__init__.py", line 87, in <module>
class Condition(threading._Condition):
AttributeError: module 'threading' has no attribute '_Condition'. Did you mean: 'Condition'?
```
I think this is a bad interaction of versions from `dill`, `multiprocess`, `apache-beam`, and `threading` from the Python (3.10) standard lib. Upgrading `multiprocess` to a version that does not crash like this is not possible because `apache-beam` pins `dill` to an old version:
```
Because multiprocess (0.70.10) depends on dill (>=0.3.2)
and apache-beam (2.45.0) depends on dill (>=0.3.1.1,<0.3.2), multiprocess (0.70.10) is incompatible with apache-beam (2.45.0).
And because no versions of apache-beam match >2.45.0,<3.0.0, multiprocess (0.70.10) is incompatible with apache-beam (>=2.45.0,<3.0.0).
So, because yyy depends on both apache-beam (^2.45.0) and multiprocess (0.70.10), version solving failed.
```
Perhaps it is not right to file a bug here, but I'm not totally sure whose fault it is. And in any case, this is an immediate blocker to using `datasets` out of the box.
Possibly related to https://github.com/huggingface/datasets/issues/5232.
### Steps to reproduce the bug
Steps to reproduce:
1. Make a poetry project with this configuration
```
[tool.poetry]
name = "yyy"
version = "0.1.0"
description = ""
authors = ["Adam Pauls <[email protected]>"]
readme = "README.md"
packages = [{ include = "xxx" }]
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
datasets = "^2.10.1"
apache-beam = "^2.45.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
2. `poetry install`.
3. `poetry run python -c "import datasets"`.
### Expected behavior
Script runs.
### Environment info
Python 3.10. Here are the versions installed by `poetry`:
```
• Installing frozenlist (1.3.3)
• Installing idna (3.4)
• Installing multidict (6.0.4)
• Installing aiosignal (1.3.1)
• Installing async-timeout (4.0.2)
• Installing attrs (22.2.0)
• Installing certifi (2022.12.7)
• Installing charset-normalizer (3.1.0)
• Installing six (1.16.0)
• Installing urllib3 (1.26.14)
• Installing yarl (1.8.2)
• Installing aiohttp (3.8.4)
• Installing dill (0.3.1.1)
• Installing docopt (0.6.2)
• Installing filelock (3.9.0)
• Installing numpy (1.22.4)
• Installing pyparsing (3.0.9)
• Installing protobuf (3.19.4)
• Installing packaging (23.0)
• Installing python-dateutil (2.8.2)
• Installing pytz (2022.7.1)
• Installing pyyaml (6.0)
• Installing requests (2.28.2)
• Installing tqdm (4.65.0)
• Installing typing-extensions (4.5.0)
• Installing cloudpickle (2.2.1)
• Installing crcmod (1.7)
• Installing fastavro (1.7.2)
• Installing fasteners (0.18)
• Installing fsspec (2023.3.0)
• Installing grpcio (1.51.3)
• Installing hdfs (2.7.0)
• Installing httplib2 (0.20.4)
• Installing huggingface-hub (0.12.1)
• Installing multiprocess (0.70.9)
• Installing objsize (0.6.1)
• Installing orjson (3.8.7)
• Installing pandas (1.5.3)
• Installing proto-plus (1.22.2)
• Installing pyarrow (9.0.0)
• Installing pydot (1.4.2)
• Installing pymongo (3.13.0)
• Installing regex (2022.10.31)
• Installing responses (0.18.0)
• Installing xxhash (3.2.0)
• Installing zstandard (0.20.0)
• Installing apache-beam (2.45.0)
• Installing datasets (2.10.1)
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5613/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5613/timeline
| null |
reopened
|
https://api.github.com/repos/huggingface/datasets/issues/5612
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5612/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5612/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5612/events
|
https://github.com/huggingface/datasets/issues/5612
| 1,611,262,510 |
I_kwDODunzps5gCeou
| 5,612 |
Arrow map type in parquet files unsupported
|
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.map_(pa.string(), pa.int32())})\r\n)\r\npq.write_table(table_with_map, \"parquet_with_map.parquet\")\r\ndset = load_dataset(\"parquet\", data_files=\"parquet_with_map.parquet\", split=\"train\") # error unless streaming=True\r\n``` \r\n\r\nFor a dataset generated with the packaged loaders (CSV, JSON, Parquet), `streaming=True` sets the dataset's features to `None` (unless explicitly provided in `load_dataset`), hence no error will be thrown as long as the features stay \"unresolved\" (resolving the features with `_resolve_features` will lead to an error).",
"I've also been wondering about datasets support for Arrow Map datatypes. I had a situation where I had a pandas series of dict[str, float] with hundreds of different possible key values (ie. not bounded), and this got converted to a sequence of structs where every single struct had the entire set of keys.\r\n\r\nI worked around it, by explicitly creating a sequence of [str, float], but given that pyarrow has an explicit Map datatype, it would be good to be able to explicitly cast/force this data type combination.",
"(feel free to ignore) polars will not support this type: https://github.com/pola-rs/polars/issues/3942#issuecomment-1202331210\r\n\r\n> Polars will not add the map dtype. It's benefit do not outweigh the extra complexity. Maybe we can investigate conversion of maps to struct. But I will have to explore that.",
"Looks like they chose to convert every instance with https://github.com/pola-rs/polars/pull/4226"
] | 2023-03-06T12:03:24 | 2024-03-15T18:56:12 | null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce the bug
The dataset is private, but this can be reproduced with any dataset that has Arrow maps.
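A self-contained reproduction without the private data is given in the comment above; the workaround mentioned in the description, as a sketch (the Parquet path is hypothetical):
```python
from datasets import load_dataset

# Spark-produced Parquet files containing a map<string, string> column;
# streaming=True leaves the features unresolved, so no dtype conversion error is raised
dset = load_dataset("parquet", data_files="spark_output/*.parquet", split="train", streaming=True)
```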
### Expected behavior
Loading the dataset should work regardless of whether streaming is True or not.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-1029-gcp-x86_64-with-glibc2.31
- Python version: 3.10.7
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5612/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/datasets/issues/5612/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5610
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5610/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5610/events
|
https://github.com/huggingface/datasets/issues/5610
| 1,610,698,006 |
I_kwDODunzps5gAU0W
| 5,610 |
use datasets streaming mode in trainer ddp mode causes memory leak
|
{
"login": "gromzhu",
"id": 15223544,
"node_id": "MDQ6VXNlcjE1MjIzNTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15223544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gromzhu",
"html_url": "https://github.com/gromzhu",
"followers_url": "https://api.github.com/users/gromzhu/followers",
"following_url": "https://api.github.com/users/gromzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/gromzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gromzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gromzhu/subscriptions",
"organizations_url": "https://api.github.com/users/gromzhu/orgs",
"repos_url": "https://api.github.com/users/gromzhu/repos",
"events_url": "https://api.github.com/users/gromzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gromzhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n",
"found an article described a problem, may be helpful for somebody:\r\nhttps://ppwwyyxx.com/blog/2022/Demystify-RAM-Usage-in-Multiprocess-DataLoader/\r\nI confirm, it`s not memory leak, after some time memory growing has stopped",
"\"After some time\" - from your description, it sounds like memory growth can happen for 12 hours+, even days, before it stops? That seems very scary."
] | 2023-03-06T05:26:49 | 2024-03-07T01:11:32 | null |
NONE
| null | null | null |
### Describe the bug
Using `datasets` streaming mode with the Trainer in DDP mode causes a memory leak.
### Steps to reproduce the bug
```python
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler, DistributedSampler, BatchSampler

torch.manual_seed(42)

from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model, DataCollatorForLanguageModeling, AutoModelForCausalLM
from transformers import AdamW, get_linear_schedule_with_warmup

hf_model_path = './Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})

from datasets import load_dataset

gpus = 8
max_len = 576
batch_size_node = 17
save_step = 5000
gradient_accumulation = 2
dataloader_num = 4
max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus
#max_step = -1
print("total_step:%d"%(max_step))

import datasets
datasets.version

dataset = load_dataset("text", data_files="./gpt_data_v1/*", split='train', cache_dir='./dataset_cache', streaming=True)
print('load over')
shuffled_dataset = dataset.shuffle(seed=42)
print('shuffle over')

def dataset_tokener(example, max_lenth=max_len):
    example['text'] = list(map(lambda x: x.strip()+'<|endoftext|>', example['text']))
    return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest")

new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"])
print('map over')

configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False)
model = AutoModelForCausalLM.from_pretrained(hf_model_path)
model.resize_token_embeddings(len(tokenizer))

seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

from transformers import Trainer, TrainingArguments
import os

print("strat train")
training_args = TrainingArguments(output_dir="./test_trainer",
    num_train_epochs=1.0,
    report_to="none",
    do_train=True,
    dataloader_num_workers=dataloader_num,
    local_rank=int(os.environ.get('LOCAL_RANK', -1)),
    overwrite_output_dir=True,
    logging_strategy='steps',
    logging_first_step=True,
    logging_dir="./logs",
    log_on_each_node=False,
    per_device_train_batch_size=batch_size_node,
    warmup_ratio=0.03,
    save_steps=save_step,
    save_total_limit=5,
    gradient_accumulation_steps=gradient_accumulation,
    max_steps=max_step,
    disable_tqdm=False,
    data_seed=42
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=new_new_dataset,
    eval_dataset=None,
    tokenizer=tokenizer,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    #compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
    #preprocess_logits_for_metrics=preprocess_logits_for_metrics
    #if training_args.do_eval and not is_torch_tpu_available()
    #else None,
)
trainer.train(resume_from_checkpoint=True)
```
### Expected behavior
Use the training code above.
My dataset ./gpt_data_v1 has 1,000 files, each about 120 MB in size.
The start command is: `python -m torch.distributed.launch --nproc_per_node=8 my_train.py`
Here is the result:

Here is the memory usage monitored over 12 hours:

Every dataloader worker allocates over 24 GB of CPU memory.
According to the 12-hour memory usage monitoring, small amounts of memory are sometimes released, but total memory usage keeps increasing.
I don't think datasets streaming mode should use this much memory, so there may be a memory leak somewhere.
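Based on the follow-up comments above (the growth looks like copy-on-read duplication in the dataloader worker processes rather than a true leak), one mitigation is to reduce the number of workers. A sketch, where the value is an assumption and not part of the original setup:
```python
from transformers import TrainingArguments

# fewer dataloader workers -> fewer forked processes that gradually duplicate the
# parent's Python objects through copy-on-read page touching
training_args = TrainingArguments(
    output_dir="./test_trainer",
    dataloader_num_workers=0,  # the original script used 4 workers per GPU process
)
```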
### Environment info
pytorch 1.11.0
py 3.8
cuda 11.3
transformers 4.26.1
datasets 2.9.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5610/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5609/events
|
https://github.com/huggingface/datasets/issues/5609
| 1,610,062,862 |
I_kwDODunzps5f95wO
| 5,609 |
`load_from_disk` vs `load_dataset` performance.
|
{
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when I've got a moment.",
"@mariosasko is that fix released to pip in the meantime? Asking cause im facing still the same issue (regarding loading images from local paths):\r\n```\r\ndataset = load_dataset(\"csv\", cache_dir=\"cache\", data_files=[\"/STORAGE/DATA/mijam/vit/code/list_filtered.csv\"], num_proc=16, split=\"train\").cast_column(\"image\", Image())\r\ndataset = dataset.class_encode_column(\"label\")\r\n```\r\nquite fast. \r\n\r\nThen I do `save_to_disk()` and some time later:\r\n```\r\ndataset = load_from_disk('/STORAGE/DATA/mijam/accel/saved_arrow_big')\r\n```\r\nreally slow. In theory it should be quicked since it only loads arrow files, no conversions and so on.\r\n",
"@mjamroz I assume your CSV file stores image file paths. This means `save_to_disk` needs to embed the image bytes resulting in a much bigger Arrow file (than the initial one). Maybe specifying `num_shards` to make the Arrow files smaller can help (large Arrow files on some systems take a long time to load)."
] | 2023-03-05T05:27:15 | 2023-07-13T18:48:05 | null |
NONE
| null | null | null |
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_disk` and then use `load_from_disk` to load the filtered version.
The performance of these two approaches is wildly different:
* Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching)
* Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM)
I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, or that they shouldn't take such wildly different amounts of time, or that one shouldn't crash. Or maybe the docs could offer some guidance about when to pick which method and why two methods exist, or just how most people do it?
Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260).
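One suggestion from the comments above is that a single very large Arrow file can be slow to load; a sketch of sharding the saved copy (assuming a `datasets` version where `save_to_disk` accepts `num_shards`; the shard count here is arbitrary):
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("openwebtext", split="train")          # then apply the filtering step
ds.save_to_disk("openwebtext_filtered", num_shards=64)   # smaller Arrow files per shard
reloaded = load_from_disk("openwebtext_filtered")
```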
### Steps to reproduce the bug
See above
### Expected behavior
Load times should be about the same.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5609/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5608
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5608/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5608/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5608/events
|
https://github.com/huggingface/datasets/issues/5608
| 1,609,996,563 |
I_kwDODunzps5f9pkT
| 5,608 |
audiofolder only creates a dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
|
{
"login": "jcho19",
"id": 107211437,
"node_id": "U_kgDOBmPqrQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcho19",
"html_url": "https://github.com/jcho19",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcho19/subscriptions",
"organizations_url": "https://api.github.com/users/jcho19/orgs",
"repos_url": "https://api.github.com/users/jcho19/repos",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcho19/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi!\r\n\r\n> naming convention of mp3 files\r\n\r\nYes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.\r\n\r\nIf the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree \"x\"` command)?",
"Hi! I'm sorry, I don't want to reveal my entire dataset, but here's a snippet (all of the mp3 files below are some of the ones not being recognized by audiofolder. Also, for another dataset, audiofolder loaded zero mp3 files because \"train\" was in the name of one of the mp3 files. \r\nmy_dataset\r\n├── data\r\n│ ├── VHA_Innovation_Stories_-_Day_2-123.mp3\r\n│ ├── VHA_Innovation_Stories_-_Day_2-124.mp3\r\n│ ├── ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-93.mp3\r\n│ ├── ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-94.mp3\r\n│ ├── ASSOCIATION_OF_GENERAL_PRACTITIONERS_OF_JAMAICA_NEPHROLOGY_CONFERENCE_-_JULY_3,_2022-95.mp3\r\n│ ├── Your_Impact\\357\\274\\232_Neurosurgery_equipment-5.mp3\r\n│ └── Your_Impact\\357\\274\\232_Neurosurgery_equipment-6.mp3\r\n└── metadata.csv\r\n\r\nHere's a few of the 13 files recognized by the dataset:\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-1.mp3\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-2.mp3\r\nBritish_Heart_Foundation_-_Your_guide_to_a_Coronary_Angiogram,_a_test_for_heart_disease-3.mp3\r\nIVP_⧸_IVU_test_Procedure_for_Kidneys_intravenous_pyelogram_-_medical_radiology_X-ray_ivp-1.mp3\r\nIVP_⧸_IVU_test_Procedure_for_Kidneys_intravenous_pyelogram_-_medical_radiology_X-ray_ivp-2.mp3"
] | 2023-03-05T00:14:45 | 2023-03-12T00:02:57 | 2023-03-12T00:02:57 |
NONE
| null | null | null |
### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of only 13 rows (files) when it should have 20,000 rows (files), since the data_dir "x" contains 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of the mp3 files, etc.)?
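A minimal sketch (assuming the files are local, and that `audiofolder` matches files by their extension) to see how many files actually carry a recognised audio suffix:
```python
# Hedged check: audiofolder picks up files by extension, so count how many
# files under the data_dir end in a common audio suffix (case-insensitive).
from pathlib import Path

data_dir = Path("x")  # same placeholder directory as above
audio_suffixes = {".mp3", ".wav", ".flac", ".ogg"}  # assumed subset of recognised extensions
files = [f for f in data_dir.rglob("*") if f.is_file()]
matched = [f for f in files if f.suffix.lower() in audio_suffixes]
print(f"{len(matched)} of {len(files)} files have a recognised audio suffix")
```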
### Steps to reproduce the bug
x = load_dataset("audiofolder", data_dir="x")
### Expected behavior
x = load_dataset("audiofolder", data_dir="x") should create a dataset of 20,000 rows (files).
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5608/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5606
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5606/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5606/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5606/events
|
https://github.com/huggingface/datasets/issues/5606
| 1,608,911,632 |
I_kwDODunzps5f5gsQ
| 5,606 |
Add `Dataset.to_list` to the API
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false |
{
"login": "kyoto7250",
"id": 50972773,
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyoto7250",
"html_url": "https://github.com/kyoto7250",
"followers_url": "https://api.github.com/users/kyoto7250/followers",
"following_url": "https://api.github.com/users/kyoto7250/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions",
"organizations_url": "https://api.github.com/users/kyoto7250/orgs",
"repos_url": "https://api.github.com/users/kyoto7250/repos",
"events_url": "https://api.github.com/users/kyoto7250/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyoto7250/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "kyoto7250",
"id": 50972773,
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyoto7250",
"html_url": "https://github.com/kyoto7250",
"followers_url": "https://api.github.com/users/kyoto7250/followers",
"following_url": "https://api.github.com/users/kyoto7250/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoto7250/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyoto7250/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoto7250/subscriptions",
"organizations_url": "https://api.github.com/users/kyoto7250/orgs",
"repos_url": "https://api.github.com/users/kyoto7250/repos",
"events_url": "https://api.github.com/users/kyoto7250/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyoto7250/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hello, I have an interest in this issue.\r\nIs the `Dataset.to_dict` you are describing correct in the code here?\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667",
"Yes, this is where `Dataset.to_dict` is defined.",
"#self-assign"
] | 2023-03-03T16:17:10 | 2023-03-27T13:26:40 | 2023-03-27T13:26:40 |
CONTRIBUTOR
| null | null | null |
Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`.
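In the meantime, a minimal workaround sketch (assuming no native `to_list` yet) that materialises a dataset as a plain Python list of row dicts:
```python
# Hedged workaround: integer indexing a Dataset yields one row as a dict,
# so a list comprehension gives the result a `to_list` method would return.
from datasets import Dataset

ds = Dataset.from_list([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}])
as_list = [ds[i] for i in range(len(ds))]
print(as_list)  # [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```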
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5606/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5604
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5604/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5604/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5604/events
|
https://github.com/huggingface/datasets/issues/5604
| 1,608,304,775 |
I_kwDODunzps5f3MiH
| 5,604 |
Problems with downloading The Pile
|
{
"login": "sentialx",
"id": 11065386,
"node_id": "MDQ6VXNlcjExMDY1Mzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/11065386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sentialx",
"html_url": "https://github.com/sentialx",
"followers_url": "https://api.github.com/users/sentialx/followers",
"following_url": "https://api.github.com/users/sentialx/following{/other_user}",
"gists_url": "https://api.github.com/users/sentialx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sentialx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sentialx/subscriptions",
"organizations_url": "https://api.github.com/users/sentialx/orgs",
"repos_url": "https://api.github.com/users/sentialx/repos",
"events_url": "https://api.github.com/users/sentialx/events{/privacy}",
"received_events_url": "https://api.github.com/users/sentialx/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\datasets', download_config=DownloadConfig(resume_download=True))\r\n```\r\n\r\n",
"@mariosasko , I used your suggestion but its not saving anything , just stops and runs from the same point .\r\nbelow is the script to download and save on disk .\r\n\r\n```\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\n\r\n#load the Pile dataset from Hugging Face Datasets\r\n#dataset = load_dataset('the_pile')\r\ndataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n\r\n\r\n# save each file in the dataset to disk\r\nfor i, example in enumerate(dataset['train']):\r\n filename = f'pile_file_{i}.json'\r\n with open(filename, 'w') as f:\r\n f.write(str(example))\r\n\r\nprint(\"Finished saving Pile dataset files to disk.\")\r\n```\r\n",
"@mariosasko , it shows nothing in dataset folder\r\n\r\n```\r\n du -sh /mnt/nlp/hugging_face/*\r\n20K /mnt/nlp/hugging_face/datasets\r\n4.0K /mnt/nlp/hugging_face/download_pile.py\r\n```\r\n",
"@mariosasko \r\n\r\n```\r\nroot@d20f0ab8f4f8:/mnt/hugging_face# python3 download_pile.py\r\nNo config specified, defaulting to: the_pile/all\r\nDownloading and preparing dataset the_pile/all to /mnt/hugging_face/datasets/the_pile/all/0.0.0/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349...\r\nDownloading data files: 0%| | 0/3 [00:00<?, ?it/s]\r\n\r\n\r\n\r\n\r\n\r\nDownloading data: 70%|████████████████████████████████████████████████████████████████████▊ | 10.7G/15.2G [12:09<11:53, 6.36MB/s]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 15.2G/15.2G [22:15<00:00, 7.25MB/s]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 15.2G/15.2G [46:17<00:00, 5.48MB/s]\r\nDownloading data: 40%|██████████████████████████████████████▏ | 6.07G/15.3G [50:49<1:17:02, 1.99MB/s]\r\nTraceback (most recent call last):██████████████████████████▊ | 6.07G/15.3G [50:49<25:35:23, 99.9kB/s]\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 444, in _error_catcher\r\n yield\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 567, in read\r\n data = self._fp_read(amt) if not fp_closed else b\"\"\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 525, in _fp_read\r\n data = self._fp.read(chunk_amt)\r\n File \"/usr/lib/python3.8/http/client.py\", line 459, in read\r\n n = self.readinto(b)\r\n File \"/usr/lib/python3.8/http/client.py\", line 503, in readinto\r\n n = self.fp.readinto(b)\r\n File \"/usr/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1241, in recv_into\r\n return self.read(nbytes, buffer)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1099, in read\r\n return self._sslobj.read(len, buffer)\r\nConnectionResetError: [Errno 104] Connection reset by peer\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 816, in generate\r\n yield from self.raw.stream(chunk_size, decode_content=True)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 628, in stream\r\n data = self.read(amt=amt, decode_content=decode_content)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 593, in read\r\n raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n File \"/usr/lib/python3.8/contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 461, in _error_catcher\r\n raise ProtocolError(\"Connection broken: %r\" % e, e)\r\nurllib3.exceptions.ProtocolError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_pile.py\", line 6, in <module>\r\n dataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 872, in 
download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 945, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/datasets/the_pile/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349/the_pile.py\", line 192, in _split_generators\r\n data_dir = dl_manager.download(_DATA_URLS[self.config.name])\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 427, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 443, in map_nested\r\n mapped = [\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 444, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 346, in _single_map_nested\r\n return function(data_struct)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 453, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 182, in cached_path\r\n output_path = get_from_cache(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 575, in get_from_cache\r\n http_get(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 379, in http_get\r\n for chunk in response.iter_content(chunk_size=1024):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 818, in generate\r\n raise ChunkedEncodingError(e)\r\nrequests.exceptions.ChunkedEncodingError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n```\r\n",
"Users with slow internet speed are doomed (4MB/s). The dataset downloads fine at minimum speed 10MB/s.\n\nAlso, when the train splits were generated and then I removed the downloads folder to save up disk space, it started redownloading the whole dataset. Is there any way to use the already generated splits instead?",
"@sentialx @mariosasko , anytime on my above script , am I downloading and saving dataset correctly . Please suggest :)",
"@sentialx probably worth noting that `resume_download=True` doesn't directly save the dataset to disk, but instead just helps in resuming the dataset resume on interruption as @mariosasko mentions. resolving resumptions after a crash is [an open issue](https://github.com/huggingface/datasets/issues/5380) at the moment."
] | 2023-03-03T09:52:08 | 2023-10-14T02:15:52 | 2023-03-24T12:44:25 |
NONE
| null | null | null |
### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

They should all be 14 GB, like the files here (https://the-eye.eu/public/AI/pile/train/).
Alternatively, can I somehow download the files myself and use the dataset preparation script on them?
### Steps to reproduce the bug
dataset = load_dataset('the_pile', split='train', cache_dir='F:\datasets')
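For reference, a resumable variant of the same call (along the lines of what the maintainers suggest in the comments) would look roughly like this:
```python
# Hedged sketch: re-running this after a timeout should resume the partial
# downloads instead of starting them from scratch.
from datasets import DownloadConfig, load_dataset

dataset = load_dataset(
    "the_pile",
    split="train",
    cache_dir="F:\\datasets",
    download_config=DownloadConfig(resume_download=True),
)
```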
### Expected behavior
The files should be downloaded correctly.
### Environment info
- `datasets` version: 2.10.1
- Platform: Windows-10-10.0.22623-SP0
- Python version: 3.10.5
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5604/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5601
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5601/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5601/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5601/events
|
https://github.com/huggingface/datasets/issues/5601
| 1,606,685,976 |
I_kwDODunzps5fxBUY
| 5,601 |
Authorization error
|
{
"login": "OleksandrKorovii",
"id": 107404835,
"node_id": "U_kgDOBmbeIw",
"avatar_url": "https://avatars.githubusercontent.com/u/107404835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OleksandrKorovii",
"html_url": "https://github.com/OleksandrKorovii",
"followers_url": "https://api.github.com/users/OleksandrKorovii/followers",
"following_url": "https://api.github.com/users/OleksandrKorovii/following{/other_user}",
"gists_url": "https://api.github.com/users/OleksandrKorovii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OleksandrKorovii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OleksandrKorovii/subscriptions",
"organizations_url": "https://api.github.com/users/OleksandrKorovii/orgs",
"repos_url": "https://api.github.com/users/OleksandrKorovii/repos",
"events_url": "https://api.github.com/users/OleksandrKorovii/events{/privacy}",
"received_events_url": "https://api.github.com/users/OleksandrKorovii/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\nIt's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there.",
"Yeah, I solved it. Problem was in osxkeychain. When I do `hugginface-cli login` it's add token with default account (username)`hg_user` but my repo contain other username. When I changed username in keychain - it works now."
] | 2023-03-02T12:08:39 | 2023-03-14T16:55:35 | 2023-03-14T16:55:34 |
NONE
| null | null | null |
### Describe the bug
I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.
### Steps to reproduce the bug
I followed all the steps in the [tutorial](https://huggingface.co/docs/datasets/share):
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingface.co/datasets/namespace/your_dataset_name`
4.
```
cp /somewhere/data/*.json .
git lfs track *.json
git add .gitattributes
git add *.json
git commit -m "add json files"
```
but when I execute `git push` I got the error:
```
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
batch response: Authorization error.
error: failed to push some refs to 'https://huggingface.co/datasets/zeusfsx/ukrainian-news'
```
The data is ~100 GB in total, split across five JSON files (different parts).
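A hedged sanity check before pushing (the eventual fix, per the comments, was a username mismatch in the stored credentials) is to confirm which account the cached token actually belongs to:
```python
# Hedged check: the token cached by `huggingface-cli login` must belong to the
# account (or org) that owns the dataset repo being pushed to.
from huggingface_hub import HfApi

print(HfApi().whoami()["name"])  # should match the namespace in the repo URL
```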
### Expected behavior
All my data is pushed to the hub.
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5601/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5600
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5600/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5600/events
|
https://github.com/huggingface/datasets/issues/5600
| 1,606,585,596 |
I_kwDODunzps5fwoz8
| 5,600 |
Dataloader getitem not working for DreamboothDatasets
|
{
"login": "salahiguiliz",
"id": 76955987,
"node_id": "MDQ6VXNlcjc2OTU1OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/76955987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/salahiguiliz",
"html_url": "https://github.com/salahiguiliz",
"followers_url": "https://api.github.com/users/salahiguiliz/followers",
"following_url": "https://api.github.com/users/salahiguiliz/following{/other_user}",
"gists_url": "https://api.github.com/users/salahiguiliz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/salahiguiliz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salahiguiliz/subscriptions",
"organizations_url": "https://api.github.com/users/salahiguiliz/orgs",
"repos_url": "https://api.github.com/users/salahiguiliz/repos",
"events_url": "https://api.github.com/users/salahiguiliz/events{/privacy}",
"received_events_url": "https://api.github.com/users/salahiguiliz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data."
] | 2023-03-02T11:00:27 | 2023-03-13T17:59:35 | 2023-03-13T17:59:35 |
NONE
| null | null | null |
### Describe the bug
Dataloader `__getitem__` is not working as before (see the [DreamboothDataset](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529) example).
Switching `datasets` to 2.8.0 solved the issue.
### Steps to reproduce the bug
1. Use DreamBoothDataset to load some images.
2. An error occurs after loading, when trying to visualise the images.
### Expected behavior
I was expecting a NumPy array of the image.
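For reference, a minimal sketch (assuming an image folder loaded with `datasets`, and a hypothetical local path) of getting a NumPy array back from a decoded image row:
```python
# Hedged sketch: the Image feature decodes rows to PIL images, which convert
# cleanly to NumPy arrays for visualisation.
import numpy as np
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/instance_images", split="train")  # hypothetical path
array = np.array(ds[0]["image"])  # PIL.Image -> ndarray
print(array.shape)
```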
### Environment info
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5600/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5597
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5597/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5597/events
|
https://github.com/huggingface/datasets/issues/5597
| 1,604,928,721 |
I_kwDODunzps5fqUTR
| 5,597 |
in-place dataset update
|
{
"login": "speedcell4",
"id": 3585459,
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedcell4",
"html_url": "https://github.com/speedcell4",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] |
closed
| false | null |
[] | null |
[
"We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not loaded in memory, and therefore the new dataset actually use the same buffers as the old one.",
"Thank you for your detailed reply.\r\n\r\n> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nI understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming?",
"Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example."
] | 2023-03-01T12:58:18 | 2023-03-02T13:30:41 | 2023-03-02T03:47:00 |
NONE
| null | null | null |
### Motivation
In the situation where I create an empty `Dataset` and keep appending new rows to it, I found that each call creates a new dataset, which looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds = ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Feature request
Call for in-place dataset update functions that update the existing `Dataset` in place without creating a new copy. The interface should follow the same style as PyTorch, where the in-place version of a function `function` is named `function_`. For example, the in-place version of `add_item`, i.e. `add_item_`, immediately updates the `Dataset`.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds.add_item_({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Related Functions
* `.map`
* `.filter`
* `.add_item`
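A rough sketch of the approach recommended in the comments, adding many rows in one call via `concatenate_datasets` instead of repeated `add_item` calls:
```python
# Hedged sketch: this still creates a new Dataset object (Arrow tables are
# immutable), but adding rows in batches is cheaper than one-by-one calls.
from datasets import Dataset, concatenate_datasets

base = Dataset.from_list([{"a": [1, 2, 3], "b": 4}])
new_rows = Dataset.from_list([{"a": [4, 5], "b": 5}, {"a": [6], "b": 6}])
base = concatenate_datasets([base, new_rows])
print(base.num_rows)  # 3
```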
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5597/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5596/events
|
https://github.com/huggingface/datasets/issues/5596
| 1,604,919,993 |
I_kwDODunzps5fqSK5
| 5,596 |
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
|
{
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data",
"We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks!",
"A similar error occurs in the Pile dataset (EleutherAI/the_pile)\r\n\r\nLoading the dataset produces the following error.\r\n\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<file: string, id: string>\r\nto\r\n{'id': Value(dtype='string', id=None)}\r\n```\r\n",
"I think this was fixed in https://huggingface.co/datasets/EleutherAI/the_pile/discussions/11",
"i have the same problem ,how to solve :\r\n raise TypeError(f\"Couldn't cast array of type\\n{array.type}\\nto\\n{feature}\")\r\nTypeError: Couldn't cast array of type\r\nlist<item: string>\r\nto\r\n{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}"
] | 2023-03-01T12:53:08 | 2023-12-05T03:22:00 | 2023-03-02T11:12:11 |
NONE
| null | null | null |
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>>
to
{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}
```
But I can successfully load a subset of the dataset; for example, this works:
```python
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)])
```
and `ds.features` returns:
```
{'repo': Value(dtype='string', id=None),
'org': Value(dtype='string', id=None),
'issue_id': Value(dtype='int64', id=None),
'issue_number': Value(dtype='int64', id=None),
'pull_request': {'user_login': Value(dtype='string', id=None),
'repo': Value(dtype='string', id=None),
'number': Value(dtype='int64', id=None)},
'events': [{'type': Value(dtype='string', id=None),
'action': Value(dtype='string', id=None),
'datetime': Value(dtype='timestamp[s]', id=None),
'author': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'description': Value(dtype='string', id=None),
'comment_id': Value(dtype='int64', id=None),
'comment': Value(dtype='string', id=None)}]}
```
So I'm not sure whether the issue is limited to just some of the files. I'd be grateful for any suggestions on how to fix it.
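A hedged local check (assuming the jsonl shards are on disk and that the hypothetical glob pattern below matches them) to find which files carry the extra `labels` key inside `events`:
```python
# Hedged sketch: scan each jsonl shard and report the ones whose "events"
# entries include the "labels" field mentioned in the cast error.
import glob
import json

for path in sorted(glob.glob("data/data-*.jsonl")):
    with open(path) as f:
        if any(
            "labels" in event
            for line in f
            if line.strip()
            for event in json.loads(line).get("events", [])
        ):
            print(f"extra 'labels' field found in {path}")
```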
Side note:
I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) that defines `events` as a `Sequence` rather than a `list` (the script was since renamed). It worked on a subset locally, but it doesn't work for the remote dataset because it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train")
```
### Expected behavior
Load the entire dataset successfully.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5596/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5594
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5594/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5594/events
|
https://github.com/huggingface/datasets/issues/5594
| 1,603,980,995 |
I_kwDODunzps5fms7D
| 5,594 |
Error while downloading the xtreme udpos dataset
|
{
"login": "simran-khanuja",
"id": 24687672,
"node_id": "MDQ6VXNlcjI0Njg3Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simran-khanuja",
"html_url": "https://github.com/simran-khanuja",
"followers_url": "https://api.github.com/users/simran-khanuja/followers",
"following_url": "https://api.github.com/users/simran-khanuja/following{/other_user}",
"gists_url": "https://api.github.com/users/simran-khanuja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simran-khanuja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simran-khanuja/subscriptions",
"organizations_url": "https://api.github.com/users/simran-khanuja/orgs",
"repos_url": "https://api.github.com/users/simran-khanuja/repos",
"events_url": "https://api.github.com/users/simran-khanuja/events{/privacy}",
"received_events_url": "https://api.github.com/users/simran-khanuja/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir, download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n```",
"Hi! Apologies for the delayed response! I tried the above and it doesn't solve the issue. Actually, the dataset gets downloaded most times, but sometimes this error occurs (at random afaik). Is it possible that there is a server issue for this particular dataset? I am able to download other datasets using the same code on the same machine with no issues :( I get this error now : \r\n```\r\nDownloading data: 16%|███████████████▌ | 55.9M/355M [04:45<25:25, 196kB/s]\r\nTraceback (most recent call last):\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 1107, in <module>\r\n main()\r\n File \"/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py\", line 439, in main\r\n en_dataset = load_dataset(\"xtreme\", \"udpos.English\", split=\"train\", download_mode=\"force_redownload\", verification_mode=\"all_checks\")\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 872, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py\", line 949, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/utils/info_utils.py\", line 62, in verify_checksums\r\n raise NonMatchingChecksumError(\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3105/ud-treebanks-v2.5.tgz']\r\nSet `verification_mode='no_checks'` to skip checksums verification and ignore this error\r\n```",
"If this happens randomly, then this means the data file from the error message is not always downloaded correctly. \r\n\r\nThe only solution in this scenario is to download the dataset again by passing `download_mode=\"force_redownload\"` to the `load_dataset` call.",
"Wow. I effectively have to redownload a dataset of 1TB because of this now?\r\nBecause 3% of its parts are broken?\r\n\r\nWhy is this downloader library so sh*t and badly documented also? I found almost nothing on the net, at least finally this issue about the problem here.\r\nNo words to express how disappointed I am by that dataset tool provided by Huggingface here, which I sadly have to use because HF is the only place where the Dataset I plan to work with is hosted....\r\n\r\nI mean... checksum check after download... or hitting timeout of a part... and redownload if not matching... that's content of every junior developer training session.\r\n\r\nI added `verification_mode=\"all_checks\"`. And it really calculated checksums for 4096 parts of ~350 MB... But then did nothing and tried to extract still, hitting the error again. \r\n\r\nEDIT: Apparently it is able to fix it by getting a little help: Just delete the broken parts and associated files from `~/.cache/huggingface/datasets/downloads`",
"I'm getting it too, although just retrying fixed it. Nevertheless, the dataset is too large to have re-downloaded the whole thing, for it's probably just one file with an issue. It would be good to know if there's a way people could manually examine the files (first for sizes, then possibly checksums)... going to the web or elsewhere to compare and correct it by hand, if ever needed.",
"Okay, no, it got further but it is repeatedly giving me:\r\n```/home/jaggz/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n^^^^^^^^^^^\r\nFile \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\nraise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/jaggz/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 625, in <module>\r\nmain()\r\nFile \"/home/jaggz/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\nraw_datasets[\"train\"] = load_dataset(\r\n^^^^^^^^^^^^^\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/load.py\", line 2153, in load_dataset\r\nbuilder_instance.download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 954, in download_and_prepare\r\nself._download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1717, in _download_and_prepare\r\nsuper()._download_and_prepare(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\nself._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1555, in _prepare_split\r\nfor job_id, done, content in self._prepare_split_single(\r\nFile \"/home/jaggz/venvs/pynow/lib/python3.11/site-packages/datasets/builder.py\", line 1712, in _prepare_split_single\r\nraise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the datase\r\n",
"@RuntimeRacer \r\n> EDIT: Apparently it is able to fix it by getting a little help: Just delete the broken parts and associated files from `~/.cache/huggingface/datasets/downloads`\r\n\r\nHow do you know the broken parts?\r\nMine's consistently erroring and.. yeah, really this thing should be able to check the files (but where's that even done)...\r\n\r\n2023-11-02 00:14:09.846055: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py:299: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token` instead.\r\n warnings.warn(\r\n11/02/2023 00:14:37 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: True\r\n11/02/2023 00:14:37 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\n...\r\nlogging_dir=./whisper-tiny-en/runs/Nov02_00-14-28_jsys,\r\n...\r\nrun_name=./whisper-tiny-en,\r\n...\r\nweight_decay=0.0,\r\n)\r\n11/02/2023 00:14:37 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\n...\r\nlogging_dir=./whisper-tiny-en/runs/Nov02_00-14-28_jsys,\r\n...\r\nweight_decay=0.0,\r\n)\r\n\r\nDownloading data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 2426.42it/s]\r\n\r\nExtracting data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 421.16it/s]\r\n\r\nDownloading data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 18707.87it/s]\r\n\r\nExtracting data files: 0%| | 0/5 [00:00<?, ?it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 3754.97it/s]\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\n...\r\nReading metadata...: 948736it [00:23, 40632.92it/s] \r\n\r\nGenerating train split: 1 examples [00:23, 23.37s/ examples]\r\n...\r\nGenerating train split: 948736 examples [08:28, 1866.15 examples/s]\r\n\r\nGenerating validation split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\n\r\nReading metadata...: 16089it [00:00, 157411.88it/s]\u001b[A\r\nReading metadata...: 16354it [00:00, 158233.27it/s]\r\n\r\nGenerating validation split: 1 examples [00:00, 7.60 examples/s]\r\nGenerating validation split: 16354 examples [00:14, 1154.77 examples/s]\r\n\r\nGenerating test split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\nReading metadata...: 16354it [00:00, 194855.03it/s]\r\n\r\nGenerating test split: 1 examples [00:00, 4.53 examples/s]\r\nGenerating test split: 16354 examples [00:07, 2105.43 examples/s]\r\n\r\nGenerating other split: 0 examples [00:00, ? examples/s]\r\n\r\nReading metadata...: 0it [00:00, ?it/s]\u001b[A\r\nReading metadata...: 290846it [00:01, 235823.90it/s]\r\n\r\nGenerating other split: 1 examples [00:01, 1.27s/ examples]\r\n...\r\nGenerating other split: 290846 examples [02:12, 2196.96 examples/s]\r\nGenerating invalidated split: 0 examples [00:00, ? 
examples/s]\r\nReading metadata...: 252599it [00:01, 241965.85it/s]\r\n\r\nGenerating invalidated split: 1 examples [00:01, 1.08s/ examples]\r\n...\r\nGenerating invalidated split: 60130 examples [00:34, 1764.14 examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1676, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/j/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\n result[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n ^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\n raise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 625, in <module>\r\n main()\r\n File \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\n raw_datasets[\"train\"] = load_dataset(\r\n ^^^^^^^^^^^^^\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/load.py\", line 2153, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 954, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1717, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1555, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/j/venvs/pycur/lib/python3.11/site-packages/datasets/builder.py\", line 1712, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n",
"@jaggzh Hi, I actually came around with a fix for this, wasn't that easy to solve since there were a lot of hidden pitfalls in the code, and it's quite hacky, but I was able to download the full dataset.\r\n\r\nI just didn't create a PR for it yet since I was too lazy to create a fork and change my local repo's origin. 😅 \r\nLet me try to do this tonight, I'll give you a ping once it's up.\r\n\r\nEDIT: And no, what I wrote above about adding a param to the download config does NOT solve it apparently. A code fix is required here.",
"@jaggzh PR is up: https://github.com/huggingface/datasets/pull/6380\r\n\r\n🤞 on approval for merge to the main repo.",
"@mariosasko Can you re-open this? We really need some better diagnostics output, at the least, to locate which files are contributing, some checksum output, etc. I can't even tell if this is a mozilla...py issue or huggingface datasets or ....",
"@RuntimeRacer \r\nBeautiful, thank you so much. I patched with your PR and am re-running now.\r\n(I'm running this script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)\r\nOkay, actually it failed; so now I'm running with verification_mode='all_checks' added to the load_data() call and it's re-running now. Wish me luck.\r\n(Note: It's generating checksums; I don't see an option that handles anything between basic_checks and all_checks -- Something checking dl'ed files' lengths would be a good common fix I'd think; corruption is more rare nowadays than a short file (although maybe your patch helps prevent that in the first place.) :}",
"@RuntimeRacer \r\nNo luck. Sigh.\r\n[Edit: My tmux copy didn't get some data. That was weird. I'm adding in the initial part of the output:]\r\n```\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 2190.69it/s]\r\nComputing checksums: 100%|██████████| 41/41 [11:39<00:00, 17.05s/it] Extracting data files: 100%|██████████| 5/5 [00:00<00:00, 12.37it/s]\r\nDownloading data files: 100%|██████████| 5/5 [00:00<00:00, 107.64it/s]\r\nExtracting data files: 100%|██████████| 5/5 [00:00<00:00, 3149.82it/s]\r\nReading metadata...: 948736it [00:03, 243227.36it/s]s/s]\r\n...\r\n```\r\n```\r\n...\r\nReading metadata...: 252599it [00:01, 249267.71it/s]xamples/s]\r\nGenerating invalidated split: 60130 examples [00:31, 1916.33 examples/s]\r\nTraceback (most recent call last):\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1676, in _prepare_split_single\r\nfor key, record in generator:\r\nFile \"/home/j/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_11_0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/common_voice_11_0.py\", line 195, in _generate_examples\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": file.read()}\r\n^^^^^^^^^^^\r\nFile \"/usr/lib/python3.11/tarfile.py\", line 687, in read\r\nraise ReadError(\"unexpected end of data\")\r\ntarfile.ReadError: unexpected end of data\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\nFile \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 627, in <module>\r\nmain()\r\nFile \"/home/j/src/transformers/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py\", line 360, in main\r\nraw_datasets[\"train\"] = load_dataset(\r\n^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/load.py\", line 2153, in load_dataset\r\nbuilder_instance.download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 954, in download_and_prepare\r\nself._download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1717, in _download_and_prepare\r\nsuper()._download_and_prepare(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1049, in _download_and_prepare\r\nself._prepare_split(split_generator, **prepare_split_kwargs)\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1555, in _prepare_split\r\nfor job_id, done, content in self._prepare_split_single(\r\nFile \"/home/j/src/py/datasets/src/datasets/builder.py\", line 1712\r\n```",
"I'm unable to reproduce this error. Based on https://github.com/psf/requests/issues/4956, newer releases of `urllib3` check the returned content length by default, so perhaps updating `requests` and `urllib3` to the latest versions (`pip install -U requests urllib3`) and loading the dataset with `datasets.load_dataset(\"xtreme\", \"udpos.English\", download_config=datasets.DownloadConfig(resume_download=True))` (re-run when it fails to resume the download) can fix the issue.",
"@jaggzh I think you will need to re-download the whole dataset with my patched code. Files which have already been downloaded and marked as complete by the broken downloader won't be detected even on re-run (I described that in the PR).\r\nI also had to download reazonspeech, which is over 1TB, twice. 🙈 \r\nFor re-download, you need to manually delete the dataset files from your local machine's huggingface download cache.\r\n\r\n@mariosasko Not sure how you tested it, but it's not an issue in `requests` or `urllib`. The problem is the huggingface downloader, which generates a nested download thread for the actual download I think.\r\nThe issue I had with the reazonspeech dataset (https://huggingface.co/datasets/reazon-research/reazonspeech/tree/main) basically was, that it started downloading a part, but sometimes the connection would 'starve' and only continue with a few kilobytes, and eventually stop receiving any data at all.\r\nSometimes it would even recover during the download and finish properly.\r\nHowever, if it did not recover, the request would hit the really generous default timeout (which is 100 seconds I think), however the exception thrown by the failure inside `urllib`, isn't captured or handled by the upper level downloader code of the `datasets` library.\r\n`datasets` even has a retry mechanism, which would continue interrupted downloads if they have the `.incomplete` suffix, which isn't cleared if, for example, a manual `CTRL+C` is sent by the user to the python process.\r\nBut: If it runs into that edge case I described above (TL;DR: connection starves after minutes + timeout exception which isn't captured), the cache downloader will consider the download as successful and remove the `.incomplete` suffix nevertheless, leaving the archive file in a corrupted state.\r\n\r\nHonestly, I spent hours on trying to figure out what was even going on and why the retry mechanics of the cache downloader didn't work at all.\r\nBut it is indeed an issue caused by the download process itself not receiving any info about actual content size and filesize size on disk of the archive to be downloaded, thus, having no direct control in case something fails on the request level.\r\n\r\nIMHO, this requires a major refactor of the way this part of the downloader works.\r\nYet I was able to quick-fix it by adding some synthetic Exception handling and explicit retry-handling in the code, als done in my PR.",
"@RuntimeRacer \r\nUgh. It took a day. I'm seeing if I can get some debug code in here to examine the files myself. (I'm not sure why checksum tests would fail, so, yeah, I think you're right -- this stuff needs some work. Going through ipdb right now to try to get some idea of what's going on in the code).",
"@RuntimeRacer Data can only be appended to the `.incomplete` files if `load_dataset` is called with `download_config=DownloadConfig(resume_download=True)`. \r\n\r\nWhere exactly does this exception happen (in the code)? The error stack trace would help a lot.",
"@mariosasko I do not have a trace of this exception nor do I know which type it is. I am honestly not even sure if an exception is thrown, or the process just aborts without error.\r\n\r\n> @RuntimeRacer Data can only be appended to the .incomplete files if load_dataset is called with download_config=DownloadConfig(resume_download=True).\r\n\r\nWell, I think I did a very clear explaination of the issue in the PR I shared, and the description above, but maybe I wasn't precise enough. Let me try to explain once more:\r\n\r\nWhat you mention here is the \"normal\" case, if the process is aborted. In this case, there will be files with `.incomplete` suffix, which the cache downloader can continue to download. That is correct.\r\n\r\nBUT: What I am talking about all the time is an edge case: if the download step crashes / timeouts internally, the cache downloader will NOT be aware of this, and REMOVES the `.incomplete` suffix.\r\nIt does NOT know that the file is incomplete when the `http_get` function returns and will remove the `.incomplete` suffix in any case once `http_get` returns.\r\nBut the problem is that `http_get` returns without failure, even if the download failed.\r\nAnd this is still a problem even with latest `urllib` and `requests` library.\r\n",
"@RuntimeRacer Updating `urllib3` and `requests` to the latest versions fixes the issue explained in this [blog](https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/) post. \r\n\r\nHowever, the issue explained above seems more similar to [this](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) one. To address it, we can reduce the default timeout to 10 seconds (btw, this was the initial value, but it was causing problems for some users) and expose a config variable so that users can easily control it. Additionally, we can re-run `http_get` similarly to https://github.com/huggingface/huggingface_hub/pull/1766 when the connection/timeout error happens to make the logic even more robust. Would this work for you? The last part is what you did in the PR, right?\r\n\r\n@jaggzh From all the datasets mentioned in this issue, `xtreme` is the only one that stores the data file checksums in the metadata. So, the checksum check has no effect when enabled for the rest of the datasets.",
"(I don't have any .incomplete files, just the extraction errors.)\r\nI was going through the code to try to relate filenames to the hex/hash files, but realized I might not need to.\r\nSo instead I coded up a script in bash to examine the tar files for validity (had an issue with bash subshells not adding to my array so I had cgpt recode it in perl).\r\n\r\n```perl\r\n#!/usr/bin/perl\r\nuse strict;\r\nuse warnings;\r\n\r\n# Initialize the array to store tar files\r\nmy @tars;\r\n\r\n# Open the current directory\r\nopendir(my $dh, '.') or die \"Cannot open directory: $!\";\r\n\r\n# Read files in the current directory\r\nwhile (my $f = readdir($dh)) {\r\n # Skip files ending with lock, json, or py\r\n next if $f =~ /\\.(lock|json|py)$/;\r\n\r\n # Use the `file` command to determine the type of file\r\n my $ft = `file \"$f\"`;\r\n\r\n # If it's a tar archive, add it to the list\r\n if ($ft =~ /tar archive/) {\r\n push @tars, $f;\r\n }\r\n}\r\n\r\nclosedir($dh);\r\n\r\nprint \"Final Tars count: \" . scalar(@tars) . \"\\n\";\r\n\r\n# Iterate over the tar files and check them\r\nforeach my $i (0 .. $#tars) {\r\n my $f = $tars[$i];\r\n printf '%d/%d ', $i+1, scalar(@tars);\r\n \r\n # Use `ls -lgG` to list the files, similar to the original bash script\r\n system(\"ls -lgG '$f'\");\r\n\r\n # Check the integrity of the tar file\r\n my $errfn = \"/tmp/$f.tarerr\";\r\n if (system(\"tar tf '$f' > /dev/null 2> '$errfn'\") != 0) {\r\n print \" BAD $f\\n\";\r\n print \" ERR: \";\r\n system(\"cat '$errfn'\");\r\n }\r\n\r\n # Remove the error file if it exists\r\n unlink $errfn if -e $errfn;\r\n}\r\n```\r\n\r\nThis found one hash file that errored in the tar extraction, and one small tmp* file that also was supposedly a tar and was erroring. I removed those two and re-data loaded.. it grabbed just what it needed and I'm on my way. Yay!\r\n\r\nSo... is there a way for the datasets api to get file sizes? That would be a very easy and fast test, leaving checksum slowdowns for extra-messed-up situations.\r\n\r\n",
"> @RuntimeRacer Updating `urllib3` and `requests` to the latest versions fixes the issue explained in this [blog](https://blog.petrzemek.net/2018/04/22/on-incomplete-http-reads-and-the-requests-library-in-python/) post.\r\n> \r\n> However, the issue explained above seems more similar to [this](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) one. To address it, we can reduce the default timeout to 10 seconds (btw, this was the initial value, but it was causing problems for some users) and expose a config variable so that users can easily control it. Additionally, we can re-run `http_get` similarly to [huggingface/huggingface_hub#1766](https://github.com/huggingface/huggingface_hub/pull/1766) when the connection/timeout error happens to make the logic even more robust. Would this work for you? The last part is what you did in the PR, right?\r\n> \r\n> @jaggzh From all the datasets mentioned in this issue, `xtreme` is the only one that stores the data file checksums in the metadata. So, the checksum check has no effect when enabled for the rest of the datasets.\r\n\r\n@mariosasko Well if you look at my commit date, you will see that I run into this problem still in October. The blog post you mention and the update in the pull request for `urllib` was from July: https://github.com/psf/requests/issues/4956#issuecomment-1648632935\r\n\r\nBut yeah the [issue on StackOverflow](https://stackoverflow.com/questions/52731196/python-3-6-5-requests-with-streaming-getting-stuck-in-iter-content-even-if-chun) you mentioned seems like that's the source issue I was running into there.\r\nI experimented with timeouts, but changing them didn't help to resolve the issue of the starving connection unfortunately.\r\nHowever, https://github.com/huggingface/huggingface_hub/pull/1766 seems like that could be working; it's very similar to my change. So yeah I think this would fix it probably.\r\n\r\nAlso I can confirm the checksum option did not work for [reazonspeech](https://huggingface.co/datasets/reazon-research/reazonspeech/tree/main) as well. So maybe it's a double edge case that only occurs for some datasets. 🤷♂️ ",
"Also, the hf urls to files -- while I can't see a way of getting a listing from the hf site side -- do include the file size in the http header response. So we do have a quick way of just verifying lengths for resume. (This message may not be interesting to you all).\r\n\r\nFirst, a json clip (mozilla-foundation___common_voice_11_0/en/11.0.0/3f27acf10f303eac5b6fbbbe02495aeddb46ecffdb0a2fe3507fcfbf89094631/dataset_info.json):\r\n\r\n* I don't know how specific this .json is to mozilla common voice\r\n* Note that *dataset_size* is not the dataset size :) DatasetInfo class docs indicate it might be their \"combined size in bytes of the Arrow tables for all splits.\"\r\n* *num_bytes*: does match the individual file size though, and matches the http header (further down)\r\n```\r\n{\r\n \"builder_name\" : \"common_voice_11_0\",\r\n...\r\n \"config_name\" : \"en\",\r\n \"dataset_name\" : \"common_voice_11_0\",\r\n \"dataset_size\" : 1680793952,\r\n...\r\n \"download_checksums\" : {\r\n...\r\n \"https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar\" : {\r\n \"checksum\" : null,\r\n \"num_bytes\" : 2110853120\r\n },\r\n...\r\n```\r\n\r\n```bash\r\n~/.cache/huggingface/datasets/downloads$ ls -lgG b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40* | cut -c 14-\r\n```\r\n```\r\n2110853120 Nov 1 16:28 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40\r\n148 Nov 1 16:28 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40.json\r\n0 Nov 1 16:07 b45f82cb87bab2c35361857fcd46042ab658b42c37dc9a455248c2866c9b8f40.lock\r\n```\r\n\r\n* Note the -L to follow redirects. Two headers are below:\r\n\r\n```bash\r\n$ curl -I -L https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/resolve/main/audio/en/invalidated/en_invalidated_3.tar\r\n```\r\n```\r\nHTTP/2 302 \r\ncontent-type: text/plain; charset=utf-8\r\ncontent-length: 1215\r\nlocation: https://cdn-lfs.huggingface.co/repos/00/ce/00ce867b4ae70bd23a10b60c32a8626d87b2666fc088ad03f86b94788faff554/984086fc250badece2992e8be4d7c4430f7c1208fb8bf37dc7c4aecdc803b220?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27en_invalidated_3.tar%3B+filename%3D%22en_invalidated_3.tar%22%3B&response-content-type=application%2Fx-tar&Expires=1699389040&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY5OTM4OTA0MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy8wMC9jZS8wMGNlODY3YjRhZTcwYmQyM2ExMGI2MGMzMmE4NjI2ZDg3YjI2NjZmYzA4OGFkMDNmODZiOTQ3ODhmYWZmNTU0Lzk4NDA4NmZjMjUwYmFkZWNlMjk5MmU4YmU0ZDdjNDQzMGY3YzEyMDhmYjhiZjM3ZGM3YzRhZWNkYzgwM2IyMjA%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=WYc32e75PqbKSAv3KTpG86ooFT6oOyDDQpCt1i2B8gVS10J3qvpZlDmxaBgnGlCCl7SRiAvhIQctgwooNtWbUeDqK3T4bAo0-OOrGCuVi-%7EKWUBcoHce7nHWpl%7Ex9ubHS%7EFoYcGB2SCEqh5fIgGjNV-VKRX6TSXkRto5bclQq4VCJKHufDsJ114A1V4Qu%7EYiRIWKG4Gi93Xv4OFhyWY0uqykvP5c0x02F%7ELX0m3WbW-eXBk6Fw2xnV1XLrEkdR-9Ax2vHqMYIIw6yV0wWEc1hxE393P9mMG1TNDj%7EXDuCoOaA7LbrwBCxai%7Ew2MopdPamTXyOia5-FnSqEdsV29v4Q__&Key-Pair-Id=KVTP0A1DKRTAX\r\ndate: Sat, 04 Nov 2023 20:30:40 GMT\r\nx-powered-by: huggingface-moon\r\nx-request-id: Root=1-6546a9f0-5e7f729d09bdb38e35649a7e\r\naccess-control-allow-origin: https://huggingface.co\r\nvary: Origin, Accept\r\naccess-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range\r\nx-repo-commit: 
23b4059922516c140711b91831aa3393a22e9b80\r\naccept-ranges: bytes\r\nx-linked-size: 2110853120\r\nx-linked-etag: \"984086fc250badece2992e8be4d7c4430f7c1208fb8bf37dc7c4aecdc803b220\"\r\nx-cache: Miss from cloudfront\r\nvia: 1.1 f31a6426ebd75ce4393909b12f5cbdcc.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: LAX53-P4\r\nx-amz-cf-id: BcYMFcHVcxPome2IjAvx0ZU90G41QlNI_HEHDGDqCQaEPvrOsnsGXw==\r\n\r\nHTTP/2 200 \r\ncontent-type: application/x-tar\r\ncontent-length: 2110853120\r\ndate: Sat, 04 Nov 2023 20:19:35 GMT\r\nlast-modified: Fri, 18 Nov 2022 15:08:22 GMT\r\netag: \"acac28988e2f7e73b68e865179fbd008\"\r\nx-amz-storage-class: INTELLIGENT_TIERING\r\nx-amz-version-id: LgTuOcd9FGN4JnAXp26O.1v2VW42GPtF\r\ncontent-disposition: attachment; filename*=UTF-8''en_invalidated_3.tar; filename=\"en_invalidated_3.tar\";\r\naccept-ranges: bytes\r\nserver: AmazonS3\r\nx-cache: Hit from cloudfront\r\nvia: 1.1 d07c8167eda81d307ca96358727f505e.cloudfront.net (CloudFront)\r\nx-amz-cf-pop: LAX50-P5\r\nx-amz-cf-id: 6oNZg_V8U1M_JXsMHQAPuRmDfxbY2BnMUWcVH0nz3VnfEZCzF5lgkQ==\r\nage: 666\r\ncache-control: public, max-age=604800, immutable, s-maxage=604800\r\nvary: Origin\r\n\r\n```\r\n"
] | 2023-02-28T23:40:53 | 2023-11-04T20:45:56 | 2023-07-24T14:22:18 |
NONE
| null | null | null |
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```
Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4...
Downloading data: 16%|██████████████▏ | 56.9M/355M [03:11<16:43, 297kB/s]
Generating train split: 0%| | 0/6075 [00:00<?, ? examples/s]Traceback (most recent call last):
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1608, in _prepare_split_single
for key, record in generator:
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 732, in _generate_examples
yield from UdposParser.generate_examples(config=self.config, filepath=filepath, **kwargs)
File "/home/skhanuja/.cache/huggingface/modules/datasets_modules/datasets/xtreme/29f5d57a48779f37ccb75cb8708d1095448aad0713b425bdc1ff9a4a128a56e4/xtreme.py", line 921, in generate_examples
for path, file in filepath:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 158, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 211, in _iter_from_path
yield from cls._iter_tar(f)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/download/download_manager.py", line 167, in _iter_tar
for tarinfo in stream:
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2475, in __iter__
tarinfo = self.next()
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/tarfile.py", line 2344, in next
raise ReadError("unexpected end of data")
tarfile.ReadError: unexpected end of data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 855, in <module>
main()
File "/home/skhanuja/Optimal-Resource-Allocation-for-Multilingual-Finetuning/src/train_al.py", line 487, in main
train_dataset = load_dataset(dataset_name, source_language, split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1488, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/skhanuja/miniconda3/envs/multilingual_ft/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode="force_redownload")
```
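Since the traceback points to a truncated tar archive in the download cache, a quick sanity check is to compare each cached file's size against the `Content-Length` reported by the server. This is only a sketch: it assumes the default cache location and that each sidecar `.json` file in the downloads folder records the original `url` (the layout discussed in the comments above); adjust the path if `HF_DATASETS_CACHE` is customised.

```python
import json
import os
from pathlib import Path

import requests

# Default download cache of the datasets library; adjust if HF_DATASETS_CACHE is customised.
downloads = Path.home() / ".cache" / "huggingface" / "datasets" / "downloads"

for meta_file in downloads.glob("*.json"):
    blob = meta_file.with_suffix("")  # the cached blob shares the hash name, minus ".json"
    if not blob.is_file():
        continue
    url = json.loads(meta_file.read_text())["url"]
    # Follow redirects so we read the Content-Length of the real file, not of the 302 page.
    head = requests.head(url, allow_redirects=True, timeout=30)
    expected = head.headers.get("Content-Length")
    actual = os.path.getsize(blob)
    if expected is not None and int(expected) != actual:
        print(f"Truncated download: {blob.name} has {actual} of {expected} bytes ({url})")
```

Deleting a flagged blob (together with its `.json` and `.lock` siblings) and re-running `load_dataset` should force a clean re-download of just that file.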
### Expected behavior
Download the udpos dataset
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5594/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5586
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5586/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5586/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5586/events
|
https://github.com/huggingface/datasets/issues/5586
| 1,602,961,544 |
I_kwDODunzps5fi0CI
| 5,586 |
.sort() is broken when used after .filter(), only in 2.10.0
|
{
"login": "MattYoon",
"id": 57797966,
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MattYoon",
"html_url": "https://github.com/MattYoon",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] | null |
[
"Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix"
] | 2023-02-28T12:18:09 | 2023-02-28T18:17:26 | 2023-02-28T17:21:59 |
NONE
| null | null | null |
### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
This only happens with the 2.10.0 release.
### Steps to reproduce the bug
```Python
from datasets import load_dataset
# dataset with length of 1104
ds = load_dataset('glue', 'ax')['test']
ds = ds.filter(lambda x: x['idx'] > 1100)
ds.sort('premise')
print('Done')
```
```
File "/home/dongkeun/datasets_test/test.py", line 5, in <module>
ds.sort('premise')
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3959, in sort
sort_table = query_table(
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 588, in query_table
_check_valid_index_key(key, size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 537, in _check_valid_index_key
_check_valid_index_key(max(key), size=size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 531, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1103 is out of bounds for size 3
```
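A possible workaround while staying on 2.10.0 (a sketch, not taken from the original report) is to materialise the filtered view before sorting, so that `.sort()` no longer sees the stale indices mapping; upgrading to the 2.10.1 patch release mentioned in the comments is the proper fix.

```python
from datasets import load_dataset

ds = load_dataset("glue", "ax")["test"]
ds = ds.filter(lambda x: x["idx"] > 1100)
ds = ds.flatten_indices()  # rewrite the Arrow table so no indices mapping is left behind
ds = ds.sort("premise")
print("Done")
```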
### Expected behavior
It should sort the dataset and print "Done", which it does on 2.9.0.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5586/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5585
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5585/events
|
https://github.com/huggingface/datasets/issues/5585
| 1,602,190,030 |
I_kwDODunzps5ff3rO
| 5,585 |
Cache is not transportable
|
{
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.",
"OK good to know. Thanks @lhoestq !"
] | 2023-02-28T00:53:06 | 2023-02-28T21:26:52 | 2023-02-28T21:26:52 |
NONE
| null | null | null |
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Because copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break.
A related issue, when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host) it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
Cache can be shared between (virtual) machines and be transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
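For the "single cache location" part, the sketch below shows what should already be possible by setting the cache environment variables before importing anything from Hugging Face. The variable names are the ones used by recent versions of the libraries and are worth double-checking against the installed versions; the `D:\hf-cache` path is just a placeholder.

```python
import os

# Must be set before importing datasets/transformers/huggingface_hub,
# or exported in the shell / Windows environment instead.
os.environ["HF_HOME"] = r"D:\hf-cache"                     # root for Hugging Face caches
os.environ["HF_DATASETS_CACHE"] = r"D:\hf-cache\datasets"  # downloads + Arrow files
os.environ["HF_MODULES_CACHE"] = r"D:\hf-cache\modules"    # the datasets_modules directory

import datasets  # noqa: E402

print(datasets.config.HF_DATASETS_CACHE)
```

As the comments above note, this centralises the files but does not guarantee that `cached-*.arrow` results from `map` are reusable across different environments.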
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5584
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5584/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5584/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5584/events
|
https://github.com/huggingface/datasets/issues/5584
| 1,601,821,808 |
I_kwDODunzps5fedxw
| 5,584 |
Unable to load coyo700M dataset
|
{
"login": "manuaero",
"id": 3059998,
"node_id": "MDQ6VXNlcjMwNTk5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3059998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manuaero",
"html_url": "https://github.com/manuaero",
"followers_url": "https://api.github.com/users/manuaero/followers",
"following_url": "https://api.github.com/users/manuaero/following{/other_user}",
"gists_url": "https://api.github.com/users/manuaero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manuaero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manuaero/subscriptions",
"organizations_url": "https://api.github.com/users/manuaero/orgs",
"repos_url": "https://api.github.com/users/manuaero/repos",
"events_url": "https://api.github.com/users/manuaero/events{/privacy}",
"received_events_url": "https://api.github.com/users/manuaero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README.md) to download, so check it out.\r\n\r\nThank you."
] | 2023-02-27T19:35:03 | 2023-02-28T07:27:59 | 2023-02-28T07:27:58 |
NONE
| null | null | null |
### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
Full stack trace
```
Downloading and preparing dataset parquet/kakaobrain--coyo-700m to /root/.cache/huggingface/datasets/kakaobrain___parquet/kakaobrain--coyo-700m-ae729692ae3e0073/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%
1/1 [00:00<00:00, 63.35it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 5.00it/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1859 _time = time.time()
-> 1860 for _, table in generator:
1861 if max_shard_size is not None and writer._num_bytes > max_shard_size:
9 frames
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1893
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
hf_dataset = load_dataset("kakaobrain/coyo-700m")
```
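Following the maintainers' advice in the comments to download the parquet shards directly, a sketch of loading them afterwards uses the generic parquet builder; the glob below is only a placeholder for wherever the downloaded shards live.

```python
from datasets import load_dataset

# "path/to/coyo/*.parquet" is a placeholder for the shards fetched via the authors' download guide.
ds = load_dataset("parquet", data_files="path/to/coyo/*.parquet", split="train")
print(ds[0])
```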
### Expected behavior
The above commands load the dataset successfully, or handle the exception and continue loading the remainder.
### Environment info
colab. any
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5584/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5581
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5581/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5581/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5581/events
|
https://github.com/huggingface/datasets/issues/5581
| 1,600,675,489 |
I_kwDODunzps5faF6h
| 5,581 |
[DOC] Mistaken docs on set_format
|
{
"login": "NightMachinery",
"id": 36224762,
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NightMachinery",
"html_url": "https://github.com/NightMachinery",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false | null |
[] | null |
[
"Thanks for reporting!"
] | 2023-02-27T08:03:09 | 2023-02-28T19:19:17 | 2023-02-28T19:19:17 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format
<img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png">
While actually running it will result in:
<img width="1094" alt="image" src="https://user-images.githubusercontent.com/36224762/221507032-007dab82-8781-4319-b21a-e6e4d40d97b3.png">
### Steps to reproduce the bug
_
### Expected behavior
_
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5581/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5577/events
|
https://github.com/huggingface/datasets/issues/5577
| 1,598,587,665 |
I_kwDODunzps5fSIMR
| 5,577 |
Cannot load `the_pile_openwebtext2`
|
{
"login": "wjfwzzc",
"id": 5126316,
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjfwzzc",
"html_url": "https://github.com/wjfwzzc",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n"
] | 2023-02-24T13:01:48 | 2023-02-24T14:01:09 | 2023-02-24T14:01:09 |
NONE
| null | null | null |
### Describe the bug
I met the same bug mentioned in #3053, which was never fixed: several `reddit_scores` values are larger than the `int8` (or even `int16`) range. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("the_pile_openwebtext2")
```
### Expected behavior
The dataset loads as normal.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5577/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5576
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5576/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5576/events
|
https://github.com/huggingface/datasets/issues/5576
| 1,598,582,744 |
I_kwDODunzps5fSG_Y
| 5,576 |
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
|
{
"login": "wjfwzzc",
"id": 5126316,
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjfwzzc",
"html_url": "https://github.com/wjfwzzc",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Duplicated issue."
] | 2023-02-24T12:57:49 | 2023-02-24T12:58:31 | 2023-02-24T12:58:18 |
NONE
| null | null | null |
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked around this by downloading the `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes).
_Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5576/timeline
| null |
not_planned
|
https://api.github.com/repos/huggingface/datasets/issues/5575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5575/events
|
https://github.com/huggingface/datasets/issues/5575
| 1,598,396,552 |
I_kwDODunzps5fRZiI
| 5,575 |
Metadata for each column
|
{
"login": "parsa-ra",
"id": 11356471,
"node_id": "MDQ6VXNlcjExMzU2NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/11356471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parsa-ra",
"html_url": "https://github.com/parsa-ra",
"followers_url": "https://api.github.com/users/parsa-ra/followers",
"following_url": "https://api.github.com/users/parsa-ra/following{/other_user}",
"gists_url": "https://api.github.com/users/parsa-ra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parsa-ra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parsa-ra/subscriptions",
"organizations_url": "https://api.github.com/users/parsa-ra/orgs",
"repos_url": "https://api.github.com/users/parsa-ra/repos",
"events_url": "https://api.github.com/users/parsa-ra/events{/privacy}",
"received_events_url": "https://api.github.com/users/parsa-ra/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
{
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10",
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"id": 9038583,
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"title": "3.0",
"description": "Next major release",
"creator": {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 0,
"state": "open",
"created_at": "2023-02-13T16:22:42",
"updated_at": "2023-09-22T14:07:52",
"due_on": null,
"closed_at": null
}
|
[
"Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = Features({\"col\": col_feature}, metadata=\"Some schema-level metadata\")\r\n```\r\n\r\nWDYT?",
"Sorry for the late reply, \r\nYes, I think this is the most straight-forward approach with the things that we already have.\r\n\r\n",
"@mariosasko Let me know how I can help.",
"Hi, is this feature to be implemented in the near future? It would be really nice if that would be the case! ",
"Hi, I also need this feature for tell my customer if any of the feature is encrypted with a certain key. "
] | 2023-02-24T10:53:44 | 2024-01-05T21:48:35 | null |
NONE
| null | null | null |
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will illustrate the motivation with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing pipelines to see which one works better in our downstream task. As a workaround right now, I compute the hash of the preprocessing that the images went through and make it part of the new column's name. It would be nice to be able to attach some kind of metadata to each column in these scenarios.
### Your contribution
Maybe we could use something like a relational database, mapped to the dataset, to store the metadata?
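As the comments point out, PyArrow already supports schema-level and field-level metadata, so a rough sketch of what this feature could build on (plain PyArrow, not the proposed `datasets` API) looks like this:

```python
import pyarrow as pa

# Column-level metadata lives on the field; dataset-level metadata lives on the schema.
col = pa.field("embedding", pa.string(), metadata={"preprocessing": "resize224+center_crop"})
schema = pa.schema([col], metadata={"encoder": "hypothetical-image-encoder"})

table = pa.table({"embedding": ["abc123"]}, schema=schema)
print(table.schema.field("embedding").metadata)  # {b'preprocessing': b'resize224+center_crop'}
print(table.schema.metadata)                     # {b'encoder': b'hypothetical-image-encoder'}
```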
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5575/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5575/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5574
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5574/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5574/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5574/events
|
https://github.com/huggingface/datasets/issues/5574
| 1,598,104,691 |
I_kwDODunzps5fQSRz
| 5,574 |
c4 dataset streaming fails with `FileNotFoundError`
|
{
"login": "krasserm",
"id": 202907,
"node_id": "MDQ6VXNlcjIwMjkwNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/202907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krasserm",
"html_url": "https://github.com/krasserm",
"followers_url": "https://api.github.com/users/krasserm/followers",
"following_url": "https://api.github.com/users/krasserm/following{/other_user}",
"gists_url": "https://api.github.com/users/krasserm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krasserm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krasserm/subscriptions",
"organizations_url": "https://api.github.com/users/krasserm/orgs",
"repos_url": "https://api.github.com/users/krasserm/repos",
"events_url": "https://api.github.com/users/krasserm/events{/privacy}",
"received_events_url": "https://api.github.com/users/krasserm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nspigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True, use_auth_token=True)\r\nsample = next(iter(spigi))\r\n```\r\n\r\n<details>\r\n<summary> Traceback </summary>\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:407, in HTTPFileSystem._info(self, url, **kwargs)\r\n 405 try:\r\n 406 info.update(\r\n--> 407 await _file_info(\r\n 408 self.encode_url(url),\r\n 409 size_policy=policy,\r\n 410 session=session,\r\n 411 **self.kwargs,\r\n 412 **kwargs,\r\n 413 )\r\n 414 )\r\n 415 if info.get(\"size\") is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:792, in _file_info(url, session, size_policy, **kwargs)\r\n 791 async with r:\r\n--> 792 r.raise_for_status()\r\n 794 # TODO:\r\n 795 # recognise lack of 'Accept-Ranges',\r\n 796 # or 'Accept-Ranges': 'none' (not 'bytes')\r\n 797 # to mean streaming only, no random access => return None\r\n\r\nFile ~/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py:1005, in ClientResponse.raise_for_status(self)\r\n 1004 self.release()\r\n-> 1005 raise ClientResponseError(\r\n 1006 self.request_info,\r\n 1007 self.history,\r\n 1008 status=self.status,\r\n 1009 message=self.reason,\r\n 1010 headers=self.headers,\r\n 1011 )\r\n\r\nClientResponseError: 403, message='Forbidden', 
url=URL('[https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8''dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX](https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX)')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[5], line 4\r\n 1 from datasets import load_dataset\r\n 3 spigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True)\r\n----> 4 sample = next(iter(spigi))\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:937, in IterableDataset.__iter__(self)\r\n 934 yield from self._iter_pytorch(ex_iterable)\r\n 935 return\r\n--> 937 for key, example in ex_iterable:\r\n 938 if self.features:\r\n 939 # `IterableDataset` automatically fills missing columns with None.\r\n 940 # This is done with `_apply_feature_types_on_example`.\r\n 941 yield _apply_feature_types_on_example(\r\n 942 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 943 )\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:113, in ExamplesIterable.__iter__(self)\r\n 112 def __iter__(self):\r\n--> 113 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/kensho--spgispeech/5fbf75dd9ef795a9b5a673457d2cbaf0b8fa0de8fb62acbd1da338d83a41e2f0/spgispeech.py:186, in Spgispeech._generate_examples(self, 
local_extracted_archive_paths, archives, meta_path)\r\n 183 dict_keys = [\"wav_filename\", \"wav_filesize\", \"transcript\"]\r\n 185 logging.info(\"Reading metadata...\")\r\n--> 186 with open(meta_path, encoding=\"utf-8\") as f:\r\n 187 csvreader = csv.DictReader(f, delimiter=\"|\")\r\n 188 metadata = {x[\"wav_filename\"]: dict((k, x[k]) for k in dict_keys) for x in csvreader}\r\n\r\nFile ~/datasets/src/datasets/streaming.py:70, in extend_module_for_streaming.<locals>.wrap_auth.<locals>.wrapper(*args, **kwargs)\r\n 68 @wraps(function)\r\n 69 def wrapper(*args, **kwargs):\r\n---> 70 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile ~/datasets/src/datasets/download/streaming_download_manager.py:495, in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 493 kwargs = {**kwargs, **new_kwargs}\r\n 494 try:\r\n--> 495 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n 496 except ValueError as e:\r\n 497 if str(e) == \"Cannot seek streaming HTTP file\":\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:135, in OpenFile.open(self)\r\n 128 def open(self):\r\n 129 \"\"\"Materialise this as a real open file without context\r\n 130 \r\n 131 The OpenFile object should be explicitly closed to avoid enclosed file\r\n 132 instances persisting. You must, therefore, keep a reference to the OpenFile\r\n 133 during the life of the file-like it generates.\r\n 134 \"\"\"\r\n--> 135 return self.__enter__()\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/spec.py:1106, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1104 else:\r\n 1105 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1106 f = self._open(\r\n 1107 path,\r\n 1108 mode=mode,\r\n 1109 block_size=block_size,\r\n 1110 autocommit=ac,\r\n 1111 cache_options=cache_options,\r\n 1112 **kwargs,\r\n 1113 )\r\n 1114 if compression is not None:\r\n 1115 from fsspec.compression import compr\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:346, in HTTPFileSystem._open(self, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 344 kw[\"asynchronous\"] = self.asynchronous\r\n 345 kw.update(kwargs)\r\n--> 346 size = size or self.info(path, **kwargs)[\"size\"]\r\n 347 session = sync(self.loop, self.set_session)\r\n 348 if block_size and size:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:113, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 110 @functools.wraps(func)\r\n 111 def wrapper(*args, **kwargs):\r\n 112 self = obj or args[0]\r\n--> 113 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:98, in sync(loop, func, timeout, *args, **kwargs)\r\n 96 raise FSTimeoutError from return_result\r\n 97 elif isinstance(return_result, BaseException):\r\n---> 98 raise return_result\r\n 99 else:\r\n 100 return return_result\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:53, in _runner(event, coro, result, timeout)\r\n 51 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 52 try:\r\n---> 53 result[0] = await coro\r\n 54 except Exception as ex:\r\n 55 result[0] = ex\r\n\r\nFile 
~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:420, in HTTPFileSystem._info(self, url, **kwargs)\r\n 417 except Exception as exc:\r\n 418 if policy == \"get\":\r\n 419 # If get failed, then raise a FileNotFoundError\r\n--> 420 raise FileNotFoundError(url) from exc\r\n 421 logger.debug(str(exc))\r\n 423 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/kensho/spgispeech/resolve/main/data/meta/dev.csv\r\n```\r\n</details>",
"Hi ! We're investigating this issue, sorry for the inconvenience",
"This has been resolved ! Thanks for reporting",
"Wow, thanks for the very quick fix!",
"This problem now appears again, this time with an underlying HTTP 502 status code:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')\r\n```",
"Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-validation.00002-of-00008.json.gz%3B+filename%3D%22c4-validation.00002-of-00008.json.gz%22%3B&response-content-type=application/gzip&Expires=1677571273&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvNGJmNmIyNDhiMGY5MTBkY2RlMmNkZjIxMThkNjM2OWQ4MjA4YzhmOTUxNWVjMjlhYjczZTUzMWYzODBiMThlMj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzU3MTI3M319fV19&Signature=WW42NOKkLuX~xVB1QfbkqzdvGo2AOXpgbF3PjTXy6iKd~ffilr1N9ScPXfvTXqy5yvdhJg1G0xJy1zYtUjGAL8GEx3Av-0vIhpWMGYTM8XKEU5gYA9qt30oVtNph6TkTYSABrsYTaj-hzQL9WCgyapmjvG69ETMh4wj44r2rcbk4T3j0l6l4u76Gh~lyRSll3aK4qycdUwcyL7FECDu~0W1mJIJwKkCrWHhSpHJSshb-0ElwG71pq4eyQ5g2uxHdK6JbRF7loxUpRQQJ1vlk0EHXdw0wTMaQ9tqHy6xcrQd8Ep0Yvx3tUD8MR0vWOcbQKnL6LwPQByc8tkChlpjnig__&Key-Pair-Id=KVTP0A1DKRTAX')\r\n```",
"I'm facing the same problem. Interestingly using `wget` I can download the file. ",
"It's been resolved again ;)",
"> It's been resolved again ;)\r\n\r\nI'm experiencing the same issue when trying to load this dataset, `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/realnewslike/c4-train.00000-of-00512.json.gz`",
"Experiencing the same issues as above : `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.`\r\n\r\nHave made sure to login as well, issue persists.",
"> Experiencing the same issues as above : `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz If the repo is private or gated, make sure to log in with `huggingface-cli login`.`\r\n> \r\n> Have made sure to login as well, issue persists.\r\n\r\nI meet the same issue",
"I meet the same issue"
] | 2023-02-24T07:57:32 | 2023-12-18T07:32:32 | 2023-02-27T04:03:38 |
NONE
| null | null | null |
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", streaming=True)
next(iter(dataset))
```
causes a
```
FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz
```
I can download this file manually though e.g. by entering this URL in a browser.
There is an underlying HTTP 403 status code:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/8ef8d75b0e045dec4aa5123a671b4564466b0707086a7ed1ba8721626dfffbc9?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-train.00000-of-01024.json.gz%3B+filename%3D%22c4-train.00000-of-01024.json.gz%22%3B&response-content-type=application/gzip&Expires=1677483770&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvOGVmOGQ3NWIwZTA0NWRlYzRhYTUxMjNhNjcxYjQ1NjQ0NjZiMDcwNzA4NmE3ZWQxYmE4NzIxNjI2ZGZmZmJjOT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzQ4Mzc3MH19fV19&Signature=yjL3UeY72cf2xpnvPvD68eAYOEe2qtaUJV55sB-jnPskBJEMwpMJcBZvg2~GqXZdM3O-GWV-Z3CI~d4u5VCb4YZ-HlmOjr3VBYkvox2EKiXnBIhjMecf2UVUPtxhTa9kBVlWjqu4qKzB9gKXZF2Cwpp5ctLzapEaT2nnqF84RAL-rsqMA3I~M8vWWfivQsbBK63hMfgZqqKMgdWM0iKMaItveDl0ufQ29azMFmsR7qd8V7sU2Z-F1fAeohS8HpN9OOnClW34yi~YJ2AbgZJJBXA~qsylfVA0Qp7Q~yX~q4P8JF1vmJ2BjkiSbGrj3bAXOGugpOVU5msI52DT88yMdA__&Key-Pair-Id=KVTP0A1DKRTAX')
```
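Since the comments above indicate these 403/502 responses are transient and get resolved on the server side, a simple retry loop is a reasonable stopgap (a sketch, not part of the original report):

```python
import time

from datasets import load_dataset


def first_c4_example(retries: int = 5, wait_seconds: int = 60):
    # The FileNotFoundError wraps a transient HTTP error, so retrying after a pause can succeed.
    for attempt in range(1, retries + 1):
        try:
            dataset = load_dataset("c4", "en", split="train", streaming=True)
            return next(iter(dataset))
        except Exception as err:
            print(f"Attempt {attempt} failed: {err}")
            time.sleep(wait_seconds)
    raise RuntimeError("c4 streaming still failing after several retries")


print(first_c4_example())
```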
### Expected behavior
This should retrieve the first example from the C4 validation set. This worked a few days ago but stopped working now.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5574/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5572/events
|
https://github.com/huggingface/datasets/issues/5572
| 1,597,257,624 |
I_kwDODunzps5fNDeY
| 5,572 |
Datasets 2.10.0 does not reuse the dataset cache
|
{
"login": "lsb",
"id": 45281,
"node_id": "MDQ6VXNlcjQ1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsb",
"html_url": "https://github.com/lsb",
"followers_url": "https://api.github.com/users/lsb/followers",
"following_url": "https://api.github.com/users/lsb/following{/other_user}",
"gists_url": "https://api.github.com/users/lsb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lsb/subscriptions",
"organizations_url": "https://api.github.com/users/lsb/orgs",
"repos_url": "https://api.github.com/users/lsb/repos",
"events_url": "https://api.github.com/users/lsb/events{/privacy}",
"received_events_url": "https://api.github.com/users/lsb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2023-02-23T17:28:11 | 2023-02-23T18:03:55 | 2023-02-23T18:03:55 |
NONE
| null | null | null |
### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, after losing the internet connection, trying to load the same dataset a second time (within ten seconds of the first load) results in a connection error, with the following traceback:
```
File ~/jupyterlab/.direnv/python-3.9.6/lib/python3.9/site-packages/datasets/load.py:1174, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1165 except Exception as e: # noqa: catch any exception of hf_hub and consider that the dataset doesn't exist
1166 if isinstance(
1167 e,
1168 (
(...)
1172 ),
1173 ):
-> 1174 raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
1175 elif "404" in str(e):
1176 msg = f"Dataset '{path}' doesn't exist on the Hub"
ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
This has been around since at least v2.0.
### Steps to reproduce the bug
```
from datasets import load_dataset
import numpy as np
tenk = load_dataset("lsb/tenk") # ten thousand integers
print(np.average(tenk['train']['a'])) # prints 4999.5
### now disconnect your internet
tenk_too = load_dataset("lsb/tenk", download_mode="reuse_dataset_if_exists")
# Raises ConnectionError: Couldn't reach 'lsb/tenk' on the Hub (ConnectionError)
```
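A sketch of a workaround, assuming the dataset was fully cached by the first call: enable offline mode so `datasets` skips the Hub lookup and reuses the local cache.

```python
import os

# Must be set before importing datasets; equivalently, export HF_DATASETS_OFFLINE=1 in the shell.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset  # noqa: E402
import numpy as np  # noqa: E402

tenk_too = load_dataset("lsb/tenk")  # served from the local cache, no Hub round-trip
print(np.average(tenk_too["train"]["a"]))
```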
### Expected behavior
I expected that I would be able to reuse the dataset I just downloaded.
### Environment info
- `datasets` version: 2.10.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5572/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5571
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5571/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5571/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5571/events
|
https://github.com/huggingface/datasets/issues/5571
| 1,597,198,953 |
I_kwDODunzps5fM1Jp
| 5,571 |
load_dataset fails for JSON in windows
|
{
"login": "abinashsahu",
"id": 11876897,
"node_id": "MDQ6VXNlcjExODc2ODk3",
"avatar_url": "https://avatars.githubusercontent.com/u/11876897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abinashsahu",
"html_url": "https://github.com/abinashsahu",
"followers_url": "https://api.github.com/users/abinashsahu/followers",
"following_url": "https://api.github.com/users/abinashsahu/following{/other_user}",
"gists_url": "https://api.github.com/users/abinashsahu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abinashsahu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abinashsahu/subscriptions",
"organizations_url": "https://api.github.com/users/abinashsahu/orgs",
"repos_url": "https://api.github.com/users/abinashsahu/repos",
"events_url": "https://api.github.com/users/abinashsahu/events{/privacy}",
"received_events_url": "https://api.github.com/users/abinashsahu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n",
"Thanks it worked!"
] | 2023-02-23T16:50:11 | 2023-02-24T13:21:47 | 2023-02-24T13:21:47 |
NONE
| null | null | null |
### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using the dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine and saved it at, say, r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of the Python file is different from the location of the JSON.
4. When I read it using load_dataset("json", args.input_json), it throws an error from builder.py:
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
5. When I bring the data to the current directory, it works fine.
### Steps to reproduce the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using the dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine and saved it at, say, r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of the Python file is different from the location of the JSON.
4. When I read it using load_dataset("json", args.input_json), it throws an error from builder.py:
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
5. When I bring the data to the current directory, it works fine.
### Expected behavior
Should be able to read from a path different from the current directory on a Windows machine.
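For reference, passing the file explicitly via `data_files` (as suggested in the comments) avoids the config-name check entirely; a minimal sketch, where the path is just an example:
```python
from datasets import load_dataset

# Minimal sketch: pass the Windows path via `data_files` instead of as the
# first positional argument, so it is not interpreted as a config name.
# The path below is only an example.
ds = load_dataset("json", data_files=r"C:\Users\name\file.json")
print(ds)
```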
### Environment info
datasets version: 2.3.1
python version: 3.8
Windows OS
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5571/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5570
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5570/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5570/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5570/events
|
https://github.com/huggingface/datasets/issues/5570
| 1,597,190,926 |
I_kwDODunzps5fMzMO
| 5,570 |
load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
|
{
"login": "buoi",
"id": 38630200,
"node_id": "MDQ6VXNlcjM4NjMwMjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/38630200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buoi",
"html_url": "https://github.com/buoi",
"followers_url": "https://api.github.com/users/buoi/followers",
"following_url": "https://api.github.com/users/buoi/following{/other_user}",
"gists_url": "https://api.github.com/users/buoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buoi/subscriptions",
"organizations_url": "https://api.github.com/users/buoi/orgs",
"repos_url": "https://api.github.com/users/buoi/repos",
"events_url": "https://api.github.com/users/buoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/buoi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it?",
"The error is now more informative:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\n"
] | 2023-02-23T16:44:32 | 2023-07-24T15:18:50 | 2023-07-24T15:18:50 |
NONE
| null | null | null |
### Describe the bug
When calling ```load_dataset('imagenet-1k')```, a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the license on the Hub. There is no error once the license is accepted.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet = load_dataset("imagenet-1k", split="train", streaming=True)
FileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub
```
tested on a colab notebook.
### Expected behavior
I would expect a specific error indicating that I have to log in and then accept the dataset license.
I find this bug very relevant, as this code appears in a guide in the [Huggingface documentation for Datasets](https://huggingface.co/docs/datasets/about_mapstyle_vs_iterable)
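For reference, a minimal sketch of what currently makes the call work, assuming the imagenet-1k license has been accepted on the Hub and `huggingface-cli login` has been run (`use_auth_token=True` forwards the stored token so the gated repo is visible):
```python
from datasets import load_dataset

# Minimal sketch; assumes the imagenet-1k license has already been accepted
# on the Hub and a token is available from `huggingface-cli login`.
imagenet = load_dataset(
    "imagenet-1k",
    split="train",
    streaming=True,
    use_auth_token=True,  # forward the stored token for the gated repo
)
print(next(iter(imagenet)).keys())
```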
### Environment info
google colab cpu-only instance
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5570/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5568
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5568/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5568/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5568/events
|
https://github.com/huggingface/datasets/issues/5568
| 1,596,900,532 |
I_kwDODunzps5fLsS0
| 5,568 |
dataset.to_iterable_dataset() loses useful info like dataset features
|
{
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false |
{
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.\r\n\r\nSetting this as a good first issue if someone would like to contribute, otherwise we can take care of it :)",
"#self-assign",
"seems like the feature parameter is missing from `return IterableDataset.from_generator(Dataset._iter_shards, gen_kwargs={\"shards\": shards})` hence it defaults to None."
] | 2023-02-23T13:45:33 | 2023-02-24T13:22:36 | 2023-02-24T13:22:36 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map-style dataset into an iterable dataset, you lose valuable metadata like the features.
This metadata is useful if you want to interleave iterable datasets, cast columns, etc.
### Steps to reproduce the bug
```python
dataset = load_dataset("lhoestq/demo1")["train"]
print(dataset.features)
# {'id': Value(dtype='string', id=None), 'package_name': Value(dtype='string', id=None), 'review': Value(dtype='string', id=None), 'date': Value(dtype='string', id=None), 'star': Value(dtype='int64', id=None), 'version_id': Value(dtype='int64', id=None)}
dataset = dataset.to_iterable_dataset()
print(dataset.features)
# None
```
### Expected behavior
Keep the relevant information
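In the meantime, a possible workaround is to rebuild the iterable dataset with `IterableDataset.from_generator` and pass the features explicitly (the same parameter the comments point out is currently not forwarded). A minimal sketch:
```python
from datasets import IterableDataset, load_dataset

dataset = load_dataset("lhoestq/demo1")["train"]

def gen():
    # yield the examples of the map-style dataset one by one
    yield from dataset

# Pass the original features explicitly so they are not lost.
iterable_dataset = IterableDataset.from_generator(gen, features=dataset.features)
print(iterable_dataset.features)  # same features as the map-style dataset
```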
### Environment info
datasets==2.10.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5568/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5566
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5566/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5566/events
|
https://github.com/huggingface/datasets/issues/5566
| 1,595,916,674 |
I_kwDODunzps5fH8GC
| 5,566 |
Directly reading parquet files in a s3 bucket from the load_dataset method
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"Hi ! I think is in the scope of this other issue: to https://github.com/huggingface/datasets/issues/5281 "
] | 2023-02-22T22:13:40 | 2023-02-23T11:03:29 | null |
NONE
| null | null | null |
### Feature request
Right now, we have to download the parquet files to local storage first. So having the ability to read them directly from the S3 bucket address would be beneficial.
### Motivation
In a production setup, this feature can help us a lot, since we would not need to move training data files between storage systems.
### Your contribution
I am willing to help if there's any way I can.
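In the meantime, a minimal sketch of reading the parquet files straight from S3 and building a `Dataset` from them, without copying anything by hand. The bucket path is hypothetical and this assumes `s3fs` is installed and AWS credentials are configured in the environment:
```python
import pandas as pd
from datasets import Dataset

# Hypothetical bucket/key; assumes `s3fs` is installed and AWS credentials
# are available so pandas/fsspec can open the s3:// URL directly.
df = pd.read_parquet("s3://my-bucket/path/to/train.parquet")

# Build an in-memory Dataset from the dataframe.
ds = Dataset.from_pandas(df)
print(ds)
```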
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5566/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5555/events
|
https://github.com/huggingface/datasets/issues/5555
| 1,592,469,938 |
I_kwDODunzps5e6ymy
| 5,555 |
`.shuffle` throwing error `ValueError: Protocol not known: parent`
|
{
"login": "prabhakar267",
"id": 10768588,
"node_id": "MDQ6VXNlcjEwNzY4NTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabhakar267",
"html_url": "https://github.com/prabhakar267",
"followers_url": "https://api.github.com/users/prabhakar267/followers",
"following_url": "https://api.github.com/users/prabhakar267/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions",
"organizations_url": "https://api.github.com/users/prabhakar267/orgs",
"repos_url": "https://api.github.com/users/prabhakar267/repos",
"events_url": "https://api.github.com/users/prabhakar267/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabhakar267/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```",
"```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese are the actual paths where `.hf` files are stored. ",
"I'm not aware of any `.hf` file ? What are you referring to ?\r\n\r\nAlso the error says \"Protocol unknown: parent\". Is there a chance you may have ended up with a path that contains this string `parent://` ?",
"I figured out why the issue was occuring but don't know the long-term fix.\r\nThe dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.\r\nQuick fix is to not use colons in filename. But if this is expected behaviour, this should be clearly stated in the documentation.\r\nThanks for help @lhoestq "
] | 2023-02-20T21:33:45 | 2023-02-27T09:23:34 | null |
NONE
| null | null | null |
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3616, in Dataset.shuffle(self, seed, generator, keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint)
3610 return self._new_dataset_with_indices(
3611 fingerprint=new_fingerprint, indices_cache_file_name=indices_cache_file_name
3612 )
3614 permutation = generator.permutation(len(self))
-> 3616 return self.select(
3617 indices=permutation,
3618 keep_in_memory=keep_in_memory,
3619 indices_cache_file_name=indices_cache_file_name if not keep_in_memory else None,
3620 writer_batch_size=writer_batch_size,
3621 new_fingerprint=new_fingerprint,
3622 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3266, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3263 return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
3265 # If not contiguous, we need to create a new indices mapping
-> 3266 return self._select_with_indices_mapping(
3267 indices,
3268 keep_in_memory=keep_in_memory,
3269 indices_cache_file_name=indices_cache_file_name,
3270 writer_batch_size=writer_batch_size,
3271 new_fingerprint=new_fingerprint,
3272 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:551, in transmit_format.<locals>.wrapper(*args, **kwargs)
544 self_format = {
545 "type": self._format_type,
546 "format_kwargs": self._format_kwargs,
547 "columns": self._format_columns,
548 "output_all_columns": self._output_all_columns,
549 }
550 # apply actual function
--> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
553 # re-apply format to the output
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_dataset.py:3389, in Dataset._select_with_indices_mapping(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3387 logger.info(f"Caching indices mapping at {indices_cache_file_name}")
3388 tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
-> 3389 writer = ArrowWriter(
3390 path=tmp_file.name, writer_batch_size=writer_batch_size, fingerprint=new_fingerprint, unit="indices"
3391 )
3393 indices = indices if isinstance(indices, list) else list(indices)
3395 size = len(self)
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/datasets/arrow_writer.py:315, in ArrowWriter.__init__(self, schema, features, path, stream, fingerprint, writer_batch_size, hash_salt, check_duplicates, disable_nullable, update_features, with_metadata, unit, embed_local_files, storage_options)
312 self._disable_nullable = disable_nullable
314 if stream is None:
--> 315 fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options)
316 self._fs: fsspec.AbstractFileSystem = fs_token_paths[0]
317 self._path = (
318 fs_token_paths[2][0]
319 if not is_remote_filesystem(self._fs)
320 else self._fs.unstrip_protocol(fs_token_paths[2][0])
321 )
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:593, in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol, expand)
591 else:
592 urlpath = stringify_path(urlpath)
--> 593 chain = _un_chain(urlpath, storage_options or {})
594 if len(chain) > 1:
595 inkwargs = {}
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/core.py:330, in _un_chain(path, kwargs)
328 for bit in reversed(bits):
329 protocol = split_protocol(bit)[0] or "file"
--> 330 cls = get_filesystem_class(protocol)
331 extra_kwargs = cls._get_kwargs_from_urls(bit)
332 kws = kwargs.get(protocol, {})
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/fsspec/registry.py:240, in get_filesystem_class(protocol)
238 if protocol not in registry:
239 if protocol not in known_implementations:
--> 240 raise ValueError("Protocol not known: %s" % protocol)
241 bit = known_implementations[protocol]
242 try:
ValueError: Protocol not known: parent
```
This is what the `train_dataset` object looks like
```
Dataset({
features: ['label', 'input_ids', 'attention_mask'],
num_rows: 364166
})
```
### Steps to reproduce the bug
The `train_dataset` object is created by concatenating two datasets.
Then `shuffle` is called, but it throws the error above.
### Expected behavior
Should shuffle the dataset properly.
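As noted in the comments, the root cause turned out to be a `::` in the path the dataset was saved to, which fsspec interprets as a protocol chain. A minimal sketch of a defensive workaround, with a hypothetical directory name:
```python
from datasets import Dataset

# Minimal sketch: avoid "::" in the directory the dataset is saved to, since
# fsspec treats "::" in a path as a protocol chain separator.
bad_dir = "dataset_shard::0"            # hypothetical problematic name
safe_dir = bad_dir.replace("::", "__")  # sanitized name

ds = Dataset.from_dict({"label": [0, 1], "input_ids": [[1, 2], [3, 4]]})
ds.save_to_disk(safe_dir)
reloaded = Dataset.load_from_disk(safe_dir)
print(reloaded.shuffle()[0])            # shuffle works on the reloaded dataset
```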
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.13
- PyArrow version: 10.0.0
- Pandas version: 1.4.4
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5555/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5548
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5548/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5548/events
|
https://github.com/huggingface/datasets/issues/5548
| 1,590,835,479 |
I_kwDODunzps5e0jkX
| 5,548 |
Apply flake8-comprehensions to codebase
|
{
"login": "Skylion007",
"id": 2053727,
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Skylion007",
"html_url": "https://github.com/Skylion007",
"followers_url": "https://api.github.com/users/Skylion007/followers",
"following_url": "https://api.github.com/users/Skylion007/following{/other_user}",
"gists_url": "https://api.github.com/users/Skylion007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Skylion007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Skylion007/subscriptions",
"organizations_url": "https://api.github.com/users/Skylion007/orgs",
"repos_url": "https://api.github.com/users/Skylion007/repos",
"events_url": "https://api.github.com/users/Skylion007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Skylion007/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[] | 2023-02-19T20:05:38 | 2023-02-23T13:59:41 | 2023-02-23T13:59:41 |
CONTRIBUTOR
| null | null | null |
### Feature request
Apply ruff's flake8-comprehensions checks to the codebase.
### Motivation
This should strictly improve the performance and readability of the codebase by removing unnecessary iteration, function calls, etc., and should generate better Python bytecode.
I have already applied these fixes to PyTorch and SymPy with little issue and have opened PRs to do the same for diffusers and transformers.
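For context, a few illustrative before/after rewrites of the kind flake8-comprehensions (the C4 rules in ruff) flags; these snippets are hypothetical and not taken from the datasets codebase:
```python
# Illustrative examples only, not from the datasets codebase.
keys = ["a", "b", "c"]

# C400: unnecessary generator passed to list()
slow_list = list(k.upper() for k in keys)
fast_list = [k.upper() for k in keys]

# C404: unnecessary list comprehension passed to dict()
slow_dict = dict([(k, len(k)) for k in keys])
fast_dict = {k: len(k) for k in keys}

# C408: unnecessary dict() call with keyword arguments
slow_cfg = dict(batch_size=8, shuffle=True)
fast_cfg = {"batch_size": 8, "shuffle": True}

assert slow_list == fast_list and slow_dict == fast_dict and slow_cfg == fast_cfg
```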
### Your contribution
Making a PR.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5548/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5546
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5546/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5546/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5546/events
|
https://github.com/huggingface/datasets/issues/5546
| 1,590,346,349 |
I_kwDODunzps5eysJt
| 5,546 |
Downloaded datasets do not cache at $HF_HOME
|
{
"login": "ErfanMoosaviMonazzah",
"id": 79091831,
"node_id": "MDQ6VXNlcjc5MDkxODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErfanMoosaviMonazzah",
"html_url": "https://github.com/ErfanMoosaviMonazzah",
"followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers",
"following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}",
"gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions",
"organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs",
"repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos",
"events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?\r\n\r\nThen you can print\r\n```python\r\nprint(datasets.config.HF_CACHE_HOME)\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n```"
] | 2023-02-18T13:30:35 | 2023-07-24T14:22:43 | 2023-07-24T14:22:43 |
NONE
| null | null | null |
### Describe the bug
In the Hugging Face course (https://huggingface.co/course/chapter3/2?fw=pt) it is said that if we set HF_HOME, downloaded datasets will be cached at the specified location, but they are not. Models downloaded from checkpoint names are downloaded and cached at HF_HOME, but this is not the case for datasets: they are still cached at ~/.cache/huggingface/datasets.
### Steps to reproduce the bug
Run the following code
```
from datasets import load_dataset
raw_datasets = load_dataset("glue", "mrpc")
raw_datasets
```
It downloads and stores the dataset at ~/.cache/huggingface/datasets.
### Expected behavior
The dataset should be cached at HF_HOME.
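Based on the comment above, a minimal sketch of two ways that do put the datasets cache where I want it (the paths are just examples): set HF_HOME before importing `datasets`, or pass `cache_dir` explicitly:
```python
# Option 1: set HF_HOME *before* importing datasets, so the cache location is
# picked up at import time. The path is only an example.
import os

os.environ["HF_HOME"] = "/mnt/data/hf_home"

import datasets
from datasets import load_dataset

print(datasets.config.HF_DATASETS_CACHE)  # should live under /mnt/data/hf_home

# Option 2: override the cache location for a single call with `cache_dir`.
raw_datasets = load_dataset("glue", "mrpc", cache_dir="/mnt/data/hf_datasets_cache")
```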
### Environment info
python 3.10.6
Kubuntu 22.04
HF_HOME located on a separate partition
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5546/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5543/events
|
https://github.com/huggingface/datasets/issues/5543
| 1,588,951,379 |
I_kwDODunzps5etXlT
| 5,543 |
the pile datasets url seems to change back
|
{
"login": "wjfwzzc",
"id": 5126316,
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjfwzzc",
"html_url": "https://github.com/wjfwzzc",
"followers_url": "https://api.github.com/users/wjfwzzc/followers",
"following_url": "https://api.github.com/users/wjfwzzc/following{/other_user}",
"gists_url": "https://api.github.com/users/wjfwzzc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wjfwzzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjfwzzc/subscriptions",
"organizations_url": "https://api.github.com/users/wjfwzzc/orgs",
"repos_url": "https://api.github.com/users/wjfwzzc/repos",
"events_url": "https://api.github.com/users/wjfwzzc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wjfwzzc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting, @wjfwzzc.\r\n\r\nI am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1",
"Thank you. All fixes are done:\r\n- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2\r\n- [x] https://huggingface.co/datasets/the_pile/discussions/1\r\n- [x] https://huggingface.co/datasets/the_pile_books3/discussions/1\r\n- [x] https://huggingface.co/datasets/the_pile_openwebtext2/discussions/2\r\n- [x] https://huggingface.co/datasets/the_pile_stack_exchange/discussions/2"
] | 2023-02-17T08:40:11 | 2023-02-21T06:37:00 | 2023-02-20T08:41:33 |
NONE
| null | null | null |
### Describe the bug
In #3627, the host URL of the Pile dataset was changed to `https://mystic.the-eye.eu`. Now the new URL is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
ConnectionError: Couldn't reach https://mystic.the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz (ProxyError(MaxRetryError("HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_pr
eliminary_components/books1.tar.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 504 Gateway Timeout')))")))
```
### Expected behavior
Downloading as normal.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 6.0.1
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5543/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5541
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5541/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5541/events
|
https://github.com/huggingface/datasets/issues/5541
| 1,588,633,555 |
I_kwDODunzps5esJ_T
| 5,541 |
Flattening indices in selected datasets is extremely inefficient
|
{
"login": "marioga",
"id": 6591505,
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marioga",
"html_url": "https://github.com/marioga",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"repos_url": "https://api.github.com/users/marioga/repos",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0.014899 s\r\nNum chunks for original ds after reloading: 5000\r\n\r\nNum chunks for selected ds: 1\r\nflatten_indices -- RAM memory used: 42.546875 MB -- Total time: 23.735089 s\r\nNum chunks for selected ds after flattening: 5000\r\n\r\nSelected ds save/load\r\nsave_to_disk -- RAM memory used: 0.0 MB -- Total time: 0.287112 s\r\nload_from_disk -- RAM memory used: 38.84375 MB -- Total time: 0.014772 s\r\nNum chunks for selected ds after reloading: 5000\r\n```",
"Wouahouh super cool @marioga thanks a lot!",
"We just released `datasets==2.10.0` with this big improvement, thanks again @marioga "
] | 2023-02-17T01:52:24 | 2023-02-22T13:15:20 | 2023-02-17T11:12:33 |
CONTRIBUTOR
| null | null | null |
### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset, we end up with a dataset with an `indices_table`. Currently, flattening such a dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down operations on the flat dataset, e.g., saving/loading the dataset to disk becomes really slow.
Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping.
### Steps to reproduce the bug
The following script reproduces the issue:
```python
import gc
import os
import psutil
import tempfile
import time
from datasets import Dataset
DATASET_SIZE = 5000000
def profile(func):
def wrapper(*args, **kwargs):
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
start = time.time()
# Run function here
out = func(*args, **kwargs)
end = time.time()
mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s")
return out
return wrapper
def main():
ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)])
print(f"Num chunks for original ds: {ds.data['col'].num_chunks}")
with tempfile.TemporaryDirectory() as tmpdir:
path1 = os.path.join(tmpdir, 'ds1')
print("Original ds save/load")
profile(ds.save_to_disk)(path1)
ds_loaded = profile(Dataset.load_from_disk)(path1)
print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}")
print("")
ds_select = ds.select(reversed(range(len(ds))))
print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}")
del ds
del ds_loaded
gc.collect()
# This would happen anyway when we call save_to_disk
ds_select = profile(ds_select.flatten_indices)()
print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}")
print("")
path2 = os.path.join(tmpdir, 'ds2')
print("Selected ds save/load")
profile(ds_select.save_to_disk)(path2)
del ds_select
gc.collect()
ds_select_loaded = profile(Dataset.load_from_disk)(path2)
print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}")
if __name__ == '__main__':
main()
```
Sample result:
```
Num chunks for original ds: 1
Original ds save/load
save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s
load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s
Num chunks for original ds after reloading: 5000
Num chunks for selected ds: 1
flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s
Num chunks for selected ds after flattening: 5000000
Selected ds save/load
save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s
load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s
Num chunks for selected ds after reloading: 5000000
```
### Expected behavior
Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping.
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5541/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5539
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5539/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5539/events
|
https://github.com/huggingface/datasets/issues/5539
| 1,587,970,083 |
I_kwDODunzps5epoAj
| 5,539 |
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
|
{
"login": "aalbersk",
"id": 41912135,
"node_id": "MDQ6VXNlcjQxOTEyMTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aalbersk",
"html_url": "https://github.com/aalbersk",
"followers_url": "https://api.github.com/users/aalbersk/followers",
"following_url": "https://api.github.com/users/aalbersk/following{/other_user}",
"gists_url": "https://api.github.com/users/aalbersk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aalbersk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aalbersk/subscriptions",
"organizations_url": "https://api.github.com/users/aalbersk/orgs",
"repos_url": "https://api.github.com/users/aalbersk/repos",
"events_url": "https://api.github.com/users/aalbersk/events{/privacy}",
"received_events_url": "https://api.github.com/users/aalbersk/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false | null |
[] | null |
[
"Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\ndef t(batch):\r\n return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n \r\ndataset.set_transform(t)\r\nd_0 = dataset[0]\r\n```\r\n\r\nStill, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.",
"I can take this",
"Fixed in #5553 ",
"> Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> import torch\r\n> \r\n> dataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\n> def t(batch):\r\n> return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n> \r\n> dataset.set_transform(t)\r\n> d_0 = dataset[0]\r\n> ```\r\n> \r\n> Still, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.\r\n\r\nok, will change it according to suggestion. Thanks for the reply!"
] | 2023-02-16T16:08:51 | 2023-02-22T10:30:30 | 2023-02-21T13:03:57 |
NONE
| null | null | null |
### Describe the bug
When a dataset contains a 0-dim tensor, formatting.py raises the following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in _unnest
return {key: array[0] for key, array in py_dict.items()}
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in <dictcomp>
return {key: array[0] for key, array in py_dict.items()}
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
```
### Steps to reproduce the bug
Load any dataset and set a transform method that adds a 0-dim tensor, or create/find a dataset that already contains a 0-dim tensor. E.g.
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(batch):
return {"test": torch.tensor(1)}
dataset.set_transform(t)
d_0 = dataset[0]
```
### Expected behavior
The extractor should correctly get a row from the dataset, even if it contains a 0-dim tensor.
### Environment info
`datasets==2.8.0`, but it looks like it is also applicable to main branch version (as of 16th February)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5539/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5538/events
|
https://github.com/huggingface/datasets/issues/5538
| 1,587,732,596 |
I_kwDODunzps5eouB0
| 5,538 |
load_dataset in seaborn is not working for me. getting this error.
|
{
"login": "reemaranibarik",
"id": 125575109,
"node_id": "U_kgDOB3wfxQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125575109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reemaranibarik",
"html_url": "https://github.com/reemaranibarik",
"followers_url": "https://api.github.com/users/reemaranibarik/followers",
"following_url": "https://api.github.com/users/reemaranibarik/following{/other_user}",
"gists_url": "https://api.github.com/users/reemaranibarik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reemaranibarik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reemaranibarik/subscriptions",
"organizations_url": "https://api.github.com/users/reemaranibarik/orgs",
"repos_url": "https://api.github.com/users/reemaranibarik/repos",
"events_url": "https://api.github.com/users/reemaranibarik/events{/privacy}",
"received_events_url": "https://api.github.com/users/reemaranibarik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead."
] | 2023-02-16T14:01:58 | 2023-02-16T14:44:36 | 2023-02-16T14:44:36 |
NONE
| null | null | null |
TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chunked=req.has_header('Transfer-encoding'))
~\anaconda3\lib\http\client.py in request(self, method, url, body, headers, encode_chunked)
1278 """Send a complete request to the server."""
-> 1279 self._send_request(method, url, body, headers, encode_chunked)
1280
~\anaconda3\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked)
1324 body = _encode(body, 'body')
-> 1325 self.endheaders(body, encode_chunked=encode_chunked)
1326
~\anaconda3\lib\http\client.py in endheaders(self, message_body, encode_chunked)
1273 raise CannotSendHeader()
-> 1274 self._send_output(message_body, encode_chunked=encode_chunked)
1275
~\anaconda3\lib\http\client.py in _send_output(self, message_body, encode_chunked)
1033 del self._buffer[:]
-> 1034 self.send(msg)
1035
~\anaconda3\lib\http\client.py in send(self, data)
973 if self.auto_open:
--> 974 self.connect()
975 else:
~\anaconda3\lib\http\client.py in connect(self)
1440
-> 1441 super().connect()
1442
~\anaconda3\lib\http\client.py in connect(self)
944 """Connect to the host and port specified in __init__."""
--> 945 self.sock = self._create_connection(
946 (self.host,self.port), self.timeout, self.source_address)
~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address)
843 try:
--> 844 raise err
845 finally:
~\anaconda3\lib\socket.py in create_connection(address, timeout, source_address)
831 sock.bind(source_address)
--> 832 sock.connect(sa)
833 # Break explicitly a reference cycle
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
URLError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_12220/2927704185.py in <module>
1 import seaborn as sn
----> 2 iris = sn.load_dataset('iris')
~\anaconda3\lib\site-packages\seaborn\utils.py in load_dataset(name, cache, data_home, **kws)
594 if name not in get_dataset_names():
595 raise ValueError(f"'{name}' is not one of the example datasets.")
--> 596 urlretrieve(url, cache_path)
597 full_path = cache_path
598 else:
~\anaconda3\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data)
237 url_type, path = _splittype(url)
238
--> 239 with contextlib.closing(urlopen(url, data)) as fp:
240 headers = fp.info()
241
~\anaconda3\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
212 else:
213 opener = _opener
--> 214 return opener.open(url, data, timeout)
215
216 def install_opener(opener):
~\anaconda3\lib\urllib\request.py in open(self, fullurl, data, timeout)
515
516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method())
--> 517 response = self._open(req, data)
518
519 # post-process response
~\anaconda3\lib\urllib\request.py in _open(self, req, data)
532
533 protocol = req.type
--> 534 result = self._call_chain(self.handle_open, protocol, protocol +
535 '_open', req)
536 if result:
~\anaconda3\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
492 for handler in handlers:
493 func = getattr(handler, meth_name)
--> 494 result = func(*args)
495 if result is not None:
496 return result
~\anaconda3\lib\urllib\request.py in https_open(self, req)
1387
1388 def https_open(self, req):
-> 1389 return self.do_open(http.client.HTTPSConnection, req,
1390 context=self._context, check_hostname=self._check_hostname)
1391
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1347 encode_chunked=req.has_header('Transfer-encoding'))
1348 except OSError as err: # timeout error
-> 1349 raise URLError(err)
1350 r = h.getresponse()
1351 except:
URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5538/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5537
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5537/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5537/events
|
https://github.com/huggingface/datasets/issues/5537
| 1,587,567,464 |
I_kwDODunzps5eoFto
| 5,537 |
Increase speed of data files resolution
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] |
closed
| false |
{
"login": "semajyllek",
"id": 35013374,
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semajyllek",
"html_url": "https://github.com/semajyllek",
"followers_url": "https://api.github.com/users/semajyllek/followers",
"following_url": "https://api.github.com/users/semajyllek/following{/other_user}",
"gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions",
"organizations_url": "https://api.github.com/users/semajyllek/orgs",
"repos_url": "https://api.github.com/users/semajyllek/repos",
"events_url": "https://api.github.com/users/semajyllek/events{/privacy}",
"received_events_url": "https://api.github.com/users/semajyllek/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "semajyllek",
"id": 35013374,
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semajyllek",
"html_url": "https://github.com/semajyllek",
"followers_url": "https://api.github.com/users/semajyllek/followers",
"following_url": "https://api.github.com/users/semajyllek/following{/other_user}",
"gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions",
"organizations_url": "https://api.github.com/users/semajyllek/orgs",
"repos_url": "https://api.github.com/users/semajyllek/repos",
"events_url": "https://api.github.com/users/semajyllek/events{/privacy}",
"received_events_url": "https://api.github.com/users/semajyllek/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"#self-assign",
"You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exactly what we want here!\r\n\r\nsee PR: https://github.com/huggingface/datasets/pull/5704\r\n\r\n",
"I think we can make the data files resolution (significantly) faster in 2 steps:\r\n\r\n1. `glob` calls `find` (which in turn calls `ls`), so we need `find` to be fast, and this can be achieved by fetching all the entries in a single API call and avoiding calls to `ls`. Implementing this for `HfFileSystem.find` (the one in `huggingface_hub`) is on my TO-DO list.\r\n2. caching the repeated `find` calls in `_get_data_files_patterns` when the `data_files` patterns are not provided in `load_dataset`. To address this, we can introduce a `_resolve_single_pattern` function that would accept a filesystem object and a list of regex patterns to resolve. Then we can wrap this filesystem object in `_get_data_files_patterns` with an object that would cache the find calls before resolving the patterns with `_resolve_single_pattern`. (Feel free to suggest a cleaner implementation)\r\n\r\nWDYT?",
"Good idea :) \r\n\r\nFor 2:\r\n\r\nThat would work ! It's also possible to have a FileSystem with a cache on `.find` and use it inside the resolver passed to `_get_data_files_patterns`. Right now they're pretty simple:\r\n\r\n```python\r\n# for remote repositories\r\nresolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info, base_path=base_path)\r\n# for local\r\nresolver = partial(_resolve_single_pattern_locally, base_path)\r\n```",
"something like this maybe (with Quentin's reimplementation of `HfFilesystem.find`)?\r\n\r\n ```\r\n @lru_cache(max_size=None)\r\n def _find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):\r\n```\r\n\r\nIn any case please let me know if I can help in any way!"
] | 2023-02-16T12:11:45 | 2023-12-15T13:12:31 | 2023-12-15T13:12:31 |
MEMBER
| null | null | null |
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `resolve_patterns_in_dataset_repository` which calls `_resolve_single_pattern_in_dataset_repository`, which iterates on all the files at
```python
glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
```
but calling `glob` on such a dataset is too expensive. Indeed it calls `ls()` in `hffilesystem.py` too many times.
Maybe `glob` can be further optimized in `hffilesystem.py`, or the data files resolution could be implemented directly in the filesystem by checking its `dir_cache`?
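
Not part of the original issue, but to make the caching idea from the comments above concrete, here is a minimal sketch that memoizes `find` calls on a generic fsspec filesystem; the `CachedFindFileSystem` name and structure are illustrative assumptions, not actual `datasets` internals:

```python
from functools import lru_cache

import fsspec


class CachedFindFileSystem:
    """Delegates to an fsspec filesystem but memoizes `find`, so resolving
    many data file patterns can reuse a single remote listing."""

    def __init__(self, fs: fsspec.AbstractFileSystem):
        self._fs = fs
        # cache the wrapped filesystem's find() keyed by path
        self._cached_find = lru_cache(maxsize=None)(self._fs.find)

    def find(self, path: str):
        # return a copy so callers can't mutate the cached list
        return list(self._cached_find(path))

    def __getattr__(self, name):
        # fall back to the wrapped filesystem for every other attribute/method
        return getattr(self._fs, name)
```

With a wrapper like this, the pattern resolver could match every pattern against one cached listing instead of re-listing the repository for each pattern.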
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5537/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5536/events
|
https://github.com/huggingface/datasets/issues/5536
| 1,586,930,643 |
I_kwDODunzps5elqPT
| 5,536 |
Failure to hash function when using .map()
|
{
"login": "venzen",
"id": 6916056,
"node_id": "MDQ6VXNlcjY5MTYwNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venzen",
"html_url": "https://github.com/venzen",
"followers_url": "https://api.github.com/users/venzen/followers",
"following_url": "https://api.github.com/users/venzen/following{/other_user}",
"gists_url": "https://api.github.com/users/venzen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/venzen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venzen/subscriptions",
"organizations_url": "https://api.github.com/users/venzen/orgs",
"repos_url": "https://api.github.com/users/venzen/repos",
"events_url": "https://api.github.com/users/venzen/events{/privacy}",
"received_events_url": "https://api.github.com/users/venzen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possible to cache the result of `map`, hence the warning message.\r\n\r\nYou can find more details about caching here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument.\r\nOr disable caching using\r\n```python\r\nimport datasets\r\ndatasets.disable_caching()\r\n```",
"@lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose. \r\n\r\nGreat job with huggingface! ",
"We made tiktoken tokenizers hashable in #5552, which is included in today's release `datasets==2.10.0`",
"Just a heads up that when I'm trying to use TikToken along with the a given Dataset `.map()` method, I am still met with the following error :\r\n\r\n```\r\n File \"/opt/conda/lib/python3.8/site-packages/dill/_dill.py\", line 388, in save\r\n StockPickler.save(self, obj, save_persistent_id)\r\n File \"/opt/conda/lib/python3.8/pickle.py\", line 578, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\n\r\nMy current environment is running datasets v2.10.0.",
"cc @mariosasko ",
"@lhoestq @edhenry I am also seeing this, do you have any suggested solution?",
"With which `datasets` version ? Can you try to udpate ?",
"@lhoestq @edhenry I am on datasets version `'2.12.0'. I see the same `TypeError: cannot pickle 'builtins.CoreBPE' object` that others are seeing.",
"I am able to reproduce this on datasets 2.14.2. The `datasets.disable_caching()` doesn't work around it.\r\n\r\n@lhoestq - you might want to reopen this issue. Because of this issue folks won't be able run Karpathy's NanoGPT :(.",
"update: temporarily solved the problem by setting\r\n```\r\n--preprocess_num_workers 1\r\n```\r\n\r\n-------------\r\nI have met the same problem, here is my env:\r\n```\r\ndatasets 2.14.4\r\ntransformers 4.31.0\r\ntiktoken 0.4.0\r\ntorch 1.13.1\r\n```",
"@mengban I cannot reproduce the issue even with these versions installed. It would help if you could provide info about your system and the `pip list` output.",
"@mariosasko Please take a look at this\r\n```python\r\nfrom typing import Any\r\nfrom datasets import Dataset\r\nimport tiktoken\r\n\r\ndataset = Dataset.from_list([{\"n\": str(i)} for i in range(20)])\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\n\r\n\r\nclass A:\r\n tokenizer = enc #tiktoken.get_encoding(\"gpt2\")\r\n\r\n def __call__(self, example) -> Any:\r\n ids = self.tokenizer.encode(example[\"n\"])\r\n example[\"len\"] = len(ids)\r\n return example\r\n\r\na = A()\r\n\r\ndef process(example):\r\n ids = a.tokenizer.encode(example[\"n\"])\r\n example[\"len\"] = len(ids)\r\n return example\r\n\r\n# success\r\ntokenized = dataset.map(process, desc=\"tiktoken\", num_proc=2)\r\n\r\n# raise TypeError: cannot pickle 'builtins.CoreBPE' object\r\ntokenized = dataset.map(a, desc=\"tiktoken\", num_proc=2)\r\n```\r\n\r\npip list\r\n```\r\ndatasets 2.14.4\r\ntiktoken 0.4.0\r\n```",
"Thanks @maxwellzh! Our `Hasher` works with this snippet, but the problem is running multiprocessing with a non-serializable `tiktoken.Encoding` object.\r\n\r\nInserting the following code before the `map` should fix this:\r\n```python\r\nimport copyreg\r\n\r\ndef pickle_Encoding(enc):\r\n return (functools.partial(tiktoken.core.Encoding, enc.name, pat_str=enc._pat_str, mergeable_ranks=enc._mergeable_ranks, special_tokens=enc._special_tokens), ())\r\n\r\ncopyreg.pickle(tiktoken.core.Encoding, pickle_Encoding)\r\n```\r\n\r\nBut the best fix would be implementing `__reduce__` for `tiktoken.Encoding` or `tiktoken.CoreBPE`. If I find time, I'll try to fix this in the `tiktoken` repo.",
"I think the right way to fix this would be to have new tokenizer instance for each process. This applies to many other tokenizers that don't support multi-process or have bugs. To do this, first define tokenizer factory class like this:\r\n\r\n```\r\n class TikTokenFactory:\r\n def __init__(self):\r\n self._enc = None\r\n self.eot_token = None\r\n\r\n def encode_ordinary(self, text):\r\n if self._enc is None:\r\n self._enc = tiktoken.get_encoding(\"gpt2\")\r\n self.eot_token = self._enc.eot_token\r\n return self._enc.encode_ordinary(text)\r\n```\r\n\r\nNow use this in `.map()` like this:\r\n\r\n```\r\n # tokenize the dataset\r\n tokenized = dataset.map(\r\n partial(process, TikTokenFactory()),\r\n remove_columns=['text'],\r\n desc=\"tokenizing the splits\",\r\n num_proc=max(1, cpu_count()//2),\r\n )\r\n```\r\n\r\nA full working example is here: https://github.com/sytelus/nanoGPT/blob/refactor/nanogpt_common/hf_data_prepare.py"
] | 2023-02-16T03:12:07 | 2023-09-08T21:06:01 | 2023-02-16T14:56:41 |
NONE
| null | null | null |
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._
This issue with `.map()` happens for me consistently, as also described in closed issue #4506
Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). Similarly, indices can be individually encoded without error.
### Steps to reproduce the bug
```py
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")

def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

# `process` must be defined before it is passed to `map`, otherwise the snippet
# fails with a NameError before the hashing warning can even appear.
tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
)
```
### Expected behavior
Should encode simple text objects.
### Environment info
Python versions tried: both 3.8 and 3.10.10
`PYTHONUTF8=1` as env variable
Datasets tried:
- stas/openwebtext-10k
- rotten_tomatoes
- local text file
OS: Ubuntu Linux 20.04
Package versions:
- torch 1.13.1
- dill 0.3.4 (if using 0.3.6 - same issue)
- datasets 2.9.0
- tiktoken 0.2.0
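
As a supplement (not from the original report): the first comment above mentions the `new_fingerprint` argument as a way to keep caching usable when the mapped function cannot be hashed. A hedged sketch applied to this reproduction — the fingerprint string is an arbitrary example, and the split-level `Dataset` is used because `new_fingerprint` belongs to `Dataset.map`:

```python
import tiktoken
from datasets import load_dataset

dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")

def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    return {'ids': ids, 'len': len(ids)}

# Supplying an explicit fingerprint sidesteps hashing the non-picklable closure.
tokenized = dataset["train"].map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    new_fingerprint="openwebtext-10k-gpt2-tiktoken-v1",  # any unique, stable string
)
```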
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5536/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5534
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5534/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5534/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5534/events
|
https://github.com/huggingface/datasets/issues/5534
| 1,586,177,862 |
I_kwDODunzps5eiydG
| 5,534 |
map() breaks at certain dataset size when using Array3D
|
{
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"type": "User",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi! This code works for me locally or in Colab. What's the output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` when you run it inside your environment?",
"Thanks for looking into this!\r\nThe output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` is:\r\n```\r\n11.0.0\r\n```\r\n\r\nI did the following to setup the environment:\r\n```\r\nconda create -n datasets_debug python=3.9\r\nconda activate datasets_debug\r\npip install datasets==2.9.0\r\n```\r\n\r\nI just tested this on another machine (Ubuntu 18.04.6 LTS) with the same result as mentioned in the issue description.\r\n"
] | 2023-02-15T16:34:25 | 2023-03-03T16:31:33 | null |
NONE
| null | null | null |
### Describe the bug
`map()` magically breaks when using an `Array3D` feature and mapping over it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply `map`, but it breaks when filtering it down to just 96 entries, with the following exception:
```
Traceback (most recent call last):
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3255, in _map_single
writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize
self.write_examples_on_file()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file
batch_examples[col] = array_concat(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat
return _concat_arrays(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays
return array_type.wrap_array(_concat_arrays([array.storage for array in arrays]))
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays
return pa.ListArray.from_arrays(
File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays
File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Negative offsets in list array
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2815, in map
return self._map_single(
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 546, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 513, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3259, in _map_single
writer.finalize()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 581, in finalize
self.write_examples_on_file()
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/arrow_writer.py", line 440, in write_examples_on_file
batch_examples[col] = array_concat(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1931, in array_concat
return _concat_arrays(arrays)
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1901, in _concat_arrays
return array_type.wrap_array(_concat_arrays([array.storage for array in arrays]))
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1922, in _concat_arrays
_concat_arrays([array.values for array in arrays]),
File "/home/arbi01/miniconda3/envs/tmp9/lib/python3.9/site-packages/datasets/table.py", line 1920, in _concat_arrays
return pa.ListArray.from_arrays(
File "pyarrow/array.pxi", line 1997, in pyarrow.lib.ListArray.from_arrays
File "pyarrow/array.pxi", line 1527, in pyarrow.lib.Array.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Negative offsets in list array
```
### Steps to reproduce the bug
1. Put the following dataset loading script into `debug/debug.py`:
```python
import datasets
import numpy as np
class DEBUG(datasets.GeneratorBasedBuilder):
"""DEBUG dataset."""
def _info(self):
return datasets.DatasetInfo(
features=datasets.Features(
{
"id": datasets.Value("uint8"),
"img_data": datasets.Array3D(shape=(3, 224, 224), dtype="uint8"),
},
),
supervised_keys=None,
)
def _split_generators(self, dl_manager):
return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]
def _generate_examples(self):
for i in range(149):
image_np = np.zeros(shape=(3, 224, 224), dtype=np.int8).tolist()
yield f"id_{i}", {"id": i, "img_data": image_np}
```
2. Try the following code:
```python
import datasets
def add_dummy_col(ex):
ex["dummy"] = "test"
return ex
ds = datasets.load_dataset(path="debug", split="train")
# works
ds_filtered_works = ds.filter(lambda example: example["id"] < 95)
print(f"filtered result size: {len(ds_filtered_works)}")
# output:
# filtered result size: 95
ds_mapped_works = ds_filtered_works.map(add_dummy_col)
# fails
ds_filtered_error = ds.filter(lambda example: example["id"] < 96)
print(f"filtered result size: {len(ds_filtered_error)}")
# output:
# filtered result size: 96
ds_mapped_error = ds_filtered_error.map(add_dummy_col)
```
### Expected behavior
The example code does not fail.
### Environment info
Python 3.9.16 (main, Jan 11 2023, 16:05:54); [GCC 11.2.0] :: Anaconda, Inc. on linux
datasets 2.9.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5534/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5532
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5532/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5532/events
|
https://github.com/huggingface/datasets/issues/5532
| 1,584,505,128 |
I_kwDODunzps5ecaEo
| 5,532 |
train_test_split in arrow_dataset does not ensure to keep single classes in test set
|
{
"login": "Ulipenitz",
"id": 37191008,
"node_id": "MDQ6VXNlcjM3MTkxMDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ulipenitz",
"html_url": "https://github.com/Ulipenitz",
"followers_url": "https://api.github.com/users/Ulipenitz/followers",
"following_url": "https://api.github.com/users/Ulipenitz/following{/other_user}",
"gists_url": "https://api.github.com/users/Ulipenitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ulipenitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ulipenitz/subscriptions",
"organizations_url": "https://api.github.com/users/Ulipenitz/orgs",
"repos_url": "https://api.github.com/users/Ulipenitz/repos",
"events_url": "https://api.github.com/users/Ulipenitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ulipenitz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n {'label': 1, 'text': \"example3\"},\r\n {'label': 1, 'text': \"example4\"},\r\n {'label': 0, 'text': \"example5\"},\r\n {'label': 1, 'text': \"example6\"},\r\n {'label': 2, 'text': \"example7\"},\r\n {'label': 2, 'text': \"example8\"}\r\n]\r\n\r\nfor _ in range(10):\r\n data_set = Dataset.from_list(data)\r\n data_set = data_set.cast_column(\"label\", ClassLabel(num_classes=3))\r\n data_set = data_set.train_test_split(test_size=0.5, stratify_by_column=\"label\")\r\n unique_labels_train = np.unique(data_set[\"train\"][:][\"label\"])\r\n unique_labels_test = np.unique(data_set[\"test\"][:][\"label\"])\r\n assert len(unique_labels_train) >= len(unique_labels_test) \r\n```\r\n"
] | 2023-02-14T16:52:29 | 2023-02-15T16:09:19 | 2023-02-15T16:09:19 |
NONE
| null | null | null |
### Describe the bug
When I have a dataset with very few examples per class (e.g. 1) and I call the train_test_split function on it, sometimes the only example of a class ends up in the test set and is thus never considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "example3"},
{'label': 1, 'text': "example4"},
{'label': 0, 'text': "example5"},
{'label': 1, 'text': "example6"},
{'label': 2, 'text': "example7"},
{'label': 2, 'text': "example8"}
]
for _ in range(10):
data_set = Dataset.from_list(data)
data_set = data_set.train_test_split(test_size=0.5)
data_set["train"]
unique_labels_train = np.unique(data_set["train"][:]["label"])
unique_labels_test = np.unique(data_set["test"][:]["label"])
assert len(unique_labels_train) >= len(unique_labels_test)
```
### Expected behavior
I expect to have every available class at least once in my training set.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5532/timeline
| null |
completed
|
https://api.github.com/repos/huggingface/datasets/issues/5531
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5531/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5531/events
|
https://github.com/huggingface/datasets/issues/5531
| 1,584,387,276 |
I_kwDODunzps5eb9TM
| 5,531 |
Invalid Arrow data from JSONL
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] | null |
[] | 2023-02-14T15:39:49 | 2023-02-14T15:46:09 | null |
MEMBER
| null | null | null |
This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This causes many issues for @TevenLeScao:
- `map` fails because it fails to rewrite invalid arrow arrays
```python
~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self)
438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples):
439 arrays = [row[0][col] for row in self.current_examples]
--> 440 batch_examples[col] = array_concat(arrays)
441 else:
442 batch_examples[col] = [
~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays)
1885
1886 if not _is_extension_type(array_type):
-> 1887 return pa.concat_arrays(arrays)
1888
1889 def _offsets_concat(offsets):
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: array slice would exceed array length
```
- `to_dict()` **segfaults** ⚠️
```python
/Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater
than array length
```
To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl`
[sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip)
PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case):
```python
ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True))
ds.data.validate()
```
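
Not part of the original report: a hedged debugging sketch to narrow down which column and chunk carry the invalid offsets, assuming `ds` is the dataset loaded above and that `ds.data` wraps a pyarrow Table (as the `validate()` call suggests):

```python
import pyarrow as pa

def find_invalid_chunks(table: pa.Table) -> None:
    # run full validation chunk by chunk to locate the offending column/offsets
    for name in table.column_names:
        for i, chunk in enumerate(table.column(name).chunks):
            try:
                chunk.validate(full=True)
            except pa.ArrowInvalid as exc:
                print(f"column {name!r}, chunk {i}: {exc}")

# ds.data is a datasets Table wrapper; its underlying pyarrow table is on `.table`
find_invalid_chunks(ds.data.table if hasattr(ds.data, "table") else ds.data)
```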
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5531/timeline
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5525
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5525/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5525/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5525/events
|
https://github.com/huggingface/datasets/issues/5525
| 1,580,342,729 |
I_kwDODunzps5eMh3J
| 5,525 |
TypeError: Couldn't cast array of type string to null
|
{
"login": "TJ-Solergibert",
"id": 74564958,
"node_id": "MDQ6VXNlcjc0NTY0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/74564958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TJ-Solergibert",
"html_url": "https://github.com/TJ-Solergibert",
"followers_url": "https://api.github.com/users/TJ-Solergibert/followers",
"following_url": "https://api.github.com/users/TJ-Solergibert/following{/other_user}",
"gists_url": "https://api.github.com/users/TJ-Solergibert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TJ-Solergibert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TJ-Solergibert/subscriptions",
"organizations_url": "https://api.github.com/users/TJ-Solergibert/orgs",
"repos_url": "https://api.github.com/users/TJ-Solergibert/repos",
"events_url": "https://api.github.com/users/TJ-Solergibert/events{/privacy}",
"received_events_url": "https://api.github.com/users/TJ-Solergibert/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Thanks for reporting, @TJ-Solergibert.\r\n\r\nWe cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`\r\nCould you please make it publicly accessible?\r\n",
"I swear it's public, I've checked the settings and I've been able to open it in incognito mode.\r\n\r\nNotebook: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?usp=sharing\r\n\r\nAnyway, this is the code to reproduce the error:\r\n\r\n```python3\r\nfrom datasets import ClassLabel\r\nfrom datasets import load_dataset\r\n\r\neuroparl_ds = load_dataset(\"tj-solergibert/Europarl-ST\")\r\n\r\nsource_lang = \"nl\"\r\nlanguages = list(europarl_ds[\"train\"][0][\"transcriptions\"].keys())\r\nClassLabels = ClassLabel(num_classes = len(languages), names = languages)\r\n\r\ndef map_label2id(example):\r\n example['dest_lang'] = ClassLabels.str2int(example['dest_lang'])\r\n return example\r\n\r\ndef unfold_transcriptions(example):\r\n for lang in languages:\r\n example[lang] = example[\"transcriptions\"][lang]\r\n return example\r\n\r\ndef unroll(batch, src_lang, dest_langs):\r\n source_t, dest_t, dest_l = [], [], []\r\n for lang in dest_langs: \r\n source_t += batch[src_lang]\r\n dest_t += batch[lang]\r\n dest_l += [lang]\r\n return_dict = {\"source_text\": source_t, \"dest_text\": dest_t, \"dest_lang\": dest_l}\r\n return return_dict\r\n\r\ndef preprocess_split(ds_split, src_lang):\r\n dest_langs = [x for x in languages if x != src_lang]\r\n\r\n ds_split = ds_split.map(unroll, fn_kwargs= {\"src_lang\": src_lang, \"dest_langs\": dest_langs}, batched = True, batch_size = 1, remove_columns= list(languages))\r\n ds_split = ds_split.filter(lambda x: x[\"source_text\"] != None and x[\"dest_text\"] != None) # Remove incomplete translations\r\n ds_split = ds_split.filter(lambda x: x[\"source_text\"] != \"None\" and x[\"dest_text\"] != \"None\")\r\n ds_split = ds_split.map(map_label2id) \r\n ds_split = ds_split.cast_column(\"dest_lang\", ClassLabels)\r\n return ds_split\r\n\r\ndef reset_cortas(example):\r\n for lang in languages:\r\n if isinstance(example[lang], str):\r\n if example[lang].isnumeric () or len(example[lang]) <= 5:\r\n example[lang] = \"None\"\r\n return example\r\n\r\ndef clean_dataset(dataset):\r\n # Remove columns\r\n dataset = dataset.remove_columns([\"original_speech\", \"original_language\", \"audio_path\", \"segment_start\", \"segment_end\"])\r\n # Unfold\r\n dataset = dataset.map(unfold_transcriptions, remove_columns = [\"transcriptions\"])\r\n dataset = dataset.map(reset_cortas)\r\n return dataset\r\n\r\nprocessed_europarl = clean_dataset(europarl_ds[\"test\"])\r\nnew_train_ds = preprocess_split(processed_europarl, 'nl')\r\n```",
"Thanks, @TJ-Solergibert. I can access your notebook now. Maybe it was just a temporary issue.\r\n\r\nAt first sight, it seems something related to your data: maybe some of the examples do not have all the transcriptions for all the languages. Then, some of them are null when unrolled. And when trying to concatenate with the other rows containing strings, the cast issue is raised (the arrays to be concatenated have different types).\r\n\r\nDo you think this could be the case?",
"See, in this example, \"nl\" and \"ro\" transcripts are null:\r\n```python\r\n>>> europarl_ds[\"test\"][:1]\r\n{'original_speech': ['− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta'],\r\n 'original_language': ['es'],\r\n 'audio_path': ['es/audios/en.20081008.24.3-238.m4a'],\r\n 'segment_start': [0.6200000047683716],\r\n 'segment_end': [11.319999694824219],\r\n 'transcriptions': [{'de': '− Herr Präsident! Zunächst möchte ich Richard Seeber zu der von ihm geleisteten Arbeit gratulieren, denn sein Bericht greift viele der in diesem Haus zum Ausdruck gebrachten Anliegen',\r\n 'en': '− Mr President, firstly I would like to congratulate Mr Seeber on the work he has done, because his report picks up many of the concerns expressed in this',\r\n 'es': '− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta',\r\n 'fr': '− Monsieur le Président, je voudrais tout d ’ abord féliciter M. Seeber pour le travail qu ’ il a effectué, parce que son rapport reprend beaucoup des inquiétudes exprimées au sein de cette',\r\n 'it': \"− Signor Presidente, mi congratulo innanzi tutto con l'onorevole Seeber per il lavoro svolto, perché la sua relazione accoglie molti dei timori espressi da quest'Aula\",\r\n 'nl': None,\r\n 'pl': '− Panie przewodniczący! Po pierwsze chciałabym pogratulować panu posłowi Seeberowi wykonanej pracy, ponieważ jego sprawozdanie podejmuje szereg podnoszonych w tej Izbie',\r\n 'pt': '− Senhor Presidente, começo por felicitar o senhor deputado Seeber pelo trabalho que desenvolveu em torno deste relatório, que retoma muitas das preocupações expressas nesta',\r\n 'ro': None}]}\r\n```\r\n```python\r\n>>> processed_europarl[0]\r\n{'de': '− Herr Präsident! Zunächst möchte ich Richard Seeber zu der von ihm geleisteten Arbeit gratulieren, denn sein Bericht greift viele der in diesem Haus zum Ausdruck gebrachten Anliegen',\r\n 'en': '− Mr President, firstly I would like to congratulate Mr Seeber on the work he has done, because his report picks up many of the concerns expressed in this',\r\n 'es': '− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta',\r\n 'fr': '− Monsieur le Président, je voudrais tout d ’ abord féliciter M. Seeber pour le travail qu ’ il a effectué, parce que son rapport reprend beaucoup des inquiétudes exprimées au sein de cette',\r\n 'it': \"− Signor Presidente, mi congratulo innanzi tutto con l'onorevole Seeber per il lavoro svolto, perché la sua relazione accoglie molti dei timori espressi da quest'Aula\",\r\n 'nl': None,\r\n 'pl': '− Panie przewodniczący! Po pierwsze chciałabym pogratulować panu posłowi Seeberowi wykonanej pracy, ponieważ jego sprawozdanie podejmuje szereg podnoszonych w tej Izbie',\r\n 'pt': '− Senhor Presidente, começo por felicitar o senhor deputado Seeber pelo trabalho que desenvolveu em torno deste relatório, que retoma muitas das preocupações expressas nesta',\r\n 'ro': None}\r\n```",
"You can fix this issue by forcing the cast of None to str by hand:\r\n- If you replace this line:\r\n```python\r\nsource_t += batch[src_lang]\r\n```\r\n- With this line (because the batch size is 1):\r\n```python\r\nsource_t += [str(batch[src_lang][0])]\r\n```\r\n- Or with this line (if the batch size were larger than 1):\r\n```python\r\nsource_t += [str(text) for text in batch[src_lang]]\r\n```",
"Problem solved! Thanks @albertvillanova, now I have even increased the batch size and it's crazy fast :rocket: !"
] | 2023-02-10T21:12:36 | 2023-02-14T17:41:08 | 2023-02-14T09:35:49 |
NONE
| null | null | null |
### Describe the bug
While processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") I get the mentioned error after applying a map function.
I already tried resetting the shorter strings (reset_cortas function). It only happens with NL, PL, RO and PT. It does not make sense, since when processing the other languages I also use the corpus of the ones that fail and it does not cause any errors.
I suspect that the error may be related to this comment in the `datasets` source:
`We use cast_array_to_feature to support casting to custom types like Audio and Image # Also, when trying type "string", we don't want to convert integers or floats to "string". # We only do it if trying_type is False - since this is what the user asks for.`
### Steps to reproduce the bug
Here I link a colab notebook to reproduce the error:
https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?authuser=1#scrollTo=FBAvlhMxIzpA
### Expected behavior
Data processing does not fail. A correct example can be seen here: https://huggingface.co/datasets/tj-solergibert/Europarl-ST-processed-mt-en
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
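
Added as a hedged illustration (assumed, not verified against this exact dataset): a minimal sketch of the failure mode diagnosed in the comments above, where a batch whose new column contains only `None` fixes the column's Arrow type to `null`, so a later batch of strings can no longer be cast:

```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": [0, 1]})

def to_text(batch):
    # the first batch yields only None, later batches yield real strings
    return {"source_text": [None] if batch["idx"][0] == 0 else ["some text"]}

# may raise: TypeError: Couldn't cast array of type string to null
ds.map(to_text, batched=True, batch_size=1)
```

Casting `None` to `str` by hand before returning the batch, as suggested in the comments, avoids the all-`None` batch and hence the cast error.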
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/5525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/5525/timeline
| null |
completed
|