Streaming example does not work: "because column names don't match" #26
opened by simeneide
When running the example from the readme:
from datasets import load_dataset
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train", streaming=True)
print(next(iter(cv_17)))
I get the error below. I am running datasets 3.6.0, huggingface_hub 0.28.1, and Python 3.12. The problem is related to the audio column: if I remove that column before iterating, it works (see the sketch after the traceback). I have also tried a different machine, and a different dataset (gigaspeech), which worked fine, so I suspect the issue is specific to this dataset.
---------------------------------------------------------------------------
CastError Traceback (most recent call last)
Cell In[2], line 4
1 from datasets import load_dataset
2 cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train", streaming=True)
----> 4 print(next(iter(cv_17)))
File ~/workdir/tts/mimi/.venv/lib/python3.12/site-packages/datasets/iterable_dataset.py:2270, in IterableDataset.__iter__(self)
2267 yield formatter.format_row(pa_table)
2268 return
-> 2270 for key, example in ex_iterable:
2271 # no need to format thanks to FormattedExamplesIterable
2272 yield example
File ~/workdir/tts/mimi/.venv/lib/python3.12/site-packages/datasets/iterable_dataset.py:1856, in FormattedExamplesIterable.__iter__(self)
1849 formatter = get_formatter(
1850 self.formatting.format_type,
1851 features=self._features if not self.ex_iterable.is_typed else None,
1852 token_per_repo_id=self.token_per_repo_id,
1853 )
1854 if self.ex_iterable.iter_arrow:
1855 # feature casting (inc column addition) handled within self._iter_arrow()
-> 1856 for key, pa_table in self._iter_arrow():
1857 batch = formatter.format_batch(pa_table)
1858 for example in _batch_to_examples(batch):
File ~/workdir/tts/mimi/.venv/lib/python3.12/site-packages/datasets/iterable_dataset.py:1888, in FormattedExamplesIterable._iter_arrow(self)
1886 pa_table = pa_table.append_column(column_name, col)
1887 if pa_table.schema != schema:
-> 1888 pa_table = cast_table_to_features(pa_table, self.features)
1889 yield key, pa_table
File ~/workdir/tts/mimi/.venv/lib/python3.12/site-packages/datasets/table.py:2215, in cast_table_to_features(table, features)
2203 """Cast a table to the arrow schema that corresponds to the requested features.
2204
2205 Args:
(...) 2212 table (`pyarrow.Table`): the casted table
2213 """
2214 if sorted(table.column_names) != sorted(features):
-> 2215 raise CastError(
2216 f"Couldn't cast\n{_short_str(table.schema)}\nto\n{_short_str(features)}\nbecause column names don't match",
2217 table_column_names=table.column_names,
2218 requested_column_names=list(features),
2219 )
2220 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2221 return pa.Table.from_arrays(arrays, schema=features.arrow_schema)
CastError: Couldn't cast
client_id: string
path: string
sentence_id: string
sentence: string
sentence_domain: string
up_votes: string
down_votes: string
age: string
gender: string
variant: string
locale: string
segment: string
accent: string
audio: struct<bytes: binary, path: string>
child 0, bytes: binary
child 1, path: string
to
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'variant': Value(dtype='string', id=None)}
because column names don't match
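For reference, here is a minimal sketch of the removal workaround mentioned above. It assumes the standard IterableDataset.remove_columns API and only demonstrates that dropping the audio column sidesteps the failing cast; it is not a fix, since you lose the audio data.

from datasets import load_dataset

# Workaround sketch: drop the audio column before iterating, which avoids
# the cast to the declared Audio feature that raises the CastError above.
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train", streaming=True)
cv_17_no_audio = cv_17.remove_columns("audio")
print(next(iter(cv_17_no_audio)))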
Hi, were you able to find a solution for this problem?
Unfortunately no, I downloaded the whole dataset onto a huge disk instead.
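A minimal sketch of that fallback, assuming you have enough local disk space for the full split (non-streaming mode downloads and caches everything before iterating):

from datasets import load_dataset

# Fallback mentioned above: download and cache the full split instead of streaming.
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hi", split="train")
print(cv_17[0])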