---
language: fa
pretty_name: Filimo ASR Dataset 2024
tags:
  - Farsi
  - Persian
  - ASR
  - filimo
task_categories:
  - automatic-speech-recognition
license: cc0-1.0
---

# Filimo ASR Dataset 2024

This dataset consists of about **245** hours of transcribed Persian audio (more than 400k rows) extracted from a variety of videos on Filimo, an Iranian video-on-demand (VOD) service.

This dataset is similar in structure and content to the [YouTube ASR dataset](https://huggingface.co/datasets/PerSets/youtube-persian-asr); the two can be used alongside each other, but they are not substitutes for one another.

## Dataset Description

This dataset includes Farsi content from a variety of video genres, spanning from older productions up to mid-2024 (all content with Persian audio and Persian subtitles available on Filimo as of that date), such as:
- Movies
- TV Series
- Shows
- Documentaries

Utterances and sentences are extracted based on the timing of subtitles.
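
The exact extraction pipeline is not published in this card; the sketch below only illustrates the general idea of slicing an audio track on subtitle timestamps, using the third-party `pysrt` and `pydub` packages and hypothetical file names.

```python
# Illustration only -- not the authors' actual pipeline.
# Assumes an SRT subtitle file and its matching audio track (hypothetical names).
import pysrt
from pydub import AudioSegment

subs = pysrt.open("p1w08.srt")               # hypothetical subtitle file
audio = AudioSegment.from_file("p1w08.wav")  # hypothetical audio track

for i, sub in enumerate(subs):
    start_ms = sub.start.ordinal             # subtitle start time in milliseconds
    end_ms = sub.end.ordinal                 # subtitle end time in milliseconds
    clip = audio[start_ms:end_ms]            # cut out one utterance
    clip.export(f"p1w08_{i:05d}.wav", format="wav")
    print(sub.text.replace("\n", " "))       # the corresponding transcription
```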

The list of videos used in this dataset is stored in the `movie_ids.csv` file as follows:
```
p1w08	می_خواهم_زنده_بمانم_فصل_1_قسمت_7
izmtv	می_خواهم_زنده_بمانم_فصل_1_قسمت_8
rt5e2	دیرین_دیرین_(با_زیرنویس_مخصوص_ناشنوایان)_فصل_1_قسمت_11:_عشق_قهوه_ای
kuenl	دیرین_دیرین_(با_زیرنویس_مخصوص_ناشنوایان)_فصل_1_قسمت_10:_استیكر_ناجور_وی
oqgln	می_خواهم_زنده_بمانم_فصل_1_قسمت_2
...
```
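
To see which video a given id refers to, the file can be read as a tab-separated table; a minimal sketch, assuming two columns (video id and title):

```python
import csv

# movie_ids.csv appears to be tab-separated: <video id> \t <title>
with open("movie_ids.csv", encoding="utf-8") as f:
    movie_ids = {row[0]: row[1] for row in csv.reader(f, delimiter="\t")}

print(len(movie_ids), "videos")
print(movie_ids.get("p1w08"))  # -> می_خواهم_زنده_بمانم_فصل_1_قسمت_7
```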

## Note
This dataset contains raw, unvalidated transcriptions. Transcription timing may occasionally be imprecise. Considerable effort has been made to clean the data using various methods and tools. Users are advised to:
- Perform their own quality assessment
- Create their own train/validation/test splits based on their specific needs (see the sketch after this list)
- Validate a subset of the data if needed for their use case
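
For instance, a held-out split can be carved out with the `datasets` library; a minimal sketch, assuming the data is exposed as a single `train` split, with arbitrary split size and seed:

```python
from datasets import load_dataset

dataset = load_dataset('PerSets/filimo-persian-asr', trust_remote_code=True)

# Create a custom train/test split (size and seed are arbitrary examples).
splits = dataset['train'].train_test_split(test_size=0.01, seed=42)
train_ds, test_ds = splits['train'], splits['test']
```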

## Usage
<details>

Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset('PerSets/filimo-persian-asr', trust_remote_code=True)
```
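
Individual examples can then be inspected; a minimal sketch, assuming a `train` split and typical ASR column names (check the printed keys for the actual schema):

```python
sample = dataset['train'][0]
print(sample.keys())  # verify the actual column names
# Typical ASR fields (names assumed, verify against the output above):
# sample['audio']['array'], sample['audio']['sampling_rate'], sample['sentence']
```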