Codyfederer commited on
Commit
0190a49
·
verified ·
1 Parent(s): 3935b1d

Upload dataset as Parquet (1 files, 491 records)

README.md ADDED
@@ -0,0 +1,160 @@
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - automatic-speech-recognition
+ - text-to-speech
+ language:
+ - tr
+ tags:
+ - speech
+ - audio
+ - dataset
+ - tts
+ - asr
+ - merged-dataset
+ size_categories:
+ - n<1K
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: "data.jsonl"
+   default: true
+ dataset_info:
+   features:
+   - name: audio
+     dtype:
+       audio:
+         sampling_rate: null
+   - name: text
+     dtype: string
+   - name: speaker_id
+     dtype: string
+   - name: emotion
+     dtype: string
+   - name: language
+     dtype: string
+   splits:
+   - name: train
+     num_examples: 491
+   config_name: default
+ ---
+ 
+ # tetttttt
+ 
+ This is a merged speech dataset containing 491 audio segments from 2 source datasets.
+ 
+ ## Dataset Information
+ 
+ - **Total Segments**: 491
+ - **Speakers**: 2
+ - **Languages**: tr
+ - **Emotions**: angry, neutral, happy
+ - **Original Datasets**: 2
+ 
+ ## Dataset Structure
+ 
+ Each example contains:
+ - `audio`: Audio file (WAV format, original sampling rate preserved)
+ - `text`: Transcription of the audio
+ - `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
+ - `emotion`: Detected emotion (angry, neutral, or happy in this dataset)
+ - `language`: Language code (`tr` for all examples in this dataset)
+ 
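The per-example fields can be aggregated with the standard library once examples are in hand. A minimal sketch over hypothetical rows that follow the schema above (illustrative values, not real data):

```python
from collections import Counter

# Hypothetical rows following the dataset schema; real values come from the dataset
rows = [
    {"speaker_id": "speaker_0", "emotion": "neutral", "language": "tr"},
    {"speaker_id": "speaker_1", "emotion": "angry", "language": "tr"},
    {"speaker_id": "speaker_0", "emotion": "happy", "language": "tr"},
]

# Tally emotions and collect the distinct speakers
emotion_counts = Counter(row["emotion"] for row in rows)
speakers = sorted({row["speaker_id"] for row in rows})
print(emotion_counts)
print(speakers)  # ['speaker_0', 'speaker_1']
```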
+ 
+ ## Usage
+ 
+ ### Loading the Dataset
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load the dataset
+ dataset = load_dataset("Codyfederer/tetttttt")
+ 
+ # Access the training split
+ train_data = dataset["train"]
+ 
+ # Example: Get the first sample
+ sample = train_data[0]
+ print(f"Text: {sample['text']}")
+ print(f"Speaker: {sample['speaker_id']}")
+ print(f"Language: {sample['language']}")
+ print(f"Emotion: {sample['emotion']}")
+ 
+ # Play audio (requires audio libraries)
+ # sample['audio']['array'] contains the audio data
+ # sample['audio']['sampling_rate'] contains the sampling rate
+ ```
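The commented playback hint above can be made concrete with the standard library alone. A minimal sketch, assuming `sample['audio']` decodes to a float array plus a sampling rate (as the `datasets` `Audio` feature yields); a synthetic sine wave stands in for a real sample so the snippet is self-contained:

```python
import math
import struct
import wave

def save_wav(samples, sampling_rate, path):
    """Write an iterable of floats in [-1, 1] as mono 16-bit PCM WAV."""
    pcm = b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)              # mono
        w.setsampwidth(2)              # 16-bit samples
        w.setframerate(sampling_rate)
        w.writeframes(pcm)

# Stand-ins for sample['audio']['array'] and sample['audio']['sampling_rate']
rate = 16000
array = [0.5 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]

save_wav(array, rate, "sample.wav")   # playable in any audio player
```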
+ 
+ ### Alternative: Load from JSONL
+ 
+ ```python
+ from datasets import Dataset, Audio, Features, Value
+ import json
+ 
+ # Load the JSONL file
+ rows = []
+ with open("data.jsonl", "r", encoding="utf-8") as f:
+     for line in f:
+         rows.append(json.loads(line))
+ 
+ features = Features({
+     "audio": Audio(sampling_rate=None),
+     "text": Value("string"),
+     "speaker_id": Value("string"),
+     "emotion": Value("string"),
+     "language": Value("string")
+ })
+ 
+ dataset = Dataset.from_list(rows, features=features)
+ ```
+ 
+ ### Files
+ 
+ The dataset repository includes:
+ - `data.jsonl` - Main dataset file with all columns (JSON Lines)
+ - `*.wav` - Audio files under `audio_XXX/` subdirectories
+ - `load_dataset.txt` - Python script for loading the dataset (rename to `.py` to use)
+ 
+ JSONL keys:
+ - `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
+ - `text`: Transcription of the audio
+ - `speaker_id`: Unique speaker identifier
+ - `emotion`: Detected emotion
+ - `language`: Language code
+ 
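The JSONL keys above can be verified with the standard library. A minimal sketch, assuming one record per line; the example line below is illustrative, not taken from the actual data:

```python
import json

# Illustrative record; real lines in data.jsonl follow the same schema
line = ('{"audio": "audio_000/segment_000000_speaker_0.wav", '
        '"text": "merhaba", "speaker_id": "speaker_0", '
        '"emotion": "neutral", "language": "tr"}')

record = json.loads(line)
expected_keys = {"audio", "text", "speaker_id", "emotion", "language"}
assert set(record) == expected_keys

# The audio value is a relative path, resolved against the repo root
print(record["audio"])
```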
+ ## Speaker ID Mapping
+ 
+ Speaker IDs have been made unique across all merged datasets to avoid conflicts.
+ For example:
+ - Original Dataset A: `speaker_0`, `speaker_1`
+ - Original Dataset B: `speaker_0`, `speaker_1`
+ - Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
+ 
+ Original dataset information is preserved in the metadata for reference.
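The remapping above can be sketched in plain Python. This illustrates the numbering scheme only; it is not the builder's actual implementation:

```python
def remap_speaker_ids(datasets):
    """Assign globally unique speaker IDs across several datasets.

    `datasets` is a list of row lists; each row is a dict whose
    'speaker_id' is only unique within its own dataset.
    """
    merged = []
    offset = 0
    for rows in datasets:
        # Stable, sorted local IDs so the remapping is deterministic
        local_ids = sorted({row["speaker_id"] for row in rows})
        mapping = {old: f"speaker_{offset + i}" for i, old in enumerate(local_ids)}
        for row in rows:
            merged.append({**row, "speaker_id": mapping[row["speaker_id"]]})
        offset += len(local_ids)
    return merged

dataset_a = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
dataset_b = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
merged = remap_speaker_ids([dataset_a, dataset_b])
print(sorted({r["speaker_id"] for r in merged}))
# ['speaker_0', 'speaker_1', 'speaker_2', 'speaker_3']
```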
+ 
+ ## Data Quality
+ 
+ This dataset was created using the Vyvo Dataset Builder with:
+ - Automatic transcription and diarization
+ - Quality filtering for audio segments
+ - Music and noise filtering
+ - Emotion detection
+ - Language identification
+ 
+ ## License
+ 
+ This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
+ 
+ ## Citation
+ 
+ ```bibtex
+ @dataset{vyvo_merged_dataset,
+   title={tetttttt},
+   author={Vyvo Dataset Builder},
+   year={2025},
+   url={https://huggingface.co/datasets/Codyfederer/tetttttt}
+ }
+ ```
+ 
+ This dataset was created using the Vyvo Dataset Builder tool.
dataset_info.json ADDED
@@ -0,0 +1,49 @@
+ {
+   "dataset_name": "tetttttt",
+   "description": "Merged speech dataset containing 491 segments from 2 source datasets",
+   "features": {
+     "audio": {
+       "_type": "Audio",
+       "sampling_rate": null
+     },
+     "text": {
+       "_type": "Value",
+       "dtype": "string"
+     },
+     "speaker_id": {
+       "_type": "Value",
+       "dtype": "string"
+     },
+     "emotion": {
+       "_type": "Value",
+       "dtype": "string"
+     },
+     "language": {
+       "_type": "Value",
+       "dtype": "string"
+     }
+   },
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_examples": 491
+     }
+   },
+   "total_segments": 491,
+   "speakers": [
+     "speaker_0",
+     "speaker_1"
+   ],
+   "emotions": [
+     "angry",
+     "neutral",
+     "happy"
+   ],
+   "languages": [
+     "tr"
+   ],
+   "original_datasets": [
+     "1DPlg5pXztI",
+     "-WOKqeQsYzs"
+   ]
+ }
load_dataset.txt ADDED
@@ -0,0 +1,46 @@
+ # Dataset Loading Script
+ # Save this as load_dataset.py to use
+ 
+ import json
+ from datasets import Dataset, Audio, Value, Features
+ 
+ def load_dataset():
+     # Define features
+     features = Features({
+         # Preserve original sampling rates by not forcing a fixed rate
+         "audio": Audio(sampling_rate=None),
+         "text": Value("string"),
+         "speaker_id": Value("string"),
+         "emotion": Value("string"),
+         "language": Value("string")
+     })
+ 
+     # Column-wise buffers for Dataset.from_dict
+     data = {
+         "audio": [],
+         "text": [],
+         "speaker_id": [],
+         "emotion": [],
+         "language": []
+     }
+ 
+     # Read the JSONL file, one record per line
+     with open("data.jsonl", "r", encoding="utf-8") as f:
+         for line in f:
+             obj = json.loads(line)
+             data["audio"].append(obj["audio"])  # relative path within repo
+             data["text"].append(obj.get("text", ""))
+             data["speaker_id"].append(obj.get("speaker_id", ""))
+             data["emotion"].append(obj.get("emotion", "neutral"))
+             data["language"].append(obj.get("language", "tr"))
+ 
+     # Create dataset
+     dataset = Dataset.from_dict(data, features=features)
+     return dataset
+ 
+ # For direct loading
+ if __name__ == "__main__":
+     dataset = load_dataset()
+     print(f"Dataset loaded with {len(dataset)} examples")
metadata.json ADDED
The diff for this file is too large to render. See raw diff
 
train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe002e0376c3e430421ed1521dcb70e177d323f4c0119936f141a03e78741c23
+ size 229484945