guynich committed
Commit 29b88a8 · verified · 1 Parent(s): 8ea2e4e

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +41 -23
README.md CHANGED
@@ -1,33 +1,12 @@
 ---
 language:
 - en
-pretty_name: test hf dataset
+pretty_name: "test hf dataset"
 tags:
 - speech
-license: mit
+license: "mit"
 task_categories:
 - text-classification
-configs:
-- config_name: default
-  data_files:
-  - split: test
-    path: data/test-*
-dataset_info:
-  features:
-  - name: audio
-    dtype:
-      audio:
-        sampling_rate: 16000
-  - name: text
-    dtype: string
-  - name: time_secs
-    dtype: float64
-  splits:
-  - name: test
-    num_bytes: 230118.0
-    num_examples: 1
-  download_size: 219281
-  dataset_size: 230118.0
 ---
 
 # test_hf_dataset
@@ -40,3 +19,42 @@ e.g.:
 * contents of the dataset
 * context for how the dataset should be used, e.g.: `datasets` package
 * existing dataset cards, such as the ELI5 dataset card, show common conventions
+
+# Example usage of dataset
+
+Example of transcription.
+
+First install the extra dependencies, typically within a virtual environment.
+```
+python3 -m pip install datasets torch transformers
+```
+Then save and run this Python script. It runs transcription using the Moonshine
+model by Useful Sensors ([link](https://github.com/usefulsensors/moonshine)).
+```python
+"""Adapted from https://github.com/usefulsensors/moonshine#huggingface-transformers"""
+from datasets import load_dataset
+from transformers import AutoProcessor, MoonshineForConditionalGeneration
+
+dataset = load_dataset("guynich/test_hf_dataset", split="test")
+model = MoonshineForConditionalGeneration.from_pretrained(
+    "UsefulSensors/moonshine-tiny"
+)
+processor = AutoProcessor.from_pretrained("UsefulSensors/moonshine-tiny")
+
+for index in range(len(dataset)):
+    audio_array = dataset[index]["audio"]["array"]
+    sampling_rate = dataset[index]["audio"]["sampling_rate"]
+
+    inputs = processor(audio_array, return_tensors="pt", sampling_rate=sampling_rate)
+
+    generated_ids = model.generate(**inputs)
+
+    transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
+    print(transcription)
+```
+Example output.
+```console
+$ python3 main.py
+The birch canoe slid on the smooth planks, glue the sheets to a dark blue background.
+$
+```
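
For reference, records in this dataset follow the schema in the removed `dataset_info` block: an `audio` feature decoded at 16 kHz, a `text` transcript, and a `time_secs` duration. A minimal sketch of that record shape, using placeholder values rather than real decoded audio (the actual `array` is produced by the `datasets` library when the example is accessed):

```python
# Minimal sketch of one record, mirroring the card's dataset_info schema.
# All values are placeholders; the real "array" is decoded by `datasets`.
sampling_rate = 16000  # from dataset_info: audio sampling_rate
audio_array = [0.0] * (sampling_rate * 7)  # stand-in for 7 s of decoded audio

record = {
    "audio": {"array": audio_array, "sampling_rate": sampling_rate},
    "text": "The birch canoe slid on the smooth planks ...",
    "time_secs": len(audio_array) / sampling_rate,  # duration implied by length
}

print(record["time_secs"])
```

The transcription script above reads exactly these keys (`dataset[index]["audio"]["array"]` and `dataset[index]["audio"]["sampling_rate"]`) before passing the audio to the processor.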