Update README.md

README.md CHANGED

@@ -96,13 +96,23 @@ Please notice that we encourage you to read our tutorials and learn more about
 
 ### Transcribing your own audio files (in English)
 
-TODO + streaming TODO
-
 ```python
 from speechbrain.inference.ASR import EncoderDecoderASR
-
-asr_model.
+from speechbrain.utils.dynamic_chunk_training import DynChunkTrainConfig
+asr_model = EncoderDecoderASR.from_hparams(
+    source="speechbrain/asr-streaming-conformer-librispeech",
+    savedir="pretrained_models/asr-streaming-conformer-librispeech"
+)
+asr_model.transcribe_file(
+    "speechbrain/asr-streaming-conformer-librispeech/test-en.wav",
+    # select a chunk size of ~960ms with 4 chunks of left context
+    DynChunkTrainConfig(24, 4),
+    # disable torchaudio streaming to allow fetching from HuggingFace
+    # set this to True for your own files or streams to allow for streaming file decoding
+    use_torchaudio_streaming=False,
+)
 ```
+
 ### Inference on GPU
 To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
 
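
As a minimal sketch of the GPU note above, reusing the same streaming Conformer checkpoint that the added snippet downloads (and assuming a CUDA-capable GPU is available):

```python
from speechbrain.inference.ASR import EncoderDecoderASR

# Assumption: a CUDA device is available; run_opts moves the model onto it,
# as the README line above describes.
asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-streaming-conformer-librispeech",
    savedir="pretrained_models/asr-streaming-conformer-librispeech",
    run_opts={"device": "cuda"},
)
```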
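
Similarly, the comment in the added snippet notes that `use_torchaudio_streaming` should be set to `True` for your own files or streams. A hedged sketch of that case, where `my_audio.wav` is a hypothetical local recording and `asr_model` is the instance loaded above:

```python
from speechbrain.utils.dynamic_chunk_training import DynChunkTrainConfig

asr_model.transcribe_file(
    "my_audio.wav",                 # hypothetical local file
    DynChunkTrainConfig(24, 4),     # ~960ms chunks with 4 chunks of left context
    use_torchaudio_streaming=True,  # stream-decode the file, per the snippet's comment
)
```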