Update README.md
README.md
CHANGED
@@ -57,22 +57,6 @@ After that, we will check if it can perform scalable work through additional tra
 
 ## Evaluation
 
-Evaluation by
-```
-from whisper_normalizer.basic import BasicTextNormalizer
-from evaluate import load
-
-normalizer = BasicTextNormalizer()
-cer_metric = load("cer")
-wer_metric = load("wer")
-```
-
-| Model                 | zeroth-test-BLEU | zeroth-test-CER | zeroth-test-WER | fleurs-test-BLEU | fleurs-test-CER | fleurs-test-WER |
-|-----------------------|------------------|-----------------|-----------------|------------------|-----------------|-----------------|
-| original              | 0.071            | 126.4           | 121.5           | 0.010            | 115.7           | 112.8           |
-| finetune (this model) | 94.837           | 1.429           | 2.951           | 67.659           | 7.951           | 18.313          |
-
-
 Evaluation was done on the following datasets:
 - ASR (Automatic Speech Recognition): Evaluated with CER (Character Error Rate) on zeroth-test set (457 samples).
 - AST (Automatic Speech Translation): Evaluated with BLEU score on fleurs ko <-> en speech translation result (270 samples).
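The snippet removed by this commit only loaded the `cer` and `wer` metrics from the `evaluate` library. For readers unfamiliar with those metrics, here is a dependency-free sketch of what they measure: edit distance over characters (CER) or words (WER), divided by the reference length. This is an illustration only, not the model card's actual evaluation code, which also normalized text with Whisper's `BasicTextNormalizer` before scoring.

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via the classic one-row dynamic program.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution or match
    return dp[-1]

def cer(reference, hypothesis):
    # Character Error Rate: character-level edits / reference length.
    return edit_distance(reference, hypothesis) / len(reference)

def wer(reference, hypothesis):
    # Word Error Rate: word-level edits / reference word count.
    return edit_distance(reference.split(), hypothesis.split()) / len(reference.split())

print(round(cer("hello world", "hello worla"), 3))  # one character wrong out of 11
print(wer("hello world", "hello worla"))            # one word wrong out of 2 -> 0.5
```

Note that both metrics can exceed 1.0 (100%) when the hypothesis needs more edits than the reference has units, which is why the `original` row in the removed table could show CER/WER above 100.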