# Whisper Large SEIN - COES SEIN - Version 7

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the SEIN COES dataset. It achieves the following results on the evaluation set:
- Loss: 2.7602
- Wer: 58.0711
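To try the checkpoint, it can be loaded with the `transformers` ASR pipeline. The snippet below is a minimal usage sketch, not taken from the card itself: the audio path `audio.wav`, the device, and the dtype are placeholder assumptions.

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="Cristhian2430/whisper-large-coes-v7",
    torch_dtype=torch.float16,  # assumption: half precision on GPU
    device="cuda:0",            # use "cpu" if no GPU is available
)

result = asr("audio.wav")  # placeholder path to a local audio file
print(result["text"])
```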
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
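These settings map directly onto `transformers`' `Seq2SeqTrainingArguments`. The sketch below is a reconstruction under that assumption rather than the author's actual training script; `output_dir` is a placeholder, and dataset loading, the data collator, and `compute_metrics` are omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the reported hyperparameters (not the original script).
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-coes-v7",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                   # "Native AMP" mixed-precision training
)
```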
### Training results
| Training Loss | Epoch    | Step | Validation Loss | Wer     |
|:-------------:|:--------:|:----:|:---------------:|:-------:|
| 0.0267        | 83.3333  | 500  | 2.2424          | 60.1015 |
| 0.0013        | 166.6667 | 1000 | 2.5216          | 59.6954 |
| 0.0001        | 250.0    | 1500 | 2.6548          | 59.3909 |
| 0.0001        | 333.3333 | 2000 | 2.6880          | 58.8832 |
| 0.0001        | 416.6667 | 2500 | 2.7241          | 58.7817 |
| 0.0           | 500.0    | 3000 | 2.7435          | 58.3756 |
| 0.0           | 583.3333 | 3500 | 2.7557          | 58.2741 |
| 0.0           | 666.6667 | 4000 | 2.7602          | 58.0711 |
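The Wer column is the word error rate (substitutions, insertions, and deletions divided by the number of reference words), reported as a percentage. A minimal check with the `evaluate` library, using made-up strings rather than SEIN COES data:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy example: one word differs out of six reference words.
predictions = ["la carga del sistema es estable"]
references = ["la carga del sistema está estable"]

# Multiply by 100 to match the percentage scale used in the table above.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```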
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4