Manual validation for Turkish
I manually validated part of the Turkish data, and unfortunately mislabeled data makes up a large portion of it.
Hi, thanks for your feedback!
Can you share more info on what the mislabeling means here? Does it mean the languages are wrong, or that the transcription quality is bad?
The transcription quality is bad.
I'll add some feedback on this topic, because I think this dataset has a lot of potential but needs a lot of work in the filtering and alignment stages.
Consider this Korean sample:
```python
{
    'id': '10682',
    'utt_id': 'Y6ktQBUClBI-00002-00000754-00000804',
    'audio': {
        'path': None,
        'array': array([1.22070312e-03, 9.76562500e-04, 9.76562500e-04, ...,
                        3.05175781e-05, 3.05175781e-05, 6.10351562e-05]),
        'sampling_rate': 16000
    },
    'text': '공부 잘하는법 영어 수학 과외 구하기 공부 영어과외 수학과외 온라인 과외 온라인 영어 영어 학습 고등 수학 파닉스 중학생 영어 과외 초등 영어 초등 수학 논술 과 영어 문법 한국사 공부 플래너 초등 수학 과외 영어 문장 만들기 어문제 중학 영어 영어 전문 과외 영어 수학 과외 수학 문제 풀이 국어 문제 중학생 공부 서울 과외 초등학생 수학 초등 국어 공부 초등 수학 공부법 초등 수학 공부 스터디 수학 과외 자료 영상 강의 영어 리스닝 훈련 학습기 ebs 영단어 수학 답지 온라인 영어 과외 대입 학원 수학 문제집 답지 초등 국어 ebs 고등 중등 인강 고등 수학 인강 분당 수학 과외 동탄 수학 과외 안산 영어 과외'
}
```
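(For context, here's roughly how such a record can be pulled; a minimal sketch, assuming the `espnet/yodas` repo with a `ko000` config, streamed to avoid downloading the whole shard:)

```python
# Sketch: fetch the record shown above. The repo and config names are
# assumptions; streaming avoids downloading the full shard.
from datasets import load_dataset

ds = load_dataset("espnet/yodas", "ko000", split="train", streaming=True)
record = next(r for r in ds
              if r["utt_id"] == "Y6ktQBUClBI-00002-00000754-00000804")
```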
After a little inspection, you don't need to speak Korean to figure out there's something wrong here:
print(f"{record['audio']['array'].shape=}")
# Output:
# record['audio']['array'].shape=(7999,)
A 246-character transcript with 0.5 seconds of audio... something went very wrong in the processing of this sample.
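To put a number on it, the duration/length ratio alone gives it away (using the `record` loaded above; 7999 samples at 16 kHz is just under 0.5 seconds):

```python
# Compare audio duration against transcript length.
duration_s = record["audio"]["array"].shape[0] / record["audio"]["sampling_rate"]
chars_per_sec = len(record["text"]) / duration_s
print(f"{duration_s:.3f} s of audio, {len(record['text'])} chars "
      f"-> {chars_per_sec:.0f} chars/s")
# 0.500 s of audio, 246 chars -> 492 chars/s, far beyond any human speech rate
```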
So let's investigate further.
Whisper has a reportedly very low character error rate (CER) on Korean. So let's do this:
- Transcribe FLEURS Korean test set with Whisper Turbo
- Transcribe a random subset of YODAS Korean (chunk 000, "manual" transcripts) with Whisper Turbo
- Check distribution of error rates
We should expect Whisper to perform reasonably well on the bulk of the data, albeit not as well as on a test set as easy as FLEURS.
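Concretely, the setup looks something like this (a sketch, assuming the `transformers` ASR pipeline with `openai/whisper-large-v3-turbo` and `jiwer` for CER; not the exact code I ran):

```python
# Sketch: transcribe each sample with Whisper Turbo and compute per-sample CER.
# Model name and pipeline settings are assumptions; adjust batching/device
# for real runs.
import jiwer
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    generate_kwargs={"language": "korean"},
)

def sample_cer(record):
    hyp = asr({"raw": record["audio"]["array"],
               "sampling_rate": record["audio"]["sampling_rate"]})["text"]
    return jiwer.cer(record["text"], hyp)

# `subset`: hypothetical name for the random draw of YODAS ko 000 samples.
cers = [min(sample_cer(r), 1.0) for r in subset]  # clip at 1.0, as in the plot
```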
(Plot: Whisper CER distribution on a random 20k-sample subset of the first 200k samples of ko 000, clipped at CER 1.0.)
I don't expect CER on "wild" data to be as low as on FLEURS, but my spot-checking agrees with Whisper's CER: the data quality is all over the place, and the bad samples are easy to catch.
While this isn't a perfect analysis (I don't know a single word of Korean), it seems clear that something went wrong in the processing of this data. The sample shown above would have been filtered out by the most basic of cleaning algorithms (see the sketch below), and the distribution of Whisper's CER is telling.
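Even the crudest gate, a chars-per-second bound, rejects it immediately (the thresholds here are illustrative, not tuned):

```python
# Crude length-ratio filter: drop pairs whose transcript is implausibly long
# for the audio duration. Thresholds are illustrative.
MIN_DURATION_S = 1.0
MAX_CHARS_PER_SEC = 30  # generous upper bound for real speech

def keep(record):
    dur = record["audio"]["array"].shape[0] / record["audio"]["sampling_rate"]
    return dur >= MIN_DURATION_S and len(record["text"]) / dur <= MAX_CHARS_PER_SEC

# The sample above: 0.5 s, 246 chars -> ~492 chars/s -> rejected.
```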
Another possibility is a bug in the audio-cut extraction: like the example above, there are a lot of files with exactly 0.5 seconds of audio and very long transcripts.
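That pattern is easy to count for; 0.5 s is 8000 samples at 16 kHz, and the example above has 7999 (a sketch over the same hypothetical `subset` as before, with an illustrative 100-char cutoff):

```python
# Count suspiciously short clips that carry long transcripts.
suspicious = sum(
    1 for r in subset
    if r["audio"]["array"].shape[0] <= 8000 and len(r["text"]) > 100
)
print(f"{suspicious} clips of <=0.5 s with >100-char transcripts")
```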