Recently, an end-to-end speaker-attributed automatic speech recognition (E2E SA-ASR) model was proposed as a joint model of speaker counting, speech recognition, and speaker identification for monaural overlapped speech. In the previous study, the model parameters were trained based on the speaker-attributed maximum mutual information (SA-MMI) criterion, with which the joint posterior probability for multi-talker transcription and speaker identification is maximized over the training data. Although SA-MMI training showed promising results for overlapped speech consisting of various numbers of speakers, the training criterion was not directly linked to the final evaluation metric, i.e., speaker-attributed word error rate (SA-WER). In this paper, we propose a speaker-attributed minimum Bayes risk (SA-MBR) training method in which the parameters are trained to directly minimize the expected SA-WER over the training data. Experiments using the LibriSpeech corpus show that the proposed SA-MBR training reduces the SA-WER by 9.0% relative to the SA-MMI-trained model.
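As a rough sketch (the notation below is illustrative and not taken from the paper), an MBR-style objective of this kind minimizes the expected evaluation error over the model's hypotheses; for SA-MBR, the risk of each hypothesized speaker-attributed transcription would be its SA-WER against the reference, weighted by the model's posterior:

\[
\mathcal{F}_{\mathrm{SA\text{-}MBR}} \;=\; \sum_{(X,\,Y^{*}) \in \mathcal{D}} \; \sum_{Y \in \mathcal{H}(X)} P_{\theta}(Y \mid X)\, \mathrm{SAWER}(Y, Y^{*}),
\]

where $\mathcal{D}$ is the training set, $\mathcal{H}(X)$ is a hypothesis set (e.g., an n-best list) for input mixture $X$, $Y^{*}$ is the reference speaker-attributed transcription, and $P_{\theta}(Y \mid X)$ is the model's (normalized) posterior over hypotheses. Minimizing this expectation ties the training loss to the SA-WER metric directly, in contrast to the SA-MMI criterion, which maximizes the posterior of the reference alone.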