Exploring Machine Speech Chain for Domain Adaptation and Few-Shot Speaker Adaptation

04/08/2021
by Fengpeng Yue, et al.

Machine Speech Chain, which integrates both end-to-end (E2E) automatic speech recognition (ASR) and text-to-speech (TTS) into a single cycle for joint training, has proven effective for data augmentation by leveraging large amounts of unpaired data. In this paper, we explore the TTS->ASR pipeline of the speech chain to perform domain adaptation for both the neural TTS and the E2E ASR models, using only text data from the target domain. In experiments adapting from the audiobook domain (LibriSpeech) to the presentation domain (TED-LIUM), we obtain a relative word error rate (WER) reduction of 10% on the TED-LIUM test set, and a relative WER reduction of 51.5% on speech generated by the neural TTS in the presentation domain. Furthermore, we apply few-shot speaker adaptation to the E2E ASR model using a few utterances from target speakers in an unsupervised way, which yields additional gains.
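As a rough illustration of the text-only TTS->ASR adaptation loop described in the abstract, the PyTorch sketch below synthesizes speech features from target-domain text with a frozen pretrained TTS model and fine-tunes the ASR model on the resulting synthetic (speech, text) pairs. The `NeuralTTS` and `E2EASR` classes, the `adapt_asr_with_tts` function, and all hyperparameters are hypothetical placeholders for this sketch, not the authors' implementation.

```python
import torch
from torch import nn, optim


class NeuralTTS(nn.Module):
    """Toy stand-in for a pretrained neural TTS model (source domain)."""

    def __init__(self, vocab_size: int = 100, n_mels: int = 80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, n_mels)

    def forward(self, text_ids: torch.Tensor) -> torch.Tensor:
        # (batch, tokens) -> (batch, frames, n_mels); a real TTS model
        # would also predict durations and prosody.
        return self.embed(text_ids)


class E2EASR(nn.Module):
    """Toy stand-in for a pretrained end-to-end ASR model (source domain)."""

    def __init__(self, vocab_size: int = 100, n_mels: int = 80):
        super().__init__()
        self.proj = nn.Linear(n_mels, vocab_size)

    def forward(self, mel: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
        # Returns a scalar training loss on the (speech, text) pair.
        logits = self.proj(mel)  # (batch, frames, vocab)
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), text_ids.reshape(-1)
        )


def adapt_asr_with_tts(
    tts: NeuralTTS,
    asr: E2EASR,
    target_domain_text: list,
    lr: float = 1e-4,
    epochs: int = 3,
) -> E2EASR:
    """Adapt the ASR model to a new domain using only target-domain text.

    The frozen TTS model synthesizes speech features for each text utterance;
    the synthetic (speech, text) pairs are then used to fine-tune the ASR model.
    """
    tts.eval()  # TTS stays fixed during ASR adaptation
    asr.train()
    optimizer = optim.Adam(asr.parameters(), lr=lr)

    for _ in range(epochs):
        for text_ids in target_domain_text:
            with torch.no_grad():
                synth_mel = tts(text_ids)      # TTS -> synthetic speech features
            loss = asr(synth_mel, text_ids)    # ASR loss on the synthetic pair
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return asr


if __name__ == "__main__":
    # Dummy target-domain text (token id sequences) for a quick smoke test.
    text_corpus = [torch.randint(0, 100, (1, 20)) for _ in range(8)]
    adapted = adapt_asr_with_tts(NeuralTTS(), E2EASR(), text_corpus)
```

In the paper's setting, both models are first trained on paired source-domain data (LibriSpeech); the loop above only covers the ASR side of the adaptation, while the TTS model is adapted analogously from the same target-domain text.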
