Towards Transfer Learning for End-to-End Speech Synthesis from Deep Pre-Trained Language Models

06/17/2019
by   Wei Fang, et al.

Modern text-to-speech (TTS) systems are able to generate audio that sounds almost as natural as human speech. However, the bar for developing high-quality TTS systems remains high, since a sizable set of studio-quality <text, audio> pairs is usually required. Compared to the commercial data used to develop state-of-the-art systems, publicly available data are usually worse in both quality and size. Audio generated by TTS systems trained on publicly available data tends not only to sound less natural, but also to exhibit more background noise. In this work, we aim to lower TTS systems' reliance on high-quality data by providing them, during training, with the textual knowledge extracted by deep pre-trained language models. In particular, we investigate the use of BERT to assist the training of Tacotron-2, a state-of-the-art TTS system consisting of an encoder and an attention-based decoder. BERT representations, learned from large amounts of unlabeled text, have been shown to contain rich semantic and syntactic information about the input, and could potentially be leveraged by a TTS system to compensate for the lack of high-quality data. We incorporate BERT as a parallel branch alongside the Tacotron-2 encoder, with its own attention head: an input text is passed simultaneously into BERT and into the Tacotron-2 encoder, and the representations extracted by the two branches are concatenated and fed to the decoder. In this preliminary study, we have not found that incorporating BERT into Tacotron-2 produces speech that is perceivably more natural or cleaner, but we do observe improvements in other respects: the model becomes significantly better at knowing when to stop decoding, so there is much less babbling at the end of the synthesized audio, and training converges faster.
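The dual-branch idea described above can be illustrated at the shape level with a minimal NumPy sketch. This is not the authors' implementation: the dot-product attention, the random "memory" tensors, and the dimensions (512 for the Tacotron-2 encoder, 768 for BERT-base) are stand-in assumptions used only to show how each branch gets its own attention head and how the two context vectors are concatenated before reaching the decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory):
    """Toy dot-product attention head: weight memory timesteps
    by similarity to the query, return the weighted context."""
    weights = softmax(memory @ query)   # (T,)
    context = weights @ memory          # (d,)
    return context, weights

# Hypothetical encoder outputs (random placeholders for real model states):
taco_memory = rng.standard_normal((20, 512))  # Tacotron-2 encoder, 20 text steps
bert_memory = rng.standard_normal((24, 768))  # BERT branch, 24 subword steps

# Each branch has its own attention head, queried at every decoder step.
taco_ctx, taco_w = attend(rng.standard_normal(512), taco_memory)
bert_ctx, bert_w = attend(rng.standard_normal(768), bert_memory)

# The two context vectors are concatenated and fed to the decoder.
decoder_input = np.concatenate([taco_ctx, bert_ctx])
print(decoder_input.shape)  # (1280,)
```

Keeping a separate attention head per branch lets the decoder weight the phoneme-level Tacotron-2 states and the subword-level BERT states independently, even though the two memories have different lengths and dimensionalities.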

Related research

03/06/2022
Leveraging Pre-trained BERT for Audio Captioning
Audio captioning aims at using natural language to describe the content ...

11/23/2022
IMaSC – ICFOSS Malayalam Speech Corpus
Modern text-to-speech (TTS) systems use deep learning to synthesize spee...

01/20/2023
Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme Predictions
Large-scale pre-trained language models have been shown to be helpful in...

08/30/2018
Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis
Although end-to-end text-to-speech (TTS) models such as Tacotron have sh...

03/03/2023
Miipher: A Robust Speech Restoration Model Integrating Self-Supervised Speech and Text Representations
Speech restoration (SR) is a task of converting degraded speech signals ...

08/28/2022
Training Text-To-Speech Systems From Synthetic Data: A Practical Approach For Accent Transfer Tasks
Transfer tasks in text-to-speech (TTS) synthesis - where one or more asp...

01/24/2022
Polyphone disambiguation and accent prediction using pre-trained language models in Japanese TTS front-end
Although end-to-end text-to-speech (TTS) models can generate natural spe...
