Initial investigation of an encoder-decoder end-to-end TTS framework using marginalization of monotonic hard latent alignments

08/30/2019
by   Yusuke Yasuda, et al.

End-to-end text-to-speech (TTS) synthesis is a method that directly converts input text to output acoustic features using a single network. Recent advances in end-to-end TTS are largely due to a key technique, the attention mechanism, and all successful methods proposed so far have been based on soft attention. However, although network structures are becoming increasingly complex, end-to-end TTS systems with soft attention may still fail to learn and to predict an accurate alignment between input and output, possibly because soft attention is too flexible. We therefore propose an approach with more explicit but natural constraints, suited to speech signals, that makes alignment learning and prediction in end-to-end TTS more robust. The proposed system, using a constrained alignment scheme borrowed from segment-to-segment neural transduction (SSNT), directly calculates the joint probability of acoustic features and alignment given an input text. The alignment is designed to be hard and monotonically increasing, reflecting the nature of speech, and is treated as a latent variable that is marginalized out during training. During prediction, both the alignment and the acoustic features can be generated from the learned probability distributions. The advantages of our approach are that many modules needed for soft attention can be removed and that the end-to-end TTS model can be trained with a single likelihood function. To our knowledge, ours is the first end-to-end TTS approach without a soft attention mechanism.
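The marginalization over hard monotonic alignments described above can be computed exactly with a forward recursion, since at each output frame the alignment either stays on the current input unit or advances by one. The sketch below is a minimal toy illustration of that idea, not the paper's actual model: the function name, the fixed start at the first input unit, and the per-position stay/move probabilities are all illustrative assumptions.

```python
import numpy as np

def ssnt_log_marginal(log_emit, log_stay, log_move):
    """Toy forward recursion marginalizing a hard monotonic alignment.

    log_emit : (T, J) array, log p(y_t | aligned to input unit j)
    log_stay : (J,) array, log prob. the alignment stays at j
    log_move : (J,) array, log prob. the alignment advances j -> j+1
    Returns log p(y_1..y_T) with the alignment summed out, assuming
    the alignment starts at j = 0 and may end at any position.
    """
    T, J = log_emit.shape
    alpha = np.full((T, J), -np.inf)
    alpha[0, 0] = log_emit[0, 0]  # alignment starts on the first input unit
    for t in range(1, T):
        for j in range(J):
            stay = alpha[t - 1, j] + log_stay[j]
            move = alpha[t - 1, j - 1] + log_move[j - 1] if j > 0 else -np.inf
            # sum the two monotonic paths into (t, j), then emit frame t
            alpha[t, j] = np.logaddexp(stay, move) + log_emit[t, j]
    return np.logaddexp.reduce(alpha[T - 1])
```

Because the recursion sums rather than maximizes over paths, training can optimize this single likelihood directly; replacing `logaddexp` with `max` would instead give the best (Viterbi) alignment, which is what one would use at prediction time.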

