A learned conditional prior for the VAE acoustic space of a TTS system

06/14/2021
by Penny Karanasou, et al.

Many factors influence speech, yielding different renditions of a given sentence. Generative models, such as variational autoencoders (VAEs), capture this variability and allow multiple renditions of the same sentence via sampling. The degree of prosodic variability depends heavily on the prior used when sampling. In this paper, we propose a novel method to compute an informative prior for the VAE latent space of a neural text-to-speech (TTS) system, aiming to sample with more prosodic variability while gaining control over the structure of the latent space. By using as prior the posterior distribution of a secondary VAE, conditioned on a speaker vector, we can sample from the primary VAE with the conditioning explicitly taken into account, yielding samples from a condition-specific (i.e. speaker-specific) region of the latent space. A formal preference test demonstrates a significant preference for the proposed approach over a standard conditional VAE. We also provide visualisations of the latent space, in which well-separated condition-specific clusters appear, as well as ablation studies to better understand the behaviour of the system.
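To make the sampling scheme concrete, here is a minimal numpy sketch of drawing latents from a speaker-conditional prior. All names, dimensions, and the linear parameterisation are hypothetical stand-ins: in the paper, the conditional prior is the posterior of a trained secondary VAE, not a fixed linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

D_SPK, D_LAT = 8, 4  # speaker-vector and latent dimensions (illustrative)

# Hypothetical stand-in for the secondary VAE's posterior network: it maps
# a speaker vector to the mean and log-variance of a Gaussian over the
# primary VAE's latent space (a random linear map here, for brevity).
W_mu = rng.normal(scale=0.1, size=(D_LAT, D_SPK))
W_lv = rng.normal(scale=0.1, size=(D_LAT, D_SPK))

def conditional_prior(speaker_vec):
    """Return (mu, log_var) of the learned prior, given a speaker vector."""
    return W_mu @ speaker_vec, W_lv @ speaker_vec

def sample_latent(speaker_vec, n_samples=1):
    """Draw latents from the speaker-conditional prior (reparameterised)."""
    mu, log_var = conditional_prior(speaker_vec)
    eps = rng.normal(size=(n_samples, D_LAT))
    return mu + np.exp(0.5 * log_var) * eps

speaker = rng.normal(size=D_SPK)        # one speaker embedding
z = sample_latent(speaker, n_samples=5)  # 5 latents for this speaker
print(z.shape)  # (5, 4)
```

Because the mean and variance depend on the speaker vector, repeated sampling stays within that speaker's region of the latent space, which is what produces the well-separated condition-specific clusters described above.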

Related research

06/30/2021 · Interventional Assays for the Latent Space of Autoencoders
The encoders and decoders of autoencoders effectively project the input ...

06/10/2019 · Using generative modelling to produce varied intonation for speech synthesis
Unlike human speakers, typical text-to-speech (TTS) systems are unable t...

09/22/2021 · LDC-VAE: A Latent Distribution Consistency Approach to Variational AutoEncoders
Variational autoencoders (VAEs), as an important aspect of generative mo...

09/25/2019 · Disentangling Speech and Non-Speech Components for Building Robust Acoustic Models from Found Data
In order to build language technologies for majority of the languages, i...

11/28/2022 · Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers
Deep neural networks are susceptible to shortcut learning, using simple ...

09/07/2020 · Ordinal-Content VAE: Isolating Ordinal-Valued Content Factors in Deep Latent Variable Models
In deep representational learning, it is often desired to isolate a part...

04/28/2022 · Oracle Guided Image Synthesis with Relative Queries
Isolating and controlling specific features in the outputs of generative...
