Scalable Factorized Hierarchical Variational Autoencoder Training

04/09/2018
by Wei-Ning Hsu, et al.

Deep generative models have achieved great success in unsupervised learning with the ability to capture complex nonlinear relationships between latent generating factors and observations. Among them, a factorized hierarchical variational autoencoder (FHVAE) is a variational inference-based model that formulates a hierarchical generative process for sequential data. Specifically, an FHVAE model can learn disentangled and interpretable representations, which have been proven useful for numerous speech applications, such as speaker verification, robust speech recognition, and voice conversion. However, as we will elaborate in this paper, the training algorithm proposed in the original paper is not scalable to datasets of thousands of hours, which makes this model less applicable on a larger scale. After identifying limitations in terms of runtime, memory, and hyperparameter optimization, we propose a hierarchical sampling training algorithm to address all three issues. Our proposed method is evaluated comprehensively on a wide variety of datasets, ranging from 3 to 1,000 hours and involving different types of generating factors, such as recording conditions and noise types. In addition, we present a new visualization method for qualitatively evaluating the performance with respect to interpretability and disentanglement. Models trained with our proposed algorithm demonstrate the desired characteristics on all the datasets.
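
The abstract does not spell out the training procedure, so the following is a minimal, hypothetical sketch of the hierarchical sampling idea it describes: rather than keeping sequence-level posterior statistics for every sequence in a large corpus, sample a small set of sequences per training stage, cache statistics only for that subset, and train on segments drawn from it. All function and variable names here (hierarchical_sampling_train, train_step, mu2_cache, and the toy data) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(segment, mu2):
    # Placeholder for the actual FHVAE update (inference networks, ELBO,
    # and any term that uses the cached sequence-level statistics).
    return 0.9 * mu2 + 0.1 * rng.standard_normal(mu2.shape)

def hierarchical_sampling_train(dataset, n_stages=10, seqs_per_stage=4,
                                steps_per_stage=50, z2_dim=8):
    """dataset: list of sequences, each an array of shape (n_segments, feat_dim)."""
    for stage in range(n_stages):
        # 1) Sequence-level sampling: draw a small subset of sequences.
        idx = rng.choice(len(dataset), size=min(seqs_per_stage, len(dataset)),
                         replace=False)
        # 2) Keep a sequence-level cache only for the sampled subset, so memory
        #    grows with seqs_per_stage rather than with the full corpus size.
        mu2_cache = {int(i): np.zeros(z2_dim) for i in idx}
        # 3) Segment-level training restricted to the sampled sequences.
        for _ in range(steps_per_stage):
            seq_id = int(rng.choice(idx))
            segments = dataset[seq_id]
            seg = segments[rng.integers(len(segments))]
            mu2_cache[seq_id] = train_step(seg, mu2_cache[seq_id])
    return mu2_cache

# Toy usage: 20 "sequences" of random 80-dim feature segments.
toy_data = [rng.standard_normal((int(rng.integers(5, 15)), 80)) for _ in range(20)]
hierarchical_sampling_train(toy_data)
```

The design point the sketch tries to convey is that the per-sequence cache is re-allocated at every stage for the sampled subset only, which is what keeps memory and runtime independent of the total number of training sequences.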

