Improving VAE generations of multimodal data through data-dependent conditional priors

11/25/2019
by   Frantzeska Lavda, et al.

One of the major shortcomings of variational autoencoders is their inability to produce generations from the individual modalities of data originating from mixture distributions. This is primarily due to the use of a simple isotropic Gaussian as the prior for the latent code in the ancestral sampling procedure during generation. We propose a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), which learns to differentiate between the individual mixture components and therefore enables generation from the individual data clusters. We assume a two-level generative process in which a continuous (Gaussian) latent variable is sampled conditionally on a discrete (categorical) latent component. The new variational objective naturally couples the learning of the posterior and prior conditionals with the learning of the latent categories, encoding the multimodality of the original data in an unsupervised manner. The data-dependent conditional priors are then used to sample the continuous latent code when generating new samples from the individual mixture components corresponding to the multimodal structure of the original data. Our experimental results illustrate the generative performance of our new model compared with multiple baselines.
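The two-level ancestral sampling procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names (`pi`, `mus`, `log_sigmas`) are hypothetical, and in CP-VAE the conditional-prior parameters are learned from data rather than fixed.

```python
import numpy as np

def sample_cp_vae_latent(pi, mus, log_sigmas, rng):
    """Two-level ancestral sampling sketch.

    pi         : (K,) mixing weights over the K latent categories
    mus        : (K, D) means of the data-dependent conditional priors
    log_sigmas : (K, D) log standard deviations of the conditional priors
    """
    # Level 1: draw a discrete component c ~ Categorical(pi)
    c = rng.choice(len(pi), p=pi)
    # Level 2: draw the continuous code z ~ N(mu_c, diag(sigma_c^2))
    z = mus[c] + np.exp(log_sigmas[c]) * rng.standard_normal(mus.shape[1])
    return c, z

# Illustrative parameters (hypothetical, not learned values)
rng = np.random.default_rng(0)
K, D = 3, 2
pi = np.array([0.5, 0.3, 0.2])
mus = np.arange(K * D, dtype=float).reshape(K, D)
log_sigmas = np.full((K, D), -1.0)

c, z = sample_cp_vae_latent(pi, mus, log_sigmas, rng)
```

In the full model, `z` would then be passed through the decoder network to produce a sample from the mixture component selected by `c`.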

Related research

06/28/2022 — Latent Combinational Game Design
We present an approach for generating playable games that blend a given ...

06/08/2020 — tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder
Variational autoencoders (VAEs) are a powerful class of deep generative ...

10/19/2020 — Learning Optimal Conditional Priors For Disentangled Representations
A large part of the literature on learning disentangled representations ...

06/08/2020 — Variational Variance: Simple and Reliable Predictive Variance Parameterization
An often overlooked sleight of hand performed with variational autoencod...

01/06/2021 — Cauchy-Schwarz Regularized Autoencoder
Recent work in unsupervised learning has focused on efficient inference ...

11/19/2017 — Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space
This paper explores image caption generation using conditional variation...

09/17/2020 — Discond-VAE: Disentangling Continuous Factors from the Discrete
We propose a variant of VAE capable of disentangling both variations wit...
