Avoiding Latent Variable Collapse With Generative Skip Models

07/12/2018
by Adji B. Dieng, et al.

Variational autoencoders (VAEs) learn distributions of high-dimensional data. They model data by introducing a deep latent-variable model and then maximizing a lower bound of the log marginal likelihood. While VAEs can capture complex distributions, they also suffer from an issue known as "latent variable collapse." Specifically, the lower bound involves an approximate posterior of the latent variables; this posterior "collapses" when it is set equal to the prior, i.e., when the posterior is independent of the data. While VAEs learn good generative models, latent variable collapse prevents them from learning useful representations. In this paper, we propose a new way to avoid latent variable collapse. We expand the model class to one that includes skip connections; these connections enforce strong links between the latent variables and the likelihood function. We study these generative skip models both theoretically and empirically. Theoretically, we prove that skip models increase the mutual information between the observations and the inferred latent variables. Empirically, on both images (MNIST and Omniglot) and text (Yahoo), we show that generative skip models lead to less collapse than existing VAE architectures.
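The core architectural idea can be illustrated with a minimal sketch: instead of feeding the latent variable z only into the first decoder layer, a skip model also routes z directly into later layers and the output, so the likelihood cannot ignore it. The layer sizes, weight shapes, and helper names below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions chosen for illustration.
latent_dim, hidden_dim, data_dim = 8, 32, 64

# Standard decoder path: z enters only at the first layer.
W1 = rng.normal(scale=0.1, size=(latent_dim, hidden_dim))
W2 = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_out = rng.normal(scale=0.1, size=(hidden_dim, data_dim))

# Skip connections: z is also wired straight into deeper layers.
U2 = rng.normal(scale=0.1, size=(latent_dim, hidden_dim))   # z -> second layer
U_out = rng.normal(scale=0.1, size=(latent_dim, data_dim))  # z -> output logits

def skip_decoder(z):
    """Map latent z to data-space logits; z feeds every layer, not just the first."""
    h1 = relu(z @ W1)
    h2 = relu(h1 @ W2 + z @ U2)        # skip connection from z
    logits = h2 @ W_out + z @ U_out    # skip connection from z
    return logits

z = rng.normal(size=(4, latent_dim))
print(skip_decoder(z).shape)  # (4, 64)
```

Because z contributes additively at every layer, the gradient of the likelihood with respect to z has a direct path that intermediate layers cannot zero out, which is the mechanism behind the increased mutual information the paper proves.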

