Preventing Posterior Collapse with delta-VAEs

01/10/2019
by Ali Razavi, et al.

Due to the phenomenon of "posterior collapse," current latent variable generative models pose a challenging design choice: either weaken the capacity of the decoder, or augment the objective so that it does not only maximize the likelihood of the data. In this paper, we propose an alternative that utilizes the most powerful generative models as decoders while still optimizing the variational lower bound and ensuring that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent-variable models, our approach resembles the classic representation-learning approach of slow feature analysis. We demonstrate the efficacy of our approach at modeling text on LM1B and at modeling images: learning representations, improving sample quality, and achieving state-of-the-art log-likelihood on CIFAR-10 and ImageNet 32×32.
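The core idea, a posterior constrained to stay at least δ nats from the prior, can be illustrated with a diagonal-Gaussian sketch. This is not the paper's exact parameterization; it is a minimal example assuming a standard-normal prior, where capping the posterior's scale at a hypothetical `sigma_max < 1` guarantees a strictly positive per-dimension KL, so the rate cannot collapse to zero.

```python
import math

def gaussian_kl(mu, sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ), per latent dimension."""
    return 0.5 * (sigma ** 2 + mu ** 2 - 1.0) - math.log(sigma)

def delta_constrained_sigma(raw, sigma_max):
    """Squash an unconstrained encoder output into (0, sigma_max].

    For sigma <= sigma_max < 1, gaussian_kl(0, sigma) is decreasing in
    sigma, so the KL is bounded below by gaussian_kl(0, sigma_max).
    """
    return sigma_max / (1.0 + math.exp(-raw))  # sigmoid * sigma_max

# Illustrative choice (not from the paper): sigma_max = 0.5 commits the
# model to at least `delta` nats of rate per dimension, even at the
# KL-minimizing posterior (mu = 0, sigma = sigma_max).
sigma_max = 0.5
delta = gaussian_kl(0.0, sigma_max)
```

With `sigma_max = 0.5`, every admissible posterior sits at least `delta ≈ 0.318` nats from the prior, so a powerful decoder cannot drive the latent rate to zero.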


Related research

- Avoiding Latent Variable Collapse With Generative Skip Models (07/12/2018): Variational autoencoders (VAEs) learn distributions of high-dimensional ...
- Improving latent variable descriptiveness with AutoGen (06/12/2018): Powerful generative models, particularly in Natural Language Modelling, ...
- Likelihood Almost Free Inference Networks (11/20/2017): Variational inference for latent variable models is prevalent in various...
- Information Theoretic Lower Bounds on Negative Log Likelihood (04/12/2019): In this article we use rate-distortion theory, a branch of information t...
- PixelVAE++: Improved PixelVAE with Discrete Prior (08/26/2019): Constructing powerful generative models for natural images is a challeng...
- Identifying through Flows for Recovering Latent Representations (09/27/2019): Identifiability, or recovery of the true latent representations from whi...
- A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text (09/02/2019): When trained effectively, the Variational Autoencoder (VAE) is both a po...
