A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text

09/02/2019
by Bohan Li, et al.

When trained effectively, the Variational Autoencoder (VAE) is both a powerful language model and an effective representation learning framework. In practice, however, VAEs are trained with the evidence lower bound (ELBO) as a surrogate objective for the intractable marginal data likelihood. This approach to training is unstable, frequently converging to a degenerate local optimum known as posterior collapse. In this paper, we investigate a simple fix for posterior collapse that yields surprisingly effective results. The combination of two known heuristics, previously considered only in isolation, substantially improves held-out likelihood, reconstruction, and latent representation learning compared with previous state-of-the-art methods. More interestingly, while our experiments demonstrate superiority on these principal evaluations, our method obtains a worse ELBO. We use these results to argue that the typical surrogate objective for VAEs may not be sufficient, or even appropriate, for balancing the goals of representation learning and data distribution modeling.
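For concreteness, the ELBO surrogate objective the abstract refers to can be sketched as below. This is a generic illustration of the standard VAE objective with a Gaussian prior, not the paper's specific fix; the function names and the `beta` weight (a common KL-weighting heuristic) are illustrative assumptions.

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    mu and log_var are lists of per-dimension posterior parameters."""
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv
        for m, lv in zip(mu, log_var)
    )

def elbo(recon_log_lik, mu, log_var, beta=1.0):
    """ELBO = E_q[log p(x|z)] - beta * KL( q(z|x) || p(z) ).
    beta=1 is the standard surrogate for log p(x); beta < 1 down-weights
    the KL term, one common heuristic against posterior collapse."""
    return recon_log_lik - beta * kl_to_standard_normal(mu, log_var)

# Posterior collapse corresponds to q(z|x) matching the prior exactly,
# making the KL term (and the latent code's information content) zero:
collapsed_kl = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])
```

Note that maximizing this quantity trades reconstruction quality against KL regularization, which is exactly the tension the abstract highlights: a model can score well on reconstruction and representation metrics while attaining a worse ELBO.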

