
Preventing Posterior Collapse in Sequence VAEs with Pooling

11/10/2019
by Teng Long, et al. (McGill University; Borealis AI)

Variational Autoencoders (VAEs) hold great potential for modelling text, as they could in theory separate high-level semantic and syntactic properties from local regularities of natural language. In practice, however, VAEs with autoregressive decoders often suffer from posterior collapse, a phenomenon where the model learns to ignore the latent variables, causing the sequence VAE to degenerate into a plain language model. Previous work attempts to solve this problem with complex architectural changes or costly optimization schemes. In this paper, we argue that posterior collapse is caused in part by the encoder network failing to capture the variability of the inputs. We verify this hypothesis empirically and propose a straightforward fix: pooling over the encoder's hidden states. This simple technique effectively prevents posterior collapse, allowing the model to achieve significantly better data log-likelihood than standard sequence VAEs. Compared to the previous state of the art in preventing posterior collapse, we achieve comparable performance while being significantly faster.
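To make the proposed fix concrete, below is a minimal PyTorch sketch (not the authors' exact implementation) of a sequence-VAE encoder that pools over all recurrent hidden states, rather than using only the final state, to parameterize the approximate posterior q(z|x). The class name, the GRU encoder, the hyperparameters, and the choice of mean pooling are illustrative assumptions; max pooling over the same masked hidden states is a natural drop-in alternative.

```python
import torch
import torch.nn as nn

class PooledSeqVAEEncoder(nn.Module):
    """Sketch: parameterize q(z|x) from a pooled summary of ALL encoder
    hidden states instead of the final state alone. Names and sizes are
    illustrative, not the paper's exact architecture."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Linear heads mapping the pooled representation to q(z|x) parameters.
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, tokens, lengths):
        # tokens: (batch, seq_len) token ids; lengths: (batch,) true lengths
        h, _ = self.rnn(self.embed(tokens))           # (batch, seq_len, hidden)
        # Mask out padding positions so they do not contribute to the pool.
        mask = (torch.arange(tokens.size(1), device=tokens.device)[None, :]
                < lengths[:, None]).unsqueeze(-1)     # (batch, seq_len, 1)
        # Mean-pool over valid timesteps (max pooling would also work here).
        pooled = (h * mask).sum(dim=1) / lengths[:, None].float()
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar
```

The intuition behind this design: because every timestep feeds the posterior network directly, encodings of different inputs are less likely to collapse onto similar final-state representations, which is the encoder failure the abstract identifies as a partial cause of posterior collapse.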

Related Research

01/16/2019 · Lagging Inference Networks and Posterior Collapse in Variational Autoencoders
The variational autoencoder (VAE) is a popular combination of deep laten...

08/05/2021 · Finetuning Pretrained Transformers into Variational Autoencoders
Text variational autoencoders (VAEs) are notorious for posterior collaps...

04/30/2020 · Preventing Posterior Collapse with Levenshtein Variational Autoencoder
Variational autoencoders (VAEs) are a standard framework for inducing la...

11/06/2019 · Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse
Posterior collapse in Variational Autoencoders (VAEs) arises when the va...

05/31/2019 · On the Necessity and Effectiveness of Learning the Prior of Variational Auto-Encoder
Using powerful posterior distributions is a popular approach to achievin...

03/07/2022 · Hierarchical Sketch Induction for Paraphrase Generation
We propose a generative model of paraphrase generation, that encourages ...

09/02/2019 · A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text
When trained effectively, the Variational Autoencoder (VAE) is both a po...