Improving Variational Autoencoders with Density Gap-based Regularization

11/01/2022
by   Jianfei Zhang, et al.

Variational autoencoders (VAEs) are a powerful unsupervised learning framework in NLP for latent representation learning and latent-directed generation. The classic optimization goal of VAEs is to maximize the Evidence Lower Bound (ELBO), which consists of a conditional likelihood term for generation and a negative Kullback-Leibler (KL) divergence term for regularization. In practice, optimizing the ELBO often drives the posterior distributions of all samples toward the same degenerate local optimum, a failure mode known as posterior collapse or KL vanishing. Several effective methods have been proposed to prevent posterior collapse in VAEs, but we observe that they in essence trade posterior collapse for the hole problem, i.e., a mismatch between the aggregated posterior distribution and the prior distribution. To this end, we introduce new training objectives that tackle both problems through a novel regularization based on the probability density gap between the aggregated posterior distribution and the prior distribution. Through experiments on language modeling, latent space visualization, and interpolation, we show that our proposed method solves both problems effectively and thus outperforms existing methods in latent-directed generation. To the best of our knowledge, we are the first to jointly tackle the hole problem and posterior collapse.
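For reference, the ELBO described above takes its standard form (notation assumed here, since the abstract does not define symbols):

$$\mathrm{ELBO}(x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right)$$

where $q_\phi(z \mid x)$ is the approximate posterior, $p_\theta(x \mid z)$ the decoder likelihood, and $p(z)$ the prior. Posterior collapse corresponds to $q_\phi(z \mid x) \approx p(z)$ for all $x$, while the hole problem is a mismatch between the aggregated posterior $q_\phi(z) = \mathbb{E}_{x}\left[q_\phi(z \mid x)\right]$ and $p(z)$.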
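The abstract does not spell out the regularizer's exact form, but a minimal sketch of the general idea follows, assuming PyTorch, diagonal-Gaussian posteriors, and a standard-normal prior. The batch-mixture estimate of the aggregated posterior, the squared log-density gap, and the function names are all illustrative assumptions, not the paper's actual objective.

```python
import math
import torch

def gaussian_log_prob(z, mu, logvar):
    """log N(z; mu, diag(exp(logvar))), summed over the latent dimension."""
    return -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                   + math.log(2 * math.pi)).sum(-1)

def density_gap_penalty(mu, logvar):
    """Hypothetical density-gap regularizer (illustrative, not the paper's exact form).

    mu, logvar: (B, D) parameters of the per-sample Gaussian posteriors.
    Estimates the aggregated posterior q(z) as the batch mixture
    (1/B) * sum_j N(z; mu_j, var_j) and penalizes the squared gap between
    log q(z) and the standard-normal log p(z) at one sample per posterior.
    """
    B, _ = mu.shape
    # Reparameterized samples, one per posterior: (B, D)
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
    # log q(z_i) under the batch mixture, via logsumexp over the B components
    log_q = torch.logsumexp(
        gaussian_log_prob(z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0)),
        dim=1,
    ) - math.log(B)
    # log p(z_i) under the standard-normal prior
    log_p = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(-1)
    return ((log_q - log_p) ** 2).mean()

# Usage with hypothetical encoder outputs:
mu, logvar = torch.randn(32, 16), torch.randn(32, 16)
print(density_gap_penalty(mu, logvar))
```

In training, such a term would presumably be added to the reconstruction and KL losses with some weight, e.g. `loss = recon + kl + lam * density_gap_penalty(mu, logvar)`; the weighting scheme is likewise an assumption, not taken from the paper.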


Related research

- 10/28/2021 · Preventing posterior collapse in variational autoencoders for text generation via decoder regularization
  "Variational autoencoders trained to minimize the reconstruction error ar..."
- 07/20/2020 · Generalizing Variational Autoencoders with Hierarchical Empirical Bayes
  "Variational Autoencoders (VAEs) have experienced recent success as data-..."
- 06/07/2017 · InfoVAE: Information Maximizing Variational Autoencoders
  "It has been previously observed that variational autoencoders tend to ig..."
- 09/27/2019 · Identifying through Flows for Recovering Latent Representations
  "Identifiability, or recovery of the true latent representations from whi..."
- 08/31/2018 · Spherical Latent Spaces for Stable Variational Autoencoders
  "A hallmark of variational autoencoders (VAEs) for text processing is the..."
- 07/17/2020 · Learning Posterior and Prior for Uncertainty Modeling in Person Re-Identification
  "Data uncertainty in practical person reID is ubiquitous, hence it requir..."
- 04/21/2020 · Discrete Variational Attention Models for Language Generation
  "Variational autoencoders have been widely applied for natural language g..."
