Posterior Collapse in Linear Conditional and Hierarchical Variational Autoencoders

06/08/2023
by Hien Dang, et al.

The posterior collapse phenomenon in variational autoencoders (VAEs), where the variational posterior distribution closely matches the prior, can hinder the quality of the learned latent variables. As a consequence of posterior collapse, the latent variables extracted by the encoder preserve less information from the input data and thus fail to provide meaningful representations for the decoder's reconstruction process. While this phenomenon has been actively studied because of its impact on VAE performance, the theory of posterior collapse remains underdeveloped, especially beyond standard VAEs. In this work, we advance the theoretical understanding of posterior collapse to two important and prevalent yet less studied classes of VAEs: conditional VAEs and hierarchical VAEs. Specifically, via a non-trivial theoretical analysis of linear conditional VAEs and hierarchical VAEs with two levels of latent variables, we prove that the causes of posterior collapse in these models include the correlation between the input and output of the conditional VAE and the effect of the learnable encoder variance in the hierarchical VAE. We empirically validate our theoretical findings for linear conditional and hierarchical VAEs and demonstrate that these results are also predictive for non-linear cases.
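For intuition, posterior collapse can be observed directly through the per-dimension KL term of the ELBO: a latent dimension has collapsed when KL(q(z_i|x) || N(0,1)) is near zero for all inputs, i.e. the posterior ignores x. The sketch below is not the authors' implementation; it is a minimal, self-contained example (toy data, illustrative hyperparameters, a learnable input-independent encoder variance) showing how such a diagnostic can be computed for a small linear VAE.

```python
# Minimal sketch (assumptions: toy Gaussian data, standard normal prior,
# linear encoder/decoder, learnable input-independent encoder variance).
# A latent dimension i has "collapsed" when its average KL is ~0,
# i.e. q(z_i|x) matches the prior N(0,1) regardless of the input x.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_x, d_z, n = 20, 5, 2048
X = torch.randn(n, d_x) @ torch.randn(d_x, d_x) * 0.5   # toy correlated data

enc_mu = nn.Linear(d_x, d_z)                     # linear encoder mean
enc_logvar = nn.Parameter(torch.zeros(d_z))      # learnable encoder log-variance
dec = nn.Linear(d_z, d_x)                        # linear decoder

params = list(enc_mu.parameters()) + [enc_logvar] + list(dec.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

for step in range(2000):
    mu = enc_mu(X)
    std = (0.5 * enc_logvar).exp()
    z = mu + std * torch.randn_like(mu)          # reparameterization trick
    recon = dec(z)
    rec_loss = ((recon - X) ** 2).sum(dim=1).mean()
    # KL(N(mu, sigma^2) || N(0, 1)) per latent dimension, averaged over the batch
    kl_per_dim = 0.5 * (mu ** 2 + std ** 2 - 1 - enc_logvar).mean(dim=0)
    loss = rec_loss + kl_per_dim.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Dimensions with KL near zero carry no information about x (posterior collapse).
print("per-dimension KL:", kl_per_dim.detach())
```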

Related research:

02/20/2023 - Analyzing the Posterior Collapse in Hierarchical Variational Autoencoders
Hierarchical Variational Autoencoders (VAEs) are among the most popular ...

11/06/2019 - Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse
Posterior collapse in Variational Autoencoders (VAEs) arises when the va...

06/15/2023 - Tree Variational Autoencoders
We propose a new generative hierarchical clustering model that learns a ...

09/19/2019 - Improved Variational Neural Machine Translation by Promoting Mutual Information
Posterior collapse plagues VAEs for text, especially for conditional tex...

10/28/2021 - Preventing posterior collapse in variational autoencoders for text generation via decoder regularization
Variational autoencoders trained to minimize the reconstruction error ar...

02/18/2020 - A Neural Network Based on First Principles
In this paper, a Neural network is derived from first principles, assumi...

11/10/2019 - Preventing Posterior Collapse in Sequence VAEs with Pooling
Variational Autoencoders (VAEs) hold great potential for modelling text,...
