
Analyzing the Posterior Collapse in Hierarchical Variational Autoencoders

02/20/2023
by Anna Kuzina, et al.

Hierarchical Variational Autoencoders (VAEs) are among the most popular likelihood-based generative models. There is a broad consensus that top-down hierarchical VAEs make it possible to learn deep latent structures effectively and to avoid problems such as posterior collapse. Here, we show that this is not necessarily the case and that the problem of collapsing posteriors remains. To discourage posterior collapse, we propose a new deep hierarchical VAE with a partly fixed encoder: specifically, we use the Discrete Cosine Transform to obtain the top latent variables. In a series of experiments, we observe that the proposed modification achieves better utilization of the latent space. Furthermore, we demonstrate that the proposed approach can be useful for compression and for robustness to adversarial attacks.
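
The abstract specifies only that the top latent variables are obtained with the Discrete Cosine Transform rather than a learned encoder; the sketch below is an illustrative guess at that idea, not the authors' implementation. It treats the k x k lowest-frequency 2D DCT coefficients of an image as fixed top latents and shows the coarse reconstruction that the deeper, learned stochastic layers of a hierarchical VAE would then refine. The function names, the block size k, and the use of SciPy's dctn/idctn are assumptions made for illustration.

```python
# Minimal sketch (assumption, not the paper's code): a fixed, DCT-based encoder
# for the top latent variables of a hierarchical VAE.
import numpy as np
from scipy.fft import dctn, idctn


def dct_top_latents(x, k=8):
    """Deterministic top-level 'encoder': keep the k x k lowest-frequency
    2D DCT coefficients of a single-channel image as the top latents."""
    coeffs = dctn(x, norm="ortho")       # 2D DCT-II of the image
    return coeffs[:k, :k].reshape(-1)    # low-frequency block, flattened


def dct_top_reconstruction(z_top, image_shape, k=8):
    """Coarse image implied by the fixed top latents (inverse DCT with all
    higher-frequency coefficients set to zero); in a full hierarchical VAE,
    the deeper learned stochastic layers would refine this low-pass sketch."""
    coeffs = np.zeros(image_shape)
    coeffs[:k, :k] = z_top.reshape(k, k)
    return idctn(coeffs, norm="ortho")


# Usage on a random 32x32 "image": 64 fixed top latents and their reconstruction.
x = np.random.rand(32, 32)
z_top = dct_top_latents(x, k=8)
x_coarse = dct_top_reconstruction(z_top, x.shape, k=8)
```

Because the DCT encoder has no parameters, the top latents cannot be ignored by the optimizer, which is the intuition behind using it to discourage posterior collapse.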

Related research:

09/22/2021 · LDC-VAE: A Latent Distribution Consistency Approach to Variational AutoEncoders
06/16/2021 · Discrete Auto-regressive Variational Attention Models for Text Modeling
10/14/2021 · The Neglected Sibling: Isotropic Gaussian Posterior for VAE
02/19/2020 · Hierarchical Quantized Autoencoders
11/16/2016 · Deep Variational Inference Without Pixel-Wise Reconstruction
07/20/2020 · Generalizing Variational Autoencoders with Hierarchical Empirical Bayes
02/09/2023 · Trading Information between Latents in Hierarchical Variational Autoencoders