
Analyzing the Posterior Collapse in Hierarchical Variational Autoencoders

by Anna Kuzina, et al.

Hierarchical Variational Autoencoders (VAEs) are among the most popular likelihood-based generative models. There is a general consensus that top-down hierarchical VAEs allow one to effectively learn deep latent structures and avoid problems such as posterior collapse. Here, we show that this is not necessarily the case and that the problem of collapsing posteriors remains. To discourage posterior collapse, we propose a new deep hierarchical VAE with a partly fixed encoder; specifically, we use the Discrete Cosine Transform to obtain the top latent variables. In a series of experiments, we observe that the proposed modification achieves better utilization of the latent space. Further, we demonstrate that the proposed approach can be useful for compression and for robustness to adversarial attacks.
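The abstract's core idea of a partly fixed encoder can be illustrated with a minimal sketch: a deterministic, non-learned map from an image to a small block of low-frequency DCT coefficients, standing in for the top latent variables. This is only an assumption-laden illustration of the general technique (the function names `dct_top_latents` and `dct_reconstruct` and the choice of `k` are hypothetical, not the paper's exact construction):

```python
import numpy as np
from scipy.fft import dctn, idctn


def dct_top_latents(image, k=8):
    """Fixed (non-learned) encoder: keep the k x k lowest-frequency
    2D DCT coefficients. The orthonormal DCT concentrates most image
    energy in this corner, so the truncated block is a compact,
    deterministic code that cannot collapse during training."""
    coeffs = dctn(image, norm="ortho")
    return coeffs[:k, :k]


def dct_reconstruct(latents, shape):
    """Approximate decoder: zero-pad the kept coefficients back to the
    full spectrum and apply the inverse DCT."""
    full = np.zeros(shape)
    k0, k1 = latents.shape
    full[:k0, :k1] = latents
    return idctn(full, norm="ortho")


# Usage: encode a random 32x32 "image" and reconstruct it.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
z = dct_top_latents(img, k=8)        # 8x8 block of top latents
recon = dct_reconstruct(z, img.shape)
```

Because the DCT here is orthonormal, keeping all coefficients (`k` equal to the image side) reconstructs the input exactly, while smaller `k` trades reconstruction error for a more compact top-level code.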




Related papers:

- LDC-VAE: A Latent Distribution Consistency Approach to Variational AutoEncoders
  Variational autoencoders (VAEs), as an important aspect of generative mo...
- Discrete Auto-regressive Variational Attention Models for Text Modeling
  Variational autoencoders (VAEs) have been widely applied for text modeli...
- The Neglected Sibling: Isotropic Gaussian Posterior for VAE
  Deep generative models have been widely used in several areas of NLP, an...
- Hierarchical Quantized Autoencoders
  Despite progress in training neural networks for lossy image compression...
- Deep Variational Inference Without Pixel-Wise Reconstruction
  Variational autoencoders (VAEs), that are built upon deep neural network...
- Generalizing Variational Autoencoders with Hierarchical Empirical Bayes
  Variational Autoencoders (VAEs) have experienced recent success as data-...
- Trading Information between Latents in Hierarchical Variational Autoencoders
  Variational Autoencoders (VAEs) were originally motivated (Kingma We...