Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness

10/24/2021
by Dazhong Shen, et al.

As one of the most popular generative models, the Variational Autoencoder (VAE) approximates the posterior of latent variables via amortized variational inference. However, when the decoder network is sufficiently expressive, the VAE may suffer from posterior collapse; that is, it may learn uninformative latent representations. To this end, in this paper, we propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space, so that representations can be learned in a meaningful and compact manner. Specifically, we first theoretically demonstrate that controlling the distribution of the posterior's parameters across the whole dataset yields a better latent space with high diversity and low uncertainty. Then, without introducing new loss terms or modifying the training strategy, we propose to apply Dropout to the variances and Batch-Normalization to the means simultaneously, so as to regularize their distributions implicitly. Furthermore, to evaluate the generalization effect, we also apply DU-VAE to the inverse autoregressive flow based VAE (VAE-IAF) empirically. Finally, extensive experiments on three benchmark datasets clearly show that our approach can outperform state-of-the-art baselines on both likelihood estimation and downstream classification tasks.
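
To make the core idea concrete, here is a minimal, hypothetical PyTorch sketch (module and variable names such as DUVAEEncoder are my own, not taken from the paper): a Gaussian-posterior VAE encoder that applies Batch-Normalization to the posterior means and Dropout to the posterior variances before reparameterization. This is one straightforward reading of the abstract; the authors' implementation may differ in details, e.g. whether Dropout acts on the variances or the log-variances.

```python
# Minimal sketch (not the authors' code): regularize the posterior parameters
# implicitly, with Batch-Normalization on the means and Dropout on the variances,
# instead of adding a new loss term.
import torch
import torch.nn as nn


class DUVAEEncoder(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, input_dim, hidden_dim, latent_dim, dropout_p=0.3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Batch-Normalization spreads the means out across the batch (diversity).
        self.bn_mu = nn.BatchNorm1d(latent_dim)
        # Standard (inverted) dropout: zeroes some variance entries during training
        # and rescales the rest, regularizing the variance distribution (uncertainty).
        self.drop_var = nn.Dropout(p=dropout_p)

    def forward(self, x):
        h = self.backbone(x)
        mu = self.bn_mu(self.fc_mu(h))                 # BN on the means
        var = self.drop_var(self.fc_logvar(h).exp())   # Dropout on the variances
        std = (var + 1e-8).sqrt()
        z = mu + std * torch.randn_like(std)           # reparameterization trick
        return z, mu, var
```

Both operations act purely through the parameterization of the approximate posterior, so the standard VAE objective and training loop are left untouched, matching the abstract's claim that no new loss terms or training-strategy changes are required.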

Related research

10/06/2017
Learnable Explicit Density for Continuous Latent Space and Variational Inference
In this paper, we study two aspects of the variational autoencoder (VAE)...

09/29/2022
Training β-VAE by Aggregating a Learned Gaussian Posterior with a Decoupled Decoder
The reconstruction loss and the Kullback-Leibler divergence (KLD) loss i...

02/01/2021
Hierarchical Variational Autoencoder for Visual Counterfactuals
Conditional Variational Auto Encoders (VAE) are gathering significant at...

12/11/2020
Unsupervised Learning of slow features for Data Efficient Regression
Research in computational neuroscience suggests that the human brain's u...

06/01/2022
Top-down inference in an early visual cortex inspired hierarchical Variational Autoencoder
Interpreting computations in the visual cortex as learning and inference...

12/07/2020
Autoencoding Variational Autoencoder
Does a Variational AutoEncoder (VAE) consistently encode typical samples...

05/18/2020
HyperVAE: A Minimum Description Length Variational Hyper-Encoding Network
We propose a framework called HyperVAE for encoding distributions of dis...
