
Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness

by Dazhong Shen, et al. (Rutgers University; Baidu, Inc.)

As one of the most popular generative models, the Variational Autoencoder (VAE) approximates the posterior over latent variables via amortized variational inference. However, when the decoder network is sufficiently expressive, VAE training may suffer from posterior collapse; that is, the learned latent representations become uninformative. To this end, in this paper we propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space, so that representations are learned in a meaningful and compact manner. Specifically, we first show theoretically that controlling the distribution of the posterior's parameters across the whole dataset yields a better latent space, with high diversity and low uncertainty. Then, without introducing new loss terms or modifying the training strategy, we propose to apply Dropout to the variances and Batch-Normalization to the means simultaneously, regularizing their distributions implicitly. Furthermore, to evaluate how well the approach generalizes, we also apply DU-VAE to the inverse autoregressive flow-based VAE (VAE-IAF) empirically. Finally, extensive experiments on three benchmark datasets show that our approach outperforms state-of-the-art baselines on both likelihood estimation and downstream classification tasks.
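The core mechanism described above can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: the function name, the fixed Batch-Normalization scale `gamma`, and the dropout rate `p` are all assumptions. It batch-normalizes the posterior means across the batch (spreading them out for diversity) and applies inverted Dropout to the posterior variances (zeroing some dimensions, which lowers average uncertainty).

```python
import numpy as np

def du_vae_regularize(mu, var, gamma=1.0, p=0.3, rng=None, training=True):
    """Sketch of the DU-VAE idea (names and defaults are assumptions):
    Batch-Normalization on posterior means -> diverse latent codes;
    Dropout on posterior variances -> lower average uncertainty."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Normalize the means across the batch axis; a fixed scale `gamma`
    # (no learned shift) keeps the means centered with spread gamma.
    mu_bn = gamma * (mu - mu.mean(axis=0, keepdims=True)) / (
        mu.std(axis=0, keepdims=True) + 1e-8)
    if training:
        # Inverted dropout on the variances: dropped entries become 0
        # (a near-deterministic dimension); survivors scale by 1/(1-p).
        mask = (rng.random(var.shape) >= p).astype(var.dtype)
        var = var * mask / (1.0 - p)
    return mu_bn, var

# Toy batch of posterior parameters: 4 samples, 3 latent dimensions.
mu = np.arange(12, dtype=float).reshape(4, 3)
var = np.full((4, 3), 0.5)
mu_bn, var_do = du_vae_regularize(mu, var)
```

After the call, each latent dimension's means have zero batch mean and unit spread, and each variance entry is either 0 (dropped) or 0.5 / 0.7 (rescaled survivor). In the full model these operations would sit between the encoder outputs and the reparameterization step.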
