VAE with a VampPrior

05/19/2017
by Jakub M. Tomczak, et al.

Many different methods for training deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior, which we call the "Variational Mixture of Posteriors" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) whose components are variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two-layer hierarchical model and show that this architecture, with its coupled prior and posterior, learns significantly better models. The model also avoids the usual local-optima issues, related to unused latent dimensions, that plague VAEs. We provide empirical studies on six datasets, namely static and dynamic MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation-invariant setting, and results that are the best or comparable to SOTA for the approach with convolutional networks.
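The core construction is simple to state: given K learnable pseudo-inputs u_1, ..., u_K, the prior is p(z) = (1/K) * sum_k q(z | u_k), where q is the VAE's own variational posterior. Below is a minimal PyTorch sketch of this mixture density, not the authors' implementation; the class and parameter names (VampPrior, pseudo_inputs, n_pseudo) and the encoder interface (returning the mean and log-variance of a diagonal Gaussian) are assumptions for illustration.

import math
import torch
import torch.nn as nn

class VampPrior(nn.Module):
    """Mixture-of-posteriors prior: p(z) = (1/K) * sum_k q(z | u_k)."""

    def __init__(self, encoder, n_pseudo, input_dim):
        super().__init__()
        self.encoder = encoder  # shared with the VAE's inference network
        # Learnable pseudo-inputs u_1..u_K, trained jointly with the VAE
        self.pseudo_inputs = nn.Parameter(0.01 * torch.randn(n_pseudo, input_dim))

    def log_prob(self, z):
        # z: (batch, latent_dim)
        mean, logvar = self.encoder(self.pseudo_inputs)        # each (K, latent_dim)
        z = z.unsqueeze(1)                                     # (batch, 1, D)
        mean, logvar = mean.unsqueeze(0), logvar.unsqueeze(0)  # (1, K, D)
        # Diagonal-Gaussian log-density of z under each component k
        log_comp = -0.5 * (math.log(2 * math.pi) + logvar
                           + (z - mean) ** 2 / logvar.exp()).sum(-1)  # (batch, K)
        # Stable log of the uniform mixture: logsumexp over k, minus log K
        return torch.logsumexp(log_comp, dim=1) - math.log(log_comp.shape[1])

In the VAE objective, this log_prob would replace the standard-normal log-prior in the KL term; because the pseudo-inputs are parameters, the prior can adapt toward the aggregated posterior during training, which is the motivation behind the VampPrior.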


