Can VAEs Generate Novel Examples?

12/22/2018
by Alican Bozkurt, et al.

An implicit goal in work on deep generative models is that such models should be able to generate novel examples that were not seen in the training data. In this paper, we investigate to what extent this property holds for widely employed variational autoencoder (VAE) architectures. VAEs maximize a lower bound on the log marginal likelihood, which implies that they will in principle overfit the training data when provided with a sufficiently expressive decoder. In the limit of an infinite-capacity decoder, the optimal generative model is a uniform mixture over the training data. More generally, an optimal decoder should output a weighted average over the examples in the training data, where the magnitude of the weights is determined by proximity in the latent space. This leads to the hypothesis that, for a sufficiently high-capacity encoder and decoder, the VAE decoder will perform nearest-neighbor matching according to the coordinates in the latent space. To test this hypothesis, we investigate generalization on the MNIST dataset. We consider both generalization to new examples of previously seen classes, and generalization to classes that were withheld from the training set. In both cases, we find that reconstructions are closely approximated by nearest neighbors for higher-dimensional parameterizations. When generalizing to unseen classes, however, lower-dimensional parameterizations offer a clear advantage.
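The hypothesized nearest-neighbor behavior can be sketched as a simple baseline: if a high-capacity decoder effectively memorizes the training set, then decoding a latent code should resemble the training example whose latent code is closest. The sketch below (in NumPy, with synthetic data; the function name, shapes, and latent dimension are illustrative assumptions, not the paper's actual setup) shows that baseline.

```python
import numpy as np

def nn_reconstruction(z_query, Z_train, X_train):
    """Return the training image whose latent code is closest to z_query.

    This is the nearest-neighbor baseline the hypothesis predicts a
    high-capacity VAE decoder will approximate: decoding z_query should
    look like the training example nearest to it in latent space.
    """
    # Euclidean distances from the query to every training latent code.
    dists = np.linalg.norm(Z_train - z_query, axis=1)
    return X_train[np.argmin(dists)]

# Toy illustration with synthetic "latents" and flattened "images".
rng = np.random.default_rng(0)
Z_train = rng.normal(size=(100, 8))       # 100 training latents, 8-dim latent space
X_train = rng.normal(size=(100, 28 * 28)) # matching flattened 28x28 images

# A query latent slightly perturbed from training example 42 should
# retrieve that same training image.
z = Z_train[42] + 0.01 * rng.normal(size=8)
x_nn = nn_reconstruction(z, Z_train, X_train)
assert np.allclose(x_nn, X_train[42])
```

Comparing a trained decoder's output at `z` against `x_nn` (e.g. by pixel-wise distance) is one way to quantify how closely reconstructions track this nearest-neighbor retrieval.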

Related research:

- 11/15/2017, Zero-Shot Learning via Class-Conditioned Deep Generative Models: We present a deep generative model for learning to predict classes not s...
- 02/04/2022, Robust Vector Quantized-Variational Autoencoder: Image generative models can learn the distributions of the training data...
- 04/09/2020, Exemplar VAEs for Exemplar based Generation and Data Augmentation: This paper presents a framework for exemplar based generative modeling, ...
- 02/15/2018, Quantum Variational Autoencoder: Variational autoencoders (VAEs) are powerful generative models with the ...
- 06/05/2018, Training Generative Reversible Networks: Generative models with an encoding component such as autoencoders curren...
- 04/24/2019, Generated Loss and Augmented Training of MNIST VAE: The variational autoencoder (VAE) framework is a popular option for trai...
- 11/11/2019, Evaluating Combinatorial Generalization in Variational Autoencoders: We evaluate the ability of variational autoencoders to generalize to uns...
