The deep generative decoder: Using MAP estimates of representations

by Viktoria Schuster et al.

A deep generative model is characterized by a representation space, its distribution, and a neural network mapping the representation to a distribution over vectors in feature space. Common methods such as variational autoencoders (VAEs) apply variational inference for training the neural network, but optimizing these models is often non-trivial. The encoder adds to the complexity of the model and introduces an amortization gap, and the quality of the variational approximation is usually unknown. Additionally, the balance of the loss terms of the objective function heavily influences performance. Therefore, we argue that it is worthwhile to investigate a much simpler approximation which finds representations and their distribution by maximizing the model likelihood via back-propagation. In this approach there is no encoder, and we therefore call it a Deep Generative Decoder (DGD). Using the CIFAR10 data set, we show that the DGD is easier and faster to optimize than the VAE, achieves more consistently low reconstruction errors on test data, and alleviates the problem of balancing the reconstruction and distribution loss terms. Although the model in its simple form cannot compete with state-of-the-art image generation approaches, it obtains better image generation scores than the variational approach on the CIFAR10 data. We demonstrate on MNIST data how the use of a Gaussian mixture with priors can lead to a clear separation of classes in a 2D representation space, and how the DGD can be used with labels to obtain a supervised representation.
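The core idea of the abstract can be sketched in a few lines: instead of an encoder, each data point gets its own free representation, and both the representations and the decoder parameters are fit by maximizing the model likelihood (equivalently, minimizing reconstruction error plus a prior term) via gradient descent. The sketch below is a minimal, hedged illustration, not the paper's implementation: it assumes a linear decoder, a standard-Gaussian prior on the representations (the paper uses a neural-network decoder and, in one experiment, a Gaussian mixture prior), and synthetic data; all variable names are our own.

```python
import numpy as np

# Minimal DGD-style sketch, assuming a linear decoder and a standard
# Gaussian prior on the representations. Each sample x_i has its own
# free latent z_i (a MAP estimate); Z and the decoder weights W are
# optimized jointly by gradient descent -- no encoder is involved.

rng = np.random.default_rng(0)
N, d_z, d_x = 100, 2, 5                    # samples, latent dim, feature dim

# Synthetic data from a linear generative process plus small noise.
W_true = rng.normal(size=(d_z, d_x))
X = rng.normal(size=(N, d_z)) @ W_true + 0.01 * rng.normal(size=(N, d_x))

Z = rng.normal(size=(N, d_z))              # free per-sample representations
W = rng.normal(size=(d_z, d_x))            # decoder parameters
lr, lam = 0.01, 0.1                        # step size; prior strength

for step in range(2000):
    R = Z @ W - X                          # reconstruction residuals
    grad_W = Z.T @ R / N                   # gradient of mean squared error
    grad_Z = R @ W.T + lam * Z             # per-sample data term + prior term
    W -= lr * grad_W
    Z -= lr * grad_Z

mse = np.mean((Z @ W - X) ** 2)            # reconstruction error after fitting
```

With a neural-network decoder the structure is the same: the latents are stored as a learnable parameter table indexed by sample, and back-propagation updates them alongside the network weights, which is why there is no amortization gap to worry about.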


Deep Quantization: Encoding Convolutional Activations with Deep Generative Model

Deep convolutional neural networks (CNNs) have proven highly effective f...

Batch norm with entropic regularization turns deterministic autoencoders into generative models

The variational autoencoder is a well defined deep generative model that...

Unsupervised Learning of Global Factors in Deep Generative Models

We present a novel deep generative model based on non i.i.d. variational...

Learning to Generate with Memory

Memory units have been widely used to enrich the capabilities of deep ne...

dpVAEs: Fixing Sample Generation for Regularized VAEs

Unsupervised representation learning via generative modeling is a staple...

Supervising the Decoder of Variational Autoencoders to Improve Scientific Utility

Probabilistic generative models are attractive for scientific modeling b...

Disentangling Space and Time in Video with Hierarchical Variational Auto-encoders

There are many forms of feature information present in video data. Princ...