
The deep generative decoder: Using MAP estimates of representations

10/13/2021
by Viktoria Schuster, et al., Københavns Universitet (University of Copenhagen)

A deep generative model is characterized by a representation space, its distribution, and a neural network mapping the representation to a distribution over vectors in feature space. Common methods such as variational autoencoders (VAEs) apply variational inference to train the neural network, but optimizing these models is often non-trivial. The encoder adds to the complexity of the model, introduces an amortization gap, and the quality of the variational approximation is usually unknown. Additionally, the balance of the loss terms in the objective function heavily influences performance. Therefore, we argue that it is worthwhile to investigate a much simpler approximation, which finds representations and their distribution by maximizing the model likelihood via back-propagation. In this approach there is no encoder, and we therefore call it a Deep Generative Decoder (DGD). Using the CIFAR10 data set, we show that the DGD is easier and faster to optimize than the VAE, achieves consistently low reconstruction errors on test data, and alleviates the problem of balancing the reconstruction and distribution loss terms. Although the model in its simple form cannot compete with state-of-the-art image generation approaches, it obtains better image generation scores than the variational approach on the CIFAR10 data. We demonstrate on MNIST data how the use of a Gaussian mixture with priors can lead to a clear separation of classes in a 2D representation space, and how the DGD can be used with labels to obtain a supervised representation.
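The core idea above, optimizing per-sample representations directly by back-propagation instead of amortizing them through an encoder, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes PyTorch, uses mean squared error as a stand-in for the model likelihood, and a simple standard Gaussian prior on the representations instead of the paper's Gaussian mixture. All dimensions and names (`rep_dim`, `decoder`, `z`) are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, rep_dim, feat_dim = 100, 2, 64

# Decoder maps representation space -> feature space (no encoder exists).
decoder = nn.Sequential(
    nn.Linear(rep_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
)

# One free, learnable representation vector per training sample.
z = nn.Parameter(0.01 * torch.randn(n_samples, rep_dim))

data = torch.rand(n_samples, feat_dim)  # placeholder training data

# Representations and decoder weights are optimized jointly.
opt = torch.optim.Adam([{"params": decoder.parameters()}, {"params": [z]}], lr=1e-2)

def loss_fn():
    recon = decoder(z)
    rec_loss = ((recon - data) ** 2).mean()   # reconstruction (likelihood) term
    prior_loss = 0.5 * (z ** 2).mean()        # Gaussian prior on z (paper: Gaussian mixture)
    return rec_loss + prior_loss

initial_loss = loss_fn().item()
for step in range(300):
    opt.zero_grad()
    loss = loss_fn()
    loss.backward()
    opt.step()
final_loss = loss_fn().item()
```

At test time, the same procedure is applied with the decoder frozen: new representations are initialized and optimized by back-propagation to maximize the likelihood of the test data under the trained decoder.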

