Learning the Base Distribution in Implicit Generative Models

03/12/2018
by Cem Subakan, et al.

Popular generative model learning methods such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) constrain the latent representation to follow a simple distribution, such as an isotropic Gaussian. In this paper, we argue that learning a complicated distribution over the latent space of an autoencoder enables more accurate modeling of complicated data distributions. Based on this observation, we propose a two-stage optimization procedure which maximizes an approximate implicit density model. We experimentally verify that our method outperforms GANs and VAEs on two image datasets (MNIST, CelebA). We also show that our approach is amenable to learning generative models for sequential data, by learning to generate speech and music.
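As a rough illustration of the two-stage idea sketched in the abstract (and not the authors' actual model), the snippet below first trains a plain autoencoder by reconstruction, then fits a separate density model over the resulting latent codes and samples from it to generate new data. The Gaussian mixture, network sizes, mixture component count, and placeholder data are all assumptions made for this sketch.

```python
# Minimal sketch of a two-stage procedure: (1) train an autoencoder,
# (2) learn a base distribution over its latent codes (here a Gaussian
# mixture as a stand-in), then sample from it and decode.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

latent_dim = 8
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(512, 784)  # placeholder data; substitute real MNIST batches

# Stage 1: train the autoencoder with a reconstruction loss.
for _ in range(100):
    opt.zero_grad()
    recon = decoder(encoder(x))
    loss = nn.functional.binary_cross_entropy(recon, x)
    loss.backward()
    opt.step()

# Stage 2: fit a richer base distribution over the learned latent codes.
with torch.no_grad():
    codes = encoder(x).numpy()
gmm = GaussianMixture(n_components=10).fit(codes)

# Generation: sample from the learned base distribution, then decode.
z_new, _ = gmm.sample(16)
samples = decoder(torch.as_tensor(z_new, dtype=torch.float32))
```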
