Generator Reversal

07/28/2017
by Yannic Kilcher, et al.

We consider the problem of training generative models with deep neural networks as generators, i.e., mapping latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we propose instead to use more flexible code distributions. These distributions are estimated non-parametrically by reversing the generator map during training. The benefits include more powerful generative models, better modeling of latent structure, and explicit control of the degree of generalization.
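Below is a minimal sketch, not the authors' implementation, of what "reversing the generator map" could look like in practice: per-example latent codes are recovered by gradient descent on a reconstruction loss, and a simple kernel-style resampling of the recovered codes stands in for a non-parametric code distribution. The generator architecture, dimensions, optimizer settings, and the helpers `reverse_generator` and `sample_prior` are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of generator reversal (assumptions noted above, not the paper's code).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 64

# A simple deterministic generator mapping latent codes to data points.
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

def reverse_generator(x, steps=200, lr=1e-1):
    """Recover latent codes z such that G(z) approximately reconstructs x."""
    z = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()

# Toy data batch standing in for real training examples.
x = torch.randn(32, data_dim)
z_hat = reverse_generator(x)

def sample_prior(n, bandwidth=0.1):
    """Crude kernel-density-style code distribution: resample recovered codes
    and perturb them with a small Gaussian kernel."""
    idx = torch.randint(0, z_hat.size(0), (n,))
    return z_hat[idx] + bandwidth * torch.randn(n, latent_dim)

# Draw new samples from the generator under the learned code distribution.
new_x = G(sample_prior(16))
```

In a full training loop one would interleave this inversion step with generator updates, so the code distribution tracks the codes that actually reconstruct the training data; the sketch above only shows a single reversal pass.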


