New Losses for Generative Adversarial Learning

07/03/2018
by Victor Berger et al.

Generative Adversarial Networks (Goodfellow et al., 2014), a major breakthrough in the field of generative modeling, learn a discriminator to estimate a distance between the target distribution and the candidate distribution. This paper examines mathematical issues in how the gradients for the generative model are computed in this context, and notably how to account for the fact that the discriminator itself depends on the generator parameters. A unifying methodology is presented for defining mathematically sound training objectives for generative models that take this dependency into account in a robust way, covering GANs, VAEs, and several GAN variants as particular cases.
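
As a concrete illustration of the gradient issue the abstract refers to, below is a minimal sketch of the standard alternating GAN update in PyTorch. It is not the paper's proposed method; the network sizes, optimizers, and toy one-dimensional target distribution are arbitrary choices for illustration. In this common scheme, the generator gradient is computed with the discriminator treated as a fixed function, even though the discriminator's parameters were themselves fitted against the current generator; that ignored dependency is the one the paper proposes to handle.

```python
# Minimal alternating GAN training sketch (illustrative only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # toy target distribution
    fake = G(torch.randn(64, 8))               # candidate distribution

    # Discriminator step: D is fitted to separate real from generated samples,
    # so its parameters implicitly depend on the current generator.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: gradients flow only through D's forward pass; D's
    # parameters are held fixed, i.e. their dependence on G is not
    # differentiated through.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```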

Related research

02/21/2018
A Study into the similarity in generator and discriminator in GAN architecture
One popular generative model that has high-quality results is the Genera...

11/20/2017
Bidirectional Conditional Generative Adversarial Networks
Conditional variants of Generative Adversarial Networks (GANs), known as...

01/12/2018
Comparative Study on Generative Adversarial Networks
In recent years, there have been tremendous advancements in the field of...

11/14/2016
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
We propose a method to optimize the representation and distinguishabilit...

05/23/2019
PHom-GeM: Persistent Homology for Generative Models
Generative neural network models, including Generative Adversarial Netwo...

10/30/2017
Tensorizing Generative Adversarial Nets
Generative Adversarial Network (GAN) and its variants demonstrate state-...

05/08/2017
Geometric GAN
Generative Adversarial Nets (GANs) represent an important milestone for ...
