Variance Regularizing Adversarial Learning

07/02/2017
by Karan Grewal, et al.

We introduce a novel approach for training adversarial models by replacing the discriminator score with a bi-modal Gaussian distribution over the real/fake indicator variables. To do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We hypothesize that this approach ensures a non-zero gradient to the generator, even in the limit of a perfect classifier. We evaluate our method on standard benchmark image datasets and show that the classifier's output distribution is smooth, with overlap between the real and fake modes.
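To make the idea concrete, here is a minimal toy sketch of the target described above: discriminator scores for real and fake samples are each meant to follow a Gaussian mode (e.g. centered at +1 and -1), forming a bi-modal target over the real/fake indicator. The function below is an illustrative moment-matching stand-in, not the paper's meta-adversarial training procedure; the mode centers `mu_real`, `mu_fake` and shared variance `sigma2` are assumed values chosen for illustration.

```python
import numpy as np

def score_distribution_loss(real_scores, fake_scores,
                            mu_real=1.0, mu_fake=-1.0, sigma2=0.25):
    """Toy penalty pushing the empirical discriminator-score distribution
    toward a bi-modal Gaussian target:
        real scores ~ N(mu_real, sigma2), fake scores ~ N(mu_fake, sigma2).
    Keeping the per-mode variance near sigma2 (rather than zero) keeps the
    two modes overlapping, so the generator's gradient does not vanish even
    when the classifier separates real from fake well.
    NOTE: hypothetical illustration, not the method from the paper."""
    loss = 0.0
    for scores, mu in ((np.asarray(real_scores), mu_real),
                       (np.asarray(fake_scores), mu_fake)):
        m, v = scores.mean(), scores.var()
        # Penalize deviation of both the mean and the variance of the
        # empirical score distribution from the target mode's moments.
        loss += (m - mu) ** 2 + (v - sigma2) ** 2
    return loss
```

A confident classifier that collapses all scores onto the two mode centers (zero variance) is penalized here, whereas scores spread with the target variance around each center are not; this is the sense in which the variance term regularizes the adversarial game.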

