Variance Regularizing Adversarial Learning

07/02/2017
by Karan Grewal, et al.

We introduce a novel approach to training adversarial models that replaces the discriminator score with a bi-modal Gaussian distribution over the real/fake indicator variables. To do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We hypothesize that this approach guarantees a non-zero gradient for the generator, even in the limit of a perfect classifier. We evaluate our method on standard benchmark image datasets and show that the classifier's output distribution is smooth and has overlap between the real and fake modes.
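The core intuition can be illustrated with a toy discriminator objective. The sketch below is a hypothetical moment-matching variant, not the paper's actual meta-adversarial procedure: it penalizes the empirical mean and standard deviation of the discriminator's scores for deviating from two target Gaussian modes, one for real samples and one for fakes. All target parameters (`mu_real`, `mu_fake`, `sigma`) are illustrative assumptions.

```python
from statistics import mean, pstdev

def bimodal_moment_loss(real_scores, fake_scores,
                        mu_real=1.0, mu_fake=-1.0, sigma=0.5):
    """Toy loss: push the empirical moments of the real/fake score
    distributions toward two target Gaussian modes,
    N(mu_real, sigma^2) and N(mu_fake, sigma^2).

    Because a nonzero target variance is enforced, a discriminator that
    collapses all scores to a single point per class is still penalized,
    so the generator keeps receiving a gradient even when the two modes
    are fully separated.
    """
    return ((mean(real_scores) - mu_real) ** 2
            + (mean(fake_scores) - mu_fake) ** 2
            + (pstdev(real_scores) - sigma) ** 2
            + (pstdev(fake_scores) - sigma) ** 2)

# Scores whose moments exactly match the targets incur zero loss:
print(bimodal_moment_loss([1.5, 0.5], [-1.5, -0.5]))  # → 0.0
# A zero-variance ("perfect" but collapsed) classifier is penalized:
print(bimodal_moment_loss([1.0, 1.0], [-1.0, -1.0]))  # → 0.5
```

Note the contrast with a standard binary cross-entropy discriminator, whose gradient to the generator can vanish once the classifier becomes perfect; here the variance terms keep the objective away from that degenerate optimum.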


Related research

- Domain Partitioning Network (02/21/2019): Standard adversarial training involves two agents, namely a generator an...
- Boundary-Seeking Generative Adversarial Networks (02/27/2017): We introduce a novel approach to training generative adversarial network...
- Semi-supervised Learning on Graphs with Generative Adversarial Nets (09/01/2018): We investigate how generative adversarial nets (GANs) can help semi-supe...
- Diversity Regularized Adversarial Learning (01/30/2019): The two key players in Generative Adversarial Networks (GANs), the discr...
- Integrating Information Theory and Adversarial Learning for Cross-modal Retrieval (04/11/2021): Accurately matching visual and textual data in cross-modal retrieval has...
- Adversarial Training for Code Retrieval with Question-Description Relevance Regularization (10/19/2020): Code retrieval is a key task aiming to match natural and programming lan...
- What makes fake images detectable? Understanding properties that generalize (08/24/2020): The quality of image generation and manipulation is reaching impressive ...
