The relativistic discriminator: a key element missing from standard GAN

In the standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for the a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator, which estimates the probability that the given real data is more realistic than randomly sampled fake data. We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher-quality data samples than their non-relativistic counterparts, 2) standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while requiring only a single discriminator update per generator update (reducing the time taken to reach the state of the art by 400%), and 3) RaGANs are able to generate plausible high-resolution images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.
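
To make the two relativistic formulations concrete, below is a minimal PyTorch-style sketch of the RSGAN and RaSGAN losses implied by the description above. It is an illustrative sketch rather than code from the paper: the function names and the critic outputs `c_real` and `c_fake` (raw, pre-sigmoid scores for a real and a fake mini-batch of the same size) are assumptions made here for illustration.

```python
import torch
import torch.nn.functional as F

def rsgan_losses(c_real, c_fake):
    """Relativistic standard GAN (RSGAN) losses (illustrative sketch).

    D(x_r, x_f) = sigmoid(C(x_r) - C(x_f)) is interpreted as the probability
    that a real sample is more realistic than a randomly paired fake sample.
    """
    ones = torch.ones_like(c_real)
    # Discriminator: push real scores above the paired fake scores.
    d_loss = F.binary_cross_entropy_with_logits(c_real - c_fake, ones)
    # Generator: push fake scores above the paired real scores.
    g_loss = F.binary_cross_entropy_with_logits(c_fake - c_real, ones)
    return d_loss, g_loss

def rasgan_losses(c_real, c_fake):
    """Relativistic average standard GAN (RaSGAN) losses (illustrative sketch).

    Each sample is compared against the mean critic output of the opposite
    class ("more realistic than fake data, on average") instead of a single
    randomly paired sample.
    """
    ones = torch.ones_like(c_real)
    zeros = torch.zeros_like(c_real)
    d_loss = (F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), ones)
              + F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), zeros)) / 2
    g_loss = (F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), ones)
              + F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), zeros)) / 2
    return d_loss, g_loss
```

In a standard training loop, `d_loss` would be minimized with respect to the discriminator (with fake samples detached from the generator) and `g_loss` with respect to the generator; the abstract's claim about needing only one discriminator update per generator update refers to this kind of alternating scheme.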
