A Convex Duality Framework for GANs

10/28/2018
by Farzan Farnia, et al.

A generative adversarial network (GAN) is a minimax game between a generator mimicking the true model and a discriminator distinguishing the samples produced by the generator from the real training samples. Given an unconstrained discriminator able to approximate any function, this game reduces to finding the generative model that minimizes a divergence measure, e.g. the Jensen-Shannon (JS) divergence, to the data distribution. In practice, however, the discriminator is constrained to a smaller class F, such as neural nets. A natural question is then how the divergence-minimization interpretation changes as we constrain F. In this work, we address this question by developing a convex duality framework for analyzing GANs. For a convex set F, this duality framework interprets the original GAN formulation as finding the generative model with minimum JS-divergence to the distributions penalized to match the moments of the data distribution, with the moments specified by the discriminators in F. We show that this interpretation holds more generally for f-GAN and Wasserstein GAN. As a byproduct, we apply the duality framework to a hybrid of f-divergence and Wasserstein distance. Unlike the f-divergence, we prove that the proposed hybrid divergence changes continuously with the generative model, which suggests regularizing the discriminator's Lipschitz constant in f-GAN and vanilla GAN. We numerically evaluate the power of the suggested regularization schemes in improving GAN training performance.
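As a quick numerical illustration of the unconstrained-discriminator case mentioned above (a sketch, not code from the paper): for the vanilla GAN game, the optimal discriminator against densities p and q is D*(x) = p(x)/(p(x) + q(x)), and the resulting game value equals 2·JS(p, q) − log 4. The snippet below checks this identity for two small discrete distributions.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between discrete distributions (natural log)."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def gan_value_optimal_disc(p, q):
    """Inner maximization of the vanilla GAN objective, solved in closed form.

    With D*(x) = p(x) / (p(x) + q(x)), the value is
    V(D*, G) = E_p[log D*(x)] + E_q[log(1 - D*(x))].
    """
    d = p / (p + q)
    return np.sum(p * np.log(d)) + np.sum(q * np.log(1.0 - d))

# Two strictly positive toy distributions over three outcomes
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])

lhs = gan_value_optimal_disc(p, q)
rhs = 2.0 * js_divergence(p, q) - np.log(4.0)
print(lhs, rhs)  # the two values agree: V(D*, G) = 2 JS(p, q) - log 4
```

Constraining the discriminator to a class F, as the paper studies, replaces this exact JS-divergence value with the penalized-moment-matching interpretation described in the abstract.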


Related research

03/29/2018 · Generative Modeling using the Sliced Wasserstein Distance
Generative Adversarial Nets (GANs) are very successful at modeling distr...

10/09/2019 · How Well Do WGANs Estimate the Wasserstein Metric?
Generative modelling is often cast as minimizing a similarity measure be...

12/12/2020 · On Duality Gap as a Measure for Monitoring GAN Training
Generative adversarial network (GAN) is among the most popular deep lear...

11/18/2018 · GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint
We know SGAN may have a risk of gradient vanishing. A significant improv...

11/06/2017 · KGAN: How to Break The Minimax Game in GAN
Generative Adversarial Networks (GANs) were intuitively and attractively...

01/27/2019 · Deconstructing Generative Adversarial Networks
We deconstruct the performance of GANs into three components: 1. Formu...

11/07/2017 · On the Discrimination-Generalization Tradeoff in GANs
Generative adversarial training can be generally understood as minimizin...
