Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities

01/23/2017
by   Guo-Jun Qi, et al.

*New Theory Result* We analyze the generalizability of the LS-GAN, showing that the loss function and generator trained over finite examples can converge to those learned from the real distributions with a moderate number of training examples. In this paper, we present a novel Loss-Sensitive GAN (LS-GAN) that learns a loss function to separate generated samples from their real examples. An important property of the LS-GAN is that it allows the generator to focus on improving poor data points that are far from real examples rather than wasting effort on samples that have already been well generated, which improves the overall quality of generated samples. The theoretical analysis also shows that the LS-GAN can generate samples following the true data density. In particular, we present a regularity condition on the underlying data density, which allows us to use a class of Lipschitz losses and generators to model the LS-GAN. It relaxes the assumption that the classic GAN should have infinite modeling capacity to obtain a similar theoretical guarantee. Furthermore, we show the generalization ability of the LS-GAN by bounding the difference between the model performances over the empirical and real distributions, as well as deriving a tractable sample complexity to train the LS-GAN model in terms of its generalization ability. We also derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN, both of which are cone-shaped and have non-vanishing gradient almost everywhere.
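To make the "loss-sensitive" idea concrete, the sketch below implements the objective the abstract describes as a minimal numpy function: the learned loss L should be small on real examples and larger on generated ones by a data-dependent margin Delta(x, G(z)), so samples that are already close to real data incur little penalty while poor samples drive the training signal. This is an illustrative sketch under our own simplifications (per-sample loss values and margins are passed in as arrays; the function and parameter names are ours, not from the paper's code).

```python
import numpy as np

def lsgan_critic_objective(loss_real, loss_fake, margin, lam=1.0):
    """Loss-sensitive objective for the learned loss function L.

    Minimizes  E[L(x)] + lam * E[(Delta(x, G(z)) + L(x) - L(G(z)))_+],
    where the hinge demands L(G(z)) exceed L(x) by a margin that grows
    with how far a generated sample is from the real one. Well-generated
    samples (small margin) contribute little, so effort concentrates on
    poor samples -- the "loss-sensitive" property in the abstract.
    """
    hinge = np.maximum(0.0, margin + loss_real - loss_fake)
    return loss_real.mean() + lam * hinge.mean()

def lsgan_generator_objective(loss_fake):
    """The generator simply minimizes the learned loss on its samples."""
    return loss_fake.mean()

# Toy illustration with hypothetical per-sample values:
# a fake sample whose loss is far below the margin-shifted real loss
# is penalized; one already above it is not.
loss_real = np.array([1.0, 1.0])
loss_fake = np.array([0.5, 4.0])
margin = np.array([2.0, 2.0])
critic_obj = lsgan_critic_objective(loss_real, loss_fake, margin)
```

In the full model the Lipschitz regularity condition from the paper would additionally constrain L (e.g. by bounding its gradients), which is what keeps the learned loss cone-shaped with non-vanishing gradient; that constraint is omitted here for brevity.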

