Generalization of GANs under Lipschitz continuity and data augmentation

04/06/2021
by Khoat Than, et al.

Generative adversarial networks (GANs) are widely used across applications. Yet GANs are notoriously complex, and little is known about their generalization. In this paper, we give a comprehensive analysis of the generalization of GANs. We decompose the generalization error into an explicit sum: generator error + discriminator error + optimization error. The first two errors reflect the capacity of the players' families and are irreducible and optimizer-independent. We then provide both uniform and non-uniform generalization bounds in different scenarios, thanks to a new bridge between Lipschitz continuity and generalization. Our bounds overcome some major limitations of existing ones. In particular, they show that penalizing the zero- and first-order information of the GAN loss improves generalization, answering the long-standing mystery of why imposing a Lipschitz constraint helps GANs perform better in practice. Finally, we show that data augmentation penalizes the zero- and first-order information of the loss, helping the players generalize better and thus explaining the highly successful use of data augmentation for GANs.
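The stated decomposition can be written schematically as follows; the notation here is illustrative and not taken from the paper, whose exact statement and choice of metric may differ:

```latex
% Illustrative schematic of the stated decomposition; the paper's exact
% statement, metric d, and notation may differ.
\[
\underbrace{d\big(\mu_{\mathrm{data}},\, \mu_{\hat G}\big)}_{\text{generalization error}}
\;\le\;
\underbrace{\inf_{G \in \mathcal{G}} d\big(\mu_{\mathrm{data}},\, \mu_G\big)}_{\text{generator error}}
\;+\;
\underbrace{\varepsilon_{\mathcal{D}}}_{\text{discriminator error}}
\;+\;
\underbrace{\varepsilon_{\mathrm{opt}}}_{\text{optimization error}}
\]
```

In practice, "penalizing the first-order information of the loss" is typically realized with a gradient penalty in the style of WGAN-GP; the paper analyzes why such penalties help generalization rather than introducing the technique. The sketch below is the standard penalty, not the paper's contribution, and assumes a PyTorch discriminator that maps image batches to scalar scores:

```python
import torch

def gradient_penalty(discriminator, real, fake):
    """WGAN-GP-style penalty: push ||grad_x D(x)|| toward 1 on samples
    interpolated between real and generated batches (shape: B x C x H x W)."""
    batch_size = real.size(0)
    alpha = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interpolates)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is trainable
    )[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

Likewise, the data-augmentation recipes that work well for GANs (e.g., DiffAugment-style pipelines) apply the same differentiable transform to both real and generated samples before the discriminator sees them, so both players are regularized. A minimal sketch, with `rand_brightness` as a hypothetical stand-in for any differentiable augmentation:

```python
import torch

def rand_brightness(x):
    # Hypothetical differentiable augmentation: shift each image's brightness
    # by a random offset; gradients flow through the addition.
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

# In a training loop, real and fake batches would be augmented identically:
#   d_real = discriminator(rand_brightness(real_images))
#   d_fake = discriminator(rand_brightness(generator(noise)))
# Because the generator's loss also flows through the augmentation, it is
# trained on augmented samples rather than memorizing the raw data.
```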


Related research

- Towards Efficient and Unbiased Implementation of Lipschitz Continuity in GANs (04/02/2019)
- Do GAN Loss Functions Really Matter? (11/23/2018)
- Manifold-preserved GANs (09/18/2021)
- On Predicting Generalization using GANs (11/28/2021)
- Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect (03/05/2018)
- About the regularity of the discriminator in conditional WGANs (03/25/2021)
- Towards Addressing GAN Training Instabilities: Dual-objective GANs with Tunable Parameters (02/28/2023)
