Towards a Better Global Loss Landscape of GANs

11/10/2020
by Ruoyu Sun, et al.

Understanding of GAN training is still very limited. One major challenge is its non-convex-non-concave min-max objective, which may lead to sub-optimal local minima. In this work, we perform a global landscape analysis of the empirical loss of GANs. We prove that a class of separable GANs, including the original JS-GAN, has exponentially many bad basins, which are perceived as mode collapse. We also study the relativistic pairing GAN (RpGAN) loss, which couples the generated samples and the true samples, and prove that RpGAN has no bad basins. Experiments on synthetic data show that the predicted bad basins can indeed appear in training. We also perform experiments supporting our theory that RpGAN has a better landscape than separable GANs: for instance, we empirically show that RpGAN performs better than separable GANs with relatively narrow neural nets. The code is available at https://github.com/AilsaF/RS-GAN.
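To make the contrast between the two losses concrete, here is a minimal sketch (not the repository's code) of the separable JS-GAN discriminator loss versus the relativistic pairing (RpGAN) loss in PyTorch. The names `d_real` and `d_fake` are illustrative placeholders for the discriminator's scalar logits on true and generated samples; in RpGAN each real sample is coupled with a generated sample through a logit difference, rather than scored in isolation.

```python
import torch
import torch.nn.functional as F

def separable_gan_d_loss(d_real, d_fake):
    """Separable (JS-GAN) discriminator loss: each term depends on real
    or fake logits alone. Uses softplus(-t) == -log(sigmoid(t))."""
    return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

def rpgan_d_loss(d_real, d_fake):
    """Relativistic pairing (RpGAN) discriminator loss: real and fake
    samples are coupled through the difference of their logits."""
    return F.softplus(-(d_real - d_fake)).mean()

# Toy usage with stand-in logits; a real setup would use D(x) and D(G(z)).
d_real = torch.randn(8)  # discriminator logits on true samples
d_fake = torch.randn(8)  # discriminator logits on generated samples
print(separable_gan_d_loss(d_real, d_fake).item())
print(rpgan_d_loss(d_real, d_fake).item())
```

The pairing here is done per batch element, which is one simple way to couple the two sample streams; the key structural point is that the RpGAN loss cannot be split into a sum of a real-only term and a fake-only term.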


