When can Wasserstein GANs minimize Wasserstein Distance?

03/09/2020
by Yuanzhi Li, et al.

Generative Adversarial Networks (GANs) are widely used models for learning complex real-world distributions. In GANs, the training of the generator usually stops when the discriminator can no longer distinguish the generator's output from the set of training examples. A central question is whether, when training stops, the generated distribution is actually close to the target distribution. Prior work found that such closeness can be achieved only under a strict capacity trade-off between the generator and the discriminator: neither model can be much more powerful than the other. In this paper, we establish one of the first theoretical results explaining this trade-off. We show that when the generator is a class of two-layer neural networks, it is necessary and sufficient for the discriminator to be a one-layer network with ReLU-type activation functions. Under this trade-off, using polynomially many training examples, the generator at the end of training outputs a distribution that is inverse-polynomially close to the target. Our result also sheds light on how GAN training can find such a generator efficiently.
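To make the capacity trade-off concrete, below is a minimal PyTorch sketch of the regime the abstract describes: a two-layer generator paired with a one-layer ReLU discriminator, trained with the WGAN objective. The dimensions, the use of nn.Sequential, and the loss details are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

# A minimal sketch, assuming a standard WGAN setup; all dimensions
# below are illustrative placeholders, not taken from the paper.
latent_dim, data_dim, width = 16, 32, 64

# Generator: a two-layer ReLU network (one hidden layer), matching the
# generator class in the abstract's capacity regime.
generator = nn.Sequential(
    nn.Linear(latent_dim, width),
    nn.ReLU(),
    nn.Linear(width, data_dim),
)

# Discriminator: a one-layer network with a ReLU-type activation,
# producing a scalar score per sample.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 1),
    nn.ReLU(),
)

def discriminator_loss(real: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # The critic maximizes E[D(real)] - E[D(fake)]; we return the negation
    # so a gradient step minimizes it. (The Lipschitz constraint, e.g.
    # weight clipping, is omitted for brevity.)
    fake = generator(z).detach()
    return discriminator(fake).mean() - discriminator(real).mean()

def generator_loss(z: torch.Tensor) -> torch.Tensor:
    # The generator minimizes -E[D(fake)]. In the training regime the
    # abstract describes, training stops when the critic can no longer
    # separate real from generated samples, i.e. the score gap in
    # discriminator_loss is near zero.
    return -discriminator(generator(z)).mean()
```

The one-layer critic here is the point of the sketch: the paper's claim is that, for two-layer generators, this restricted discriminator class is both necessary and sufficient for the generated distribution to end up close to the target.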


