Approximability of Discriminators Implies Diversity in GANs

06/27/2018
by Yu Bai et al.

While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from a lack of diversity, or mode collapse. The theoretical work of Arora et al. (2017) suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL divergence in many cases) with polynomial sample complexity, provided the discriminator class has strong distinguishing power against the particular generator class (rather than against all possible generators). For various generator classes, such as mixtures of Gaussians, exponential families, and invertible neural network generators, we design corresponding discriminators (often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators provably approximates the Wasserstein distance and/or the KL divergence. This implies that if training succeeds, the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with the KL divergence, indicating that the observed lack of diversity may be caused by sub-optimality in optimization rather than statistical inefficiency.
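For reference, the Integral Probability Metric induced by a discriminator class F is the standard quantity below; when F is the class of all 1-Lipschitz functions, it recovers the Wasserstein-1 distance by Kantorovich–Rubinstein duality:

```latex
% IPM induced by a discriminator class \mathcal{F}
d_{\mathcal{F}}(p, q) \;=\; \sup_{f \in \mathcal{F}}
  \left| \mathop{\mathbb{E}}_{x \sim p}[f(x)]
       - \mathop{\mathbb{E}}_{x \sim q}[f(x)] \right|
```

As a concrete illustration (a minimal sketch under a simplifying assumption, not the paper's discriminator construction): if the discriminator class is restricted to norm-bounded linear functions, the supremum has a closed form, namely the Euclidean distance between the two sample means. The helper name below is hypothetical.

```python
import numpy as np

def linear_ipm(x_samples, y_samples):
    """Empirical IPM for F = { f(x) = <w, x> : ||w||_2 <= 1 }.

    For this class, sup_w <w, mean(x) - mean(y)> is attained at
    w = diff / ||diff||, so the IPM equals the norm of the mean gap.
    """
    diff = x_samples.mean(axis=0) - y_samples.mean(axis=0)
    return np.linalg.norm(diff)

# Toy usage: two Gaussians whose means differ by (0.5, 0.5).
rng = np.random.default_rng(0)
p = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))
q = rng.normal(loc=0.5, scale=1.0, size=(5000, 2))
print(linear_ipm(p, q))  # roughly ||(0.5, 0.5)|| ~ 0.707
```

Such a weak (linear) class already distinguishes distributions with different means, but it cannot detect mode collapse within a fixed mean; the paper's point is that discriminators tailored to the generator class can.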


