Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs

01/18/2022
by Sitan Chen, et al.

Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand to what extent GANs can actually learn the underlying distribution. Theoretical and empirical evidence suggests that local optimality of the empirical training objective is insufficient. Yet it does not rule out the possibility that achieving a true population minimax optimal solution might imply distribution learning. In this paper, we show that standard cryptographic assumptions imply that this stronger condition is still insufficient. Namely, we show that if local pseudorandom generators (PRGs) exist, then for a large family of natural continuous target distributions, there are ReLU network generators of constant depth and polynomial size which take Gaussian random seeds such that (i) the output is far in Wasserstein distance from the target distribution, but (ii) no polynomially large Lipschitz discriminator ReLU network can detect this. This implies that even achieving a population minimax optimal solution to the Wasserstein GAN objective is likely insufficient for distribution learning in the usual statistical sense. Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs.
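The abstract contrasts two notions of closeness: true Wasserstein distance to the target, and what a bounded Lipschitz discriminator can detect. As a minimal, hypothetical sketch (not from the paper), the snippet below computes the empirical Wasserstein-1 distance between one-dimensional samples, using the fact that for equal-size 1-D samples it equals the mean absolute difference of the sorted values; the helper name `wasserstein_1d` and the Gaussian test distributions are illustrative choices, not anything defined in the paper.

```python
import numpy as np

def wasserstein_1d(x, y):
    # Illustrative helper: for 1-D empirical distributions with the same
    # number of samples, W1 is the mean absolute difference after sorting.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=10_000)         # stand-in "true" distribution
generator_out = rng.normal(0.5, 1.0, size=10_000)  # generator output with shifted mean

print(wasserstein_1d(target, target[::-1]))   # 0.0: identical samples, reordered
print(wasserstein_1d(target, generator_out))  # roughly the mean shift of 0.5
```

The paper's point is that this gap can be large while every polynomially sized Lipschitz discriminator, which only sees expectations of the form E[f(X)] - E[f(G(Z))], fails to witness it.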


Related research

03/09/2020 · When can Wasserstein GANs minimize Wasserstein Distance?
Generative Adversarial Networks (GANs) are widely used models to learn c...

01/30/2023 · Adversarially Slicing Generative Networks: Discriminator Slices Feature for One-Dimensional Optimal Transport
Generative adversarial networks (GANs) learn a target probability distri...

03/21/2018 · Some Theoretical Properties of GANs
Generative Adversarial Networks (GANs) are a class of generative algorit...

06/27/2018 · Approximability of Discriminators Implies Diversity in GANs
While Generative Adversarial Networks (GANs) have empirically produced i...

03/18/2021 · Approximation for Probability Distributions by Wasserstein GAN
In this paper, we show that the approximation for distributions by Wasse...

01/27/2019 · Deconstructing Generative Adversarial Networks
We deconstruct the performance of GANs into three components: 1. Formu...

02/13/2019 · Rethinking Generative Coverage: A Pointwise Guaranteed Approach
All generative models have to combat missing modes. The conventional wis...
