Statistical Guarantees of Generative Adversarial Networks for Distribution Estimation

by Minshuo Chen et al.
Georgia Institute of Technology

Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning. Despite their remarkable empirical performance, theoretical understanding of the statistical properties of GANs remains limited. This paper provides statistical guarantees for GANs when estimating data distributions whose densities lie in a Hölder space. Our main result shows that, if the generator and discriminator network architectures are properly chosen (universally for all distributions with Hölder densities), GANs are consistent estimators of the data distribution under strong discrepancy metrics, such as the Wasserstein distance. To the best of our knowledge, this is the first statistical theory of GANs for Hölder densities. Compared with existing works, our theory requires minimal assumptions on the data distribution. Our generator and discriminator networks use general weight matrices and the non-invertible ReLU activation function, whereas many existing works apply only to invertible weight matrices and invertible activation functions. In our analysis, we decompose the estimation error into a statistical error and an approximation error via a new oracle inequality, which may be of independent interest.
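To make the discrepancy metric in the abstract concrete, the following is a minimal NumPy sketch (not the paper's method) of the empirical 1-Wasserstein distance between two one-dimensional samples. In one dimension, W1 between empirical measures with equal sample sizes reduces to the mean absolute difference of the sorted samples; the paper's setting is far more general, and the distribution parameters below are purely illustrative.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    In 1-D, W1 between empirical measures equals the mean absolute
    difference of the order statistics (sorted samples).
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "samples must have equal size"
    return float(np.mean(np.abs(x - y)))

# Illustrative use: compare samples from a "true" distribution with
# samples from a slightly shifted "estimated" distribution.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=5000)    # hypothetical data samples
model = rng.normal(0.1, 1.0, size=5000)   # hypothetical generator samples
print(wasserstein_1d(data, model))        # small value, near the mean shift 0.1
```

A consistent estimator in the paper's sense drives this discrepancy to zero as the sample size grows; the sketch only shows how the metric itself behaves on finite samples.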




