On some theoretical limitations of Generative Adversarial Networks

10/21/2021
by Benoît Oriol, et al.

Generative Adversarial Networks have become a core Machine Learning technique for generating unknown distributions from data samples. They have been used in a wide range of contexts without much attention to the possible theoretical limitations of these models. Indeed, because of the universal approximation properties of Neural Networks, it is commonly assumed that GANs can generate any probability distribution. Recently, researchers have begun to question this assumption, and this article follows that line of inquiry. We provide a new result, based on Extreme Value Theory, showing that GANs cannot generate heavy-tailed distributions. The full proof of this result is given.
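The intuition behind such results can be illustrated empirically: a standard GAN generator is a composition of Lipschitz maps (affine layers and ReLUs) applied to light-tailed latent noise, and a Lipschitz image of a Gaussian remains light-tailed. The sketch below (an illustration under these assumptions, not the article's proof; all names and layer sizes are hypothetical) compares the tail behavior of a toy ReLU generator fed with Gaussian noise against a genuinely heavy-tailed Pareto distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, weights):
    # Toy GAN-style generator: affine layers with ReLU activations,
    # i.e. a composition of Lipschitz maps, ending in a final affine layer.
    h = z
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return weights[-1] @ h

# Random weights for a 3-layer generator: latent dim 8 -> 16 -> 16 -> 1.
dims = [8, 16, 16, 1]
weights = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]

n = 100_000
z = rng.normal(size=(8, n))                 # light-tailed latent noise
gan_samples = generator(z, weights).ravel()

pareto_samples = rng.pareto(1.5, size=n)    # heavy-tailed reference

def tail_ratio(x):
    # Ratio of the sample maximum to the 99th percentile: it stays moderate
    # for light-tailed samples but blows up for heavy-tailed ones.
    x = np.abs(x)
    return x.max() / np.quantile(x, 0.99)

print(f"GAN-style output tail ratio: {tail_ratio(gan_samples):.1f}")
print(f"Pareto(1.5) tail ratio:      {tail_ratio(pareto_samples):.1f}")
```

However many layers are stacked, the generator's output inherits the Gaussian-like tails of the latent noise, which is the phenomenon the Extreme Value Theory argument makes precise.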


Related research

02/18/2023: Redes Generativas Adversarias (GAN) Fundamentos Teóricos y Aplicaciones (Generative Adversarial Networks: Theoretical Foundations and Applications)
Generative adversarial networks (GANs) are a method based on the trainin...

01/22/2021: Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions
Generative adversarial networks (GANs) are often billed as "universal di...

09/17/2020: ExGAN: Adversarial Generation of Extreme Samples
Mitigating the risk arising from extreme events is a fundamental goal wi...

09/11/2020: CounteRGAN: Generating Realistic Counterfactuals with Residual Generative Adversarial Nets
The prevalence of machine learning models in various industries has led ...

09/06/2018: GANs for generating EFT models
We initiate a way of generating models by the computer, satisfying both ...

02/10/2018: Plummer Autoencoders
Estimating the true density in high-dimensional feature spaces is a well...

01/13/2021: Sequential IoT Data Augmentation using Generative Adversarial Networks
Sequential data in industrial applications can be used to train and eval...
