Understanding GANs: the LQG Setting

10/30/2017
by Soheil Feizi, et al.

Generative Adversarial Networks (GANs) have become a popular method for learning a probability model from data. Many GAN architectures with different optimization metrics have been introduced recently. Instead of proposing yet another architecture, this paper aims to provide an understanding of some of the basic issues surrounding GANs. First, we propose a natural way of specifying the loss function for GANs by drawing a connection with supervised learning. Second, we shed light on the generalization performance of GANs through the analysis of a simple LQG setting: the generator is Linear, the loss function is Quadratic, and the data is drawn from a Gaussian distribution. We show that in this setting: 1) the optimal GAN solution converges to population Principal Component Analysis (PCA) as the number of training samples increases; 2) the number of samples required scales exponentially with the dimension of the data; 3) the number of samples scales almost linearly if the discriminator is constrained to be quadratic. Thus, linear generators and quadratic discriminators provide a good balance for fast learning.
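To make the headline result concrete, here is a minimal numerical sketch (not the paper's code) of the LQG setting: data drawn from a zero-mean Gaussian, a linear generator G(z) = Az with z ~ N(0, I_k), and the quadratic Wasserstein-2 loss. Under these assumptions the W2-optimal linear generator has the closed form A* = U_k diag(sqrt(λ_k)), built from the top-k eigenpairs of the data covariance, which is exactly PCA. The dimensions d, k, n and the helper w2_sq below are illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension d, latent dimension k, sample size n.
d, k, n = 10, 3, 50_000

# Ground-truth covariance with a decaying spectrum; data ~ N(0, Sigma).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Sigma = Q @ np.diag(np.linspace(5.0, 0.1, d)) @ Q.T
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Empirical covariance and its top-k eigenpairs (empirical PCA).
S = (X.T @ X) / n
w, V = np.linalg.eigh(S)              # eigenvalues in ascending order
top = np.argsort(w)[::-1][:k]
Uk, lk = V[:, top], w[top]

# In the LQG setting, the W2-optimal linear generator G(z) = Az, z ~ N(0, I_k),
# is A* = Uk diag(sqrt(lambda_k)): the PCA solution.
A_star = Uk * np.sqrt(lk)

# Check 1: the generated covariance A* A*^T equals the rank-k PCA
# approximation of the empirical covariance.
print(np.allclose(A_star @ A_star.T, Uk @ np.diag(lk) @ Uk.T))  # True

def w2_sq(S1, S2):
    """Squared W2 distance between zero-mean Gaussians N(0, S1) and N(0, S2)."""
    R = sqrtm(S2)
    return np.trace(S1) + np.trace(S2) - 2.0 * np.real(np.trace(sqrtm(R @ S1 @ R)))

# Check 2: the PCA generator attains W2^2 equal to the energy in the
# discarded eigen-directions.
print(w2_sq(A_star @ A_star.T, S), np.sum(np.sort(w)[: d - k]))
```

Both checks hold by construction here: A* A*^T is built from the top-k eigenpairs of S, so it is the rank-k PCA approximation, and the residual W2^2 reduces to the sum of the discarded eigenvalues. The sketch only illustrates the PCA characterization (claim 1 above); the sample-complexity statements require the paper's analysis.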

Related research

02/25/2019 · Wasserstein GAN Can Perform PCA
Generative Adversarial Networks (GANs) have become a powerful framework ...

06/09/2021 · Realizing GANs via a Tunable Loss Function
We introduce a tunable GAN, called α-GAN, parameterized by α ∈ (0, ∞], whi...

02/28/2023 · Towards Addressing GAN Training Instabilities: Dual-objective GANs with Tunable Parameters
In an effort to address the training instabilities of GANs, we introduce...

05/22/2023 · Statistical Guarantees of Group-Invariant GANs
Group-invariant generative adversarial networks (GANs) are a type of GAN...

09/27/2019 · On the Anomalous Generalization of GANs
Generative models, especially Generative Adversarial Networks (GANs), ha...

02/28/2023 · Double Dynamic Sparse Training for GANs
The past decade has witnessed a drastic increase in modern deep neural n...

06/07/2021 · Double Descent and Other Interpolation Phenomena in GANs
We study overparameterization in generative adversarial networks (GANs) ...
