Understanding Overparameterization in Generative Adversarial Networks

04/12/2021
by   Yogesh Balaji, et al.

A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involve training overparameterized models, where the number of model parameters exceeds a certain threshold. A large body of work in supervised learning has shown the importance of model overparameterization for the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting and GANs in particular involve non-convex concave min-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs, both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a 1-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators obeying certain assumptions (including deeper generators and random feature discriminators). We also empirically study the role of model overparameterization in GANs through several large-scale experiments on the CIFAR-10 and Celeb-A datasets. Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence behavior of GDA across the board.
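To make the training setup concrete, the sketch below illustrates simultaneous Gradient Descent/Ascent (GDA) on a min-max objective with a one-hidden-layer generator and a linear discriminator, the regime analyzed in the abstract. This is not the authors' code: the WGAN-style objective, the dimensions, and the step sizes are illustrative assumptions, chosen only so that the generator's parameter count exceeds the number of training samples (overparameterization).

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's implementation):
# simultaneous GDA on f(W, V, d) = mean_i d.x_i - mean_j d.G(z_j),
# with generator G(z) = V relu(W z) and linear discriminator D(x) = d.x.
rng = np.random.default_rng(0)

n, x_dim, z_dim, hidden = 32, 8, 8, 256          # wide hidden layer => overparameterized generator
X = rng.normal(size=(n, x_dim))                   # stand-in for "real" data

W = rng.normal(size=(hidden, z_dim)) / np.sqrt(z_dim)    # generator, first layer
V = rng.normal(size=(x_dim, hidden)) / np.sqrt(hidden)   # generator, second layer
d = np.zeros(x_dim)                                        # linear discriminator
eta_g, eta_d = 1e-2, 1e-2                                  # illustrative step sizes

relu = lambda a: np.maximum(a, 0.0)

for step in range(2000):
    Z = rng.normal(size=(n, z_dim))
    H = relu(Z @ W.T)                 # hidden activations, (n, hidden)
    G = H @ V.T                       # generated samples, (n, x_dim)

    # Gradients of f: the discriminator ascends, the generator descends.
    grad_d = X.mean(axis=0) - G.mean(axis=0)                   # df/dd
    grad_V = -np.outer(d, H.mean(axis=0))                      # df/dV
    mask = (Z @ W.T > 0).astype(float)                         # ReLU derivative
    grad_W = -((mask * (V.T @ d)).T @ Z) / n                   # df/dW

    # Simultaneous GDA: both players update from the same iterate.
    V -= eta_g * grad_V
    W -= eta_g * grad_W
    d += eta_d * grad_d
```

Increasing `hidden` relative to `n` is what makes this toy generator overparameterized; the updates themselves are plain simultaneous GDA, with no alternating inner loops.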

