
Understanding Overparameterization in Generative Adversarial Networks

by Yogesh Balaji, et al.

A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involves training overparameterized models in which the number of parameters exceeds a certain threshold. A large body of work in supervised learning has shown the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting, and GANs in particular, involves non-convex concave min-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs, both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a one-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first global convergence result for GDA in such settings. Our theory rests on a more general result that holds for a broader class of nonlinear generators and discriminators satisfying certain assumptions (including deeper generators and random-feature discriminators). We also empirically study the role of model overparameterization in GANs through several large-scale experiments on the CIFAR-10 and CelebA datasets. Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence of GDA across the board.
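
To make the GDA update concrete, here is a minimal NumPy sketch of simultaneous Gradient Descent/Ascent on a toy WGAN-style min-max problem with a linear discriminator D(x) = w'x and a linear generator G(z) = Uz + b. This is an illustration of the algorithm only, not the paper's setting or code: the paper analyzes a one-layer neural network generator, and the dimensions, step size, and quadratic regularizer lam below are arbitrary choices that make the toy problem strongly concave in w.

    import numpy as np

    # Toy sketch of simultaneous Gradient Descent/Ascent (GDA) on the
    # regularized WGAN-style objective
    #     V(U, b, w) = E[w.x] - E[w.(U z + b)] - (lam / 2) ||w||^2,
    # which is linear in the generator (U, b) and strongly concave in w;
    # a much simplified stand-in for the paper's non-convex concave problem.
    rng = np.random.default_rng(0)
    d, k, n = 2, 8, 512                 # data dim, latent dim, batch size
    lam, lr = 0.5, 0.1                  # regularizer and step size (arbitrary)

    X = rng.normal(loc=3.0, size=(n, d))   # "real" data: a shifted Gaussian
    U = 0.1 * rng.normal(size=(d, k))      # generator weight
    b = np.zeros(d)                        # generator bias
    w = np.zeros(d)                        # linear discriminator weight

    for step in range(2000):
        Z = rng.normal(size=(n, k))
        fake = Z @ U.T + b                 # generated batch G(z) = U z + b

        # Gradients of the empirical objective at the current iterate.
        g_w = X.mean(0) - fake.mean(0) - lam * w   # dV/dw (ascent direction)
        g_U = -np.outer(w, Z.mean(0))              # dV/dU
        g_b = -w                                   # dV/db

        # Simultaneous GDA: both players step from the same iterate.
        w, U, b = w + lr * g_w, U - lr * g_U, b - lr * g_b

    # At the saddle point w -> 0 and E[G(z)] = b matches the real mean.
    print("real mean:", X.mean(0))
    print("fake mean:", b)

The quadratic regularizer is what makes the discriminator's side strongly concave, so plain simultaneous GDA converges here; on the unregularized bilinear game, GDA is known to cycle, which hints at why global convergence guarantees of the kind proved in the paper are nontrivial.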




On the One-sided Convergence of Adam-type Algorithms in Non-convex Non-concave Min-max Optimization

Adam-type methods, the extension of adaptive gradient methods, have show...

Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

Generative Adversarial Networks (GANs) are commonly used for modeling co...

The Unusual Effectiveness of Averaging in GAN Training

We show empirically that the optimal strategy of parameter averaging in ...

Training GANs with predictive projection centripetal acceleration

Although remarkably successful in practice, training generative adversar...

Interior Point Methods with Adversarial Networks

We present a new methodology, called IPMAN, that combines interior point...

Sliced Iterative Generator

We introduce the Sliced Iterative Generator (SIG), an iterative generati...

Training generative networks using random discriminators

In recent years, Generative Adversarial Networks (GANs) have drawn a lot...