On the One-sided Convergence of Adam-type Algorithms in Non-convex Non-concave Min-max Optimization

09/29/2021
by Zehao Dou, et al.

Adam-type methods, an extension of adaptive gradient methods, have shown great performance in training both supervised and unsupervised machine learning models. In particular, Adam-type optimizers have been widely adopted as the default tool for training generative adversarial networks (GANs). On the theory side, however, despite theoretical results establishing the efficiency of Adam-type methods for minimization problems, the reason for their strong performance in GAN training remains unexplained. In existing works, fast convergence has long been considered one of the most important reasons, and multiple works have provided theoretical guarantees that min-max optimization algorithms converge to a critical point under certain assumptions. In this paper, we first argue empirically that in GAN training, Adam does not converge to a critical point even upon successful training: only the generator converges, while the discriminator's gradient norm remains high throughout training. We name this phenomenon one-sided convergence. We then bridge the gap between experiments and theory by showing that Adam-type algorithms provably converge to a one-sided first-order stationary point in min-max optimization problems under a one-sided MVI condition. We also verify empirically that this one-sided MVI condition is satisfied for standard GANs trained on standard data sets. To the best of our knowledge, this is the first result that provides both an empirical observation and a rigorous theoretical guarantee of the one-sided convergence of Adam-type algorithms in min-max optimization.
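To make the setting concrete, below is a minimal sketch of the simultaneous Adam update for a two-player min-max problem, the structure the abstract refers to (the generator minimizes while the discriminator maximizes). The toy objective, hyperparameters, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam: exponential moment estimates with bias correction.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Illustrative min-max objective f(x, y) = x*y + 0.1*x**2,
# minimized over x and maximized over y (a stand-in for G and D).
def grad_x(x, y):
    return y + 0.2 * x

def grad_y(x, y):
    return x

x, y = 1.0, 1.0
mx = vx = my = vy = 0.0
for t in range(1, 2001):
    gx = grad_x(x, y)      # min player: descend on f
    gy = -grad_y(x, y)     # max player: ascent on f = descent on -f
    x, mx, vx = adam_step(x, gx, mx, vx, t)
    y, my, vy = adam_step(y, gy, my, vy, t)
print(abs(grad_x(x, y)), abs(grad_y(x, y)))
```

The paper's observation concerns exactly these two gradient norms: in GAN training, only one player's norm is driven toward zero. On this toy problem the trajectory is merely illustrative of the update rule, not of the paper's empirical findings.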


