Don't Fix What Ain't Broke: Near-optimal Local Convergence of Alternating Gradient Descent-Ascent for Minimax Optimization

02/18/2021
by Guodong Zhang, et al.

Minimax optimization has recently gained a lot of attention as adversarial architectures and algorithms proliferate. Smooth minimax games typically proceed by simultaneous or alternating gradient updates. Although algorithms with alternating updates are commonly used in practice for many applications (e.g., GAN training), the majority of existing theoretical analyses focus on simultaneous algorithms. In this paper, we study alternating gradient descent-ascent (Alt-GDA) in minimax games and show that Alt-GDA is superior to its simultaneous counterpart (Sim-GDA) in many settings. In particular, we prove that Alt-GDA achieves a near-optimal local convergence rate for strongly-convex strongly-concave problems, while Sim-GDA converges at a much slower rate. Moreover, we show that the acceleration effect of alternating updates persists when the minimax problem has strong concavity only in the dual variables. Numerical experiments on quadratic minimax games validate our claims. Additionally, we demonstrate that alternating updates speed up GAN training significantly and that optimism helps only for simultaneous algorithms.
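To make the difference between the two update schemes concrete, here is a minimal sketch (not the paper's code) comparing Sim-GDA and Alt-GDA on a toy strongly-convex strongly-concave quadratic game f(x, y) = (a/2)x^2 + bxy - (c/2)y^2; the coefficients and step size are illustrative assumptions. The only change between the two methods is that Alt-GDA's ascent step uses the freshly updated x.

```python
# Toy strongly-convex strongly-concave quadratic game:
#   f(x, y) = (a/2) x^2 + b*x*y - (c/2) y^2
# The unique equilibrium is (0, 0). Coefficients are illustrative.
a, b, c = 1.0, 1.0, 1.0
lr = 0.2  # step size (assumption, not tuned from the paper)

def grads(x, y):
    """Return (df/dx, df/dy) for the quadratic game above."""
    return a * x + b * y, b * x - c * y

def sim_gda(x, y, steps):
    """Simultaneous GDA: both players move from the same iterate."""
    for _ in range(steps):
        gx, gy = grads(x, y)
        x, y = x - lr * gx, y + lr * gy  # simultaneous update
    return x, y

def alt_gda(x, y, steps):
    """Alternating GDA: the ascent step sees the updated x."""
    for _ in range(steps):
        gx, _ = grads(x, y)
        x = x - lr * gx          # descent step first
        _, gy = grads(x, y)      # ascent gradient at the fresh x
        y = y + lr * gy
    return x, y

if __name__ == "__main__":
    for name, method in [("Sim-GDA", sim_gda), ("Alt-GDA", alt_gda)]:
        x, y = method(1.0, 1.0, 50)
        dist = (x * x + y * y) ** 0.5  # distance to the equilibrium (0, 0)
        print(f"{name}: distance to equilibrium after 50 steps = {dist:.2e}")
```

On this instance both iterations are linear maps, and the spectral radius of the Alt-GDA map is strictly smaller than Sim-GDA's, so the alternating scheme contracts faster toward the equilibrium, mirroring the local-rate separation claimed above.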

Related research
08/11/2022

Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization

We study the problem of finding a near-stationary point for smooth minim...
10/06/2021

Solve Minimax Optimization by Anderson Acceleration

Many modern machine learning algorithms such as generative adversarial n...
08/15/2019

Convergence Behaviour of Some Gradient-Based Methods on Bilinear Games

Min-max optimization has attracted much attention in the machine learnin...
05/10/2021

A Sharp Analysis of Covariate Adjusted Precision Matrix Estimation via Alternating Gradient Descent with Hard Thresholding

In this paper, we present a sharp analysis for an alternating gradient d...
05/29/2018

K-Beam Subgradient Descent for Minimax Optimization

Minimax optimization plays a key role in adversarial training of machine...
02/24/2019

Training GANs with Centripetal Acceleration

Training generative adversarial networks (GANs) often suffers from cycli...