Global Convergence and Variance-Reduced Optimization for a Class of Nonconvex-Nonconcave Minimax Problems

02/22/2020
by Junchi Yang, et al.

Nonconvex minimax problems appear frequently in emerging machine learning applications, such as generative adversarial networks and adversarial learning. Simple algorithms such as gradient descent ascent (GDA) are common practice for solving these nonconvex games and have enjoyed considerable empirical success. Yet it is known that vanilla GDA with a constant step size can diverge even in the convex setting. In this work, we show that for a subclass of nonconvex-nonconcave objectives satisfying a so-called two-sided Polyak-Łojasiewicz (PL) inequality, the alternating gradient descent ascent (AGDA) algorithm converges globally at a linear rate and stochastic AGDA achieves a sublinear rate. We further develop a variance-reduced algorithm that attains a provably faster rate than AGDA when the problem has a finite-sum structure.
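
For reference, the two-sided PL condition on min_x max_y f(x, y) is usually stated as a pair of gradient-dominance inequalities, one in each variable. The following is a sketch with assumed constants μ1, μ2 > 0; the paper's exact statement may differ in normalization.

```latex
% Two-sided Polyak-Lojasiewicz (PL) condition for min_x max_y f(x, y).
% Sketch only: the constants \mu_1, \mu_2 > 0 are assumptions, and the
% paper's precise formulation may differ.
\begin{align}
  \|\nabla_x f(x, y)\|^2 &\ge 2\mu_1 \bigl( f(x, y) - \min_{x'} f(x', y) \bigr), \\
  \|\nabla_y f(x, y)\|^2 &\ge 2\mu_2 \bigl( \max_{y'} f(x, y') - f(x, y) \bigr).
\end{align}
```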
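
A minimal sketch of the AGDA update follows; its defining feature is that the ascent step on y sees the freshly updated x, unlike simultaneous GDA. The function names, step sizes, and toy objective are illustrative assumptions, not taken from the paper.

```python
def agda(grad_x, grad_y, x, y, lr_x=0.01, lr_y=0.01, n_iters=1000):
    """Alternating gradient descent ascent (AGDA) for min_x max_y f(x, y).

    grad_x(x, y) and grad_y(x, y) return the partial gradients of f.
    Sketch only; step sizes are illustrative.
    """
    for _ in range(n_iters):
        x = x - lr_x * grad_x(x, y)  # descent step on x
        y = y + lr_y * grad_y(x, y)  # ascent step on y, using the new x
    return x, y

# Toy saddle objective f(x, y) = x**2 + 3*x*y - y**2 (illustrative,
# not from the paper); its unique saddle point is (0, 0).
x_opt, y_opt = agda(lambda x, y: 2 * x + 3 * y,
                    lambda x, y: 3 * x - 2 * y,
                    x=1.0, y=1.0)
print(x_opt, y_opt)  # both iterates approach 0
```

Replacing the exact gradients with unbiased stochastic estimates yields the stochastic AGDA the abstract refers to, at a correspondingly slower (sublinear) rate.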
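
For the finite-sum setting f(x, y) = (1/n) Σ_i f_i(x, y), the variance-reduced variant can be sketched with an SVRG-style estimator. This is a generic illustration of the technique, not the paper's exact algorithm; the epoch length, sampling scheme, and step size are all assumptions.

```python
import random

def vr_agda(grads_x, grads_y, x, y, lr=0.01, n_epochs=50, m=None):
    """SVRG-style variance-reduced AGDA sketch for finite-sum minimax
    problems f(x, y) = (1/n) * sum_i f_i(x, y).

    grads_x[i](x, y) and grads_y[i](x, y) return the gradients of the
    i-th component f_i. Illustrative only: parameter choices and the
    exact update order in the paper may differ.
    """
    n = len(grads_x)
    m = m or n  # inner-loop length per snapshot (assumed)
    for _ in range(n_epochs):
        x0, y0 = x, y  # epoch anchor for the full-gradient snapshot
        gx_full = sum(g(x0, y0) for g in grads_x) / n
        gy_full = sum(g(x0, y0) for g in grads_y) / n
        for _ in range(m):
            i = random.randrange(n)
            # Variance-reduced estimates: a stochastic gradient corrected
            # by its deviation from the snapshot gradient.
            vx = grads_x[i](x, y) - grads_x[i](x0, y0) + gx_full
            x = x - lr * vx  # descent on x
            vy = grads_y[i](x, y) - grads_y[i](x0, y0) + gy_full
            y = y + lr * vy  # ascent on y, using the updated x
    return x, y
```

The snapshot correction drives the estimator's variance to zero as the iterates approach the anchor, which is what permits a faster rate than plain stochastic AGDA under the finite-sum structure.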

12/22/2021

Accelerated Proximal Alternating Gradient-Descent-Ascent for Nonconvex Minimax Machine Learning

Alternating gradient-descent-ascent (AltGDA) is an optimization algorith...
06/02/2019

On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems

We consider nonconvex-concave minimax problems, min_x max_{y ∈ Y} f(x, y), where f ...
10/20/2020

Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems

Unlike nonconvex optimization, where gradient descent is guaranteed to c...
03/19/2016

Fast Incremental Method for Nonconvex Optimization

We analyze a fast incremental aggregated gradient method for optimizing ...
02/21/2022

Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness

Adversarial examples, crafted by adding imperceptible perturbations to n...
12/10/2021

Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity

Gradient descent ascent (GDA), the simplest single-loop algorithm for no...
05/29/2018

K-Beam Subgradient Descent for Minimax Optimization

Minimax optimization plays a key role in adversarial training of machine...