Near-Optimal Decentralized Momentum Method for Nonconvex-PL Minimax Problems

04/21/2023
by   Feihu Huang, et al.

Minimax optimization plays an important role in many machine learning tasks such as generative adversarial networks (GANs) and adversarial training. Although a wide variety of optimization methods have recently been proposed to solve minimax problems, most of them ignore the distributed setting, where the data is spread across multiple workers. Meanwhile, the existing decentralized minimax optimization methods rely on strict assumptions such as (strong) concavity or variational inequality conditions. Thus, in this paper we propose an efficient decentralized momentum-based gradient descent ascent (DM-GDA) method for distributed nonconvex-PL minimax optimization, where the objective is nonconvex in the primal variable and nonconcave in the dual variable but satisfies the Polyak-Lojasiewicz (PL) condition. In particular, our DM-GDA method simultaneously uses momentum-based techniques to update the variables and to estimate the stochastic gradients. Moreover, we provide a solid convergence analysis for our DM-GDA method and prove that it achieves a near-optimal gradient complexity of O(ϵ^-3) for finding an ϵ-stationary solution of nonconvex-PL stochastic minimax problems, which matches the lower bound of nonconvex stochastic optimization. To the best of our knowledge, this is the first study of a decentralized algorithm for nonconvex-PL stochastic minimax optimization over a network.
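To make the idea concrete, here is a hedged single-worker sketch (not the paper's exact DM-GDA, which runs over a network of workers) of momentum-based gradient descent ascent with a STORM-style momentum gradient estimator. The toy objective f(x, y) = 0.5x² + xy - 0.5y², the additive noise model, and all step sizes are illustrative assumptions, not values from the paper; the objective is strongly concave in y, so it satisfies the PL condition in the dual variable.

```python
import numpy as np

# Hypothetical toy minimax problem: f(x, y) = 0.5*x**2 + x*y - 0.5*y**2.
# Strong concavity in y implies the PL condition in the dual variable.
def grad_x(x, y, noise):
    return x + y + noise  # stochastic estimate of df/dx

def grad_y(x, y, noise):
    return x - y + noise  # stochastic estimate of df/dy

rng = np.random.default_rng(0)
x, y = 2.0, 2.0
vx = grad_x(x, y, rng.normal(0, 0.01))  # momentum gradient estimates
vy = grad_y(x, y, rng.normal(0, 0.01))
eta, gamma, beta = 0.05, 0.05, 0.1      # primal/dual steps, momentum weight

for _ in range(2000):
    xi = rng.normal(0, 0.01, size=2)    # one shared noise sample per step
    x_prev, y_prev = x, y
    x = x - eta * vx                    # descent on the primal variable
    y = y + gamma * vy                  # ascent on the dual variable
    # Momentum estimator: fresh stochastic gradient plus a recursive
    # correction evaluated at both iterates with the SAME sample xi;
    # reusing the sample is what reduces the estimator's variance.
    vx = grad_x(x, y, xi[0]) + (1 - beta) * (vx - grad_x(x_prev, y_prev, xi[0]))
    vy = grad_y(x, y, xi[1]) + (1 - beta) * (vy - grad_y(x_prev, y_prev, xi[1]))

# The iterates drift toward the saddle point (0, 0) of the toy problem.
print(round(x, 3), round(y, 3))
```

The same momentum coefficient serves double duty here, smoothing both the update direction and the gradient estimate, which is the feature the abstract highlights; the decentralized version would additionally average iterates with neighboring workers at each step.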


Related research

12/06/2022  Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems
            Minimax optimization problems have attracted significant attention in re...

05/27/2019  Revisiting Stochastic Extragradient
            We consider a new extension of the extragradient method that is motivate...

03/07/2023  Enhanced Adaptive Gradient Algorithms for Nonconvex-PL Minimax Optimization
            In the paper, we study a class of nonconvex nonconcave minimax optimizat...

04/05/2023  Decentralized gradient descent maximization method for composite nonconvex strongly-concave minimax problems
            Minimax problems have recently attracted a lot of research interests. A ...

02/21/2022  Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness
            Adversarial examples, crafted by adding imperceptible perturbations to n...

05/28/2022  Uniform Convergence and Generalization for Nonconvex Stochastic Minimax Problems
            This paper studies the uniform convergence and generalization bounds for...

02/07/2019  Momentum Schemes with Stochastic Variance Reduction for Nonconvex Composite Optimization
            Two new stochastic variance-reduced algorithms named SARAH and SPIDER ha...
