AdaGDA: Faster Adaptive Gradient Descent Ascent Methods for Minimax Optimization

06/30/2021
by Feihu Huang, et al.

In this paper, we propose a class of faster adaptive gradient descent ascent methods for solving nonconvex-strongly-concave minimax problems by using the unified adaptive matrices introduced in SUPER-ADAM <cit.>. Specifically, we propose a fast adaptive gradient descent ascent (AdaGDA) method based on the basic momentum technique, which reaches a low sample complexity of O(κ^4ϵ^-4) for finding an ϵ-stationary point without large batches; this improves the existing result for adaptive minimax optimization methods by a factor of O(√(κ)). Moreover, we present an accelerated version of AdaGDA (VR-AdaGDA) based on the momentum-based variance-reduction technique, which achieves the best known sample complexity of O(κ^3ϵ^-3) for finding an ϵ-stationary point without large batches. Further assuming a bounded Lipschitz parameter of the objective function, we prove that our VR-AdaGDA method reaches a lower sample complexity of O(κ^2.5ϵ^-3) with a mini-batch size of O(κ). In particular, we provide an effective convergence analysis framework for our adaptive methods based on the unified adaptive matrices, which cover almost all existing adaptive learning rates.
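To make the descent ascent pattern concrete, here is a minimal sketch of one momentum-based adaptive step for min_x max_y f(x, y): a momentum estimate of each stochastic gradient, a diagonal Adam-style adaptive matrix for the x-update (one choice covered by a unified adaptive-matrix framework), and a plain ascent step on the strongly concave y-side. The function name adagda_step, the toy objective, and all step sizes and momentum constants are illustrative assumptions, not the authors' exact algorithm or theoretical parameter choices.

```python
# Sketch of one momentum-based adaptive gradient descent ascent step for
# min_x max_y f(x, y). Illustrative only; names and constants are assumptions,
# not the AdaGDA algorithm as specified in the paper.
import numpy as np

def adagda_step(x, y, v_x, v_y, a_diag, grad_fn, *,
                eta=0.01, lam=0.1, beta=0.9, rho=0.99, eps=1e-8):
    """One descent step in x and one ascent step in y.

    x, y      : current primal / dual iterates
    v_x, v_y  : momentum estimates of the stochastic gradients
    a_diag    : running second-moment estimate that defines the diagonal
                adaptive matrix A = sqrt(a_diag) + eps
    grad_fn   : returns stochastic gradients (g_x, g_y) at (x, y)
    """
    g_x, g_y = grad_fn(x, y)

    # Basic momentum estimators of the stochastic gradients.
    v_x = (1 - beta) * v_x + beta * g_x
    v_y = (1 - beta) * v_y + beta * g_y

    # Diagonal Adam-style adaptive matrix for the x-update (assumed choice).
    a_diag = rho * a_diag + (1 - rho) * g_x ** 2
    A = np.sqrt(a_diag) + eps

    x = x - eta * v_x / A   # adaptive gradient descent on x
    y = y + lam * v_y       # gradient ascent on y (strongly concave side)
    return x, y, v_x, v_y, a_diag


if __name__ == "__main__":
    # Toy nonconvex-strongly-concave objective: f(x, y) = x^2/(1+x^2) + x*y - y^2
    rng = np.random.default_rng(0)

    def grad_fn(x, y):
        gx = 2 * x / (1 + x ** 2) ** 2 + y + 0.01 * rng.standard_normal()
        gy = x - 2 * y + 0.01 * rng.standard_normal()
        return gx, gy

    x, y = 2.0, 1.0
    v_x, v_y, a_diag = 0.0, 0.0, 0.0
    for _ in range(2000):
        x, y, v_x, v_y, a_diag = adagda_step(x, y, v_x, v_y, a_diag, grad_fn)
    print(f"x = {x:.4f}, y = {y:.4f}")
```

The sketch uses scalars for readability; in practice the same update applies coordinate-wise to parameter vectors, and the diagonal matrix above could be replaced by any other adaptive matrix admitted by the SUPER-ADAM-style framework.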
