Gradient Descent Ascent for Min-Max Problems on Riemannian Manifold

10/13/2020
by Feihu Huang, et al.

In this paper, we study a class of useful non-convex minimax optimization problems on Riemannian manifolds and propose a class of Riemannian gradient descent ascent algorithms to solve them. Specifically, we propose a new Riemannian gradient descent ascent (RGDA) algorithm for deterministic minimax optimization, and we prove that RGDA has a sample complexity of O(κ^2ϵ^-2) for finding an ϵ-stationary point of nonconvex strongly-concave minimax problems, where κ denotes the condition number. We also introduce a Riemannian stochastic gradient descent ascent (RSGDA) algorithm for stochastic minimax optimization, and we prove that RSGDA achieves a sample complexity of O(κ^4ϵ^-4). To further reduce the sample complexity, we propose a novel momentum variance-reduced Riemannian stochastic gradient descent ascent (MVR-RSGDA) algorithm based on the momentum-based variance-reduction technique of STORM. We prove that MVR-RSGDA achieves a lower sample complexity of Õ(κ^4ϵ^-3) without requiring large batches, which nearly matches the best known sample complexity of its Euclidean counterparts. To the best of our knowledge, this is the first study of minimax optimization over Riemannian manifolds. Extensive experimental results on robust training of deep neural networks over the Stiefel manifold demonstrate the efficiency of our proposed algorithms.
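For intuition, the sketch below shows one plausible NumPy realization of the RGDA update on the Stiefel manifold (the manifold used in the experiments): the minimization variable takes a Riemannian gradient step, i.e. the Euclidean gradient is projected onto the tangent space and the step is pulled back to the manifold with a QR retraction, while the maximization variable takes an ordinary Euclidean ascent step. The toy strongly-concave objective, step sizes, and helper names (proj_stiefel_tangent, retract_qr, rgda) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sym(M):
    # Symmetric part of a square matrix
    return 0.5 * (M + M.T)

def proj_stiefel_tangent(X, G):
    # Project a Euclidean gradient G onto the tangent space of the
    # Stiefel manifold {X : X^T X = I} at X
    return G - X @ sym(X.T @ G)

def retract_qr(X, V):
    # QR retraction: map the tangent step V back onto the manifold.
    # Assumes X + V has full column rank (so diag(R) is nonzero).
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.diag(R))  # fix column signs for uniqueness

def rgda(grad_x, grad_y, X0, y0, eta_x=1e-2, eta_y=1e-1, iters=500):
    # Riemannian gradient descent ascent: descend in X on the manifold,
    # ascend in y in Euclidean space.
    X, y = X0.copy(), y0.copy()
    for _ in range(iters):
        gx = proj_stiefel_tangent(X, grad_x(X, y))  # Riemannian gradient
        gy = grad_y(X, y)
        X = retract_qr(X, -eta_x * gx)  # descent step on the manifold
        y = y + eta_y * gy              # ascent step in Euclidean space
    return X, y

# Toy nonconvex strongly-concave instance (illustrative assumption):
#   f(X, y) = <y, vec(A X)> - (lam/2) ||y||^2,  X on the Stiefel manifold
n, p, lam = 8, 3, 1.0
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
grad_x = lambda X, y: A.T @ y.reshape(n, p)           # Euclidean grad in X
grad_y = lambda X, y: (A @ X).ravel() - lam * y       # grad of concave part
X0, _ = np.linalg.qr(rng.standard_normal((n, p)))     # random Stiefel point
X, y = rgda(grad_x, grad_y, X0, np.zeros(n * p))
```

The stochastic variants modify only the gradient inputs. In particular, MVR-RSGDA replaces the raw stochastic gradients with a STORM-style momentum variance-reduced estimator before the projection and retraction above; a minimal sketch of that estimator (the function name and signature are assumptions) is:

```python
def storm_grad(grad_fn, x_prev, x_curr, d_prev, sample, a):
    # STORM momentum estimator:
    #   d_t = g(x_t; xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi_t))
    # Evaluating the same sample xi_t at both iterates is what reduces
    # the variance relative to plain stochastic gradients.
    return grad_fn(x_curr, sample) + (1.0 - a) * (d_prev - grad_fn(x_prev, sample))
```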


