Finding Second-Order Stationary Point for Nonconvex-Strongly-Concave Minimax Problem

10/10/2021
by Luo Luo, et al.

We study the smooth minimax optimization problem min_x max_y f(x, y), where the objective is strongly concave in y but possibly nonconvex in x. This problem covers many applications in machine learning, such as regularized GANs, reinforcement learning, and adversarial training. Most existing theory for gradient descent ascent focuses on convergence to a first-order stationary point of f(x, y) or of the primal function P(x) ≜ max_y f(x, y). In this paper, we design a new optimization method based on cubic Newton iterations that finds an 𝒪(ε, κ^1.5√(ρε))-second-order stationary point of P(x) with 𝒪(κ^1.5√(ρ)ε^-1.5) second-order oracle calls and 𝒪̃(κ^2√(ρ)ε^-1.5) first-order oracle calls, where κ is the condition number and ρ is the Hessian smoothness coefficient of f(x, y). For high-dimensional problems, we propose a variant algorithm that avoids the expensive second-order oracle by solving the cubic sub-problem inexactly via gradient descent and matrix Chebyshev expansion. This strategy still finds the desired approximate second-order stationary point with high probability, while requiring only 𝒪̃(κ^1.5ℓε^-2) Hessian-vector oracle calls and 𝒪̃(κ^2√(ρ)ε^-1.5) first-order oracle calls. To the best of our knowledge, this is the first work that establishes non-asymptotic convergence guarantees for finding second-order stationary points of minimax problems without the convex-concave assumption.
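Here an (ε_g, ε_H)-second-order stationary point of P is the standard notion: ‖∇P(x)‖ ≤ ε_g and λ_min(∇²P(x)) ≥ -ε_H. To make the two-loop structure concrete, below is a minimal sketch in Python (not the authors' implementation) on a toy objective f(x, y) = g(x) + xᵀBy - (μ/2)‖y‖², which is strongly concave in y. The inner maximization is tracked by gradient ascent, so that ∇P(x) = ∇_x f(x, y*(x)) by Danskin's theorem, and the cubic sub-problem is minimized inexactly by gradient descent using only Hessian-vector products of P, approximated here by finite differences rather than the paper's matrix Chebyshev expansion. All step sizes, iteration counts, and the cubic coefficient M are illustrative assumptions, not the constants from the paper's analysis.

import numpy as np

rng = np.random.default_rng(0)
dx, dy, mu = 5, 3, 2.0
B = rng.standard_normal((dx, dy))   # coupling matrix for the toy problem

def g(x):                           # simple nonconvex term in x
    return np.sum(np.cos(x)) + 0.1 * x @ x

def grad_g(x):
    return -np.sin(x) + 0.2 * x

def grad_x_f(x, y):                 # partial gradient of f in x
    return grad_g(x) + B @ y

def grad_y_f(x, y):                 # partial gradient of f in y
    return B.T @ x - mu * y

def inner_max(x, y, steps=200, eta=0.1):
    """Gradient ascent on the strongly concave inner problem max_y f(x, y)."""
    for _ in range(steps):
        y = y + eta * grad_y_f(x, y)
    return y

def grad_P(x, y_star):
    """Danskin's theorem: grad P(x) = grad_x f(x, y*(x))."""
    return grad_x_f(x, y_star)

def hvp_P(x, y, v, h=1e-5):
    """Finite-difference Hessian-vector product of P at x (an approximation;
    the paper instead builds this step on matrix Chebyshev expansion)."""
    gp = grad_P(x + h * v, inner_max(x + h * v, y))
    gm = grad_P(x - h * v, inner_max(x - h * v, y))
    return (gp - gm) / (2 * h)

def cubic_step(x, y, gP, M=5.0, steps=300, eta=0.02):
    """Inexact gradient descent on the cubic model
    m(s) = gP^T s + 0.5 s^T H s + (M/6) ||s||^3, using only H-vector products."""
    s = np.zeros_like(x)
    for _ in range(steps):
        grad_m = gP + hvp_P(x, y, s) + 0.5 * M * np.linalg.norm(s) * s
        s = s - eta * grad_m
    return s

x, y = rng.standard_normal(dx), np.zeros(dy)
for t in range(30):
    y = inner_max(x, y)              # track y*(x) with gradient ascent
    gP = grad_P(x, y)
    x = x + cubic_step(x, y, gP)     # cubic-regularized Newton step on P
    print(t, np.linalg.norm(gP))     # primal gradient norm should shrink

The gradient of the cubic model m(s) = gᵀs + (1/2)sᵀHs + (M/6)‖s‖³ is g + Hs + (M/2)‖s‖s, so each inner descent step in cubic_step needs exactly one Hessian-vector product of P. This is why the variant algorithm can get by with Hessian-vector and first-order oracles only, avoiding full second-order oracle calls.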


