Generalized Optimistic Methods for Convex-Concave Saddle Point Problems

02/19/2022
by   Ruichen Jiang, et al.

The optimistic gradient method has seen increasing popularity as an efficient first-order method for solving convex-concave saddle point problems. To analyze its iteration complexity, a recent work [arXiv:1901.08511] proposed an interesting perspective that interprets the optimistic gradient method as an approximation to the proximal point method. In this paper, we follow this approach and distill the underlying idea of optimism to propose a generalized optimistic method, which encompasses the optimistic gradient method as a special case. Our general framework can handle constrained saddle point problems with composite objective functions and can work with arbitrary norms with compatible Bregman distances. Moreover, we develop an adaptive line search scheme to select the stepsizes without knowledge of the smoothness coefficients. We instantiate our method with first-order, second-order, and higher-order oracles and give sharp global iteration complexity bounds. When the objective function is convex-concave, we show that the averaged iterates of our p-th-order method (p ≥ 1) converge at a rate of 𝒪(1/N^{(p+1)/2}). When the objective function is further strongly-convex-strongly-concave, we prove a complexity bound of 𝒪((L_1/μ) log(1/ϵ)) for our first-order method and a bound of 𝒪((L_p D^{(p-1)/2}/μ)^{2/(p+1)} + log log(1/ϵ)) for our p-th-order method (p ≥ 2), where L_p (p ≥ 1) is the Lipschitz constant of the p-th-order derivative, μ is the strong convexity parameter, and D is the initial Bregman distance to the saddle point. Finally, our line search scheme provably requires only an almost-constant number of calls to a subproblem solver per iteration on average, which makes our first-order and second-order methods particularly amenable to implementation.
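As a concrete illustration, below is a minimal sketch of the classical optimistic gradient descent-ascent (OGDA) update, the first-order special case the abstract refers to, applied to the bilinear saddle point problem min_x max_y x^T A y. This is not the paper's generalized method (no Bregman distances, composite terms, or line search); the matrix A, the stepsize eta, and the iteration count are illustrative assumptions.

```python
import numpy as np

# Bilinear saddle point problem: f(x, y) = x^T A y, with unique saddle
# point at the origin when A is invertible (almost surely for random A).
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

eta = 0.1 / np.linalg.norm(A, 2)  # conservative stepsize relative to L_1
x, y = np.ones(n), np.ones(n)
gx_prev, gy_prev = A @ y, A.T @ x  # previous partial gradients (warm start)

for k in range(2000):
    gx, gy = A @ y, A.T @ x  # partial gradients at the current iterate
    # Optimistic step: correct the current gradient with the previous one,
    # i.e. z_{k+1} = z_k - eta * (2 F(z_k) - F(z_{k-1})) for the saddle
    # operator F(z) = (grad_x f, -grad_y f).
    x = x - eta * (2 * gx - gx_prev)
    y = y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy

print(np.linalg.norm(x), np.linalg.norm(y))  # both norms shrink toward 0
```

The extrapolation term 2 F(z_k) - F(z_{k-1}) is what makes the step a cheap approximation of the implicit proximal point update z_{k+1} = z_k - eta F(z_{k+1}): the previous gradient serves as a predictor of the next one, which is the "optimism" the paper distills and generalizes to higher-order oracles.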


