A Second-order Equilibrium in Nonconvex-Nonconcave Min-max Optimization: Existence and Algorithm

06/22/2020
by Oren Mangoubi, et al.

Min-max optimization with a nonconvex-nonconcave objective function f: ℝ^d × ℝ^d → ℝ arises in many areas, including optimization, economics, and deep learning. The nonconvexity-nonconcavity of f means that the problem of finding a global ε-min-max point cannot be solved in poly(d, 1/ε) evaluations of f. Thus, most algorithms seek a certain notion of local min-max point where, roughly speaking, each player optimizes her payoff in a local sense. However, the classes of local min-max solutions that prior algorithms seek are guaranteed to exist only under very strong assumptions on f, such as convexity or monotonicity. We propose a notion of a greedy equilibrium point for min-max optimization and prove that such a point exists for any function whose value and first three derivatives are bounded. Informally, a point (x^⋆, y^⋆) is an ε-greedy min-max equilibrium point of a function f: ℝ^d × ℝ^d → ℝ if y^⋆ is a second-order local maximum of f(x^⋆, ·) and, roughly, x^⋆ is a local minimum of a greedy optimization version of the function max_y f(x, y), which can be efficiently estimated using greedy algorithms. The existence result follows from an algorithm that converges from any starting point to such a point in a number of gradient and function evaluations that is polynomial in 1/ε, the dimension d, and the bounds on f and its first three derivatives. Our results do not require convexity, monotonicity, or special starting points.
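For reference, the second-order local maximality of y^⋆ in the first condition is usually formalized as an approximate second-order stationarity condition on f(x^⋆, ·). The sketch below is only illustrative: the ε and √ε tolerances are assumptions for concreteness, not the paper's exact definition.

% Illustrative tolerances only; the paper's exact ε-dependence may differ.
\[
  \|\nabla_y f(x^\star, y^\star)\| \le \varepsilon,
  \qquad
  \nabla^2_{yy} f(x^\star, y^\star) \preceq \sqrt{\varepsilon}\, I .
\]

Here I is the d × d identity matrix and ⪯ denotes the Loewner order, so the first condition says y^⋆ is nearly stationary for f(x^⋆, ·) and the second says the Hessian in y has no eigenvalue significantly above zero.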
