Escaping Saddle Points in Constrained Optimization

09/06/2018
by Aryan Mokhtari, et al.

In this paper, we focus on escaping from saddle points in smooth nonconvex optimization problems constrained to a convex set C. We propose a generic framework that converges to a second-order stationary point of the problem, provided that the convex set C is simple for a quadratic objective function. More precisely, our results hold if one can find a ρ-approximate solution of a quadratic program subject to C in polynomial time, where ρ < 1 is a positive constant that depends on the structure of the set C. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an (ϵ,γ)-second-order stationary point (SOSP) in at most O(max{ϵ^-2, ρ^-3 γ^-3}) iterations. We further characterize the overall arithmetic operations needed to reach an SOSP when the convex set C can be written as a set of quadratic constraints. Finally, we extend our results to the stochastic setting and characterize the number of stochastic gradient and Hessian evaluations required to reach an (ϵ,γ)-SOSP.
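The following minimal sketch is not the paper's framework, and all function and parameter names are our own; it only illustrates, over a simple box constraint, the two ingredients the abstract describes: a projected-gradient step that drives the iterate toward approximate first-order stationarity, and a crude stand-in for the ρ-approximate quadratic-program oracle, here implemented by probing the Hessian's most-negative-curvature direction to certify an (ϵ,γ)-SOSP or escape a saddle.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

def escape_saddle_box(grad, hess, x0, lo, hi,
                      eps=1e-6, gamma=1e-6, eta=0.1, max_iter=500):
    """Illustrative sketch of a second-order method over a box constraint.

    Alternates projected-gradient steps with a crude quadratic-subproblem
    'oracle' that probes the eigenvector of the smallest Hessian eigenvalue.
    This is a toy stand-in for a rho-approximate QP solver, NOT the exact
    framework of the paper.
    """
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        # Projected-gradient step; the displacement it produces measures
        # first-order stationarity over the feasible set.
        x_new = project_box(x - eta * g, lo, hi)
        if np.linalg.norm(x_new - x) > eps:
            x = x_new
            continue
        # Approximately solve min_u  g.u + 0.5 u'Hu  over feasible
        # displacements by trying +/- the most-negative-curvature direction.
        H = hess(x)
        _, V = np.linalg.eigh(H)   # eigenvalues in ascending order
        v = V[:, 0]
        best_u, best_q = None, -gamma
        for s in (v, -v):
            # Largest step length along s that keeps x + t*s inside the box.
            with np.errstate(divide="ignore", invalid="ignore"):
                t_hi = np.where(s > 0, (hi - x) / s, np.inf)
                t_lo = np.where(s < 0, (lo - x) / s, np.inf)
            t = min(1.0, float(np.min(np.minimum(t_hi, t_lo))))
            u = t * s
            q = g @ u + 0.5 * u @ H @ u
            if q < best_q:
                best_u, best_q = u, q
        if best_u is None:
            return x  # (eps, gamma)-SOSP with respect to this crude oracle
        x = project_box(x + best_u, lo, hi)
    return x
```

On f(x) = x1^2 - x2^2 over [-1,1]^2, started exactly at the saddle point (0,0), the projected gradient vanishes, but the negative-curvature probe finds a feasible direction with quadratic-model value below -γ and the sketch escapes to (0, ±1), a constrained minimizer.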

