Escaping strict saddle points of the Moreau envelope in nonsmooth optimization

06/17/2021
by Damek Davis, et al.

Recent work has shown that stochastically perturbed gradient methods can efficiently escape strict saddle points of smooth functions. We extend this body of work to nonsmooth optimization by analyzing an inexact analogue of a stochastically perturbed gradient method applied to the Moreau envelope. The main conclusion is that a variety of algorithms for nonsmooth optimization can escape strict saddle points of the Moreau envelope at a controlled rate. The main technical insight is that typical algorithms applied to the proximal subproblem yield directions that approximate the gradient of the Moreau envelope in relative terms.
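To make the mechanism concrete: for a rho-weakly convex f and lambda < 1/rho, the Moreau envelope f_lambda(x) = min_y { f(y) + ||y - x||^2/(2*lambda) } is differentiable with gradient (x - prox_{lambda f}(x))/lambda, so any reasonably accurate solution of the proximal subproblem yields an approximate envelope gradient. The sketch below is only an illustrative reading of that idea, not the paper's algorithm: the proximal point is approximated by inner subgradient steps, the resulting direction drives the outer update, and small random perturbations are added to escape strict saddles. The test function and every parameter value are assumptions chosen for the example.

```python
# Illustrative sketch only (not the paper's exact method): an inexact,
# perturbed gradient step on the Moreau envelope
#   f_lam(x) = min_y { f(y) + ||y - x||^2 / (2*lam) },
# whose gradient is (x - prox_{lam*f}(x)) / lam.  The proximal point is
# only approximated, matching the abstract's "inexact analogue".
import numpy as np

def approx_prox(subgrad, x, lam, inner_iters=100, inner_step=0.05):
    """Approximately solve min_y f(y) + ||y - x||^2/(2*lam) by subgradient descent."""
    y = x.copy()
    for _ in range(inner_iters):
        g = subgrad(y) + (y - x) / lam
        y = y - inner_step * g
    return y

def perturbed_envelope_descent(subgrad, x0, lam=0.25, lr=0.2,
                               iters=300, noise=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(iters):
        y = approx_prox(subgrad, x, lam)
        grad_env = (x - y) / lam                      # inexact envelope gradient
        x = x - lr * grad_env
        x = x + noise * rng.standard_normal(x.shape)  # perturbation to escape saddles
    return x

# Toy weakly convex test function f(x) = |x1| - x2^2 + x2^4 (assumed example):
# the origin is a critical point but a strict saddle of the envelope, since f
# decreases along the x2 axis; minimizers sit near (0, +-1/sqrt(2)).
def subgrad(x):
    return np.array([np.sign(x[0]), -2.0 * x[1] + 4.0 * x[1] ** 3])

print(perturbed_envelope_descent(subgrad, np.array([0.5, 0.0])))
# Expected: iterates leave the saddle at the origin and settle near (0, +-0.707).
```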

Related research

Escaping Saddle Points for Nonsmooth Weakly Convex Functions via Perturbed Proximal Algorithms (02/04/2021)
We propose perturbed proximal algorithms that can provably escape strict...

Perturbed Proximal Descent to Escape Saddle Points for Non-convex and Non-smooth Objective Functions (01/24/2019)
We consider the problem of finding local minimizers in non-convex and no...

Second-order Guarantees of Distributed Gradient Algorithms (09/23/2018)
We consider distributed smooth nonconvex unconstrained optimization over...

Active strict saddles in nonsmooth optimization (12/16/2019)
We introduce a geometrically transparent strict saddle property for nons...

Hölder Gradient Descent and Adaptive Regularization Methods in Banach Spaces for First-Order Points (04/06/2021)
This paper considers optimization of smooth nonconvex functionals in smo...

Boundary Conditions for Linear Exit Time Gradient Trajectories Around Saddle Points: Analysis and Algorithm (01/07/2021)
Gradient-related first-order methods have become the workhorse of large-...
