Escaping Saddle Points for Nonsmooth Weakly Convex Functions via Perturbed Proximal Algorithms

by   Minhui Huang, et al.

We propose perturbed proximal algorithms that can provably escape strict saddle points of nonsmooth weakly convex functions. The main results build on a novel characterization of an ϵ-approximate local minimum for nonsmooth functions, together with recent developments in perturbed gradient methods for escaping saddle points of smooth problems. Specifically, we show that under standard assumptions, the perturbed proximal point, perturbed proximal gradient, and perturbed proximal linear algorithms find an ϵ-approximate local minimum of a nonsmooth weakly convex function in O(ϵ^{-2} log^4(d)) iterations, where d is the dimension of the problem.
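To make the idea concrete, here is a minimal sketch of a perturbed proximal point iteration on a hand-picked nonsmooth weakly convex test function, f(x) = |x0² − 1| + |x1|, whose origin is a strict saddle (f decreases along x0 but is locally minimal along x1). The test function, the perturbation rule, and all parameter values below are illustrative assumptions, not the paper's exact scheme; the key mechanism is that the proximal displacement ‖x⁺ − x‖/λ approximates the gradient norm of the Moreau envelope, and a small random perturbation is injected whenever it is tiny.

```python
import numpy as np

# Illustrative nonsmooth weakly convex test function (our choice, not from
# the paper): f(x) = |x0^2 - 1| + |x1|.  The origin is a strict saddle.
def f(x):
    return abs(x[0] ** 2 - 1.0) + abs(x[1])

def prox(x, lam):
    """Exact proximal map of f (separable in the two coordinates), lam < 1/2:
    argmin_y f(y) + ||y - x||^2 / (2*lam)."""
    # Coordinate 0: argmin_t |t^2 - 1| + (t - x0)^2 / (2*lam).  The minimizer
    # is a stationary point of one of the quadratic pieces or one of the kinks,
    # so we evaluate the exact objective at all four candidates.
    cands = [x[0] / (1.0 - 2.0 * lam),   # stationary point of the |t| < 1 piece
             x[0] / (1.0 + 2.0 * lam),   # stationary point of the |t| > 1 pieces
             1.0, -1.0]                  # the two kinks
    obj = lambda t: abs(t * t - 1.0) + (t - x[0]) ** 2 / (2.0 * lam)
    y0 = min(cands, key=obj)
    # Coordinate 1: soft-thresholding, the proximal map of |.|.
    y1 = np.sign(x[1]) * max(abs(x[1]) - lam, 0.0)
    return np.array([y0, y1])

def perturbed_prox_point(x0, lam=0.1, radius=1e-2, eps=1e-3,
                         max_iter=100, seed=0):
    """Perturbed proximal point sketch: ||x_next - x|| / lam approximates the
    Moreau-envelope gradient norm; when it is small, inject a small random
    perturbation so the iterates can escape strict saddles."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = prox(x, lam)
        if np.linalg.norm(x_next - x) / lam < eps:
            x_next = prox(x + radius * rng.standard_normal(x.shape), lam)
        x = x_next
    return x

x = perturbed_prox_point(np.array([0.0, 0.5]))
print(x, f(x))
```

Started at (0, 0.5), the unperturbed iteration first shrinks x1 to 0 and then stalls at the saddle (0, 0); the perturbation knocks x0 off zero, after which the proximal steps amplify it geometrically until the iterates settle at the global minimizer (±1, 0).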


