Escaping Saddle Points for Nonsmooth Weakly Convex Functions via Perturbed Proximal Algorithms

02/04/2021
by Minhui Huang, et al.

We propose perturbed proximal algorithms that provably escape strict saddle points of nonsmooth weakly convex functions. The main results build on a novel characterization of ϵ-approximate local minima for nonsmooth functions, together with recent developments in perturbed gradient methods for escaping saddle points of smooth problems. Specifically, we show that under standard assumptions, the perturbed proximal point, perturbed proximal gradient, and perturbed proximal linear algorithms find an ϵ-approximate local minimum of a nonsmooth weakly convex function in O(ϵ^{-2} log^4(d)) iterations, where d is the dimension of the problem.
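Only the abstract is reproduced here, so the following Python sketch illustrates the general pattern behind such perturbed proximal methods rather than the paper's exact algorithms: take proximal point steps, and once the gradient of the Moreau envelope is small (the standard stationarity measure for weakly convex functions), inject a small random perturbation and keep the result only if it makes sufficient progress. The oracle prox_f, all parameter names, and the escape heuristic below are assumptions for illustration.

```python
import numpy as np


def perturbed_proximal_point(f, prox_f, x0, lam=0.5, eps=1e-3,
                             radius=1e-2, escape_steps=50,
                             max_iter=10_000, rng=None):
    """Minimal sketch of a perturbed proximal point loop (illustrative,
    not the paper's exact procedure).

    prox_f(x, lam) is assumed to return
        argmin_y f(y) + (1 / (2 * lam)) * ||y - x||^2,
    which is well defined when lam is below the weak-convexity
    threshold, so the subproblem is strongly convex.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(max_iter):
        x_plus = prox_f(x, lam)
        # (x - prox_f(x)) / lam is the gradient of the Moreau envelope;
        # its norm measures approximate stationarity of the nonsmooth f.
        if np.linalg.norm(x - x_plus) / lam > eps:
            x = x_plus  # ordinary proximal point step
            continue
        # Near-stationary point: perturb uniformly within a small ball,
        # run a few prox steps, and accept only on sufficient decrease
        # (i.e., x was a strict saddle rather than a local minimum).
        u = rng.standard_normal(d)
        u *= radius * rng.random() ** (1.0 / d) / np.linalg.norm(u)
        y = x + u
        for _ in range(escape_steps):
            y = prox_f(y, lam)
        if f(y) < f(x) - 0.5 * eps * radius:
            x = y  # escaped a strict saddle
        else:
            return x  # report x as an ϵ-approximate local minimum
    return x
```

For the proximal gradient and proximal linear variants discussed in the abstract, the prox oracle above would be replaced by the corresponding model-minimization step; the perturb-and-check escape logic is analogous.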

