Active strict saddles in nonsmooth optimization

12/16/2019
by Damek Davis, et al.

We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that, under random initialization, simple proximal algorithms on weakly convex problems converge only to local minimizers. We argue that the strict saddle property may be a realistic assumption in applications, since it provably holds for generic semi-algebraic optimization problems.
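The abstract refers to simple proximal algorithms on weakly convex problems. As a minimal illustrative sketch (not the paper's method), the following Python snippet runs a proximal gradient iteration on a composite objective; the matrix `A`, vector `b`, step size, and iteration count are hypothetical choices for the demo, and this particular instance happens to be convex, chosen only to show the iteration mechanics with a closed-form proximal map.

```python
import numpy as np

def prox_l1(x, lam):
    # Proximal map of lam * ||.||_1 (soft-thresholding), a standard closed form.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_gradient(grad_g, prox_h, x0, step=0.1, iters=500):
    # Iterate x_{k+1} = prox_{step*h}(x_k - step * grad_g(x_k)).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_h(x - step * grad_g(x), step)
    return x

# Hypothetical composite f(x) = 0.5*||A x - b||^2 + ||x||_1.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 0.5])
grad_g = lambda x: A.T @ (A @ x - b)

# Random initialization, as in the convergence guarantee discussed above.
x_star = proximal_gradient(grad_g, prox_l1, x0=rng.standard_normal(2))
```

For this toy problem the iterates settle at the origin, where the subgradient optimality condition of the l1 term is satisfied; the step size 0.1 is below the usual 1/L threshold for the smooth part (here L = 4).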


Related research

- 02/04/2021, Escaping Saddle Points for Nonsmooth Weakly Convex Functions via Perturbed Proximal Algorithms: We propose perturbed proximal algorithms that can provably escape strict...
- 02/27/2013, Modus Ponens Generating Function in the Class of ^-valuations of Plausibility: We discuss the problem of construction of inference procedures which can...
- 09/13/2022, Semi-strict chordality of digraphs: Chordal graphs are important in algorithmic graph theory. Chordal digrap...
- 09/06/2021, Stochastic Subgradient Descent on a Generic Definable Function Converges to a Minimizer: It was previously shown by Davis and Drusvyatskiy that every Clarke crit...
- 06/17/2021, Escaping strict saddle points of the Moreau envelope in nonsmooth optimization: Recent work has shown that stochastically perturbed gradient methods can...
- 08/04/2021, Stochastic Subgradient Descent Escapes Active Strict Saddles: In non-smooth stochastic optimization, we establish the non-convergence ...
- 02/08/2023, A Unified Approach to Unimodality of Gaussian Polynomials: In 2013, Pak and Panova proved the strict unimodality property of q-bino...
