Active strict saddles in nonsmooth optimization

12/16/2019
by Damek Davis, et al.

We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that simple proximal algorithms on weakly convex problems converge only to local minimizers, when randomly initialized. We argue that the strict saddle property may be a realistic assumption in applications, since it provably holds for generic semi-algebraic optimization problems.
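As an illustration of the kind of guarantee described above, here is a minimal sketch, not taken from the paper, of a proximal gradient method on the toy weakly convex function f(x, y) = |x| - y^2. The origin of this function is a nonsmooth strict saddle (on the active manifold {x = 0} the function reduces to -y^2, which has strictly negative curvature), and under random initialization the iterates almost surely move away from it rather than converging to it. The example function and all names in the code are illustrative assumptions, not constructs from the paper.

import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau * |.|; handles the nonsmooth term |x| exactly.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(x0, y0, step=0.1, iters=50):
    # Toy example (an assumption, not from the paper):
    # f(x, y) = |x| - y^2 is 2-weakly convex and its origin is a
    # nonsmooth strict saddle: on the active manifold {x = 0} it is -y^2.
    x, y = x0, y0
    for _ in range(iters):
        y = y - step * (-2.0 * y)    # gradient step on the smooth part -y^2
        x = soft_threshold(x, step)  # prox step on the nonsmooth part |x|
    return x, y

# Random initialization: with probability one y0 != 0, so the iterates
# leave the saddle at the origin (x -> 0, |y| grows) instead of converging to it.
rng = np.random.default_rng(0)
x0, y0 = rng.standard_normal(2)
print(proximal_gradient(x0, y0))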


Related research

02/04/2021

Escaping Saddle Points for Nonsmooth Weakly Convex Functions via Perturbed Proximal Algorithms

We propose perturbed proximal algorithms that can provably escape strict...
02/27/2013

Modus Ponens Generating Function in the Class of ∧-valuations of Plausibility

We discuss the problem of construction of inference procedures which can...
08/04/2021

Stochastic Subgradient Descent Escapes Active Strict Saddles

In non-smooth stochastic optimization, we establish the non-convergence ...
09/06/2021

Stochastic Subgradient Descent on a Generic Definable Function Converges to a Minimizer

It was previously shown by Davis and Drusvyatskiy that every Clarke crit...
06/17/2021

Escaping strict saddle points of the Moreau envelope in nonsmooth optimization

Recent work has shown that stochastically perturbed gradient methods can...
05/31/2020

Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and Sparsity

In recent years it was proved that simple modifications of the classical...
01/31/2022

Inverse design of photonic devices with strict foundry fabrication constraints

We introduce a new method for inverse design of nanophotonic devices whi...