Perturbed Proximal Descent to Escape Saddle Points for Non-convex and Non-smooth Objective Functions

01/24/2019
by Zhishen Huang, et al.

We consider the problem of finding local minimizers in non-convex and non-smooth optimization. Under the strict saddle point assumption, positive results have been derived for first-order methods. We present the first known results for the non-smooth case, which requires both a different analysis and a different algorithm.
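For intuition, here is a minimal sketch of a perturbed proximal gradient step, not the paper's exact algorithm: it assumes a composite objective f(x) + λ‖x‖₁ with the soft-thresholding proximal operator, and the perturbation radius, noise distribution, and stopping rule are illustrative choices. The idea is that when the proximal-gradient step is nearly stationary, a small random perturbation is injected so the iterate can move away from a strict saddle point.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def perturbed_proximal_descent(grad_f, x0, lam, step=1e-2, radius=1e-1,
                               tol=1e-3, max_iter=10_000, max_perturbs=10,
                               seed=0):
    """Illustrative perturbed proximal gradient loop for min_x f(x) + lam*||x||_1.

    Sketch only: the stationarity test and perturbation schedule are
    hypothetical simplifications, not the algorithm analyzed in the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    perturbs = 0
    for _ in range(max_iter):
        # Standard proximal gradient step on the smooth part, then the prox.
        x_new = soft_threshold(x - step * grad_f(x), step * lam)
        if np.linalg.norm(x_new - x) / step < tol:
            # Nearly stationary: possibly a saddle. Perturb and continue,
            # or stop once the perturbation budget is exhausted.
            if perturbs >= max_perturbs:
                break
            x = x + radius * rng.standard_normal(x.shape)
            perturbs += 1
        else:
            x = x_new
    return x

# Toy example: f(x) = 0.5*(x1^2 - x2^2) + 0.25*x2^4 has a strict saddle at the
# origin and minimizers near (0, +/-1); plain proximal descent started at 0 stalls.
grad_f = lambda x: np.array([x[0], -x[1] + x[1] ** 3])
x_star = perturbed_proximal_descent(grad_f, np.zeros(2), lam=0.05)
```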


research · 04/25/2018
Convergence guarantees for a class of non-convex and non-smooth optimization problems
We consider the problem of finding critical points of functions that are...

research · 02/11/2015
Proximal Algorithms in Statistics and Machine Learning
In this paper we develop proximal methods for statistical learning. Prox...

research · 06/26/2020
Understanding Notions of Stationarity in Non-Smooth Optimization
Many contemporary applications in signal processing and machine learning...

research · 03/23/2022
Terms of Lucas sequences having a large smooth divisor
We show that the K_n-smooth part of a^n - 1 for an integer a > 1 is a^o(n) fo...

research · 06/17/2021
Escaping strict saddle points of the Moreau envelope in nonsmooth optimization
Recent work has shown that stochastically perturbed gradient methods can...

research · 11/23/2019
A Stochastic Tensor Method for Non-convex Optimization
We present a stochastic optimization method that uses a fourth-order reg...

research · 01/17/2023
Noisy, Non-Smooth, Non-Convex Estimation of Moment Condition Models
A practical challenge for structural estimation is the requirement to ac...
