Stochastic model-based minimization of weakly convex functions

by Damek Davis et al.

We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate O(k^-1/4). Our result yields new complexity guarantees for the stochastic proximal point algorithm on weakly convex problems and for the stochastic prox-linear algorithm for minimizing compositions of convex functions with smooth maps. Moreover, our result recovers the recently obtained complexity estimate for the stochastic proximal subgradient method on weakly convex problems.
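As a concrete illustration of the stochastic proximal point instance of this scheme, the sketch below applies it to a toy least-absolute-deviations problem, minimize E|a^T x - b|, where each sampled model is the full sampled function and the proximal subproblem has a closed-form solution. The problem instance, the proximal parameter schedule beta/sqrt(k+1), and all variable names are illustrative assumptions, not the paper's general framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy consistent LAD problem: minimize E |a^T x - b| (illustrative assumption;
# the paper's framework covers general stochastic models of weakly convex objectives).
d, n = 5, 2000
x_true = rng.normal(size=d)
A = rng.normal(size=(n, d))
b = A @ x_true

def prox_step(x, a, b_i, beta):
    """One stochastic proximal point step:
    argmin_y |a^T y - b_i| + ||y - x||^2 / (2*beta).
    The minimizer moves x along a by a clipped multiple of the residual."""
    r = a @ x - b_i
    t = np.sign(r) * min(abs(r) / (a @ a), beta)
    return x - t * a

x = np.zeros(d)
beta = 0.5  # base proximal parameter (assumed schedule, for illustration only)
for k in range(20000):
    i = rng.integers(n)
    x = prox_step(x, A[i], b[i], beta / np.sqrt(k + 1))

print(np.linalg.norm(x - x_true))  # distance to the planted solution
```

When the clipped step is inactive, each update is an exact projection onto the sampled hyperplane, so on this noiseless instance the iterates converge to the planted solution; the paper's O(k^-1/4) guarantee concerns the far more general weakly convex setting.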


Stochastic subgradient method converges at the rate O(k^-1/4) on weakly convex functions

We prove that the projected stochastic subgradient method, applied to a ...

An extension of the proximal point algorithm beyond convexity

We introduce and investigate a new generalized convexity notion for func...

Stochastic model-based minimization under high-order growth

Given a nonsmooth, nonconvex minimization problem, we consider algorithm...

Stochastic subgradient method converges on tame functions

This work considers the question: what convergence guarantees does the s...

Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization

We develop a family of accelerated stochastic algorithms that minimize s...

Scalable sparse covariance estimation via self-concordance

We consider the class of convex minimization problems, composed of a sel...

Escaping Saddle Points for Nonsmooth Weakly Convex Functions via Perturbed Proximal Algorithms

We propose perturbed proximal algorithms that can provably escape strict...