
Stochastic subgradient method converges at the rate O(k^{-1/4}) on weakly convex functions
We prove that the projected stochastic subgradient method, applied to a ...

Stochastic model-based minimization under high-order growth
Given a nonsmooth, nonconvex minimization problem, we consider algorithm...

Stochastic subgradient method converges on tame functions
This work considers the question: what convergence guarantees does the s...

A Stochastic Proximal Point Algorithm for Saddle-Point Problems
We consider saddle point problems whose objective functions are the aver...

Scalable sparse covariance estimation via self-concordance
We consider the class of convex minimization problems, composed of a sel...

The importance of better models in stochastic optimization
Standard stochastic optimization methods are brittle, sensitive to steps...

Completing Simple Valuations in K-categories
We prove that Keimel and Lawson's K-completion Kc of the simple valuatio...
Stochastic model-based minimization of weakly convex functions
We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate O(k^{-1/4}). Our result yields new complexity guarantees for the stochastic proximal point algorithm on weakly convex problems and for the stochastic prox-linear algorithm for minimizing compositions of convex functions with smooth maps. Moreover, our result also recovers the recently obtained complexity estimate for the stochastic proximal subgradient method on weakly convex problems.
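To make the setting concrete, here is a minimal sketch (not the paper's implementation) of the stochastic subgradient special case on a weakly convex phase-retrieval objective f(x) = (1/m) Σ_i |⟨a_i, x⟩² − b_i|, a composition of a convex function with a smooth map. The problem data, warm start, step-size schedule, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 500
x_star = rng.standard_normal(d)
A = rng.standard_normal((m, d))
b = (A @ x_star) ** 2                       # noiseless measurements

def objective(x):
    # f(x) = (1/m) sum_i |<a_i, x>^2 - b_i|  (weakly convex, nonsmooth)
    return np.mean(np.abs((A @ x) ** 2 - b))

def stoch_subgrad(x, i):
    # Subgradient of the sampled term |<a_i, x>^2 - b_i| at x.
    r = A[i] @ x
    return np.sign(r ** 2 - b[i]) * 2.0 * r * A[i]

x = x_star + 0.3 * rng.standard_normal(d)   # warm start near a solution
f0 = objective(x)
for k in range(5000):
    i = rng.integers(m)                     # sample one measurement
    x -= 0.01 / np.sqrt(k + 1) * stoch_subgrad(x, i)

print(f0, objective(x))                     # objective should decrease
```

The decaying step size 0.01/√(k+1) is a common heuristic choice for subgradient schemes; the paper's guarantees are stated in terms of the gradient of the Moreau envelope rather than the raw objective value tracked here.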