
A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction

by Andre Milzarek et al.

We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems. The proposed stochastic proximal point algorithm incorporates a variance reduction mechanism, and the resulting SPP subproblems are solved using an inexact semismooth Newton framework. We establish detailed convergence results that account for the inexactness of the SPP steps and that match existing convergence guarantees of (proximal) stochastic variance-reduced gradient methods. Numerical experiments show that the proposed algorithm competes favorably with other state-of-the-art methods and is more robust with respect to the step size selection.



