
A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction

04/01/2022
by Andre Milzarek, et al.

We develop an implementable stochastic proximal point (SPP) method for a class of weakly convex, composite optimization problems. The proposed algorithm incorporates a variance reduction mechanism, and the resulting SPP updates are solved using an inexact semismooth Newton framework. We establish detailed convergence results that take the inexactness of the SPP steps into account and that are in accordance with existing convergence guarantees of (proximal) stochastic variance-reduced gradient methods. Numerical experiments show that the proposed algorithm competes favorably with other state-of-the-art methods and is more robust with respect to the choice of step size.
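To illustrate the shape of such an update, the following minimal Python sketch shows one variance-reduced SPP step: an SVRG-style correction term, computed at a snapshot point, is folded into the sampled proximal subproblem, which is then solved inexactly by a Newton loop. All names (`vr_spp_step`, `grad_i`, `hess_i`, `alpha`) are illustrative, and since the sample loss here is assumed smooth, plain Newton stands in for the semismooth Newton framework the paper uses for the nonsmooth composite setting; this is a sketch, not the authors' implementation.

import numpy as np

def vr_spp_step(x, x_snap, full_grad_snap, grad_i, hess_i, alpha,
                tol=1e-8, max_newton=20):
    """One inexact, variance-reduced SPP update (hypothetical helper).

    Solves, via a plain Newton loop, the shifted proximal subproblem
        min_z  f_i(z) + <g_shift, z> + (1/(2*alpha)) * ||z - x||^2,
    where g_shift = grad f(x_snap) - grad f_i(x_snap) is the SVRG-style
    correction evaluated at the snapshot point x_snap.
    """
    g_shift = full_grad_snap - grad_i(x_snap)   # SVRG correction at snapshot
    z = x.copy()
    for _ in range(max_newton):
        # Gradient of the shifted proximal subproblem at z.
        g = grad_i(z) + g_shift + (z - x) / alpha
        if np.linalg.norm(g) <= tol:            # inexactness criterion
            break
        # Hessian of the subproblem; the 1/alpha term keeps it positive definite.
        H = hess_i(z) + np.eye(z.size) / alpha
        z = z - np.linalg.solve(H, g)           # full Newton step (no damping)
    return z

In an SVRG-style outer loop one would recompute full_grad_snap at each snapshot, sample the index i uniformly per step, and call vr_spp_step to produce the next iterate; the tolerance tol plays the role of the inexactness criterion in the convergence analysis.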


Related research

09/13/2019 · A Stochastic Proximal Point Algorithm for Saddle-Point Problems
We consider saddle point problems whose objective functions are the aver...

06/02/2016 · Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite Optimization
Here we study non-convex composite optimization: first, a finite-sum of ...

03/04/2022 · Sharper Bounds for Proximal Gradient Algorithms with Errors
We analyse the convergence of the proximal gradient algorithm for convex...

10/21/2022 · The Stochastic Proximal Distance Algorithm
Stochastic versions of proximal methods have gained much attention in st...

06/27/2014 · Proximal Quasi-Newton for Computationally Intensive L1-regularized M-estimators
We consider the class of optimization problems arising from computationa...

10/25/2018 · SpiderBoost: A Class of Faster Variance-reduced Algorithms for Nonconvex Optimization
There has been extensive research on developing stochastic variance redu...

08/28/2017 · An inexact subsampled proximal Newton-type method for large-scale machine learning
We propose a fast proximal Newton-type algorithm for minimizing regulari...