A Proximal Stochastic Quasi-Newton Algorithm

01/31/2016
by Luo Luo, et al.

In this paper, we discuss the problem of minimizing the sum of two convex functions: a smooth function plus a non-smooth function. Further, the smooth part can be expressed as the average of a large number of smooth component functions, and the non-smooth part is equipped with a simple proximal mapping. We propose a proximal stochastic second-order method that is efficient and scalable. It incorporates second-order (Hessian) information of the smooth part of the objective and exploits a multi-stage scheme to reduce the variance of the stochastic gradient. We prove that our method achieves a linear rate of convergence.
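To make the structure of such a method concrete, below is a minimal sketch of a proximal stochastic step with SVRG-style (multi-stage) variance reduction and a curvature-scaled proximal update, illustrated on l1-regularized least squares. The diagonal Hessian surrogate and all parameter names (eta, m, lam) are illustrative assumptions, not the paper's exact construction, which uses a full quasi-Newton approximation.

```python
import numpy as np

def soft_threshold(z, thresh):
    """Proximal mapping of thresh * ||x||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def prox_svrg_qn(A, b, lam, eta=0.1, n_stages=30, m=None, seed=0):
    """Minimize (1/2n) * ||Ax - b||^2 + lam * ||x||_1.

    Outer loop: compute a full gradient at a snapshot point (the multi-stage
    variance-reduction scheme). Inner loop: variance-reduced stochastic
    gradients scaled by a diagonal curvature estimate, followed by a
    correspondingly scaled proximal step. The diagonal scaling is a cheap
    stand-in for the quasi-Newton matrix of the paper.
    """
    n, d = A.shape
    m = m or 2 * n                      # inner iterations per stage
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    # Diagonal Hessian surrogate of the smooth part: diag((1/n) A^T A)
    H = np.maximum(np.einsum('ij,ij->j', A, A) / n, 1e-8)

    for _ in range(n_stages):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n     # full gradient at snapshot
        for _ in range(m):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient of the smooth part
            g_i = a_i * (a_i @ x - b[i])
            g_snap = a_i * (a_i @ x_snap - b[i])
            v = g_i - g_snap + full_grad
            # Scaled gradient step, then scaled proximal step (closed form
            # for l1 because the curvature estimate is diagonal)
            z = x - eta * v / H
            x = soft_threshold(z, eta * lam / H)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(500)
    x_hat = prox_svrg_qn(A, b, lam=0.01)
    print("distance to ground truth:", np.linalg.norm(x_hat - x_true))
```

The key design point the sketch illustrates is that the proximal step must be taken in the metric induced by the curvature estimate; with a diagonal (or suitably structured) Hessian approximation, the scaled proximal mapping of a separable regularizer remains available in closed form, which keeps each inner iteration cheap.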


