A Linearly-Convergent Stochastic L-BFGS Algorithm

08/09/2015
by Philipp Moritz, et al.

We propose a new stochastic L-BFGS algorithm and prove a linear convergence rate for strongly convex and smooth functions. Our algorithm draws heavily from a recent stochastic variant of L-BFGS proposed in Byrd et al. (2014), as well as a recent approach to variance reduction for stochastic gradient descent from Johnson and Zhang (2013). We demonstrate experimentally that our algorithm performs well on large-scale convex and non-convex optimization problems, exhibiting linear convergence and rapidly solving the optimization problems to high levels of precision. Furthermore, we show that our algorithm performs well across a wide range of step sizes, often differing by several orders of magnitude.
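
As a rough illustration of the two ingredients named in the abstract, the sketch below combines SVRG-style variance reduction (Johnson and Zhang, 2013) with an L-BFGS two-loop recursion whose curvature pairs come from subsampled Hessian-vector products, in the spirit of Byrd et al. (2014). This is a minimal sketch on a ridge-regression objective, not the authors' exact method: the paper forms curvature pairs from averaged iterates, whereas this code uses raw iterates, and all function names and parameter values (eta, batch, mem, L) are illustrative choices.

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """L-BFGS two-loop recursion: apply the implicit inverse-Hessian
    approximation built from curvature pairs (s, y) to a gradient vector."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:  # scale by gamma = s^T y / y^T y (standard initialization)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def stochastic_lbfgs_svrg(A, b, lam=1e-3, eta=0.05, epochs=20,
                          batch=10, mem=10, L=10, rng=None):
    """Sketch of variance-reduced stochastic L-BFGS for ridge regression:
    f(w) = (1/2n)||Aw - b||^2 + (lam/2)||w||^2."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    w = np.zeros(d)
    s_list, y_list = [], []
    grad_f = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx) + lam * x
    for _ in range(epochs):
        w_ref = w.copy()
        mu = grad_f(w_ref, np.arange(n))  # full gradient at the snapshot
        x = w.copy()
        x_prev, t_since = x.copy(), 0
        for _ in range(n // batch):
            idx = rng.integers(0, n, batch)
            # SVRG variance-reduced gradient estimate
            v = grad_f(x, idx) - grad_f(w_ref, idx) + mu
            direction = two_loop(v, s_list, y_list) if s_list else v
            x -= eta * direction
            t_since += 1
            if t_since == L:  # refresh a curvature pair every L steps
                s = x - x_prev  # (the paper uses averaged iterates here)
                idx_h = rng.integers(0, n, batch)
                # subsampled Hessian-vector product gives y (Byrd et al. style)
                y = A[idx_h].T @ (A[idx_h] @ s) / batch + lam * s
                if s @ y > 1e-10:
                    s_list.append(s); y_list.append(y)
                    if len(s_list) > mem:
                        s_list.pop(0); y_list.pop(0)
                x_prev, t_since = x.copy(), 0
        w = x
    return w

# Example usage on synthetic data (illustrative only)
rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 50))
b_vec = A @ rng.normal(size=50)
w_hat = stochastic_lbfgs_svrg(A, b_vec)
```

A quadratic objective is used so the subsampled Hessian-vector product is exact and cheap; for general losses one would substitute the corresponding Hessian-vector product on the subsample, as done in Byrd et al. (2014).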


Related research

10/27/2017
Stochastic Conjugate Gradient Algorithm with Variance Reduction
Conjugate gradient methods are a class of important methods for solving ...

06/08/2020
The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization
The extrapolation strategy raised by Nesterov, which can accelerate the ...

10/28/2021
Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize
We investigate the convergence of stochastic mirror descent (SMD) in rel...

03/14/2022
A Linearly Convergent Douglas-Rachford Splitting Solver for Markovian Information-Theoretic Optimization Problems
In this work, we propose solving the Information Bottleneck (IB) and Pri...

04/28/2020
Distributed Projected Subgradient Method for Weakly Convex Optimization
The stochastic subgradient method is a widely-used algorithm for solving...

11/12/2021
Solving A System Of Linear Equations By Randomized Orthogonal Projections
Randomization has shown catalyzing effects in linear algebra with promis...

10/26/2020
Stochastic Optimization with Laggard Data Pipelines
State-of-the-art optimization is steadily shifting towards massively par...
