Stochastic Conjugate Gradient Algorithm with Variance Reduction

10/27/2017
by Xiao-Bo Jin, et al.

Conjugate gradient methods are an important class of methods for solving linear systems of equations and nonlinear optimization problems. In our work, we propose a new stochastic conjugate gradient algorithm with variance reduction (CGVR) and prove its linear convergence with the Fletcher-Reeves method for strongly convex and smooth functions. We experimentally demonstrate that the CGVR algorithm converges faster than its counterparts on six large-scale optimization problems that may be convex, non-convex, or non-smooth, and that its AUC (area under the curve) performance with the L2-regularized L2-loss is comparable to that of LIBLINEAR while offering a significant improvement in computational efficiency.
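The core idea can be sketched as follows: an SVRG-style snapshot gradient reduces the variance of each mini-batch gradient, and a Fletcher-Reeves coefficient turns the resulting gradient into a conjugate search direction. The sketch below is a minimal illustration under assumed choices (a least-squares objective, fixed step size, mini-batch size, and a clipped Fletcher-Reeves coefficient for stability); it is not the authors' implementation, which also specifies the step-size rule and convergence analysis described in the paper.

```python
import numpy as np


def cgvr_sketch(A, b, w0, epochs=10, inner_steps=50, batch=8, lr=0.05, seed=None):
    """Variance-reduced stochastic conjugate gradient sketch for
    f(w) = (1/2n) * ||A w - b||^2 (an assumed least-squares example)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w = w0.astype(float).copy()
    for _ in range(epochs):
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n           # SVRG snapshot gradient
        g_prev = full_grad
        d = -g_prev                                      # restart with steepest descent
        for _ in range(inner_steps):
            idx = rng.integers(0, n, size=batch)
            Ai, bi = A[idx], b[idx]
            # variance-reduced mini-batch gradient (SVRG-style correction)
            g = (Ai.T @ (Ai @ w - bi) - Ai.T @ (Ai @ w_snap - bi)) / batch + full_grad
            # Fletcher-Reeves coefficient; clipped here purely to keep this
            # fixed-step sketch stable (the paper chooses step sizes differently)
            beta = min((g @ g) / (g_prev @ g_prev + 1e-12), 0.9)
            d = -g + beta * d                            # conjugate direction update
            w = w + lr * d
            g_prev = g
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(500, 20))
    w_true = rng.normal(size=20)
    b = A @ w_true + 0.01 * rng.normal(size=500)
    w_hat = cgvr_sketch(A, b, np.zeros(20), seed=1)
    print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```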

Related research

08/09/2015 · A Linearly-Convergent Stochastic L-BFGS Algorithm
We propose a new stochastic L-BFGS algorithm and prove a linear converge...

09/13/2018 · Hamiltonian Descent Methods
We propose a family of optimization methods that achieve linear converge...

02/13/2018 · Fast Global Convergence via Landscape of Empirical Loss
While optimizing convex objective (loss) functions has been a powerhouse...

05/22/2023 · SignSVRG: fixing SignSGD via variance reduction
We consider the problem of unconstrained minimization of finite sums of ...

05/02/2018 · SVRG meets SAGA: k-SVRG --- A Tale of Limited Memory
In recent years, many variance reduced algorithms for empirical risk min...

02/26/2018 · Guaranteed Sufficient Decrease for Stochastic Variance Reduced Gradient Optimization
In this paper, we propose a novel sufficient decrease technique for stoc...

10/18/2016 · Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server
This paper presents an asynchronous incremental aggregated gradient algo...
