Variance-Reduced Stochastic Quasi-Newton Methods for Decentralized Learning: Part I

01/19/2022
by   Jiaojiao Zhang, et al.

In this work, we investigate stochastic quasi-Newton methods for minimizing a finite sum of cost functions over a decentralized network. In Part I, we develop a general algorithmic framework that combines stochastic quasi-Newton approximations with variance reduction to achieve fast convergence. At each iteration, each node constructs a local, inexact quasi-Newton direction that asymptotically approaches the global, exact one. Specifically, (i) a local gradient approximation is constructed by using dynamic average consensus to track the network-wide average of variance-reduced local stochastic gradients; (ii) a local Hessian inverse approximation is assumed to be positive definite with bounded eigenvalues, and constructions that satisfy these assumptions are given in Part II. Compared with existing decentralized stochastic first-order methods, the proposed framework introduces second-order curvature information without incurring extra sampling or communication. With a fixed step size, we establish conditions under which the framework converges linearly to an exact optimal solution.
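The framework sketched in the abstract can be illustrated with a minimal NumPy simulation. This is a hedged sketch under assumptions not in the abstract: illustrative quadratic local costs, an SVRG-style variance-reduced gradient, a ring network with a Metropolis-style mixing matrix `W`, and identity matrices standing in for the local Hessian inverse approximations `H_i` (whose actual construction is deferred to Part II). The update combines a consensus step, a quasi-Newton descent step scaled by `H_i`, and dynamic average consensus on the variance-reduced gradients.

```python
import numpy as np

# Hypothetical setup (not from the paper): each node i holds a quadratic
# local cost f_i(x) = (1/2m) * ||A_i x - b_i||^2 over m samples.
rng = np.random.default_rng(0)
n, d, m = 4, 3, 10                      # nodes, dimension, samples per node
A = rng.standard_normal((n, m, d))
b = rng.standard_normal((n, m))

# Doubly stochastic mixing matrix for a 4-node ring (Metropolis weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def local_grad(i, x, idx):
    """Stochastic gradient of f_i at x over the sample indices idx."""
    Ai, bi = A[i, idx], b[i, idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

alpha = 0.05                            # fixed step size
x = np.zeros((n, d))                    # local iterates, one row per node
snap = x.copy()                         # SVRG snapshot points
full_g = np.array([local_grad(i, snap[i], np.arange(m)) for i in range(n)])
g_prev = full_g.copy()                  # variance-reduced gradients at time t
y = g_prev.copy()                       # dynamic-average-consensus trackers
H = np.array([np.eye(d) for _ in range(n)])  # placeholder for Part II's H_i

for t in range(300):
    # Consensus step plus local quasi-Newton direction -H_i y_i.
    x = W @ x - alpha * np.einsum('nij,nj->ni', H, y)
    if t % m == 0:                      # periodically refresh the snapshot
        snap = x.copy()
        full_g = np.array([local_grad(i, snap[i], np.arange(m))
                           for i in range(n)])
    idx = rng.integers(m, size=1)       # one sample per node per iteration
    # SVRG-style variance-reduced stochastic gradient at each node.
    g_new = np.array([local_grad(i, x[i], idx)
                      - local_grad(i, snap[i], idx) + full_g[i]
                      for i in range(n)])
    # Gradient tracking: y_i follows the network average of the g_i.
    y = W @ y + g_new - g_prev
    g_prev = g_new
```

With variance reduction, the stochastic gradient noise vanishes at the optimum, which is what allows a fixed step size to drive all local iterates to the exact global minimizer rather than a noise-dominated neighborhood.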

Related research

- Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods (02/23/2023). We study the widely known Cubic-Newton method in the stochastic setting ...
- Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches (06/06/2022). Stochastic variance reduction has proven effective at accelerating first...
- A Stochastic Variance Reduced Nesterov's Accelerated Quasi-Newton Method (10/17/2019). Recently algorithms incorporating second order curvature information hav...
- Deep Neural Network Learning with Second-Order Optimizers – a Practical Study with a Stochastic Quasi-Gauss-Newton Method (04/06/2020). Training in supervised deep learning is computationally demanding, and t...
- Stochastic quasi-Newton with line-search regularization (09/03/2019). In this paper we present a novel quasi-Newton algorithm for use in stoch...
- Distributed Adaptive Newton Methods with Globally Superlinear Convergence (02/18/2020). This paper considers the distributed optimization problem over a network...
- Practical Inexact Proximal Quasi-Newton Method with Global Complexity Analysis (11/26/2013). Recently several methods were proposed for sparse optimization which mak...
