
L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method

08/20/2021
by   Bugra Can, et al.
University of Toronto
Rutgers University

This work proposes L-DQN, a distributed algorithm for solving empirical risk minimization problems under the master/worker communication model. L-DQN is a distributed limited-memory quasi-Newton method that supports asynchronous computations among the worker nodes. Our method is efficient in both storage and communication costs: in every iteration the master node and workers exchange vectors of size O(d), where d is the dimension of the decision variable, and the memory required on each node is O(md), where m is an adjustable parameter. To our knowledge, this is the first distributed quasi-Newton method with provable global linear convergence guarantees in the asynchronous setting, where delays between nodes are present. Numerical experiments are provided to illustrate the theory and the practical performance of our method.
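The abstract gives only complexity-level detail, so the following is a minimal sketch of the generic limited-memory quasi-Newton (L-BFGS-style) two-loop recursion that methods in this family build on, illustrating why per-node storage stays at O(md): only the last m curvature pairs of d-dimensional vectors are kept. It is not the authors' L-DQN update rule; the master/worker communication and asynchrony handling described in the paper are omitted, and the class name and history length are illustrative assumptions.

```python
# Illustrative sketch only: a standard limited-memory two-loop recursion.
# NOT the authors' L-DQN algorithm; distributed/asynchronous logic omitted.
import numpy as np
from collections import deque

class LimitedMemoryDirection:
    def __init__(self, m=10):
        self.m = m                      # history length (the adjustable parameter m)
        self.s_hist = deque(maxlen=m)   # iterate differences  s_k = x_{k+1} - x_k
        self.y_hist = deque(maxlen=m)   # gradient differences y_k = g_{k+1} - g_k

    def update(self, s, y):
        # Keep only the last m curvature pairs -> O(m*d) memory per node.
        if s @ y > 1e-10:               # curvature condition for a valid update
            self.s_hist.append(s)
            self.y_hist.append(y)

    def direction(self, grad):
        # Two-loop recursion: applies the implicit inverse-Hessian estimate
        # to the gradient using only the stored (s, y) pairs; O(m*d) per call.
        q = grad.copy()
        alphas = []
        for s, y in zip(reversed(self.s_hist), reversed(self.y_hist)):
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((a, rho, s, y))
        if self.s_hist:
            s, y = self.s_hist[-1], self.y_hist[-1]
            q *= (s @ y) / (y @ y)      # initial Hessian scaling
        for a, rho, s, y in reversed(alphas):
            b = rho * (y @ q)
            q += (a - b) * s
        return -q                       # quasi-Newton descent direction
```

In a master/worker setting of the kind the paper considers, each exchanged quantity (iterates, gradients, curvature pairs) is a vector of size O(d), which is consistent with the communication cost stated in the abstract.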


03/04/2018

A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization

In this paper, we propose a communication- and computation- efficient di...
05/30/2019

Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1

In this paper, we present a scalable distributed implementation of the s...
03/01/2020

Asynchronous Policy Evaluation in Distributed Reinforcement Learning over Networks

This paper proposes a fully asynchronous scheme for policy evaluation of...
12/10/2018

Asynchronous Distributed Learning with Sparse Communications and Identification

In this paper, we present an asynchronous optimization algorithm for dis...
09/11/2017

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization

For distributed computing environments, we consider the canonical machin...
05/16/2021

LocalNewton: Reducing Communication Bottleneck for Distributed Learning

To address the communication bottleneck problem in distributed optimizat...
11/16/2022

Asynchronous Bayesian Learning over a Network

We present a practical asynchronous data fusion model for networked agen...