LocalNewton: Reducing Communication Bottleneck for Distributed Learning

05/16/2021
by   Vipul Gupta, et al.

To address the communication bottleneck in distributed optimization under a master-worker framework, we propose LocalNewton, a distributed second-order algorithm with local averaging. In LocalNewton, the worker machines update their model in every iteration by finding a suitable second-order descent direction using only the data and model stored in their own local memory. We let the workers run multiple such iterations locally and communicate their models to the master node only once every few (say L) iterations. LocalNewton is highly practical since it requires only one hyperparameter: the number L of local iterations. We use novel matrix concentration-based techniques to obtain theoretical guarantees for LocalNewton, and we validate them with detailed empirical evaluation. To enhance practicality, we devise an adaptive scheme to choose L, and we show that this reduces the number of local iterations in worker machines between two model synchronizations as the training proceeds, successively refining the model quality at the master. Via extensive experiments using several real-world datasets with AWS Lambda workers and an AWS EC2 master, we show that LocalNewton requires fewer than 60% of the communication rounds (between master and workers) and less than 40% of the end-to-end running time of state-of-the-art algorithms to reach the same training loss.
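The core loop described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's exact method: it uses regularized logistic regression, each worker takes L exact Newton steps on its local data, and the master then averages the worker models. The paper's line search, subsampled Hessians, and adaptive choice of L are omitted, and all function names here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Clip for numerical stability at large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def local_newton(data_splits, L=4, rounds=10, lam=1e-3):
    """Simplified LocalNewton-style loop for l2-regularized logistic regression.

    data_splits: list of (A_k, y_k) per worker, with labels y_k in {0, 1}.
    Workers run L local Newton iterations between synchronizations;
    the master averages the worker models once per round.
    """
    d = data_splits[0][0].shape[1]
    w_master = np.zeros(d)
    for _ in range(rounds):
        local_models = []
        for A, y in data_splits:            # each worker, using only local data
            w = w_master.copy()
            n = len(y)
            for _ in range(L):              # L local second-order iterations
                p = sigmoid(A @ w)
                g = A.T @ (p - y) / n + lam * w          # local gradient
                D = p * (1.0 - p)
                H = (A * D[:, None]).T @ A / n + lam * np.eye(d)  # local Hessian
                w -= np.linalg.solve(H, g)               # local Newton step
            local_models.append(w)
        # One communication round: master averages the worker models.
        w_master = np.mean(local_models, axis=0)
    return w_master
```

Averaging every L steps instead of every step is what cuts the number of communication rounds; the trade-off is that each worker's descent direction is computed from local data only, which is where the paper's concentration-based analysis comes in.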


Related research

06/15/2020: Distributed Newton Can Communicate Less and Resist Byzantine Workers
We develop a distributed second order optimization algorithm that is com...

06/29/2018: Fundamental Limits of Distributed Data Shuffling
Data shuffling of training data among different computing nodes (workers...

05/24/2018: Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding
We consider the problem of training a least-squares regression model on ...

08/20/2021: L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method
This work proposes a distributed algorithm for solving empirical risk mi...

12/10/2018: Asynchronous Distributed Learning with Sparse Communications and Identification
In this paper, we present an asynchronous optimization algorithm for dis...

05/25/2016: Efficient Distributed Learning with Sparsity
We propose a novel, efficient approach for distributed sparse learning i...

06/08/2015: DUAL-LOCO: Distributing Statistical Estimation Using Random Projections
We present DUAL-LOCO, a communication-efficient algorithm for distribute...
