Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning

11/15/2020
by Kushal Chakrabarti, et al.

This paper considers the multi-agent distributed linear least-squares problem. The system comprises multiple agents, each with a locally observed set of data points, and a common server with which the agents can interact. The agents' goal is to compute a linear model that best fits the collective data points observed by all the agents. In the server-based distributed setting, the server cannot access the data points held by the agents. The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem. In the IPG algorithm, the server and the agents perform numerous iterative computations, each of which relies on the entire batch of data points observed by the agents to update the current estimate of the solution. Here, we extend the idea of iterative pre-conditioning to the stochastic setting, where the server updates the estimate and the iterative pre-conditioning matrix based on a single randomly selected data point at each iteration. We show that our proposed Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method converges linearly in expectation to a neighborhood of the solution. Importantly, we empirically show that the IPSG method's convergence rate compares favorably to those of prominent stochastic algorithms for solving the linear least-squares problem in server-based networks.
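The abstract does not spell out the IPSG update rules, but the idea it describes can be illustrated with a short sketch: at each iteration, one randomly sampled data point is used both to refine the pre-conditioning matrix and to take a pre-conditioned stochastic gradient step. The Python sketch below is a minimal single-machine illustration under assumed choices; the step sizes alpha and delta, the regularizer beta, and the synthetic data are illustrative assumptions, not values or details from the paper.

```python
# Minimal sketch of iteratively pre-conditioned stochastic gradient descent
# (IPSG) for the linear least-squares problem min_x ||A x - b||^2, following
# the abstract's description. alpha, delta, and beta are assumed tuning
# parameters, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: the rows of (A, b) stand in for the data points that
# would be spread across the agents in the server-based setting.
n_points, dim = 200, 5
A = rng.standard_normal((n_points, dim))
x_true = rng.standard_normal(dim)
b = A @ x_true + 0.01 * rng.standard_normal(n_points)

x = np.zeros(dim)                      # current estimate of the solution
K = np.eye(dim)                        # iterative pre-conditioning matrix
alpha, delta, beta = 0.01, 0.05, 0.1   # assumed step sizes and regularizer

for t in range(5000):
    i = rng.integers(n_points)         # single randomly selected data point
    a_i, b_i = A[i], b[i]

    # Stochastic pre-conditioner update: a single-sample surrogate that
    # drives K toward the inverse of the (regularized) Gram matrix.
    H_i = np.outer(a_i, a_i) + beta * np.eye(dim)
    K = K - alpha * (H_i @ K - np.eye(dim))

    # Pre-conditioned stochastic gradient step on the sampled data point.
    g_i = a_i * (a_i @ x - b_i)
    x = x - delta * K @ g_i

print("estimation error:", np.linalg.norm(x - x_true))
```

Because both updates use only the sampled point, the per-iteration cost is independent of the total number of data points, matching the motivation for moving from the batch IPG method to the stochastic variant; as the abstract states, the iterates converge in expectation only to a neighborhood of the solution, so the final error does not vanish.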

Related research

01/26/2021 - Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem
This paper considers the problem of multi-agent distributed linear regre...

08/06/2020 - Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem
This paper considers the multi-agent linear least-squares problem in a s...

08/19/2021 - On Accelerating Distributed Convex Optimizations
This paper studies a distributed multi-agent convex optimization problem...

03/13/2020 - Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
Gradient-descent method is one of the most widely used and perhaps the m...

03/20/2019 - Byzantine Fault Tolerant Distributed Linear Regression
This paper considers the problem of Byzantine fault tolerant distributed...

07/03/2022 - Distributed Online System Identification for LTI Systems Using Reverse Experience Replay
Identification of linear time-invariant (LTI) systems plays an important...

05/04/2023 - A Bootstrap Algorithm for Fast Supervised Learning
Training a neural network (NN) typically relies on some type of curve-fo...