Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem

08/06/2020
by   Kushal Chakrabarti, et al.

This paper considers the multi-agent linear least-squares problem in a server-agent network. The system comprises multiple agents, each holding a set of local data points, that are connected to a server. The agents' goal is to compute a linear model that optimally fits the data points held collectively by all the agents, without sharing their individual local data points. This goal can be achieved, in principle, using the server-agent variant of the traditional iterative gradient-descent method. The gradient-descent method converges linearly to a solution, but its rate of convergence degrades with the conditioning of the agents' collective data points; if the data points are ill-conditioned, the method may require a large number of iterations to converge. We propose an iterative pre-conditioning technique that mitigates this deleterious effect of the conditioning of the data points on the rate of convergence of the gradient-descent method. We rigorously show that the resulting pre-conditioned gradient-descent method achieves superlinear convergence when the least-squares problem has a unique solution. In general, its convergence is linear, with an improved rate of convergence compared to the traditional gradient-descent method and state-of-the-art accelerated gradient-descent methods. We further illustrate the improved rate of convergence of the proposed algorithm through experiments on different real-world least-squares problems, in both noise-free and noisy computation environments.
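To make the idea concrete, below is a minimal single-machine NumPy sketch of iteratively pre-conditioned gradient descent for least squares. It follows only the high-level description above: a pre-conditioner matrix K is refined each iteration toward the inverse of a regularized Hessian, while the estimate x takes pre-conditioned gradient steps. The function name ipg_least_squares, the zero initializations, and the step-size choices alpha, beta, and delta are illustrative assumptions; the paper's distributed server-agent protocol and tuned parameters are not reproduced here.

```python
import numpy as np


def ipg_least_squares(A, b, iters=2000, beta=1e-3, delta=1.0):
    """Sketch of iteratively pre-conditioned gradient descent for
    min_x ||A x - b||^2. The pre-conditioner K is refined each
    iteration toward (A^T A + beta * I)^{-1} instead of being
    computed up front. Hyper-parameters here are illustrative."""
    m, n = A.shape
    H = A.T @ A                       # Hessian of the least-squares cost
    M = H + beta * np.eye(n)          # regularized Hessian targeted by K
    # Richardson step size; stability requires alpha < 2 / lambda_max(M)
    alpha = 1.0 / np.linalg.norm(M, 2)
    K = np.zeros((n, n))              # pre-conditioner estimate, K -> M^{-1}
    x = np.zeros(n)
    for _ in range(iters):
        # Pre-conditioner update: one Richardson step toward M^{-1}
        K = K - alpha * (M @ K - np.eye(n))
        # Pre-conditioned gradient step on the least-squares cost
        grad = A.T @ (A @ x - b)
        x = x - delta * (K @ grad)
    return x


if __name__ == "__main__":
    # Illustrative run on a synthetic ill-conditioned problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 10)) @ np.diag(np.logspace(0, 2, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    x_hat = ipg_least_squares(A, b)
    print("residual:", np.linalg.norm(A @ x_hat - b))
```

Once K has converged, the update is close to a regularized Newton step, which is the intuition for why iterative pre-conditioning weakens the dependence of the convergence rate on the conditioning of the data.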


Related research:

- Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning (11/15/2020): This paper considers the multi-agent distributed linear least-squares pr...
- On Accelerating Distributed Convex Optimizations (08/19/2021): This paper studies a distributed multi-agent convex optimization problem...
- Iterative Pre-Conditioning to Expedite the Gradient-Descent Method (03/13/2020): Gradient-descent method is one of the most widely used and perhaps the m...
- Robust Gradient Descent via Moment Encoding with LDPC Codes (05/22/2018): This paper considers the problem of implementing large-scale gradient de...
- A Bootstrap Algorithm for Fast Supervised Learning (05/04/2023): Training a neural network (NN) typically relies on some type of curve-fo...
- Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization (02/20/2019): Pre-conditioning is a well-known concept that can significantly improve ...
- Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem (01/26/2021): This paper considers the problem of multi-agent distributed linear regre...
