Iterative Pre-Conditioning to Expedite the Gradient-Descent Method

03/13/2020
by Kushal Chakrabarti, et al.

The gradient-descent method is one of the most widely used, and perhaps the most natural, methods for solving an unconstrained minimization problem. The method is quite simple and can be implemented easily in distributed settings, which is the focus of this paper. We consider a distributed system of multiple agents where each agent has a local cost function, and the goal for the agents is to minimize the sum of their local cost functions. In principle, the distributed minimization problem can be solved by the agents using the traditional gradient-descent method. However, the convergence rate (or speed) of the gradient-descent method is limited by the condition number of the minimization problem. Consequently, when the minimization problem to be solved is ill-conditioned, the gradient-descent method may require a large number of iterations to converge to the solution. Indeed, in many practical situations, the minimization problem that needs to be solved is ill-conditioned. In this paper, we propose an iterative pre-conditioning method that significantly reduces the impact of the conditioning of the minimization problem on the convergence rate of the traditional gradient-descent algorithm. The proposed pre-conditioning method can be implemented with ease in the considered distributed setting. For now, we only consider a special case of the distributed minimization problem where each agent's local cost function is a linear squared-error (least-squares) cost. Besides the theoretical guarantees, the improved convergence due to our pre-conditioning method is also demonstrated through experiments on a real dataset.
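To make the idea concrete, the sketch below runs iteratively pre-conditioned gradient descent on a centralized least-squares problem min_x ||Ax - b||^2: a pre-conditioner matrix K is refined step by step toward (A^T A + beta*I)^{-1} while the pre-conditioned gradient updates on x run in parallel. This is a minimal single-machine illustration under stated assumptions; the specific update rules, step sizes, and parameter names (alpha, beta, delta) are illustrative choices rather than the paper's exact algorithm, and the aggregation of per-agent quantities in the distributed setting is omitted.

```python
import numpy as np

# Minimal sketch of iteratively pre-conditioned gradient descent for
# min_x ||A x - b||^2. The update rules and parameter names below
# (alpha, beta, delta) are illustrative assumptions, not the paper's
# verbatim method.

rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.standard_normal((m, n))
A[:, 0] *= 50.0                      # scale one column to make the problem ill-conditioned
b = rng.standard_normal(m)

AtA, Atb = A.T @ A, A.T @ b
beta = 1e-3                          # small regularizer keeps AtA + beta*I invertible
alpha = 1.0 / (np.linalg.norm(AtA, 2) + beta)   # step size for the pre-conditioner refinement
delta = 1.0                          # step size for the parameter update

x = np.zeros(n)
K = np.zeros((n, n))                 # pre-conditioner, driven toward (AtA + beta*I)^{-1}

for t in range(500):
    # Refine the pre-conditioner: fixed-point step toward solving (AtA + beta*I) K = I
    K = K - alpha * ((AtA + beta * np.eye(n)) @ K - np.eye(n))
    # Pre-conditioned gradient step on the least-squares cost
    grad = AtA @ x - Atb
    x = x - delta * (K @ grad)

print("residual norm:", np.linalg.norm(A @ x - b))
print("distance to least-squares solution:",
      np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```

With plain gradient descent, the per-iteration contraction factor degrades as the condition number of A^T A grows; in the sketch above, the pre-conditioned step approaches a (regularized) Newton-like step as K converges, so the dependence on the conditioning of A^T A is progressively reduced.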

