Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem
This paper considers the multi-agent linear least-squares problem in a server-agent network. In this problem, the system comprises multiple agents, each holding a set of local data points, that are connected to a server. The goal for the agents is to compute a linear mathematical model that optimally fits the collective data points held by all the agents, without sharing their individual local data points. This goal can be achieved, in principle, using the server-agent variant of the traditional iterative gradient-descent method. The gradient-descent method converges linearly to a solution, and its rate of convergence is lower bounded by the conditioning of the agents' collective data points. If the data points are ill-conditioned, the gradient-descent method may require a large number of iterations to converge. We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of data points on the rate of convergence of the gradient-descent method. We rigorously show that the resulting pre-conditioned gradient-descent method, with the proposed iterative pre-conditioning, achieves superlinear convergence when the least-squares problem has a unique solution. In general, the convergence is linear, with an improved rate of convergence in comparison to the traditional gradient-descent method and the state-of-the-art accelerated gradient-descent methods. We further illustrate the improved rate of convergence of our proposed algorithm through experiments on different real-world least-squares problems in both noise-free and noisy computation environments.
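The idea described above can be sketched in a few lines of numpy. In this centralized toy version (the paper's setting distributes the data across agents), a pre-conditioner matrix K is refined in every iteration toward the inverse of the regularized Gram matrix A^T A + beta*I, and the current K multiplies the least-squares gradient at each step. The function name, step-size choices, and the regularization parameter `beta` are illustrative assumptions, not the paper's exact algorithm or tuning:

```python
import numpy as np

def ipg_least_squares(A, b, beta=1e-3, num_iters=500):
    """Illustrative sketch of iteratively pre-conditioned gradient
    descent for min_x ||Ax - b||^2. The pre-conditioner K starts at
    zero and is driven toward (A^T A + beta*I)^{-1} by its own
    gradient-style update, while x takes pre-conditioned gradient
    steps. Parameter values here are assumptions for the demo."""
    m, n = A.shape
    H = A.T @ A                        # Gram matrix of the data
    H_reg = H + beta * np.eye(n)       # regularized target for K^{-1}
    # Step size 1/||H_reg|| keeps the pre-conditioner update stable.
    delta = 1.0 / np.linalg.norm(H_reg, 2)
    x = np.zeros(n)
    K = np.zeros((n, n))
    for _ in range(num_iters):
        # Refine K: fixed point of this update is H_reg^{-1}.
        K = K - delta * (H_reg @ K - np.eye(n))
        grad = A.T @ (A @ x - b)       # gradient of (1/2)||Ax - b||^2
        x = x - K @ grad               # pre-conditioned gradient step
    return x
```

Once K is close to the inverse of the regularized Gram matrix, each step of x behaves like a damped Newton step, so the contraction factor no longer depends on the conditioning of A; this is the mechanism behind the improved convergence rate claimed in the abstract.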