
Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning
This paper considers the multi-agent distributed linear least-squares problem. The system comprises multiple agents, each agent with a locally observed set of data points, and a common server with which the agents can interact. The agents' goal is to compute a linear model that best fits the collective data points observed by all the agents. In the server-based distributed setting, the server cannot access the data points held by the agents. The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem. In the IPG algorithm, the server and the agents perform numerous iterative computations. Each of these iterations relies on the entire batch of data points observed by the agents for updating the current estimate of the solution. Here, we extend the idea of iterative pre-conditioning to the stochastic setting, where the server updates the estimate and the iterative pre-conditioning matrix based on a single randomly selected data point at every iteration. We show that our proposed Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method converges linearly in expectation to a proximity of the solution. Importantly, we empirically show that the proposed IPSG method's convergence rate compares favorably to prominent stochastic algorithms for solving the linear least-squares problem in server-based networks.
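The core idea — updating both the estimate and a pre-conditioning matrix from a single sampled data point per iteration — can be sketched in a few lines. The following is a minimal single-machine illustration, not the authors' implementation: the step sizes `alpha` and `delta`, the regularization `beta`, and the exact form of the pre-conditioner update are illustrative assumptions.

```python
import numpy as np

def ipsg(A, b, alpha=0.1, delta=0.1, beta=0.1, iters=20000, seed=0):
    """Sketch of Iteratively Pre-conditioned Stochastic Gradient-descent
    (IPSG) for min_x ||Ax - b||^2.

    Each iteration samples one data point (a_i, b_i) and uses it to update
    both the pre-conditioner K and the estimate x. Parameter values here
    are illustrative assumptions, not the tuned rules from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    K = np.eye(d)                      # running pre-conditioner estimate
    I = np.eye(d)
    for _ in range(iters):
        i = rng.integers(n)            # one randomly selected data point
        a, bi = A[i], b[i]
        # Stochastic pre-conditioner update: in expectation this drives
        # K toward (E[a a^T] + beta I)^{-1}, a regularized inverse curvature.
        K -= delta * ((np.outer(a, a) + beta * I) @ K - I)
        # Pre-conditioned stochastic gradient step on the sampled residual.
        x -= alpha * K @ (a * (a @ x - bi))
    return x
```

On a noiseless, consistent system the iterates contract toward the least-squares solution; with noisy observations they settle in a neighborhood of it, matching the "converges linearly in expectation to a proximity of the solution" guarantee described above.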