
Numerically Stable Binary Gradient Coding
A major hurdle in machine learning is scalability to massive datasets. O...

Distributed Gradient Descent with Coded Partial Gradient Computations
Coded computation techniques provide robustness against straggling serve...

Optimum Transmission Delay for Function Computation in NFV-based Networks: the Role of Network Coding and Redundant Computing
In this paper, we study the problem of delay minimization in NFV-based n...

Coded Computing for Boolean Functions
The growing size of modern datasets necessitates a massive computation i...

Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers
Distributed gradient descent (DGD) is an efficient way of implementing g...

Coded Computing with Noise
Distributed computation is a framework used to break down a complex comp...

A Sequential Approximation Framework for Coded Distributed Optimization
Building on the previous work of Lee et al. and Ferdinand et al. on code...
Berrut Approximated Coded Computing: Straggler Resistance Beyond Polynomial Computing
One of the major challenges in using distributed learning to train complicated models on large data sets is the straggler effect. As a solution, coded computation has recently been proposed as a way to efficiently add redundancy to the computation tasks. In this technique, coding is applied across the data set and computation is performed over the coded data, such that the results of an arbitrary subset of worker nodes of a certain size suffice to recover the final result. The major challenges with those approaches are that (1) they are limited to polynomial function computation, (2) the size of the subset of servers that must be waited for grows with the product of the data-set size and the model complexity (the degree of the polynomial), which can be prohibitively large, and (3) they are not numerically stable for computation over the real numbers. In this paper, we propose Berrut Approximated Coded Computing (BACC) as an alternative approach that is not limited to polynomial function computation. In addition, the master node can approximately recover the final result from the outcomes of any arbitrary subset of available worker nodes. The approximation scheme is proven to be numerically stable and of low computational complexity. Its accuracy is established theoretically and verified by simulation results in different settings, including distributed learning problems. In particular, BACC is used to train a deep neural network on a cluster of servers, where it outperforms repetitive computation (repetition coding) in terms of the rate of convergence.
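The core tool behind this scheme is Berrut's rational (barycentric) interpolant, which is pole-free and numerically stable for any real nodes. The sketch below is an illustrative toy reconstruction, not the authors' implementation: the Chebyshev-style node choices, the scalar data partitions, and the worker function f = tanh are all assumptions made for the example. Data values are encoded by evaluating their interpolant at worker points, each worker applies a non-polynomial f to its coded point, and the master decodes from any responding subset by interpolating the returned values back at the data nodes.

```python
import numpy as np

def berrut_interpolate(nodes, values, x):
    """Berrut's rational interpolant (barycentric form, weights (-1)^i).
    Pole-free and numerically stable for any distinct real nodes."""
    w = (-1.0) ** np.arange(len(nodes))
    diff = x - nodes
    hit = np.isclose(diff, 0.0)
    if hit.any():                      # exact hit on a node: return stored value
        return values[np.argmax(hit)]
    terms = w / diff
    return np.dot(terms, values) / terms.sum()

# Toy end-to-end sketch: K data points, N workers, f non-polynomial.
rng = np.random.default_rng(0)
K, N = 4, 10
data = rng.normal(size=K)                                   # data partitions (scalars here)
alpha = np.cos((2 * np.arange(K) + 1) * np.pi / (2 * K))    # encoding nodes (Chebyshev-style)
z = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))        # worker evaluation points

# Encoding: evaluate the interpolant of the data at each worker's point.
coded = np.array([berrut_interpolate(alpha, data, zi) for zi in z])

f = np.tanh                  # assumed non-polynomial per-worker computation
results = f(coded)           # worker i returns f(coded[i])

# Decoding: only a subset of workers respond (stragglers dropped);
# interpolate their results and read off approximations at the data nodes.
alive = np.array([0, 1, 3, 4, 6, 8, 9])
approx = np.array([berrut_interpolate(z[alive], results[alive], a) for a in alpha])
exact = f(data)
print(np.max(np.abs(approx - exact)))   # small approximation error
```

Note the decoder never inverts a Vandermonde-like system, which is where polynomial-code decoders lose numerical stability; it only forms the barycentric sum over whichever workers happened to respond.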