Leveraging Coding Techniques for Speeding up Distributed Computing

Large-scale clusters running distributed computing frameworks such as MapReduce routinely process data on the order of petabytes or more. The sheer size of the data precludes processing on a single computer. The philosophy behind these frameworks is to partition the overall job into smaller tasks that are executed on different servers; this is called the map phase. It is followed by a data shuffling phase, in which appropriate data are exchanged between servers. The final, so-called reduce phase completes the computation. One approach for reducing the overall execution time, explored in prior work, is to exploit a natural tradeoff between computation and communication. Specifically, the idea is to run redundant copies of map tasks placed on judiciously chosen servers; the shuffle phase then exploits this task placement and uses coded transmissions. The main drawback of this approach is that it requires the original job to be split into a number of map tasks that grows exponentially in the system parameters. This is problematic, as we demonstrate that splitting jobs too finely can in fact adversely affect the overall execution time. In this work, we show that one can simultaneously obtain low communication loads while ensuring that jobs are not split too finely. Our approach uncovers a deep relationship between this problem and a class of combinatorial structures called resolvable designs. An appropriate interpretation of resolvable designs allows the development of coded distributed computing schemes whose splitting levels are exponentially lower than in prior work. We present experimental results obtained on Amazon EC2 clusters for a well-known distributed algorithm, namely TeraSort. We obtain more than a 4.69× speedup over the baseline approach and more than 2.6× over the current state of the art.
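As a brief illustration (not taken from the paper itself): a resolvable design is a block design whose blocks can be partitioned into parallel classes, each of which partitions the point set. The toy design below on four points, and the check of its resolvability, are hypothetical examples for intuition only; in coded computing schemes built on such designs, points and blocks are mapped to file batches and servers, with the exact correspondence depending on the scheme.

```python
from itertools import chain

# Toy resolvable design on points {0, 1, 2, 3}: blocks of size 2,
# grouped into three parallel classes (a "resolution").
points = {0, 1, 2, 3}
parallel_classes = [
    [{0, 1}, {2, 3}],
    [{0, 2}, {1, 3}],
    [{0, 3}, {1, 2}],
]

def is_resolution(classes, pts):
    """Return True if every parallel class partitions the point set,
    i.e., its blocks are pairwise disjoint and jointly cover pts."""
    for cls in classes:
        covered = list(chain.from_iterable(cls))
        # Disjoint + covering <=> no repeats and the union equals pts.
        if len(covered) != len(pts) or set(covered) != pts:
            return False
    return True

print(is_resolution(parallel_classes, points))  # → True
```

Because every parallel class touches each point exactly once, assigning one class per server yields a balanced, structured placement of redundant map tasks, which is the property the coded shuffle phase exploits.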
