# Heterogeneous Coded Computation across Heterogeneous Workers

The coded distributed computing framework enables large-scale machine learning (ML) models to be trained efficiently in a distributed manner, while mitigating the straggler effect. In this work, we consider a multi-task assignment problem in a coded distributed computing system, where multiple masters, each with a different matrix multiplication task, assign computation tasks to workers with heterogeneous computing capabilities. Both dedicated and probabilistic worker assignment models are considered, with the objective of minimizing the average completion time of all computations. For dedicated worker assignment, greedy algorithms are proposed and the corresponding optimal load allocation is derived based on the Lagrange multiplier method. For probabilistic assignment, the successive convex approximation method is used to solve the non-convex optimization problem. Simulation results show that the proposed algorithms reduce the completion time by up to 60% compared with the uncoded and unbalanced coded benchmark schemes.


## I Introduction

Machine learning (ML) techniques are penetrating into many aspects of human lives, boosting the development of new applications from autonomous driving, virtual and augmented reality, to the Internet of things [1]. Training complicated ML models requires computations with massive volumes of data, e.g., large-scale matrix-vector multiplications, which cannot be carried out on a single centralized computing server. Distributed computing frameworks such as MapReduce [2] enable a centralized master node to allocate data and update the global model, while tens or hundreds of distributed computing nodes, called workers, train ML models in parallel using partial data. Since the task completion time depends on the slowest worker, a key bottleneck in distributed computing is the straggler effect: experiments on Amazon EC2 instances show that some workers can be 5 times slower than the typical performance [3].

The straggler effect can be mitigated by adding redundancy to the distributed computing system via coding [3, 4, 2, 6, 5, 8, 7], or by scheduling computation tasks [9, 10, 11]. Maximum distance separable (MDS) codes are widely applied to matrix multiplications [3, 4, 2, 6, 5, 7], and can reduce the task completion time by a factor of order $\log N$, where $N$ is the number of workers [2]. A unified coded computing framework for straggler mitigation is proposed in [4]. Heterogeneous workers are considered in [5], and an asymptotically optimal load allocation scheme is proposed. Although stragglers are slower than typical workers, they can still make non-negligible contributions to the system [6, 8]. A hierarchical coded computing framework is thus proposed in [6], where tasks are partitioned into multiple levels so that stragglers contribute to subtasks in the lower levels. Multi-message communication with Lagrange coded computing is used in [8] to exploit straggling servers.

The straggler effect can be mitigated even with uncoded computing, via redundant scheduling of tasks and multi-message communications. A batched coupon’s collector scheme is proposed in [9], and the expected completion time is analyzed in [10]. The input data is partitioned into batches, and each worker randomly processes one at a time, until the master collects all the results. Deterministic scheduling orders of tasks at different workers are proposed in [11], specifically cyclic and staircase scheduling, and the relation between redundancy and task completion time is characterized.

Existing papers mainly consider a single master. In practice, however, workers may be shared by more than one master to carry out multiple large-scale computation tasks in parallel. Therefore, in this work, we focus on a multi-task assignment problem for a heterogeneous distributed computing system using MDS codes. As shown in Fig. 1, we consider multiple masters, each with a matrix-vector multiplication task, and a number of workers with heterogeneous computing capabilities. The goal is to design centralized worker assignment and load allocation algorithms that minimize the completion time of all the tasks. We consider both dedicated and probabilistic worker assignment policies, and formulate a non-convex optimization problem under a unified framework. For dedicated assignment, each worker serves a single master. The optimal load allocation is derived, and the worker assignment is transformed into a max-min allocation problem, for which NP-hardness is proved and greedy algorithms are proposed. For probabilistic assignment, each worker selects a master to serve based on an optimized probability, and a successive convex approximation (SCA) based algorithm is proposed. Simulation results show that the proposed algorithms can drastically reduce the task completion time compared to uncoded and unbalanced coded schemes.

The rest of the paper is organized as follows. The system model and problem formulation are introduced in Sec. II. Dedicated and probabilistic worker assignments, and the corresponding load allocation algorithms, are proposed in Sec. III and Sec. IV, respectively. Simulation results are presented in Sec. V, and the conclusions are summarized in Sec. VI.

## II System Model and Problem Formulation

### II-A System Architecture

We consider a heterogeneous distributed computing system with $M$ masters, denoted by $\mathcal{M}=\{1,\ldots,M\}$, and $N$ workers, denoted by $\mathcal{N}=\{1,\ldots,N\}$, with $M\leq N$. We assume that each master has a matrix-vector multiplication task.¹ The task of master $m\in\mathcal{M}$ is to compute $A_m x_m$, where $A_m$ is a matrix with $L_m$ rows and $x_m$ is an input vector of matching dimension. The masters can use the workers to complete their computation tasks in a distributed manner.

¹ In training ML models, e.g., linear regression, matrix-vector multiplication tasks are carried out at each iteration of the gradient descent algorithm. These tasks are independent over iterations, thus we focus on one iteration here.

To deal with straggling workers, we adopt MDS coded computation, and encode the rows of $A_m$. Define the coded version of $A_m$ as $\tilde{A}_m$, which is further divided into $N$ sub-matrices:

$$\tilde{A}_m=\left[\tilde{A}_{m,1}^{T},\ \tilde{A}_{m,2}^{T},\ \cdots,\ \tilde{A}_{m,N}^{T}\right]^{T}, \tag{1}$$

where $\tilde{A}_{m,n}$ is assigned to worker $n$, and its number of rows $l_{m,n}$ is a non-negative integer representing the load allocated to worker $n$. Vector $x_m$ is multicast from master $m$ to the workers with $l_{m,n}>0$, and worker $n$ calculates the product $\tilde{A}_{m,n}x_m$ of its coded rows and $x_m$. Matrix $A_m$ is thus MDS-coded, with the requirement $\sum_{n=1}^{N}l_{m,n}\geq L_m$. Upon aggregating the multiplication results for any $L_m$ coded rows of $\tilde{A}_m$, master $m$ can recover $A_m x_m$.
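As a concrete illustration of the coding step, the following sketch uses a random Gaussian coding matrix, an illustrative stand-in (not the paper's exact construction): any $L_m$ of its rows form an invertible matrix with probability 1, mimicking the MDS property, so the master can recover $A_m x_m$ from any $L_m$ completed coded-row products.

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 6, 4           # rows of A_m and input dimension (toy sizes)
total_load = 9        # sum of loads l_{m,n}; must satisfy total_load >= L

A = rng.standard_normal((L, D))   # the matrix A_m of master m
x = rng.standard_normal(D)        # the input vector x_m

# Random Gaussian coding matrix: any L of its rows form an invertible
# L x L matrix with probability 1, mimicking the MDS property.
G = rng.standard_normal((total_load, L))
A_coded = G @ A                   # coded rows, split among the workers

# Each worker returns the products of its coded rows with x; suppose
# the master has so far received the results for the rows in `done`.
results = A_coded @ x
done = [0, 2, 3, 5, 6, 8]         # any L completed coded rows suffice

# Decode: solve G[done] y = results[done] for y = A x.
y = np.linalg.solve(G[done], results[done])
assert np.allclose(y, A @ x)      # the master recovers A_m x_m
```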

The processing times of the assigned computation tasks at the workers are modeled as mutually independent random variables. Following the literature on coded computing [4, 2, 6, 5], the processing time at each worker is modeled by a shifted exponential distribution.² The processing time $T_{m,n}^{[l_{m,n}]}$ for worker $n$ to calculate the multiplication of $l_{m,n}$ coded rows of $\tilde{A}_m$ and $x_m$ has the cumulative distribution function:

$$\mathbb{P}\left[T_{m,n}^{[l_{m,n}]}\leq t\right]=\begin{cases}1-e^{-\frac{u_{m,n}}{l_{m,n}}\left(t-a_{m,n}l_{m,n}\right)},& t\geq a_{m,n}l_{m,n},\\ 0,& \text{otherwise},\end{cases} \tag{2}$$

where $a_{m,n}>0$ is a parameter indicating the minimum processing time for one coded row, and $u_{m,n}>0$ is the parameter modeling the straggling effect.

² In this work, the worker assignment and load allocation algorithms are designed based on the assumption of a shifted exponential distribution. However, the proposed methods can also be applied to other distributions, as long as the corresponding function defined in (28) is convex.

We consider a heterogeneous environment by assuming that $a_{m,n}$ and $u_{m,n}$ differ across master-worker pairs $(m,n)$, for $m\in\mathcal{M}$ and $n\in\mathcal{N}$. This assumption reflects the fact that workers may have different computation speeds, and that the dimensions of $A_m$ and $x_m$ vary over $m$.
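For intuition, the shifted exponential model (2) can be sampled directly: the time to process $l$ coded rows is a deterministic shift $a\,l$ plus an exponential tail with rate $u/l$. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def sample_time(l, u, a, rng):
    """Sample the time for a worker to process l coded rows, following
    the shifted exponential model (2): a deterministic shift a*l plus
    an exponential tail with rate u/l (i.e., scale l/u)."""
    return a * l + rng.exponential(scale=l / u)

rng = np.random.default_rng(1)
l, u, a = 100, 2.0, 0.01          # illustrative parameter values
samples = np.array([sample_time(l, u, a, rng) for _ in range(20000)])

assert samples.min() >= a * l     # no sample finishes before the shift a*l
# The model's mean is a*l + l/u = 51 here; the empirical mean is close.
assert abs(samples.mean() - (a * l + l / u)) < 2.0
```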

### II-C Worker Assignment Policy

We consider two worker assignment policies:

1) Dedicated worker assignment: In this policy, each worker $n$ is assigned computation tasks from a single master $m$. Let the indicator $k_{m,n}=1$ if worker $n$ provides computing service for master $m$, and $k_{m,n}=0$ otherwise. Since a worker serves at most one master, we have $\sum_{m=1}^{M}k_{m,n}\leq 1$.

2) Probabilistic worker assignment: In this policy, each worker $n$ randomly selects which master to serve, choosing master $m$ with probability $k_{m,n}$. For each worker $n$, we have $\sum_{m=1}^{M}k_{m,n}\leq 1$. In Fig. 1, for example, worker $n$ selects master $1$ to serve with probability $k_{1,n}$, and master $2$ with probability $k_{2,n}$.

### II-D Problem Formulation

Let $X_{m,n}(t)$ denote the number of multiplication results (one result refers to the multiplication of one coded row of $\tilde{A}_m$ with $x_m$) that master $m$ collects from worker $n$ by time $t$. We assume that worker $n$ computes $\tilde{A}_{m,n}x_m$ and then sends the result to the master upon completion, without further dividing it into subtasks or transmitting any feedback before completion. Therefore, master $m$ receives either $l_{m,n}$ results or none from worker $n$ by time $t$. We denote the number of aggregated results at master $m$ until time $t$ by $X_m(t)$, and we have $X_m(t)=\sum_{n=1}^{N}X_{m,n}(t)$.

Our objective is to minimize the average completion time $t$, by which all the masters can aggregate sufficient results from the workers to recover their computations with high probability. We aim to design a centralized policy that optimizes the worker assignment $\{k_{m,n}\}$ and load allocation $\{l_{m,n}\}$. The optimization problem is formulated as:

$$\begin{aligned}\mathcal{P}1:\ \min_{\{l_{m,n},k_{m,n},t\}}\quad & t &&\text{(3a)}\\ \text{s.t.}\quad & \mathbb{P}\left[X_m(t)\geq L_m\right]\geq\rho_s,\ \forall m, &&\text{(3b)}\\ & \sum_{m=1}^{M}k_{m,n}\leq 1,\ \forall n, &&\text{(3c)}\\ & k_{m,n}\in\mathcal{K},\ l_{m,n}\in\mathbb{N},\ \forall m,n, &&\text{(3d)}\end{aligned}$$

where $\mathcal{K}=\{0,1\}$ for dedicated worker assignment, while $\mathcal{K}=[0,1]$ for probabilistic worker assignment, and $\mathbb{N}$ is the set of non-negative integers. In constraint (3b), $\rho_s$ is defined as the probability that master $m$ receives no fewer than $L_m$ results by time $t$; that is, the probability of $A_m x_m$ being recovered. Constraint (3c) guarantees that under dedicated assignment, each worker serves at most one master, and under probabilistic assignment, the total probability rule is satisfied.

The key challenge in solving $\mathcal{P}1$ is that constraint (3b) cannot be expressed explicitly, since it is difficult to enumerate all the combinations of $\{X_{m,n}(t)\}$ that satisfy $X_m(t)\geq L_m$ in a heterogeneous environment with non-uniform loads $l_{m,n}$. Therefore, we instead consider an approximation to this problem, by substituting constraint (3b) with an expectation constraint:

$$\begin{aligned}\mathcal{P}2:\ \min_{\{l_{m,n},k_{m,n},t\}}\quad & t &&\text{(4a)}\\ \text{s.t.}\quad & L_m-\mathbb{E}\left[X_m(t)\right]\leq 0,\ \forall m, &&\text{(4b)}\\ & \text{Constraints (3c), (3d)},\end{aligned}$$

where constraint (4b) states that the expected number of results master $m$ receives by time $t$ is no less than $L_m$. A similar approach is used in [5], where the gap between the solutions of $\mathcal{P}1$ and $\mathcal{P}2$ is proved to be bounded when there is a single master. We will design algorithms that solve $\mathcal{P}2$ in the following two sections.

Constraint (4b) can be expressed explicitly. Let $\mathbb{I}\{E\}$ be an indicator function with value $1$ if event $E$ is true, and $0$ otherwise. If $k_{m,n}>0$ (and thus $l_{m,n}>0$),

$$\mathbb{E}\left[X_{m,n}(t)\right]=\mathbb{E}\left[k_{m,n}l_{m,n}\mathbb{I}\left\{T_{m,n}^{[l_{m,n}]}\leq t\right\}\right]=\begin{cases}k_{m,n}l_{m,n}\left[1-e^{-\frac{u_{m,n}}{l_{m,n}}\left(t-a_{m,n}l_{m,n}\right)}\right],& t\geq a_{m,n}l_{m,n},\\ 0,& \text{otherwise}.\end{cases} \tag{5}$$

If $k_{m,n}=0$ (and thus $l_{m,n}=0$), then $\mathbb{E}[X_{m,n}(t)]=0$. And we have $\mathbb{E}[X_m(t)]=\sum_{n=1}^{N}\mathbb{E}[X_{m,n}(t)]$.

The following observations help us simplify $\mathcal{P}2$:

1) From constraint (4b), we can infer that for $n\in\Omega_m$, the optimal task completion time satisfies $t^{*}\geq a_{m,n}l_{m,n}$, where $\Omega_m$ is the subset of workers serving master $m$. In fact, if there exists $n\in\Omega_m$ such that $t<a_{m,n}l_{m,n}$, we have $\mathbb{E}[X_{m,n}(t)]=0$, i.e., master $m$ cannot expect to receive any results from worker $n$. By reducing $l_{m,n}$ to satisfy $t\geq a_{m,n}l_{m,n}$, it is possible to further reduce $t$.

2) Due to the high dimension of the input matrix $A_m$, $L_m$ is usually on the order of hundreds or thousands. We therefore relax the integer constraint $l_{m,n}\in\mathbb{N}$ to $l_{m,n}\geq 0$, and omit the effect of rounding in the following derivations.

Based on the two observations above, by substituting (5), we simplify constraint (4b) as:

$$L_m-\sum_{n=1}^{N}k_{m,n}l_{m,n}\left(1-e^{-\frac{u_{m,n}}{l_{m,n}}\left(t-a_{m,n}l_{m,n}\right)}\right)\leq 0. \tag{6}$$

Problem $\mathcal{P}2$ can thus be simplified as:

$$\begin{aligned}\mathcal{P}3:\ \min_{\{l_{m,n},k_{m,n},t\}}\quad & t &&\text{(7a)}\\ \text{s.t.}\quad & \text{Constraints (3c), (6)},\\ & k_{m,n}\in\mathcal{K},\ l_{m,n}\geq 0,\ \forall m,n. &&\text{(7b)}\end{aligned}$$

Problem $\mathcal{P}3$ is a non-convex optimization problem due to the non-convexity of (6), and is in general difficult to solve. In the following two sections, we propose algorithms for dedicated and probabilistic worker assignment and the corresponding load allocation, respectively.

## III Dedicated Worker Assignment

In this section, we solve $\mathcal{P}3$ for dedicated worker assignment, where $\mathcal{K}=\{0,1\}$. Given the assignment of workers, we first derive the optimal load allocation. The worker assignment can then be transformed into a max-min allocation problem, for which NP-hardness is shown and two greedy algorithms are developed.

### III-A Optimal Load Allocation for a Given Worker Assignment

We first assume that the subset of workers serving master $m$ is given by $\Omega_m$, and derive the optimal load allocation for master $m$ that minimizes its approximate completion time. The problem is formulated as:

$$\begin{aligned}\mathcal{P}4:\ \min_{\{l_{m,n},\,t_m\}}\quad & t_m &&\text{(8a)}\\ \text{s.t.}\quad & L_m-\mathbb{E}\left[X_m(t_m)\right]\leq 0, &&\text{(8b)}\\ & l_{m,n}\geq 0,\ \forall n\in\Omega_m, &&\text{(8c)}\end{aligned}$$

where $t_m$ is defined as the approximate completion time of master $m$, $X_m(t_m)$ is the number of results aggregated at master $m$ by time $t_m$, and

$$\mathbb{E}\left[X_m(t_m)\right]=\sum_{n\in\Omega_m}l_{m,n}\left(1-e^{-\frac{u_{m,n}}{l_{m,n}}\left(t_m-a_{m,n}l_{m,n}\right)}\right). \tag{9}$$
###### Lemma 1.

Problem $\mathcal{P}4$ is a convex optimization problem.

###### Proof.

See Appendix A. ∎

Let $\mathbf{l}_m\triangleq\{l_{m,n}\}_{n\in\Omega_m}$. The partial Lagrangian of $\mathcal{P}4$ is given by

$$\mathcal{L}(\mathbf{l}_m,t_m,\lambda_m)=t_m+\lambda_m\left(L_m-\mathbb{E}\left[X_m(t_m)\right]\right)=t_m+\lambda_m\left[L_m-\sum_{n\in\Omega_m}l_{m,n}\left(1-e^{-\frac{u_{m,n}}{l_{m,n}}\left(t_m-a_{m,n}l_{m,n}\right)}\right)\right], \tag{10}$$

where $\lambda_m$ is the Lagrange multiplier associated with (8b).

The partial derivatives of $\mathcal{L}$ can be derived as

$$\frac{\partial\mathcal{L}}{\partial l_{m,n}}=\lambda_m\left[\left(1+\frac{u_{m,n}t_m}{l_{m,n}}\right)e^{-\frac{u_{m,n}}{l_{m,n}}\left(t_m-a_{m,n}l_{m,n}\right)}-1\right], \tag{11}$$

$$\frac{\partial\mathcal{L}}{\partial t_m}=1-\lambda_m\sum_{n\in\Omega_m}u_{m,n}e^{-\frac{u_{m,n}}{l_{m,n}}\left(t_m-a_{m,n}l_{m,n}\right)}. \tag{12}$$

The optimal solution needs to satisfy the Karush-Kuhn-Tucker (KKT) conditions:

$$\frac{\partial\mathcal{L}}{\partial l_{m,n}^{*}}=0,\ \forall n\in\Omega_m,\qquad \frac{\partial\mathcal{L}}{\partial t_m^{*}}=0, \tag{13a}$$

$$\lambda_m^{*}\left(L_m-\mathbb{E}\left[X_m(t_m^{*})\right]\right)=0, \tag{13b}$$

$$\lambda_m^{*}\geq 0,\quad l_{m,n}^{*}>0. \tag{13c}$$

Define $W_{-1}(\cdot)$ as the lower branch of the Lambert W function, i.e., for $y\in[-e^{-1},0)$, $x=W_{-1}(y)\leq-1$ satisfies $xe^{x}=y$. Let

$$\phi_{m,n}\triangleq\frac{1}{u_{m,n}}\left[-W_{-1}\left(-e^{-u_{m,n}a_{m,n}-1}\right)-1\right]. \tag{14}$$

By solving KKT conditions (13a)-(13c), the optimal load allocation for each individual master is given as follows.

###### Theorem 1.

For master $m$ and a given subset $\Omega_m$ of workers serving this master, the optimal load allocation derived from $\mathcal{P}4$ and the corresponding minimum approximate completion time are given by:

$$l_{m,n}^{*}=\frac{L_m}{\phi_{m,n}\sum_{n'\in\Omega_m}\frac{u_{m,n'}}{1+u_{m,n'}\phi_{m,n'}}}, \tag{15}$$

$$t_m^{*}=\frac{L_m}{\sum_{n\in\Omega_m}\frac{u_{m,n}}{1+u_{m,n}\phi_{m,n}}}. \tag{16}$$
###### Proof.

See Appendix B. ∎
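Theorem 1 is straightforward to evaluate numerically. The sketch below (parameter values are illustrative assumptions) computes $\phi_{m,n}$ from (14) using `scipy.special.lambertw` with `k=-1` for the lower branch $W_{-1}$, evaluates (15)-(16), and checks that the expectation constraint (8b) is tight at the optimum:

```python
import numpy as np
from scipy.special import lambertw

def optimal_allocation(L_m, u, a):
    """Optimal loads l* and completion time t* from Theorem 1, for one
    master with workers described by arrays u (straggling parameters)
    and a (shift parameters)."""
    u, a = np.asarray(u, float), np.asarray(a, float)
    # phi_{m,n} from (14); k=-1 selects the lower branch W_{-1}.
    phi = (-lambertw(-np.exp(-u * a - 1), k=-1).real - 1) / u
    t_star = L_m / np.sum(u / (1 + u * phi))   # eq. (16)
    l_star = t_star / phi                      # eq. (15), since t*/l* = phi
    return l_star, t_star

u = np.array([1.0, 2.0, 4.0])                  # illustrative parameters
a = np.array([0.02, 0.01, 0.005])
l_star, t_star = optimal_allocation(1000, u, a)

# Sanity check: the expectation constraint (8b) is tight at the optimum,
# i.e., the expected number of aggregated results at t* equals L_m.
expected = np.sum(l_star * (1 - np.exp(-u / l_star * (t_star - a * l_star))))
assert np.isclose(expected, 1000)
```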

### III-B Greedy Worker Assignment Algorithms

Now we consider how to assign workers to different masters so as to minimize the task completion time $t=\max_{m\in\mathcal{M}}t_m^{*}$. Let

$$v_{m,n}\triangleq\frac{u_{m,n}}{L_m\left(1+u_{m,n}\phi_{m,n}\right)}. \tag{17}$$

Based on Theorem 1, the worker assignment problem can be transformed into a max-min allocation problem, given in the following proposition.

###### Proposition 1.

Problem $\mathcal{P}3$ is equivalent to

$$\begin{aligned}\mathcal{P}5:\ \max_{\{k_{m,n}\}}\ \min_{m\in\mathcal{M}}\quad & \sum_{n=1}^{N}k_{m,n}v_{m,n} &&\text{(18a)}\\ \text{s.t.}\quad & \sum_{m=1}^{M}k_{m,n}\leq 1,\ k_{m,n}\in\{0,1\},\ \forall m,n. &&\text{(18b)}\end{aligned}$$
###### Proof.

We use $t_m^{*}$ to represent the minimum task completion time of master $m$ given the set of workers $\Omega_m$, and define $V_m\triangleq 1/t_m^{*}$. From Theorem 1, we have:

$$V_m=\frac{1}{L_m}\sum_{n\in\Omega_m}\frac{u_{m,n}}{1+u_{m,n}\phi_{m,n}}=\sum_{n=1}^{N}k_{m,n}v_{m,n}. \tag{19}$$

Note that in $\mathcal{P}3$, the objective is to minimize $t=\max_{m\in\mathcal{M}}t_m^{*}$. With $V_m=1/t_m^{*}$ and $\min_{m}V_m=1/\max_{m}t_m^{*}$, $\mathcal{P}3$ is equivalent to $\mathcal{P}5$. ∎

Problem $\mathcal{P}5$ is a combinatorial optimization problem known as max-min allocation, which is motivated by the fair allocation of indivisible goods [12, 13, 14]. Specifically, there are $M$ agents and $N$ items. Each item has a (possibly different) value for each agent, and can only be allocated to one agent. The goal is to maximize the minimum sum value over the agents, by allocating items as fairly as possible. In our problem, each master $m$ corresponds to an agent with sum value $V_m$, and each worker $n$ can be considered as an item with value $v_{m,n}$ for master $m$. The problem can be reduced to the NP-complete partition problem [15] when considering only two agents, with each item having an identical value for both agents. Therefore, problem $\mathcal{P}5$ is NP-hard. Approximation algorithms with provable guarantees are proposed in [13] and [14] for max-min allocation. However, these algorithms have high computational complexity, and are difficult to implement. We propose two low-complexity greedy algorithms as follows.

An iterated greedy algorithm is proposed in Algorithm 1, inspired by [16], where a similar min-max fairness problem is investigated. In the initialization phase, each worker is assigned to the master for which its value $v_{m,n}$ is the highest. Each iteration of the main loop then has the following three phases:

1) Insertion: We extract each worker $n$ from its current master $m$, and tentatively assign it to a master $m'$ with the minimum sum value $V_{m'}$. As shown in Lines 12-14, if the minimum sum value among the masters is improved, worker $n$ is reassigned to master $m'$.

2) Interchange: We pick two workers $n_1$, $n_2$ that serve two different masters $m_1$, $m_2$, and interchange their assignments. If the minimum sum value and the overall system performance are improved, the interchange is kept. Note that the insertion and interchange phases are repeated multiple times within each iteration, in order to reach a local optimum.

3) Exploration: We randomly remove some workers from the current assignment, and reallocate them in a greedy manner. This operation can be regarded as an exploration step, which prevents the algorithm from getting stuck in a local optimum. The main loop terminates when the number of iterations reaches a predefined maximum, or when the performance no longer improves. Note that the final output is the assignment obtained before the last exploration phase.

While Algorithm 1 requires multiple iterations to obtain a good assignment, Algorithm 2, inspired by the largest-value-first algorithm in [12], is even simpler, with only one round. In a homogeneous case where each item has the same value for all agents, the algorithm repeatedly finds the agent with the minimum sum value and assigns to it a remaining item with the largest value, and guarantees a constant-factor approximation to the optimum. We extend the largest-value-first idea to the heterogeneous environment, and propose a simple greedy algorithm. As shown in Algorithm 2, in the initialization phase, we find a master without any workers assigned, and allocate to it an available worker with the largest value for it. In the main loop, we always select the master $m$ with the minimum sum value $V_m$, and allocate to it a remaining worker with the maximum value $v_{m,n}$.
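A minimal sketch of the simple greedy (largest-value-first) assignment described above, assuming $M\leq N$ and a given value matrix $v_{m,n}$ from (17); it follows the textual description, not necessarily the exact pseudocode of Algorithm 2:

```python
import numpy as np

def simple_greedy(v):
    """Largest-value-first sketch for the max-min problem P5.
    v[m][n] is the value of worker n for master m; returns the
    assignment assign[n] = m and the per-master sum values."""
    M, N = v.shape
    assign = [-1] * N
    remaining = set(range(N))
    sums = np.zeros(M)
    # Initialization: every master gets its best available worker.
    for m in range(M):
        n = max(remaining, key=lambda n: v[m][n])
        assign[n], sums[m] = m, v[m][n]
        remaining.remove(n)
    # Main loop: the master with the minimum sum picks its best worker.
    while remaining:
        m = int(np.argmin(sums))
        n = max(remaining, key=lambda n: v[m][n])
        assign[n] = m
        sums[m] += v[m][n]
        remaining.remove(n)
    return assign, sums

v = np.array([[3.0, 1.0, 2.0, 0.5],
              [1.0, 2.0, 0.5, 1.5]])
assign, sums = simple_greedy(v)
assert assign == [0, 1, 0, 1]   # each master ends up with two workers
assert min(sums) == 3.5         # the minimum sum value V_m
```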

## IV Probabilistic Worker Assignment

In this section, we solve problem $\mathcal{P}3$ for probabilistic worker assignment, where $\mathcal{K}=[0,1]$. The key challenge is the non-convexity of constraint (6). We observe that constraint (6) can be decomposed into a difference of convex functions, and adopt the SCA method to jointly solve the worker assignment and load allocation problems.

From Lemma 1, we know that the function $f(x,t)$ defined in (28) is convex, and thus $le^{-\frac{ut}{l}}$ is also convex with respect to $l$ and $t$. Let $w\triangleq[l,k,t]^{T}$, $g(w)\triangleq-kl$, and $h(w)\triangleq kle^{-\frac{ut}{l}}$. It is easy to see that the following functions are all convex:

$$g^{+}(w)\triangleq\frac{1}{2}(k-l)^{2},\qquad g^{-}(w)\triangleq\frac{1}{2}\left(k^{2}+l^{2}\right), \tag{20}$$

$$h^{+}(w)\triangleq\frac{1}{2}\left(k+le^{-\frac{ut}{l}}\right)^{2},\qquad h^{-}(w)\triangleq\frac{1}{2}\left(k^{2}+l^{2}e^{-\frac{2ut}{l}}\right), \tag{21}$$

and we have

$$g(w)=g^{+}(w)-g^{-}(w),\qquad h(w)=h^{+}(w)-h^{-}(w). \tag{22}$$

By linearizing the concave parts $-g^{-}$ and $-h^{-}$ at any point $z$, the convex upper approximations of $g$ and $h$ can be obtained as follows [17]:

$$\tilde{g}(w,z)\triangleq g^{+}(w)-g^{-}(z)-\nabla_{w}g^{-}(z)^{T}(w-z)\geq g(w), \tag{23}$$

$$\tilde{h}(w,z)\triangleq h^{+}(w)-h^{-}(z)-\nabla_{w}h^{-}(z)^{T}(w-z)\geq h(w). \tag{24}$$
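The difference-of-convex trick can be checked numerically. The sketch below verifies, for the bilinear part $g(w)=-kl$ with the split $g^{+}-g^{-}$ used in (20), that the linearized surrogate $\tilde{g}$ in (23) upper-bounds $g$ everywhere and is tight at the linearization point:

```python
import numpy as np

def g(k, l):            # g(w) = -k*l, the bilinear (non-convex) part
    return -k * l

def g_plus(k, l):       # convex part: (1/2)(k - l)^2
    return 0.5 * (k - l) ** 2

def g_minus(k, l):      # convex part being subtracted: (1/2)(k^2 + l^2)
    return 0.5 * (k ** 2 + l ** 2)

def g_tilde(k, l, k0, l0):
    """Upper convex approximation (23): linearize g_minus at (k0, l0),
    whose gradient is simply (k0, l0)."""
    lin = g_minus(k0, l0) + k0 * (k - k0) + l0 * (l - l0)
    return g_plus(k, l) - lin

rng = np.random.default_rng(2)
for _ in range(1000):
    k, l, k0, l0 = rng.uniform(0, 2, size=4)
    # Surrogate is a global upper bound on g ...
    assert g_tilde(k, l, k0, l0) >= g(k, l) - 1e-12
    # ... and is tight at the linearization point.
    assert np.isclose(g_tilde(k0, l0, k0, l0), g(k0, l0))
```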

Let the subscript $(m,n)$ denote the variables, parameters, and functions related to master $m$ and worker $n$, e.g., $w_{m,n}\triangleq[l_{m,n},k_{m,n},t]^{T}$, $g_{m,n}$, $h_{m,n}$; thus,

$$-\mathbb{E}\left[X_m(t)\right]=\sum_{n=1}^{N}\left[g_{m,n}(w_{m,n})+e^{u_{m,n}a_{m,n}}h_{m,n}(w_{m,n})\right]. \tag{25}$$

Let $w_m\triangleq\{w_{m,n}\}_{n=1}^{N}$ and $z_m\triangleq\{z_{m,n}\}_{n=1}^{N}$. Now we can give a convex upper approximation for the left-hand side of constraint (6) in the following lemma.

###### Lemma 2.

The left-hand side of constraint (6) can be upper-bounded by a convex function as follows:

$$L_m-\mathbb{E}\left[X_m(t)\right]\leq L_m+\sum_{n=1}^{N}\left[\tilde{g}_{m,n}(w_{m,n},z_{m,n})+e^{u_{m,n}a_{m,n}}\tilde{h}_{m,n}(w_{m,n},z_{m,n})\right]\triangleq\tilde{q}_m(w_m,z_m). \tag{26}$$

Let $z$ be a feasible point of $\mathcal{P}3$. The convex approximation to $\mathcal{P}3$ at point $z$, defined as $\mathcal{P}(z)$, is given by:

$$\begin{aligned}\mathcal{P}(z):\ \min_{\{l_{m,n},k_{m,n},t\}}\quad & t &&\text{(27a)}\\ \text{s.t.}\quad & \tilde{q}_m(w_m,z_m)\leq 0,\ \forall m, &&\text{(27b)}\\ & \text{Constraints (3c), (7b)}.\end{aligned}$$

A probabilistic worker assignment and load allocation algorithm based on the SCA method is proposed in Algorithm 3. A diminishing step-size rule with a decreasing ratio is adopted, guaranteeing the convergence of the SCA [17]; in line 5, the step-size of the current iteration is computed accordingly. Starting from a feasible point of $\mathcal{P}3$, we iteratively solve convex optimization problems $\mathcal{P}(z)$, in which constraint (6) is replaced by its convex upper approximation (27b). The iteration terminates when the solution becomes stationary, and according to Theorem 2 in [17], the stationary solution obtained by the SCA-based algorithm is a local optimum.

### IV-A Comparison of Dedicated and Probabilistic Assignments

We remark that the completion time of probabilistic worker assignment is a lower bound on that achieved by dedicated worker assignment, since any feasible point of dedicated assignment is also feasible for probabilistic assignment. However, dedicated assignment simplifies the connections between workers and masters, and requires less communication for the multicast of the input vectors $x_m$ and less storage at each worker. Moreover, the proposed dedicated assignment algorithms have lower computational complexity and are easier to implement.

## V Simulation Results

In this section, we evaluate the average task completion time of the proposed dedicated and probabilistic worker assignment algorithms, in both small-scale and large-scale scenarios. In the small-scale scenario, we consider a small number of masters and workers, and three benchmarks: 1) Uncoded computing with uniform dedicated worker assignment: each master is assigned an equal number of workers, and $A_m$ is equally partitioned among them into sub-matrices without coding. 2) Coded computing with uniform dedicated worker assignment [5]: each master is assigned an equal number of workers, and the load is allocated according to Theorem 1. 3) Brute-force search for dedicated worker assignment: the oracle solution for dedicated worker assignment is obtained by searching over all possible combinations, with the load allocated according to Theorem 1. In the large-scale scenario, we consider more masters and workers, and only use the first two benchmarks, due to the high complexity of the brute-force search.

The straggling parameter $u_{m,n}$ is randomly selected within a fixed range, and the shift parameter $a_{m,n}$ and the task dimensions are set following [5]. In Algorithm 1, we randomly remove workers in each exploration phase. In Algorithm 3, we set a convergence threshold and a decreasing ratio for the step-size, and use the CVX toolbox to solve each convex approximation problem. We obtain the worker assignment and load allocation from the algorithms that minimize the approximate completion time. We then carry out Monte Carlo realizations and calculate the empirical cumulative distribution function (CDF) and the average of the task completion time.

Fig. 2 shows the CDFs of the task completion time. The proposed greedy dedicated assignment and SCA-based probabilistic assignment algorithms outperform the uncoded and coded benchmarks with uniform assignment of dedicated workers. The CDFs achieved by the iterated and simple greedy algorithms are very close, and both are close to that of the optimal brute-force search. At the same success probability $\rho_s$, the three dedicated assignment algorithms achieve nearly identical task completion times. Probabilistic assignment further outperforms dedicated assignment, which is consistent with the fact that it provides a lower bound for dedicated assignment.

Fig. 3 compares the average task completion time achieved by the proposed algorithms and the benchmarks. The first four groups of bars show the average time each master needs to finish its own task using different algorithms. The fifth group of bars shows the average task completion time of the system, which is what we aim to minimize, obtained by averaging the maximum completion time over the realizations. From the fifth group of bars, we can see that all the proposed algorithms substantially reduce the delay compared to the uncoded benchmark, and further reduce it compared to the coded benchmark. The performance gain is mainly achieved by taking the heterogeneity of the system into account. From the first four groups of bars, we can see that the average delays of the masters under our proposed algorithms are very close to each other, indicating that the workers and loads are assigned in a balanced manner.

In Fig. 4, the impact of the decreasing ratio on the convergence of the SCA-based probabilistic assignment algorithm is evaluated in the large-scale scenario. The decreasing ratio determines the step-size, and thus the convergence rate, of the SCA algorithm. We can see that with a properly chosen decreasing ratio, the proposed SCA algorithm converges within a small number of iterations, and outperforms the iterated greedy algorithm for dedicated worker assignment.

## VI Conclusions

We have considered a joint worker assignment and load allocation problem in a distributed computing system with heterogeneous computing servers, i.e., workers, and multiple master nodes competing for these workers. MDS coding has been adopted by the masters to mitigate the straggler effect, and both dedicated and probabilistic assignment algorithms have been proposed, in order to minimize the average task completion time. Simulation results show that the proposed algorithms can significantly reduce the task completion time compared to both uncoded task assignment and an unbalanced coded scheme. While probabilistic assignment is more general, we have observed through simulations that the two policies achieve similar delay performance. We have also noted that dedicated assignment has lower computational complexity and lower communication and storage requirements, which is beneficial for practical implementations. As future work, we plan to take communication delay into consideration, and to develop decentralized algorithms.

## Appendix A Proof of Lemma 1

It is easy to see that (8a) and (8c) are a convex objective and convex constraints, respectively. Let

$$f(x,t)=-x\left(1-e^{-\frac{u}{x}(t-ax)}\right), \tag{28}$$

with variables $x>0$, $t\geq 0$, and parameters $u>0$, $a>0$. The Hessian matrix of $f$ is:

$$H=\begin{bmatrix}\frac{\partial^{2}f}{\partial x^{2}} & \frac{\partial^{2}f}{\partial x\partial t}\\[2pt] \frac{\partial^{2}f}{\partial t\partial x} & \frac{\partial^{2}f}{\partial t^{2}}\end{bmatrix}=e^{-\frac{u}{x}(t-ax)}\begin{bmatrix}\frac{u^{2}t^{2}}{x^{3}} & -\frac{u^{2}t}{x^{2}}\\[2pt] -\frac{u^{2}t}{x^{2}} & \frac{u^{2}}{x}\end{bmatrix}. \tag{29}$$

The eigenvalues of $H$ are $0$ and $e^{-\frac{u}{x}(t-ax)}\left(\frac{u^{2}t^{2}}{x^{3}}+\frac{u^{2}}{x}\right)\geq 0$. Thus $H\succeq 0$, and $f$ is convex. Letting $x=l_{m,n}$ and $t=t_m$, each term $-l_{m,n}\left(1-e^{-\frac{u_{m,n}}{l_{m,n}}\left(t_m-a_{m,n}l_{m,n}\right)}\right)$ is convex. Constraint (8b) is a summation of convex functions, and hence convex. Therefore, $\mathcal{P}4$ is a convex optimization problem.
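The Hessian in (29) can also be checked numerically: its determinant vanishes and its trace is nonnegative, so the eigenvalues are $0$ and a nonnegative value. A quick sketch with illustrative parameters:

```python
import numpy as np

u, a = 2.0, 0.01                   # illustrative parameters

def hessian(x, t):
    """Hessian of f(x, t) from (29)."""
    pref = np.exp(-(u / x) * (t - a * x))
    return pref * np.array([[u**2 * t**2 / x**3, -u**2 * t / x**2],
                            [-u**2 * t / x**2,    u**2 / x       ]])

rng = np.random.default_rng(3)
for _ in range(100):
    x, t = rng.uniform(0.5, 10.0, size=2)
    eig = np.linalg.eigvalsh(hessian(x, t))
    # One eigenvalue is exactly 0 (the determinant vanishes); the other
    # equals the nonnegative trace, so H is positive semidefinite.
    assert eig.min() >= -1e-9
```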

## Appendix B Proof of Theorem 1

By jointly considering (12) and (13a), we get $\lambda_m^{*}>0$. Then, from (13a), setting (11) equal to zero and rearranging, we have:

$$-\left(1+\frac{t_m^{*}u_{m,n}}{l_{m,n}^{*}}\right)e^{-\left(1+\frac{t_m^{*}u_{m,n}}{l_{m,n}^{*}}\right)}=-e^{-u_{m,n}a_{m,n}-1}, \tag{30}$$

$$\frac{t_m^{*}}{l_{m,n}^{*}}=\frac{-W_{-1}\left(-e^{-u_{m,n}a_{m,n}-1}\right)-1}{u_{m,n}}=\phi_{m,n}. \tag{31}$$

Substituting (31) into (13b), we have

$$L_m-\sum_{n\in\Omega_m}\frac{t_m^{*}}{\phi_{m,n}}\left(1-\frac{1}{1+u_{m,n}\phi_{m,n}}\right)=0. \tag{32}$$

Thus, $l_{m,n}^{*}$ and $t_m^{*}$ can be derived as in Theorem 1.