I. Introduction
One of the main factors behind the success of machine learning algorithms is the availability of large datasets for training. However, as datasets become ever larger, the required computation becomes impossible to execute on a single machine within a reasonable time frame. This computational bottleneck can be overcome by distributing the learning task across multiple machines, called workers.

Gradient descent (GD) is the most common approach in supervised learning, and it can be easily distributed. In a parameter server (PS) type framework [1], the dataset is divided among the workers, and at each iteration, the workers compute gradients based on their local data, which are then aggregated by the PS. However, slow, so-called straggling, workers are the Achilles' heel of distributed GD (DGD), since the PS has to wait for all the workers to complete an iteration. A wide range of straggler-mitigation strategies have been proposed in recent years [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. The main idea is to introduce redundancy into the computations assigned to each worker, so that fast workers can compensate for the stragglers.

Most coded computation solutions for straggler mitigation suffer from two drawbacks. First, they allow each worker to send only a single message per iteration, which results in the underutilization of computational resources [19]. Second, they recover the full gradient at each iteration, which may unnecessarily increase the average completion time of an iteration. The multi-message communication (MMC) strategy addresses the first drawback by allowing each worker to send multiple messages per iteration, thus seeking a balance between computation and communication latency [8, 11, 15, 20, 21, 22, 23, 5]. Reference [24] addresses the second drawback by combining coded computation with partial recovery (CCPR) to provide a trade-off between the average completion time of an iteration and the accuracy of the recovered gradient estimate.
If the straggling behavior is not independent and identically distributed across time and workers, which is often the case in practice, the gradient estimates recovered by the CCPR scheme become biased. This may happen, for example, when a worker straggles over multiple consecutive iterations. To avoid biased updates, it is critical to regulate the recovery frequency of the partial computations so that each partial computation contributes to the model updates as equally as possible. We use the age of information (AoI) metric to track the recovery frequency of the partial computations.
AoI has been proposed to quantify data freshness in systems that involve time-sensitive information [25]. AoI studies aim to guarantee the timely delivery of time-critical information to receivers. AoI has found applications in queueing and networks, scheduling and optimization, and reinforcement learning (see the survey in [26]). Recently, [27] considered the age metric in a distributed computation system that handles time-sensitive computations, and [28] introduced an age-based metric to quantify the staleness of each update in a federated learning system. In our work, we associate an age with each partial computation and use this age to track the time elapsed since the last time that partial computation was recovered.

In this paper, we design a dynamic encoding framework for the CCPR scheme that includes a timely dynamic order operator to prevent biased updates and improve performance. The proposed scheme increases the timeliness of the recovered partial computations by changing the codewords and their computation order over time. To regulate the recovery frequencies, we use the ages of the partial computations in the design of the dynamic order operator. We show through numerical experiments on a linear regression problem that the proposed dynamic encoding scheme increases the timeliness of the recovered computations, yields less biased model updates, and, as a result, achieves better convergence performance than the conventional static encoding framework.
II. System Model and Problem Formulation
For completeness, we first present the coded computation framework and the CCPR scheme.
II-A. DGD with Coded Computation
We focus on the least-squares linear regression problem, where the loss function is the empirical mean squared error
$L(\boldsymbol{\theta}) = \frac{1}{2N}\sum_{i=1}^{N}\left(y_i - \mathbf{x}_i^{\top}\boldsymbol{\theta}\right)^{2}$ (1)
where $\mathbf{x}_i \in \mathbb{R}^{d}$, $i=1,\ldots,N$, are the data points with corresponding labels $y_i \in \mathbb{R}$, and $\boldsymbol{\theta} \in \mathbb{R}^{d}$ is the parameter vector. The optimal parameter vector can be obtained iteratively using the gradient descent (GD) method
$\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_{t} - \eta_t \nabla L(\boldsymbol{\theta}_t)$ (2)
where $\eta_t$ is the learning rate and $\boldsymbol{\theta}_t$ is the parameter vector at the $t$th iteration. The gradient of the loss function in (1) is
$\nabla L(\boldsymbol{\theta}_t) = \frac{1}{N}\left(\mathbf{X}^{\top}\mathbf{X}\boldsymbol{\theta}_t - \mathbf{X}^{\top}\mathbf{y}\right)$ (3)
where $\mathbf{X} = [\mathbf{x}_1,\ldots,\mathbf{x}_N]^{\top}$ and $\mathbf{y} = [y_1,\ldots,y_N]^{\top}$. In (3), only $\boldsymbol{\theta}_t$ changes over the iterations. Thus, the key computational task at each iteration is the matrix-vector multiplication $\mathbf{W}\boldsymbol{\theta}_t$, where $\mathbf{W} \triangleq \mathbf{X}^{\top}\mathbf{X}$. To speed up GD, the execution of this multiplication can be distributed across workers, by simply dividing $\mathbf{W}$ into equal-size disjoint submatrices. However, under this naive approach, the computation time is limited by the straggling workers [7].
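As a concrete baseline, the following minimal sketch (our own illustration; the names are not from the paper) distributes the per-iteration matrix-vector product across workers by splitting the Gram matrix row-wise:

```python
import numpy as np

# Minimal sketch of uncoded distributed gradient computation. The key
# per-iteration task is the product W @ theta with W = X.T @ X, split
# row-wise into equal-size disjoint blocks, one per worker.
def split_rowwise(W, n_workers):
    """Divide W into equal-size disjoint row blocks."""
    return np.split(W, n_workers, axis=0)

def uncoded_iteration(blocks, theta):
    """Each worker multiplies its block; the PS concatenates results."""
    partials = [Wk @ theta for Wk in blocks]
    return np.concatenate(partials)

rng = np.random.default_rng(0)
d, n_workers = 8, 4
X = rng.standard_normal((20, d))
W = X.T @ X                       # d x d Gram matrix
theta = rng.standard_normal(d)

blocks = split_rowwise(W, n_workers)
full = uncoded_iteration(blocks, theta)
assert np.allclose(full, W @ theta)   # exact product recovered
```

Note that the PS must wait for all four blocks here, which is exactly the straggler bottleneck that coding addresses.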
Coded computation is used to tolerate stragglers by encoding the data to add a certain amount of redundancy before distributing it among the workers. That is, with coded computation, redundant partial computations are created such that the result of the overall computation can be obtained from a subset of the partial computations. Thus, up to a certain number of stragglers can be tolerated, since the PS can recover the computation result without receiving partial results from all the workers. Many coded computation schemes, including MDS [7, 8, 14], LDPC [9], and rateless [21] codes, along with their various variants, have been studied in the literature.
II-B. Coded Computation with Partial Recovery (CCPR)
In naive uncoded distributed gradient computation, straggling workers result in erasures in the gradient vector, as illustrated in Fig. 1. The main motivation behind coded computation schemes is to guarantee the recovery of the gradient vector without any erasures from the responses of the smallest possible number of workers. Alternatively, the CCPR scheme [24] allows erasures in the gradient vector to reduce the computation time, while controlling the number of erasures to guarantee a certain accuracy for the gradient estimate.
To enable partial recovery, we focus on a linear code structure in which $\mathbf{W}$ is initially divided into $K$ disjoint submatrices $\mathbf{W}_1,\ldots,\mathbf{W}_K$. Then, coded submatrices are assigned to each worker for computation, where each coded submatrix is a linear combination of the $\mathbf{W}_k$'s, i.e.,
$\widetilde{\mathbf{W}}_{i,j} = \sum_{k=1}^{K} \alpha_{i,j,k}\,\mathbf{W}_k$ (4)

where $\widetilde{\mathbf{W}}_{i,j}$ denotes the $j$th coded submatrix assigned to the $i$th worker.
Following the initial encoding phase, at each iteration $t$, the $i$th worker computes the products $\widetilde{\mathbf{W}}_{i,j}\boldsymbol{\theta}_t$ in the given order and sends the results one by one as soon as they are completed. In the meantime, the PS collects coded computations from all the workers until it successfully recovers the desired fraction of the gradient entries. This fraction is determined by the tolerance $q$, which is a design parameter and can be chosen according to the learning problem. In the scope of this work, we utilize the random circularly shifted (RCS) code [24], which allows the workers to change codewords over time. In a broad sense, in RCS codes, $\mathbf{W}$ is divided into $K$ submatrices, and those submatrices are concatenated to form the ordered list $[\mathbf{W}_1,\ldots,\mathbf{W}_K]$. Then, an assignment matrix, whose columns show the submatrices assigned to each worker, is formed by applying random circular shifts to this list. Once the assignment matrix is established, the codewords for the $i$th worker can be constructed by combining the submatrices in the $i$th column according to a certain order.
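A simplified sketch of an RCS-style assignment construction follows; we assume each of the $r$ rows of the assignment matrix is an independent random circular shift of the submatrix indices (the exact construction in [24] may differ in its details):

```python
import numpy as np

# Simplified RCS-style assignment sketch. K submatrix indices 0..K-1;
# each of the r rows of the assignment matrix is a random circular
# shift of the base order, so column i lists the r submatrices
# stored at worker i.
def rcs_assignment(K, r, seed=0):
    rng = np.random.default_rng(seed)
    base = np.arange(K)
    rows = [np.roll(base, int(rng.integers(K))) for _ in range(r)]
    return np.stack(rows)                 # shape (r, K)

A = rcs_assignment(K=6, r=3)
# each row is a permutation of the K indices, so every submatrix
# appears exactly r times in the matrix
assert all(sorted(row) == list(range(6)) for row in A.tolist())
```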
Next, we illustrate how RCS codes can be adapted to timely dynamic encoding.
II-C. Partial Recovery and Timely Dynamic Encoding
The dynamic encoding process consists of three phases, namely data partition, ordering, and encoding, whose corresponding operators are denoted by $\mathcal{P}$, $\mathcal{O}$, and $\mathcal{E}$, respectively. The data partition operator distributes the submatrices among the workers such that
$\mathcal{P}: \{\mathbf{W}_1,\ldots,\mathbf{W}_K\} \rightarrow \{\mathcal{W}_1,\ldots,\mathcal{W}_n\}, \quad |\mathcal{W}_i| \leq r$ (5)
where $r$ is a given memory constraint and $\mathcal{W}_i$ is the set of submatrices assigned to the $i$th worker. We assume that the operator $\mathcal{P}$ is executed, for each worker, only once before the training process, and thus the set $\mathcal{W}_i$ remains the same over the iterations. The order operator is used to form an ordered set $\mathcal{W}^{(t)}_i$ from the initial set $\mathcal{W}_i$ for encoding, i.e.,
$\mathcal{O}: \mathcal{W}_i \rightarrow \mathcal{W}^{(t)}_i$ (6)
where $\mathcal{W}^{(t)}_i$ is an ordered set representing the order of computation at iteration $t$ for the $i$th worker. We remark that, unlike the data partition operator, the order operator may change over time. These two operators together can be represented by an assignment matrix $\mathbf{A}^{(t)}$, whose $i$th column is given by $\mathcal{W}^{(t)}_i$.
Once the assignment matrix is fixed, the encoding process is executed according to a degree vector $\mathbf{m} = [m_1,\ldots,m_L]$, which identifies the degree of each codeword based on its computation order. Encoding is executed for each worker independently. The encoding operator maps the ordered set of data (submatrices) to an ordered set of codewords of size $L$, where $L$ is the length of $\mathbf{m}$, i.e.,
$\mathcal{E}: \mathcal{W}^{(t)}_i \rightarrow \mathcal{C}^{(t)}_i, \quad |\mathcal{C}^{(t)}_i| = L$ (7)
The encoding operator first divides the set $\mathcal{W}^{(t)}_i$ into $L$ disjoint subsets $\mathcal{W}^{(t)}_{i,1},\ldots,\mathcal{W}^{(t)}_{i,L}$, such that $|\mathcal{W}^{(t)}_{i,j}| = m_j$. Then, at iteration $t$, the coded submatrix of the $i$th worker with computation order $j$, denoted by $\widetilde{\mathbf{W}}^{(t)}_{i,j}$, is constructed as
$\widetilde{\mathbf{W}}^{(t)}_{i,j} = \sum_{\mathbf{W}_k \in \mathcal{W}^{(t)}_{i,j}} \mathbf{W}_k$ (8)
An example assignment matrix is given below:
(9) 
The elements of the assignment matrix are colored to illustrate the first step of the encoding operator: blue, red, and yellow represent the submatrices used to generate the first, second, and third codewords, respectively. The encoding phase for the first worker at iteration $t$ is illustrated below:
(10) 
With this code, the worker first computes the product of its first codeword with $\boldsymbol{\theta}_t$ and sends the result directly to the PS. Then, it computes the product of its second codeword and sends the result to the PS. Finally, it computes the product of its third codeword and sends the result to the PS.
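The encoding step described above can be sketched as follows; the function and variable names, and the toy degree vector, are our illustrative assumptions, with codewords formed as plain sums of the grouped submatrices as in (8):

```python
import numpy as np

# Illustrative encoder sketch for one worker. Given the worker's
# ordered submatrix indices (a column of the assignment matrix) and a
# degree vector, codeword j is the sum of the next m_j submatrices.
def encode_column(column, degrees, submatrices):
    assert sum(degrees) == len(column)
    codewords, pos = [], 0
    for m_j in degrees:
        group = column[pos:pos + m_j]
        codewords.append(sum(submatrices[k] for k in group))
        pos += m_j
    return codewords

subs = [np.eye(2) * (k + 1) for k in range(4)]   # toy submatrices
cw = encode_column([0, 1, 2, 3], degrees=[1, 1, 2], submatrices=subs)
assert np.allclose(cw[0], subs[0])               # degree-1 codeword
assert np.allclose(cw[2], subs[2] + subs[3])     # degree-2 codeword
```

At run time, the worker would multiply each codeword with the current parameter vector and transmit the products in order.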
Next, we formally state the problem using the data partition, ordering and encoding operators.
II-D. Problem Definition
The recovery of a partial computation $\mathbf{W}_k\boldsymbol{\theta}_t$ at iteration $t$ depends on the data partition $\mathcal{P}$, the ordering decisions $\mathcal{O}$, the encoding decisions $\mathcal{E}$, the computation delay statistics of the workers, $\boldsymbol{\lambda}$, and the tolerance $q$, i.e.,
$\mathbf{r}^{(t)} = g\left(\mathcal{P}, \mathcal{O}, \mathcal{E}, \boldsymbol{\lambda}, q\right)$ (11)
where $g$ is the recovery operation, which returns a vector $\mathbf{r}^{(t)} \in \{0,1\}^{K}$ indicating the recovered partial computations, such that $r^{(t)}_k = 1$ if $\mathbf{W}_k\boldsymbol{\theta}_t$ is recovered at the PS, for $k = 1,\ldots,K$.
In the partial recovery approach, without any further control on the assigned computations, the operators $\mathcal{P}$, $\mathcal{O}$, and $\mathcal{E}$ are fixed throughout the training process. Thus, the recovered submatrix indices may be correlated over time, and some partial computations may not be recovered at all. We note that this kind of recovery behavior may lead to divergence, especially when the tolerance is large, since the updates become biased. Our goal is to introduce a dynamic approach to the coded computation/partial recovery procedure to regulate the recovery frequency of each partial computation. To this end, we first introduce an age-based performance metric.
We define the age of the $k$th partial computation at iteration $t$, denoted by $a_k(t)$, as the number of iterations since the last time the PS recovered $\mathbf{W}_k\boldsymbol{\theta}$. The age of each partial computation is updated at the end of each iteration as follows:
$a_k(t+1) = \begin{cases} 1, & \text{if } r^{(t)}_k = 1 \\ a_k(t) + 1, & \text{otherwise} \end{cases}$ (12)
A sample age evolution of a partial computation is shown in Fig. 2, where the age drops each time the partial computation is recovered. The average age of the partial computation over a training interval of $T$ iterations is
$\Delta_k = \frac{1}{T}\sum_{t=1}^{T} a_k(t)$ (13)
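The age bookkeeping can be sketched as follows; whether a recovered computation's age resets to 0 or 1 is a convention, and we assume a reset to 1 here:

```python
# Hedged sketch of the age bookkeeping: the age of a recovered partial
# computation resets (to 1, by our assumed convention) while the ages
# of the others grow by one at the end of each iteration.
def update_ages(ages, recovered):
    return [1 if rec else a + 1 for a, rec in zip(ages, recovered)]

def average_age(age_trace, k):
    """Average age of computation k over a trace of per-iteration ages."""
    return sum(row[k] for row in age_trace) / len(age_trace)

ages = [1, 3, 2]
trace = []
for rec in ([False, True, False], [True, False, False]):
    ages = update_ages(ages, rec)
    trace.append(ages)
assert trace == [[2, 1, 3], [1, 2, 4]]
assert average_age(trace, 2) == 3.5
```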
In order to make sure that each submatrix contributes to the model updates as equally as possible during the training period, our goal is to keep the age of each partial computation under a certain threshold $\tau$. Thus, our objective is
$\min_{\mathcal{P}, \mathcal{O}, \mathcal{E}} \; \sum_{t=1}^{T}\sum_{k=1}^{K} \mathbb{1}\{a_k(t) > \tau\}$ (14)
where $\mathbb{1}\{\cdot\}$ is the indicator function, which returns 1 if its argument holds and 0 otherwise. Here, $\tau$ is a design parameter that determines the desired freshness level for the partial computations and can be adjusted according to the learning problem. We note that the problem in (14) is over all data partition, ordering, and encoding policies, and is therefore hard to solve optimally. Instead of solving (14) exactly, we introduce a timely dynamic ordering technique that can be used to regulate the recovery frequency of the partial computations.
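Under our reading of the objective, the penalty counts the computation-iteration pairs whose age exceeds the threshold; a minimal evaluation sketch (names are ours) is:

```python
# Illustrative evaluation of the aged-computation objective: count the
# computation-iteration pairs whose age exceeds the freshness
# threshold tau, here normalized to a fraction.
def aged_fraction(age_trace, tau):
    """age_trace[t][k] = age of partial computation k at iteration t."""
    total = sum(len(row) for row in age_trace)
    aged = sum(a > tau for row in age_trace for a in row)
    return aged / total

trace = [[1, 2, 5], [2, 1, 6], [3, 2, 1]]
assert aged_fraction(trace, tau=4) == 2 / 9   # only the ages 5 and 6
```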
III. Solution Approach: Timely Dynamic Ordering
In this section, we introduce timely dynamic ordering to better regulate the ages of the partial computations and to avoid biased updates. We keep the data partition and encoding operators fixed and change only the ordering operator dynamically. This timely dynamic ordering is implemented by employing a vertical circular shift on the assignment matrix. With this, we essentially change the codewords and their computation order, which, in turn, changes the recovered indices.
We first employ fixed vertical shifts for dynamic ordering. Then, we will dynamically adjust the shift amount based on the ages of the partial computations.
III-A. Fixed Vertical Shifts
In this code, which we call RCS-1, we employ one vertical shift for each worker at each iteration. That is, the order operator becomes
$\mathcal{O}: \mathcal{W}^{(t)}_i = S^{\,t \bmod r}\left(\mathcal{W}_i\right)$ (15)
where $S$ is the circular shift operator and $t \bmod r$ is a modulo operation returning the remainder of the division of $t$ by $r$. By using vertical shifts, the coded computations transmitted to the PS from a particular worker change over time to prioritize certain partial computations. For example, if a worker employs the ordered set and codewords specified in (10) at iteration $t$, then after applying one vertical shift, its computation order and codewords at iteration $t+1$ are given by
(16) 
Here, we see that the codeword computed first, and hence prioritized, changes from iteration $t$ to iteration $t+1$. We note that, in this method, the shift amount is fixed to one shift at each iteration and is independent of the ages of the partial computations.
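A sketch of the RCS-1 order operator follows; we assume the vertical shift moves the last row of the assignment matrix to the top (the shift direction is a convention):

```python
import numpy as np

# Sketch of the RCS-1 order operator: at every iteration each worker's
# column of the assignment matrix is circularly shifted by one
# position, so the prioritized (first) submatrix rotates over time.
def rcs1_step(A):
    return np.roll(A, shift=1, axis=0)

A = np.array([[0, 1],
              [1, 2],
              [2, 3]])          # r = 3 ordered slots, 2 workers
A_next = rcs1_step(A)
# the previously last submatrix of each worker is now computed first
assert (A_next == np.array([[2, 3], [0, 1], [1, 2]])).all()
```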
Next, we introduce an age-based vertical shift scheme to control the order of computations.
III-B. Age-Based Vertical Shifts
In this code, which we call RCS-adaptive, we choose the vertical shift amount based on the current ages of the partial computations. That is, instead of shifting by one position at each iteration, the shift amount changes across iterations based on the ages of the partial computations. To effectively avoid biased updates, we focus on recovering the partial computations with the highest ages at the current iteration, that is, the computations that have not been recovered in a while. In line with the problem in (14), we call the partial computations with age higher than the threshold $\tau$ aged partial computations; these need to be recovered as soon as possible. To this end, a vertical shift amount is selected that places the maximum number of aged partial computations in the first position of the non-straggling workers' computation order, so that they have a higher chance of recovery in the next iteration. In particular, to determine the shift amount for an iteration, the PS considers the computation order at the workers that returned at least one computation in the previous iteration and determines a shift that places the maximum number of aged partial computations in the first position at these workers. Upon determining the shift amount, every worker's assignment matrix is shifted by that amount in the next iteration. For example, if the age-based shift amount in a given iteration is $s$, then the first worker has
(17) 
Here, in that iteration, the first worker prioritizes the computation of the codeword moved to the first position by the shift.
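A hedged sketch of the age-based shift selection follows; the data structures and tie-breaking are our assumptions:

```python
import numpy as np

# Sketch of age-based shift selection: among the r candidate vertical
# shifts, pick the one placing the most aged computations (age > tau)
# in the first position of the active (non-straggling) workers'
# columns of the assignment matrix.
def age_based_shift(A, ages, tau, active_workers):
    r = A.shape[0]
    best_shift, best_count = 0, -1
    for s in range(r):
        tops = np.roll(A, shift=s, axis=0)[0, active_workers]
        count = sum(ages[int(k)] > tau for k in tops)
        if count > best_count:
            best_shift, best_count = s, count
    return best_shift

A = np.array([[0, 1],
              [1, 2],
              [2, 0]])
ages = {0: 1, 1: 5, 2: 6}        # computations 1 and 2 are aged (tau=3)
s = age_based_shift(A, ages, tau=3, active_workers=[0, 1])
tops = np.roll(A, shift=s, axis=0)[0]
assert all(ages[int(k)] > 3 for k in tops)   # both tops are aged
```

With this toy input, the selected shift places both aged computations at the top of the two active workers' columns, so both are prioritized in the next iteration.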
IV. Numerical Results
In this section, we provide numerical results comparing the proposed age-based partial computation scheme to alternative static schemes, using a model-based scenario for the computation latencies. For the simulations, we consider a linear regression problem over synthetically created training and test datasets, as in [10]. We also fix the model size $d$, the number of workers $n$, and the number of computations $r$ that each worker can return. A single simulation consists of $T$ iterations, and for all simulations we use a constant learning rate. To model the computation delays at the workers, we adopt the model in [19], and assume that the probability of completing $s$ computations at any worker, performing $s$ identical matrix-vector multiplications, by time $t$ is given by
(18) 
First, we consider an extreme straggling scenario, in which we assume that there are persistent stragglers, fixed for all iterations, that do not complete any partial computations. For the non-persistent stragglers, we fix the parameters of the delay model in (18).^1 In Fig. 3, we set the tolerance level such that at each iteration the PS aims at recovering a fixed fraction of the total partial computations. We see that the proposed timely dynamic encoding strategy with one vertical shift at each iteration, RCS-1, achieves a significantly better convergence performance than the conventional static encoding with RCS. When the ages of the partial computations are taken into consideration in determining the order of computation at each iteration, with the proposed RCS-adaptive scheme and a given age threshold, we observe a further improvement in the convergence performance.

^1 To simulate the straggling behavior in our simulations, we take a negligible completion rate for the persistent stragglers, so that effectively they do not complete any partial computations.
An interesting observation comes from Fig. 4, where we plot the average ages of the partial computations. While the proposed timely dynamic encoding strategy does not result in a better average age performance for every single partial computation, it targets the partial computations with the highest average age (see the highlighted computation tasks in Fig. 4). By utilizing the dynamic order operator, we essentially lower the average age of the partial computations with the worst age performance, at the expense of a slight increase in that of some of the remaining partial computations. As expected, the age-based vertical shift strategy further lowers the average ages of the partial computations. Here, we can draw a parallel between this result and [29], which shows that distributed SGD maintains its asymptotic convergence rate as long as each component is received sufficiently often. From Fig. 4, we can see that the proposed vertical shift operator guarantees that, on average, each task is received frequently enough, since the yellow bar in Fig. 4 stays below the corresponding threshold for each partial computation.
We note that in Figs. 3 and 4 the performance gap between the RCS-1 and RCS-adaptive schemes is narrow. This shows that the randomness introduced by a fixed vertical shift is already quite helpful in mitigating biased updates by keeping the partial computations less stale.
In Table I, we look at the value of the objective function in (14) for a fixed age threshold and for varying tolerance levels, in the case of fixed persistent stragglers throughout all iterations. We observe that, for each tolerance level, RCS-1 achieves a better performance than the static RCS scheme, whereas the age-based vertical shift method, RCS-adaptive, yields the best performance. This is because the RCS-adaptive scheme specifically targets the computation tasks whose average age is higher than the threshold, to effectively create less biased model updates in which each partial computation contributes to the learning task more uniformly.
Tolerance level | RCS | RCS-1 | RCS-adaptive

Second, we consider a more realistic scenario and model the straggling behavior of the workers with a two-state Markov chain having a slow state and a fast state, such that computations are completed faster when a worker is in the fast state. This is similar to the Gilbert-Elliot service times considered in [12, 30]. Specifically, in (18) the completion rate in the fast state is larger than that in the slow state. We assume that state changes can occur only at the beginning of each iteration, with a switching probability $p$; that is, with probability $1-p$ the state stays the same. A low switching probability indicates that the straggling behavior tends to persist into the next iteration. In Fig. 5, we let a subset of the workers start in the slow state, i.e., initially these workers are stragglers. We note that with the same initial stragglers and $p = 0$ we recover the setting considered in Fig. 3. We observe in Fig. 5 that the proposed timely dynamic encoding strategy improves the convergence performance, even though the improvement is smaller than in the setting of Fig. 3. This is because, in this scenario, the straggling behavior is less correlated over iterations, which results in less biased model updates even for the static RCS scheme. Further, we see in Fig. 5 that the RCS-adaptive scheme performs the best for one choice of the age threshold, whereas the RCS-1 scheme outperforms the RCS-adaptive scheme for another choice. This shows that the age threshold needs to be tuned to get the best performance from the RCS-adaptive scheme.
Even though we focus on the distributed coded computation scenario in this work, the proposed dynamic order operator can also be applied when the computations are assigned to the workers in an uncoded fashion. To see the performance in the case of uncoded computations with MMC, we consider the same setup as in Fig. 3 with uncoded (degree-one) computations. In Fig. 6, we observe that the static partial recovery scheme fails to converge, since, if coding is not implemented along with partial recovery, the model updates are highly biased in the presence of persistent stragglers. However, when the dynamic order operator is employed, particularly the age-aware vertical shifts, convergence is achieved.
V. Conclusion
MMC and partial recovery are two strategies designed to enhance the performance of coded computation employed for straggler-aware distributed learning. The main drawback of the partial recovery strategy is biased model updates, which arise when the straggling behaviors of the workers are correlated over time. To prevent biased updates, we introduced a timely dynamic encoding strategy that changes the codewords and their computation order over time. We use an age metric to regulate the recovery frequencies of the partial computations. Through several experiments on a linear regression problem, we show that dynamic encoding, particularly an age-based encoding strategy, can significantly improve the convergence performance compared to conventional static encoding schemes. Although our main focus is on coded computation, the advantages of the proposed strategy are not limited to the coded computation scenario. The proposed timely dynamic encoding strategy can be utilized in coded communication and uncoded computation scenarios as well.
References
 [1] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. In USENIX Conference on Operating Systems Design and Implementation, October 2014.
 [2] R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis. Gradient coding: Avoiding stragglers in distributed learning. In ICML, August 2017.
 [3] W. Halbawi, N. Azizan, F. Salehi, and B. Hassibi. Improving distributed gradient descent using Reed-Solomon codes. In IEEE ISIT, June 2018.
 [4] E. Ozfatura, D. Gunduz, and S. Ulukus. Gradient coding with clustering and multi-message communication. In IEEE DSW, June 2019.
 [5] L. Tauz and L. Dolecek. Multi-message gradient coding for utilizing non-persistent stragglers. In Asilomar Conference, November 2019.
 [6] N. Charalambides, M. Pilanci, and A. O. Hero. Weighted gradient coding with leverage score sampling. In IEEE ICASSP, May 2020.
 [7] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran. Speeding up distributed machine learning using codes. IEEE Tran. Inf. Theory, 64(3):1514–1529, March 2018.
 [8] N. Ferdinand and S. C. Draper. Hierarchical coded computation. In IEEE ISIT, June 2018.

 [9] R. K. Maity, A. Singh Rawat, and A. Mazumdar. Robust gradient descent via moment encoding and LDPC codes. In IEEE ISIT, July 2019.
 [10] S. Li, S. M. M. Kalan, Q. Yu, M. Soltanolkotabi, and A. S. Avestimehr. Polynomially coded regression: Optimal straggler mitigation via data encoding. January 2018. Available on arXiv:1805.09934.
 [11] E. Ozfatura, D. Gunduz, and S. Ulukus. Speeding up distributed gradient descent by utilizing non-persistent stragglers. In IEEE ISIT, July 2019.
 [12] C. S. Yang, R. Pedarsani, and A. S. Avestimehr. Timely coded computing. In IEEE ISIT, July 2019.
 [13] S. Dutta, M. Fahim, F. Haddadpour, H. Jeong, V. Cadambe, and P. Grover. On the optimal recovery threshold of coded matrix multiplication. IEEE Trans. Inf. Theory, 66(1):278–301, July 2019.
 [14] H. Park, K. Lee, J. Sohn, C. Suh, and J. Moon. Hierarchical coding for distributed computing. In IEEE ISIT, June 2018.
 [15] S. Kiani, N. Ferdinand, and S. C. Draper. Exploitation of stragglers in coded computation. In IEEE ISIT, June 2018.
 [16] Y. Yang, M. Interlandi, P. Grover, S. Kar, S. Amizadeh, and M. Weimer. Coded elastic computing. In IEEE ISIT, July 2019.
 [17] Q. Yu, M. A. MaddahAli, and A. S. Avestimehr. Straggler mitigation in distributed matrix multiplication: Fundamental limits and optimal coding. In IEEE ISIT, June 2018.
 [18] R. Bitar, M. Wootters, and S. El Rouayheb. Stochastic gradient coding for straggler mitigation in distributed learning. IEEE Journal on Sel. Areas in Inf. Theory, pages 1–1, 2020.
 [19] E. Ozfatura, S. Ulukus, and D. Gunduz. Straggler-aware distributed learning: Communication-computation latency trade-off. Entropy, special issue on Interplay Between Storage, Computing, and Communications from an Information-Theoretic Perspective, 22(5):544, May 2020.
 [20] E. Ozfatura, S. Ulukus, and D. Gündüz. Distributed gradient descent with coded partial gradient computations. In IEEE ICASSP, May 2019.
 [21] A. Mallick, M. Chaudhari, and G. Joshi. Fast and efficient distributed matrixvector multiplication using rateless fountain codes. In IEEE ICASSP, May 2019.
 [22] B. Hasircioglu, J. GomezVilardebo, and D. Gunduz. Bivariate polynomial coding for exploiting stragglers in heterogeneous coded computing systems, January 2020. Available on arXiv:2001.07227.
 [23] M. Mohammadi Amiri and D. Gunduz. Computation scheduling for distributed machine learning with straggling workers. IEEE Trans. on Sig. Proc., 67(24):6270–6284, December 2019.
 [24] E. Ozfatura, S. Ulukus, and D. Gunduz. Distributed gradient descent with coding and partial recovery. May 2020. https://github.com/emre1925/codedcomputationwithpartialrecovery.
 [25] S. K. Kaul, R. D. Yates, and M. Gruteser. Realtime status: How often should one update? In IEEE Infocom, March 2012.
 [26] Y. Sun, I. Kadota, R. Talak, and E. Modiano. Age of information: A new metric for information freshness. Synthesis Lectures on Communication Networks, 12(2):1–224, December 2019.
 [27] B. Buyukates and S. Ulukus. Timely distributed computation with stragglers. October 2019. Available on arXiv: 1910.03564.
 [28] H. H. Yang, A. Arafa, T. Q. S. Quek, and H. V. Poor. Agebased scheduling policy for federated learning in mobile edge networks. In IEEE ICASSP, May 2020.

 [29] P. Jiang and G. Agrawal. A linear speedup analysis of distributed deep learning with sparse and quantized communication. In NIPS, December 2018.
 [30] B. Buyukates and S. Ulukus. Age of information with Gilbert-Elliot servers and samplers. In CISS, March 2020.