I Introduction
We focus on a recently introduced cloud service called serverless computing for general distributed computation. Serverless systems have garnered significant attention from industry (e.g., Amazon Web Services (AWS) Lambda, Microsoft Azure Functions, Google Cloud Functions) as well as the research community (see, e.g., [1, 2, 3, 4, 5, 6, 7, 8]). Serverless platforms^{1} reach a large user base by removing the need for complicated cluster management while providing greater scalability and elasticity [2, 1, 3]. For these reasons, serverless systems are expected to abstract away today's cloud servers in the coming decade just as cloud servers abstracted away physical servers in the past decade [8, 7, 9].

^{1} The name serverless is an oxymoron, since all the computing is still done on servers, but the name stuck as it abstracts away the need to provision or manage servers.
However, system noise in inexpensive cloud-based systems results in subsets of slower nodes, often called stragglers, which significantly slow the computation. This system noise is a result of limited availability of shared resources, network latency, hardware failure, etc. [10, 11]. Empirical statistics for worker job times on AWS Lambda are shown in Fig. 1. Notably, there are a few workers that take much longer than the median job time, severely degrading the overall efficiency of the system.
Techniques like speculative execution have been traditionally used to deal with stragglers (e.g., in Hadoop MapReduce [12] and Apache Spark [13]). Speculative execution works by detecting workers that are running slowly, or will slow down in the future, and then assigning their jobs to new workers without shutting down the original job. The worker that finishes first submits its results. This has several drawbacks: constant monitoring of jobs is required, which is costly when the number of workers is large. Monitoring is especially difficult in serverless systems where worker management is done by the cloud provider and the user has no direct supervision over the workers. Moreover, it is often the case that a worker straggles only at the end of the job (say, while communicating the results). By the time the job is resubmitted, the additional communication and computational overhead would have decreased the overall efficiency of the system.
I-A Existing Work
Error correcting codes are a linchpin of digital transmission and storage technologies, vastly improving their efficiency compared to uncoded systems. Recently, there has been a significant amount of research focused on applying coding-theoretic ideas to introduce redundancy into distributed computation for improved straggler and fault resilience; see, e.g., [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29].
This line of work focuses on cloud computing models consistent with first-generation cloud platforms (i.e., "serverful" platforms), where the user is responsible for node management through a centralized master node that coordinates encoding, decoding and any update phases. Accordingly, most existing schemes employ variants of Maximum Distance Separable (MDS) codes and have focused on optimizing the recovery threshold (i.e., the minimum number of machines needed to complete a task) of the algorithm, e.g., [18, 19]. This is equivalent to minimizing the compute time under the assumption that the encoding/decoding times are negligible. When the system size is relatively small, the encoding/decoding costs can be safely ignored. However, the encoding/decoding costs of such coded computation schemes scale with the size of the system, and hence this assumption no longer holds for serverless systems that can invoke tens of thousands of workers [3, 7, 6]. Furthermore, existing schemes require a powerful master with high bandwidth and large memory to communicate and store all the data in order to perform encoding and decoding locally. This goes against the very idea of massive-scale distributed computation. Therefore, coding schemes designed for serverful systems cannot guarantee low end-to-end latency, in terms of total execution time, for large-scale computation in serverless systems.
To formalize this problem, we consider the typical workflow of a serverless system for the task of matrix-matrix multiplication (see Fig. 2). First, worker machines read the input data from the cloud, jointly encode the data, and write the encoded data to the cloud (taking time T_enc). Then, the workers start working on their tasks using the encoded data, and write the products of coded matrices back to the cloud memory. Denote the joint compute time (including the time to communicate the task results to the cloud) by T_comp. Once a decodable set of task results has been collected, the workers run the decoding algorithm to obtain the final output (which takes time T_dec). Note that all of these phases are susceptible to straggling workers. Hence, one can write the total execution time of a coded computing algorithm as T = T_enc + T_comp + T_dec. The key question that we ask is how to minimize the end-to-end latency T, which comprises the encoding, decoding and computation times, where all of these phases are performed in parallel by serverless workers.
I-B Main Contribution
In this work, we advocate principled, coding-based approaches to accelerate distributed computation in serverless computing. Our goals span both theory and practice: we develop coding-based techniques to solve common machine learning problems on serverless platforms in a fault/straggler-resilient manner, analyze their runtime and straggler tolerance, and implement them on AWS Lambda for several popular applications.
Generally, the computations underlying several linear algebra and optimization problems tend to be iterative in nature. With this in mind, we aim to develop general coding-based approaches for straggler-resilient computation which meet the following criteria: (1) Encoding over big datasets should be performed only once; in particular, the cost of encoding the data for straggler-resilient computation will be amortized over iterations. (2) Encoding and decoding should be low-complexity, requiring at most linear time and space in the size of the data. (3) Encoding and decoding should be amenable to a parallel implementation. This final point is particularly important when working with large datasets on serverless systems due to the massive scale of worker nodes and high communication latency.
It is unlikely that there is a "one-size-fits-all" methodology which meets the above criteria and introduces straggler resilience for any problem of interest. Hence, we focus our efforts on a few fundamental operations, including matrix-matrix multiplication and matrix-vector multiplication, since these form atomic operations for many large-scale computing tasks. Our developed algorithms outperform speculative execution and other popular coding-based straggler mitigation schemes by a significant margin. We demonstrate the advantages of the developed coding techniques on several applications such as alternating least squares, SVD, Kernel Ridge Regression, power iteration, etc.
II Straggler Resilience in Serverless Computing Using Codes
II-A Distributed Matrix-Vector Multiplication
The main objective of this section is to show that coding schemes can hugely benefit serverless computing; we do so by implementing coded matrix-vector multiplication on AWS Lambda. Computing Ax for a large matrix A is a frequent bottleneck of several popular iterative algorithms such as gradient descent, conjugate gradient, power iteration, etc. Many coding-theory-based techniques for straggler-resilient matrix-vector multiplication have been proposed in the literature (e.g., see [14, 28, 25, 17]). We refer the reader to Fig. 2 in [14] for an illustration. Fortunately, many of these schemes can be directly employed in serverless systems, since the encoding can be done in parallel and the decoding of the resultant output Ax is inexpensive, as it is performed over a vector. Note that such direct applicability does not hold for all operations (such as matrix-matrix multiplication), as we will see later in Section II-B.
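As a concrete illustration of the idea (a minimal sketch, not the exact scheme of [14] or [17]), the following code encodes the row-blocks of A with a single sum parity per group, so that the block-product of any one straggler per group can be recovered from the remaining results:

```python
import numpy as np

def encode(A, g):
    # Split A into g row-blocks and append their sum as a parity block.
    blocks = np.array_split(A, g)
    return blocks + [sum(blocks)]

def coded_matvec(blocks, x, straggler=None):
    # Each "worker" computes its block-vector product; one may straggle.
    return {i: b @ x for i, b in enumerate(blocks) if i != straggler}

def decode(results, g, straggler):
    # Recover the missing systematic product from the parity result.
    if straggler is not None and straggler < g:
        results[straggler] = results[g] - sum(
            results[i] for i in range(g) if i != straggler)
    return np.concatenate([results[i] for i in range(g)])

A = np.random.rand(12, 4)
x = np.random.rand(4)
blocks = encode(A, 3)
y = decode(coded_matvec(blocks, x, straggler=1), 3, straggler=1)
assert np.allclose(y, A @ x)   # Ax recovered despite a straggler
```

Decoding here is a single vector subtraction per group, which is why matrix-vector schemes port directly to the serverless setting.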
To illustrate the advantages of coding techniques over speculative execution, we implement power iteration on the serverless platform AWS Lambda. Power iteration requires a matrix-vector multiplication in each iteration and returns the dominant eigenvector and the corresponding eigenvalue of the matrix being considered. It constitutes an important component of several popular algorithms such as PageRank and Principal Component Analysis (PCA). PageRank is used by Google to rank documents in their search engine [30] and by Twitter to generate recommendations of whom to follow [31]. PCA is commonly employed as a means of dimensionality reduction in applications like data visualization, data compression and noise reduction [32].

We applied power iteration to a large square matrix using 500 workers on AWS Lambda in the Pywren framework [1]. A comparison of the compute times of coded computing and speculative execution is shown in Fig. 3, where a significant speedup is achieved.^{2} Apart from being significantly faster than speculative execution, another feature of coded computing is reliability: almost all the iterations take a similar amount of time, whereas the time for speculative execution varies between 340 and 470 seconds. We demonstrate this feature of coded computing throughout our experiments in this paper.

^{2} For our experiments on matrix-vector multiplication, we used the coding scheme proposed in [17] due to its simple encoding and decoding, which take linear time. However, we observed that using other similar coding schemes, such as the one proposed in [14], results in similar runtimes.
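For reference, a minimal single-machine sketch of power iteration; in the serverless implementation, the matrix-vector product inside the loop is the step executed by the coded Lambda workers:

```python
import numpy as np

def power_iteration(A, iters=100, seed=0):
    # Each iteration is one matrix-vector multiply; in the paper this
    # product is the distributed, straggler-resilient step.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        w = A @ v                    # the coded distributed multiply
        v = w / np.linalg.norm(w)
    return v @ A @ v, v              # Rayleigh quotient and eigenvector

# Toy check: dominant eigenvalue of a diagonal matrix is its largest entry.
A = np.diag([5.0, 2.0, 1.0])
lam, v = power_iteration(A)
assert abs(lam - 5.0) < 1e-6
```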
II-B Distributed Matrix-Matrix Multiplication
Large-scale matrix-matrix multiplication is a frequent computational bottleneck in several problems in machine learning and high-performance computing, and has received significant attention from the coding theory community (e.g., see [16, 17, 18, 19, 20, 21, 22]). The problem is computing

C = A^T B,   (1)

where A and B are large input matrices stored in the cloud.
Proposed Coding Scheme: For straggler-resilient matrix multiplication, we describe our easy-to-implement coding scheme below. First, we encode the row-blocks of A and B in parallel by inserting a parity block after every t1 and t2 blocks of A and B, respectively, where t1 and t2 are parameters chosen to control the amount of redundancy the code introduces. This produces encoded matrices A_enc and B_enc. As t1 and t2 are increased, the parity blocks become more spread out, and the code has less redundancy. For example, when t1 = t2 = 1, every row-block of the matrices A and B is duplicated (and the code hence has maximum redundancy). At the other extreme, when t1 and t2 are set equal to the number of row-blocks in A and B, respectively, there is only one parity row-block added to each of A and B, and thus the code exhibits the minimum possible redundancy. In Fig. 4, an example of the encoded matrices and the resultant output matrix C is shown for the case t1 = t2 = 2.
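The encoding step can be sketched as follows (illustrative only; the actual implementation performs this in parallel across serverless workers, and the choice of sum parities is an assumption for the sketch):

```python
import numpy as np

def insert_parities(A, num_blocks, t):
    """Split A into `num_blocks` row-blocks and insert one parity block
    (the sum of the preceding group) after every `t` blocks."""
    blocks = np.array_split(A, num_blocks)
    out = []
    for i in range(0, num_blocks, t):
        group = blocks[i:i + t]
        out.extend(group)
        out.append(sum(group))      # parity row-block for this group
    return np.vstack(out)

A = np.arange(16.0).reshape(8, 2)
A_enc = insert_parities(A, num_blocks=4, t=2)
# 4 systematic blocks + 2 parities -> 6 blocks of 2 rows = 12 rows.
assert A_enc.shape == (12, 2)
# The first parity block is the sum of the two blocks before it.
assert np.allclose(A_enc[4:6], A_enc[0:2] + A_enc[2:4])
```

Smaller t packs parities more densely (more redundancy); larger t spreads them out.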
Note the locally recoverable structure of C: to decode one straggler, only a small subset of the blocks of C needs to be read. In Fig. 4, for example, only two blocks need to be read to mitigate a straggler. This is unlike polynomial codes, which are MDS in nature and, hence, optimal in terms of recovery threshold, but which require reading all the blocks of the output matrix while decoding. The locally recoverable structure of the code makes it particularly amenable to a parallel decoding approach: C consists of disjoint submatrices, each of which can be decoded separately in parallel. In Fig. 4, there are four such submatrices. We use a simple peeling decoder (for example, see [16, 17]) to recover the systematic part of each submatrix, constructing the final result matrix from these systematic blocks.
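The peeling decoder can be sketched on a toy grid whose "blocks" are scalars (in the real system each entry is a matrix block and the loop runs across decoding workers):

```python
import numpy as np

def peeling_decode(grid, known):
    """Peeling decoder for a toy product-coded grid. grid is a
    (t1+1) x (t2+1) array of scalar 'blocks'; each row sums into its
    last entry and each column sums into its last row. known marks
    received blocks; missing blocks are decoded in place."""
    def fix_line(vals, mask):
        # One sum constraint: vals[-1] == vals[:-1].sum().
        # If exactly one entry is missing, solve for it.
        unknown = np.flatnonzero(~mask)
        if len(unknown) != 1:
            return None
        k = unknown[0]
        vals[k] = 0.0
        vals[k] = vals[:-1].sum() if k == len(vals) - 1 \
            else vals[-1] - vals[:-1].sum()
        return k

    progress = True
    while progress and not known.all():
        progress = False
        for i in range(grid.shape[0]):            # peel along rows
            k = fix_line(grid[i], known[i])
            if k is not None:
                known[i, k] = True
                progress = True
        for j in range(grid.shape[1]):            # peel along columns
            k = fix_line(grid[:, j], known[:, j])
            if k is not None:
                known[k, j] = True
                progress = True
    return known.all()

# Toy example: 2 x 2 systematic blocks extended to a 3 x 3 product code.
rng = np.random.default_rng(1)
sys_blocks = rng.standard_normal((2, 2))
g = np.zeros((3, 3))
g[:2, :2] = sys_blocks
g[:2, 2] = sys_blocks.sum(axis=1)
g[2, :2] = sys_blocks.sum(axis=0)
g[2, 2] = sys_blocks.sum()
truth = g.copy()
known = np.ones((3, 3), dtype=bool)
for i, j in [(0, 0), (0, 1), (1, 0)]:             # three stragglers
    g[i, j] = 0.0
    known[i, j] = False
assert peeling_decode(g, known)
assert np.allclose(g, truth)
```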
In the event that any of the submatrices is not decodable due to a large number of stragglers, we recompute the straggling outputs. Thus, choosing t1 and t2 presents a tradeoff. We would like to keep them small so that we can mitigate more stragglers without having to recompute, but smaller t1 and t2 imply more redundancy in computation, which is potentially more expensive. For example, t1 = t2 = 1 doubles the size of each encoded input and hence quadruples the computation. Later, we will show how to choose the parameters t1 and t2 given an upper bound on the probability of encountering a straggler in the serverless system. We will also prove that, with the right parameters, the probability of not being able to decode the missing blocks is negligible.
We refer to the proposed coding scheme in Fig. 4 as the local product code. In Fig. 5, we compare the local product code with speculative execution and with existing popular techniques for coded matrix multiplication such as polynomial codes [18] and product codes [16]. In our experiment, we set A (and B) to be square matrices with t1 = t2 = 10, implying 21% redundancy. Product codes and polynomial codes were also designed so that the amount of redundancy was the same. Accordingly, in the speculative-execution-based approach, we wait for the corresponding fraction of the workers to return before starting to recompute, so that all the methods employed the same amount of redundancy. We note that the coding-based approach performs significantly better than existing coding-based schemes and substantially better than the speculative-execution-based approach for large matrix dimensions.^{3}

^{3} A working implementation of the proposed schemes is available at https://github.com/vvipgupta/serverlessstragglermitigation.
Another important point to note is that existing coding-based approaches perform worse than speculative execution. This is because of the decoding overhead of such schemes. Product codes have to read an entire column (or row) block of the output, and polynomial codes have to read the entire output, to decode one straggler. In serverless systems, where workers write their output to cloud storage and do not communicate directly with the master owing to their 'stateless' nature, this results in a huge communication overhead. In fact, for polynomial codes, we are not even able to store the entire output in the memory of the master for larger matrix dimensions. For this reason, we do not have any global parities—which require reading all the blocks to decode the stragglers—in the proposed local product code. Note that existing coding schemes with locality, such as [17] and [21], also have global parities, which are dispensable in serverless systems and thus incur high redundancy. This is because such schemes were designed for serverful systems where the decoding is not fully distributed. Moreover, we show in the next section that local product codes are asymptotically optimal in terms of locality for a fixed amount of redundancy. In the event that the output is not locally decodable with local product codes, we restart the jobs of the straggling workers. However, we later show that such an event is unlikely if the parameters t1 and t2 are chosen properly.
Remark 1.
To mitigate stragglers during the encoding and decoding phases, we employ speculative execution. However, in our experiments, we observed that encoding and decoding times have negligible variance and do not generally suffer from stragglers. This is because the number of workers required during the encoding and decoding phases is relatively small compared to the computation phase, with smaller job times due to locality; the probability of encountering a straggler in such small-scale jobs is extremely low.

Remark 2.
It has been well established in the literature that blocked partitioning of matrices is communication-efficient for distributed matrix-matrix multiplication, both in the serverful [33, 34] and the serverless [5] settings. Even though Fig. 4 shows a partitioning of A into row-blocks for clarity of exposition, we further partition the input matrices A (and B) into square blocks in all our experiments and perform blockwise distributed multiplication.
III Theoretical Analysis of Local Product Codes
III-A Optimality of Local Product Codes
In coding-theoretic terminology, a locally recoverable code (LRC) is a code in which each symbol is a function of a small number of other symbols. This number is referred to as the locality, r, of the code. In the context of straggler mitigation, this means that each block in C is a function of only a few other blocks. Hence, to decode one straggler, one needs to read only r blocks. In the example of Fig. 4, the locality is r = 2, since each block of C can be recovered from two other blocks. In general, the locality of the local product code is min(t1, t2). Another important parameter of a code is its minimum distance, d, which relates directly to the number of stragglers that can be recovered in the worst case. Specifically, to recover the data of any s stragglers in the worst case, the minimum distance must satisfy d ≥ s + 1.
For a fixed redundancy, Maximum Distance Separable (MDS) codes attain the largest possible minimum distance d and are thus able to tolerate the most stragglers in the worst case. Many straggler mitigation schemes focus on MDS codes and have gained significant attention, such as polynomial codes [18]. However, such schemes are not practical in the serverless case since they ignore the encoding and decoding costs. Moreover, as seen from Fig. 5, it is better to restart the straggling jobs than to use the parities from polynomial or product codes, since the communication overhead during decoding is high.
Hence, in serverless systems, the locality of the code is of greater importance since it determines the time required to decode a straggler. For any LRC, the following relation between the minimum distance d and the locality r is satisfied [35, 36]

d ≤ n − k − ⌈k/r⌉ + 2,   (2)

where k is the number of systematic data blocks and n is the total number of data blocks including parities. Now, since we want to tolerate at least one straggler, the minimum distance must satisfy d ≥ 2. Using Eq. (2), we conclude that ⌈k/r⌉ ≤ n − k or, equivalently,

r ≥ k/(n − k).   (3)
Now, in the case of the local product code, each of the submatrices that can be decoded in parallel represents a product code with n = (t1 + 1)(t2 + 1) and k = t1 t2. In Fig. 4, there are four locally decodable submatrices with t1 = t2 = 2. Also, we know that the locality of each of the submatrices is min(t1, t2), and hence this is the locality of the local product code.
Next, we want to compare the locality of the local product code with that of any other coding scheme with the same parameters, that is, n = (t1 + 1)(t2 + 1) and k = t1 t2. Using Eq. (3), we get

r ≥ t1 t2 / (t1 + t2 + 1).

Thus, the locality of local product codes is optimal within a constant factor, since min(t1, t2) matches this lower bound on the locality of any LRC up to a constant when t1 and t2 are of the same order. This is asymptotically better than, say, a local version of polynomial codes (that is, each submatrix of C is a polynomial code instead of a product code), for which the locality is t1 t2 since it needs to read all the systematic blocks to mitigate one straggler [18].
Having shown that local product codes are asymptotically optimal in terms of decoding time, we further quantify the decoding time in the serverless case through probabilistic analysis next.
III-B Decoding Costs
Stragglers arise due to system noise which is beyond the control of the user (and perhaps even the cloud provider, for example, unexpected network latency or congestion due to a large number of users). However, a good estimate of an upper bound on the number of stragglers can be obtained through multiple experiments. In our theoretical analysis, we assume that the probability of a given worker straggling is fixed at p, and that each worker straggles independently of the others. In AWS Lambda, for example, we obtain an upper bound on the number of stragglers through multiple trial runs and observe that only a small fraction of the nodes straggle in most trials (also noted from Fig. 1). Thus, a conservative estimate of p is assumed for AWS Lambda.

Given the high communication latency in serverless systems, codes with low I/O overhead are highly desirable, making locally recoverable codes a natural fit. For local product codes, say the decoding worker operates on a (t1 + 1) × (t2 + 1) grid of blocks. If the decoding worker sees a single straggler, it reads min(t1, t2) blocks to recover it. However, when there is more than one straggler, at most t = max(t1, t2) block reads will occur per straggler during recovery. For example, if there are two stragglers in the same row, the decoding worker reads the t1 remaining blocks in each straggler's column, that is, t1 reads per straggler. Thus, if a decoding worker gets S stragglers, a total of at most tS block reads will occur—there are at most t block reads for each of the S stragglers. Since the number of stragglers, S, is random, the number of blocks read, say B, is also random. Note that B scales linearly with the communication costs.
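A quick Monte Carlo sketch of this model (the value p = 0.02 below is an assumed, illustrative straggling probability, not a measured one):

```python
import numpy as np

# Model: each of the N = (t1+1)(t2+1) blocks handled by a decoding worker
# straggles independently with probability p (assumed p = 0.02 here), and
# each straggler costs at most t = max(t1, t2) block reads.
rng = np.random.default_rng(0)
t1 = t2 = 10
p = 0.02                               # illustrative assumption
N = (t1 + 1) * (t2 + 1)
trials = 100_000
stragglers = rng.binomial(N, p, size=trials)   # S ~ Binomial(N, p)
blocks_read = max(t1, t2) * stragglers          # B <= t * S

print("mean blocks read:", blocks_read.mean())          # ~ p*N*t = 24.2
print("P(more than 100 reads):", (blocks_read > 100).mean())
```

The simulated tail drops off rapidly, matching the super-exponential decay established below.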
In Theorem 1, we quantify the decoding costs for local product codes; specifically, we show that the probability of a decoding worker reading a large number of blocks is small.
Theorem 1.
Let p be the probability that a serverless worker straggles independently of others, and let B be the number of blocks read by a decoding worker working on N = (t1 + 1)(t2 + 1) blocks. Also, let t = max(t1, t2) and λ = pN. Then, the probability that the decoding worker has to read more than b blocks is upper bounded by

P(B ≥ b) ≤ e^{−λ} (eλt/b)^{b/t}.
Proof.
See Section VA. ∎
Theorem 1 provides a useful insight into the performance of local product codes: the probability of reading more than b blocks during decoding decays to zero at a super-exponential rate in b. Note that for the special (and more practical) case of t1 = t2 = t, the number of blocks read per straggler is exactly t and thus B = tS. Thus, using Theorem 1, we can obtain the following corollary.
Corollary 1.
For t1 = t2 = t and any α > 1, the probability that the decoding worker reads more than α times the expected number of blocks, λt, is upper bounded by

P(B ≥ αλt) ≤ e^{−λ} (e/α)^{αλ}.

For α = e, this becomes

P(B ≥ eλt) ≤ e^{−λ}.
In Fig. 6, we plot the upper bound on P(B ≥ b) for different values of b. The values of t1 and t2 were chosen to be consistent with the experiments in Fig. 5, where t1 = t2 = 10, so that the maximum number of blocks read per straggler is t = 10 and the number of blocks of C per decoding worker is N = 121. Additionally, we used the estimate of p obtained through extensive experiments on AWS Lambda (see Fig. 1). In a polynomial code, all the blocks would be read by a decoding worker to mitigate any straggler; for the local product code, the probability of having to read that many blocks is negligible by the bound above.
III-C Straggler Resiliency of Local Product Codes
To characterize the straggler resiliency of local product codes, we turn our focus to finding the probability of encountering an undecodable set: a configuration of stragglers that cannot be decoded until more results arrive.
Definition 1.
Undecodable set: Consider a single decoding worker that is working on N = (t1 + 1)(t2 + 1) blocks, arranged in a (t1 + 1) × (t2 + 1) grid, and let S be the number of missing blocks. The decoding worker's blocks are said to form an undecodable set if we need to wait for more workers to arrive to decode all the missing blocks.
Some examples of undecodable sets are shown in Fig. 7. In an undecodable set, it is possible that some of the stragglers are decodable, but there will always be some stragglers that prevent each other from being decoded. For the local product code, an individual straggler is undecodable if and only if there is at least one other straggler in both its row and its column, because the code provides a single redundant block along each axis that can be used for recovery. This implies that a decoding worker must encounter at least three stragglers for one of them to be undecodable. However, the code can always recover any three stragglers through the use of a peeling decoder [16, 17]. While the three stragglers may share a column or row and be in an "interlocking" configuration, such as those shown in Fig. 8, two of the three can always be recovered, or "peeled off". Using these recovered blocks, the straggler that was originally undecodable can be recovered. This provides a key result: all undecodable sets consist of four or more stragglers. Equivalently, given S ≤ 3, the probability of being unable to decode is zero. This can also be noted directly from the fact that the minimum distance of a product code with one parity row and column is four, and hence it can tolerate any three stragglers [16].
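These claims are easy to verify exhaustively on a small grid. The sketch below peels any straggler that is alone in its row or column, then checks that every 3-set of stragglers is decodable and that the undecodable 4-sets are exactly the "square" configurations:

```python
from itertools import combinations

def decodable(missing):
    # Peeling on straggler positions: a missing block is recoverable if
    # it is the only missing block in its row or in its column.
    missing = set(missing)
    while missing:
        free = {b for b in missing
                if sum(x[0] == b[0] for x in missing) == 1
                or sum(x[1] == b[1] for x in missing) == 1}
        if not free:
            return False
        missing -= free
    return True

# Exhaustive check on a 3 x 3 grid (t1 = t2 = 2, as in Fig. 4).
cells = [(i, j) for i in range(3) for j in range(3)]
assert all(decodable(s) for s in combinations(cells, 3))
undecodable_4 = [s for s in combinations(cells, 4) if not decodable(s)]
assert len(undecodable_4) == 9      # = C(3,2) * C(3,2): the "squares"
assert all(len({r for r, _ in s}) == 2 and len({c for _, c in s}) == 2
           for s in undecodable_4)   # each lies on 2 rows and 2 columns
```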
The following theorem bounds the probability of encountering an undecodable set for local product codes.
Theorem 2.
Let p be the probability that a serverless worker straggles independently of others. Let E be the event that a decoding worker working on N = (t1 + 1)(t2 + 1) blocks in a (t1 + 1) × (t2 + 1) grid cannot decode. Then,

P(E) ≤ Σ_{k=4}^{N} N_k p^k (1 − p)^{N−k},

where N_4 = ((t1+1) choose 2) ((t2+1) choose 2) is the number of undecodable sets with exactly four stragglers and N_k ≤ (N choose k) for k ≥ 5.
Proof.
See Section VB. ∎
In Fig. 9, the bound in Theorem 2 is shown with t1 = t2 = t for a range of t, so that the total number of blocks per worker is (t + 1)^2. This shows a "sweet spot" around 121 blocks per decoding worker, or t1 = t2 = 10, the same choice used in the experiments shown in Fig. 5. With this choice of code parameters, the probability of a decoding worker being able to decode all the stragglers is high. This simultaneously enables low encoding and decoding costs, avoids doing too much redundant computation during the multiplication stage (only 21%), and gives a high probability of avoiding an undecodable set in the decoding stage. In particular, for t1 = t2 = 10, an individual worker is able to decode with high probability.
Remark 3.
The analysis in Sections III-B and III-C derives bounds for one decoding worker. In general, for decoding using w workers in parallel, the respective upper bounds on the probabilities in Theorem 1 (any decoding worker reading more than b blocks) and Theorem 2 (any decoding worker not being able to decode) can be multiplied by w using the union bound.
IV Coded Computing in Applications
In this section, we take several high-level applications from the fields of machine learning and high-performance computing and implement them on the serverless platform AWS Lambda. Our experiments clearly demonstrate the advantages of the proposed coding schemes over speculative execution.
IV-A Kernel Ridge Regression
We first focus on the flexible class of Kernel Ridge Regression (KRR) problems solved with the Preconditioned Conjugate Gradient (PCG) method. Oftentimes, KRR problems are ill-conditioned, so we use a preconditioner described in [37] for faster convergence. The problem can be described as
(K + λ n I_n) α = y,   (4)

where K is the n × n kernel matrix defined by K_ij = k(x_i, x_j) with the kernel function k(·, ·) on the input domain, n is the number of samples in the training data, λ > 0 is a regularization parameter, y is the labels vector, and the solution for the coefficient vector α is desired. A preconditioning matrix based on random feature maps [38] can be introduced for faster convergence, so that the KRR problem in Eq. (4) can be solved using Algorithm 1. Incorporation of such maps has emerged as a powerful technique for speeding up and scaling kernel-based computations, often requiring fewer than 20 iterations of Algorithm 1 to solve (4) with good accuracy.
Straggler mitigation with coding theory: The matrix-vector multiplications in Steps 4 and 6 are the bottleneck in each iteration and are executed distributedly on AWS Lambda. As such, they are prone to slowdowns due to faults or stragglers, and should be the target for the introduction of coded computation. To demonstrate the promised gains of the coding-theory-based approach, we conducted an experiment on the standard classification datasets ADULT and EPSILON [39] with a Gaussian kernel; the kernel matrices are square, with dimension equal to the number of training samples in each dataset. We store the training data and all subsequently generated data in the cloud storage S3, and use Pywren [1] as a serverless computing framework on AWS Lambda.
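A generic PCG sketch for intuition (with a simple Jacobi preconditioner standing in for the random-feature preconditioner of [37], which we do not reproduce here); the products A @ p are the step offloaded to coded serverless workers:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    # Preconditioned conjugate gradient for A x = b, where M_inv
    # approximates A^{-1}. The A @ p products are the distributed step.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p                    # coded matrix-vector multiply
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Tiny sanity check: SPD system resembling K + lambda*n*I.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50 * np.eye(50)
b = rng.standard_normal(50)
M_inv = np.diag(1.0 / np.diag(A))     # Jacobi preconditioner (assumption)
x = pcg(A, b, M_inv)
assert np.linalg.norm(A @ x - b) < 1e-6
```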
For this experiment, we implemented a 2D product code similar to the one proposed in [17] to encode the row-blocks of the kernel matrices for ADULT and EPSILON, and distributed them among 64 and 400 Lambda workers, respectively. To compare this coded scheme's performance against speculative execution, we distribute the uncoded row-blocks among the same number of Lambda workers, wait for a fixed fraction of the jobs to finish, and restart the rest without terminating the unfinished jobs; whichever copy of a job finishes first submits its results. The computation times for KRR with PCG on these datasets for the coding-based and speculative-execution-based schemes are plotted in Figs. 10 and 11. For coded computation, the first iteration also includes the encoding time. We note that coded computation performs significantly better than speculative execution, with substantial reductions in total job times for both the ADULT and EPSILON datasets. This experiment again demonstrates that coding-based schemes can significantly improve the efficiency of large-scale distributed computations. Other regression problems such as ridge regression, lasso, elastic net and support vector machines can be modified to incorporate codes in a similar fashion.
IV-B Alternating Least Squares
Alternating Least Squares (ALS) is a widely popular method to find the low-rank matrices that best fit given data. This empirically successful approach is commonly employed in applications such as matrix completion and matrix sensing, used to build recommender systems [40]. For example, it was a major component of the winning entry in the Netflix Challenge, where the objective was to predict user ratings from already available ratings [41]. We implement the ALS algorithm for matrix completion on AWS Lambda using the Pywren framework [1], where the main computational bottleneck is a large matrix-matrix multiplication in each iteration.
Let R be an m × n matrix constructed from the existing (incomplete) ratings, where m and n are the number of users giving ratings and the number of items being rated, respectively. The objective is to find the matrix that predicts the missing ratings. One solution is to compute a low-rank factorization based on the existing data, which decomposes the ratings matrix as R ≈ UV, where U ∈ R^{m×f} and V ∈ R^{f×n} for some number of latent factors f, which is a hyperparameter.
Let us call the matrices U and V the user matrix and the item matrix, respectively. Each row of U and each column of V uses an f-dimensional vector of latent factors to describe a user or an item, respectively. This gives us a rank-f approximation to R. To obtain the user and item matrices, we solve the optimization problem min_{U,V} L(U, V), where the loss is defined as

L(U, V) = Σ_{(i,j) observed} (R_ij − u_i^T v_j)^2 + λ (‖U‖_F^2 + ‖V‖_F^2),

where u_i^T is the i-th row of U, v_j is the j-th column of V, and λ is a regularization hyperparameter chosen to avoid overfitting. The above problem is nonconvex in general. However, it is biconvex—given a fixed V, it is convex in U, and given a fixed U, it is convex in V. ALS, described in Algorithm 2, exploits this biconvexity to solve the problem using coordinate descent. ALS begins with a random initialization of the user and item matrices. It then alternates between a user step, where it optimizes over the user matrix using the current item matrix estimate, and an item step, optimizing over the item matrix using the newly obtained user matrix. Thus, the updates to the user and item matrices in the t-th iteration are given by

U^{(t+1)} = (R V^{(t)T}) (V^{(t)} V^{(t)T} + λ I_f)^{−1},
V^{(t+1)} = (U^{(t+1)T} U^{(t+1)} + λ I_f)^{−1} (U^{(t+1)T} R).
In practice, f ≪ m, n, so computing and inverting the f × f matrix in each step can be done locally at the master node. Instead, the matrix multiplications R V^T and U^T R in the user and item steps, respectively, are the bottleneck in each iteration, requiring O(mnf) time. To mitigate stragglers, we use local product codes and speculative execution and compare their runtimes in Fig. 12 for seven iterations. The matrix R was synthetically generated: each rating was generated independently by sampling a uniform random variable, intended to be the true user rating; noise was then added, and the final rating was obtained by rounding to the nearest integer. The ratings matrix is encoded once before the computation starts, and thus the encoding cost is amortized over the iterations. We used separate pools of workers for the computation phase and the decoding phase of each matrix multiplication. It can be seen that codes perform better than speculative execution while providing reliability, that is, each iteration takes a similar amount of time, with much smaller variance in running times per iteration.
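A minimal single-machine sketch of these alternating updates (for illustration we fit a fully observed low-rank matrix rather than one with missing entries, so the loss reduces to a plain least-squares fit):

```python
import numpy as np

def als(R, f, lam, iters):
    # Alternating least squares for R ~ U @ V (U: m x f, V: f x n).
    # The products R @ V.T and U.T @ R are the distributed bottleneck;
    # the f x f inversions are cheap and done locally.
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, f))
    V = rng.standard_normal((f, n))
    for _ in range(iters):
        U = (R @ V.T) @ np.linalg.inv(V @ V.T + lam * np.eye(f))  # user step
        V = np.linalg.inv(U.T @ U + lam * np.eye(f)) @ (U.T @ R)  # item step
    return U, V

# Recover an exactly rank-2 matrix.
rng = np.random.default_rng(1)
R = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
U, V = als(R, f=2, lam=1e-6, iters=50)
assert np.linalg.norm(R - U @ V) / np.linalg.norm(R) < 1e-3
```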
IV-C Tall-Skinny SVD
Singular Value Decomposition (SVD) is a common numerical linear algebra technique with numerous applications, for example in image processing [42], genomic signal processing [43], dimensionality reduction [32], and more. In this section, we employ our proposed coding scheme to mitigate stragglers while computing the SVD of a tall, skinny matrix A ∈ R^{m×n}, where m ≫ n. That is, we would like to compute the orthogonal matrices U and V and the diagonal matrix Σ such that A = U Σ V^T.

To this end, we first compute the matrix-matrix multiplication A^T A, which is the main computational bottleneck and requires O(mn^2) time. Next, we compute the SVD of A^T A = V Σ^2 V^T. Note that A^T A is a smaller n × n matrix, so its SVD requires only O(n^3) time and O(n^2) memory and can in general be computed locally at the master node. This gives us the matrix V and the diagonal matrix Σ. Now, U = A V Σ^{−1} can again be computed in parallel using a matrix-matrix multiplication, which requires O(mn^2) time.
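The procedure can be sketched as follows (single-machine; the two large matrix products are the steps that would be coded and distributed):

```python
import numpy as np

def tall_skinny_svd(A):
    # SVD of a tall matrix via the small n x n Gram matrix A^T A.
    G = A.T @ A                              # distributed, coded multiply
    lam, V = np.linalg.eigh(G)               # small local eigenproblem
    idx = np.argsort(lam)[::-1]              # sort eigenvalues descending
    lam, V = lam[idx], V[:, idx]
    S = np.sqrt(np.clip(lam, 0.0, None))     # singular values
    U = (A @ V) / S                          # second distributed multiply
    return U, S, V

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))           # tall and skinny (m >> n)
U, S, V = tall_skinny_svd(A)
assert np.allclose(U @ np.diag(S) @ V.T, A, atol=1e-6)
assert np.allclose(U.T @ U, np.eye(5), atol=1e-6)
```

Note that forming A^T A squares the condition number, which is acceptable here since the matrices of interest are well conditioned relative to machine precision.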
We compute the SVD of a tall matrix of size on AWS Lambda. For local product codes, we use systematic workers during computation with redundancy, and and workers for parallel encoding and decoding, respectively. For speculative execution, we employed workers for computing in the first phase and started the second phase (that is, recomputing the jobs of the straggling nodes) as soon as of the workers from the first phase arrived. Averaged over 5 trials, coded computing took seconds compared to the seconds required by speculative execution, thus providing a reduction in end-to-end latency.
Though we do not implement it here, Cholesky decomposition is yet another application that uses matrix-matrix multiplication as an important constituent. It is frequently used in finding numerical solutions of partial differential equations [44], solving optimization problems using quasi-Newton methods [45], Monte Carlo methods [46, 47], etc. The main bottleneck in distributed Cholesky decomposition involves a sequence of large-scale outer products [48, 3], and hence local product codes can be readily applied to mitigate stragglers.

V Proofs
V-A Proof of Theorem 1
To prove Theorem 1, we use a standard Chernoff bound argument. Let $D$ denote the number of blocks read during decoding and let $S$ denote the number of stragglers. In particular, for any $\lambda > 0$, we can upper bound the probability of reading at least $d$ blocks as

$$\Pr(D \geq d) \;\leq\; e^{-\lambda d}\,\mathbb{E}\big[e^{\lambda D}\big]. \qquad (5)$$

We know that the number of blocks read satisfies $D \leq cS$, since we read at most $c$ blocks every time we decode a straggler. Thus, we can bound $\mathbb{E}[e^{\lambda D}]$, the MGF of $D$, in terms of the MGF of $S$ as

$$\mathbb{E}\big[e^{\lambda D}\big] \;\leq\; \mathbb{E}\big[e^{\lambda c S}\big]. \qquad (6)$$

Since we assume each worker straggles independently with probability $p$, the distribution of $S$ is Binomial$(n, p)$. Thus, its moment generating function is $\mathbb{E}[e^{tS}] = (1 - p + p e^{t})^{n}$. Using Eq. 6, we have $\mathbb{E}[e^{\lambda D}] \leq (1 - p + p e^{\lambda c})^{n}$. Using this inequality and the fact that $1 + x \leq e^{x}$ in the upper bound of Eq. 5, we get

$$\Pr(D \geq d) \;\leq\; \exp\!\big(-\lambda d + n p\,(e^{\lambda c} - 1)\big). \qquad (7)$$

As a last step, we specialize by setting $\lambda = \tfrac{1}{c}\ln\!\big(\tfrac{d}{npc}\big)$, which is obtained by optimizing the RHS above with respect to $\lambda$. Substitution into Eq. 7 gives the desired upper bound on $\Pr(D \geq d)$, proving Theorem 1.
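The bound above can be sanity-checked numerically. Assuming, for illustration only, $n$ workers straggling independently with probability $p$, $c$ blocks read per decoded straggler (so the blocks read are $D = cS$ with $S \sim \text{Binomial}(n, p)$), and a threshold $d$, the optimized Chernoff bound can be compared against a Monte Carlo estimate (all parameter values here are placeholders, not the paper's):

```python
import math
import random

# Illustrative parameters: n workers, straggling probability p,
# c blocks read per decoded straggler, threshold d on blocks read.
n, p, c, d = 100, 0.05, 10, 150

def log_chernoff(lam):
    """log of  e^{-lam d} * E[e^{lam c S}]  for S ~ Binomial(n, p)."""
    return -lam * d + n * math.log(1 - p + p * math.exp(lam * c))

# Optimize the bound over a grid of lam > 0 (log-space avoids overflow)
bound = math.exp(min(log_chernoff(l / 100.0) for l in range(1, 100)))

# Monte Carlo estimate of Pr(c * S >= d)
random.seed(0)
trials = 20_000
hits = sum(
    c * sum(random.random() < p for _ in range(n)) >= d
    for _ in range(trials)
)
empirical = hits / trials   # should never exceed `bound`
```

With these placeholder parameters the event is a large deviation (mean of $D$ is $npc = 50$ versus $d = 150$), so both the bound and the empirical frequency are small, with the bound lying above the estimate as the proof guarantees.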
V-B Proof of Theorem 2
We already discussed in Sec. III-C that local product codes can decode any three stragglers. Now, we turn our attention to the case of four or more stragglers. Regardless of how much redundancy is used (including the extreme case where every block is duplicated three times), there exist undecodable sets with four stragglers. An example is shown in the middle figure in Fig. 7. All 4-undecodable sets come in squares, with every straggler blocking two others, one in its row and one in its column (otherwise, some straggler would be alone in its row or column and hence decodable, reducing the problem to three stragglers, which can always be handled by a peeling decoder). Using this observation, we can create any 4-undecodable set by picking the two rows and two columns to place the stragglers in, yielding exactly four spots. Let $N_s$ be the number of undecodable sets with $s$ stragglers, and suppose the blocks are arranged in an $R \times C$ grid. Thus,

$$N_4 = \binom{R}{2}\binom{C}{2}.$$

All 5-undecodable sets come in the form of 4-undecodable sets with a fifth straggler placed in any vacant spot on the grid. This gives us a method to count the number of 5-undecodable sets: first, choose the two rows and two columns that make up the embedded 4-undecodable set; then, choose any of the $RC - 4$ vacant entries to place the fifth straggler. This gives

$$N_5 = \binom{R}{2}\binom{C}{2}(RC - 4).$$

In the cases of $s = 6$ and $s = 7$, undecodable sets can be formed in one of two ways: confining all stragglers to three rows and three columns, or constructing a 4-undecodable set and then placing the two (or three, for $s = 7$) remaining stragglers anywhere. We can count the former as

$$\binom{R}{3}\binom{C}{3}\binom{9}{s} \qquad (8)$$

for both $s = 6$ and $s = 7$, since choosing three rows and three columns yields nine blocks, of which we choose $s$. For the latter, we first construct a 4-undecodable set by picking the two rows and two columns in which to place the stragglers, and then place the remaining $s - 4$ stragglers anywhere else, giving a total of

$$\binom{R}{2}\binom{C}{2}\binom{RC - 4}{s - 4} \qquad (9)$$

such undecodable sets. Summing Eqs. 8 and 9 yields an upper bound on $N_s$ for $s \in \{6, 7\}$. It is an upper bound, rather than the exact number of undecodable sets, because every undecodable set is counted but several are counted more than once; for example, any 6-undecodable set whose stragglers are confined to a contiguous grid is counted by both terms.

In general, if there are $s$ stragglers, there are $\binom{RC}{s}$ ways to arrange them. Given the number of stragglers $s$, all configurations are equally likely, and the probability of being unable to decode is the fraction of configurations that are undecodable sets. Since $N_s$ is the number of undecodable sets, the probability of being unable to decode given $s$ stragglers is $N_s / \binom{RC}{s}$.
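The counts for four and five stragglers can be verified by brute force on a small grid, assuming the peeling rule implied above: a straggler is recoverable whenever it is the only one remaining in its row or its column. The 4-by-4 grid size below is illustrative.

```python
from itertools import combinations

ROWS, COLS = 4, 4                      # illustrative grid size
GRID = [(r, c) for r in range(ROWS) for c in range(COLS)]

def decodable(stragglers):
    """Peel: repeatedly recover any straggler alone in its row or column."""
    s = set(stragglers)
    progress = True
    while progress and s:
        progress = False
        for (r, c) in list(s):
            if (sum(r2 == r for r2, _ in s) == 1
                    or sum(c2 == c for _, c2 in s) == 1):
                s.remove((r, c))
                progress = True
    return not s   # decodable iff every straggler was peeled

def count_undecodable(k):
    """Exhaustively count undecodable placements of k stragglers."""
    return sum(not decodable(sub) for sub in combinations(GRID, k))

n4 = count_undecodable(4)   # predicted: C(4,2) * C(4,2) = 36
n5 = count_undecodable(5)   # predicted: 36 * (16 - 4) = 432
```

The enumeration confirms that every 4-undecodable set is a combinatorial square and every 5-undecodable set is a square plus one extra straggler, matching the closed-form counts.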
The probability of encountering eight or more stragglers is small for suitably chosen , owing to the fact that the probability of encountering a straggler is small (for example, for AWS Lambda). Accordingly, we have chosen to focus our analysis on determining $N_s$ for $s \leq 7$. We can obtain an upper bound on the probability of being unable to decode by assuming that all configurations with $s \geq 8$ are undecodable sets. Let $E$ denote the event that a decoding worker cannot decode, and let $S$ denote the number of stragglers. Then, by the law of total probability,

$$\Pr(E) \;\leq\; \sum_{s=4}^{7} \Pr(S = s)\,\Pr(E \mid S = s) \;+\; \Pr(S \geq 8).$$

Now, substituting the bounds on $\Pr(E \mid S = s)$ obtained above and bounding the Binomial tail $\Pr(S \geq 8)$ gives the desired upper bound, proving Theorem 2.
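The overall failure probability bounded above can also be estimated directly by simulation, again assuming independent straggling and the row-or-column peeling rule; the grid size and straggling probability below are illustrative placeholders.

```python
import random

def peel(stragglers):
    """True if all stragglers are recoverable by peeling
    (a straggler peels when it is alone in its row or column)."""
    s = set(stragglers)
    progress = True
    while progress and s:
        progress = False
        for (r, c) in list(s):
            if (sum(r2 == r for r2, _ in s) == 1
                    or sum(c2 == c for _, c2 in s) == 1):
                s.remove((r, c))
                progress = True
    return not s

ROWS, COLS, P = 10, 10, 0.02   # illustrative grid and straggling probability
random.seed(0)
trials = 5_000
failures = sum(
    not peel([(r, c) for r in range(ROWS) for c in range(COLS)
              if random.random() < P])
    for _ in range(trials)
)
fail_rate = failures / trials
```

Consistent with the analysis, the dominant failure mode is a 2-by-2 square of stragglers, so the empirical failure rate is on the order of $\binom{R}{2}\binom{C}{2} p^4$, i.e., very small for small $p$.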
VI Conclusions and Future Work
In this paper, we argued that in the serverless setting, where communication costs greatly outweigh computation costs, performing some redundant computation based on ideas from coding theory will outperform speculative execution. Moreover, the design of such codes should leverage locality to attain low encoding and decoding costs. Our proposed scheme for coded matrix-matrix multiplication outperforms the widely used method of speculative execution and existing popular coded computing schemes in a serverless computing environment. All three stages of the coded approach are amenable to a parallel implementation, utilizing the dynamic scaling capabilities of serverless platforms. We showed that our proposed scheme is asymptotically optimal in terms of decoding time and further quantified the communication costs during decoding through probabilistic analysis. Additionally, we derived an upper bound on the probability of being unable to decode stragglers.
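The core trade-off summarized above, a small amount of redundant computation in exchange for straggler resilience, can be illustrated with the simplest possible code. The toy sketch below (not the paper's local product-code construction) splits a matrix into two row blocks plus one parity block, so that any two of three worker results recover the full product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
x = rng.normal(size=4)

# Encode once: two systematic row blocks plus one parity block.
A1, A2 = A[:3], A[3:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Each worker multiplies its block by x; suppose worker w2 straggles
# and never returns.
results = {w: blk @ x for w, blk in tasks.items() if w != "w2"}

# Decode from the two surviving results: parity minus y1 recovers A2 @ x.
y1 = results["w1"]
y2 = results["w3"] - y1
y = np.concatenate([y1, y2])   # equals A @ x despite the straggler
```

Here 50% extra computation buys tolerance of any single straggler with a trivially cheap decode; the product codes studied in this paper achieve far better redundancy-versus-resilience trade-offs while keeping encoding and decoding local.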
The proposed schemes for fault/straggler mitigation are universal in the sense that they can be applied to many existing algorithms without changing their outcome. This is because they mitigate stragglers by working on low-level steps of the algorithm that are often the computational bottleneck, such as matrix-vector or matrix-matrix multiplication, thus not affecting the algorithm from the application or user perspective. In the future, we plan to devise similar schemes for other matrix operations such as distributed QR decomposition, Gaussian elimination, eigenvalue decomposition, etc. Eventually, we will create a software library implementing the proposed algorithms for running massive-scale Python code on AWS Lambda. This library would provide a seamless experience for users: they would execute their algorithms on serverless systems (using frameworks such as Pywren [1]) as they normally would, and our algorithms would be automatically invoked “under the hood” to introduce fault/straggler resilience, thus aligning with the overarching goal of serverless systems to reduce management on the user front.

References
 [1] E. Jonas, Q. Pu, S. Venkataraman, I. Stoica, and B. Recht, “Occupy the cloud: distributed computing for the 99%,” in Proceedings of the 2017 Symposium on Cloud Computing. ACM, 2017, pp. 445–451.
 [2] I. Baldini, P. Castro, K. Chang, P. Cheng, S. Fink, V. Ishakian, N. Mitchell, V. Muthusamy, R. Rabbah, A. Slominski, and P. Suter, Serverless Computing: Current Trends and Open Problems. Springer Singapore, 2017.
 [3] V. Shankar, K. Krauth, Q. Pu, E. Jonas, S. Venkataraman, I. Stoica, B. Recht, and J. Ragan-Kelley, “numpywren: serverless linear algebra,” arXiv e-prints, Oct. 2018.
 [4] J. M. Hellerstein, J. Faleiro, J. E. Gonzalez, J. Schleier-Smith, V. Sreekanti, A. Tumanov, and C. Wu, “Serverless computing: One step forward, two steps back,” arXiv preprint arXiv:1812.03651, 2018.
 [5] V. Gupta, S. Wang, T. Courtade, and K. Ramchandran, “Oversketch: Approximate matrix multiplication for the cloud,” in 2018 IEEE International Conference on Big Data (Big Data), Dec 2018, pp. 298–304.
 [6] V. Gupta, S. Kadhe, T. Courtade, M. W. Mahoney, and K. Ramchandran, “Oversketched newton: Fast convex optimization for serverless systems,” arXiv preprint arXiv:1903.08857, 2019.
 [7] E. Jonas, J. Schleier-Smith, V. Sreekanti, C.-C. Tsai, A. Khandelwal, Q. Pu, V. Shankar, J. Carreira, K. Krauth, N. Yadwadkar et al., “Cloud programming simplified: a berkeley view on serverless computing,” arXiv preprint arXiv:1902.03383, 2019.
 [8] J. Spillner, C. Mateos, and D. A. Monge, “Faaster, better, cheaper: The prospect of serverless scientific computing and hpc,” in Latin American High Performance Computing Conference, 2017, pp. 154–168.
 [9] S. Jhakotia, “Why serverless is the future of cloud computing?” 2018. [Online]. Available: https://medium.com/@suryaj/whyserverlessisthefutureofcloudcomputing45e417dc4018
 [10] J. Dean and L. A. Barroso, “The tail at scale,” Commun. ACM, vol. 56, no. 2, pp. 74–80, Feb. 2013.
 [11] T. Hoefler, T. Schneider, and A. Lumsdaine, “Characterizing the influence of system noise on largescale applications by simulation,” in Proc. of the ACM/IEEE Int. Conf. for High Perf. Comp., Networking, Storage and Analysis, 2010, pp. 1–11.
 [12] J. Dean and S. Ghemawat, “Mapreduce: Simplified data processing on large clusters,” Commun. ACM, vol. 51, no. 1, pp. 107–113, Jan. 2008.
 [13] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, “Spark: Cluster computing with working sets,” in Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, 2010, pp. 10–10.
 [14] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding up distributed machine learning using codes,” IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1514–1529, 2018.
 [15] R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, “Gradient coding: Avoiding stragglers in distributed learning,” in Proceedings of the 34th International Conference on Machine Learning, vol. 70. PMLR, 2017, pp. 3368–3376.
 [16] K. Lee, C. Suh, and K. Ramchandran, “High-dimensional coded matrix multiplication,” in IEEE Int. Sym. on Information Theory (ISIT), 2017. IEEE, 2017, pp. 2418–2422.
 [17] T. Baharav, K. Lee, O. Ocal, and K. Ramchandran, “Straggler-proofing massive-scale distributed matrix multiplication with d-dimensional product codes,” in IEEE Int. Sym. on Information Theory (ISIT), 2018.
 [18] Q. Yu, M. Maddah-Ali, and S. Avestimehr, “Polynomial codes: an optimal design for high-dimensional coded matrix multiplication,” in Adv. in Neural Inf. Processing Systems, 2017, pp. 4403–4413.
 [19] S. Dutta, M. Fahim, F. Haddadpour, H. Jeong, V. Cadambe, and P. Grover, “On the optimal recovery threshold of coded matrix multiplication,” arXiv preprint arXiv:1801.10292, 2018.
 [20] B. Bartan and M. Pilanci, “Polar coded distributed matrix multiplication,” arXiv preprint arXiv:1901.06811, 2019.
 [21] H. Jeong, F. Ye, and P. Grover, “Locally recoverable coded matrix multiplication,” in 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2018, pp. 715–722.
 [22] H. Jeong, Y. Yang, V. Gupta, V. Cadambe, K. Ramchandran, and P. Grover, “Coded 2.5D SUMMA: Coded matrix multiplication for high performance computing,” 2019.
 [23] A. M. Subramaniam, A. Heiderzadeh, and K. R. Narayanan, “Collaborative decoding of polynomial codes for distributed computation,” arXiv preprint arXiv:1905.13685, 2019.
 [24] J. Zhu, Y. Pu, V. Gupta, C. Tomlin, and K. Ramchandran, “A sequential approximation framework for coded distributed optimization,” in Annual Allerton Conf. on Communication, Control, and Computing, 2017. IEEE, 2017, pp. 1240–1247.

 [25] S. Dutta, V. Cadambe, and P. Grover, “Short-dot: Computing large linear transforms distributedly using coded short dot products,” in Advances in Neural Information Processing Systems, 2016, pp. 2100–2108.
 [26] J. Kosaian, K. Rashmi, and S. Venkataraman, “Learning a code: Machine learning for approximate nonlinear coded computation,” arXiv preprint arXiv:1806.01259, 2018.
 [27] Y. Yang, M. Chaudhari, P. Grover, and S. Kar, “Coded iterative computing using substitute decoding,” arXiv preprint arXiv:1805.06046, 2018.
 [28] Y. Yang, P. Grover, and S. Kar, “Coded distributed computing for inverse problems,” in Advances in Neural Information Processing Systems 30. Curran Associates, Inc., 2017, pp. 709–719.
 [29] M. Ye and E. Abbe, “Communicationcomputation efficient gradient coding,” arXiv preprint arXiv:1802.03475, 2018.
 [30] L. Page, S. Brin, R. Motwani, and T. Winograd, “The pagerank citation ranking: Bringing order to the web.” Stanford InfoLab, Tech. Rep., 1999.
 [31] P. Gupta, A. Goel, J. Lin, A. Sharma, D. Wang, and R. Zadeh, “Wtf: The who to follow service at twitter,” in Proceedings of the 22nd international conference on World Wide Web. ACM, 2013, pp. 505–514.

 [32] C. Ding and X. He, “K-means clustering via principal component analysis,” in Proceedings of the Twenty-First International Conference on Machine Learning, ser. ICML ’04, New York, NY, USA, 2004, p. 29.
 [33] E. Solomonik and J. Demmel, “Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms,” in Proceedings of the 17th International Conference on Parallel Processing, 2011, pp. 90–109.
 [34] R. A. van de Geijn and J. Watts, “Summa: Scalable universal matrix multiplication algorithm,” Tech. Rep., 1995.
 [35] D. S. Papailiopoulos and A. G. Dimakis, “Locally repairable codes,” IEEE Transactions on Information Theory, vol. 60, no. 10, pp. 5843–5855, 2014.
 [36] P. Gopalan, C. Huang, H. Simitci, and S. Yekhanin, “On the locality of codeword symbols,” IEEE Transactions on Information theory, vol. 58, no. 11, pp. 6925–6934, 2012.
 [37] H. Avron, K. L. Clarkson, and D. P. Woodruff, “Faster kernel ridge regression using sketching and preconditioning,” SIAM Journal on Matrix Analysis and Applications, vol. 38, no. 4, pp. 1116–1138, 2017.
 [38] A. Rahimi and B. Recht, “Random features for largescale kernel machines,” in Advances in neural information processing systems, 2008, pp. 1177–1184.
 [39] C.-C. Chang and C.-J. Lin, “Libsvm: a library for support vector machines,” ACM transactions on intelligent systems and technology (TIST), vol. 2, no. 3, p. 27, 2011.

 [40] P. Jain, P. Netrapalli, and S. Sanghavi, “Low-rank matrix completion using alternating minimization,” in Proceedings of the forty-fifth annual ACM symposium on Theory of computing. ACM, 2013, pp. 665–674.
 [41] Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” Computer, no. 8, pp. 30–37, 2009.
 [42] R. A. Sadek, “Svd based image processing applications: state of the art, contributions and research challenges,” arXiv preprint arXiv:1211.7102, 2012.
 [43] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, “Complex networks: Structure and dynamics,” Physics reports, vol. 424, no. 4–5, pp. 175–308, 2006.
 [44] P.-G. Martinsson, “A fast direct solver for a class of elliptic partial differential equations,” Journal of Scientific Computing, vol. 38, no. 3, pp. 316–330, 2009.
 [45] M. Powell, “Updating conjugate directions by the bfgs formula,” Mathematical Programming, vol. 38, no. 1, p. 29, 1987.
 [46] P. Sabino, “Monte carlo methods and path-generation techniques for pricing multi-asset path-dependent options,” arXiv preprint arXiv:0710.0850, 2007.
 [47] R. Eubank and S. Wang, “The equivalence between the cholesky decomposition and the kalman filter,” The American Statistician, vol. 56, no. 1, pp. 39–43, 2002.
 [48] G. Ballard, J. Demmel, O. Holtz, and O. Schwartz, “Communicationoptimal parallel and sequential cholesky decomposition,” SIAM Journal on Scientific Computing, vol. 32, no. 6, pp. 3495–3523, 2010.