# Coded Fourier Transform

We consider the problem of computing the Fourier transform of high-dimensional vectors, distributedly over a cluster of machines consisting of a master node and multiple worker nodes, where the worker nodes can only store and process a fraction of the inputs. We show that by exploiting the algebraic structure of the Fourier transform operation and leveraging concepts from coding theory, one can efficiently deal with the straggler effects. In particular, we propose a computation strategy, named coded FFT, which achieves the optimal recovery threshold, defined as the minimum number of workers that the master node needs to wait for in order to compute the output. This is the first code that achieves the optimum robustness in terms of tolerating stragglers or failures for computing Fourier transforms. Furthermore, the reconstruction process for coded FFT can be mapped to MDS decoding, which can be solved efficiently. Moreover, we extend coded FFT to settings including computing general $n$-dimensional Fourier transforms, and provide the optimal computing strategy for those settings.


## I Introduction

The discrete Fourier transform (DFT) is one of the fundamental operations that has been broadly used in many applications, including signal processing, data analysis, and machine learning algorithms. Due to the increasing size and dimension of data, many modern applications require a massive amount of computation and storage, which cannot be provided by a single machine. Thus, finding efficient designs of algorithms, including the DFT, in a distributed computing environment has gained considerable attention. For example, several distributed DFT implementations, such as FFTW [1] and PFFT [2], have been introduced and widely used.

A major performance bottleneck in distributed computing problems is the latency caused by “stragglers” [3], the small fraction of computing nodes at the high latency tail that prolongs the computation. Mitigating this effect involves creating certain types of “computation redundancy”, such that the computation can be completed even without collecting the intermediate results assigned to the stragglers. For example, one can replicate the same computing task onto multiple nodes to provide this redundancy [4].

Recently, it has been shown that coding-theoretic concepts that were originally developed for communication systems can also be useful in distributed computing systems, playing a transformational role by improving the performance of computation in various aspects. In this context, two “coded computing” concepts have been proposed: The first one, introduced in [5, 6, 7], injects computation redundancy in order to alleviate the communication bottleneck and accelerate distributed computing algorithms (e.g., Coded TeraSort [8]). The second coded computing concept, introduced in [9, 10], utilizes coding to handle the straggler effects and speed up the computations for distributed matrix multiplication. This technique has been further extended to decentralized “master-less” architectures [11], distributed convolution [12], and short-dot linear transforms [13].

More recently, the polynomial code [15] has been proposed for distributed massive matrix multiplication, for optimal straggler effect mitigation. It was shown that by designing a pair of codes whose multiplicative product forms a Maximum Distance Separable (MDS) code, one can order-wise improve upon the prior arts in terms of the recovery threshold (i.e., the number of workers that the master needs to wait for in order to be able to compute the final output), while optimizing other metrics including computation latency and communication load. This provides the first code that achieves the optimum recovery threshold. Furthermore, it allows mapping the reconstruction problem of the final output to polynomial interpolation, which can be solved efficiently, bridging the rich literature of algebraic coding and distributed matrix multiplication. Moreover, a variation of the polynomial code was applied to coded convolution, and its order-optimality has been proved.

In this work, our focus is on mitigating the straggler effects for distributed DFT algorithms. Specifically, we consider a distributed Fourier transform problem where we aim to compute the discrete Fourier transform $X$ of an input vector $x$. As shown in Figure 1, the computation is carried out using a distributed system with a master node and $N$ worker nodes that can each store and process a $\frac{1}{m}$ fraction of the input vector, for some parameter $m$. The vector stored at each worker can be designed as an arbitrary function of the input vector $x$. Each worker can also compute an intermediate result of the same length based on an arbitrary function of its stored vector, and return it to the master. By designing the computation strategy at each worker (i.e., designing the functions to store the vector and to compute the intermediate result), the master only needs to wait for the fastest subset of workers before recovering the final output $X$, which mitigates the straggler effects.

Our main result in this paper is the development of an optimal computing strategy, referred to as coded FFT. This computing design achieves the optimum recovery threshold $K^* = m$, while allowing the master to decode the final output with low complexity. Furthermore, we extend this technique to settings including computing multi-dimensional Fourier transforms, and propose the corresponding optimal computation strategies.

To develop coded FFT, we leverage two key algebraic properties of the Fourier transform operation. First, due to its recursive structure, we can decompose the DFT into multiple identical and simpler operations (i.e., DFTs over shorter vectors), which suits the distributed computing framework and can be potentially assigned to multiple worker nodes. Second, due to the linearity of the Fourier transform, we can apply linear codes on the input data, which commute with the DFT operation and translate to the computing results. These two properties allow us to develop a coded computing strategy where the outputs from the worker nodes have certain MDS properties, which can optimally mitigate straggler effects.

## II System Model and Main Results

We consider a problem of computing the discrete Fourier transform in a distributed computing environment with a master node and $N$ worker nodes. The input $x$ and the output $X$ are vectors of length $s$ over an arbitrary field $\mathbb{F}$ with a primitive $s$th root of unity, denoted by $\omega_s$ (when the base field is finite, we assume it is sufficiently large). We want to compute the elements of the output vector, denoted by $X_i$, as a function of the elements of the input vector, denoted by $x_j$, based on the following equation.

$$X_i \triangleq \sum_{j=0}^{s-1} x_j \omega_s^{ij} \quad \text{for } i \in \{0,\dots,s-1\}. \tag{1}$$

Each one of the $N$ workers can store and process a $\frac{1}{m}$ fraction of the vector. Specifically, given a parameter $m$ satisfying $m \mid s$, each worker $i$ can store an arbitrary vector $a_i$ of length $\frac{s}{m}$ as a function of the input $x$, compute an intermediate result $b_i$ as a function of $a_i$, and return $b_i$ to the server. The server only waits for the results from a subset of workers, before recovering the final output $X$ using certain decoding functions, given these intermediate results returned from the workers.

Given the above system model, we can design the functions to compute the $a_i$'s and $b_i$'s for the workers. We refer to these functions as the encoding functions and the computing functions. We say that a computation strategy consists of $N$ encoding functions and $N$ computing functions, denoted by

$$\mathbf{f} = (f_0, f_1, \dots, f_{N-1}), \tag{2}$$

and

$$\mathbf{g} = (g_0, g_1, \dots, g_{N-1}), \tag{3}$$

that are used to compute the $a_i$'s and $b_i$'s. Specifically, given a computation strategy, each worker $i$ stores $a_i$ and computes $b_i$ according to the following equations:

$$a_i = f_i(x), \tag{4}$$
$$b_i = g_i(a_i). \tag{5}$$

For any integer $k$, we say a computation strategy is $k$-recoverable if the master can recover $X$ given the computing results from any $k$ workers using certain decoding functions. We define the recovery threshold of a computation strategy as the minimum integer $k$ such that the computation strategy is $k$-recoverable.

The goal of this paper is to find the optimal computation strategy that achieves the minimum possible recovery threshold, while allowing efficient decoding at the master node. This essentially provides the computation strategy with the maximum robustness against the straggler effect, while requiring only a low additional computation overhead.

We summarize our main results in the following theorems:

###### Theorem 1.

In a distributed Fourier transform problem of computing $X$ using $N$ workers that can each store and process a $\frac{1}{m}$ fraction of the input $x$, we can achieve the following recovery threshold:

$$K^* = m. \tag{6}$$

Furthermore, the above recovery threshold can be achieved by a computation strategy, referred to as the Coded FFT, which allows efficient decoding at the master node, i.e., with a complexity that scales linearly with respect to the size of the input data.

Moreover, we can prove the optimality of coded FFT, which is formally stated in the following theorem.

###### Theorem 2.

In a distributed Fourier transform environment with $N$ workers that can each store and process a $\frac{1}{m}$ fraction of the input vector, the following recovery threshold

$$K^* = m \tag{7}$$

is optimal when the base field is finite. (Similar results can be generalized to the case where the base field is infinite, by taking into account some practical implementation constraints; see Section IV.)

###### Remark 1.

The above converse demonstrates that our proposed coded FFT design is optimal in terms of recovery threshold. Moreover, we can prove that coded FFT is also optimal in terms of the communication load (see Section IV).

###### Remark 2.

While the above results focus on developing the optimal coding technique for the one-dimensional Fourier transform, the techniques developed in this paper can be easily generalized to $n$-dimensional Fourier transform operations. Specifically, we can show that in a general $n$-dimensional Fourier transform setting, the optimum recovery threshold can still be achieved, using a generalized version of the coded FFT strategy (see Section V). Similarly, this also generalizes to the scenario where we aim to compute the Fourier transforms of multiple input vectors, where the optimum recovery threshold can also be achieved (see Section VI).

###### Remark 3.

Although the coded FFT strategy was designed with a focus on optimally handling straggler issues, it can also be applied to the fault-tolerant computing setting (e.g., as considered in [16, 17], where a module can produce arbitrarily erroneous results under failure), to improve robustness to failures in computing. Specifically, given that coded FFT produces computing results that are coded by an MDS code, it also enables detecting, or correcting, the maximum number of errors even when the erroneous workers can produce arbitrary computing results.

## III Coded FFT: the Optimal Computation Strategy

In this section, we prove Theorem 1 by proposing an optimal computation strategy, referred to as Coded FFT. We start by demonstrating this computation strategy and the corresponding decoding procedures through a motivating example.

### III-A Motivating Example

Consider a distributed Fourier transform problem with an input vector $x = (x_0, x_1, x_2, x_3)$ of length $s = 4$, $N = 3$ workers, and a design parameter $m = 2$. We want to compute the Fourier transform $X$ of $x$, which is specified as follows.

$$\begin{bmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -\sqrt{-1} & -1 & \sqrt{-1} \\ 1 & -1 & 1 & -1 \\ 1 & \sqrt{-1} & -1 & -\sqrt{-1} \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix}. \tag{8}$$

We aim to design a computation strategy to achieve a recovery threshold of $2$.

In order to design the optimal strategy, we exploit two key properties of the DFT operation. Firstly, DFT has the following recursive structure:

$$X_i = \sum_{j=0}^{3} x_j \left(-\sqrt{-1}\right)^{ij} \tag{9}$$
$$\phantom{X_i} = \sum_{k=0}^{1} c_{0,k}(-1)^{ik} + \left(-\sqrt{-1}\right)^i \sum_{k=0}^{1} c_{1,k}(-1)^{ik}, \tag{10}$$

where vectors $c_0$ and $c_1$ are the interleaved versions of the input vector:

$$c_0 = [x_0, x_2], \tag{11}$$
$$c_1 = [x_1, x_3]. \tag{12}$$

This structure decomposes the Fourier transform into two identical and simpler operations: the Fourier transforms of $c_0$ and $c_1$, defined as follows.

$$C_{i,j} \triangleq \sum_{k=0}^{1} c_{i,k} (-1)^{jk}. \tag{13}$$

Hence, computing the Fourier transform of a vector is essentially computing the Fourier transforms of its sub-components. This property has been exploited in the context of single machine algorithms and led to the famous Cooley-Tukey algorithm [18].

On the other hand, we exploit the linearity of the DFT operation to inject linear codes into the computation to provide robustness against stragglers. Specifically, given that the Fourier transform of any linearly coded vector equals the same linear combination of the Fourier transforms of the individual vectors, by injecting an MDS code on the interleaved vectors $c_0$ and $c_1$ and computing their Fourier transforms, we obtain a coded version of the vectors $C_0$ and $C_1$. This provides the redundancy to mitigate the straggler effects.

Specifically, we encode $c_0$ and $c_1$ using a $(3,2)$-MDS code, and let each worker store one of the coded vectors, i.e.,

$$a_0 = c_0, \tag{14}$$
$$a_1 = c_1, \tag{15}$$
$$a_2 = c_0 + c_1. \tag{16}$$

Each worker computes the Fourier transform of its assigned vector. Specifically, each worker $i$ computes

$$\begin{bmatrix} b_{i,0} \\ b_{i,1} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} a_{i,0} \\ a_{i,1} \end{bmatrix}. \tag{17}$$

To prove that this computation strategy gives a recovery threshold of $2$, we need to design a valid decoding function for any subset of $2$ workers. We demonstrate this decodability through a representative scenario, where the master receives the computation results from worker $1$ and worker $2$, as shown in Figure 2. The decodability for the other possible scenarios can be proved similarly.

According to the designed computation strategy, the server can first recover the computing result of worker $0$ given the results from the other workers as follows:

$$b_0 = b_2 - b_1. \tag{18}$$

After recovering $b_0$, we can verify that the server can then recover the final output using $b_0$ and $b_1$ as follows:

$$\begin{bmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{bmatrix} = \begin{bmatrix} b_{0,0} + b_{1,0} \\ b_{0,1} - \sqrt{-1}\cdot b_{1,1} \\ b_{0,0} - b_{1,0} \\ b_{0,1} + \sqrt{-1}\cdot b_{1,1} \end{bmatrix}. \tag{19}$$
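The example above can be reproduced numerically. The following is a minimal sketch using NumPy, whose FFT convention matches $\omega_4 = -\sqrt{-1}$ in equation (8); the concrete input values are illustrative:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)

# Interleave the input: c0 = even entries, c1 = odd entries.
c0, c1 = x[0::2], x[1::2]

# Encode with the (3, 2) MDS code of (14)-(16); worker i stores a[i].
a = [c0, c1, c0 + c1]

# Each worker computes the 2-point DFT of its stored vector.
b = [np.fft.fft(ai) for ai in a]

# Suppose worker 0 straggles: recover b[0] from the other two results.
b0 = b[2] - b[1]
b1 = b[1]

# Combine per equation (19): X_i = C_{0, i mod 2} + (-sqrt(-1))^i * C_{1, i mod 2}.
X = np.array([b0[i % 2] + (-1j) ** i * b1[i % 2] for i in range(4)])

assert np.allclose(X, np.fft.fft(x))
```

Dropping worker 1 or worker 2 instead is handled analogously, by solving for the missing result from the MDS relations (14)–(16).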

### III-B General Description of Coded FFT

Now we present an optimal computing strategy that achieves the optimum recovery threshold stated in Theorem 1, for any parameter values of $N$ and $m$. First of all, we interleave the input vector $x$ into $m$ vectors of length $\frac{s}{m}$, denoted by $c_0, \dots, c_{m-1}$. Specifically, we let the $j$th element of each $c_i$ equal

$$c_{i,j} = x_{i+jm}. \tag{20}$$

We denote the discrete Fourier transform of each interleaved vector $c_i$, with respect to $\omega_s^m$, by $C_i$. Specifically,

$$C_{i,j} \triangleq \sum_{k=0}^{\frac{s}{m}-1} c_{i,k}\, \omega_s^{jkm} \quad \text{for } j \in \left\{0,\dots,\tfrac{s}{m}-1\right\}. \tag{21}$$

Note that if the master node can recover all the above Fourier transforms of the interleaved vectors, the final output can be computed based on the following identities:

$$X_i = \sum_{j=0}^{m-1} \sum_{k=0}^{\frac{s}{m}-1} c_{j,k}\, \omega_s^{i(j+km)} \tag{22}$$
$$\phantom{X_i} = \sum_{j=0}^{m-1} C_{j,\,\mathrm{mod}(i,\frac{s}{m})}\, \omega_s^{ij}, \tag{23}$$

where $\mathrm{mod}(i, \frac{s}{m})$ denotes the remainder of $i$ divided by $\frac{s}{m}$.

Based on this observation, we can naturally view the distributed Fourier transform problem as a problem of distributedly computing a list of linear transformations, i.e., computing the Fourier transforms of the $c_i$'s. We inject the redundancy as follows to provide robustness to the computation:

We first encode the $c_i$'s using an arbitrary $(N, m)$-MDS code, where the coded vectors are denoted by $a_0, \dots, a_{N-1}$ and are assigned to the $N$ workers correspondingly. Then each worker $i$ computes the Fourier transform of $a_i$, and returns it to the master. Given the linearity of the Fourier transform, the computing results are essentially linear combinations of the Fourier transforms $C_i$'s, which are coded by the same MDS code. Hence, after the master receives any $m$ computing results, it can decode the message symbols $C_i$'s, and proceed to recover the final result. This achieves the recovery threshold of $K^* = m$.
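As an illustration of the general scheme, the following sketch instantiates it with a Vandermonde (Reed-Solomon-style) MDS code over the complex numbers. The parameter values $s=12$, $m=3$, $N=5$, the evaluation points, and the chosen subset of surviving workers are assumptions for the example, not part of the original design:

```python
import numpy as np

# Illustrative parameters: input length s = 12, m = 3 interleaved
# vectors, N = 5 workers, tolerating N - m = 2 stragglers.
s, m, N = 12, 3, 5
x = np.arange(s, dtype=complex)

# Interleave per equation (20): c[i][j] = x[i + j*m], each of length s/m.
c = np.stack([x[i::m] for i in range(m)])

# (N, m) Vandermonde MDS encoding: worker p stores a[p] = sum_i c[i] * p**i.
G = np.vander(np.arange(N), m, increasing=True).astype(complex)
a = G @ c

# Each worker returns the DFT of its stored vector; by linearity these
# are the same MDS combinations of the messages C[i] = DFT(c[i]).
b = np.fft.fft(a, axis=1)

# The master waits for any m workers, e.g. workers {1, 3, 4}, and decodes.
S = [1, 3, 4]
C = np.linalg.solve(G[S], b[S])

# Recombine per equation (23): X_i = sum_j C[j, i mod s/m] * w**(i*j).
w = np.exp(-2j * np.pi / s)
X = np.array([sum(C[j, i % (s // m)] * w ** (i * j) for j in range(m))
              for i in range(s)])

assert np.allclose(X, np.fft.fft(x))
```

Any other size-$m$ subset of workers decodes the same way, since every $m \times m$ submatrix of a Vandermonde matrix with distinct evaluation points is invertible.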

###### Remark 4.

The recovery threshold achieved by coded FFT cannot be achieved using computation strategies that were developed for generic matrix-by-vector multiplication in the literature [9, 13]. Specifically, both the conventional uncoded repetition strategy and the short-dot (or short-MDS) strategy provided in [9, 13] require recovery thresholds that grow with the number of workers $N$. Hence, by developing a coding strategy for the specific purpose of computing the Fourier transform, we can achieve an order-wise improvement in the recovery threshold.

### III-C Decoding Complexity of Coded FFT

Now we show that coded FFT allows an efficient decoding algorithm at the master for recovering the output. After receiving the computing results, the master recovers the output in two steps: decoding the MDS code, and then computing $X$ from the intermediate values $C_i$'s.

For the first step, the master needs to decode an $(N, m)$-MDS code $\frac{s}{m}$ times, once per coordinate. This can be computed efficiently, by selecting an MDS code with low decoding complexity for the coded FFT design. There have been various works on finding efficiently decodable MDS codes (e.g., [19, 20]). In general, the decoding complexity of an $(N, m)$-MDS code can be bounded by that of Reed-Solomon codes [21] decoded using fast polynomial interpolation [22]. Consequently, the first step of the decoding algorithm has a complexity that scales linearly with respect to the input size $s$.

For the second step, the master node needs to evaluate equation (23) to recover the final result. Equivalently, the master needs to compute

$$X_{i+j\frac{s}{m}} = \sum_{k=0}^{m-1} C_{k,i}\, \omega_s^{ik+jk\frac{s}{m}} \tag{24}$$

for every $i \in \{0,\dots,\frac{s}{m}-1\}$ and $j \in \{0,\dots,m-1\}$. This is essentially the Fourier transform of $\frac{s}{m}$ vectors of length $m$, where the $k$th element of the $i$th vector equals $C_{k,i}\,\omega_s^{ik}$. In most cases (e.g., when $m$ is a power of $2$), the Fourier transform of a length-$m$ vector can be efficiently computed with a complexity of $O(m\log m)$, which is faster than the corresponding MDS decoding procedure used in the first step. In general, the computational complexity of the Fourier transform is upper bounded by $O(m\log m)$, which can be achieved by a combination of Bluestein's algorithm and fast polynomial multiplication [23]. Hence, the complexity of the second step is at most $O(s\log m)$.
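The second decoding step can be sketched as follows: for each residue $i$, apply the twiddle factors and take an $m$-point FFT, which fills in all outputs congruent to $i$ modulo $\frac{s}{m}$ at once. The parameter values below are illustrative:

```python
import numpy as np

# Illustrative parameters: s = 12, m = 3, so s/m = 4 residue classes.
s, m = 12, 3
x = np.random.default_rng(0).standard_normal(s) + 0j
w = np.exp(-2j * np.pi / s)

# Intermediate values: C[k] = DFT of the k-th interleaved vector x[k::m].
C = np.fft.fft(np.stack([x[k::m] for k in range(m)]), axis=1)

# Second decoding step, following equation (24): for each residue i,
# apply the twiddle factors w**(i*k) and take an m-point DFT over k,
# which produces X[i + j*(s/m)] for all j at once.
X = np.empty(s, dtype=complex)
for i in range(s // m):
    d = C[:, i] * w ** (i * np.arange(m))  # twiddled column, length m
    X[i::s // m] = np.fft.fft(d)

assert np.allclose(X, np.fft.fft(x))
```

This is exactly the recombination step of the Cooley-Tukey decomposition, performed once per residue class.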

To conclude, our proposed coded FFT strategy allows efficient decoding with a complexity that is linear in the input size $s$. The decoding computation is bottlenecked by the first step of the algorithm, which is essentially decoding an $(N, m)$-MDS code $\frac{s}{m}$ times. To achieve the best performance, one can pick any MDS code with a decoding algorithm that requires the minimum amount of computation for the problem scenario at hand [24].

## IV Optimality of Coded FFT

In this section, we prove Theorem 2 through a matching information-theoretic converse. Specifically, we need to prove that for any computation strategy, the master needs to wait for at least $m$ workers in order to recover the final output.

Recall that Theorem 2 is stated for finite fields, so we can let the input $x$ be uniformly randomly sampled from $\mathbb{F}^s$. Given the invertibility of the discrete Fourier transform, the output vector $X$ under this input distribution must also be uniformly random on $\mathbb{F}^s$. This means that the master node essentially needs to recover a random variable with an entropy of $s\log_2|\mathbb{F}|$ bits. Note that each worker returns $\frac{s}{m}$ elements of $\mathbb{F}$, providing at most $\frac{s}{m}\log_2|\mathbb{F}|$ bits of information. By applying a cut-set bound around the master, we can show that results from at least $m$ workers need to be collected. Thus the recovery threshold of $m$ is optimal.
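In symbols, the counting argument reads (with $K$ denoting the number of workers the master waits for):

```latex
H(X) = s\log_2|\mathbb{F}|, \qquad
H(b_i) \le \tfrac{s}{m}\log_2|\mathbb{F}|
\;\Longrightarrow\;
K \cdot \tfrac{s}{m}\log_2|\mathbb{F}| \ \ge\ s\log_2|\mathbb{F}|
\;\Longrightarrow\; K \ge m.
```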

###### Remark 5.

Besides the recovery threshold, the communication load is also an important metric in distributed computing. The above cut-set converse in fact directly bounds the communication load needed for computing the Fourier transform, proving that at least $s\log_2|\mathbb{F}|$ bits of communication are needed. Note that our proposed coded FFT uses exactly this amount of communication to deliver the intermediate results to the server. Hence, it is also optimal in terms of communication.

###### Remark 6.

Although Theorem 2 focuses on the scenario where the base field is finite, similar results can be obtained when the base field is infinite (e.g., the real or complex numbers), by taking into account practical implementation constraints. For example, any computing device can only keep variables reliably with finite precision. This quantization requirement in fact allows applying the cut-set bound for the distributed Fourier transform problem, even when the base field is infinite, and enables proving the optimality of coded FFT in those scenarios.

## V n-dimensional Coded FFT

The Fourier transform in higher-dimensional spaces is a frequently used operation in image processing and machine learning applications. In this section, we consider the problem of designing optimal codes for this operation. We show that the coded FFT strategy can be naturally extended to this scenario, and achieves the optimum performance. We start by formulating the system model and stating the main results.

### V-A System Model and Main Results

We consider a problem of computing an $n$-dimensional discrete Fourier transform in a distributed computing environment with a master node and $N$ worker nodes. The input $t$ and the output $T$ are tensors of order $n$, with dimensions $s_0 \times s_1 \times \cdots \times s_{n-1}$. For brevity, we denote the total number of elements in each tensor by $s$, i.e., $s = \prod_{i=0}^{n-1} s_i$.

The elements of the tensors belong to a field $\mathbb{F}$ with a primitive $s_i$th root of unity for each $i$, denoted by $\omega_{s_i}$. We want to compute the elements of the output tensor $T$, denoted by $T_{i_0 i_1 \dots i_{n-1}}$, as a function of the elements of the input tensor, denoted by $t_{j_0 j_1 \dots j_{n-1}}$, based on the following equation.

$$T_{i_0 i_1 \dots i_{n-1}} \triangleq \sum_{j_\ell \in \{0,\dots,s_\ell-1\},\, \forall \ell \in \{0,\dots,n-1\}} t_{j_0 j_1 \dots j_{n-1}} \prod_{k=0}^{n-1} \omega_{s_k}^{i_k j_k}. \tag{25}$$

Each one of the $N$ workers can store and process a $\frac{1}{m}$ fraction of the tensor. Specifically, given a parameter $m$, each worker $i$ can store an arbitrary vector $a_i$ of length $\frac{s}{m}$ as a function of the input $t$, compute an intermediate result $b_i$ as a function of $a_i$, and return $b_i$ to the server. The server only waits for the results from a subset of workers, before recovering the final output $T$ using certain decoding functions, given these intermediate results returned from the workers.

Similar to the one-dimensional Fourier transform problem, we design the functions to compute the $a_i$'s and $b_i$'s for the workers, and refer to them as the computation strategy. We aim to find an optimal computation strategy that achieves the minimum possible recovery threshold, while allowing efficient decoding at the master node.

Our main results are summarized in the following theorems:

###### Theorem 3.

In an $n$-dimensional distributed Fourier transform problem of computing $T$ using $N$ workers that can each store and process a $\frac{1}{m}$ fraction of the input $t$, we can achieve the following recovery threshold:

$$K^* = m. \tag{26}$$

Furthermore, the above recovery threshold can be achieved by a computation strategy, referred to as the $n$-dimensional Coded FFT, which allows efficient decoding at the master node, i.e., with a complexity that scales linearly with respect to the size of the input data.

Moreover, we can prove the optimality of $n$-dimensional coded FFT, which is formally stated in the following theorem.

###### Theorem 4.

In an $n$-dimensional distributed Fourier transform environment with $N$ workers that can each store and process a $\frac{1}{m}$ fraction of the input tensor from a finite field $\mathbb{F}$, the following recovery threshold

$$K^* = m \tag{27}$$

is optimal. (Similar to the one-dimensional case, this optimality can be generalized to base fields with infinite cardinality, by taking into account some practical implementation constraints.)

### V-B General Description of n-dimensional Coded FFT

We first prove Theorem 3 by proposing an optimal computation strategy, referred to as $n$-dimensional Coded FFT, that achieves the recovery threshold $K^* = m$ for any parameter values of $N$ and $m$.

First of all, we interleave the input tensor $t$ into $m$ smaller tensors, each with a total size of $\frac{s}{m}$. Specifically, we can find integers $m_0, \dots, m_{n-1}$ such that $m = \prod_{i=0}^{n-1} m_i$ and $m_i \mid s_i$ for each $i$, and for each tuple $(i_0, \dots, i_{n-1})$ satisfying $i_\ell \in \{0, \dots, m_\ell - 1\}$, we define a tensor $c_{i_0 i_1 \dots i_{n-1}}$ with dimensions $\frac{s_0}{m_0} \times \cdots \times \frac{s_{n-1}}{m_{n-1}}$, with the following elements:

$$c_{i_0 i_1 \dots i_{n-1},\, j_0 j_1 \dots j_{n-1}} = t_{(i_0+j_0 m_0)(i_1+j_1 m_1)\dots(i_{n-1}+j_{n-1} m_{n-1})}. \tag{28}$$

We denote the discrete Fourier transform of each interleaved tensor by $C_{i_0 i_1 \dots i_{n-1}}$. Specifically,

$$C_{i_0 i_1 \dots i_{n-1},\, j_0 j_1 \dots j_{n-1}} \triangleq \sum_{j'_\ell \in \{0,\dots,\frac{s_\ell}{m_\ell}-1\},\, \forall \ell \in \{0,\dots,n-1\}} c_{i_0 i_1 \dots i_{n-1},\, j'_0 j'_1 \dots j'_{n-1}} \prod_{k=0}^{n-1} \omega_{s_k}^{j_k j'_k m_k} \tag{29}$$

for every $j_\ell \in \{0, \dots, \frac{s_\ell}{m_\ell} - 1\}$.

Note that if the master node can recover all the above Fourier transforms of the interleaved tensors, the final output can be computed based on the following identity:

$$T_{i_0 i_1 \dots i_{n-1}} = \sum_{j_\ell \in \{0,\dots,m_\ell-1\},\, \forall \ell \in \{0,\dots,n-1\}} C_{j_0 j_1 \dots j_{n-1},\, i'_0 i'_1 \dots i'_{n-1}} \prod_{k=0}^{n-1} \omega_{s_k}^{i_k j_k}, \tag{31}$$

where $i'_k = \mathrm{mod}(i_k, \frac{s_k}{m_k})$. Hence, we can view this distributed Fourier transform problem as a problem of computing a list of linear transformations, and we inject the redundancy using an MDS code, similar to the one-dimensional coded FFT strategy.

Specifically, we encode the $c_{i_0 \dots i_{n-1}}$'s using an arbitrary $(N, m)$-MDS code, where the coded tensors are denoted by $a_0, \dots, a_{N-1}$ and are assigned to the $N$ workers correspondingly. Then each worker $i$ computes the Fourier transform of tensor $a_i$, and returns it to the master. Given the linearity of the Fourier transform, the computing results are essentially linear combinations of the Fourier transforms $C_{i_0 \dots i_{n-1}}$'s, which are coded by the same MDS code. Hence, after the master receives any $m$ computing results, it can decode the message symbols, and proceed to recover the final result. This achieves the recovery threshold of $K^* = m$.
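A two-dimensional instance of this strategy can be sketched as follows; the parameters $s_0 = s_1 = 4$, $m_0 = m_1 = 2$ (so $m = 4$), $N = 6$, and the Vandermonde MDS code are illustrative assumptions, not prescribed by the scheme:

```python
import numpy as np

# Illustrative 2-D instance: s0 = s1 = 4, per-dimension interleaving
# m0 = m1 = 2, so m = 4 data symbols, with N = 6 workers.
s0 = s1 = 4
m0 = m1 = 2
m, N = m0 * m1, 6
t = np.arange(s0 * s1, dtype=complex).reshape(s0, s1)

# Interleave into m smaller tensors, one per residue pair (i0, i1).
c = np.stack([t[i0::m0, i1::m1] for i0 in range(m0) for i1 in range(m1)])

# (N, m) Vandermonde MDS encoding; worker p stores a[p].
G = np.vander(np.arange(N), m, increasing=True).astype(complex)
a = np.einsum('pk,kuv->puv', G, c)

# Each worker computes the 2-D DFT of its stored tensor.
b = np.fft.fft2(a)

# The master decodes from any m = 4 workers, e.g. workers {0, 2, 3, 5}.
S = [0, 2, 3, 5]
C = np.einsum('kp,puv->kuv', np.linalg.inv(G[S]), b[S])

# Recombine, applying the identity (31) dimension by dimension.
w0, w1 = np.exp(-2j * np.pi / s0), np.exp(-2j * np.pi / s1)
T = np.empty((s0, s1), dtype=complex)
for n0 in range(s0):
    for n1 in range(s1):
        T[n0, n1] = sum(C[i0 * m1 + i1, n0 % (s0 // m0), n1 % (s1 // m1)]
                        * w0 ** (n0 * i0) * w1 ** (n1 * i1)
                        for i0 in range(m0) for i1 in range(m1))

assert np.allclose(T, np.fft.fft2(t))
```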

In terms of the decoding complexity, $n$-dimensional coded FFT also requires first decoding an MDS code, and then recovering the final result by computing Fourier transforms of tensors with lower dimensions. Similar to the one-dimensional FFT, the bottleneck of the decoding algorithm is also the first step, which requires decoding an $(N, m)$-MDS code $\frac{s}{m}$ times. This decoding complexity scales linearly with respect to the input size $s$. It can be further improved in practice by using any MDS code or MDS decoding algorithm with better computational performance.

### V-C Optimality of n-dimensional Coded FFT

The optimality of $n$-dimensional Coded FFT (i.e., Theorem 4) can be proved as follows. When the base field is finite, let the input $t$ be uniformly randomly sampled from $\mathbb{F}^{s_0 \times \cdots \times s_{n-1}}$. Given the invertibility of the $n$-dimensional discrete Fourier transform, the output tensor $T$ under this input distribution must also be uniformly random on $\mathbb{F}^{s_0 \times \cdots \times s_{n-1}}$. Hence, the master node needs to collect at least $s\log_2|\mathbb{F}|$ bits of information, where each worker can provide at most $\frac{s}{m}\log_2|\mathbb{F}|$ bits. By applying the cut-set bound around the master, we can prove that at least $m$ workers need to return their results to finish the computation.

Moreover, the above converse can also be extended to prove that $n$-dimensional Coded FFT is optimal in terms of communication.

## VI Coded FFT with Multiple Inputs

Coded FFT can also be extended to optimally handle computation tasks with multiple inputs. In this section, we consider the problem of designing optimal codes for this scenario.

### VI-A System Model and Main Results

We consider a problem of computing the $n$-dimensional discrete Fourier transforms of multiple input tensors, in a distributed computing environment with a master node and $N$ worker nodes. The inputs, denoted by $t_0, t_1, \dots$, are tensors of order $n$ and dimensions $s_0 \times \cdots \times s_{n-1}$. For brevity, we denote the total number of elements in each tensor by $s$, i.e., $s = \prod_{i=0}^{n-1} s_i$. The elements of the tensors belong to a field $\mathbb{F}$ with a primitive $s_i$th root of unity for each $i$, denoted by $\omega_{s_i}$. We aim to compute the Fourier transforms of the input tensors, which are denoted by $T_0, T_1, \dots$. Specifically, we want to compute the elements of the output tensors according to the following equation.

$$T_{h,\, i_0 i_1 \dots i_{n-1}} \triangleq \sum_{j_\ell \in \{0,\dots,s_\ell-1\},\, \forall \ell \in \{0,\dots,n-1\}} t_{h,\, j_0 j_1 \dots j_{n-1}} \prod_{k=0}^{n-1} \omega_{s_k}^{i_k j_k}. \tag{32}$$

Each one of the $N$ workers can store and process a $\frac{1}{m}$ fraction of the entire input. Specifically, given a parameter $m$, each worker $i$ can store an arbitrary vector $a_i$ as a function of the input tensors, compute an intermediate result $b_i$ as a function of $a_i$, and return $b_i$ to the server. The server only waits for the results from a subset of workers, before recovering the final outputs using certain decoding functions.

For this problem, we can find an optimal computation strategy that achieves the minimum possible recovery threshold, while allowing efficient decoding at the master node. We summarize this result in the following theorems:

###### Theorem 5.

For an $n$-dimensional distributed Fourier transform problem using $N$ workers, if each worker can store and process a $\frac{1}{m}$ fraction of the inputs, we can achieve the following recovery threshold:

$$K^* = m. \tag{33}$$

Furthermore, the above recovery threshold can be achieved by a computation strategy, which allows efficient decoding at the master node, i.e., with a complexity that scales linearly with respect to the size of the input data.

Moreover, we prove the optimality of our proposed computation strategy, which is formally stated in the following theorem.

###### Theorem 6.

In an $n$-dimensional distributed Fourier transform environment with $N$ workers that can each store and process a $\frac{1}{m}$ fraction of the input tensors, the following recovery threshold

$$K^* = m \tag{34}$$

is optimal when the base field is finite. (Similar to the single-input case, this optimality can be generalized to base fields with infinite cardinality, by taking into account some practical implementation constraints.)

### VI-B General Description of Coded FFT with Multiple Inputs

We prove Theorem 5 by proposing an optimal computation strategy that achieves the recovery threshold $K^* = m$. First of all, we interleave the inputs into smaller tensors. Specifically, we can find integers $m_0, \dots, m_{n-1}$, with $m_i \mid s_i$ for each $i$. For each input tensor $t_h$ and each tuple $(i_0, \dots, i_{n-1})$ satisfying $i_\ell \in \{0, \dots, m_\ell - 1\}$, we define a tensor $c_{h,\, i_0 \dots i_{n-1}}$ with dimensions $\frac{s_0}{m_0} \times \cdots \times \frac{s_{n-1}}{m_{n-1}}$, with the following elements:

$$c_{h,\, i_0 i_1 \dots i_{n-1},\, j_0 j_1 \dots j_{n-1}} = t_{h,\,(i_0+j_0 m_0)(i_1+j_1 m_1)\dots(i_{n-1}+j_{n-1} m_{n-1})}. \tag{35}$$

As explained in Section V-B, if the master node can obtain the Fourier transforms of all the interleaved tensors, then the final outputs can be computed efficiently. Hence, we can view this distributed Fourier transform problem as a problem of computing a list of linear transformations, and we inject the redundancy using an MDS code, similar to the single-input coded FFT strategy.

Specifically, we first bundle the input tensors into disjoint subsets of the same size, indexed by $\ell$. Within each subset, we view all interleaved tensors with the same index parameter $(i_0, \dots, i_{n-1})$ as one message symbol, and we encode all the symbols using an arbitrary $(N, m)$-MDS code. More precisely, for each subset $\ell$ and each index parameter, we create one such symbol, and we encode the resulting symbols using the $(N, m)$-MDS code. We assign the coded symbols to the $N$ workers, and each of them computes the Fourier transforms of all coded tensors contained in its symbol.

Given the linearity of the Fourier transform, the computing results are essentially linear combinations of the Fourier transforms of the interleaved tensors, which are coded by the same MDS code. Hence, after the master receives any $m$ computing results, it can decode all the needed intermediate values, and proceed to recover the final result. This achieves the recovery threshold of $K^* = m$.

In terms of the decoding complexity, one can show that the bottleneck of the decoding algorithm is the decoding of the $(N, m)$-MDS code, using arguments similar to those in Section V. This decoding complexity scales linearly with respect to the total input size. It can be further improved in practice by using any MDS code or MDS decoding algorithm with better computational performance.

### VI-C Optimality of Coded FFT with Multiple Inputs

The optimality of our proposed coded FFT strategy for multiple inputs (i.e., Theorem 6) can be proved as follows. When the base field is finite, let the input tensors be uniformly randomly sampled from . Given the invertibility of the discrete Fourier transform, the output tensors must also be uniformly random on . Hence, the master node needs to collect at least bits of information, while each worker can provide at most bits. By applying the cut-set bound around the master, we conclude that at least workers need to return their results to complete the computation.
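The counting behind this cut-set argument can be sketched as follows, where $N$ denotes the number of input tensors, $s$ the number of entries per tensor, $\mathbb{F}$ the base field, and $K$ the recovery threshold; these symbols are our shorthand for the elided quantities above. The master must collect all $N s \log_2 |\mathbb{F}|$ output bits, while each worker stores and returns at most a $1/K$ fraction of them, so any feasible set $\mathcal{W}$ of responding workers must satisfy

$$|\mathcal{W}| \cdot \frac{N s}{K} \log_2 |\mathbb{F}| \;\ge\; N s \log_2 |\mathbb{F}| \quad\Longrightarrow\quad |\mathcal{W}| \;\ge\; K.$$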

Moreover, the above converse also establishes the optimality of coded FFT in terms of communication.

## VII Conclusions

We considered the problem of computing the Fourier transform of high-dimensional vectors, distributedly over a cluster of machines. We proposed a computation strategy, named coded FFT, which achieves the optimal recovery threshold, defined as the minimum number of workers that the master node needs to wait for in order to compute the output. We also extended coded FFT to settings including computing general $n$-dimensional Fourier transforms, and provided the optimal computing strategy for those settings. There are several interesting future directions, including the practical demonstration of coded FFT over distributed clusters, generalization of coded FFT to master-less architectures, and extension of coded FFT to other computing architectures (e.g., edge and fog computing architectures [25, 26, 27]).

## VIII Acknowledgement

This work is in part supported by NSF grant CIF 1703575, ONR award N000141612189, and a research gift from Intel. This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001117C0053. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

## References

• [1] M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,” Proceedings of the IEEE, vol. 93, no. 2, pp. 216–231, 2005. Special issue on “Program Generation, Optimization, and Platform Adaptation”.
• [2] M. Pippig, “PFFT: An extension of FFTW to massively parallel architectures,” SIAM Journal on Scientific Computing, vol. 35, no. 3, pp. C213–C236, 2013.
• [3] J. Dean and L. A. Barroso, “The tail at scale,” Communications of the ACM, vol. 56, no. 2, pp. 74–80, 2013.
• [4] M. Zaharia, A. Konwinski, A. D. Joseph, R. H. Katz, and I. Stoica, “Improving MapReduce performance in heterogeneous environments,” OSDI, vol. 8, p. 7, Dec. 2008.
• [5] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “Coded MapReduce,” 53rd Annual Allerton Conference on Communication, Control, and Computing, Sept. 2015.
• [6] S. Li, M. A. Maddah-Ali, Q. Yu, and A. S. Avestimehr, “A fundamental tradeoff between computation and communication in distributed computing,” to appear in IEEE Transactions on Information Theory, 2017.
• [7] Q. Yu, S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “How to optimally allocate resources for coded distributed computing?,” in 2017 IEEE International Conference on Communications (ICC), pp. 1–7, May 2017.
• [8] S. Li, S. Supittayapornpong, M. A. Maddah-Ali, and A. S. Avestimehr, “Coded terasort,” 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics, 2017.
• [9] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding up distributed machine learning using codes,” arXiv preprint arXiv:1512.02673, 2015.
• [10] K. Lee, C. Suh, and K. Ramchandran, “High-dimensional coded matrix multiplication,” in Information Theory (ISIT), 2017 IEEE International Symposium on, pp. 2418–2422, IEEE, 2017.
• [11] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “A unified coding framework for distributed computing with straggling servers,” arXiv preprint arXiv:1609.01690, 2016.
• [12] S. Dutta, V. Cadambe, and P. Grover, “Coded convolution for parallel and distributed computing within a deadline,” arXiv preprint arXiv:1705.03875, 2017.
• [13] S. Dutta, V. Cadambe, and P. Grover, “Short-dot: Computing large linear transforms distributedly using coded short dot products,” in Advances In Neural Information Processing Systems, pp. 2100–2108, 2016.
• [14] R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, “Gradient coding,” arXiv preprint arXiv:1612.03301, 2016.
• [15] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “Polynomial codes: an optimal design for high-dimensional coded matrix multiplication,” arXiv preprint arXiv:1705.10464, 2017.
• [16] J. Y. Jou and J. A. Abraham, “Fault-tolerant FFT networks,” IEEE Transactions on Computers, vol. 37, pp. 548–561, May 1988.
• [17] S.-J. Wang and N. K. Jha, “Algorithm-based fault tolerance for FFT networks,” IEEE Transactions on Computers, vol. 43, pp. 849–854, Jul 1994.
• [18] J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Mathematics of Computation, vol. 19, no. 90, pp. 297–301, 1965.
• [19] F. Didier, “Efficient erasure decoding of Reed-Solomon codes,” arXiv preprint arXiv:0901.1886, 2009.
• [20] A. Soro and J. Lacan, “FNT-based Reed-Solomon erasure codes,” in 2010 7th IEEE Consumer Communications and Networking Conference, pp. 1–5, Jan 2010.
• [21] R. Roth, Introduction to coding theory. Cambridge University Press, 2006.
• [22] K. S. Kedlaya and C. Umans, “Fast polynomial factorization and modular composition,” SIAM Journal on Computing, vol. 40, no. 6, pp. 1767–1802, 2011.
• [23] D. G. Cantor and E. Kaltofen, “On fast multiplication of polynomials over arbitrary algebras,” Acta Informatica, vol. 28, no. 7, pp. 693–701, 1991.
• [24] S. Baktir and B. Sunar, “Achieving efficient polynomial multiplication in Fermat fields using the fast Fourier transform,” in Proceedings of the 44th Annual Southeast Regional Conference, ACM-SE 44, (New York, NY, USA), pp. 549–554, ACM, 2006.
• [25] S. Li, Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “Edge-facilitated wireless distributed computing,” in Global Communications Conference (GLOBECOM), 2016 IEEE, pp. 1–7, IEEE, 2016.
• [26] S. Li, Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “A scalable framework for wireless distributed computing,” IEEE/ACM Transactions on Networking, 2017.
• [27] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “Coding for distributed fog computing,” IEEE Communications Magazine, vol. 55, pp. 34–40, Apr. 2017.