Edge computing is emerging as a new paradigm that allows processing data at the edge of the network, where data is typically generated and collected. This paradigm advocates offloading tasks from an edge device to other edge/end devices, including mobile devices and/or servers in close proximity. Edge computing can be used in Internet of Things (IoT) applications, which connect an exponentially increasing number of devices, including smartphones, wireless sensors, and health monitoring devices at the edge. Many IoT applications require processing the data collected by these devices through computationally intensive algorithms with stringent reliability, security, and latency constraints. In many scenarios, these algorithms cannot be run locally on computationally limited IoT devices.
One of the existing solutions to handle computationally-intensive tasks is computation offloading, which advocates offloading tasks to remote servers or to cloud computing platforms. Yet, offloading tasks to remote servers or to the cloud could be a luxury that cannot be afforded by most edge applications, where connectivity to remote servers can be expensive, energy consuming, lost or compromised. In addition, offloading tasks to remote servers may not be efficient in terms of delay, especially when data is generated and collected at the edge. This makes edge computing a promising solution to handle computationally-intensive tasks, where the task is divided into sub-tasks and each sub-task is offloaded to an edge device for computation.
However, offloading tasks to other devices leaves edge computing applications at the complete mercy of an attacker. One such attack, and the focus of this work, is the Byzantine attack, where one or more devices (workers) can corrupt the offloaded tasks. Furthermore, exploiting the potential of edge computing is challenging mainly due to the heterogeneous and time-varying nature of the devices at the edge. Thus, our goal is to develop a secure, dynamic, and heterogeneity-aware edge computing mechanism that provides both security and computation efficiency guarantees.
Our key tool is the graceful use of coded cooperative computation and homomorphic hash functions. Coded computation advocates mixing data in computationally-intensive tasks by employing erasure codes and offloading these coded tasks to other devices for computation [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. The following canonical example demonstrates the effectiveness of coded computation.
Consider the setup where a master device wishes to offload a task to 3 workers. The master has a large data matrix and wants to compute a matrix-vector product.
The master device divides the matrix row-wise equally into two smaller matrices, which are then encoded using a Maximum Distance Separable (MDS) code to produce three coded matrices, and sends each to a different worker. (An (n, k) MDS code divides the master's data into k chunks and encodes them into n chunks, n > k, such that any k chunks out of the n are sufficient to recover the original data.) The master device also sends the vector to the workers and asks them to compute the corresponding matrix-vector products. When the master receives the computed values from at least two out of the three workers, it can decode its desired task, which is the computation of the full matrix-vector product. The power of coded computation is that the redundant coded task acts as a "joker" that can replace any one of the other two tasks if a worker ends up straggling, i.e., being slow or unresponsive.
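The example above can be sketched numerically. The following toy script (names such as `w1`, `w2`, `w3` are illustrative, not from the paper) splits a matrix row-wise, offloads the two halves plus their sum as a redundant coded task, and recovers the full product from any two of the three results:

```python
import numpy as np

# Toy sketch of the 3-worker MDS example: split A row-wise into A1 and A2,
# offload A1*x, A2*x, and (A1+A2)*x. Any two of the three results suffice.
rng = np.random.default_rng(0)
A = rng.integers(0, 10, size=(4, 3))
x = rng.integers(0, 10, size=3)

A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}     # coded tasks sent to workers
results = {w: M @ x for w, M in tasks.items()}  # each worker computes M*x

# Suppose worker w2 straggles: recover A2*x from the other two results.
A2x = results["w3"] - results["w1"]
Ax = np.concatenate([results["w1"], A2x])
assert np.array_equal(Ax, A @ x)  # full product recovered without w2
```

The "joker" role of the third task is visible in the recovery step: the sum result minus either half yields the missing half.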
This example demonstrates the benefit of coding for edge computing. However, the very nature of task offloading to workers makes the computation framework vulnerable to attacks. We focus on Byzantine attacks in this work. For example, if two of the workers in Example 1 corrupt their computed values, the master can only obtain a wrong result. Thus, it is crucial to develop a secure coded computation mechanism that protects edge devices against this type of attack.
In this paper, we develop a secure coded cooperative computation (SC) mechanism which uses homomorphic hash functions. Example 2 illustrates the main idea of homomorphic hash functions in coded computation.
Consider the same setup in Example 1, and assume that worker returns the computed value to the master device. If worker is an honest worker, holds. The master device checks the integrity of by calculating its hash function , where is a homomorphic hash function. (The details of the homomorphic hash function which we use will be provided in Section II.) The master also calculates using its local information, i.e., using and . If the master finds that , it concludes that the computed value is corrupted. Otherwise, is declared as verified.
The above example shows how homomorphic hash functions can be used for coded computation. However, existing hash-based solutions [14, 15] introduce high computational overhead, which is not suitable for edge applications, where computation power and energy are typically limited. In this paper, we use homomorphic hash functions and coded computation gracefully and efficiently. In particular, we develop and analyze light-weight and heavy-weight integrity check tools for coded computation using homomorphic hash functions. We design SC by exploiting both light- and heavy-weight tools. The following are the key contributions of this work:
We use a homomorphic hash function as in  and show that the hash of a linear combination of computed values can be constructed from the hashes of the original tasks.
We develop light- and heavy-weight integrity check tools for coded computation, and analyze these tools in terms of computational complexity and attack detection probability. We also analyze the trade-off between using light- and heavy-weight tools for different numbers of tasks.
We design SC by exploiting light- and heavy-weight tools. If an attack is detected, SC can pinpoint which tasks are corrupted.
We analyze the task completion delay of SC by providing an upper bound as well as a lower bound on the gap between the task completion delay of SC and a baseline.
We evaluate SC for different numbers and strengths of malicious (Byzantine) workers. The simulation results show that our algorithm significantly improves task completion delay as compared to the baselines.
The structure of the rest of this paper is as follows. Section II presents our system model. Section III presents light- and heavy-weight integrity check tools. Section IV presents our secure coded cooperative computation (SC) algorithm. Section V provides the theoretical analysis of the task completion delay of SC. Section VI provides simulation results of SC. Section VII presents related work. Section VIII concludes the paper.
II. System Model
Setup. We consider a master/worker setup at the edge of the network, where the master device offloads its computationally intensive tasks to workers via device-to-device (D2D) links such as Wi-Fi Direct and/or Bluetooth. The master device divides a task into smaller sub-tasks and offloads them to parallel processing workers.
Task Model. Our focus is on the computation of linear functions; i.e., the master device would like to compute the multiplication of a matrix with a vector over a finite field. The motivation for focusing on linear functions stems from matrix multiplication applications, where computing linear functions is a building block of several iterative algorithms such as gradient descent.
Coding. We divide the matrix into rows. The master device applies Fountain coding [16, 17, 18] across the rows to create coded information packets, where each information packet is a row vector formed with the coding coefficients of the Fountain code, and a small overhead is required by Fountain coding. (The overhead required by Fountain coding is typically as low as 5%.) Rateless coding enabled by Fountain codes is compatible with our goal of dealing with the heterogeneous and time-varying nature of resources. In other words, coded packets are generated on the fly and transmitted to workers depending on the amount of their resources (as described in Section IV-A), and Fountain codes are flexible enough to achieve this goal.
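The rateless idea above can be illustrated with a minimal sketch. Real Fountain codes (e.g., LT/Raptor codes) draw sparse coefficient vectors from a degree distribution and decode by belief propagation; the toy below instead uses dense random 0/1 coefficients and plain linear-algebra decoding, purely to show the "generate packets on the fly until enough arrive" behavior:

```python
import numpy as np

# Hedged sketch of rateless coding over the rows of A: keep generating random
# coded packets until enough linearly independent ones have been received,
# then recover the original rows. Not the paper's exact Fountain construction.
rng = np.random.default_rng(1)
R, C = 6, 4                              # number of rows to encode, columns
A = rng.integers(0, 10, size=(R, C)).astype(float)

def coded_packet(rng, A):
    """One coded packet: a random 0/1 combination of A's rows."""
    alpha = rng.integers(0, 2, size=A.shape[0]).astype(float)
    return alpha, alpha @ A

coeffs, packets = [], []
while len(coeffs) < R or np.linalg.matrix_rank(np.array(coeffs)) < R:
    a, pkt = coded_packet(rng, A)        # generated "on the fly"
    coeffs.append(a)
    packets.append(pkt)

# Any R linearly independent packets suffice to recover the original rows.
A_decoded, *_ = np.linalg.lstsq(np.array(coeffs), np.array(packets), rcond=None)
assert np.allclose(A_decoded, A)
```

The small surplus of packets beyond R collected by the loop plays the role of the Fountain coding overhead.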
Worker & Attack Model. The workers incur random delays while executing the task assigned to them by the master device. The workers have different computation and communication specifications resulting in a heterogeneous environment which includes workers that are significantly slower than others, known as stragglers. Moreover, the workers cannot be trusted by the master. In particular, we consider Byzantine attacks, where one or more workers can corrupt the tasks that are assigned to them.
Homomorphic Hash Function. We consider the following hash function that maps a large number to an output with much smaller size
where is a prime number selected randomly from the field , is a prime number that satisfies (i.e., is divisible by ) and is a number in which is calculated as for a random selection of [14, 15]. The defined hash function is a collision-resistant hash function with the property that when increases, is compressed less; i.e., becomes a better approximation of for larger . However, the computational cost of calculating increases for larger . Thus, there is a trade-off between computational complexity and better approximation of in calculating . Our goal is to exploit this trade-off in the context of coded computation as described in the next sections. Another property of the defined hash function is homomorphism, i.e., , which we will exploit in matrix-vector multiplication (in Section III).
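A minimal numeric illustration of this hash and its two key properties follows. The parameters below are toy values chosen for readability, not for security; the construction (an order-q element g with q dividing p - 1, and H(x) = g^x mod p) follows the definition above:

```python
# Toy parameters: q divides p - 1 = 22, and g = b^((p-1)/q) mod p has order q.
p, q = 23, 11
b = 5
g = pow(b, (p - 1) // q, p)   # order-q element of the multiplicative group

def H(x):
    # The hash compresses x modulo q; larger q means less compression.
    return pow(g, x % q, p)

x, y = 1234, 5678
# Homomorphism: the hash of a sum is the product of the hashes (mod p).
assert H(x + y) == (H(x) * H(y)) % p
# Collisions exist by design: inputs differing by q hash identically.
assert H(x) == H(x + q)
```

The first assertion is the homomorphic property exploited for matrix-vector multiplication in Section III; the second shows why a larger q gives a better approximation of the input at a higher computational cost.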
Delay Model. Each packet transmitted from the master to a worker experiences the following delays: (i) transmission delay for sending the packet from the master to the worker, (ii) computation delay for computing the multiplication of the packet by the vector , and (iii) transmission delay for sending the computed packet from the worker back to the master. We denote by the computation time of the packet at worker .
III. Light- and Heavy-Weight Integrity Check Tools for Coded Computation
In this section we present how homomorphic hash functions considered in [14, 15] and defined in (1) are used gracefully with coded computation. We first show that (1) can be applied to coded computation. Then, we develop light- and heavy-weight integrity check tools. The tools we develop in this section will be building blocks of our secure coded cooperative computation mechanism (SC).
III-A Homomorphic Hash Function for Coded Computation
Let us consider that coded information packets are offloaded to worker . The packet offloaded to is , which can be represented as , where is the element of vector . Worker calculates and sends it back to the master device.
Assume that the master receives from , where if packet is not corrupted. The master device checks the integrity of packets calculated at according to the following rule. First, it calculates
using the hash function defined in (1), where ’s are coefficients (we will discuss how these coefficients are selected later in this section). Next, it calculates
where is the element of vector , and and are the parameters of the hash function defined in (1). in (3) is calculated by the master device using its local data and . is used to check as described in the next theorem.
If does not corrupt packets, i.e., , , and is a nonzero integer, then holds.
Proof: The proof is provided in Appendix A.
We note that Theorem 1 provides a necessary, but not sufficient, condition to determine whether a worker is malicious. The sufficiency depends on how the coefficients are selected, as explained next.
III-B Light-Weight Integrity Check (LW Function)
The light-weight integrity check (LW function) uses Theorem 1 to determine whether workers corrupt packets. In particular, the LW function calculates the values in (2) and (3) by selecting the coefficients randomly and uniformly. The LW function concludes that the packets processed by a worker are not corrupted if the two values match. However, as discussed earlier, this condition is not always sufficient, so the LW function detects attacks only with some probability, which is provided next.
III-B1 Probability of Attack Detection
We first consider a pairwise Byzantine attack, where malicious worker corrupts two packets out of packets by adding and subtracting terms. For example, and , for any arbitrary , satisfying . In this attack pattern, if , and considering that the coefficients are selected from in LW function, the attack is detected with 100% probability. On the other hand, if the attack is symmetric, i.e., , the probability of detecting the attack is 50%. As symmetrical attacks are the most difficult ones to detect, we focus on this scenario in the next lemma.
Consider an attack where the malicious worker selects an even number of packets randomly out of the packets and corrupts them by adding to half of them and subtracting from the other half. The probability of attack detection by the LW function is
Proof: The proof is provided in Appendix B.
As seen from Lemma 2, the probability of attack detection increases with the number of corrupted packets. This intuitively holds for any attack pattern, as the coefficients are selected randomly for each packet, and estimating these values becomes more difficult for an attacker as the set of corrupted packets grows. Another attack pattern and its detection probability are provided in the following.
Consider an attack pattern where the malicious worker corrupts three packets out of the packets by adding to two randomly selected computed packets and subtracting from another randomly selected computed packet. This attack pattern can be detected unless the coefficients for the three corrupted packets are all ’s or all ’s. Therefore, the probability of attack detection for this attack pattern is . For a general attack pattern, the following lemma provides a lower bound on the probability of attack detection.
The probability of attack detection when LW function is used and for any attack pattern is lower bounded by .
Proof: The proof is provided in Appendix C.
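These detection probabilities are easy to sanity-check empirically. The Monte Carlo below assumes (our assumption, for illustration) LW coefficients drawn uniformly from {-1, +1}; under that assumption a symmetric pairwise attack slips past the check exactly when both corrupted packets draw the same coefficient, giving 50% detection, and detection improves as more packets are corrupted:

```python
import random

# Monte Carlo sanity check of LW detection under a symmetric attack that adds
# c to half of the corrupted packets and subtracts c from the other half.
def lw_detects(m_corrupted, c=7, q=1009, trials=20000, seed=3):
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        xs = [rng.choice([-1, 1]) for _ in range(m_corrupted)]
        # Net contribution of the corruption to the checked combination.
        delta = c * (sum(xs[: m_corrupted // 2]) - sum(xs[m_corrupted // 2:]))
        if delta % q != 0:          # nonzero contribution shows up in the hash
            detected += 1
    return detected / trials

p2 = lw_detects(2)   # symmetric pairwise attack: detection near 1/2
p6 = lw_detects(6)   # more corrupted packets, higher detection probability
assert abs(p2 - 0.5) < 0.02
assert p6 > p2
```

This matches the trend stated in Lemma 2: the more packets the attacker corrupts, the harder it is for the corruption to cancel out under random coefficients.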
III-B2 Computational Complexity
The computational complexity of LW function for checking packets calculated by is , where is the size of each information packet, is the complexity of multiplication in , and and are the parameters of the hash function defined in (1).
Proof: The complexity of the LW function consists of two parts: the calculation of in (2) and of in (3). We first analyze the computational complexity of calculating the former. The sum involves only addition and subtraction, as , and its cost can be ignored. The complexity of the modular exponentiation while calculating the hash function is , using the method of exponentiation by squaring.
Similarly, we can calculate the computational complexity of calculating . The complexity for computing corresponds to the complexity of addition and subtraction, which is negligible. The complexity of computing has two components: (i) Calculating the modular exponentiations : The complexity for this calculation is for one modular exponentiation and for all modular exponentiations. (ii) Multiplying all the calculated modular exponentiations, i.e., in : The complexity for this calculation is . Thus, the total complexity of LW function becomes . This concludes the proof.
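The exponentiation-by-squaring step used in the complexity argument above is the standard square-and-multiply method (what Python's built-in three-argument `pow` uses internally); shown explicitly for reference:

```python
# Exponentiation by squaring: computes g^e mod m in O(log e) multiplications,
# which is what keeps the hash evaluations in the LW function cheap.
def mod_pow(g, e, m):
    result = 1
    g %= m
    while e > 0:
        if e & 1:                 # multiply in the current square if bit set
            result = result * g % m
        g = g * g % m             # square
        e >>= 1
    return result

assert mod_pow(7, 123456, 1009) == pow(7, 123456, 1009)
```

Each loop iteration halves the exponent, so the number of modular multiplications is logarithmic in the exponent.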
Note that the computational complexity of calculating the original matrix multiplication is , where is the complexity of multiplication in . As seen, the complexity of the LW function is significantly lower than that of the original task. This means the LW function provides a security check at low complexity. However, the probability of attack detection using the LW function could be as low as 50%, which may not be acceptable in some applications. Thus, we provide a heavy-weight integrity check tool (HW function) in the next section. Our ultimate goal is to use the LW and HW functions together to achieve a higher attack detection probability while still keeping the computational complexity low.
III-C Heavy-Weight Integrity Check (HW Function)
The heavy-weight integrity check (HW function) uses Theorem 1 similar to the LW function, but chooses the coefficients from a larger field. This selection comes with a higher attack detection probability at the cost of higher computational complexity, as described next.
III-C1 Probability of Attack Detection
The probability that HW function detects a Byzantine attack with any attack pattern is expressed as
where is the parameter of the hash function in (1).
Proof: The proof is provided in Appendix D.
As seen from Lemma 5, the attack detection probability increases with increasing . Next, we present the computational complexity of HW function.
III-C2 Computational Complexity
The computational complexity of HW function for checking packets calculated by is .
Proof: The proof follows the same logic as the proof of Theorem 4, i.e., the complexity of the HW function depends on calculating in (2) and in (3). The difference compared to the proof of Theorem 4 is that the coefficients are selected from a larger field, so the reduction of multiplication to addition in of (2) and of (3) is no longer possible. In particular, the complexity of calculating these terms is . Following similar steps as in the proof of Theorem 4, we conclude that the computational complexity of the HW function becomes . Since the second term dominates the computational complexity for large (hence ), we express the computational complexity as . This concludes the proof.
We can approximate as on average, assuming that the coded information packets are distributed homogeneously across workers, where is the number of information packets, is the Fountain coding overhead, and is the number of workers. Thus, the computational complexity of the HW function across all workers becomes . As discussed earlier, the computational complexity of the original matrix multiplication is . We also note that . This means that even though the HW function is computationally complex compared to the LW function, it is still computationally efficient with respect to the original matrix multiplication (considering that is small and approaches 0 with an increasing number of packets).
III-D Light- versus Heavy-Weight Integrity Check
In this section, we investigate applying the LW function over multiple rounds to achieve a higher attack detection probability at low computational complexity. The LW function is used to check the packets computed by a worker by selecting the coefficients uniformly at random; call this the first round. In the second round, the LW function is applied again, but with freshly drawn coefficients. Hence, an attack that goes undetected in the first round may still be detected in a later round, and using the LW function over multiple rounds increases the attack detection probability. The next theorem characterizes the performance of the multiple-round LW function as compared to the HW function.
The attack detection probability of the multiple-round LW function is equal to the attack detection probability of the HW function in (5) when the LW function is used for rounds. Furthermore, the computational complexity of the -round LW function is lower than that of the HW function if the following condition is satisfied.
Proof: The proof is provided in Appendix E.
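The amplification effect of repeated rounds can be checked with a small simulation. Assuming (as in our earlier sketch) coefficients from {-1, +1} and the symmetric pairwise attack with single-round detection probability 1/2, n independent rounds detect with probability 1 - (1/2)^n:

```python
import random

# Multi-round LW sketch: each round redraws the coefficients, so an attack
# surviving one round can be caught in the next. Coefficient set {-1, +1}
# and the pairwise symmetric attack are illustrative assumptions.
def multi_round_detect(rounds, trials=20000, seed=4):
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        for _ in range(rounds):
            x0, x1 = rng.choice([-1, 1]), rng.choice([-1, 1])
            if x0 - x1 != 0:      # corruption c*(x0 - x1) shows up in the hash
                caught += 1
                break
    return caught / trials

assert abs(multi_round_detect(1) - 0.5) < 0.02    # one round: ~1/2
assert abs(multi_round_detect(3) - 0.875) < 0.02  # three rounds: ~1 - (1/2)^3
```

This is the trade-off Theorem 7 formalizes: rounds buy detection probability, and the question is when their combined cost undercuts a single HW check.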
IV. SC: Secure Coded Cooperative Computation
In this section, we present our secure coded cooperative computation (SC) mechanism. SC consists of packet offloading, attack detection, and attack recovery modules.
IV-A Dynamic Packet Offloading
The dynamic packet offloading module of SC is based on the C3P approach. In particular, the master offloads coded packets gradually to workers and receives two ACKs for each transmitted packet: one confirming the receipt of the packet by the worker, and a second one (piggybacked on the computed packet) showing that the packet has been computed by the worker. Based on the frequency of the received ACKs, the master decides to transmit more or fewer coded packets to that worker. In particular, each packet is transmitted to each worker before or right after the previously computed packet is received at the master. For this purpose, the average per-packet computing time is calculated for each worker dynamically based on the previously received ACKs. Each packet is transmitted after waiting this average time from the moment the previous packet is sent, or right after the previous computed packet is received at the master, thus reducing idle time at the workers. This policy is shown to approach the optimal task completion delay, to maximize the workers' efficiency, and to improve the task completion delay significantly compared with the literature.
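A minimal sketch of the estimation step behind this pacing follows. The averaging rule, worker names, and delay distributions are our illustrative assumptions, not the paper's exact policy; the point is only that per-worker running averages of ACK timings let the master pace a fast worker more aggressively than a slow one:

```python
import random

# Toy pacing estimator: update a running average t_hat[w] of each worker's
# per-packet compute time from observed ACK delays. The master would then
# send worker w's next packet after waiting roughly t_hat[w].
rng = random.Random(5)
true_mean = {"w1": 1.0, "w2": 3.0}       # hidden mean compute times
t_hat = {w: 0.0 for w in true_mean}
sent = {w: 0 for w in true_mean}

for _ in range(200):
    for w, mu in true_mean.items():
        ack_delay = rng.expovariate(1.0 / mu)         # observed compute time
        sent[w] += 1
        t_hat[w] += (ack_delay - t_hat[w]) / sent[w]  # running average

# The estimates track the true speeds, so the fast worker w1 gets packets
# paced roughly 3x as often as the slow worker w2.
assert t_hat["w1"] < t_hat["w2"]
```

In the full algorithm this estimate also adapts over time, which is what makes the offloading heterogeneity- and time-variation-aware.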
IV-B Attack Detection
Assume that while the dynamic packet offloading process continues, the set of received packets from worker at the master device during time interval is (). The attack detection module of SC is applied on periodically and consists of two phases.
The first phase applies the LW function on the received packets of each worker. Assume that an attack is detected in the packets coming from some worker. Then, all the packets received from that worker are discarded and the malicious worker is removed from the set of workers. As discussed earlier, the attack detection probability of the LW function increases with the number of corrupted packets. Thus, if an attack is detected in this phase, we can assume that most of the packets are corrupted, so we discard all the received packets.
The goal of the second phase is to detect any attacks that were not detected in the first phase. Both the HW function and the multiple-round LW function are used in this phase. In particular, if the inequality in Theorem 7 is satisfied, the LW function is used for multiple rounds; otherwise, the HW function is used. If no attack is detected, all the packets in the set are labeled as verified. Otherwise, i.e., if an attack is detected, the attack recovery module, described later in this section, starts.
IV-C Attack Recovery
If an attack is detected in the second phase of the attack detection module of SC, we consider that only a small number of packets are corrupted; otherwise, the first phase of the attack detection module would likely have detected the attack and discarded all the packets. Thus, the goal of the attack recovery module is to pinpoint the small number of corrupted packets and recover the non-corrupted ones, i.e., to avoid discarding all the packets.
Let us assume that an attack is detected among the packets received from a worker. In order to pinpoint the packets that are corrupted, we use a binary search algorithm. In particular, the set is divided into two disjoint subsets, and the second phase of the attack detection module is run over each of them. If no attack is detected in a subset, all the packets in that subset are verified. Otherwise, the binary search (set splitting) continues over the subsets where an attack is detected. When a split set contains a single packet and an attack is detected, that packet is declared corrupted and discarded. As seen, the attack recovery module can still verify some of the packets coming from a malicious worker. This is important for efficiently utilizing the available resources while still providing security guarantees.
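The binary search above can be sketched with a stand-in batch verifier (in SC, this role is played by the probabilistic phase-two check; here `verify` is a hypothetical oracle, purely for illustration):

```python
# Recovery sketch: recursively split a suspect batch to pinpoint corrupted
# packets while keeping the verified ones. `verify(batch)` returns True if
# the batch passes the integrity check.
def find_corrupted(packets, verify):
    """Return (verified, corrupted) packet lists via binary search."""
    if verify(packets):
        return list(packets), []
    if len(packets) == 1:            # a single failing packet is corrupted
        return [], list(packets)
    mid = len(packets) // 2
    ok_l, bad_l = find_corrupted(packets[:mid], verify)
    ok_r, bad_r = find_corrupted(packets[mid:], verify)
    return ok_l + ok_r, bad_l + bad_r

corrupted = {3, 11}
verify = lambda batch: not (set(batch) & corrupted)   # toy oracle
ok, bad = find_corrupted(list(range(16)), verify)
assert set(bad) == corrupted and len(ok) == 14
```

With few corrupted packets, the number of verifier calls grows roughly logarithmically in the batch size, which is why recovery is cheap relative to discarding and recomputing everything.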
IV-D SC in a Nutshell
The SC algorithm is provided in Algorithm 1. As detailed in this algorithm, the attack detection module, followed if required by the attack recovery module, is applied until the number of verified packets from all workers reaches the number required for successful decoding of the Fountain code. In particular, the attack detection module is first applied on each set of packets received from each worker during the initial time period, defined as the time by which the required number of packets has been received collectively from all workers. If all packets in the set are labeled as verified, then all workers have been honest and sent back correct results to the master device. Otherwise, i.e., if an attack is detected, the number of correct packets delivered by honest workers and labeled as verified is less than required. In this case, the master device waits until it receives enough additional packets collectively from all workers to cover the shortfall. Then, for each worker, the attack detection module is applied on the set of newly received packets. This process is repeated until the required number of packets is labeled as verified. Finally, Fountain decoding is applied on the verified packets and the result of the multiplication task is obtained by the master device.
V. Performance Analysis of SC
In this section, we first characterize the task completion delay of SC and then we provide a lower bound on the gap between the task completion delay of SC and the task completion delay of a baseline. The task completion delay is the time spent to receive computed and verified packets at the master device collectively from all workers.
The average task completion delay of SC for a set of workers , out of which is the set of malicious workers, is upper bounded by:
where is equal to
is the probability that a packet is corrupted by a malicious worker, and is given by
Proof: The proof is provided in Appendix F.
In the following, we characterize the task completion delay of SC as compared with a baseline, where the master detects the malicious workers and takes advantage of only the honest workers to accomplish its task. One method to detect the malicious workers is using the HW function with a high value of the hash parameter, so that the probability of attack detection given in (5) is close to 1. We call this baseline HW-only and denote its task completion delay accordingly. Note that in HW-only, if a worker is detected as malicious, all the packets coming from that worker are discarded, while SC uses both LW and HW functions gracefully to discard only the corrupted packets coming from malicious workers.
The gap between the task completion delay of HW-only and the task completion delay of SC is lower bounded by:
Proof: The proof is provided in Appendix G.
From Lemma 9, we can conclude that the faster the honest workers are, the closer the performance of HW-only is to that of SC. This is expected: the performance of SC is dominated by the fastest workers, while the performance of HW-only is dominated by the speed of the honest workers, so SC performs close to HW-only when the fastest workers are honest. In addition, a smaller packet corruption probability results in a larger gap between HW-only and SC. This is also expected, as a smaller corruption probability means fewer corrupted packets delivered by malicious workers; SC, which takes advantage of the non-corrupted packets delivered by malicious workers, therefore improves more over HW-only, which throws those packets away. Finally, the lower bound on the gap is linearly proportional to the size of the input, implying that more improvement is obtained by using SC over HW-only for a larger input matrix.
VI. Performance Evaluation
In this section, we evaluate the performance of our Secure Coded Cooperative Computation (SC) algorithm via simulations. We consider a master/worker setup, where some of the workers are malicious. Each computed packet is corrupted by a malicious worker with some probability. The computing resources are heterogeneous and vary across workers, so the per-packet computing delay differs from worker to worker. We compare SC with the following baselines: (i) HW-only, which uses the HW function to detect corrupted packets, while SC uses both LW and HW functions gracefully; in HW-only, if a worker is detected as malicious, all the packets coming from that worker are discarded. (ii) Lower Bound, which is obtained by using C3P, the best known dynamic but unsecured coded cooperative computation algorithm. Note that C3P is not practical in the presence of an attacker; however, it provides a lower bound on SC, and the gap between Lower Bound and SC shows the cost that we pay to make our system secure against Byzantine attacks. (iii) Upper Bound, provided in (7).
Task Completion Delay vs. Number of Malicious Workers. Fig. 1 compares the task completion delay of SC with the baselines for increasing number of malicious workers.
In this setup, the total number of workers is , the number of rows in matrix is , the number of columns is , the overhead of Fountain codes is , the probability of packet corruption is , and per-packet computing delay is a shifted exponential random variable with the mean selected uniformly between and for each worker.
The task completion delay of SC and HW-only increases with an increasing number of malicious workers: more malicious workers means more corrupted packets in the system, and as these corrupted packets are detected and discarded by SC and HW-only, the task completion delay increases. The increase in the task completion delay of SC is smaller than that of HW-only thanks to (i) the use of both LW and HW functions to reduce computational complexity and thus completion time, and (ii) the attack recovery module of SC. SC performs better than its Upper Bound, as the Upper Bound is based on a worst-case theoretical analysis. Finally, the completion time of Lower Bound does not change with an increasing number of malicious workers, as it uses C3P, which is not designed for an environment with malicious workers and uses all received packets, including the corrupted ones, to obtain the computation result. As the number of malicious workers increases, the gap between SC and the Lower Bound increases, since the cost of providing a secure system grows when the adversary attacks more workers.
Task Completion Delay versus Packet Corruption Probability. Fig. 2 compares the task completion delay of SC with (i) HW-only, (ii) Lower Bound, and (iii) Upper Bound for different values of the probability that a packet delivered by a malicious worker is corrupted. The number of workers, the number of rows in the matrix, the Fountain coding overhead, and the per-packet computing delay are the same as in the previous setup. The number of malicious workers is .
The task completion delay of HW-only does not change with increasing packet corruption probability. The reason is that HW-only does not have an attack recovery feature and discards all the packets coming from a malicious worker regardless. On the other hand, the task completion delay of SC is significantly lower than that of HW-only, especially when the packet corruption probability is low, thanks to using both LW and HW functions and employing the attack recovery module. Again, the completion time of Lower Bound does not change with the corruption probability, while the gap between SC and the Lower Bound increases with it, as the cost of providing a secure system grows with the number of corrupted packets.
Task Completion Delay Gap between SC and HW-only. Fig. 3 shows the gap between the HW-only and SC and compares the simulated gap with the lower bound of the gap provided in (9) for the total number of workers out of which are malicious. The number of rows in is for Figs. 3(a) and (b), the number of columns is , Fountain coding overhead is , the probability of packet corruption is for Figs. 3(a) and (c), and per-packet computing delay is a shifted exponential random variable with the mean selected uniformly between and for each worker for Figs. 3(b) and (c).
Fig. 3(a) shows the gap versus the speed of computation at the honest workers. The per-packet computing delay is a shifted exponential random variable with the mean selected uniformly between and for each malicious worker. For each honest worker, the mean is selected uniformly between and for the first simulated points, between and for the second simulated points, and between and for the third simulated points. As seen, the faster the honest workers are, the closer are the performances of HW-only and our SC. This observation confirms our analysis in Section V.
Fig. 3(b) shows the gap versus the probability of packet corruption by a malicious worker. As seen, a larger corruption probability (which results in more corrupted packets delivered by malicious workers) results in a smaller gap between HW-only and SC. This observation confirms our analysis in Section V.
Fig. 3(c) shows the gap versus the number of rows of the input matrix. As seen, the gap between HW-only and SC increases with the number of rows. This observation confirms our analysis in Section V, which states that SC provides a larger improvement over HW-only for a larger input matrix.
VII. Related Work
Coded computation, which advocates mixing data in computationally intensive tasks by employing erasure codes and offloading these tasks to other devices for computation, has recently received a lot of attention [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. For example, coded cooperative computation is shown to provide higher reliability, smaller delay, and reduced communication cost in the MapReduce framework, where computationally intensive tasks are offloaded to distributed server clusters. The effectiveness of coded computation in terms of task completion delay has been investigated in [11, 7, 1]. The same problem has also been considered under the assumption that workers are heterogeneous in terms of their resources, and a dynamic and adaptive algorithm with reduced task completion time has been introduced for heterogeneous workers. As compared to this line of work, we consider secure coded computation by focusing on Byzantine attacks.
There is existing work at the intersection of coded computation and security, focusing specifically on privacy [2, 22, 23, 24]. As compared to this line of work, we focus on Byzantine attacks and use homomorphic hash functions. Homomorphic hash functions have been widely used for the transmission of network coded data: corrupted network coded packets are detected by applying the homomorphic hash functions that we consider in this work, and the hash function is applied to random linear combinations of network coded packets. SC, although similar to these works, is more efficient in terms of computational complexity, which was not the main concern of [14, 15], as their focus was on transmitting network coded packets, not computation.
VIII. Conclusion
In this paper, we focused on securing edge computing against Byzantine attacks. We considered a master/worker scenario where honest and malicious workers with heterogeneous resources are connected to a master device. We designed a secure coded cooperative computation mechanism (SC) that provides both security and computation efficiency guarantees by gracefully combining homomorphic hash functions and coded cooperative computation. Homomorphic hash functions are used against Byzantine attacks, and coded cooperative computation is used to improve computation efficiency when edge resources are heterogeneous and time-varying. Simulation results show that SC improves task completion delay significantly.
-  Y. Keshtkarjahromi, Y. Xing, and H. Seferoglu, “Dynamic heterogeneity-aware coded cooperative computation at the edge,” in 2018 IEEE 26th International Conference on Network Protocols (ICNP), Sept 2018.
-  R. Bitar, P. Parag, and S. El Rouayheb, “Minimizing latency for secure distributed computing,” in IEEE International Symposium on Information Theory (ISIT), 2017, pp. 2900–2904.
-  S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “A unified coding framework for distributed computing with straggling servers,” in IEEE Globecom Workshops (GC Wkshps), 2016, pp. 1–6.
-  S. Dutta, V. Cadambe, and P. Grover, “Coded convolution for parallel and distributed computing within a deadline,” arXiv preprint arXiv:1705.03875, 2017.
-  Y. Yang, P. Grover, and S. Kar, “Computing linear transformations with unreliable components,” IEEE Transactions on Information Theory, 2017.
-  W. Halbawi, N. Azizan-Ruhi, F. Salehi, and B. Hassibi, “Improving distributed gradient descent using reed-solomon codes,” arXiv preprint arXiv:1706.05436, 2017.
-  Q. Yu, M. Maddah-Ali, and S. Avestimehr, “Polynomial codes: an optimal design for high-dimensional coded matrix multiplication,” in Advances in Neural Information Processing Systems, 2017.
-  S. Dutta, V. Cadambe, and P. Grover, “Short-dot: Computing large linear transforms distributedly using coded short dot products,” in NIPS, 2016, pp. 2092–2100.
-  R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, “Gradient coding: Avoiding stragglers in distributed learning,” in International Conference on Machine Learning, 2017, pp. 3368–3376.
-  S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “Fundamental tradeoff between computation and communication in distributed computing,” in IEEE International Symposium on Information Theory (ISIT), 2016.
-  K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding up distributed machine learning using codes,” IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1514–1529, 2018.
-  C. Karakus, Y. Sun, S. Diggavi, and W. Yin, “Straggler mitigation in distributed optimization through data encoding,” in Advances in Neural Information Processing Systems, 2017, pp. 5434–5442.
-  M. F. Aktas, P. Peng, and E. Soljanin, “Effective straggler mitigation: Which clones should attack and when?” ACM SIGMETRICS Performance Evaluation Review, vol. 45, no. 2, pp. 12–14, 2017.
-  M. N. Krohn, M. J. Freedman, and D. Mazieres, “On-the-fly verification of rateless erasure codes for efficient content distribution,” in IEEE Symposium on Security and Privacy, 2004, pp. 226–240.
-  C. Gkantsidis and P. Rodriguez, “Cooperative security for network coding file distribution,” in IEEE INFOCOM, 2006.
-  M. Luby, “LT codes,” in The 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2002, pp. 271–280.
-  A. Shokrollahi, “Raptor codes,” IEEE/ACM Transactions on Networking (TON), vol. 14, no. SI, pp. 2551–2567, 2006.
-  D. J. MacKay, “Fountain codes,” IEE Proceedings-Communications, vol. 152, no. 6, pp. 1062–1068, 2005.
-  J. Dean and S. Ghemawat, “MapReduce: simplified data processing on large clusters,” Communications of the ACM, vol. 51, no. 1, pp. 107–113, 2008.
-  S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “Coded mapreduce,” in 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2015, pp. 964–971.
-  A. Reisizadeh, S. Prakash, R. Pedarsani, and A. S. Avestimehr, “Coded computation over heterogeneous clusters,” IEEE Transactions on Information Theory, 2019.
-  H. Yang and J. Lee, “Secure distributed computing with straggling servers using polynomial codes,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 1, pp. 141–150, Jan 2019.
-  Q. Yu, N. Raviv, J. So, and A. S. Avestimehr, “Lagrange coded computing: Optimal design for resiliency, security and privacy,” arXiv preprint, arXiv:1806.00939, 2018.
-  R. Bitar, Y. Xing, Y. Keshtkarjahromi, V. Dasari, S. El Rouayheb, and H. Seferoglu, “Prac: Private and rateless adaptive coded computation at the edge,” in SPIE Defense + Commercial Sensing, 2019.
Appendix A: Proof of Theorem 1
where the first quantity is the quotient of the division. On the other hand, as mentioned before, the exponent can be rewritten as above; thus, by the stated condition and Fermat’s little theorem, we have:
Appendix B: Proof of Lemma 2
This kind of attack is not detected by the master device if the coefficients for the corrupted packets are selected such that the added corruption terms are canceled out by the subtracted ones. For example, assume that some of the packets received from a worker are corrupted by adding a value and an equal number by subtracting the same value. For the attack not to be detected, only a limited set of coefficient combinations is feasible, while the coefficients of the uncorrupted packets can take any value; the probability of attack detection for this example follows by counting these combinations among all possible coefficient selections. In general, the number of cases in which the attack cannot be detected is a combinatorial quantity: half of the coefficients, corresponding to the packets corrupted by addition, can take any value, but the other half, corresponding to the packets corrupted by subtraction, must be chosen such that, when multiplied by their corresponding coefficients, the added terms are canceled out by the subtracted ones. Note that any permutation of the coefficients of the packets corrupted by addition (or by subtraction) does not affect attack detection. Dividing by the total number of coefficient selections for the corrupted packets gives the probability that the attack is not detected. This concludes the proof.
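To make the cancellation argument concrete, the following sketch simulates a symmetric pairwise attack against a random linear check. The coefficient range `m`, corruption magnitude `delta`, packet count `n`, and modulus `q` are illustrative assumptions, not the paper's parameters; for a single corrupted pair, the counting argument predicts an undetected fraction of 1/m.

```python
import random

def undetected_fraction(m, trials=200_000, delta=7, n=10, q=2_147_483_647):
    """Simulate a symmetric pairwise Byzantine attack against a random
    linear check: packet 0 is corrupted by +delta, packet 1 by -delta.
    The check misses the attack iff the random coefficients cancel the
    corruption, i.e. c0*delta - c1*delta == 0 (mod q)."""
    undetected = 0
    for _ in range(trials):
        # Master draws one coefficient per packet, uniformly from 1..m.
        c = [random.randrange(1, m + 1) for _ in range(n)]
        residual = (c[0] * delta - c[1] * delta) % q
        if residual == 0:  # corruption cancels -> attack undetected
            undetected += 1
    return undetected / trials
```

Since the residual vanishes exactly when the two coefficients coincide, the empirical fraction concentrates near 1/m, matching the counting argument for one corrupted pair.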
Appendix C: Proof of Proposition 3
If the malicious worker corrupts only one packet, this attack is always detected by applying the LW function. Among the remaining attack patterns, i.e., all patterns in which the malicious worker corrupts more than one packet, the most difficult one to detect is a symmetric pairwise Byzantine attack, where the malicious worker corrupts exactly two packets. The reason is that the attack is detected by applying the LW function unless the coefficients corresponding to the corrupted packets have a systematic structure, while the coefficients corresponding to the remaining packets can take any value. Therefore, among all attacks with two or more corrupted packets, evading detection imposes the fewest constraints on the coefficients when the number of corrupted packets is two, which results in the lowest probability of attack detection. This fact is also confirmed by Lemma 2, as the detection probability presented in (4) is an increasing function of the number of corrupted packets. On the other hand, among all attack patterns that change exactly two packets, the symmetric attack, where one packet is corrupted by adding a value to the result and the other by subtracting the same value, is the most difficult one to detect (in fact, an asymmetric pairwise attack is detected with higher probability). Therefore, the most difficult attack pattern for the LW function is the symmetric pairwise Byzantine attack, whose detection probability gives the stated bound. This concludes the proof.
Appendix D: Proof of Lemma 5
In order for an attack not to be detected by applying the HW function, the attacker should change the values of the corrupted packets such that the quantity in (2) is equal to that in (3). In other words, if the following condition is satisfied, then the attack will not be detected:
where the set of corrupted packets is a subset of all received packets. For this condition to be satisfied, one of the coefficients must be selected depending on the values of the other coefficients, i.e., it must be selected such that the following condition is satisfied:
Since the modulus is a prime number, it follows from modular arithmetic that this equation has a unique solution. Considering that the coefficient is selected uniformly at random by the master device, the probability that the selected coefficient satisfies the above equation, and hence the probability that the attack is not detected by the HW function, is the reciprocal of the size of the coefficient space. This concludes the proof.
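The uniqueness step can be checked directly: over a prime modulus, a linear congruence with a nonzero multiplier has exactly one solution, because the multiplier is invertible. The following brute-force sketch uses illustrative small primes, not the paper's parameters.

```python
def solutions(p, delta, t):
    """All c in {0, ..., p-1} with c * delta == t (mod p).
    For prime p and delta not congruent to 0, delta is invertible
    modulo p, so exactly one such c exists; a uniformly random c
    therefore hits it with probability 1/p."""
    return [c for c in range(p) if (c * delta) % p == t % p]
```

Because the master draws the coefficient uniformly at random, the attacker's condition is met with probability exactly one over the size of the coefficient space, which is the undetected probability used in the proof.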
Appendix E: Proof of Theorem 7
According to Lemma 3, the probability of attack detection when the LW function is used for one round is bounded from below. When the LW function is used for two rounds, i.e., when no attack is detected with the first set of uniformly random coefficients and a different set of coefficients is then selected uniformly at random, the rounds are independent, so the probability that the attack remains undetected is the product of the per-round undetected probabilities. Similarly, the lower bound on the detection probability after multiple rounds improves geometrically with the number of rounds. Therefore, with a sufficiently large number of rounds, the probability of attack detection with the multi-round LW function matches the attack detection probability of the HW function.
According to Theorem 4, the computational complexity of multiple rounds of the LW function is the number of rounds times the complexity of a single round. On the other hand, Theorem 6 gives the computational complexity of the HW function. Therefore, if the number of rounds is below the ratio of these two complexities, the computational complexity of the multi-round LW function is lower than that of the HW function.
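The round-counting argument can be sketched numerically. As a hedged illustration (the per-round undetected probabilities 1/m for LW and 1/q for HW are assumptions standing in for the paper's exact bounds), independent rounds multiply, so the number of LW rounds needed to match the HW guarantee grows only logarithmically in q:

```python
import math

def undetected_after_rounds(m, rounds):
    # Independent rounds with fresh random coefficients: the per-round
    # undetected probabilities (assumed 1/m each) multiply.
    return (1.0 / m) ** rounds

def rounds_to_match(m, q):
    # Smallest number of LW rounds whose undetected probability is at
    # most 1/q, the assumed undetected probability of the HW check:
    # (1/m)^R <= 1/q  <=>  R >= log(q) / log(m).
    return math.ceil(math.log(q) / math.log(m))
```

Whether the multi-round LW function is cheaper than HW then reduces to comparing this round count against the complexity ratio given by Theorems 4 and 6.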
This concludes the proof.
Appendix F: Proof of Theorem 8
In order to characterize the completion time of SC, we calculate the time required for the master device to collectively receive the required number of packets from all workers during each time period defined in Algorithm 1.
According to Algorithm 1, the first time period is defined as the time interval during which the required number of packets is received collectively from all workers. Using the dynamic packet offloading module of SC, the length of this time period follows from (17) of the analysis of that module, as a function of the average per-packet computing time at each worker.
According to Algorithm 1, the second time period is defined as the time interval during which the remaining required packets are received collectively from all workers, where the requirement accounts for the number of packets labeled as verified after applying the attack detection module to the packets received during the first time interval. In the worst-case scenario, the additional packets that must be received at the master device and labeled as verified are delivered only by the honest workers, which results in the maximum time for receiving these additional packets. By taking into account this worst-case scenario and the upper bound on the average number of unverified packets (provided in Lemma 10), and adding the duration of the first time period, the stated upper bound on the completion time follows.
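The accounting above can be illustrated with a toy event-driven simulation (our construction, not the paper's exact model): workers with heterogeneous shifted-exponential per-packet delays stream packets back-to-back, and we measure when the master has collected a target number of packets.

```python
import heapq
import random

def completion_time(means, k, shift=0.5, seed=0):
    """Time at which k packets have been received in total, when each
    worker returns packets back-to-back and each packet takes a
    shifted-exponential delay with the worker's given mean (> shift).
    Illustrative sketch; parameters are assumptions."""
    rng = random.Random(seed)
    heap = []  # (arrival time of worker's next packet, worker index)
    for i, mu in enumerate(means):
        delay = shift + rng.expovariate(1.0 / (mu - shift))
        heapq.heappush(heap, (delay, i))
    received = 0
    while heap:
        t, i = heapq.heappop(heap)
        received += 1
        if received == k:
            return t
        # Worker i immediately starts computing its next packet.
        mu = means[i]
        delay = shift + rng.expovariate(1.0 / (mu - shift))
        heapq.heappush(heap, (t + delay, i))
```

Restricting `means` to the honest workers only mirrors the worst-case second period, in which all additional verified packets are delivered by honest workers.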
The average number of packets, among all packets received during the first time period, that are not labeled as verified by SC is upper bounded by:
where is given by .
Proof: The packets received during the first time period that are not labeled as verified by SC are of two kinds:
(i) Packets received from malicious workers, where the attack is detected by applying the LW function: the average number of these packets is determined by the probability of attack detection by the LW function when applied to the packets received from each worker, which follows from (4).
(ii) Corrupted packets received from malicious workers, where the attack is not detected by applying the LW function but is detected in the attack recovery module: the average number of such packets is upper bounded by the product of the average number of corrupted packets received from each malicious worker and the probability that the attack is not detected by applying the LW function. Note that for larger numbers of corrupted packets, the probability of attack detection by applying the HW function or the multiple-round LW function is closer to 1, and the exact value gets closer to its upper bound.
This concludes the proof.
Appendix G: Proof of Lemma 9
HW-only uses only the honest workers for computing packets, i.e., all workers that are not in the malicious set, and thus its task completion delay can be equivalently written as: