I. Introduction
With the growing size of modern datasets in applications such as machine learning and data science, it is often necessary to partition a massive computation into smaller computations and perform them in a distributed manner to improve overall performance [1]. However, distributing the computations to external entities that are not necessarily trusted, i.e., adversarial servers, makes security a major concern [3, 7, 5]. Thus, it is important to provide security against adversarial workers that deliberately send erroneous data in order to affect the computation for their benefit.
Computing Boolean functions is a key component of many applications of interest. For instance, learning a Boolean function for classification in discrete attribute spaces from examples of its input/output behavior has been widely studied in the past few decades [21]. The examples in such classification problems are represented by binary (0 or 1) attributes, and the inference can be converted into a Boolean function that outputs the category each example belongs to [19]. For hash functions based on bit mixing (e.g., SHA-2), Boolean functions are used to represent the verification functions. Moreover, Boolean functions are a primary building block in the design of cryptographic algorithms [8].
In this paper, we consider the problem of computing a Boolean function in which the computation is carried out distributively across several workers, with a particular focus on security against Byzantine workers. Specifically, using a master–worker distributed computing system with N workers, the goal is to compute a Boolean function f over a large dataset X = (X_1, …, X_K), i.e., to compute f(X_1), …, f(X_K), in which the (encoded) datasets are prestored at the workers such that the computations are secure against adversarial workers in the system.
Any Boolean function can be written in algebraic normal form, i.e., as a multivariate polynomial. Thus, the recently proposed Lagrange Coded Computing (LCC) [28], a universal encoding technique for arbitrary multivariate polynomial computations, can be used to simultaneously provide resiliency, security, and privacy. However, the security threshold (the maximum number of adversarial workers that can be tolerated) provided by LCC is ⌊(N − deg(f)(K − 1) − 1)/2⌋, which can be extremely low if the degree of the polynomial is high; for complex Boolean functions, the degree can be as large as the number of input variables. Thus, we aim at designing efficient coding schemes that achieve the optimal security threshold with low decoding overhead.
I-A. Main Contributions
The main contribution of this paper is that, instead of modeling the Boolean function as a general polynomial, we propose three schemes that model it as the concatenation of low-degree polynomials and threshold functions (see Figure 1). To illustrate the main idea of the proposed schemes, consider an AND function of three input bits, formally defined by f(x_1, x_2, x_3) = x_1 ∧ x_2 ∧ x_3. This function can be modeled as the polynomial x_1 x_2 x_3 (its algebraic normal form), which has degree 3. For this polynomial, LCC achieves the security threshold ⌊(N − 3(K − 1) − 1)/2⌋. Instead of directly computing the degree-3 polynomial, our proposed approach is to model it as a linear threshold function in which f(x) = 1 if and only if x_1 + x_2 + x_3 ≥ 3. Then, a simple linear code (e.g., an MDS code) can be used for computing the linear function x_1 + x_2 + x_3, which provides the optimal security threshold ⌊(N − K)/2⌋.
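As a quick sanity check of this idea, the following sketch compares the two representations of the three-bit AND on all inputs (the helper names are ours, for illustration only):

```python
# Two ways to compute AND(x1, x2, x3): the degree-3 ANF monomial
# x1*x2*x3, and a linear threshold function that fires iff
# x1 + x2 + x3 >= 3. Both agree on all 8 inputs.
from itertools import product

def and_poly(x):
    # degree-3 polynomial (ANF) representation
    return x[0] * x[1] * x[2]

def and_ltf(x):
    # threshold representation: only the linear sum x1 + x2 + x3 needs
    # to be computed distributively; the threshold is applied afterwards
    return 1 if sum(x) >= 3 else 0

assert all(and_poly(x) == and_ltf(x) for x in product([0, 1], repeat=3))
```

Only the linear sum has to be computed on coded data; the (nonlinear) threshold is applied by the master after decoding.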
We propose three different schemes called coded algebraic normal form (ANF), coded disjunctive normal form (DNF), and coded polynomial threshold function (PTF). The idea behind coded ANF (respectively, coded DNF) is to first decompose the Boolean function into its monomials (respectively, clauses) and then construct a linear threshold function for each monomial (clause). An MDS code is then used to encode the datasets. The proposed coded PTF, on the other hand, models the Boolean function as a low-degree polynomial threshold function and uses LCC for the data encoding.
In Table I, we summarize the performance of LCC and the three proposed schemes in terms of the security threshold and the decoding complexity. For any general Boolean function f, the proposed coded ANF and coded DNF achieve the best security threshold ⌊(N − K)/2⌋ (matching the theoretical outer bound), which is independent of the function f. Compared to LCC, coded ANF and coded DNF thus provide a substantial improvement in the security threshold.
In particular, coded ANF has decoding complexity O(s(f) · N log²N log log N), which works well for Boolean functions with low sparsity s(f); coded DNF has decoding complexity O(w(f) · N log²N log log N), which works well for Boolean functions with small weight w(f) (see the definitions of s(f) and w(f) in Section II). For Boolean functions whose s(f) and w(f) are polynomial in m, coded PTF outperforms LCC by achieving a better security threshold and an almost linear decoding complexity that is independent of m (see more details in Section VI).
I-B. Related Prior Work
Coded computing broadly refers to a family of techniques that utilize coding to inject computational redundancy in order to alleviate various issues arising in large-scale distributed computing. In the past few years, coded computing has had tremendous success in various problems, such as straggler mitigation and bandwidth reduction (e.g., [14, 17, 9, 15, 29, 26, 16, 20]). Coded computing has also been extended in various directions, such as heterogeneous networks (e.g., [24]), partial stragglers (e.g., [10]), secure and private computing (e.g., [6, 28, 25, 11, 22]), distributed optimization (e.g., [12]), and dynamic networks (e.g., [27]). So far, research in coded computing has focused on developing frameworks for linear-algebraic operations (e.g., matrix multiplication). However, no work prior to ours considers coded computing for Boolean functions. Compared with LCC, we make substantial progress in improving the security threshold by proposing coded ANF, coded DNF, and coded PTF.
TABLE I: Security threshold and decoding complexity of LCC and the proposed schemes.

Scheme        Security Threshold                              Decoding Complexity
LCC           ⌊(N − deg(f)(K − 1) − 1)/2⌋                     O(N log²N log log N)
coded ANF     ⌊(N − K)/2⌋                                     O(s(f) · N log²N log log N)
coded DNF     ⌊(N − K)/2⌋                                     O(w(f) · N log²N log log N)
coded PTF     ⌊(N − (⌈log₂ w(f)⌉ + 1)(K − 1) − 1)/2⌋          O(N log²N log log N)
Outer Bound   ⌊(N − K)/2⌋                                     —
II. System Model
We consider the problem of evaluating a Boolean function f: F_2^m → F_2 over a dataset X = (X_1, …, X_K), where X_k ∈ F_2^m. Given a distributed computing environment with a master and N workers, our goal is to compute f(X_1), …, f(X_K).
Each Boolean function f can be represented by an algebraic normal form (ANF) as follows:
f(x_1, …, x_m) = ⊕_{α ∈ F_2^m} a_α x_1^{α_1} x_2^{α_2} ⋯ x_m^{α_m},  (1)
where x_j is the j-th bit of the input and a_α ∈ F_2 is the ANF coefficient of the corresponding monomial x_1^{α_1} ⋯ x_m^{α_m}. We denote the degree of the Boolean function f by deg(f) and the sparsity (number of monomials) of f by s(f), i.e., s(f) = |{α ∈ F_2^m : a_α = 1}|.
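To make the ANF notation concrete, the following sketch (our own illustrative representation, not part of the scheme) evaluates a function given the set of monomials with nonzero ANF coefficient:

```python
from functools import reduce

def eval_anf(monomials, x):
    """Evaluate a Boolean function given in ANF.

    `monomials` lists the monomials with nonzero ANF coefficient, each
    as a tuple of variable indices (the empty tuple is the constant
    term). The ANF value is the XOR (sum over F_2) of the monomial
    values on input x.
    """
    values = (int(all(x[j] for j in mono)) for mono in monomials)
    return reduce(lambda a, b: a ^ b, values, 0)

# f(x) = x0*x1 XOR x2 has degree deg(f) = 2 and sparsity s(f) = 2
f = [(0, 1), (2,)]
assert eval_anf(f, [1, 1, 0]) == 1   # 1 XOR 0
assert eval_anf(f, [1, 1, 1]) == 0   # 1 XOR 1
assert eval_anf(f, [0, 1, 1]) == 1   # 0 XOR 1
```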
Furthermore, we denote the support of f by supp(f), which is the set of vectors in F_2^m on which f evaluates to 1, i.e., supp(f) = {a ∈ F_2^m : f(a) = 1}. Let w(f) be the weight of the Boolean function f, defined by w(f) = |supp(f)|. Alternatively, each Boolean function f can be represented by a disjunctive normal form (DNF) as follows:
f(x_1, …, x_m) = ⋁_{a ∈ supp(f)} C_a(x_1, …, x_m),  (2)
where each clause C_a has m literals and corresponds to the input a ∈ supp(f) such that C_a(a) = 1. For example, if a = (1, 0, 1), then the corresponding clause is x_1 ∧ ¬x_2 ∧ x_3.
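The clause construction can be sketched as follows, representing each clause by its support vector a (the helper names are ours):

```python
def clause(a, x):
    # The clause C_a tests whether x equals the support vector a:
    # literal x_j appears positively when a_j = 1, negated when a_j = 0.
    return int(all(xj == aj for xj, aj in zip(x, a)))

def eval_dnf(support, x):
    # DNF value: OR over one clause per vector in supp(f)
    return int(any(clause(a, x) for a in support))

# supp(f) = {(1,0,1)} gives the single clause x1 AND (NOT x2) AND x3
assert eval_dnf([(1, 0, 1)], (1, 0, 1)) == 1
assert eval_dnf([(1, 0, 1)], (1, 1, 1)) == 0
```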
Prior to the computation, each worker has already stored a fraction of the dataset in a possibly coded manner. Specifically, each worker n stores X̃_n = g_n(X_1, …, X_K), where g_n is the encoding function of worker n. Each worker n computes h(X̃_n) and returns the result to the master, in which h is a function chosen by the master. Then, the master aggregates the results from the workers until it receives a decodable set of local computations. We say a set of computations is decodable if f(X_1), …, f(X_K) can be obtained by computing decoding functions over the received results. More concretely, given the subset of workers that return their computing results (denoted by N_r), the master computes f(X_k) = d_k({h(X̃_n)}_{n ∈ N_r}) for each k ∈ [K], where each d_k is a deterministic function. We refer to the d_k's as decoding functions.
In particular, we focus on finding coding schemes that are robust to as many adversarial workers as possible in the system. The following definition formalizes the security provided by a coding scheme.
Definition 1 (Security Threshold).
For an integer A, we say a scheme is A-secure if the master can recover f(X_1), …, f(X_K) correctly in the presence of up to A adversarial workers. The security threshold, denoted by A*, is the maximum value of A such that the scheme is A-secure, i.e.,
A* = max{A : the scheme is A-secure}.  (3)
Based on the above system model, the problem is formulated as follows: what coding scheme achieves the optimal security threshold with low decoding complexity?
III. Overview of Lagrange Coded Computing
In this section, we review Lagrange Coded Computing (LCC) [28] and show how it can be applied to our problem.
Since LCC requires the underlying field size to be at least the number of workers N, we first extend the field F_2 to an extension field whose size is at least N. More specifically, we embed each bit of the data into a binary extension field F_{2^ℓ} such that 2^ℓ ≥ N. The embedding maps each bit x ∈ F_2 to the corresponding element x̄ ∈ F_{2^ℓ}, (4) so that the field arithmetic of F_2 is preserved. Note that over the extension field the output of the Boolean function is 0̄ if the original result is 0, and 1̄ if the original result is 1.
For the data encoding using LCC, we first select K distinct elements β_1, …, β_K from the extension field F_{2^ℓ}, and let u be the respective Lagrange interpolation polynomial:
u(z) = Σ_{k ∈ [K]} X̄_k Π_{l ∈ [K]\{k}} (z − β_l)/(β_k − β_l),  (5)
where u is a polynomial of degree K − 1 such that u(β_k) = X̄_k. Then we select N distinct elements α_1, …, α_N from the extension field F_{2^ℓ}, and encode X̄_1, …, X̄_K to X̃_n = u(α_n) for all n ∈ [N], i.e.,
X̃_n = u(α_n) = Σ_{k ∈ [K]} X̄_k Π_{l ∈ [K]\{k}} (α_n − β_l)/(β_k − β_l).  (6)
Each worker n stores X̃_n locally. Following the above data encoding, each worker n computes the function f on X̃_n and sends the result f(X̃_n) back to the master upon its completion.
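The encoding in (5)–(6) can be sketched as follows. The paper works over a binary extension field F_{2^ℓ}; for readability, this illustration uses a prime field GF(p) with p ≥ N, which has the same interpolation structure. All sizes and evaluation points below are assumed for illustration only.

```python
P = 97          # illustrative prime field size, P >= N
K, N = 3, 7     # K data points, N workers (assumed values)

def lagrange_encode(X, betas, alphas, p=P):
    """Evaluate the degree-(K-1) interpolant u with u(beta_k) = X[k]
    at each alpha_n, yielding worker n's coded share u(alpha_n)."""
    shares = []
    for a in alphas:
        acc = 0
        for k, beta_k in enumerate(betas):
            num, den = 1, 1
            for l, beta_l in enumerate(betas):
                if l != k:
                    num = num * (a - beta_l) % p
                    den = den * (beta_k - beta_l) % p
            # multiply X[k] by the Lagrange basis polynomial at a
            acc = (acc + X[k] * num * pow(den, -1, p)) % p
        shares.append(acc)
    return shares

X = [1, 0, 1]                          # embedded bits of the K data points
betas = [1, 2, 3]                      # interpolation points (one per X_k)
alphas = [10, 11, 12, 13, 14, 15, 16]  # one evaluation point per worker
tilde_X = lagrange_encode(X, betas, alphas)

# sanity check: evaluating u at the beta points recovers the data itself
assert lagrange_encode(X, betas, betas) == X
```

Each worker then applies f to its share; since f(u(z)) is a polynomial in z, the master can interpolate it from enough (possibly corrupted) evaluations.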
In the following, we present the security threshold provided by LCC. By [28], to be robust to A adversarial workers (given N and K), LCC requires N ≥ deg(f)(K − 1) + 2A + 1; i.e., LCC achieves the security threshold
A*_LCC = ⌊(N − deg(f)(K − 1) − 1)/2⌋.  (7)
After receiving the results from the N workers, the master can obtain all coefficients of the composed polynomial f(u(z)) by applying Reed–Solomon decoding [2, 18]. Having this polynomial, the master evaluates it at β_k for every k ∈ [K] to obtain f(X_k). The complexity of decoding a length-N Reed–Solomon code is O(N log²N log log N). To have a sufficiently large field for LCC, we pick ℓ = ⌈log₂ N⌉. Thus, the decoding process at the master requires complexity O(N log²N log log N).
The security threshold achieved by LCC depends on the degree of the function f, i.e., the security guarantee degrades sharply when f has high degree. To mitigate this degree effect, in the following sections we propose three schemes that model the Boolean function as the concatenation of low-degree polynomials and threshold functions.
IV. Scheme 1: Coded Algebraic Normal Form
In this section, we propose a coding scheme called coded algebraic normal form (ANF), which computes the ANF representation of the Boolean function f via linear threshold functions (LTFs) and uses a simple linear code for the data encoding. We start with an example to illustrate the idea of coded ANF.
Example 1.
We consider a function f which has an ANF representation defined as follows:
f(x_1, …, x_m) = x_1 x_2 ⋯ x_m.  (8)
Then, we define a linear function over the real field:
q(x) = x_1 + x_2 + ⋯ + x_m − m,  (9)
where q(x) = 0 if and only if x_1 = x_2 = ⋯ = x_m = 1. Otherwise, q(x) < 0. Thus, we can compute f by computing its corresponding linear threshold function, i.e., f(x) = 1 if q(x) ≥ 0; otherwise, f(x) = 0 if q(x) < 0. Unlike computing the function f, whose degree m results in a low security threshold, computing the linear function q allows us to apply a linear code on the computations.
IV-A. Formal Description of Coded ANF
Given the ANF representation defined in (1), we now present the proposed coded ANF as follows. For each monomial M_α(x) = x_1^{α_1} ⋯ x_m^{α_m} with a_α = 1, we define a linear function as follows:
q_α(x) = Σ_{j: α_j = 1} x_j − |α|,  (10)
where |α| denotes the number of ones in α. It is clear that q_α(x) = 0 if and only if M_α(x) = 1. Otherwise, q_α(x) < 0. Thus, there are s(f) constructed linear threshold functions, and each monomial M_α can be computed by its corresponding linear threshold function, i.e., M_α(x) = 1 if q_α(x) ≥ 0 and M_α(x) = 0 otherwise.
The master encodes X_1, …, X_K to X̃_1, …, X̃_N over the real field using an (N, K) MDS code. Each worker n stores X̃_n locally. Each worker n computes the functions q_α(X̃_n) for all monomials with a_α = 1 and then sends the results back to the master. After receiving the results from the workers, the master first recovers q_α(X_k) for each monomial and each k ∈ [K]. Then, the master sets M_α(X_k) = 1 if q_α(X_k) = 0, and M_α(X_k) = 0 if q_α(X_k) < 0. Lastly, the master recovers f(X_k) by summing the monomials modulo 2.
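The key property that makes a real-valued MDS code sufficient here is that the encoding commutes with the linear part of each q_α. Below is a minimal sketch with assumed sizes, a Vandermonde generator standing in for a generic MDS code, and error-free workers for brevity (with adversaries, the master would instead run Reed–Solomon error decoding):

```python
import numpy as np

K, N = 2, 5                              # assumed sizes for illustration
X = np.array([[1, 1, 1],                 # X_1 = (1, 1, 1)
              [1, 0, 1]], dtype=float)   # X_2 = (1, 0, 1)

# real Vandermonde generator: any K of its N rows are invertible,
# so it acts as an (N, K) MDS code over the reals
alphas = np.arange(1, N + 1, dtype=float)
G = np.vander(alphas, K, increasing=True)      # shape (N, K)
tilde_X = G @ X                                # worker n stores tilde_X[n]

# linear part of the LTF for the monomial x1*x2*x3 (threshold: == 3)
ell = lambda rows: rows.sum(axis=1)

# encoding commutes with the linear function ...
assert np.allclose(ell(tilde_X), G @ ell(X))

# ... so the master can decode ell(X_k) from any K workers (here 0 and 3)
subset = [0, 3]
dec = np.linalg.solve(G[subset], ell(tilde_X)[subset])

# and apply the threshold: the monomial equals 1 iff ell(X_k) = 3
monomials = np.isclose(dec, 3.0).astype(int)
assert monomials.tolist() == [1, 0]
```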
IV-B. Security Threshold of Coded ANF
To decode the MDS code, coded ANF applies Reed–Solomon decoding. Successful decoding requires that the number A of erroneous computation results satisfies N ≥ K + 2A. The following theorem states the security threshold achieved by coded ANF.
Theorem 1.
Given a number of workers N and a dataset X = (X_1, …, X_K), the proposed coded ANF can be robust to A adversaries for computing f(X_1), …, f(X_K) for any Boolean function f, as long as
N ≥ K + 2A,  (11)
i.e., coded ANF achieves the security threshold
A*_ANF = ⌊(N − K)/2⌋.  (12)
Whenever the master receives N results from the workers, it decodes the computation results using a length-N Reed–Solomon code for each of the s(f) linear functions, which incurs total complexity O(s(f) · N log²N log log N). Computing all the monomials via the signs of the corresponding linear threshold functions incurs complexity O(s(f) · K). Lastly, computing each f(X_k) by summing the monomials incurs complexity O(s(f) · K), since there are at most s(f) − 1 additions per function evaluation. Thus, the total complexity of the decoding step is O(s(f) · N log²N log log N), which works well for small s(f). Note that this scheme operates over the real field, so there is no field-size requirement that scales with N.
V. Scheme 2: Coded Disjunctive Normal Form
In this section, we propose a coding scheme called coded disjunctive normal form (DNF), which computes the DNF representation of the Boolean function f via LTFs and uses a simple linear code for the data encoding. We start with an example to illustrate the idea behind coded DNF.
Example 2.
Consider a function f whose ANF representation has a high degree deg(f) and a large number of monomials s(f). Alternatively, the same function has a DNF representation consisting of only two clauses C_a and C_{a′}, i.e., it has weight w(f) = 2.
For the clause C_a, we define a linear function over the real field:
q_a(x) = Σ_{j=1}^m c_{a,j} x_j − |a|,  (13)
where the coefficient c_{a,j} equals 1 if a_j = 1 and −1 if a_j = 0, and |a| is the number of ones in a; here q_a(x) = 0 if and only if x = a. Otherwise, q_a(x) < 0. Similarly, for the clause C_{a′}, we define a linear function over the real field:
q_{a′}(x) = Σ_{j=1}^m c_{a′,j} x_j − |a′|,  (14)
where q_{a′}(x) = 0 if and only if x = a′. Otherwise, q_{a′}(x) < 0. Therefore, we can compute f by computing q_a and q_{a′}, i.e., f(x) = 1 if at least one of q_a(x) and q_{a′}(x) is equal to 0. Otherwise, f(x) = 0. Unlike directly computing the function f with degree deg(f), computing the linear functions q_a and q_{a′} allows us to apply a linear code on the computations.
V-A. Formal Description of Coded DNF
Given the DNF representation defined in (2), we now present the proposed coded DNF as follows. For each clause C_a with the corresponding input a ∈ supp(f) such that C_a(a) = 1, we define a linear function over the real field:
q_a(x) = Σ_{j=1}^m c_{a,j} x_j − |a|,  (15)
where |a| denotes the number of ones in a and
c_{a,j} = 1 if a_j = 1, and c_{a,j} = −1 if a_j = 0.  (16)
It is clear that q_a(a) = 0 and q_a(x) < 0 for all other inputs x ≠ a. Thus, there are w(f) constructed linear threshold functions, and each clause C_a can be computed by its corresponding linear threshold function, i.e., C_a(x) = 1 if q_a(x) ≥ 0 and C_a(x) = 0 otherwise.
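As a sanity check of the clause LTFs defined in (15)–(16), the sketch below (helper name ours) verifies that q_a vanishes exactly at x = a and is negative everywhere else:

```python
from itertools import product

def clause_ltf(a):
    """Linear threshold function for the clause C_a as in (15)-(16):
    coefficient +1 where a_j = 1, -1 where a_j = 0, offset = #ones(a).
    Then q_a(x) = 0 iff x = a, and q_a(x) < 0 otherwise."""
    coeffs = [1 if aj else -1 for aj in a]
    offset = sum(a)
    return lambda x: sum(c * xj for c, xj in zip(coeffs, x)) - offset

a = (1, 0, 1)
q = clause_ltf(a)
assert q(a) == 0
assert all(q(x) < 0 for x in product([0, 1], repeat=3) if x != a)
```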
The master encodes X_1, …, X_K to X̃_1, …, X̃_N over the real field using an (N, K) MDS code. Each worker n stores X̃_n locally. Each worker n computes the functions q_a(X̃_n) for all a ∈ supp(f) and then sends the results back to the master. After receiving the results from the workers, the master first recovers q_a(X_k) for each a ∈ supp(f) and each k ∈ [K] via MDS decoding. Then, the master sets C_a(X_k) = 1 if q_a(X_k) = 0; otherwise C_a(X_k) = 0. Lastly, the master sets f(X_k) = 1 if at least one of the C_a(X_k) is equal to 1. Otherwise, f(X_k) = 0.
V-B. Security Threshold of Coded DNF
Similar to coded ANF, the following theorem states the security threshold achieved by coded DNF.
Theorem 2.
Given a number of workers N and a dataset X = (X_1, …, X_K), the proposed coded DNF can be robust to A adversaries for computing f(X_1), …, f(X_K) for any Boolean function f, as long as
N ≥ K + 2A,  (17)
i.e., coded DNF achieves the security threshold
A*_DNF = ⌊(N − K)/2⌋.  (18)
Whenever the master receives N results from the workers, it decodes the computation results using a length-N Reed–Solomon code for each of the w(f) linear functions, which incurs total complexity O(w(f) · N log²N log log N). Computing all the clauses via the signs of the corresponding linear threshold functions incurs complexity O(w(f) · K). Lastly, computing each f(X_k) by checking all the clauses requires complexity O(w(f) · K). Thus, the total complexity of the decoding step is O(w(f) · N log²N log log N), which works well for small w(f).
VI. Scheme 3: Coded Polynomial Threshold Function
In this section, we propose a coding scheme called coded polynomial threshold function (PTF), which computes the DNF representation of the Boolean function f via a polynomial threshold function and uses LCC for the data encoding.
VI-A. Formal Description of Coded PTF
Given the DNF representation defined in (2), we now present the proposed coded PTF. Following the construction proposed in [23, 13], we construct a polynomial threshold function for computing f, where p is a polynomial of degree at most ⌈log₂ w(f)⌉ + 1. The construction of this PTF proceeds in the following steps.
Decision Tree Construction: We construct a w(f)-leaf decision tree over the m variables such that each input in supp(f) arrives at a different leaf. Such a tree can always be constructed by a greedy algorithm. For each a ∈ supp(f), let leaf_a be the leaf of this tree that the input a reaches. We label leaf_a with the linear threshold function q_a defined in (15). The constructed decision tree, in which internal nodes are labeled with variables and leaves are labeled with linear threshold functions, computes exactly f.
Decision List: For this w(f)-leaf decision tree, we construct an equivalent decision list. It follows from the definition of rank that the rank of a w(f)-leaf tree is at most ⌈log₂ w(f)⌉. We find a leaf in the decision tree at distance at most ⌈log₂ w(f)⌉ from the root, and place the literals along the path to that leaf as a monomial at the top of a new decision list. We then remove the leaf from the tree, creating a new decision tree with one fewer leaf, and repeat this process [4]. Without loss of generality, let a_i denote the input whose leaf is the i-th one removed in this process, with corresponding monomial T_i of at most ⌈log₂ w(f)⌉ variables. The constructed list is defined as: "if T_1(x) = 1 then output q_{a_1}(x); else if T_2(x) = 1 then output q_{a_2}(x); … else if T_{w(f)}(x) = 1 then output q_{a_{w(f)}}(x)."
Polynomial Threshold Function: Having the constructed decision list, we construct the polynomial function p with degree at most ⌈log₂ w(f)⌉ + 1 by combining each monomial T_i with its leaf function, weighted by rapidly decreasing positive coefficients c_1 > c_2 > ⋯ > c_{w(f)} > 0 that are appropriately chosen so that the first satisfied monomial in the list determines the sign of p(x).
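The decision-list-to-PTF step can be illustrated with the standard weighting trick: geometrically decreasing coefficients make the first satisfied monomial dominate the sign of the sum. The sketch below uses constant 0/1 leaf outputs for brevity (the scheme's actual construction places the clause LTFs of (15) at the leaves), so it is an illustration of the weighting idea rather than the exact construction:

```python
def make_ptf(decision_list, default):
    """Turn a decision list into a polynomial threshold function.

    decision_list: list of (monomial, output_bit) pairs, where a
    monomial is a tuple of variable indices and the list is scanned
    top-down; `default` is the output when no monomial fires.
    Weight 2**(r - i) for the i-th rule exceeds the sum of all later
    weights, so the first firing rule determines the sign.
    """
    r = len(decision_list)
    def p(x):
        total = 0
        for i, (mono, bit) in enumerate(decision_list):
            fires = int(all(x[j] for j in mono))
            total += (2 ** (r - i)) * fires * (1 if bit else -1)
        total += 1 if default else -1      # default entry, weight 1
        return total
    return lambda x: 1 if p(x) >= 0 else 0

# decision list: "if x0*x1 then 1; else if x2 then 0; else 1"
f = make_ptf([((0, 1), 1), ((2,), 0)], default=1)
assert f((1, 1, 1)) == 1   # first rule fires and dominates
assert f((0, 1, 1)) == 0   # second rule fires
assert f((0, 0, 0)) == 1   # default output
```

The degree of the resulting polynomial is the largest monomial length (plus one when the leaves are linear functions rather than constants), matching the ⌈log₂ w(f)⌉ + 1 bound above.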
The master encodes X_1, …, X_K to X̃_1, …, X̃_N over the real field using LCC. Each worker n stores X̃_n locally. Each worker n computes the function p(X̃_n) and then sends the result back to the master. After receiving the results from the workers, the master first recovers p(X_k) for all k ∈ [K] via LCC decoding. Then, the master sets f(X_k) = 1 if p(X_k) ≥ 0; otherwise f(X_k) = 0.
VI-B. Security Threshold of Coded PTF
Since p has degree at most ⌈log₂ w(f)⌉ + 1, to be robust to A adversaries, LCC requires the number of workers N to satisfy N ≥ (⌈log₂ w(f)⌉ + 1)(K − 1) + 2A + 1. Then, we have the following theorem.
Theorem 3.
Given a number of workers N and a dataset X = (X_1, …, X_K), the proposed coded polynomial threshold function can be robust to A adversaries for computing f(X_1), …, f(X_K) for any Boolean function f, as long as
N ≥ (⌈log₂ w(f)⌉ + 1)(K − 1) + 2A + 1,  (19)
i.e., coded PTF achieves the security threshold
A*_PTF = ⌊(N − (⌈log₂ w(f)⌉ + 1)(K − 1) − 1)/2⌋.  (20)
Whenever the master receives N results from the workers, it decodes the computation results using a length-N Reed–Solomon code for the single polynomial function p, which incurs total complexity O(N log²N log log N). Lastly, computing each f(X_k) by checking the sign of p(X_k) requires complexity O(K) in total. Thus, the total complexity of the decoding step is O(N log²N log log N).
In the following example, we show that coded PTF outperforms LCC for Boolean functions whose sparsity s(f) and weight w(f) are polynomial in m.
Example 3.
Consider a function f which has an ANF representation defined as follows:
(21)
Note that here we focus on the case in which m is sufficiently large. For this function, the degree deg(f) is high, while the sparsity s(f) and the weight w(f) are polynomial in m.
For the Boolean function considered in Example 3, coded PTF achieves a security threshold greater than the security threshold provided by LCC. Although coded ANF and coded DNF achieve the optimal security threshold, they require a decoding complexity that grows with s(f) and w(f), and hence with m, i.e., they only work well for small m. With a security threshold slightly worse than those of coded ANF and coded DNF, coded PTF achieves a better decoding complexity that is independent of m, i.e., coded PTF also works for large m.
References
 [1] (2016) TensorFlow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283. Cited by: §I.
 [2] (1968) Non-binary BCH decoding (abstr.). IEEE Transactions on Information Theory 14 (2), pp. 242–242. Cited by: §III.
 [3] (2017) Machine learning with adversaries: Byzantine tolerant gradient descent. In Advances in Neural Information Processing Systems, pp. 119–129. Cited by: §I.
 [4] (1992) Rank-r decision trees are a subclass of r-decision lists. Information Processing Letters 42 (4), pp. 183–185. Cited by: §VI-A.
 [5] (2008) Sharemind: a framework for fast privacy-preserving computations. In European Symposium on Research in Computer Security, pp. 192–206. Cited by: §I.
 [6] (2018) DRACO: Byzantine-resilient distributed training via redundant gradients. In International Conference on Machine Learning, pp. 903–912. Cited by: §I-B.
 [7] (2015) Secure multiparty computation. Cambridge University Press. Cited by: §I.
 [8] (2017) Cryptographic Boolean functions and applications. Academic Press. Cited by: §I.

 [9] (2016) Short-Dot: computing large linear transforms distributedly using coded short dot products. In Advances in Neural Information Processing Systems, pp. 2100–2108. Cited by: §I-B.
 [10] (2018) Hierarchical coded computation. In 2018 IEEE International Symposium on Information Theory (ISIT), pp. 1620–1624. Cited by: §I-B.
 [11] (2019) Gradient coding based on block designs for mitigating adversarial stragglers. arXiv preprint arXiv:1904.13373. Cited by: §I-B.
 [12] (2017) Straggler mitigation in distributed optimization through data encoding. In Advances in Neural Information Processing Systems, pp. 5434–5442. Cited by: §I-B.
 [13] (2004) Learning DNF in time 2^{Õ(n^{1/3})}. Journal of Computer and System Sciences 68 (2), pp. 303–318. Cited by: §VI-A.
 [14] (2018) Speeding up distributed machine learning using codes. IEEE Transactions on Information Theory 64 (3), pp. 1514–1529. Cited by: §I-B.
 [15] (2017) High-dimensional coded matrix multiplication. In 2017 IEEE International Symposium on Information Theory (ISIT), pp. 2418–2422. Cited by: §I-B.
 [16] (2017) Coding for distributed fog computing. IEEE Communications Magazine 55 (4), pp. 34–40. Cited by: §I-B.
 [17] (2018) A fundamental tradeoff between computation and communication in distributed computing. IEEE Transactions on Information Theory 64 (1), pp. 109–128. Cited by: §I-B.
 [18] (1969) Shift-register synthesis and BCH decoding. IEEE Transactions on Information Theory 15 (1), pp. 122–127. Cited by: §III.
 [19] (2000) The use of Boolean concepts in general classification contexts. Technical report, EPFL. Cited by: §I.
 [20] (2019) Slack squeeze coded computing for adaptive straggler mitigation. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 14. Cited by: §I-B.

 [21] (1987) On learning Boolean functions. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pp. 296–304. Cited by: §I.
 [22] (2019) Secure coded multi-party computation for massive matrix operations. arXiv preprint arXiv:1908.04255. Cited by: §I-B.
 [23] (2008) Extremal properties of polynomial threshold functions. Journal of Computer and System Sciences 74 (3), pp. 298–312. Cited by: §VI-A.
 [24] (2019) Coded computation over heterogeneous clusters. IEEE Transactions on Information Theory. Cited by: §I-B.
 [25] (2019) CodedPrivateML: a fast and privacy-preserving framework for distributed machine learning. arXiv preprint arXiv:1902.00641. Cited by: §I-B.
 [26] (2017) Gradient coding: avoiding stragglers in distributed learning. In International Conference on Machine Learning, pp. 3368–3376. Cited by: §I-B.
 [27] (2019) Timely-throughput optimal coded computing over cloud networks. In Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 301–310. Cited by: §I-B.

 [28] (2019) Lagrange coded computing: optimal design for resiliency, security, and privacy. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1215–1225. Cited by: §I, §I-B, §III.
 [29] (2017) Polynomial codes: an optimal design for high-dimensional coded matrix multiplication. In Advances in Neural Information Processing Systems, pp. 4403–4413. Cited by: §I-B.