MPC-enabled Privacy-Preserving Neural Network Training against Malicious Attack

Over the past decade, the application of secure multiparty computation (MPC) to machine learning, especially privacy-preserving neural network training, has attracted tremendous attention from both academia and industry. MPC enables several data owners to jointly train a neural network while preserving their data privacy. However, most previous works focus on the semi-honest threat model, which cannot withstand fraudulent messages sent by malicious participants. In this work, we propose a construction of efficient n-party protocols for secure neural network training that preserves the privacy of all honest participants even when a majority of the parties are malicious. Compared to other designs that provide only semi-honest security in a dishonest-majority setting, our actively secure neural network training incurs affordable efficiency overheads. In addition, we propose a scheme that allows additive shares defined over an integer ring ℤ_N to be securely converted into additive shares over a finite field ℤ_Q. This conversion scheme is essential for correctly converting shared Beaver triples so that the values generated in the preprocessing phase are usable in the online phase, and may be of independent interest.





1. Introduction

During this last decade, with the development of machine learning, especially deep neural networks (DNNs), the scenario where different parties, e.g., data owners or cloud service providers, jointly solve a machine learning problem while preserving their data privacy has attracted tremendous attention from both academia and industry. Federated learning (McMahan et al., 2016) offers a possible approach to distributed privacy-preserving machine learning, focusing on cross-device and cross-silo settings (Kairouz et al., 2019) in which multiple clients train local models on their raw data and then aggregate the models under the coordination of a central server. However, this baseline learning model still lacks a formal privacy guarantee. Therefore, secure multiparty computation (MPC) (Evans et al., 2018), a mature and practical privacy-preserving technique that enables multiple parties to jointly evaluate a function, is a natural tool for addressing the privacy issues of training neural networks in a distributed manner.

Previous Results: Several schemes have been proposed to perform distributed neural network training. This research direction was arguably pioneered by the design of SecureML by Mohassel and Zhang (Mohassel and Zhang, 2017), in which several MPC-friendly activation functions are proposed to enable neural network training on secret-shared data. The study of neural networks on shared data can be classified into several classes based on the goal, the number of parties, and the threat model. First, some models focus on neural network prediction. MiniONN (Liu et al., 2017), a scheme designed for neural network prediction, was constructed based on the design of SecureML (Mohassel and Zhang, 2017). With the extensive use of packing techniques and an additively homomorphic encryption (AHE) cryptosystem along with garbled circuits, Gazelle (Juvekar et al., 2018) provides a neural network prediction protocol with more efficient linear computation.

Secondly, other models provide secure neural network training for two active parties with security against one corrupted party. As discussed above, SecureML (Mohassel and Zhang, 2017) provides a neural network training protocol for two parties that is secure against a semi-honest adversary controlling one party. Chameleon (Riazi et al., 2018) proposes a different two-party neural network training protocol: instead of using the relatively more costly oblivious transfer (OT) protocols, the design relies on external parties that aid the computation without supplying private inputs of their own. In this case, the protocol is proven secure against a semi-honest adversary in the honest-majority setting. ABY (Demmler et al., 2015) presents a framework for efficient conversion between various two-party computation schemes to support different machine learning algorithms; it is proven secure against a semi-honest adversary controlling one party. SecureNN (Wagh et al., 2019) relies on more sophisticated protocols and up to two non-colluding external parties to provide neural network training protocols for two active parties; its security holds against a semi-honest adversary controlling at most one party. QUOTIENT (Agrawal et al., 2019) deeply integrates OT protocols with advanced neural network techniques such as ternary-weight neural networks (Li et al., 2016) in a two-server setting, against a semi-honest adversary controlling one of the two servers. Note that all the designs mentioned here are two-party designs with only a semi-honest security guarantee, where the number of corrupted parties is at most half of the total number of participating parties. Despite some efforts to extend these designs to a larger number of active parties (Mohassel and Rindal, 2018), the improvement has been rather limited.

Lastly, there have also been designs dedicated to systems with more than two parties with an active security guarantee in the dishonest-majority setting. Most of these works are based on the SPDZ scheme (Damgård et al., 2012). These works (Chen et al., 2019; Damgård et al., 2019; Sharma et al.) utilize SPDZ to provide several accurate and efficient machine learning algorithms. Nevertheless, they rely only on existing libraries, such as SCALE-MAMBA (Aly et al., 2019) and FRESCO (Alexandra Institute, 2020), which do not offer primitives and optimizations for neural network training. Therefore, in this work, we present dedicated MPC protocols based on SPDZ for convolutional neural network (CNN) training and demonstrate that our protocols achieve active security with affordable overheads compared to existing secure neural network training in the semi-honest setting.

In the SPDZ protocol, the main efficiency improvement and the active security come from the extensive use of pre-computed Beaver triples (Beaver, 1991) together with message authentication codes (MACs) (see Section 3.2) to accelerate arithmetic operations. Over the last decade, following the initial scheme proposed in BDOZ (Bendlin et al., 2011), many researchers have worked on protocols for efficient Beaver triple generation in the malicious setting. Both HE-based schemes, such as (Damgård et al., 2012, 2013; Keller et al., 2018), and OT-based schemes, such as (Nielsen et al., 2012; Keller et al., 2016; Cramer et al., 2018), offer reasonable efficiency and security. Relying on these schemes, all parties are able to jointly generate Beaver triples over a finite field or a ring that can be directly used for MAC checking in the online phase of SPDZ and its variants. Specifically, for HE-based schemes, a proper cryptosystem has to be chosen for the offline phase; e.g., leveled BGV (Brakerski et al., 2014) is used in (Damgård et al., 2013; Keller et al., 2018; Orsini et al., 2020) for its high performance due to the extensive use of packing techniques, e.g., the single-instruction multiple-data (SIMD) trick (Smart and Vercauteren, 2014). Unfortunately, Beaver triples generated from AHE cryptosystems over ℤ_N, e.g., Paillier (Paillier, 1999) or DGK (Damgård et al., 2008) as used in SecureML (Mohassel and Zhang, 2017), cannot be directly used for the verification of standard SPDZ, which is based on a finite field. Therefore, an instance of SPDZ based on Paillier or DGK requires a secure scheme to transform the triples generated modulo N to the underlying field of SPDZ.

Our contributions: In this work, we propose a construction of efficient n-party protocols for secure CNN training in the malicious-majority setting, including the linear and convolutional layers, the Rectified Linear Unit (ReLU) layer, the Maxpool layer, the normalization layer, the dropout layer, and their derivatives. In addition, we present a secure conversion scheme for shares defined over an integer ring ℤ_N to shares over a prime field ℤ_Q, which can also be used to correctly convert shared Beaver triples; we believe this result may be of independent interest. Our experimental results show that our protocols for secure neural network training incur affordable overheads compared with existing schemes in the semi-honest setting.

Organisation of the paper: The rest of the paper is organized as follows. In Section 2, we provide the notation and threat model used in this paper, as well as a brief discussion of secure computation and neural networks. In Section 3, we introduce several supporting protocols, including the distributed Paillier cryptosystem, the SPDZ protocol, and protocols for secure computation of fixed-point numbers. Section 4 contains our MPC protocols, which are used to construct an efficient secure neural network protocol as illustrated in Section 5. We then analyze the performance of our protocol in Section 6. Finally, we present our experimental evaluation in Section 7 and conclude in Section 8.

2. Preliminaries

2.1. Privacy Considerations in Neural Network Training

A neural network consists of many layers whose nodes are defined by linear operations, such as addition and multiplication, and non-linear operations such as ReLU, Maxpool, and dropout. At a very high level, we represent a neural network as a function f(D, W), where D represents the set of input data with their respective labels, W represents the set of weights of the neural network, and f can be expressed with the linear and non-linear operations mentioned above. The goal of training a neural network is to obtain the weights W, which can then be used to map a new unlabeled data point x to its predicted label y, i.e., prediction.

2.2. Secure multiparty computation

Privacy-preserving technology provides privacy guarantees for data used for various purposes, such as computation or publishing. This technology broadly encompasses all schemes for privacy-preserving function evaluation, including but not limited to differential privacy (DP), secure multiparty computation (MPC), and homomorphic encryption (HE). It is well known that DP provides a mathematically analyzable tradeoff between accuracy and privacy, while MPC and HE offer cryptographic privacy at the cost of high communication or computation overheads. In this work, we focus on MPC to construct efficient neural network training protocols. Since its general definition in (Yao, 1982), thanks to both theoretical and engineering breakthroughs, MPC has moved from pure theoretical interest to practical implementations, e.g., the Danish sugar beet auction (Bogetoft et al., 2009) and the Estonian study on students (Bogdanov et al., 2008).

In addition to purely homomorphic-encryption-based MPC, there are two other approaches to constructing MPC protocols: circuit garbling and secret sharing. Circuit garbling, as used in SecureML (Mohassel and Zhang, 2017), involves encrypting and decrypting keys in a specific order (Applebaum et al., 2014), while secret sharing emulates a function evaluation more efficiently on inputs that are "secretly shared" among all parties. Our work builds on an additive secret-sharing MPC protocol called SPDZ (see Section 3.2): each data item x is randomly split into n pieces and distributed among the n parties (see Algorithm 1 in Section 3.2). In the rest of the paper, we write ⟨x⟩ to denote that x is secretly shared among all parties such that each party P_i holds ⟨x⟩_i and x = Σ_{i=1}^{n} ⟨x⟩_i. For simplicity of notation, when the context is clear, we abuse notation and write x_i instead of ⟨x⟩_i for the share of x owned by party P_i. Similarly, when the underlying space is clear from the context, we omit the subscript indicating the space from ⟨x⟩.

2.3. Threat model and security

In many real-life neural network applications, the training data are distributed across multiple parties that are independent business entities and are required to comply with the applicable data privacy regulations. Therefore, due to the competitive nature of business organizations, in this work we consider the scenario where a majority of parties may collude to obtain the data of the other parties by sending fraudulent messages to them. This threat model is the same as the one used in SPDZ, i.e., security against a malicious adversary controlling up to n−1 parties. This means that in an n-party setting, the MPC protocol remains secure even if n−1 parties are corrupted by a malicious adversary. This threat model differs from that of the MPC protocols in SecureML (Mohassel and Zhang, 2017) and SecureNN (Wagh et al., 2019), which assume a semi-honest adversary. As demonstrated in (Cramer et al., 2015), semi-honest protocols can be elevated to the malicious model, but this may incur an infeasible cost overhead. However, thanks to the online-offline architecture of SPDZ, such overhead can be moved from the online phase to the offline phase, improving the amortized efficiency of function evaluation. Our security definition is based on the Universal Composability (UC) framework; we refer interested readers to (Canetti, 2001) for the details.

The correctness and security of our proposed protocol depend on the supporting building blocks, i.e., the distributed Paillier cryptosystem, SPDZ, and secure computation of fixed-point numbers. Data representation follows the format in (Catrina and Saxena, 2010), and we use the standard SPDZ scheme over a finite field ℤ_Q. The SPDZ subprotocols involved in this work include data resharing, multiplication, Paillier-based Beaver triple generation, and MAC checking. The details of all these supporting protocols can be found in Section 3.

3. Supporting protocols

3.1. Distributed Paillier cryptosystem

Paillier (Paillier, 1999) is a public-key encryption scheme with a partially homomorphic property (Acar et al., 2018). The public key is N = pq and the secret key is the pair (p, q), where p and q are large primes. First, we fix g to be a random invertible integer modulo N². The encryption of a message m is Enc(m) = g^m · r^N mod N² for a randomly chosen invertible r. The decryption function is Dec(c) = L(c^{φ(N)} mod N²) · φ(N)^{-1} mod N, where the function L is defined as L(x) = (x − 1)/N and φ is Euler's totient function. Paillier supports homomorphic addition between two ciphertexts and homomorphic multiplication between a plaintext and a ciphertext; in particular, Enc(m₁) · Enc(m₂) mod N² is a valid encryption of m₁ + m₂, and Enc(m)^k mod N² is a valid encryption of k · m. Due to the invertibility of g and r modulo N², a Paillier ciphertext is itself invertible modulo N²; hence, if c is a valid encryption of m, then c^{-1} mod N² is a valid encryption of −m. Using this, Paillier can also support homomorphic subtraction: Enc(m₁) · Enc(m₂)^{-1} mod N² is a valid encryption of m₁ − m₂. These homomorphic properties enable several protocols proposed in Section 3.2 and Section 4. However, in the MPC scenario, the secret key pair (p, q) must not be owned by any single party. To maintain the hardness of the composite residuosity problem (Jager, 2012), on which the security of Paillier relies, the values of p and q must not be known to anyone. Hence, we need to generate the public key in a distributed manner and keep its factors secret, while still enabling joint decryption without revealing the private key. Fortunately, such a distributed Paillier scheme exists. Distributed Paillier key generation includes two sub-protocols: (i) distributed RSA modulus generation, and (ii) a distributed biprimality test to verify the validity of the RSA modulus generated in (i). Inspired by (Boneh and Franklin, 1997), which proposed the first RSA modulus generation protocol in the multiparty setting, several works (Frankel et al., 1998; Nishide and Sakurai, 2010; Gilboa, 1999) provide solutions under different threat models.
Note that to ensure our protocol is secure against malicious adversaries, we have to guarantee that all the sub-protocols are secure against the same threat model in the n-party setting; thus we rely on the scheme proposed in (Hazay et al., 2019). We denote the Paillier cryptosystem with plaintext space ℤ_N together with its encryption and distributed decryption by Enc and DDec, respectively.
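To make the homomorphic properties above concrete, here is a toy, non-distributed Paillier sketch in Python with the common choice g = N + 1; the primes, and hence the security level, are purely illustrative (a real deployment uses a distributed key and a modulus of 2048 bits or more):

```python
import math
import random

def keygen(p, q):
    # toy parameters; real deployments use primes of >= 1024 bits each
    N = p * q
    lam = math.lcm(p - 1, q - 1)
    # with g = 1 + N, the decryption constant is lam^{-1} mod N
    mu = pow(lam, -1, N)
    return (N,), (N, lam, mu)

def enc(pk, m):
    (N,) = pk
    N2 = N * N
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:      # r must be invertible mod N
        r = random.randrange(1, N)
    return pow(1 + N, m, N2) * pow(r, N, N2) % N2

def dec(sk, c):
    N, lam, mu = sk
    N2 = N * N
    L = (pow(c, lam, N2) - 1) // N  # the L function from the text
    return L * mu % N

pk, sk = keygen(1009, 1013)
N = pk[0]
c1, c2 = enc(pk, 7), enc(pk, 12)
add = c1 * c2 % (N * N)             # homomorphic addition: Enc(7 + 12)
scal = pow(c1, 5, N * N)            # plaintext-ciphertext mult: Enc(5 * 7)
assert dec(sk, add) == 19
assert dec(sk, scal) == 35
```

The same ciphertext operations (multiplication mod N² for addition, exponentiation for scaling) are what the distributed variant performs; only key generation and decryption are spread across the parties.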

3.2. SPDZ

SPDZ is a well-known secret-sharing-based MPC protocol secure against a malicious majority, proposed in (Damgård et al., 2012). Following this initial somewhat homomorphic encryption (SHE) based work, several variants have been proposed, including its improved version (Damgård et al., 2013), an OT-based version called MASCOT (Keller et al., 2016), an improved SHE-based version called Overdrive (Keller et al., 2018), and versions over ℤ_{2^k} such as SPDZ2k (Cramer et al., 2018) and Overdrive2k (Orsini et al., 2020). We refer to all of these variants in the SPDZ family simply as SPDZ.

SPDZ consists of a pre-processing (offline) phase, which is independent of the input data, and a very efficient online phase for function evaluation. In the offline phase, all parties jointly generate some "raw material", typically Beaver triples (Beaver, 1991). In the online phase, the parties only need to exchange some shares and perform some efficient verification. Active security is guaranteed by the MACs, which enable validation of the parties' behavior during the computation. In the rest of this section, several important techniques in SHE-based SPDZ are introduced, which we use to construct the higher-level protocols proposed in Section 4.

Data Resharing: Given an encryption Enc(x), all parties can follow the protocol in Algorithm 1 to obtain ⟨x⟩, the shares of x. We note that this resharing of an encrypted value is only done during the preprocessing phase, to help generate auxiliary values; it is not used to share the private inputs of the function being evaluated. The sharing of private inputs during the online phase of the computation follows the protocol given in Algorithm 2.

1:  Each party P_i publishes Enc(r_i), where r_i is uniformly selected from ℤ_N.
2:  All parties calculate Enc(x + Σ_{i=1}^{n} r_i) using homomorphic addition.
3:  All parties jointly decrypt Enc(x + Σ_{i=1}^{n} r_i) to obtain x + Σ_{i=1}^{n} r_i.
4:  P_1 sets its share ⟨x⟩_1 = x + Σ_{i=1}^{n} r_i − r_1; P_i sets its share ⟨x⟩_i = −r_i for i ≠ 1.
5:  Return ⟨x⟩
Algorithm 1 Data Resharing: ⟨x⟩ ← Reshare(Enc(x))
0:  A shared random value ⟨r⟩
1:  Each party sends its share of ⟨r⟩ to P_i, enabling P_i to recover the value of r.
2:  P_i sets ⟨x⟩_i = ⟨r⟩_i + (x − r), and for j ≠ i, P_j sets ⟨x⟩_j = ⟨r⟩_j.
3:  Return ⟨x⟩
Algorithm 2 Data Sharing: ⟨x⟩ ← Share(x), where x is a private value owned by P_i
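The masking idea behind Algorithm 1 can be sketched as follows; for illustration, the homomorphic aggregation and joint decryption are modeled by directly revealing the masked value x + Σ_i r_i mod N, which is the only value the real protocol exposes (the modulus is a toy value):

```python
import random

N = 101 * 103   # toy plaintext modulus of the Paillier system
n, x = 3, 4242  # number of parties and the encrypted secret

# each P_i publishes Enc(r_i) for a uniform mask r_i
r = [random.randrange(N) for _ in range(n)]
# homomorphic addition followed by joint decryption reveals only x + sum(r)
masked = (x + sum(r)) % N
# P_1 keeps (x + sum(r)) - r_1; every other P_i keeps -r_i
shares = [(masked - r[0]) % N] + [(-ri) % N for ri in r[1:]]
assert sum(shares) % N == x % N   # the shares reconstruct x
```

Since each party subtracts its own fresh mask, no party learns anything beyond the uniformly masked opening.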

Arithmetic operations: SPDZ is based on secret sharing, so addition and scaling by a public constant incur no communication cost. Multiplication between two secretly shared values is more complex, but we can use the well-known Beaver triple trick to accelerate this operation. In the following discussion, all values are secretly shared using the additive secret-sharing scheme over the same space. Assume that we have generated three secret-shared values ⟨a⟩, ⟨b⟩, and ⟨c⟩, called a Beaver triple, such that c = ab. Given ⟨x⟩ and ⟨y⟩, all parties can follow the protocol in Algorithm 3 to calculate ⟨xy⟩. Correctness and security are proven in (Beaver, 1991). Note that all the protocols in SPDZ can be applied to matrices.

1:  Each party P_i publishes ⟨x⟩_i − ⟨a⟩_i and ⟨y⟩_i − ⟨b⟩_i.
2:  Each party computes ε = x − a and δ = y − b.
3:  P_1 sets its share ⟨z⟩_1 = ⟨c⟩_1 + ε⟨b⟩_1 + δ⟨a⟩_1 + εδ; P_i sets its share ⟨z⟩_i = ⟨c⟩_i + ε⟨b⟩_i + δ⟨a⟩_i for i ≠ 1.
4:  Return ⟨z⟩ = ⟨xy⟩
Algorithm 3 Multiplication based on a Beaver triple: ⟨xy⟩ ← Mult(⟨x⟩, ⟨y⟩)
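A minimal sketch of Algorithm 3 over a small illustrative prime field, with the opening of a shared value modeled by summing its shares:

```python
import random

Q = 2**31 - 1  # small prime field for illustration

def share(v, n):
    """Additively share v among n parties over Z_Q."""
    s = [random.randrange(Q) for _ in range(n - 1)]
    s.append((v - sum(s)) % Q)
    return s

def open_(shares):
    """Model the opening of a shared value."""
    return sum(shares) % Q

n, x, y = 3, 1234, 5678
a = random.randrange(Q)
b = random.randrange(Q)
c = a * b % Q                          # a pre-computed Beaver triple
a_sh, b_sh, c_sh = share(a, n), share(b, n), share(c, n)
x_sh, y_sh = share(x, n), share(y, n)

# steps 1-2: open epsilon = x - a and delta = y - b
eps = open_([(x_sh[i] - a_sh[i]) % Q for i in range(n)])
dlt = open_([(y_sh[i] - b_sh[i]) % Q for i in range(n)])
# step 3: local shares of z = c + eps*b + dlt*a, plus eps*dlt once (by P_1)
z_sh = [(c_sh[i] + eps * b_sh[i] + dlt * a_sh[i]) % Q for i in range(n)]
z_sh[0] = (z_sh[0] + eps * dlt) % Q
assert open_(z_sh) == x * y % Q
```

Expanding z = ab + (x−a)b + (y−b)a + (x−a)(y−b) shows the telescoping that makes the result equal xy.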

Since one triple cannot be used to perform two multiplications for privacy reasons (Bendlin et al., 2011), the number of triples to generate depends on the number of multiplications to be performed. Furthermore, these Beaver triples depend neither on the input data nor on the function to be evaluated, which means they can be generated at any point prior to evaluating the function, i.e., in the offline phase of SPDZ, thus enabling a highly efficient online phase.

Beaver triple generation: Algorithm 4 describes the n-party protocol for Beaver triple generation based on Paillier such that c = ab mod N. For simplicity of discussion, we only describe the protocol in the semi-honest setting. As discussed before, such protocols can be made secure against malicious adversaries by a combination of zero-knowledge proofs and the standard technique of sacrificing an auxiliary value to check the correctness of another; a more detailed discussion of this technique can be found in (Rotaru and Smart). As mentioned in Section 1, the Beaver triples generated using Algorithm 4 are over ℤ_N and thus cannot be directly used in the online phase of SPDZ, which is over a finite field.

1:  Each party P_i publishes Enc(a_i) and Enc(b_i), where a_i and b_i are uniformly selected from ℤ_N.
2:  Each party computes Enc(b) = Enc(Σ_{i=1}^{n} b_i) using homomorphic addition.
3:  Each party P_i computes and publishes Enc(a_i · b) using homomorphic multiplication in Paillier.
4:  Each party computes Enc(c) = Enc(Σ_{i=1}^{n} a_i · b) = Enc(ab) using homomorphic addition.
5:  All parties call Reshare(Enc(c)) to get ⟨c⟩.
6:  Return (⟨a⟩, ⟨b⟩, ⟨c⟩)
Algorithm 4 Beaver triple generation based on Paillier: (⟨a⟩, ⟨b⟩, ⟨c⟩) with c = ab mod N

Note that in step 3 of Algorithm 4, only P_i knows the value of a_i, which is what enables the homomorphic multiplication between a plaintext and a ciphertext (see Section 3.1), i.e., Enc(a_i · b) = Enc(b)^{a_i} mod N²; thus no information leaks.
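The homomorphic steps of Algorithm 4 can be traced in the clear with a toy single-key Paillier instance (illustrative primes and g = 1 + N; the joint decryption and resharing of step 5 are replaced by a plain decryption for checking):

```python
import math
import random

# compact toy Paillier (g = 1+N) to mirror the homomorphic steps
p, q = 1009, 1013
N = p * q
N2 = N * N
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, N)

def enc(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return pow(1 + N, m % N, N2) * pow(r, N, N2) % N2

def dec(c):
    return (pow(c, lam, N2) - 1) // N * mu % N

n = 3
a = [random.randrange(N) for _ in range(n)]   # P_i's share a_i
b = [random.randrange(N) for _ in range(n)]   # P_i's share b_i

# steps 1-2: publish Enc(b_i); everyone aggregates Enc(b) homomorphically
enc_b = 1
for bi in b:
    enc_b = enc_b * enc(bi) % N2
# step 3: P_i computes Enc(a_i * b) = Enc(b)^{a_i}
enc_aib = [pow(enc_b, ai, N2) for ai in a]
# step 4: the product of the Enc(a_i * b) is Enc(a * b)
enc_c = 1
for ci in enc_aib:
    enc_c = enc_c * ci % N2
# (jointly) decrypting and resharing would give <c>; we check in the clear
assert dec(enc_c) == sum(a) % N * (sum(b) % N) % N
```

In the actual protocol, step 5 reshares Enc(c) so that c is never revealed; decrypting here is only for verifying the algebra.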

MAC checking: To obtain active security over ℤ_Q, the main idea of SPDZ is to use an unconditional MAC that enables verification of computation correctness. This authentication scheme prevents parties from cheating in their interactive computation, except with small probability. In SPDZ, to enable authentication, each private value, including the generated Beaver triples, comes with a corresponding tag. To obtain this, the parties first agree on a random MAC key α, which is secretly shared among all parties. To compute the tag of a secretly shared value ⟨x⟩, the parties compute ⟨αx⟩ and store it along with their shares of x. Observe that if some adversaries cheat by changing the secretly shared value from x to x + δ, they remain undetected only if they can modify the corresponding tag to αx + Δ such that Δ = αδ. This means that the probability of cheating without detection equals the probability of guessing α correctly, which is inversely proportional to the size Q of the finite field. When a similar scheme is considered over a ring ℤ_{2^k}, the security is no longer as strong. This is due to the fact that, contrary to ℤ_Q, not all non-zero values in ℤ_{2^k} are invertible; because of this, the probability that Δ = αδ becomes larger. For example, if k = 1 and δ = 1, then αδ can only be either 0 or 1, making the probability 1/2. As illustrated above, we use Paillier to generate Beaver triples with MACs, which means all the secretly shared values are in ℤ_N; hence we have to convert all these shares from ℤ_N to ℤ_Q while preserving the relationship between them. Note that for any sub-protocol, all inputs and outputs should always be secretly shared and never in the clear. In addition to the shares of the outputs, the parties should also hold secret shares of the respective tags.
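The MAC check can be sketched as follows over an illustrative prime field; the commit-then-open step of the real protocol is modeled by simply summing the parties' difference values d_i:

```python
import random

Q = 2**61 - 1  # an illustrative Mersenne prime standing in for the SPDZ field

def share(v, n):
    """Additively share v among n parties over Z_Q."""
    s = [random.randrange(Q) for _ in range(n - 1)]
    s.append((v - sum(s)) % Q)
    return s

n = 3
alpha = random.randrange(1, Q)        # global MAC key (itself shared)
alpha_sh = share(alpha, n)
x = 42
x_sh = share(x, n)
tag_sh = share(alpha * x % Q, n)      # tag = alpha * x, held in shares

# to open x and check its MAC, each party reveals
# d_i = tag_i - alpha_i * x_open; the check passes iff the d_i sum to 0
x_open = sum(x_sh) % Q
d = [(tag_sh[i] - alpha_sh[i] * x_open) % Q for i in range(n)]
assert sum(d) % Q == 0

# a cheater shifting x by delta = 1 must also shift the tag by alpha,
# i.e., guess the MAC key; otherwise the check fails
x_bad = (x_open + 1) % Q
d_bad = [(tag_sh[i] - alpha_sh[i] * x_bad) % Q for i in range(n)]
assert sum(d_bad) % Q != 0
```

The failed second check illustrates why the forgery probability is roughly 1/Q: the cheater's offset must equal α·δ, and α is uniform in the field.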

3.3. Secure computation of fixed-point numbers

For typical neural networks, data and weights are represented by floating-point numbers. However, when combining neural networks with cryptographic techniques such as HE and MPC, a very large finite field is needed to preserve full accuracy (Gilad-Bachrach et al., 2016). That approach supports only a limited number of multiplications before overflow, which is prohibitive for neural networks, where a large number of multiplications are involved. Fortunately, the authors of (Mohassel and Zhang, 2017) show that rational numbers can be treated as fixed-point numbers using a truncation technique while preserving adequate accuracy of neural network prediction. Other prior works on neural networks with MPC, such as (Wagh et al., 2019) and (Agrawal et al., 2019), also validate the accuracy of truncation. In our work, we follow the same methodology and extend it to secure n-party truncation, comparison, and arithmetic operations based on the protocols given in (Catrina and De Hoogh, 2010; Catrina and Saxena, 2010; Toft and others, 2007; Damgård et al., 2006). Note that the correctness of these protocols is established in the works cited above, and security is guaranteed by the MAC-checking method of SPDZ.

Data representation: Rational numbers can be treated as a sequence of digits comprising an integer part and a fractional part separated by a radix point. More specifically, for any real number x̃, set e and f as positive integers such that |x̃| < 2^e and the storage accuracy is within 2^{−f}. Then we can find a sign bit s and bits x_{e−1}, …, x_{−f} such that x̃ = (−1)^s Σ_{i=−f}^{e−1} x_i 2^i. To encode x̃, we first encode it as an integer by multiplying it by 2^f; hence we have x = x̃ · 2^f. Next, we set Q to be a prime number of at least e + f + κ + 1 bits, where κ is the security parameter. To encode x as a field element in ℤ_Q, we map x to the element x mod Q. Calculations can then be done using MPC schemes over ℤ_Q.
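As a sketch of this encoding (with illustrative parameters f = 16 and a Mersenne prime standing in for Q), signed fixed-point values map into ℤ_Q, addition works share-compatibly, and multiplication doubles the resolution, which is why truncation is needed afterwards:

```python
F = 16                    # fractional bits (illustrative)
Q = 2**61 - 1             # illustrative prime field modulus

def encode(x):
    """Map a signed fixed-point value into Z_Q."""
    return round(x * 2**F) % Q

def decode(v):
    """Lift a field element back to a signed value."""
    v = v if v <= Q // 2 else v - Q
    return v / 2**F

a, b = encode(3.25), encode(-1.5)
# addition preserves the 2^{-F} resolution
assert abs(decode((a + b) % Q) - 1.75) < 2**-F

# multiplication yields resolution 2^{-2F}; dividing by 2**F (truncation)
# would restore it. We check the 2F-scaled product in the clear:
prod = (a * b) % Q
prod_signed = prod if prod <= Q // 2 else prod - Q
assert abs(prod_signed / 2**(2 * F) - (3.25 * -1.5)) < 2**-F
```

In the MPC setting the same arithmetic happens on shares, and the division by 2^f is realized by the truncation protocols described next.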

Truncation and comparison: In order to maintain the same resolution of secretly shared values and to enable comparison, two truncation protocols are used in our work: probabilistic truncation (TruncPr) and deterministic truncation, as given in (Catrina and Saxena, 2010). Probabilistic truncation is efficient, as no bit-wise operation is involved and some "raw material" needed by the protocol can be prepared during the pre-processing phase. However, it introduces an error with probability depending on the size of the least significant bit (LSB) after truncation. In terms of the data representation of x, the most significant bit (MSB) of x determines whether x is greater than 0 or not, and it can be obtained by simply truncating all the less significant bits. Because the LSB after such a truncation is large, using TruncPr for this purpose would yield a non-negligible error; hence an alternative truncation protocol is required. Deterministic truncation is less efficient but truncates with zero error probability. Therefore, although probabilistic truncation may be used to avoid overflow during multiplication, deterministic truncation is needed for comparison. Following the notation given in (Catrina and De Hoogh, 2010), we use an adapted comparison protocol that outputs 1 if x ≥ 0 and 0 otherwise, in order to keep consistency with the ReLU function (see Section 2.1).

Arithmetic operations: Addition and public scaling on additive shares can be done without interaction, while maintaining the same resolution. Multiplication can be done using the Beaver triple method (see Section 3.2), with the resolution changing from 2^{−f} to 2^{−2f}, which means a probabilistic truncation is needed afterwards. Protocols for division with a public divisor and with a secretly shared divisor are also given in (Catrina and Saxena, 2010), offering reasonable accuracy and efficiency.

4. Proposed protocols

In this section, we describe the protocols that support Beaver triple conversion and neural network training. We assume that there are two distributed Paillier cryptosystems with different plaintext spaces ℤ_N and ℤ_{N′}. Based on the definition of Paillier and the notation introduced in Section 3.1, let N = pq and N′ = p′q′ for large primes p, q, p′, q′, such that N′ > N. All the multiplications involved in these protocols can be done following the multiplication step of the Paillier-based Beaver triple generation given in Section 3.2. In the rest of the paper, we write ⟨x⟩ to denote that x is secretly shared among all parties such that each party P_i holds ⟨x⟩_i and x = Σ_{i=1}^{n} ⟨x⟩_i. For simplicity of notation, when the context is clear, we abuse notation and write x_i instead of ⟨x⟩_i for the share of x owned by party P_i. Similarly, when the underlying space is clear, we omit the subscript indicating the space from ⟨x⟩.

4.1. Comparison over ℤ_N

The first supporting protocol that we introduce is the secure comparison protocol over ℤ_N. More specifically, this function receives a secretly shared value ⟨x⟩ and the bit length ℓ of x. It then outputs ⟨b⟩, where b = 1 if x ≥ 0 and b = 0 otherwise. Note that this function can be used to compare two secretly shared values ⟨x⟩ and ⟨y⟩ by computing the comparison on ⟨x − y⟩. This algorithm is an adapted version combining the protocol in (Catrina and De Hoogh, 2010) and the deterministic comparison protocol in (Catrina and Saxena, 2010), which is based on the following remark: for x of ℓ-bit length (refer to the data representation in Section 3.3), if x ≥ 0 then its most significant bit is 0, and if x < 0 then its most significant bit is 1. This protocol depends on a bitwise comparison subprotocol that cannot be directly applied in our case due to the difference in the underlying space; this subprotocol returns 1 if a < b and 0 otherwise, where a and b are secretly shared in their binary representations. In the scheme proposed in (55), it involves protocols to generate random bits and random invertible integers over the underlying finite field. In order to have a similar protocol over ℤ_N, we first describe the required subprotocols, namely random bit generation and random invertible integer generation modulo N.

Algorithm 5 describes our n-party protocol for random bit generation over ℤ_N. The standard protocol in the ℤ_Q setting, such as in (Damgård et al., 2006), is to generate a random value, open its square, and divide the initial value by a square root of the opened square. However, this no longer works when computing modulo N. Fortunately, we can rely on the existing Paillier cryptosystem and the Resharing protocol of SPDZ (see Section 3.2). Correctness is easily proved, and security depends on the Resharing protocol of SPDZ and on the IND-CPA security of the Paillier cryptosystem. Furthermore, in Step 2, each intermediate ciphertext must be calculated by the designated party and by no one else.

1:  Each party P_i generates a uniformly random bit b_i and computes Enc(b_i). Let c_1 = Enc(b_1), and let x_i denote the bit encrypted in c_i.
2:  for i = 2 to n do
3:      P_{i−1} sends c_{i−1} to P_i.
4:      P_i calculates c_i = Enc(b_i) · c_{i−1}^{1−2b_i} mod N², so that x_i = b_i ⊕ x_{i−1}.
5:  end for
6:  Return ⟨b⟩ = Reshare(c_n), where b = b_1 ⊕ ⋯ ⊕ b_n
Algorithm 5 Random bit generation over ℤ_N

Algorithm 6 describes our n-party protocol for generating a random integer together with its inverse over ℤ_N, which is an adapted version of the corresponding protocol in (55). Algorithm 6 is built on another secure protocol that generates a random share ⟨r⟩ where r is unknown to any of the parties. This can be done by letting each party P_i deal a sharing of a random value r_i, with r defined to be Σ_{i=1}^{n} r_i. Note that as long as there is one honest party, the resulting r can be proved to be uniformly distributed in ℤ_N. The correctness of the algorithm is straightforward, since u = rs is invertible if and only if r and s are also invertible. The security proof is similar to that in (55).

1:  All parties generate random shares ⟨r⟩ and ⟨s⟩, compute ⟨u⟩ = ⟨rs⟩, and open u.
2:  Repeat step 1 until u is invertible.
3:  Return ⟨r⟩ and ⟨r^{−1}⟩ = u^{−1} · ⟨s⟩
Algorithm 6 Random integer with inverse generation over ℤ_N
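The inversion trick behind Algorithm 6 — opening the masked product u = rs, which is invertible exactly when r and s are, and then computing r^{−1} = u^{−1}·s — can be checked in the clear with a toy modulus:

```python
import math
import random

N = 101 * 103   # toy RSA-type modulus

# each party would contribute additive shares of r and s; here we
# model only the opened masked product u = r*s mod N
while True:
    r, s = random.randrange(N), random.randrange(N)
    u = r * s % N
    if math.gcd(u, N) == 1:     # u invertible <=> both r and s invertible
        break

# u reveals nothing about r beyond invertibility, since s masks it
r_inv = s * pow(u, -1, N) % N   # r^{-1} = u^{-1} * s
assert r * r_inv % N == 1
```

Note that u is safe to open: it is the product of r with an independent uniform invertible mask, so it leaks nothing about r itself.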

Now we are ready to present our n-party protocol for comparison over ℤ_N, given in Algorithm 7. As discussed above, it adapts the protocol of (Catrina and De Hoogh, 2010) and the deterministic comparison protocol of (Catrina and Saxena, 2010) by replacing the field-based subprotocols: for random bit generation over ℤ_N we propose Algorithm 5, and for random integer generation over ℤ_N we can simply follow the protocol given in (Omori and Kanaoka, 2017). Correctness and security are proven in (Catrina and De Hoogh, 2010).

1:  For each , all parties calculate in parallel, and thus obtain .
2:  All parties calculate .
3:  All parties publish Output, and then calculate .
4:  All parties call .
5:  All parties compute .
6:  All parties compute mod .
Algorithm 7 Comparison over ℤ_N:
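The remark about ℓ-bit representations can be illustrated with a small two's-complement-style encoding. In the sketch below the modulus 2^ℓ is used purely for readability, whereas the paper works over ℤ_N; all names and sizes are our choices.

```python
ELL = 8            # hypothetical bit-length of the data representation
RING = 2 ** ELL    # illustrative modulus; the paper's protocols work over Z_N

def encode(v):
    """Encode a (possibly negative) integer as a ring element."""
    return v % RING

def is_negative(x):
    # The remark: ring elements in [0, 2^(l-1)) encode non-negative values,
    # while elements in [2^(l-1), 2^l) encode negative values.
    return x >= RING // 2

def less_than(x, y):
    # x < y  iff  x - y (computed in the ring) encodes a negative value.
    return is_negative((x - y) % RING)

assert less_than(encode(-3), encode(5))
assert not less_than(encode(5), encode(-3))
```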

4.2. Wrap, modulo reduction, share conversion

In this section, we discuss the secure conversion protocol that will help us convert the values generated during the offline phase (modulo N, for some RSA modulus N) to equivalent values compatible with the online phase (modulo Q, for a prime Q). More specifically, given an additive share of a secret value modulo N, we want to calculate an additive share of the same secret value modulo Q. First, for simplicity, we discuss the representation of the secretly shared values. Initially, the secret value and its shares are elements of ℤ_N. For simplicity of the argument in this section, we transform all these values into non-negative representatives in [0, N) via the congruence operation. Note that this does not change the correctness of any sharing, and the transformation between the two formats can be done trivially.

Note that if x₁, …, x_n ∈ [0, N) form an additive sharing of x modulo N, then there exists an integer w with 0 ≤ w ≤ n − 1 such that

x₁ + ⋯ + x_n = x + wN.    (1)

Hence, if we want to consider the equation modulo Q, we will have

x ≡ x₁ + ⋯ + x_n − wN (mod Q).    (2)

So in order to calculate the sharing of x modulo Q from the sharing modulo N, we need to calculate the value of w, which can be rewritten as w = (x₁ + ⋯ + x_n − x)/N.
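Since the shares x₁, …, x_n each lie in [0, N) and sum to x plus a multiple wN of the modulus, the wrap count w is an integer between 0 and n − 1, and subtracting wN recovers x modulo any other modulus Q. A quick plaintext check of this relation (parameters chosen arbitrarily):

```python
import random

def check_wrap_identity(N, Q, n, trials=1000, rng=random.Random(1)):
    """Numerically verify: sum of shares = x + w*N with 0 <= w <= n-1,
    and hence x = (sum of shares - w*N) mod Q for any modulus Q."""
    for _ in range(trials):
        x = rng.randrange(N)
        # build an additive sharing of x modulo N with shares in [0, N)
        shares = [rng.randrange(N) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % N)
        w = (sum(shares) - x) // N        # the wrap count
        assert 0 <= w <= n - 1
        assert (sum(shares) - w * N) % Q == x % Q

check_wrap_identity(N=3233, Q=101, n=5)
```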

Now we discuss how we can calculate the value of w. Note that Equation (1) will not yield the value of w if it is computed modulo N, since the term wN vanishes modulo N. Intuitively, if we instead consider the equation modulo M for some M with M > nN, the equation no longer collapses to a congruence that hides w, and hence we can use it to calculate w. Once we have the equation modulo M, we can find the maximum value of w such that wN ≤ x₁ + ⋯ + x_n. It is easy to see that w ≤ n − 1. Now, since the equation is modulo M, the calculation will indeed give us w. We call this procedure LiftWrap, which is only applicable if M > nN. Algorithm 8 provides the complete protocol. Note that the security of the protocol is guaranteed by the security of the underlying subprotocols and the fact that no intermediate values are revealed.

1:  Each party computes mod .
2:  For each , all parties compute , where .
3:  Return .
Algorithm 8 LiftWrap:

The next step is to lift the sharing itself to the larger modulus. In other words, we need a secure conversion protocol to convert a secretly shared value modulo N into a sharing of the same value modulo M, where M > nN. To accomplish this, first observe that, given Equation (1) and a sharing of the wrap count w modulo M, each party can set its new share to x_i − w_i·N mod M; the new shares then sum to x₁ + ⋯ + x_n − wN = x modulo M. In other words, for any such M, the parties can calculate the sharing of x modulo M locally once w is shared. We call this procedure Lift, which is only applicable when LiftWrap is, i.e., if M > nN. Algorithm 9 provides the complete protocol. The security is guaranteed based on the security guarantee of the LiftWrap protocol.

1:  Parties jointly compute the sharing of the wrap count w modulo M via LiftWrap.
2:  Each party, having x_i and w_i (the shares of x and w, respectively), calculates y_i = x_i − w_i·N mod M.
3:  Return (y₁, …, y_n).
Algorithm 9 Lift shares in ℤ_N to ℤ_M:
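A plaintext sketch of one natural way such a lifting works, assuming a sharing of the wrap count w is available modulo the larger modulus M; the names and the exact local formula are our assumptions.

```python
import random

def lift(shares_N, w_shares_M, N, M):
    """Lift an additive sharing of x from Z_N to Z_M, given an additive
    sharing of the wrap count w over Z_M. Since sum(x_i) = x + w*N over
    the integers, the local shares y_i = x_i - w_i*N (mod M) sum to x."""
    return [(xi - wi * N) % M for xi, wi in zip(shares_N, w_shares_M)]

rng = random.Random(2)
N, M, n = 97, 10**6, 4                 # illustrative sizes with M > n*N
x = rng.randrange(N)
shares = [rng.randrange(N) for _ in range(n - 1)]
shares.append((x - sum(shares)) % N)
w = (sum(shares) - x) // N             # wrap count, 0 <= w <= n-1
# share w additively over Z_M (done securely by LiftWrap in the paper)
w_shares = [rng.randrange(M) for _ in range(n - 1)]
w_shares.append((w - sum(w_shares)) % M)
lifted = lift(shares, w_shares, N, M)
assert sum(lifted) % M == x
```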

Now we are ready to discuss the last subprotocol needed for the Wrap function. Intuitively, assuming the existence of the random-value generation protocols over both rings, we can generate a random value r that is secretly shared twice, once over ℤ_M and once over ℤ_Q. Having the sharings of x and r over ℤ_M, the parties can calculate and reveal the masked value x + r. A simple algebraic manipulation, subtracting the sharing of r over ℤ_Q from this public value, gives us a valid secret sharing of x over ℤ_Q. Since the only value that is revealed is masked using a fresh secretly shared random value, and given the security guarantees of the other subprotocols being used, security is guaranteed under the size restriction discussed below. Algorithm 10 provides the complete protocol.

1:  Parties jointly compute a double sharing of a fresh random value r, over ℤ_M and over ℤ_Q.
2:  Parties jointly compute the sharing of the masked value x + r over ℤ_M.
3:  Parties locally compute their shares of the masked value, publish them, and recover it.
4:  Return the public masked value minus the sharing of r over ℤ_Q.
Algorithm 10 Convert shares in ℤ_M to ℤ_Q:
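A common way to realize such a mask-and-open conversion, sketched in plaintext; the names are hypothetical, and the wrap-around case that the protocol's restriction guards against is avoided here by keeping the masked sum below the source modulus.

```python
import random

def convert(x_shares_M, r_shares_M, r_shares_Q, M, Q):
    """Mask-and-open conversion sketch: open c = x + r over Z_M, then
    c minus the Z_Q sharing of r is a sharing of x over Z_Q, provided
    x + r does not wrap modulo M."""
    c = sum((xi + ri) % M for xi, ri in zip(x_shares_M, r_shares_M)) % M
    out = [(-ri) % Q for ri in r_shares_Q]
    out[0] = (out[0] + c) % Q             # one party adds the public value c
    return out

rng = random.Random(3)
M, Q, n = 10**6, 101, 3

def share(v, mod):
    s = [rng.randrange(mod) for _ in range(n - 1)]
    return s + [(v - sum(s)) % mod]

x = rng.randrange(1000)                    # x kept small so that x + r < M
r = rng.randrange(M - 1000)                # avoids the wrap-around case
xs, rM, rQ = share(x, M), share(r, M), share(r % Q, Q)
assert sum(convert(xs, rM, rQ, M, Q)) % Q == x % Q
```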

Note that we can apply Convert to obtain the sharing of w modulo Q from its sharing modulo M. We note that in this use, Convert is secure since the required size condition is satisfied. This also guarantees the security of the Wrap protocol. Algorithm 11 provides the complete protocol.

1:  Parties jointly compute the sharing of the wrap count w modulo M via LiftWrap.
2:  Parties jointly compute its conversion to a sharing modulo Q via Convert, and return it.
Algorithm 11 Wrap over ℤ_N:

Note that the conversion protocols Lift and Convert are only securely applicable in very restrictive cases. More specifically, Lift can only convert a sharing modulo N to a sharing modulo M where M > nN and the underlying subprotocols are well defined modulo N; in other words, N must be either an RSA modulus or a prime. On the other hand, Convert can only convert a sharing modulo M to a sharing modulo Q, with M and Q subject to the same requirements as in Lift. Furthermore, Convert is only secure when the value being converted is sufficiently small relative to the source modulus.

Recall that our main objective in this part is a secure conversion protocol that converts a secret sharing modulo N to a sharing of the same value modulo Q, where N is an RSA modulus and Q is a prime. Since this protocol is needed to convert secret sharings of random values and Beaver triples generated modulo N, and since we want to be able to obtain all possible random values modulo Q, we need N > Q. Note that if we used Convert directly for this purpose, the value of N would need to be much bigger than Q because of the size restriction just discussed. In the following, we propose another conversion protocol that accomplishes this goal securely under a much milder requirement on the relative sizes of N and Q.

Following Equation (2), and writing its right-hand side in shared form, the first terms x_i mod Q can be calculated locally by each party. Now, in order for the conversion to be completed, we need the last term, wN mod Q. Recall that the only information we have about w is derived from the sharing modulo N. To be able to calculate wN mod Q, we first need a sharing of w modulo Q. Note that we can use a variant of Convert to achieve this. However, this can only be achieved securely if the value being converted is small relative to the source modulus, so in our discussion we will assume that this condition holds for w. Now suppose that we have the sharing of w modulo M and we would like to calculate its sharing modulo Q. Since we do not have the required size guarantee with respect to N, we cannot apply Convert over ℤ_N directly. Instead, we will again use the space ℤ_M for this purpose. More specifically, after the calculation of LiftWrap to obtain the sharing of w modulo M, we can directly call Convert from ℤ_M to obtain the sharing of w modulo Q. Once this sharing is obtained, we can compute the shares of wN mod Q, completing the calculation of Equation (2). It is easy to see that, since all the subprotocols used here are secure, the protocol that calculates Equation (2) as just discussed is secure as well. The complete share conversion protocol can be found in Algorithm 12.

1:  Parties jointly compute the sharing of the wrap count w modulo M via LiftWrap.
2:  Parties jointly compute the sharing of w modulo Q via Convert.
3:  for each party do
4:      The party possesses x_i and the i-th shares of w modulo M and modulo Q, respectively
5:      The party locally computes y_i = x_i − w_i·N mod Q
6:  end for
7:  Return (y₁, …, y_n)
Algorithm 12 Share conversion:
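Equation (2) in shared form can be checked in plaintext as follows; the names are hypothetical, and obtaining the sharing of w modulo Q securely is the job of the Wrap and Convert subprotocols in the paper.

```python
import random

def share_conv(x_shares_N, w_shares_Q, N, Q):
    """Equation (2) in share form: each party locally outputs its share
    x_i reduced modulo Q, minus its Z_Q share of w times N."""
    return [(xi - wi * N) % Q for xi, wi in zip(x_shares_N, w_shares_Q)]

rng = random.Random(4)
N, Q, n = 3233, 101, 4
x = rng.randrange(N)
xs = [rng.randrange(N) for _ in range(n - 1)]
xs.append((x - sum(xs)) % N)
w = (sum(xs) - x) // N                    # wrap count from Equation (1)
wQ = [rng.randrange(Q) for _ in range(n - 1)]
wQ.append((w - sum(wQ)) % Q)              # sharing of w over Z_Q
assert sum(share_conv(xs, wQ, N, Q)) % Q == x % Q
```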

Note that Algorithm 12 can be used for any source modulus N and target modulus Q securely as long as they satisfy the following requirements: (i) N > Q, (ii) we have a secure LiftWrap protocol for sharings modulo N, and (iii) we have secure Lift and Convert protocols for the intermediate modulus M. Due to this observation, the share conversion protocol is applicable as long as N and Q are either RSA moduli or prime numbers.

4.3. Beaver triple conversion

Share conversion is not trivial in terms of MPC, as shown in (Benhamouda et al., 2018), let alone Beaver triple conversion. Inspired by (Cramer et al., 2005), several share conversion protocols have been developed. To convert a sharing over one ring to a sharing over another, we can rely on the method of (Algesheimer et al., 2002) and the mixed-protocol framework given in ABY (Demmler et al., 2015), which generalizes the conversion protocols between different sharing schemes, including arithmetic sharing, Boolean sharing, and Yao's garbled circuits. However, for additive sharings of integers, the conversion protocol involves bit decomposition and the conversion of bit sharings from one field to another (Damgård and Thorbek, 2008). These approaches are complicated and expensive. In our work, we do not use these protocols to construct share conversion; instead, we rely on two Paillier cryptosystems, which enable us to convert Beaver triples from sharings modulo N to sharings modulo Q.

First, we observe that given a triple (a, b, c) shared modulo N with ab ≡ c (mod N), we have, over the integers, ab = c + tN for some integer t such that 0 ≤ t < N. Similar to the discussion of Wrap in the previous section, in order to get the value of t, we need to lift the equation modulo M for some sufficiently large M. Using the algorithms described above, we can obtain a sharing of t modulo M. Note that it is not secure to use Convert to obtain the sharing of t modulo Q from this, even if M is an RSA modulus or a prime: t can be as large as N, so the smallness condition required by Convert cannot hold. Hence, we will need to use the share conversion protocol of Algorithm 12 to achieve this. In order to make this possible, we require the M we choose to be either an RSA modulus or a prime. Once we have the sharing of t modulo Q, it is easy to see that correcting c by tN yields a valid Beaver triple modulo Q. This protocol is secure due to the security of all the subprotocols involved. The complete protocol can be found in Algorithm 13.
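The reduction idea behind the triple conversion can be checked in plaintext: since a, b, c ∈ [0, N) and ab ≡ c (mod N), the integer t = (ab − c)/N satisfies 0 ≤ t < N, and correcting c by tN gives a triple that holds modulo any Q. The function name below is ours.

```python
import random

def triple_mod_q(a, b, c, t, N, Q):
    """Given a Beaver triple modulo N (so a*b = c + t*N over the integers),
    reduce a and b modulo Q and correct c by t*N to get a triple modulo Q."""
    return a % Q, b % Q, (c + t * N) % Q

rng = random.Random(5)
N, Q = 3233, 101
a, b = rng.randrange(N), rng.randrange(N)
c = (a * b) % N
t = (a * b - c) // N                      # 0 <= t < N since a, b, c in [0, N)
assert 0 <= t < N
aq, bq, cq = triple_mod_q(a, b, c, t, N, Q)
assert (aq * bq) % Q == cq
```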

1:  Parties agree on such that and is either an RSA modulus or a prime. Parties also agree on such that
2:  Parties jointly compute