Two-Phase Multi-Party Computation Enabled Privacy-Preserving Federated Learning

05/25/2020, by Renuga Kanagavelu et al.

Countries across the globe have been pushing strict regulations on the protection of personal and private data. The traditional centralized machine learning method, where data are collected from end-users or IoT devices so that insights can be discovered behind real-world data, may not be feasible for many data-driven industry applications in light of such regulations. A new machine learning method, coined Federated Learning (FL) by Google, enables multiple participants to train a machine learning model collectively without directly exchanging data. However, recent studies have shown that it is still possible to exploit the shared models to extract personal or confidential data. In this paper, we propose to adopt Multi-Party Computation (MPC) to achieve privacy-preserving model aggregation for FL. MPC-enabled model aggregation in a peer-to-peer manner incurs high communication overhead and offers low scalability. To address this problem, we propose a two-phase mechanism that 1) elects a small committee and 2) provides the MPC-enabled model aggregation service to a larger number of participants through this committee. The MPC-enabled FL framework has been integrated into an IoT platform for smart manufacturing. It enables a set of companies to train high-quality models collectively by leveraging their complementary data sets on their own premises, without compromising privacy, model accuracy vis-a-vis traditional machine learning methods, or execution efficiency in terms of communication cost and execution time.

I Introduction

The astounding growth of data being collected and analyzed has led to rapid advances in data-driven technologies and applications [1]. All collected data are valuable, as they contain insights relevant to various application domains. With privacy laws over the collection of sensitive or personal data being enforced by many countries globally, there are repercussions for how and what data are collected. The traditional way of simply collecting and processing data may no longer apply. If systems were to collect only non-sensitive or non-personal data, many important insights would be missed. Even where there is consent to collect sensitive or personal data, sharing such data without authorization creates the threat of data breaches [2]. Collected data could likewise be mishandled, thereby violating the fundamental rights to privacy of both individuals and companies [3][4]. The invention of advanced technologies to handle sensitive and personal information while maximising its value has become one of the biggest growth areas for the data analytics community in recent years. An aspect that makes this growth even more important is that many scenarios require combining sensitive information from several sources to produce very useful insights. For example, hospitals, medical researchers and drug companies can benefit from jointly measuring the incidence of an illness without revealing private patient data; banks and financial regulators can assess systemic risk without revealing their private portfolios. However, in practice, data sharing across collaborators has various limitations, including data ownership, legal compliance and technical constraints [5]. As a result, the massive volumes of data collected today remain fragmented data islands at individual organizations.

Fig. 1: Federated Learning Architecture

Federated Learning (FL) [6], introduced by Google's AI research team, supports decentralized collaborative machine learning over a number of devices or companies. The training data stay on each client device, and only the locally trained model parameters are transferred to a central coordinator, which aggregates these models into a global federated model, as shown in Figure 1. The federated model is then sent back to the clients for iterative training and improvement. Google claimed that privacy is protected by keeping raw data on client devices and on companies' on-premise IT infrastructure. In addition, federated learning can make use of participants' computation resources for the time-consuming model training.
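As a minimal illustration of this loop, the sketch below (our own simplified NumPy code; local_train is a placeholder rather than a real training routine) shows clients updating a shared parameter vector that a coordinator averages once per global epoch:

import numpy as np

def local_train(weights, data):
    # Placeholder for real local training; perturbs the weights to
    # stand in for gradient updates computed on the client's own data.
    return weights + 0.01 * np.random.randn(*weights.shape)

def federated_round(global_weights, client_datasets):
    # Each client trains locally; only model weights leave the client.
    local_models = [local_train(global_weights.copy(), d)
                    for d in client_datasets]
    # The coordinator aggregates by simple averaging (FedAvg-style).
    return np.mean(local_models, axis=0)

weights = np.zeros(242)          # parameter vector of a small model
clients = [None] * 4             # four parties; data sets stay local
for epoch in range(15):
    weights = federated_round(weights, clients)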

Although the FL technique appears to ensure data remain on-premise, researchers have shown that data (sensitive or otherwise) can still be extracted from it [7]. Published methods demonstrate that machine learning model parameters can be reverse engineered to extract the data sets residing on the devices or in the company's IT infrastructure [8], [9]. It is generally concluded that, in order to ensure the confidentiality of all information and prevent indirect data leakage, the model's parameters need to be securely encrypted [10] against any possibility of reverse engineering and third-party attacks on federated learning model training. For this reason, privacy-preserving techniques such as secure Multi-Party Computation (MPC) [11], Homomorphic Encryption (HE) [12], [13] and Differential Privacy (DP) [17] are typical technology candidates to work together with FL for preserving the data confidentiality of model aggregation [9]. Google apparently recognized this flaw and subsequently proposed a practical MPC-based secure aggregation for FL [18]; SecureBoost [19] is implemented as a vertical tree-based FL system using homomorphic encryption to protect the gradients; and OpenMined [20] combined MPC and DP functionalities in its FL framework. Secure MPC is a class of cryptographic techniques that allow a number of parties to compute a joint function over their sensitive data sets. Only the output of the joint function is disclosed, without revealing participants' private inputs. Secure MPC performance [14], [15] has improved significantly, due to the fast development of MPC protocols, software implementations and the underlying computation infrastructure. Today, secure MPC is known to be thousands of times faster than fully homomorphic encryption (FHE) implementations in typical applications. Another method to ensure data security and confidentiality is Differential Privacy (DP). DP protects data privacy by adding random noise to the computation performed by a third party, but it is less suitable for federated learning, as the noise may affect the accuracy of the federated model. For the above reasons, this paper chooses MPC to support privacy-preserving model aggregation for federated learning.

However, there is a downside to employing MPC to preserve the data confidentiality of FL. In general, MPC protocols require all parties to generate and exchange secret shares of their private data with all other parties. This basic requirement inevitably results in high communication overhead, which grows quadratically with the number of parties agreeing to work together, regardless of whether they are trusted or untrusted. This motivates the technique presented in this paper. To reduce the communication cost and improve scalability, this paper proposes a two-phase MPC-enabled FL framework. Instead of requiring every member to generate and exchange secret shares of data with every other member, it elects a subset of the FL members as the model aggregation committee, and the elected committee members then provide an MPC service to aggregate the local models of all FL parties. The two-phase MPC introduces a hierarchical structure in which fewer secret shares need to be exchanged for privacy-preserving model aggregation. The benefit becomes more pronounced when the membership list is large and the number of committee members is only a fraction of the number of FL parties. Both the Peer-to-Peer and the Two-Phase MPC-enabled FL frameworks are implemented on PyTorch, Facebook's distributed machine learning and deep learning platform. A parallel mechanism is developed to enable MPC-based model aggregation on entire model tensors with large numbers of parameters. The FL framework is further integrated with our in-house Industrial IoT (IIoT) platform, which can be deployed on-premise or on a public Cloud. A smart manufacturing use case with real-world sensor data sets and machine failure labels is used in our experiments to demonstrate the effectiveness of the proposed FL framework in terms of model accuracy and execution efficiency.

II Related Work

Federated Learning (FL) is a broad subject and can be further categorized into horizontal FL and vertical FL. In horizontal FL, all participating parties (e.g., regional hospitals, or manufacturing companies using the same type of machine) have the entire feature set and labels available to train their local models, and the locally computed gradients from different participants are sent to the server/aggregator to train a global model. In vertical FL, the participating parties (e.g., banks, insurance and e-commerce companies within the same city) collect data with different feature spaces, and only one party has access to the label. The parties without the label are unable to train a model on their own, due to the lack of information. As a result, an additional layer of complexity is added to securely align the sample spaces, and the training process requires exchanging partial model updates. In this paper, we focus on horizontal FL only.

Currently, only a few production-grade alternatives that challenge the centralized machine learning paradigm are available. Google started work on federated learning with TensorFlow Federated (TFF) [22] in March 2019. CrypTen [23] is an MPC-based privacy-preserving machine learning framework built on PyTorch by Facebook. OpenMined [20] is an open-source community focused on building technology that combines Deep Learning, Federated Learning, Homomorphic Encryption and Blockchain over decentralized data. FATE (Federated AI Technology Enabler) [24] is another open-source project, initiated by WeBank's AI Group, that provides a secure computing framework for building the federated AI ecosystem. ByteLake [25], an AI consultancy based in Poland, recently released a proof of concept for predictive maintenance in the manufacturing industry in concert with Lenovo. Different from existing general FL frameworks, our federated learning framework supports privacy-preserving, value-oriented federated learning over decentralized data, integrated into an IIoT platform with visualization and smart manufacturing domain knowledge. Although MPC has been widely applied for privacy-preserving FL, limited work has been published on enhancing the performance and scalability of MPC-enabled FL frameworks. OpenMined [20] provides a very basic evaluation of MPC overhead for model aggregation. Google provides a complexity analysis and communication overhead evaluation of its practical MPC protocols for FL. In contrast, this paper proposes hierarchical and parallel MPC operations for efficient model aggregation.

III MPC-enabled Federated Learning Framework

In this section, the architecture and implementation details are presented for both the traditional Peer-to-Peer and the Two-Phase MPC-enabled federated learning frameworks. In addition, a theoretical analysis of the number and size of the messages exchanged is provided. The variables used throughout are defined in Table I.

Category                     | Symbol     | Type   | Description
Federated Learning Framework | n          | Int    | Number of parties
                             | t          | Int    | Number of iterations in local model training
                             | e          | Int    | Number of epochs in global FL model training
Model Aggregation Committee  | m          | Int    | Number of elected committee members
                             | b          | Int    | Batch size of each round of committee election
                             | L          | Array  | List of committee members
Neural Network Model         | T(i, j, k) | Tensor | Parameters/weights of the local model of party i at global epoch j and local iteration k
                             | A(j)       | Tensor | Parameters/weights of the aggregated model of epoch j
                             | s          | Int    | Size of the parameters/weights of the local/aggregated models
TABLE I: Variables in the Federated Learning Framework

III-A Multi-Party Computation

Secure MPC is a class of cryptographic techniques that allow for confidential computation over sensitive data. The two dominant MPC techniques [26] today are garbled circuits and secret sharing. A garbled circuit is a cryptographic protocol based on Boolean circuits; it was originally proposed to solve the well-known millionaires' problem, in which two millionaires want to know who is richer without revealing their actual wealth.

Fig. 2: Secure Multi-Party Computation

Recently, secret sharing based MPC protocols have been more commonly used in production systems [27]. A typical secret sharing MPC protocol is shown in Figure 2: each piece of sensitive data is split into "secret shares", which in combination yield the original data. The parties interact with each other to compute a confidential function by exchanging secret shares according to the MPC protocol. As long as a majority of the parties honestly follow the protocol, no party learns anything beyond the final output of the computation. Since each computing party works on one piece of secret share, an attacker could reconstruct private data from the shares only if a threshold number of computing parties were compromised. Consequently, MPC can be viewed as a cryptographic method for providing the functionality of a trusted party (one that accepts private inputs, computes a function and returns the result to the stakeholders) without the need for mutual trust among participants. Both the Additive [28] and Shamir [34] secret sharing MPC protocols are used for privacy-preserving model aggregation in our FL framework. For simplicity, this study focuses on Neural Network (NN) machine learning models only, and the model aggregation calculates the averaged weights of the local NN models trained by all parties. Since the number of FL parties is known in advance, only the MPC addition operation is needed.

1: Function Additive.add(V, n)
2: # Party i holds private value V(i); the result R is the sum of all V(i)
3: for i = 1 to n do
4:     # All parties do this loop concurrently
5:     for j = 1 to n - 1 do
6:         s(i, j) = random()
7:     end for
8:     s(i, n) = V(i) - (s(i, 1) + ... + s(i, n-1))
9:     for j = 1 to n, j != i do
10:        Send s(i, j) to the j-th party
11:        Receive s(j, i) from the j-th party
12:    end for
13:    p(i) = s(1, i) + s(2, i) + ... + s(n, i)
14: end for
15: # Each party i then exchanges its partial sum, again concurrently
16: for j = 1 to n, j != i do
17:     Send p(i) to the j-th party
18:     Receive p(j) from the j-th party
19: end for
20: R = p(1) + p(2) + ... + p(n)
21: return R
22: EndFunction
Algorithm 1 Addition operation using Additive secret sharing MPC protocol

The addition operation (i.e., the sum of the private values of all parties) using the Additive secret sharing MPC protocol is shown in Algorithm 1. Each party generates n secret shares for its private value V(i) using random numbers; the last share is calculated as s(i, n) = V(i) - (s(i, 1) + ... + s(i, n-1)). The individually meaningless secret shares are exchanged with the other parties, and each party then calculates the sum of one share of every party's private value. The partial sum p(i) calculated at each party is further exchanged and accumulated to obtain the addition result R. By adopting the Additive MPC protocol, all parties conduct confidential computation on meaningless secret shares. The introduced randomness is eliminated by the two rounds of message exchange and addition, and it is straightforward to prove that R = V(1) + V(2) + ... + V(n). The confidential computation discloses only the final output R; it is impossible to deduce a party's private value if the parties are honest-but-curious and do not collude.
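The protocol can be prototyped in a few lines of Python. The sketch below is our own simulation of all parties in a single process, with arithmetic modulo a public constant Q; the field size and names are illustrative choices, not part of the protocol:

import random

Q = 2**31 - 1   # public modulus; all shares live in [0, Q)

def make_shares(value, n):
    # n-1 random shares plus one correcting share that makes the
    # shares sum to the private value modulo Q.
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def mpc_sum(private_values):
    n = len(private_values)
    shares = [make_shares(v, n) for v in private_values]
    # Party j locally sums the shares it received (column j) ...
    partial = [sum(shares[i][j] for i in range(n)) % Q for j in range(n)]
    # ... and a second exchange of partial sums cancels the randomness.
    return sum(partial) % Q

print(mpc_sum([10, 20, 12]))   # -> 42; no single share reveals an input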

On the contrary, the Shamir secret sharing protocol uses polynomial interpolation and is secure over a finite field. In order to generate n secret shares, it uses a random degree-t polynomial q(x) = a(0) + a(1)x + ... + a(t)x^t, where a(0) is the private value and a(1), ..., a(t) are random numbers. Given any t+1 shares, the polynomial can be reconstructed using Lagrange interpolation, and the secret can then be trivially recovered as a(0) = q(0). We choose t = n-1; that is, all parties generate n secret shares for each private value. The secret shares are exchanged with the other parties to perform the addition operation on their private data without compromising privacy. The security level and computation cost of the Shamir secret sharing protocol are much higher than those of the Additive protocol; however, the number and size of messages exchanged remain the same.
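A corresponding sketch of Shamir sharing over a prime field, again our own single-process simulation with illustrative names, reconstructs the secret by Lagrange interpolation at x = 0:

import random

P = 2**31 - 1   # prime field modulus

def shamir_shares(secret, n, t):
    # Random degree-t polynomial with q(0) = secret; share i is (i, q(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers q(0).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = shamir_shares(42, n=4, t=3)   # t = n - 1, as chosen above
print(reconstruct(shares))             # -> 42, given all four shares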

III-B Peer-to-Peer MPC-enabled Federated Learning

We consider the standard settings for federated learning, where participants concurrently train their local NN models on their private data sets. The objective is to find a tensor of parameters/weights of the NN model that minimizes the empirical loss function. To enhance privacy without sacrificing accuracy, this tensor is securely aggregated using the MPC technique. Once the convergence criterion is met, the training is complete. As illustrated in Section III-A, MPC introduces overhead in both computation and communication; hence, performance enhancement is essential to make privacy-preserving FL practicable. Machine learning models are becoming more and more complex in order to solve large-scale and difficult problems, and sequential MPC operations on the individual parameters/weights of a model would introduce tremendous overhead. The efficiency of parallel MPC operations has been verified in [35]: the computation and communication costs can be reduced significantly by operating in bulk. Figure 3 illustrates the procedure of model aggregation by calculating the average value of the tensors. The tensors of the individual local models are privacy-sensitive. Each is split into multiple tensors as secret shares by adding randomness. After exchanging secret shares, each participant holds one share of each tensor. Participants then conduct a local aggregation of the secret shares, followed by a global aggregation to cancel out the randomness and obtain the average value of the tensors (i.e., local models) of all participants. In this way, participants obtain the federated model for the global training epoch while keeping their data and local models private.

Fig. 3: MPC-enabled Privacy-Preserving Model Aggregation
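A sketch of this whole-tensor share aggregation in PyTorch is shown below; it is our own simulation with integer-encoded tensors so that the modular arithmetic is exact (a real deployment would first fixed-point-encode the float weights):

import torch

Q = 2**31 - 1

def tensor_shares(t, n):
    # Split a whole model tensor into n additive shares in one shot,
    # rather than secret-sharing each parameter individually.
    shares = [torch.randint(0, Q, t.shape) for _ in range(n - 1)]
    shares.append((t - sum(shares)) % Q)
    return shares

# Three parties, each holding an integer-encoded local model tensor.
local_models = [torch.randint(0, 1000, (242,)) for _ in range(3)]
n = len(local_models)
all_shares = [tensor_shares(model, n) for model in local_models]
# Party j locally sums the one share of every tensor it received ...
partials = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
# ... and a final exchange cancels the randomness, yielding the sum,
# which is divided by the pre-known n to obtain the average.
aggregate = sum(partials) % Q
assert torch.equal(aggregate, sum(local_models))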

The number of messages, denoted as N_P2P, exchanged in the model aggregation over e global epochs can be calculated as follows:

N_P2P = 2en(n-1)  (1)

The size of messages, denoted as S_P2P, exchanged in the model aggregation is:

S_P2P = 2en(n-1)s  (2)

Since the computational complexity of the above two equations is O(n^2), the traditional Peer-to-Peer MPC-enabled FL framework has low scalability [29]. On the other hand, in practice, as more and more companies join the federated learning ecosystem, the benefit achieved becomes more and more significant.

III-C Two-Phase MPC-enabled Federated Learning

In order to improve scalability, we propose a two-phase MPC-enabled FL framework, which avoids exchanging large numbers of huge tensors of model parameters/weights (in the form of secret shares) across all FL parties. As shown in Figure 4, Phase I uses peer-to-peer MPC to elect a subset of the FL parties as the model aggregation committee members, and Phase II uses the MPC service provided by the committee members to aggregate the local models of all FL parties.

Fig. 4: Two-phase MPC-enabled federated learning framework
1: Function Committee.election(n, m, b)
2: for i = 1 to n do
3:     # All FL parties do this loop concurrently
4:     while |L| < m do
5:         for j = 1 to b do
6:             v(i, j) = random()   # party i's batch of b initial votes
7:         end for
8:         for j = 1 to n do
9:             Generate secret share sh(i, j) for the vote batch v(i)
10:        end for
11:        for j = 1 to n, j != i do
12:            Send sh(i, j) to the j-th party
13:            Receive sh(j, i) from the j-th party
14:        end for
15:        p(i) = sh(1, i) + ... + sh(n, i)
16:        for j = 1 to n, j != i do
17:            Send p(i) to the j-th party
18:            Receive p(j) from the j-th party
19:        end for
20:        V = p(1) + ... + p(n)   # aggregated votes, revealed to all
21:        E = V mod n              # candidate party indices
22:        for j = 1 to b do
23:            if |L| < m and E(j) not in L then
24:                L = L + {E(j)}
25:            end if
26:        end for
27:    end while
28: end for
29: return L
30: EndFunction
Algorithm 2 Phase I: Committee Election

The pseudo code of the committee election function is shown in Algorithm 2. It elects m out of the n FL parties to establish a model aggregation committee. After all parties execute the function concurrently, they obtain the list of committee members (i.e., L). All parties generate a batch of random numbers as their initial votes. The votes of all parties are added together through the MPC protocol to obtain the aggregated votes, without revealing any party's initial votes. The cost of Phase I is marginal, as it is executed only once per FL training run and each message exchanged is a small array of committee votes (in the form of secret shares). The number of messages, denoted as N_PhaseI, exchanged in Phase I is:

N_PhaseI = 2n(n-1)  (3)

The size of messages, denoted as S_PhaseI, exchanged in Phase I is:

S_PhaseI = 2n(n-1)b  (4)
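To make Phase I concrete, the following sketch simulates the election in plain Python. Since the exact mapping from aggregated votes to elected parties is not spelled out above, the rule used here (vote modulo n) is an assumption for illustration, and the MPC summation is stubbed with a plain sum:

import random

def elect_committee(n, m, b, mpc_sum):
    # Each party proposes b random votes; only the aggregated votes
    # are revealed, never any individual party's batch.
    batches = [[random.randrange(2**31) for _ in range(b)] for _ in range(n)]
    totals = [mpc_sum([batches[i][j] for i in range(n)]) for j in range(b)]
    committee = []
    for vote in totals:              # assumed rule: vote mod n picks a party
        candidate = vote % n
        if candidate not in committee:
            committee.append(candidate)
        if len(committee) == m:
            break
    return committee                 # rerun with a fresh batch if short

print(elect_committee(n=16, m=3, b=10, mpc_sum=sum))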
1: Function Model.aggregation(n, e, t, m)
2: for j = 1 to e do
3:     for i = 1 to n do
4:         # All FL parties do this loop concurrently
5:         for k = 1 to t do
6:             T(i, j, k) = trained local model
7:         end for
8:         for c = 1 to m do
9:             Generate secret share sh(i, c) for T(i, j, t)
10:            Upload sh(i, c) to committee member c
11:        end for
12:    end for
13:    for c = 1 to m do
14:        # All committee members do this loop concurrently
15:        p(c) = sh(1, c) + sh(2, c) + ... + sh(n, c)
16:        for d = 1 to m, d != c do
17:            Send p(c) to the d-th committee member
18:            Receive p(d) from the d-th committee member
19:        end for
20:        A(j) = (p(1) + p(2) + ... + p(m)) / n
21:        for i = 1 to n do
22:            if party i is assigned to committee member c then
23:                Send A(j) to party i
24:            end if
25:        end for
26:    end for
27: end for
28: return A(e)
29: EndFunction
Algorithm 3 Phase II: Model Aggregation

The pseudo code of the model aggregation function is shown in Algorithm 3. All FL parties train their models locally for t iterations and obtain the local model T(i, j, t) (Lines 5 to 7). They then generate and upload secret shares of their local models to the corresponding committee members (Lines 8 to 11). Thereafter, the committee members work together to conduct confidential model aggregation (Lines 15 to 20) and broadcast the aggregated model A(j) of this epoch to all FL parties. FL training and model aggregation are conducted for a pre-known number e of global epochs, or until the model converges. The cost of Phase II is controlled by reducing the number of secret shares of huge tensors from n to m, where m is much smaller than n. The number of messages, denoted as N_PhaseII, exchanged in Phase II can be calculated by adding the following items: 1) all parties upload secret shares to the committee; 2) the committee members perform the model aggregation; and 3) the committee members broadcast the aggregated model to all parties:

N_PhaseII = e(nm + m(m-1) + n)  (5)

The size of messages, denoted as S_PhaseII, exchanged in Phase II is:

S_PhaseII = e(nm + m(m-1) + n)s  (6)
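One epoch of Phase II can be sketched as follows (our own single-process simulation with integer-encoded tensors); note that each party creates only m shares, one per committee member, instead of n:

import torch

Q = 2**31 - 1

def phase2_aggregate(local_models, m):
    n = len(local_models)
    # Each party splits its model into m shares, one per committee member.
    uploads = [[torch.randint(0, Q, t.shape) for _ in range(m - 1)]
               for t in local_models]
    for t, shares in zip(local_models, uploads):
        shares.append((t - sum(shares)) % Q)
    # Committee member c locally sums the shares uploaded to it ...
    partials = [sum(uploads[i][c] for i in range(n)) % Q for c in range(m)]
    # ... then members exchange partials, cancel the randomness, and
    # broadcast the result (divided by the pre-known n) to all parties.
    return sum(partials) % Q

models = [torch.randint(0, 1000, (242,)) for _ in range(8)]
aggregate = phase2_aggregate(models, m=3)
assert torch.equal(aggregate, sum(models) % Q)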

By accumulating N and S over both Phase I and Phase II, the communication cost of the proposed two-phase MPC-enabled FL framework is given by the following equations:

N_2P = N_PhaseI + N_PhaseII = 2n(n-1) + e(nm + m(m-1) + n)  (7)

S_2P = S_PhaseI + S_PhaseII = 2n(n-1)b + e(nm + m(m-1) + n)s  (8)

Compared with the traditional Peer-to-Peer framework, the proposed Two-Phase MPC-enabled FL framework can significantly reduce both the number and size of messages exchanged. Scalability is also improved significantly, by avoiding generating secret shares of huge model tensors and exchanging them with all parties. Experimental results for a particular use case and parameter configuration are further illustrated in Section IV-D.
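The trade-off can be explored numerically with a small helper written against Equations 1 to 8 (our own code; parameter names follow Table I):

def p2p_cost(n, e, s):
    # Equations 1 and 2: every epoch, every party exchanges shares and
    # partial sums with every other party.
    msgs = 2 * e * n * (n - 1)
    return msgs, msgs * s

def two_phase_cost(n, e, s, m, b):
    # Equations 3 to 8: Phase I runs once, Phase II once per epoch.
    p1 = 2 * n * (n - 1)
    p2 = e * (n * m + m * (m - 1) + n)
    return p1 + p2, p1 * b + p2 * s

for n in (4, 16, 64, 128):
    print(n, p2p_cost(n, e=15, s=242),
          two_phase_cost(n, e=15, s=242, m=3, b=10))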

III-D Integrated Federated Learning with IIoT Platform for Smart Manufacturing

Recent advancements in IoT and AI have driven a fast transformation in smart manufacturing. Industrial IoT (IIoT) platforms like Azure, AWS, ThingWorx and MindSphere support and orchestrate data collection, storage, processing and visualization on the edge, on-premise and in the Cloud. Predictive maintenance [21] has become a very strong use case for manufacturers advancing to Industry 4.0. It offers an effective solution for detecting faults (diagnosis) and predicting future working conditions and the remaining useful life (prognosis), using the data captured from IoT devices (e.g., vibration, force, acoustic and emission sensors). This helps to avoid system interruptions, thereby minimizing downtime and cost. In order to provide a high-quality predictive maintenance service, the equipment vendor requires customers to share data, including equipment operating conditions, sensor data and failure incidences. By aggregating the data sets of a number of customers, the vendor can train and update the predictive maintenance model, taking different operation scenarios into consideration. However, this centralized training mechanism faces many challenges, e.g., data ownership, data privacy, latency and cost.

Federated learning, as shown in Figure 5, can address the issues of the centralized training mechanism. It has the following features:

  • Sensor/feature data are always kept on-premise;

  • Companies train models locally and concurrently;

  • Local models are aggregated using a secure MPC protocol in a privacy-preserving manner;

  • Models are trained and aggregated iteratively to continuously improve the accuracy; and

  • The federated model is able to predict failures under different operation conditions.

Fig. 5: Federated Learning for Smart Manufacturing
Fig. 6: Federated Learning Enabled IIoT Platform

The proposed federated learning framework is integrated with our in-house IIoT platform for smart manufacturing applications, as shown in Figure 6. The integrated system includes the following components:

  • Edge device: a MiniPC or Raspberry Pi that collects data from machines or sensors and extracts features from the raw data. The feature data are then pushed to the IIoT platform through standard communication protocols, e.g., MQTT and OPC UA (see the sketch after this list).

  • IIoT platform: can be deployed on on-premise or cloud servers. It receives streaming data from the edge devices and visualizes them on a dashboard for real-time monitoring.

  • Data analytics system: can also be deployed on on-premise or cloud servers. The streaming data are saved into a database. Using the historical data, companies can train machine learning models in different manners (i.e., local, centralized and federated). These models, in their different versions, are packaged as Docker containers and stored in a model marketplace. Companies can deploy these models to provide predictions on real-time streaming data; the predictions and insights help operators improve the manufacturing process and its efficiency.
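As an illustration of the edge-to-platform link, the sketch below publishes a JSON feature vector over MQTT using the paho-mqtt package; the broker address, topic name and feature extraction are hypothetical stand-ins for a real deployment:

import json
import random
import time
from paho.mqtt import publish   # assumes the paho-mqtt package is installed

BROKER = "iiot-platform.example.com"      # hypothetical broker address
TOPIC = "factory/line1/motor7/features"   # hypothetical topic name

def read_features():
    # Stand-in for real time-domain feature extraction from the
    # vibration / temperature / current / acoustic sensor streams.
    return {"rms_vibration": random.random(), "temp_c": 40 + random.random()}

for _ in range(10):                        # publish a 1 Hz feature stream
    publish.single(TOPIC, json.dumps(read_features()), hostname=BROKER)
    time.sleep(1.0)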

IV Experiment Design and Results

IV-A Use Case: Fault Detection in Electrical Machines

In order to evaluate the proposed federated learning framework, this paper studies a use case of fault detection in electrical machines, which are widely adopted in smart manufacturing. Effective diagnosis of motor faults is essential to enhance the reliability of the manufacturing process as well as to reduce the cost of operation and maintenance. This study used real-world sensor data (e.g., vibration, temperature, current and acoustic) and machine healthy/faulty labels from a number of induction motors. The data sets were collected at the University of Tennessee, USA, by Prof. J. Wesley Hines's team, where each motor was subjected to thermal aging processes [31]. In the same way as in [32], the sensory data were pre-processed and time-domain features were extracted, with the top 121 features being employed in this study. Due to the different total lifetimes, it can reasonably be assumed that the health status of the motors at each life cycle is different, mimicking the different fault types of different companies. We chose PyTorch 1.2.0 and Python 3.7.3 as the machine learning stack in our federated learning framework for the following reasons: easy-to-use API, multi-GPU support, Python support, custom data loaders and simplified pre-processors. In our experiments, we assumed that all companies use the same features extracted from the sensor data, and that they use Neural Network (NN) models built with PyTorch for fault detection. Both simple and complex neural network architectures were used. The SimpleNN model has only an input layer (121 features) and an output layer (2 prediction results), so the size of the SimpleNN model (i.e., s in Table I) is equal to 242. In the ComplexNN model, a hidden layer with 60 neurons is added; hence, the size of the ComplexNN model is equal to 7380.
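The two architectures can be sketched in PyTorch as follows. The class names are ours, and bias terms are omitted so that the parameter counts match the stated sizes of 242 and 7380 exactly; the original models may differ in such details:

import torch.nn as nn

class SimpleNN(nn.Module):
    # Input layer (121 features) directly to output (2 classes): 242 weights.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(121, 2, bias=False)

    def forward(self, x):
        return self.fc(x)

class ComplexNN(nn.Module):
    # One hidden layer of 60 neurons: 121*60 + 60*2 = 7380 weights.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(121, 60, bias=False), nn.ReLU(),
            nn.Linear(60, 2, bias=False))

    def forward(self, x):
        return self.net(x)

print(sum(p.numel() for p in SimpleNN().parameters()))    # 242
print(sum(p.numel() for p in ComplexNN().parameters()))   # 7380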

IV-B Experimental Design

In our experiments, we evaluated the performance of the proposed federated learning platform using both the Additive and the Shamir secret sharing MPC protocols. Experiments were conducted to compare the traditional peer-to-peer and the proposed two-phase federated learning in terms of messages exchanged and execution time, in the following three environments:

  • Local-SingleServer: all parties run their corresponding processes on a single local server with an Intel(R) Xeon(R) i7-7700 V8 CPU (3.60GHz), 32GB of RAM and Ubuntu 18.04 OS.

  • AWS-SameRegion: all parties run on 16 EC2 virtual machine (VM) instances in the same region (i.e., Singapore).

  • AWS-CrossRegion: all parties run on 16 EC2 virtual machine (VM) instances spread across eight different regions (i.e., Singapore, Canada, London, Ireland, Tokyo, Ohio, Sao Paulo and Mumbai).

The AWS EC2 VMs are t3.medium instances, each with 2 vCPUs, 4 GiB of RAM and Ubuntu 18.04 OS.

IV-C Model Prediction Accuracy

The main purpose of this series of experiments is to verify the effectiveness of federated learning by comparing the accuracy of the following trained models:

  • Local model: companies train individual models for t iterations based on their local data sets only.

  • Centralized model: companies upload their data sets to a centralized server, which trains a model for t iterations on the consolidated data set.

  • Federated model: companies join the peer-to-peer MPC-enabled model aggregation for e global epochs, after training their local models for t iterations in each epoch.

Three popular metrics, whose definitions can be found in [33], are used to measure prediction accuracy; a sample computation is sketched after this list.

  • Recall: the percentage of positive events (i.e., machine faulty cycles) that are predicted correctly.

  • Precision: the percentage of predicted positive events that are positive in truth.

  • Balanced accuracy: the overall performance of a model considering both positive and negative classes, without being affected by imbalance in the data set.
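For reference, one common way to compute these metrics is with scikit-learn (our choice of library; the definitions themselves follow [33]):

from sklearn.metrics import (balanced_accuracy_score, precision_score,
                             recall_score)

y_true = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = faulty cycle, 0 = healthy
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Recall:   ", recall_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Balanced: ", balanced_accuracy_score(y_true, y_pred))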

Model Type | Training Method | Recall (Mean / Highest / Lowest) | Precision (Mean / Highest / Lowest) | Balanced (Mean / Highest / Lowest)
SimpleNN   | Local           | 0.922 / 1.0 / 0.810 | 0.891 / 1.0 / 0.686  | 0.911 / 0.989 / 0.772
SimpleNN   | Centralized     | 0.939 / 1.0 / 0.899 | 0.937 / 1.0 / 0.838  | 0.941 / 1.0 / 0.802
SimpleNN   | Federated       | 0.933 / 1.0 / 0.869 | 0.934 / 1.0 / 0.828  | 0.935 / 1.0 / 0.852
ComplexNN  | Local           | 0.913 / 1.0 / 0.803 | 0.921 / 1.0 / 0.7969 | 0.918 / 0.992 / 0.814
ComplexNN  | Centralized     | 0.952 / 1.0 / 0.881 | 0.945 / 1.0 / 0.828  | 0.949 / 1.0 / 0.850
ComplexNN  | Federated       | 0.947 / 1.0 / 0.861 | 0.940 / 1.0 / 0.820  | 0.945 / 1.0 / 0.845
TABLE II: Prediction Accuracy of Differently Trained Models

Four motor data sets (refer to Section IV-A), belonging to independent companies, were used. We selected any three companies' data sets as training data and the remaining one as testing data, in a round-robin manner. The Additive and Shamir secret sharing MPC protocols obtained identical experimental results for the SimpleNN and ComplexNN models respectively. As shown in Table II, federated learning achieves prediction accuracy comparable to centralized learning, which verifies the effectiveness of federated learning for smart manufacturing without compromising data privacy. Federated models outperform local models, although the advantage for different companies may vary.

IV-D Communication Cost

We evaluate the communication cost in terms of the number and size of messages exchanged between all FL parties and/or the model aggregation committee members. In the traditional peer-to-peer framework, the shared model could be reconstructed only if all peers worked together; in the two-phase framework, the shared model could be reconstructed through collusion among all elected committee members. We consider a committee of 3 parties, based on the assumption that there is no collusion among participants, which is usually acceptable in applications where the performance trade-off is considered [34]. The theoretical analysis was provided in Section III. Unless otherwise specified, the SimpleNN model is used; local training takes three iterations (i.e., t = 3) and global model aggregation takes 15 epochs (i.e., e = 15). For the Two-Phase MPC-enabled FL framework, three parties are elected as the model aggregation committee (i.e., m = 3) and the batch size of each round of election is 10 (i.e., b = 10). According to empirical experience, one round is more than sufficient to decide the committee members.

Fig. 7: Number of Messages VS number of parties on Local-SingleServer
Fig. 8: Size of Messages VS number of parties on Local-SingleServer
Fig. 9: Number and Size of messages of two-phase framework on Local-SingleServer

A series of experiments was conducted in the Local-SingleServer environment, increasing the number of parties (i.e., processes) from 4 to 128. The number and size of messages (the latter depending on the size of the model parameters) are illustrated in Figures 7 and 8 respectively. The traditional Peer-to-Peer MPC-enabled FL framework has low scalability, as both the number and size of messages increase dramatically (quadratically) with the number of parties. As expected from the theoretical analysis, the proposed two-phase method dramatically reduces the number and size of messages and thus improves system scalability. More details of the Two-Phase framework are illustrated in Figure 9. When the number of parties is small, Phase I incurs slightly fewer messages than Phase II, but beyond that point Phase I incurs many more messages than Phase II. This is because the number of messages in Phase I grows with complexity O(n^2) (refer to Equation 3), while the number of messages in Phase II grows with complexity O(n) (refer to Equation 5). However, the messages exchanged in Phase I are much smaller than those exchanged in Phase II (the difference would be even bigger for the ComplexNN model). Hence, as observed in Figure 9, the size of messages exchanged in Phase I is significantly lower than in Phase II, although the former may involve a larger number of messages when a large number of parties join the federated learning.

Fig. 10: Number of Messages VS number of parties on AWS-SameRegion
Fig. 11: Size of Messages VS number of parties on AWS-SameRegion

Another two series of experiments were conducted in the AWS-SameRegion and AWS-CrossRegion environments, increasing the number of parties from 3 to 16. The number and size of messages measured in the AWS-SameRegion and AWS-CrossRegion environments are the same; to save space, only the AWS-SameRegion scenario is illustrated, in Figures 10 and 11. As in the Local-SingleServer environment, compared with the traditional Peer-to-Peer method, the proposed Two-Phase MPC-enabled FL framework has much lower communication cost and better scalability with respect to the increasing number of parties. Since the number of parties here is considerably low, the number of messages exchanged in Phase I (i.e., Equation 3) is smaller than that in Phase II (i.e., Equation 5). Moreover, the size of messages exchanged in Phase I is marginal compared with Phase II, as each message in Phase I is a small array of committee votes (in the form of secret shares).

IV-E Execution Time

Fig. 12: Execution time VS number of parties on Local-SingleServer

The execution time in the Local-SingleServer environment is plotted in Figure 12. The traditional peer-to-peer framework is clearly not scalable with respect to the increasing number of parties. By adopting the proposed two-phase MPC election and model aggregation, the system becomes much more scalable: with 128 parties, the execution time is reduced by 25 times, and the execution time of the Peer-to-Peer scenario is beyond the range of Figure 12.

Fig. 13: Execution time VS number of parties on AWS-SameRegion

The execution time in the AWS-SameRegion environment is plotted in Figure 13, where 3 to 16 FL parties are executed on individual AWS instances located in the Singapore data centre. As expected, the increase in execution time of the Two-Phase framework is less pronounced than that of the peer-to-peer framework. The execution time of Phase I is marginal, as it runs only once for the entire FL training and the size of each of its messages is very small.

Fig. 14: Execution time VS number of parties on AWS-CrossRegion

The execution time in the AWS-CrossRegion environment is plotted in Figure 14, where 3 to 16 FL parties are executed on individual AWS instances located in different regions around the world. The execution time trends of all curves are similar to those in Figure 13. With 16 parties, the Two-Phase framework is 1.97 and 4.56 times faster than the Peer-to-Peer framework in AWS-SameRegion and AWS-CrossRegion respectively. Due to the network latency and bandwidth limitations across data centres in different regions, the execution time increases by hundreds of times. Moreover, abrupt jumps are observed when the number of parties increases from 12 to 13. This is because we assigned two parties to each region, and the newly added party was located in the Sao Paulo (South America) data centre, which had significantly worse network conditions than the others. In order to further evaluate the MPC overhead, the execution time of a non-MPC scenario (i.e., all FL parties exchange local models directly in a peer-to-peer manner and then obtain the aggregated model with averaged weights) is also reported in Figures 12, 13 and 14. With 16 parties, the Peer-to-Peer (Two-Phase) MPC-enabled FL framework is around 3.34 and 5.42 times (2.12 and 2.03 times) slower than the non-MPC scenario in the AWS-SameRegion and AWS-CrossRegion environments respectively. Since the Two-Phase MPC-enabled FL framework improves scalability significantly, it even becomes faster than the non-MPC scenario at larger numbers of parties in the Local-SingleServer case, as shown in Figure 12.

Fig. 15: Execution Time of Additive and Shamir MPC Protocols on AWS-SameRegion
Fig. 16: Execution Time of SimpleNN and ComplexNN Models on AWS-SameRegion

Figure 15 compares the federated learning execution time between the Additive and Shamir secret sharing MPC protocols. As mentioned in Section III-A, the Shamir protocol requires much heavier computation to reconstruct the polynomial using Lagrange interpolation, so it needs a longer execution time than the Additive MPC protocol, and its execution time also grows more steeply with the number of parties. With 16 parties, the Two-Phase MPC-enabled FL framework is 1.97 (Additive) and 2.12 (Shamir) times faster than the traditional Peer-to-Peer framework. Figure 16 compares the FL execution time between the SimpleNN and ComplexNN models. By adding a hidden layer, the model size of ComplexNN increases by 30.5 times. Since large messages are sent efficiently in bulk, the execution time of FL using ComplexNN is only around two times that using SimpleNN. However, the increasing trends of the ComplexNN curves are more pronounced. With 16 parties, the Two-Phase MPC-enabled FL framework is 1.97 (SimpleNN) and 1.94 (ComplexNN) times faster than the traditional Peer-to-Peer framework.

V Conclusion and Future Work

In this paper we presented a Two-Phase MPC-enabled FL framework. It enables multiple organizations to collectively train a machine learning model while keeping private data on-premise. Both the Additive and Shamir secret sharing MPC protocols are adopted to aggregate local models in a privacy-preserving manner. The proposed federated learning framework is further integrated with an IIoT platform for smart manufacturing. The effectiveness of federated learning has been verified: it builds a better model than those trained on siloed data sets, and achieves prediction accuracy comparable to traditional centralized learning. Compared with the traditional Peer-to-Peer framework, the Two-Phase MPC-enabled FL framework significantly reduces communication cost and improves system scalability. Depending on the number of parties, it achieves a 2- to 25-fold speed-up in federated learning execution time for both simple and complex neural network models. In the future, we will extend our work to various application domains to exploit the advantages of FL in practice. Besides horizontal FL, vertical FL and transfer learning [30] will also be investigated to enable collaborations across multiple domains. Further, we will continue our research on system development for performance and scalability enhancement, considering large numbers of parties in the AWS-SameRegion and AWS-CrossRegion environments with large data sets, as well as on privacy-preserving and cyber security techniques to deal with malicious users who negatively affect federated learning effectiveness in a trustless environment [9].

Acknowledgment

This research was supported by Grant No. A1918g0063, Trusted Data Vault (Phase 1), RIE 2020 Advanced Manufacturing and Engineering (AME) Domain's Core Funds - SERC Strategic Funds.

References

  • [1] D. Mascio, Giulia, “Data-drivenness:(big) data and data-driven enterprises-A multiple case study on B2B companies within the telecommunication sector.”, Master Thesis-2019.
  • [2] Garcia, Y. M. Kassa, A. Cuevas, M. Cebrian, E. Moro, I. Rahwan, and R. Cuevas, “Analyzing gender inequality through large-scale Facebook advertising data”, Proceedings of the National Academy of Sciences, vol.115, no. 27, pp. 6958–6963, 2018.
  • [3] “General Data Protection Regulation”, https://gdpr-info.eu/
  • [4] Z. Chai, H. Fayyaz, Z. Fayyaz, A.Anwar,Y. Zhou, N. Baracaldo, H. Ludwig,and Y. Cheng, “Towards Taming the Resource and Data Heterogeneity in Federated Learning”, In Proceedings of USENIX Conference on Operational Machine Learning, 2019.
  • [5] H. Ren, H. Li, Y. Dai, Y. Kan, and X. Lin, “Querying in internet of things with privacy preserving: Challenges, solutions and opportunities”, IEEE Network, vol. 32, no. 6, pp. 144–151, 2018.
  • [6] B.McMahan, and D.Ramage, “Federated learning: Collaborative machine learning without centralized training data”, 2017. https://ai.googleblog.com/2017/04/federated-learning-collaborative.html. Google AI Blog.
  • [7] B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the gan: information leakage from collaborative deep learning”, In Proceedings of ACM Computer and Communications Security, 2017.
  • [8] R.Shokri, M.Stronati, C.Song and V. Shmatikov, “Membership inference attacks against machine learning models”, In Proceedings of IEEE Symposium on Security and Privacy (SP), 2017.
  • [9] M. Nasr, R. Shokri, and A. Houmansadr,“Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks”, In Proceedings of IEEE ACM Symposium on Security and Privacy (SP),2019.
  • [10] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan,S. Patel, D. Ramage, A. Segal, and K. Seth, “Practical secure aggregation for privacy-preserving machine learning”, In Proceedings of ACM Computer and Communications Security,2017.
  • [11] D. Evans, V. Kolesnikov and M. Rosulek, “A Pragmatic Introduction to Secure Multi-Party Computation”, NOW Publishers, 2018.
  • [12] I. Chillotti, N. Gama, M. Georgieva, and M. Izabachène, "Faster Packed Homomorphic Operations and Efficient Circuit Bootstrapping for TFHE", in Advances in Cryptology, Part I, Vol. 10624, pp. 377-408, Lecture Notes in Computer Science, 2017.
  • [13] L. T. Phong, Y. Aono, T. Hayashi, L. Wang and S. Moriai, “Privacy-Preserving Deep Learning via Additively Homomorphic Encryption,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, pp. 1333-1345, May 2018.
  • [14] D. W. Archer, D. Bogdanov, Y. Lindell, L. Kamm, K. Nielsen, J. I.Pagter, N. P. Smart, and R. N. Wright, “From keys to databases—realworld applications of secure multi-party computation”, The Computer Journal,Volume 61, Issue 12, pp 1749–1771, 2018.
  • [15] P.Bogetoft, D.L.Christensen, I.Damgård, M.Geisler, T.P.Jakobsen, M. Krøigaard, J.D.Nielsen, J.Pagter, M.Schwartzbach, and T.Toft, “Secure multiparty computation goes live”,Financial Cryptography and Data Security,pp 325 - 343,2009.
  • [16] Boneh, D., et al.: Threshold cryptosystems from threshold fully homomorphic encryption. Cryptology ePrint Archive, Report 2017/956 (2017). https://eprint.iacr.org/2017/956.
  • [17] A. Wood, M. Altman, A. Bembenek, M. Bun, M. Gaboardi, J. Honaker, K. Nissim, R. O'Brien, T. Steinke, and S. Vadhan, "Differential privacy: A primer for a non-technical audience", Vanderbilt Journal of Entertainment & Technology Law, vol. 21, no. 1, pp. 209-275, 2018.
  • [18] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, “Practical secure aggregation for privacy-preserving machine learning ”, In Proceedings of ACM SIGSAC Conference on Computer and Communications Security, 2017.
  • [19] K. Cheng, T. Fan, Y. Jin, Y. Liu, T. Chen, and Q. Yang, “Secureboost: A lossless federated learning framework”, arXiv preprint arXiv:1901.08755, 2019.
  • [20] OpenMined/PySyft-A library for encrypted, privacy preserving deep learning.https://github.com/OpenMined/PySyft
  • [21] J.Lee, H. Kao, S.Yang, “Service innovation and smart analytics for industry 4.0 and big data environment”, In the Proceedings of the 6th CIRP Conference on Industrial Product-Service Systems, 2014.
  • [22] TensorFlow Federated, https://www.tensorflow.org/federated/federated_learning
  • [23] CrypTen Documentation, https://crypten.readthedocs.io/en/latest/
  • [24] Federated AI technology enabler, https://github.com/FederatedAI/FATE
  • [25] ByteLake, https://www.bytelake.com/en/federated-learning/
  • [26] J.Choi and B. Butler, “Secure Multiparty Computation and Trusted Hardware: Examining Adoption Challenges and Opportunities”, Security and Communication Networks 2019.
  • [27] G. Zyskind, O. Nathan, and A. Pentland, “Enigma: Decentralized Computation Platform with Guaranteed Privacy” . https://arxiv.org/abs/1506.03471
  • [28] Morten Dahl, “Secret Sharing, Part 1 - Distributing Trust and Work”, https://mortendahl.github.io/2017/06/04/secret-sharing-part1/
  • [29] J. Saia, M.Zamani, “Recent results in scalable multi-party computation”, LNCS, Springer, Heidelberg, vol. 8939, pp. 24–44,, 2015.
  • [30] Y. Liu, T. Chen, and Q. Yang, “Secure federated transfer learning”, arXiv preprint arXiv:1812.03337, 2018.
  • [31] M. E. Sharp, “Prognostic approaches using transient monitoring methods,” Ph.D. dissertation, Univ. Tennessee, Knoxville, TN, USA, 2012
  • [32] F. Yang, M. S. Habibullah, T. Zhang, Z. Xu, P. Lim and S. Nadarajan, “Health Index-Based Prognostics for Remaining Useful Life Predictions in Electrical Machines,” in IEEE Transactions on Industrial Electronics, vol. 63, no. 4, pp. 2633-2644, April 2016.
  • [33] Wikipedia, "Precision and recall", accessed Dec 2019. https://en.wikipedia.org/wiki/Precision_and_recall
  • [34] Adi Shamir, “How to Share a Secret”, Commun. ACM 22, 612–613, 1979. https://doi.org/10.1145/359168.359176
  • [35] L.Zengxiang, K. Chutima, P. Phunchongharnb, Y. Yang, R. S. M. Goh, and Y. Li, “Efficient Multi-Party Computation Algorithm Design For Real-World Applications”, International Workshop on Emerging Topic in Computer Science (ETCS), IEEE, ICPADS, 2019
  • [36] M. Chase, R. Gilad-Bachrach, K. Laine, K. E. Lauter, and P. Rindal, "Private Collaborative Neural Network Learning", IACR Cryptology ePrint Archive, 2017.