Cloud-based Federated Boosting for Mobile Crowdsensing

05/09/2020 ∙ by Zhuzhu Wang, et al. ∙ Xidian University

The application of federated extreme gradient boosting to mobile crowdsensing applications brings several benefits, in particular high efficiency and classification performance. However, it also brings a new challenge for data and model privacy protection. Besides being vulnerable to Generative Adversarial Network (GAN) based user data reconstruction attacks, no existing architecture considers how to preserve model privacy. In this paper, we propose a secret sharing based federated learning architecture, FedXGB, to achieve privacy-preserving extreme gradient boosting for mobile crowdsensing. Specifically, we first build a secure classification and regression tree (CART) of XGBoost using secret sharing. Then, we propose a secure prediction protocol to protect the model privacy of XGBoost in mobile crowdsensing. We conduct a comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness, and efficiency of FedXGB. The results indicate that FedXGB is secure against honest-but-curious adversaries and attains less than 1% accuracy loss compared with the original XGBoost model.


1. Introduction

Extreme gradient boosting (XGBoost) is an efficient, flexible and portable model that performs well on classification and regression tasks, and has hence been applied in many applications such as malware detection (Wang et al., 2017b) and consumption behaviour prediction (XingFen et al., 2018). Its highly optimized multicore design, carefully engineered distributed implementation, and enhanced ability to handle sparse data contribute to the success of XGBoost (Chen and Guestrin, 2016). As a machine learning algorithm, the performance of XGBoost depends on the quality of its dataset. Therefore, most companies and institutions collect high-quality datasets by themselves, which requires a great deal of manpower (Lenzen et al., 2013) and material resources. Hence, mobile crowdsensing (MCS), which collects data from volunteer users willing to share their data, was proposed. Recently, privacy protection in MCS has become an urgent problem (Wang et al., 2020).

Consider the existing mobile crowdsensing architecture: a central cloud server, owned by a service provider, collects the distributed user data and builds a machine learning model. Such an architecture suffers from two limitations: (1) the service provider bears a heavy computational cost on the central cloud server, since the server not only stores a large amount of user data but also builds the machine learning model (Wang et al., 2018); (2) the service provider may leak user privacy because private data are processed on the central cloud server in plaintext. Such data leakage can cause severe problems for both individuals and organizations. For example, nearly 3 million encrypted customer credit card records were stolen in the data leakage of the famous company Adobe, which cost Adobe a settlement of millions of dollars.

To address the above two limitations, the federated learning (FL) architecture was proposed (McMahan et al., 2016). FL is a machine learning paradigm that allows distributed users to upload calculated gradients instead of their sensitive private data (Liu et al., 2020).

Figure 1. Unattended Problems for the Current Federated Learning Architecture

Although users are protected against private data leakage, the federated learning architecture for mobile crowdsensing introduces other security issues, shown in Fig. 1. The details are as follows:

  • User Data Reconstruction. Recent studies pointed out that federated learning architectures are vulnerable to user data reconstruction attacks (Wang et al., 2019), (Cheng et al., 2019). A malicious central cloud server can retrieve user private data from the gradient aggregation results uploaded by users with the help of generative adversarial networks (GAN).

  • Model Privacy Leakage. Existing federated learning architectures rely on publishing the newly built model to all users for the next round of model training (Yang et al., 2019). However, users are not always reliable. The trained model can be stolen by adversaries at very little expense (i.e., some computation power and registration cost).

  • User Dropout. The instability of users is not handled by the federated learning architecture (McMahan et al., 2016) or its follow-up applications on specific models (Zhao et al., 2018), (Tran et al., 2019). Previous architectures assume that users' connectivity to the server remains steady. Once a user drops out, the architecture has no choice but to abandon the current round of training.

To resolve the above issues, we propose a secret sharing based federated learning architecture (FedXGB) to achieve privacy-preserving training of XGBoost for mobile crowdsensing. FedXGB is composed of three kinds of entities: a central cloud server, edge servers, and users. Compared with the architecture shown in Fig. 1, edge servers are included to reflect the emerging edge computing architecture and to provide a layer of privacy protection at the architectural level (Tong et al., 2016). FedXGB proceeds in two steps. First, it invokes a suite of secret sharing based protocols for the privacy-preserving classification and regression tree (CART) building of XGBoost. These protocols protect user private data against the reconstruction attack. Second, instead of directly publishing the newly built CART, FedXGB applies a secret sharing based prediction protocol for user data updating, which prevents the model from being stolen by users.

Our main contributions are summarized as follows:

  • Boosting Privacy. FedXGB utilizes the secret sharing technique to protect both user private data and the updated model during each round of federated XGBoost training. The gradients of user private data are secretly shared to defend against user data reconstruction attacks, and model prediction is performed with secretly shared parameters to prevent model privacy leakage.

  • Low Accuracy Loss. We evaluate FedXGB on two popular datasets. The results indicate that FedXGB maintains the high performance of XGBoost with less than 1% accuracy loss.

  • Robust Architecture against User Dropout. We validate that FedXGB remains stable in executing each round of training when user dropout happens. Its effectiveness and efficiency are barely affected by the dropout of users.

The rest of this paper is organized as follows. In Section 2, we briefly introduce some background knowledge. Section 3 describes an overview of FedXGB. Section 4 and Section 5 list the implementation details of FedXGB. Section 6 discusses the security of FedXGB. In Section 7, we perform a series of comprehensive experiments. Section 8 discusses the related work. The last section concludes the paper.

2. Preliminary

This section briefly introduces background knowledge about XGBoost, secret sharing, and the cryptographic functions used in FedXGB. For convenience, the notations used in this paper are summarized in Table 1.

Notation Description

$\ell$: an arbitrary loss function with a second-order derivative
$g_i$: the first-order derivative of $\ell$ for the instance $x_i$
$h_i$: the second-order derivative of $\ell$ for the instance $x_i$
$[s]_u$: the secret share of $s$ distributed to the user $u$
$\mathcal{K}_u$: the set of random shares of the private mask key for user $u$
$\mathbb{Z}_p$: a finite field of integers modulo $p$, for some large prime $p$
$f_t$: the CART obtained from the $t$-th iteration of XGBoost
$sk$, $pk$: private/public keys for signature, encryption or secret mask generation
Table 1. Notations

2.1. Extreme Gradient Boosting

XGBoost is one of the most outstanding ensemble learning methods due to its excellent performance on classification, regression, and Kaggle tasks (Mitchell and Frank, 2017). It implements machine learning algorithms under the gradient boosting framework by building numerous classification and regression tree (CART) models. The core of the algorithm is the optimization of the objective function as follows.

$$\mathcal{L}^{(t)} = \sum_{i=1}^{n} \ell\big(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\big) + \Omega(f_t) \qquad (1)$$

where $t$ is the number of iterations, $n$ is the total number of training samples, $i$ is the index of each sample, and $y_i$ is the label of the $i$-th sample. $\hat{y}_i^{(t-1)}$ represents the predicted label of the $i$-th sample at the $(t-1)$-th iteration. $\Omega(f_t)$ is a regularization item. By expanding $\Omega$ and using the second-order Taylor approximation, the optimal weight $w_j^{*}$ of leaf node $j$ is calculated as follows.

$$w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda} \qquad (2)$$

where $I_j$ represents the set of training samples in leaf node $j$, $\lambda$ is a constant regularization value, and $T$ is the number of tree leaves. According to the above equations, we can retrieve an optimal tree $f_t$ for the $t$-th iteration.
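To make Eqs. 1 and 2 concrete, the following minimal Python sketch computes the per-instance gradients $g_i$, $h_i$ of the logistic loss (the loss used for ADULT in Section 7.2) and the optimal leaf weight of Eq. 2. This is an illustrative sketch only; the function names are ours and not part of the FedXGB implementation, which is written in C++.

```python
import numpy as np

def logistic_grads(y, margin):
    """g_i and h_i of the logistic loss w.r.t. the raw prediction
    (margin) from the previous iteration, as plugged into Eq. 1."""
    p = 1.0 / (1.0 + np.exp(-margin))   # sigmoid of y_hat^(t-1)
    return p - y, p * (1.0 - p)         # (g_i, h_i)

def leaf_weight(g, h, lam=1.0):
    """Eq. 2: w* = -sum(g_i) / (sum(h_i) + lambda) over a leaf's samples."""
    return -np.sum(g) / (np.sum(h) + lam)

y = np.array([1.0, 0.0, 1.0])           # labels of samples in one leaf
margin = np.array([0.3, -0.2, 0.8])     # predictions from iteration t-1
g, h = logistic_grads(y, margin)
print(leaf_weight(g, h))                # optimal weight of this leaf
```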

2.2. Secret Sharing

Since attackers can easily derive user private data by exploiting the uploaded gradients (Cheng et al., 2019), the Secret Sharing (SS) scheme (Shamir, 1979) is adopted in our scheme. In a $(t, n)$-threshold SS scheme, a secret $s$ is split into $n$ shares. $s$ is recovered only if at least $t$ random shares are provided; otherwise, it cannot be obtained. The share generation algorithm is denoted SS.share$(s, t, \mathcal{U}) \rightarrow \{[s]_u\}_{u \in \mathcal{U}}$, in which $n = |\mathcal{U}|$ is the number of users involved in SS and $\mathcal{U}$ is the set of these users. $[s]_u$ denotes the share held by each user $u$. To recover the secret, the Lagrange polynomial based recovery algorithm SS.recon$(\{[s]_u\}_{u \in \mathcal{U}'}, t) \rightarrow s$ is used. It requires that $\mathcal{U}' \subseteq \mathcal{U}$ contains at least $t$ users.
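As a concrete reference, here is a minimal Shamir-style sketch of SS.share and SS.recon over a prime field. The field prime, function names and user indexing are our own choices for illustration.

```python
import random

P = 2**61 - 1  # a large prime defining the field Z_p (our choice)

def ss_share(secret, t, users):
    """SS.share: split `secret` into |users| shares with threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return {u: poly(u) for u in users}   # user indexes serve as x-coordinates

def ss_recon(shares):
    """SS.recon: Lagrange interpolation at x = 0 from >= t shares."""
    secret = 0
    for u, y in shares.items():
        num, den = 1, 1
        for v in shares:
            if v != u:
                num = num * (-v) % P
                den = den * (u - v) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret

shares = ss_share(42, t=3, users=[1, 2, 3, 4, 5])
assert ss_recon({u: shares[u] for u in (2, 4, 5)}) == 42  # any 3 of 5 suffice
```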

We apply a secret sharing based comparison protocol (SecCmp) (Huang et al., 2019) to fulfill the secure comparison in FedXGB. Without revealing the plaintext values to edge servers, SecCmp returns the comparison result to the user.

Secure Comparison Protocol (SecCmp). Given two sets of secret shares SS.share$(a, t, \mathcal{U})$ and SS.share$(b, t, \mathcal{U})$, random shares of the comparison result are generated. Having at least $t$ shares of each, i.e., $\{[a]_u\}$ and $\{[b]_u\}$, the result is recovered: if $a < b$, SS.recon returns 1; otherwise, SS.recon returns 0.

2.3. Cryptographic Definition

To securely transmit data, three cryptographic functions are utilized in FedXGB.

2.3.1. Key Agreement

Key agreement is used for key generation. Three algorithms are involved, namely key setup KEY.Set, key generation KEY.Gen, and key agreement KEY.Agr. Specifically, the key setup algorithm KEY.Set$(\kappa)$ sets up the public parameters, where $\kappa$ is a security parameter that defines the field size of the secret sharing scheme. KEY.Set outputs a quaternion $(\mathbb{G}, g, q, H)$: $\mathbb{G}$ is an additive cyclic group with a large prime order $q$ and a generator $g$, and $H$ is a common hash function that generates a fixed-length output. Consider two arbitrary users $u$ and $v$: $u$ first applies the key generation algorithm to generate a private-public key pair $(sk_u, pk_u) \leftarrow$ KEY.Gen$(\mathbb{G}, g, q, H)$. Then, $u$ can use the key agreement algorithm to create a shared key with $v$, $s_{u,v} \leftarrow$ KEY.Agr$(sk_u, pk_v)$.
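The sketch below instantiates KEY.Gen and KEY.Agr with the same primitives used in our experiments (ECDH over the NIST P-256 curve composed with SHA-256, Section 7.1), via the Python `cryptography` package; the HKDF `info` label is an arbitrary placeholder of ours.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# KEY.Gen: each party generates a private-public key pair on NIST P-256.
sk_u = ec.generate_private_key(ec.SECP256R1())
sk_v = ec.generate_private_key(ec.SECP256R1())

def key_agr(sk, peer_pk):
    """KEY.Agr: ECDH shared secret, hashed down to a 256-bit key."""
    shared = sk.exchange(ec.ECDH(), peer_pk)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"fedxgb-demo").derive(shared)

# s_{u,v} = s_{v,u}: both directions derive the same shared key.
assert key_agr(sk_u, sk_v.public_key()) == key_agr(sk_v, sk_u.public_key())
```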

2.3.2. Identity Based Encryption & Signature

Identity based encryption and signature are utilized to encrypt sensitive data and verify identity, respectively. Given a shared key $s_{u,v} \leftarrow$ KEY.Agr$(sk_u, pk_v)$, the identity based encryption algorithm IDE.Enc outputs a ciphertext $c \leftarrow$ IDE.Enc$(s_{u,v}, m)$, and the decryption function IDE.Dec recovers the plaintext by computing $m \leftarrow$ IDE.Dec$(s_{u,v}, c)$. The signature algorithms SIG.Sign and SIG.Verf are defined similarly. Given the key pair for signature $(sk_u^{S}, pk_u^{S}) \leftarrow$ KEY.Gen, SIG.Sign outputs a signature $\sigma \leftarrow$ SIG.Sign$(sk_u^{S}, m)$. If SIG.Verf$(pk_u^{S}, m, \sigma) = 1$, $\sigma$ is proved to be valid; otherwise, $\sigma$ is invalid.
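A compact sketch of IDE.Enc/IDE.Dec and SIG.Sign/SIG.Verf using the primitives from Section 7.1 (128-bit AES-GCM, plus an ECDSA signature over P-256). In FedXGB the AES key would come from KEY.Agr rather than being generated locally; this block is an assumption-laden illustration, not the production code path.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # stands in for s_{u,v}
nonce = os.urandom(12)
msg = b"masked gradient sub-aggregation"
ct = AESGCM(key).encrypt(nonce, msg, None)             # IDE.Enc
assert AESGCM(key).decrypt(nonce, ct, None) == msg     # IDE.Dec

sig_sk = ec.generate_private_key(ec.SECP256R1())       # KEY.Gen (signature)
sig = sig_sk.sign(msg, ec.ECDSA(hashes.SHA256()))      # SIG.Sign
sig_sk.public_key().verify(sig, msg,                   # SIG.Verf:
                           ec.ECDSA(hashes.SHA256()))  # raises if invalid
```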

3. Overview of FedXGB

In this section, we introduce how the secret sharing based federated learning architecture (FedXGB) is implemented.

3.1. Entities of FedXGB

FedXGB consists of three types of entities: users $\mathcal{U}$, edge servers $\mathcal{E}$, and a remote central cloud server $\mathcal{S}$. The entities of FedXGB are shown in Fig. 2. Details are presented as follows.

Figure 2. Entities of FedXGB

Users. $\mathcal{U} = \{\mathcal{U}_1, \ldots, \mathcal{U}_m\}$. For each domain $a$, $\mathcal{U}_a$ represents the set of users belonging to that domain. Users are data generators and volunteers who participate in the crowdsensing model training for FedXGB.

Edge Servers. $\mathcal{E} = \{E_1, \ldots, E_m\}$, where $E_a$ is an edge server. Edge servers are provided by various operators. Each edge server provides the communication service for the users that belong to the domain it controls.

Central Cloud Server. $\mathcal{S}$ is a central cloud computing server owned by a mobile crowdsensing service provider. The trained model in FedXGB belongs only to $\mathcal{S}$ and is not publicly accessible.

3.2. Security Model

In FedXGB, we use the honest-but-curious model as our standard security model. The adversary in our security model is formalized as follows:

Definition 1 (Paverd et al., 2014). In a communication protocol, an honest-but-curious adversary $\mathcal{A}$ is a legitimate entity that does not deviate from the defined protocol, but attempts to learn all possible information from the legitimately received messages.

Any user, edge server, or the central server can be an $\mathcal{A}$, with the following abilities: 1) $\mathcal{A}$ can corrupt or collude with fewer than $t$ legitimate users or edge servers and obtain the corresponding inputs; 2) $\mathcal{A}$ cannot extract information from other honest parties (e.g., legitimate inputs, random seeds); 3) $\mathcal{A}$ has limited computing power to launch attacks (i.e., polynomial-time attacks). FedXGB needs to achieve the following two goals.

  • Data Privacy. $\mathcal{E}$ and $\mathcal{S}$ are unable to learn the private data of $\mathcal{U}$, especially through data reconstruction.

  • Model Privacy. $\mathcal{U}$ and $\mathcal{E}$ are unable to learn the key model parameters owned by $\mathcal{S}$.

3.3. Workflow of FedXGB

Each round of FedXGB involves two protocols: secure CART model building (SecBoost) and secure CART model prediction (SecPred), shown in Fig. 3. Working details of the protocols are given below.

Figure 3. Workflow of FedXGB

SecBoost takes the following four steps:

  1. Setup. All entities (i.e., $\mathcal{U}$, $\mathcal{E}$, and $\mathcal{S}$) set up the essential parameters for CART model building, including the preset parameters of XGBoost and the cryptographic keys.

  2. User Selection. According to predefined standards, each edge server selects the active users in its domain. The selected users verify each other's identities and exchange public keys. Additionally, each selected user creates a mask key pair for the subsequent secret sharing between the user and its corresponding edge server.

  3. Key Shares Collection. Each user generates random shares of its private mask key, one for each of the other selected users, and sends each share to the corresponding user. The set of key shares, constructed by collecting the random shares from the other users, is used to recover the private mask key when a user drops out.

  4. Boosting. To build a CART model securely, the selected users mask the locally calculated sub-aggregations of gradients and upload the masked values to the edge servers. Then, the edge servers sum the masked sub-aggregations and send the results to the central cloud server. The central server adds the received values up for further CART model building. FedXGB iteratively calculates the optimal split to extract an optimal CART model until the termination condition is met (Loop L1 in Fig. 3).

SecPred is designed to extract the prediction results of the newly obtained CART without disclosing model privacy. FedXGB executes SecPred by taking the following steps: a) all splits of the CART model calculated by $\mathcal{S}$ and the user private data are secretly shared to $\mathcal{E}$; b) $\mathcal{E}$ repeatedly invokes SecCmp to compute the comparison results; c) the comparison results are sent to $\mathcal{U}$ for updating $\hat{y}_i^{(t-1)}$, mentioned in Eq. 1.

After SecPred terminates, FedXGB completes one round of training and begins the next round (i.e., Loop L2 shown in Fig. 3). When L2 is completed, $\mathcal{S}$ obtains a trained XGBoost model $\{f_1, \ldots, f_T\}$, where $T$ is the maximum training round.

The security goals of FedXGB are achieved as follows. For the first goal, the gradient sub-aggregations of users are protected with the secret sharing technique in SecBoost to defend against the reconstruction attack proposed in (Cheng et al., 2019), and the tree structure of XGBoost itself resists the other type of reconstruction attack proposed in (Wang et al., 2019). The security analysis for this goal is given in Section 6 and Section 7.4. The second goal is also achieved by the secret sharing technique, in SecPred, whose security analysis is discussed in Section 6.

4. Secure CART building of FedXGB

In this section, we present the protocol SecBoost in detail.

All users in FedXGB are labeled in order by a sequence of indexes (i.e., $u \in \{1, 2, \ldots\}$) to represent their identities. Each user $u$ holds a small local dataset $D_u$. The message $m$ sent from $A$ to $B$ is briefly denoted as $A \rightarrow B$: $m$.

4.1. Secure CART Model Building

Protocol SecBoost is for secure CART model building, as shown in Protocol 1. Referring to the overview of SecBoost illustrated in Fig. 4, we introduce the steps in detail below.

Figure 4. High-level Overview of SecBoost

Step 1 - Setup: $\mathcal{S}$, $\mathcal{E}$, and $\mathcal{U}$ set up the public parameters for key generation and model training. Firstly, FedXGB is given the input space field, the secret sharing field $\mathbb{Z}_p$, the key generation parameters from KEY.Set, and the publicly known XGBoost parameters, including the step size $\eta$, the regularization rate $\lambda$, the minimum loss reduction $\gamma$, and the maximum tree depth $d_{max}$. Then, a trusted third party generates a signature key pair for each user. The encryption key pair is generated by each user itself.

Step 2 - User Selection: To minimize the cost of data recovery for dropout users, each edge server selects its more active users to participate in the model training. The predefined selection standards include the active time, the connection stability, and the maximum number of users. The set of users selected in domain $a$ is denoted $\mathcal{U}_a$; the number of users controlled by $E_a$ is $n_a$, and their secret sharing threshold is $t_a$. Through KEY.Gen, each selected user generates the mask key pair for secret sharing. The messages for public key exchange are signed, and the legitimacy of the selected users is confirmed by verifying their signatures.

Step 3 - Key Shares Collection: Each user $u$ generates random shares of its private mask key by computing SS.share$(sk_u, t_a, \mathcal{U}_a)$ and sends one share to each of the other selected users in encrypted format. User $v$ decrypts the received ciphertext to extract $[sk_u]_v$ and expands its key share set $\mathcal{K}_v$. $\mathcal{K}_v$ is used to recover the private mask key when a user drops out, as discussed in Section 5.2. The encryption key between $u$ and $v$ is calculated by KEY.Agr$(sk_u, pk_v)$.

Step 4 - Boosting: Assume the feature set of the user data is $\mathcal{F}$. For boosting, $\mathcal{S}$ randomly selects a feature sub-sample $\mathcal{F}_s \subseteq \mathcal{F}$ and invokes the secure split finding protocol (SecFind), introduced in Section 4.2, to find the optimal split. To build a new CART model with an optimal structure, FedXGB successively operates the boosting process until the current tree depth reaches $d_{max}$ or other termination conditions (Chen and Guestrin, 2016) are met. Finally, SecBoost outputs a well-trained CART model $f_t$.

1: Input: a central server $\mathcal{S}$, an edge server set $\mathcal{E}$, a user set $\mathcal{U}$ and a trusted third party $\mathcal{T}$.
2: Output: a well-trained CART.
3: Step 1: $\mathcal{S}$ selects the security parameter for KEY.Set and publishes the model parameters $\mathbb{Z}_p$, $\eta$, $\lambda$, $\gamma$, $d_{max}$.
4: $\mathcal{T}$ generates a signature key pair for each user $u$, and operates $\mathcal{T} \rightarrow u$: $(sk_u^{S}, pk_u^{S})$.
5: Each user $u$ invokes KEY.Gen to generate its encryption key pair.
6: Step 2: Each edge server $E_a$ selects a set of active users $\mathcal{U}_a$ and a secret sharing threshold $t_a$, and operates $E_a \rightarrow \mathcal{U}_a$: $(\mathcal{U}_a, t_a)$.
7: Each $u \in \mathcal{U}_a$ invokes KEY.Gen to generate its mask key pair $(sk_u, pk_u)$.
8: $u \rightarrow E_a$: $pk_u$, $\sigma_u \leftarrow$ SIG.Sign$(sk_u^{S}, pk_u)$.
9: $E_a$ and $\mathcal{S}$ verify SIG.Verf$(pk_u^{S}, pk_u, \sigma_u)$ and forward $E_a \rightarrow \mathcal{U}_a$: $\{pk_v, \sigma_v\}_{v \in \mathcal{U}_a}$.
10: The other users in $\mathcal{U}_a$ verify whether SIG.Verf$(pk_v^{S}, pk_v, \sigma_v) = 1$.
11: Step 3: Each $u$ computes and collects the shares of the mask key by invoking SS.share$(sk_u, t_a, \mathcal{U}_a)$.
12: $u \rightarrow v$: IDE.Enc$(s_{u,v}, [sk_u]_v)$.
13: $v$ decrypts with IDE.Dec$(s_{u,v}, \cdot)$ and collects $\mathcal{K}_v$.
14: Step 4: $\mathcal{S}$ randomly selects a feature sub-sample $\mathcal{F}_s$ from the full feature set $\mathcal{F}$.
15: $\mathcal{S}$ invokes SecFind$(\mathcal{F}_s, \mathcal{U}_a, \mathcal{E})$ to determine the current optimal split.
16: Repeat Step 2 until the termination condition is reached.
Protocol 1 Secure Extreme Gradient Boosting Based Tree Building (SecBoost)

4.2. Secure Split Finding for SecBoost

1: Input: all candidate features $\mathcal{F}_s$, the active user set $\mathcal{U}_a$, the edge server set $\mathcal{E}$.
2: Output: the optimal split for each feature and its score.
3: for each user $u \in \mathcal{U}_a$ do
4:     Each $u$ generates a random value $r_u$ and its random shares SS.share$(r_u, t_a, \mathcal{U}_a)$.
5:     $u \rightarrow v$: IDE.Enc$(s_{u,v}, [r_u]_v)$.
6:     Each $v$ receives and decrypts with IDE.Dec$(s_{u,v}, \cdot)$.
7: end for
8: for each user $u \in \mathcal{U}_a$ do
9:     $u$ generates $s_{u,v} \leftarrow$ KEY.Agr$(sk_u, pk_v)$ for each $v \in \mathcal{U}_a$.
10:     $u$ computes SecMask$(G_u)$ and SecMask$(H_u)$.
11:     $u \rightarrow E_a$: IDE.Enc of the masked sub-aggregations.
12:     $E_a$ decrypts and reconstructs SS.Recon$(\{[r_u]_v\}, t_a)$.
13:     $E_a \rightarrow \mathcal{S}$: IDE.Enc of the unmasked sums of $G_u$ and of $H_u$.
14: end for
15: $\mathcal{S}$ calculates $G$ and $H$ by IDE.Dec over the values from all edge servers.
16: for each feature $k \in \mathcal{F}_s$ do
17:     $\mathcal{S}$ enumerates every possible candidate split for feature $k$ and publishes them to each user through $\mathcal{E}$. For each candidate split, take the following steps.
18:     $u$ generates a new random value $r_u'$ and shares it as in the first loop.
19:     $u$ computes SecMask of its left-child sub-aggregations of $g_i$ and $h_i$.
20:     $u \rightarrow E_a$: IDE.Enc of the masked left-child sub-aggregations.
21:     $E_a$ decrypts and reconstructs SS.Recon$(\{[r_u']_v\}, t_a)$.
22:     $E_a \rightarrow \mathcal{S}$: IDE.Enc of the unmasked left-child sums.
23:     $\mathcal{S}$: $G_L \leftarrow$ IDE.Dec$(\cdot)$ and $H_L \leftarrow$ IDE.Dec$(\cdot)$.
24:     $\mathcal{S}$ then obtains the score of the candidate split by Eq. 4.
25: end for
Protocol 2 Secure Split Finding (SecFind)

The most important operation in XGBoost training is optimizing the tree structure by finding the optimal split for each node of the CART. In FedXGB, we propose a novel secret sharing based split finding protocol, SecFind, presented in Protocol 2. Details of SecFind are as follows.

First, each user $u$ generates a random value $r_u$ for masking its secrets. Each random share of $r_u$ is distributed to a specific user $v \in \mathcal{U}_a$. The secret masking function (SecMask) is given as follows.

$$\text{SecMask}(x_u) = x_u + r_u + \sum_{v \in \mathcal{U}_a, u < v} \text{PRG}(s_{u,v}) - \sum_{v \in \mathcal{U}_a, u > v} \text{PRG}(s_{u,v}) \pmod{p} \qquad (3)$$

where $x_u$ is a secret value, $v \in \mathcal{U}_a$, and $s_{u,v} \leftarrow$ KEY.Agr$(sk_u, pk_v)$. Eq. 3 indicates that $E_a$ could directly obtain $x_u$ once $sk_u$ is recovered, were the mask $r_u$ not added. As discussed in Section 5.2, such recovery always occurs when $u$ drops out. Therefore, $r_u$ is an essential value for the security of SecFind. The correctness of Eq. 3 is given in (Bonawitz et al., 2017).
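The following self-contained sketch illustrates the double-masking structure of Eq. 3 in the style of (Bonawitz et al., 2017): pairwise masks cancel across users, and the edge server removes the personal masks $r_u$ after reconstructing them. The SHA-256 based PRG and the field prime are stand-ins of our own choosing, not the FedXGB primitives.

```python
import hashlib, random

P = 2**61 - 1  # same field as the secret sharing sketch in Section 2.2

def prg(seed):
    """Stand-in PRG: derives a field element from a pairwise seed s_{u,v}."""
    return int.from_bytes(hashlib.sha256(str(seed).encode()).digest(), "big") % P

def sec_mask(x_u, r_u, u, seeds):
    """Eq. 3: blind x_u with the personal mask r_u plus pairwise masks
    whose sign depends on the index order, so they cancel in the sum."""
    y = (x_u + r_u) % P
    for v, s_uv in seeds[u].items():
        y = (y + prg(s_uv)) % P if u < v else (y - prg(s_uv)) % P
    return y

users = [1, 2, 3]
seeds = {u: {} for u in users}          # s_{u,v} = KEY.Agr(sk_u, pk_v)
for u in users:
    for v in users:
        if u < v:
            seeds[u][v] = seeds[v][u] = random.randrange(P)

x = {1: 10, 2: 20, 3: 30}               # sub-aggregations G_u (or H_u)
r = {u: random.randrange(P) for u in users}
masked = sum(sec_mask(x[u], r[u], u, seeds) for u in users) % P
# Edge server: pairwise masks cancel; subtract the reconstructed r_u values.
assert (masked - sum(r.values())) % P == sum(x.values()) % P
```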

Then, each user $u$ uploads the sub-aggregations of gradients over all its data, $G_u = \sum_{i \in D_u} g_i$ and $H_u = \sum_{i \in D_u} h_i$, mentioned in Eq. 2. Each sub-aggregation is masked based on Eq. 3. $E_a$ sums all masked sub-aggregations and sends the results to $\mathcal{S}$ in encrypted format. To get the correct sums, $E_a$ has to reconstruct each $r_u$ and subtract it from the masked sub-aggregations of $G_u$ and $H_u$. The encryption keys utilized here are generated with KEY.Gen and agreed with KEY.Agr. Having the summation values from each $E_a$, $\mathcal{S}$ adds them up to get the final aggregation results, $G$ and $H$, over all data.

Finally, for each given candidate feature $k$, $\mathcal{S}$ enumerates all possible candidate splits and publishes them to the users. Similar to the above aggregation process for $G$ and $H$, $\mathcal{S}$ iteratively collects the left-child gradient aggregation results $G_L$ and $H_L$ for each candidate split. The aggregation results are used to compute the score of each candidate split according to Eq. 4. When the iteration terminates, SecFind outputs the split with the maximum score. Moreover, the weights of the left and right child nodes of the optimal split are determined by Eq. 2.

$$score = \frac{1}{2}\left[\frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda}\right] - \gamma \qquad (4)$$
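A small sketch of the scoring step: once the central server holds the unmasked aggregates $G_L$, $H_L$ (with $G_R = G - G_L$ and $H_R = H - H_L$), Eq. 4 is plain arithmetic. The function names and toy numbers below are ours, for illustration.

```python
def split_score(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    """Eq. 4: gain of a candidate split from left/right gradient aggregates.
    Only aggregates are needed; individual gradients never leave the users."""
    return 0.5 * (GL * GL / (HL + lam) + GR * GR / (HR + lam)
                  - (GL + GR) ** 2 / (HL + HR + lam)) - gamma

# toy aggregates (threshold, G_L, H_L, G_R, H_R) for three candidate splits
candidates = [(0.3, -4.0, 5.0, 4.0, 6.0),
              (0.5, -6.0, 7.0, 6.0, 4.0),
              (0.7, -1.0, 2.0, 1.0, 9.0)]
best = max(candidates, key=lambda c: split_score(*c[1:]))
print("best split threshold:", best[0])
```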

5. Secure CART Model Prediction of FedXGB

In this section, the protocol SecPred is presented in detail. Besides, we discuss its robustness against user dropout.

5.1. Secure CART Model Prediction

For existing federated learning schemes, an indispensable operation is to update each user's local model at the end of each round of training (McMahan et al., 2016). For XGBoost, the updated model is used for extracting prediction results to update the $\hat{y}_i^{(t-1)}$ in Eq. 1 that are taken as input of the next round of training. However, users are honest-but-curious entities. They may steal the model information to benefit themselves (e.g., sell the model to the competitors of $\mathcal{S}$). To protect the model privacy, FedXGB executes a lightweight secret sharing protocol, SecPred, presented in Protocol 3, instead of transmitting the updated CART model in plaintext. In SecPred, $\mathcal{S}$ takes a CART model as input, and $\mathcal{U}$ takes the weights of the leaf nodes of the CART model as input. $\mathcal{S}$ and $\mathcal{U}$ secretly and separately send shares of the model parameters (the optimal split of each node) and of the user data (the feature values corresponding to each optimal split) to the edge servers. Then, $\mathcal{E}$ executes SecCmp and returns each comparison result to the corresponding user. Finally, each user decides the leaf node for each sample based on the comparison results and collects the prediction results based on the weights of the leaf nodes. In this way, we guarantee that the internal nodes of the CART model cannot be accessed by $\mathcal{U}$ and $\mathcal{E}$.

1: Input: $\mathcal{S}$ gets a CART $f_t$ and the thresholds of its nodes; $\mathcal{U}$ gets the leaf node weights of $f_t$.
2: Output: the prediction result.
3: for each node of $f_t$ do
4:     $\mathcal{S}$ computes SS.Share of the node's threshold and sends the shares to the corresponding edge servers.
5:     for each user $u \in \mathcal{U}$ do
6:         Select the feature values corresponding to the node.
7:         Compute SS.Share of the feature values and send the shares to the corresponding edge servers.
8:     end for
9:     $\mathcal{E}$ invokes SecCmp and forwards the corresponding results to $\mathcal{U}$.
10: end for
11: Based on the results, each user determines the leaf node and obtains the prediction result.
Protocol 3 Secure Prediction for a CART (SecPred)
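To illustrate the client-side step 11 of Protocol 3, the sketch below walks a toy CART using only the SecCmp output bits, never the thresholds themselves; the dictionary tree encoding is our own illustrative choice.

```python
def select_leaf(node, cmp_bits):
    """Traverse the CART with SecCmp results only: cmp_bits[node_id] is 1
    if the sample's feature value is below the (hidden) threshold."""
    while "leaf" not in node:
        node = node["left"] if cmp_bits[node["id"]] == 1 else node["right"]
    return node["leaf"]                 # a leaf weight w_j from Eq. 2

tree = {"id": 0,
        "left":  {"id": 1, "left": {"leaf": -0.4}, "right": {"leaf": 0.1}},
        "right": {"leaf": 0.6}}
print(select_leaf(tree, {0: 1, 1: 0}))  # comparison bits select leaf 0.1
```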

5.2. Robustness against User Dropout.

Three possible cases of user dropout in FedXGB are discussed as follows.

Case 1: A user $u$ drops out at Step 1 or Step 3 of Protocol 1. The edge server of $u$ then cannot receive messages from $u$ anymore. In this case, the edge server rejects the messages uploaded by $u$ in the current round of training.

Case 2: A user $u$ drops out during the split finding process. Its edge server $E_a$ recovers the private mask key of $u$ and removes $u$ from $\mathcal{U}_a$ if $u$ does not reconnect in the subsequent computation; that is, the remaining user set is $\mathcal{U}_a' = \mathcal{U}_a \setminus \{u\}$ with $|\mathcal{U}_a'| \geq t_a$. To recover the private mask key of $u$, the edge server first collects the random shares $[sk_u]_v$ from at least $t_a$ users $v \in \mathcal{U}_a'$. Then, $E_a$ extracts the private mask key of $u$ through SS.Recon$(\{[sk_u]_v\}, t_a)$. Finally, $E_a$ retrieves the gradient aggregation result (lines 11 and 20, Protocol 2) by adding back the recomputed pairwise masks $\text{PRG}(s_{u,v})$ for $v \in \mathcal{U}_a'$, where $s_{u,v} \leftarrow$ KEY.Agr$(sk_u, pk_v)$.

Case 3: A user $u$ drops out at the prediction step. The edge server directly ignores the prediction request of $u$ and removes $u$ from the active users at the next iteration.
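A self-contained sketch of the Case 2 recovery path: the edge server rebuilds the dropped user's mask key from $t_a$ shares via Lagrange interpolation and recomputes the pairwise masks that no longer cancel. The share polynomial and the toy PRG repeat the earlier sketches so the block runs on its own; deriving the pairwise seed directly from $sk_u$ and $v$ is our simplification of KEY.Agr.

```python
import hashlib, random

P = 2**61 - 1

def recon_at_zero(shares):              # SS.Recon (Lagrange at x = 0)
    s = 0
    for u, y in shares.items():
        num, den = 1, 1
        for v in shares:
            if v != u:
                num = num * (-v) % P
                den = den * (u - v) % P
        s = (s + y * num * pow(den, P - 2, P)) % P
    return s

def prg(seed):
    return int.from_bytes(hashlib.sha256(str(seed).encode()).digest(), "big") % P

# Step 3 of SecBoost distributed shares of the dropped user's mask key sk_u.
sk_u = random.randrange(P)
coeffs = [sk_u, random.randrange(P), random.randrange(P)]   # threshold t_a = 3
share = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

survivors = (2, 3, 5)                   # any t_a remaining users suffice
assert recon_at_zero({v: share(v) for v in survivors}) == sk_u
# With sk_u recovered, E_a re-derives the seeds s_{u,v} and adds the
# uncancelled pairwise masks back into the aggregate of lines 11 and 20.
correction = sum(prg((sk_u, v)) for v in survivors) % P
```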

6. Security Analysis

The security of FedXGB depends on three protocols: SecBoost, SecFind, and SecPred. To prove their security, we give the formal definition of security for secret sharing protocols in Definition 2 (Ma et al., 2019) and restate Theorem 1 from (Huang et al., 2019).

Definition 2. We say that a protocol is secure if there exists a probabilistic polynomial-time simulator that can generate a view for the adversary in the real world and the view is computationally indistinguishable from its real view.

Theorem 1. The protocol SecCmp is secure in the honest-but-curious security model.

Since the security of SecPred relies only on SecCmp, we omit its security proof, which is given in (Huang et al., 2019). The security of SecFind and SecBoost is proved as follows.

Theorem 2. The protocol SecFind is secure in the honest-but-curious security model.

Proof. Denote the views of a user, an edge server and the central server in SecFind as $V_\mathcal{U}$, $V_\mathcal{E}$ and $V_\mathcal{S}$. From the operation process of SecFind and based on Theorem 1, it can be seen that, except for the XGBoost parameters, the elements belonging to $V_\mathcal{U}$, $V_\mathcal{E}$ and $V_\mathcal{S}$ are all uniformly random shares. According to Shamir's secret sharing theory (McEliece and Sarwate, 1981), such shares can be simulated by values chosen uniformly at random from $\mathbb{Z}_p$. Consequently, there exists a simulator that can generate a simulated view indistinguishable from the real view of SecFind. According to Definition 2, it follows that the protocol is secure.

Theorem 3. The protocol SecBoost is secure in the honest-but-curious security model.

Proof. In the protocol SecBoost, denote the user and edge server views as $V_\mathcal{U}$ and $V_\mathcal{E}$. By the protocol definition, only part of the users are selected for model training in SecBoost, and the views of unselected users are empty. The views of the edge servers and the cloud server additionally contain the views generated by SecFind. Except for the encryption keys, ciphertexts and signatures, which can be treated as random values, the remaining elements of the views are all random shares, as mentioned in the proof of Theorem 2. Thus, similarly, we can derive that the views are simulatable by a simulator, and the simulated views cannot be distinguished from the real ones in polynomial time by the adversary. Based on Definition 2, SecBoost is proved to be secure.

Lemma 1. A protocol is perfectly simulatable if all its sub-protocols are perfectly simulatable.

According to the universal composability theory given in Lemma 1 (Bogdanov et al., 2008) and the above proofs, it is concluded that FedXGB is simulatable. Based on the formal definition of security in Definition 2, FedXGB is secure.

7. Performance Evaluation

In this section, we first introduce the experiment configuration. Then we analyze the effectiveness and efficiency of FedXGB by conducting experiments.

7.1. Experiment Configuration

Environment. A workstation with an Intel(R) Core(TM) i7-7920HQ CPU @3.10GHz and 64.00GB of RAM serves as our central server. Ten computers with an Intel(R) Core(TM) i5-7400 CPU @3.00GHz and 8.00GB of RAM are set up; by launching multiple processes, each of them simulates at most two edge servers. We also deploy 30 standard BeagleBone Black development boards to serve as crowdsensing users; each of them simulates at most 30 users. The programs are implemented in C++, and the OpenMP library (Muddukrishna et al., 2016) is used to accelerate concurrent operations.

Dataset. Two datasets are used: ADULT (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html) and MNIST (http://yann.lecun.com/exdb/mnist/). ADULT is for adult income prediction; it has 123 features and provides 32k instances for training and 16k for testing. MNIST is for handwritten digit classification; it has 784 features and divides its instances into 60k for training and 10k for testing. Both are commonly used datasets for evaluating machine learning model performance.

Setup. The parameters of FedXGB include the step size $\eta$, the minimum loss reduction $\gamma$, the regularization rate $\lambda$, the user number, the maximum tree depth $d_{max}$, and the edge server number. We use Elliptic-Curve Diffie-Hellman (Doerner et al., 2018) over the NIST P-256 curve, composed with a SHA-256 hash, to fulfill key agreement. Authenticated encryption is performed with 128-bit AES-GCM (Bellare and Tackmann, 2016). For each dataset, the instances are evenly assigned to the users with no overlap. User dropout is assumed to occur every 10 rounds of boosting in our experiments; that is, 0%, 10%, 20% or 30% of the users are randomly selected to be disconnected. Meanwhile, the same number of replacement users are arranged to substitute for the lost ones.

7.2. Effectiveness Analysis

To assess the effectiveness of FedXGB, we compute its classification accuracy and loss on the two datasets. The loss functions utilized in the experiments are the logistic loss for ADULT and the softmax loss for MNIST. We evaluate the accuracy and loss at each boosting stage of FedXGB, shown in Fig. 5. Fig. 5(c) and Fig. 5(d) respectively show the accuracy and loss for MNIST, and Fig. 5(a) and Fig. 5(b) show the results for ADULT. For a fixed boosting stage, FedXGB loses less than 1% accuracy compared with the original XGBoost, and its performance changes little as the user dropout rate ranges from 0% to 30%.

(a) Accuracy with different user dropout rates for ADULT.
(b) Loss with different user dropout rates for ADULT.
(c) Accuracy with different user dropout rates for MNIST.
(d) Loss with different user dropout rates for MNIST.
Figure 5. Accuracy and loss for each boosting stage in FedXGB for ADULT and MNIST

7.3. Efficiency Analysis

7.3.1. Theoretical Analysis

To evaluate the efficiency of FedXGB, we first perform the theoretical analysis of computation cost for SecBoost, SecFind and SecPred.

Let $n$ denote the number of training instances. We first analyze the computation costs of each user, each edge server and the central server. As shown in Protocol 1, SecBoost has four steps. Since the setup step can be operated offline, its computation and communication costs are ignored. The remaining three steps are divided into two parts. One part contains the second and third steps: each user executes key agreements, signature and encryption operations, while each edge server and the central server execute signature operations. The other part is composed of the invocations of SecFind, in which the gradient aggregation is operated once per candidate split; this dominates the runtime of each user, each edge server and the central server. As for SecPred, its cost is dominated by its repeated invocations of SecCmp.

7.3.2. Experiment Results

To further evaluate the efficiency of FedXGB, we measure the runtime and communication overhead under different numbers of users and edge servers, as shown in Fig. 6. In the experiments, we set the number of training instances to 50K.

(a) Runtime per user with different numbers of users.
(b) Communication overhead per user with different numbers of users.
(c) Runtime per edge server with different numbers of users.
(d) Communication overhead per edge server with different numbers of users.
(e) Runtime for the central server with different numbers of users.
(f) Runtime per user with different numbers of edge servers.
(g) Runtime per edge server with different numbers of edge servers.
(h) Communication overhead per edge server with different numbers of edge servers.
Figure 6. Runtime and communication overhead with different numbers of users and edge servers.

Number of Users. When the number of involved users increases, the runtime for each user grows linearly while, inversely, the communication overhead for each user decreases, as shown in Fig. 6(a) and Fig. 6(b), respectively. The linear growth of the runtime is caused by the incremental cost of the user selection and key shares collection steps, and the per-user communication overhead decreases because fewer samples are distributed to each user. The user dropout rate barely influences the per-user runtime, because each correlated active user only needs to transmit one secret share for the private mask key reconstruction. Considering the impact of the growing user number on each edge server, the runtime for each edge server follows a quadratic growth, as described in Fig. 6(c). The private mask key recovery for dropped users is the main cause of the increased runtime: a higher user dropout rate causes an obvious time increment for reconstructing lost data via the time-consuming Lagrange polynomials. Nonetheless, the communication overhead is barely influenced, because only a small overhead increment is incurred at the key shares collection stage of SecBoost. Notably, the central server performs fewer computation tasks than an edge server but has a longer runtime, as illustrated in Fig. 6(e). This is because the central server has to wait to collect every edge server's response before continuing the subsequent computation. The communication overhead plots for the central server are omitted, because its overhead is simply the number of edge servers multiplied by the difference between the edge server overhead and the user overhead.

Number of Edge Servers. When the number of involved edge servers increases, the runtime cost for each user decreases, as illustrated in Fig. 6(f): since the number of users in each domain managed by an edge server is reduced, the computational cost of secret sharing also becomes smaller for each user. Similarly, the runtime cost of each edge server decreases, Fig. 6(g), as the secret sharing computation assigned to each edge server is reduced. As more edge servers participate in the computation, the communication overhead of each server also decreases, shown in Fig. 6(h). For each user, the communication overhead shows no obvious change because the assigned instances are static. The cost of the central server behaves similarly to Fig. 6(e) as the number of users grows. Due to space limitations, we omit these two plots.

Stage          | Our FedXGB |        |        | SecureBoost (Cheng et al., 2019)
User Selection | 0.285      | 1.112  | 3.288  | N.A.
Mask Collection| 1.333      | 1.458  |        | N.A.
Boosting       | 18.802     | 23.308 | 26.863 | 46.25
Prediction     | 5.182      | 5.987  | 6.961  |
Total          | 25.602     | 31.865 | 37.112 | 46.25
Table 2. Protocol runtime (s) for different stages

In Table 2, we list the runtime of the different stages in FedXGB. It indicates that the main overhead in FedXGB comes from the boosting stage, namely the optimal split finding algorithm, because numerous loop operations are performed there. We also compare the runtime of FedXGB with the only existing privacy-preserving XGBoost architecture, SecureBoost, proposed in (Cheng et al., 2019). SecureBoost employs homomorphic encryption (HE) to protect the sub-aggregations of gradients. Since HE is still a time-consuming technique for multi-party computation (Aslett et al., 2015), SecureBoost takes more time than FedXGB to handle the same number of instances. Moreover, unlike FedXGB, SecureBoost is specially designed for vertically partitioned distributed data (i.e., data split along the feature dimension). This setting limits its applicable scenarios, because in most mobile crowdsensing applications each user independently forms a dataset with all features, such as individual income records (i.e., the data are horizontally partitioned). Additionally, the user dropout condition and the model privacy leakage problem are not considered in SecureBoost.

7.4. Defense Against User Data Reconstruction Attack

The GAN based reconstruction attack (Wang et al., 2019) is one of the most common and effective attacks against federated learning. Based on a GAN, the attack reconstructs user data by solving an optimization problem. However, FedXGB resists such GAN-based attacks due to the tree structure we choose. To validate how well FedXGB is protected, we conduct two experiments by launching the user data reconstruction (UDR) attack against the original federated learning approach (McMahan et al., 2016) and against FedXGB. MNIST is used in the experiments, and the results are shown in Fig. 7.

The left column of Fig. 7 illustrates that the original federated learning approach is attacked successfully. The attacker (i.e., the central server) first collects the gradient aggregations uploaded by the specific victim and by the other users. Based on them, the attacker derives the representatives of the victim's data by solving an optimization problem that matches the gradients of the generated data to the victim's uploaded gradients, subject to a regularization item. Given the optimized representatives, the GAN outputs almost identical images.

The right column of Fig. 7 presents the failed UDR attack launched against FedXGB. Suppose the gradient aggregation is obtained by a malicious edge server. Because the CART model partitions the input space into discrete regions, the adversary is unable to solve the corresponding optimization problem: the optimizer can only advance in random directions and outputs images that look like random noise. The gray-level frequency histograms in the last row of Fig. 7 further illustrate that, against FedXGB, UDR can hardly fit the features of the original images.

Figure 7. Security of FedXGB against user data reconstruction attack

8. Related Work

Most existing privacy-preserving works for machine learning are data driven and based on traditional cryptographic algorithms. For example, Wang et al. (Wang et al., 2017a) proposed a garbled circuit based privacy-preserving model learning scheme for canonical correlation analysis in cross-media retrieval systems. Ma et al. (Ma et al., 2019) proposed a lightweight ensemble classification learning framework for universal face recognition systems by exploiting additive secret sharing. Considering the wide application of gradient boosting decision trees (GBDT) in data mining, Zhao et al. (Zhao et al., 2018) utilized differential privacy to implement two novel privacy-preserving schemes for classification and regression tasks. Towards protecting patients' medical data privacy in e-Health systems, Liu et al. (Liu et al., 2019) advocated a homomorphic encryption based scheme to implement privacy-preserving reinforcement learning for patient-centric dynamic treatment regimes. Being data driven, the above four types of privacy-preserving schemes still have to upload encrypted user data to a central server, which causes massive communication overhead.

Therefore, the federated learning concept was proposed (McMahan et al., 2016). Up to now, only a few works have adapted the architecture into practical schemes for applications (Tran et al., 2019), and most existing federated learning schemes still concentrate on stochastic gradient descent (SGD) based models. For example, considering the limited bandwidth, precious storage and imperative privacy problems in the Internet of Things (IoT) environment, Wang et al. (Wang et al., 2018) provided an SGD based federated machine learning architecture built on edge nodes. For privacy-preserving machine learning model training in smart vehicles, Samarakoon et al. (Samarakoon et al., 2018) proposed a federated learning based joint transmit power and resource allocation approach. To prevent adversaries from inferring hidden information about user private data from the uploaded gradient values, cryptographic methods were then added to the original federated learning scheme to protect the gradients. Bonawitz et al. (Bonawitz et al., 2017) designed a universal and practical model aggregation scheme for mobile devices using secret sharing. In (Nock et al., 2018), Nock et al. utilized homomorphic encryption to protect the uploaded gradients and designed an entity resolution and federated learning framework.

9. Conclusion

In this paper, we proposed a privacy-preserving federated learning architecture, FedXGB, for training the extreme gradient boosting model (XGBoost) in crowdsensing applications. To securely build the classification and regression trees of XGBoost, we designed a series of secure protocols based on the secret sharing technique. The protocols guarantee that the privacy of user data, learning gradients and model parameters is preserved simultaneously throughout the model training process of XGBoost. Moreover, we conducted numerous experiments to evaluate the effectiveness and efficiency of FedXGB. The experimental results show that FedXGB supports massive crowdsensing users in efficiently training a high-performance XGBoost model together without data privacy leakage.

References

  • Aslett et al. (2015) Louis JM Aslett, Pedro M Esperança, and Chris C Holmes. 2015. A review of homomorphic encryption and software tools for encrypted statistical machine learning. arXiv preprint arXiv:1508.06574 (2015).
  • Bellare and Tackmann (2016) Mihir Bellare and Björn Tackmann. 2016. The multi-user security of authenticated encryption: AES-GCM in TLS 1.3. In Annual International Cryptology Conference. Springer, 247–276.
  • Bogdanov et al. (2008) Dan Bogdanov, Sven Laur, and Jan Willemson. 2008. Sharemind: A framework for fast privacy-preserving computations. In European Symposium on Research in Computer Security. Springer, 192–206.
  • Bonawitz et al. (2017) Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 1175–1191.
  • Chen and Guestrin (2016) Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 785–794.
  • Cheng et al. (2019) Kewei Cheng, Tao Fan, Yilun Jin, Yang Liu, Tianjian Chen, and Qiang Yang. 2019. SecureBoost: A Lossless Federated Learning Framework. arXiv preprint arXiv:1901.08755 (2019).
  • Doerner et al. (2018) Jack Doerner, Yashvanth Kondi, Eysa Lee, and Abhi Shelat. 2018. Secure two-party threshold ECDSA from ECDSA assumptions. In 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 980–997.
  • Huang et al. (2019) Kai Huang, Ximeng Liu, Shaojing Fu, Deke Guo, and Ming Xu. 2019. A Lightweight Privacy-Preserving CNN Feature Extraction Framework for Mobile Sensing. IEEE Transactions on Dependable and Secure Computing (2019).
  • Lenzen et al. (2013) Manfred Lenzen, Daniel Moran, Keiichiro Kanemoto, and Arne Geschke. 2013. Building Eora: a global multi-region input–output database at high country and sector resolution. Economic Systems Research 25, 1 (2013), 20–49.
  • Liu et al. (2019) Ximeng Liu, Robert Deng, Kim-Kwang Raymond Choo, and Yang Yang. 2019. Privacy-Preserving Reinforcement Learning Design for Patient-Centric Dynamic Treatment Regimes. IEEE Transactions on Emerging Topics in Computing (2019).
  • Liu et al. (2020) Yang Liu, Zhuo Ma, Ximeng Liu, and Jianfeng Ma. 2020. Learn to Forget: User-Level Memorization Elimination in Federated Learning. arXiv preprint arXiv:2003.10933 (2020).
  • Ma et al. (2019) Zhuo Ma, Yang Liu, Ximeng Liu, Jianfeng Ma, and Kui Ren. 2019. Lightweight Privacy-preserving Ensemble Classification for Face Recognition. IEEE Internet of Things Journal (2019).
  • McEliece and Sarwate (1981) Robert J. McEliece and Dilip V. Sarwate. 1981. On sharing secrets and Reed-Solomon codes. Commun. ACM 24, 9 (1981), 583–584.
  • McMahan et al. (2016) H Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et al. 2016. Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629 (2016).
  • Mitchell and Frank (2017) Rory Mitchell and Eibe Frank. 2017. Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3 (2017), e127.
  • Muddukrishna et al. (2016) Ananya Muddukrishna, Peter A Jonsson, Artur Podobas, and Mats Brorsson. 2016. Grain graphs: OpenMP performance analysis made easy. In ACM SIGPLAN Notices, Vol. 51. ACM, 28.
  • Nock et al. (2018) Richard Nock, Stephen Hardy, Wilko Henecka, Hamish Ivey-Law, Giorgio Patrini, Guillaume Smith, and Brian Thorne. 2018. Entity Resolution and Federated Learning get a Federated Resolution. arXiv preprint arXiv:1803.04035 (2018).
  • Paverd et al. (2014) AJ Paverd, Andrew Martin, and Ian Brown. 2014. Modelling and automatically analysing privacy properties for honest-but-curious adversaries. Tech. Rep. (2014).
  • Samarakoon et al. (2018) Sumudu Samarakoon, Mehdi Bennis, Walid Saad, and Merouane Debbah. 2018. Federated learning for ultra-reliable low-latency V2V communications. In 2018 IEEE Global Communications Conference (GLOBECOM). IEEE, 1–7.
  • Shamir (1979) Adi Shamir. 1979. How to share a secret. Commun. ACM 22, 11 (1979), 612–613.
  • Tong et al. (2016) Liang Tong, Yong Li, and Wei Gao. 2016. A hierarchical edge cloud architecture for mobile computing. In IEEE INFOCOM 2016-IEEE Conference on Computer Communications. IEEE, 1–9.
  • Tran et al. (2019) Nguyen H Tran, Wei Bao, Albert Zomaya, and Choong Seon Hong. 2019. Federated Learning over Wireless Networks: Optimization Model Design and Analysis. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 1387–1395.
  • Wang et al. (2017b) Jiong Wang, Boquan Li, and Yuwei Zeng. 2017b. XGBoost-Based Android Malware Detection. In 2017 International Conference on Computational Intelligence and Security (CIS). IEEE, 268–272.
  • Wang et al. (2017a) Qian Wang, Shengshan Hu, Minxin Du, Jingjun Wang, and Kui Ren. 2017a. Learning privately: Privacy-preserving canonical correlation analysis for cross-media retrieval. In IEEE INFOCOM 2017-IEEE Conference on Computer Communications. IEEE, 1–9.
  • Wang et al. (2018) Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K Leung, Christian Makaya, Ting He, and Kevin Chan. 2018. When edge meets learning: Adaptive control for resource-constrained distributed machine learning. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications. IEEE, 63–71.
  • Wang et al. (2020) Yongfeng Wang, Zheng Yan, Wei Feng, and Shushu Liu. 2020. Privacy protection in mobile crowd sensing: a survey. World Wide Web 23, 1 (2020), 421–452.
  • Wang et al. (2019) Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang, and Hairong Qi. 2019. Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 2512–2520.
  • XingFen et al. (2018) Wang XingFen, Yan Xiangbin, and Ma Yangchun. 2018. Research on User Consumption Behavior Prediction Based on Improved XGBoost Algorithm. In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 4169–4175.
  • Yang et al. (2019) Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. 2019. Federated Machine Learning: Concept and Applications. ACM Transactions on Intelligent Systems and Technology (TIST) 10, 2 (2019), 12.
  • Zhao et al. (2018) Lingchen Zhao, Lihao Ni, Shengshan Hu, Yanjiao Chen, Pan Zhou, Fu Xiao, and Libing Wu. 2018. InPrivate Digging: Enabling Tree-based Distributed Data Mining with Differential Privacy. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications. IEEE, 2087–2095.