Modern society is increasingly concerned with the unlawful use and exploitation of personal data. At the individual level, improper use of personal data poses a risk to user privacy. At the enterprise level, data leakage may have grave consequences for commercial interests. Actions are being taken by different societies. For example, the European Union has recently enacted a law known as the General Data Protection Regulation (GDPR), which is designed to give users more control over their personal data [Regulation2016, Albrecht2016, Mayer-Schonberger and Padova2015, Goodman and Flaxman2016]. Many enterprises that rely heavily on machine learning are making sweeping changes as a consequence.
Despite the difficulty of meeting the goal of user privacy protection, the need for different organizations to collaborate while building machine-learning models remains strong. In reality, many data owners do not have a sufficient amount of data to build high-quality models. For example, retail companies have user transaction data, which covers different data dimensions, or features, than the data held by credit-rating companies. Likewise, mobile phone users generate usage data, but each device holds only a small amount of user-activity data. To obtain a usable model for user-preference prediction, it would be necessary to integrate the data collected by the clients.
Thus, the challenge is to allow different data owners to collaborate in building high-quality machine learning models while, at the same time, protecting user data privacy and confidentiality. In the past, several attempts have been made to address the user-privacy problem while exchanging data [Hardy et al.2017, Mohassel and Zhang2017]. For example, Apple proposed using differential privacy [Dwork, Roth, and others2014, Dwork2008] to address the privacy-preservation issue. The basic idea of differential privacy (DP) is to add properly calibrated noise to data to disambiguate the identity of any individual when the data is exchanged and analyzed by a third party. However, as we discuss in this paper, DP only prevents user-data leakage to a certain degree and cannot completely rule out the identification of an individual. In addition, data exchange under DP still requires that the data change hands between organizations, which may not be allowed by strict laws like the GDPR. Furthermore, the DP method is lossy for machine learning, in that models built after noise injection can suffer a considerable drop in prediction accuracy.
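To make the trade-off concrete, the core DP mechanism, adding calibrated noise before release, can be sketched in a few lines. This is an illustrative sketch only; the query, `epsilon`, and sensitivity values are hypothetical and not any vendor's deployment:

```python
import math
import random

def dp_sum(values, epsilon, sensitivity=1.0):
    """Release sum(values) under epsilon-differential privacy by adding
    Laplace noise with scale = sensitivity / epsilon (larger epsilon
    means less noise and weaker privacy)."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    # Inverse-CDF sampling of the Laplace(0, scale) distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(values) + noise
```

The released value is perturbed, so any model trained on such outputs inherits the noise; this is exactly the accuracy loss attributed to DP above.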
More recently, Google introduced a federated learning framework [Konečnỳ et al.2016] on its Android cloud. The basic idea is to allow individual clients to encrypt their models, which are then uploaded and aggregated at a central cloud site. The machine-learning process at that site can make use of these encrypted models without leaking the clients' information. This framework applies to a data-partition setting where each partition corresponds to a subset of data samples collected from one or more users.
In this paper, we consider a general setting in which multiple parties collaboratively build machine-learning models while protecting user privacy and data confidentiality. Our setting is shown in Figure 2. We consider a collection of parties, each holding a part of its own data. We can visualize the data located at the different parties as subsections of a big data table obtained by taking the union of all data at the different parties. The data at each party then has the following properties:
The big data table is vertically split, such that the data are partitioned in the feature dimension among the parties;
only one data provider has the label information;
the users partially overlap across the different parties.
Our goal is then to allow each party to build a prediction model for some designated label while disallowing any party from obtaining any information about the data of the other parties.
Our setting has several distinguishing features. In contrast with most existing work on privacy-preserving data mining and machine learning, its complexity is significantly increased. Unlike the situation where the data are horizontally split, this setting requires a more complex mechanism to decompose the loss function at each party [Vaidya2008, Vaidya and Clifton2005, Hardy et al.2017]. In addition, in each model-building process, only one data provider owns the label information. This requires a secure protocol to guide the learning process instead of sharing the label information explicitly among all parties. Finally, data confidentiality and privacy concerns prevent the parties from exposing their own users who are not common among the group when building the models. Hence, entity alignment must also be conducted in a sufficiently secure manner.
In this paper, we propose a novel end-to-end privacy-preserving tree-boosting algorithm and framework, known as SecureBoost, to enable machine learning in a federated setting. Unlike previous federated learning frameworks that split the data along the user dimension, our framework supports collaborative model building when the data is split among different parties along the feature dimension. Our federated learning framework operates in two steps. First, we find the common users among the parties under a privacy-preserving constraint. Then, we collaboratively learn a shared classification or regression model without leaking any user information among the parties. We summarize our main contributions as follows:
We formally define a novel problem of privacy-preserving machine learning over vertically partitioned data in the setting of federated learning.
We present an approach to collaboratively train a high-quality tree boosting model for each party while keeping the training data secret across multiple parties. The machine learning process proceeds without the participation of a trusted third party.
Finally and importantly, we prove that our approach is lossless, in the sense that it is as accurate as any centralized non-privacy-preserving method that brings all data to a central location.
In addition, along with a proof of security, we discuss what would be required to make the protocols completely secure.
Preliminaries and Related Work
The existing literature on privacy-preserving machine learning broadly addresses two objectives: protecting the privacy of the data used for learning a model, and protecting the privacy of the data used as input to an existing model. To protect the privacy of the data used for learning, the authors of [Shokri and Shmatikov2015, Abadi et al.2016] propose to take advantage of differential privacy when learning a deep model. As one of the most popular privacy-preserving techniques, differential privacy [Dwork2008] protects sensitive data by injecting noise into the raw datasets such that the amount of information leaked about an individual record is minimized. Even though differential privacy ensures a very low probability of identifying an individual record, some probability of leakage remains, which is against the requirements of the GDPR. To address this problem, Google introduced a federated learning framework that brings model training to each mobile terminal [Konečnỳ et al.2016]; it achieves privacy protection by forbidding the data from being transferred out of the device. Another line of privacy-preserving techniques focuses on the inference stage instead of the training stage. Microsoft proposed a cryptographic deep learning framework, CryptoNets [Gilad-Bachrach et al.2016], based on homomorphic encryption, which enables a trained neural network to make encrypted predictions over encrypted data. However, it has to sacrifice accuracy to obtain security. In [Rouhani, Riazi, and Koushanfar2017], another framework, DeepSecure, is proposed to securely conduct deep learning execution on encrypted data using Yao's Garbled Circuit (GC) protocol. Although it does not involve a trade-off between utility and privacy, it suffers from serious inefficiency.
All the above methods are designed for horizontally partitioned data, in which the data providers record the same features for different entities. We instead consider a vertical data partition, as shown in Figure 2, in which multiple parties record different features at different sites. Unlike horizontal partitioning, which assumes that the ensemble happens over data samples, the vertical partition builds a model over a common set of users; how to collaboratively build such a model is an open question. Some previous works discuss privacy-preserving decision trees over vertically partitioned data [Vaidya and Clifton2005, Vaidya et al.2008]. However, the proposed methods have to reveal the class distribution over the given attributes, which causes a potential security risk. In addition, they can only handle discrete data, which is less practical for real-life scenarios. In contrast, our method guarantees stronger protection of the data and easily applies to continuous data. In [Djatmiko et al.2017], Patrini et al. propose a framework to jointly perform logistic regression over encrypted vertically-partitioned data by approximating the non-linear logistic loss with a Taylor expansion. Clearly, with this approximation, the algorithm inevitably incurs a loss of accuracy. In contrast, we propose a novel approach that is lossless in nature. We believe that the SecureBoost framework is the first attempt at privacy-preserving federated learning over vertically partitioned data that balances accuracy and security.
We now formally define our problem and clarify the difference between our setting and previous works. Let $X^k \in \mathbb{R}^{n_k \times d_k}$ be the data matrix held by the $k$-th of $m$ private parties, with each row being a data instance. We use $F^k$ to denote the feature set of the corresponding data matrix $X^k$. If we consider all data to come from a virtual big data table involving all users and all features, then we can view the data as being vertically split from this large virtual table across the different parties, such that each party holds a different set of vertically partitioned data over a subset of users. Two parties $i$ and $j$ have different sets of features, i.e., $F^i \cap F^j = \emptyset$. Different data providers may hold different sets of users as well, allowing some degree of overlap; that is, the user sets at the $m$ sites may differ from each other. As mentioned before, when building a model for a common task, we consider that only one of the data providers has a class attribute for classification or regression. We denote the class label as $y$, held by a single party.
We define the active party as the data provider who holds both a data matrix and the class label.
Since the class label information is indispensable for supervised learning, there must be an active party with access to the label. The active party naturally takes on the role of the dominating server in federated learning.
We define a data provider that has only a data matrix as a passive party.
Passive parties play the role of clients in the federated learning setting. They also need to build a model to predict the class label for their own prediction purposes, so they must collaborate with the active party to build a model that predicts labels for their future users using their own features.
The problem of privacy-preserving machine learning over vertically partitioned data in federated learning can be stated as follows:
Given: a vertically partitioned data matrix $\{X^k\}_{k=1}^{m}$ distributed on $m$ private parties, and the class labels $y$ held by the active party.
Learn: a machine learning model $M$ without giving any information about the data matrix of any party to the other parties in the process. The model $M$ is a function with a projection $M^k$ at each party $k$, such that $M^k$ takes as input only that party's own features $F^k$.
Lossless Constraint: We require the model $M$ to be lossless, meaning that the loss of $M$ under federated learning over the training data is the same as the loss of the model $M'$ built on the union of all data.
Federated Learning with SecureBoost
Our first goal is to find a common set of data samples at all participating parties so as to build a joint model. When the data is vertically partitioned over multiple parties, different parties hold different but partially overlapping sets of users, who may be identified by their unique user IDs. The problem is to find the commonly shared users or data samples across the parties without compromising the non-shared parts of the user sets. In particular, we align the data samples under an encryption scheme by using the privacy-preserving protocol for inter-database intersections [Liang and Chawathe2004].
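To give intuition for how two parties can intersect their ID sets without exposing the non-shared IDs, here is a toy private-set-intersection sketch based on commutative masking (Diffie-Hellman style). The prime, hashing, and single-process flow are simplifications for exposition, not the actual protocol of [Liang and Chawathe2004]:

```python
import hashlib
import random

# Toy parameters: 2**64 - 59 is prime; a real deployment would use a
# proper cryptographic group and a hash-to-group construction.
P = 2**64 - 59

def h(uid):
    """Hash a user ID into the group."""
    return int.from_bytes(hashlib.sha256(uid.encode()).digest(), "big") % P

def mask(ids, secret):
    """Raise each hashed ID to a party's secret exponent mod P."""
    return {pow(h(u), secret, P) for u in ids}

def psi(ids_a, ids_b):
    """Return the IDs party A learns to be shared with party B."""
    a = random.randrange(2, P - 2)      # party A's secret exponent
    b = random.randrange(2, P - 2)      # party B's secret exponent
    once_b = mask(ids_b, b)             # B masks its IDs, sends them to A
    twice_b = {pow(x, a, P) for x in once_b}   # A masks them a second time
    # Commutativity: (h(u)^a)^b == (h(u)^b)^a (mod P), so doubly-masked
    # values coincide exactly on the shared IDs.
    return [u for u in ids_a if pow(pow(h(u), a, P), b, P) in twice_b]
```

In a real two-party run, the second masking of A's values is done by B, so neither side ever sees the other's singly-masked set together with its own secret.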
After aligning the data across the parties under the privacy constraint, we consider the problem of jointly building a tree ensemble model over multiple parties without violating privacy in federated learning. Before discussing the details of the algorithm, we first introduce the general framework of federated learning, in which a typical iteration consists of four steps. First, each client downloads the current global model from the server. Second, each client computes an updated model based on its local data and the current global model. Third, each client sends the model update back to the server under encryption. Finally, the server, which resides with the active party in our setting, aggregates these model updates and constructs the improved global model.
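The four steps map onto a loop like the following sketch, in which the model representation and the `encrypt` and `aggregate` callables are generic placeholders rather than any specific system's API:

```python
def federated_round(global_model, clients, encrypt, aggregate):
    """One iteration of the generic federated loop: (1) clients download
    the current global model, (2) compute local updates, (3) upload them
    under encryption, (4) the server aggregates an improved model."""
    updates = [encrypt(client.compute_update(global_model))  # steps 1-3
               for client in clients]
    return aggregate(global_model, updates)                  # step 4
```

A trivial instantiation with scalar "models" and identity encryption shows the control flow without any cryptography.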
Following the general framework of federated learning, we can see that to achieve a privacy-preserving tree boosting framework in the federated setting, we essentially have to answer three questions: (1) How can each client (i.e., a passive party) compute an updated model based on its local data without reference to the class label? (2) How can the server (i.e., the active party) aggregate all the updated models and obtain a new global model? (3) How can the updated global model be shared among all parties without leaking any information at inference time? To answer these questions, we start by reviewing a tree ensemble model, XGBoost [Chen and Guestrin2016], in the non-federated setting.
Given a data set with $n$ samples and $d$ features, XGBoost predicts the output by using $K$ regression trees:
$$\hat{y}_i = \sum_{k=1}^{K} f_k(\mathbf{x}_i). \qquad (1)$$
To learn the set of regression tree models used in Eq.(1), it greedily adds a tree $f_t$ at the $t$-th iteration to minimize the following loss:
$$\mathcal{L}^{(t)} = \sum_{i=1}^{n} l\big(y_i,\, \hat{y}_i^{(t-1)} + f_t(\mathbf{x}_i)\big) + \Omega(f_t), \qquad (2)$$
which is optimized through its second-order approximation
$$\tilde{\mathcal{L}}^{(t)} = \sum_{i=1}^{n} \Big[ l\big(y_i, \hat{y}_i^{(t-1)}\big) + g_i f_t(\mathbf{x}_i) + \tfrac{1}{2} h_i f_t^2(\mathbf{x}_i) \Big] + \Omega(f_t), \qquad (3)$$
where $g_i = \partial_{\hat{y}_i^{(t-1)}} l\big(y_i, \hat{y}_i^{(t-1)}\big)$, $h_i = \partial^2_{\hat{y}_i^{(t-1)}} l\big(y_i, \hat{y}_i^{(t-1)}\big)$, and $\Omega(f_t)$ is a regularization term.
When constructing the regression tree at the $t$-th iteration, the algorithm starts from a tree of depth 0 and adds a split to each leaf node until reaching the maximum depth. In particular, it employs the following equation to determine the best split:
$$\mathcal{L}_{split} = \frac{1}{2}\left[\frac{\big(\sum_{i \in I_L} g_i\big)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\big(\sum_{i \in I_R} g_i\big)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\big(\sum_{i \in I} g_i\big)^2}{\sum_{i \in I} h_i + \lambda}\right] - \gamma. \qquad (5)$$
In the above equation, $I_L$ and $I_R$ are the instance spaces of the left and right tree nodes after the split, and $I = I_L \cup I_R$. The split that maximizes the score is selected as the best split.
When it obtains an optimal tree structure, the optimal weight $w_j^{*}$ of leaf $j$ can be computed by the following equation:
$$w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}, \qquad (4)$$
where $I_j$ is the instance space of leaf $j$.
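In the clear (non-federated) setting, the two quantities just reviewed, the split score and the optimal leaf weight, reduce to short functions of the per-instance statistics $g_i$ and $h_i$. A sketch following the XGBoost formulation, with `lam` and `gamma` as the usual regularization parameters:

```python
def leaf_weight(g, h, lam=1.0):
    """Optimal leaf weight: w* = -sum(g) / (sum(h) + lambda)."""
    return -sum(g) / (sum(h) + lam)

def split_gain(g, h, left_idx, lam=1.0, gamma=0.0):
    """Score of splitting an instance set into left/right halves,
    following the gain formula reviewed above (higher is better)."""
    left = set(left_idx)
    gl = sum(gi for i, gi in enumerate(g) if i in left)
    hl = sum(hi for i, hi in enumerate(h) if i in left)
    gr, hr = sum(g) - gl, sum(h) - hl

    def score(G, H):
        return G * G / (H + lam)

    return 0.5 * (score(gl, hl) + score(gr, hr) - score(sum(g), sum(h))) - gamma
```

Note that both functions consume only sums of $g_i$ and $h_i$ over instance sets, which is the property the federated protocol exploits.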
From the above review, we make the following observations:
(1) The evaluation of split candidates and the calculation of the optimal weight of a leaf depend only on $g_i$ and $h_i$.
(2) The class label is needed for the calculation of $g_i$ and $h_i$. For instance, when we take the logistic loss as the loss function, we have $g_i = \hat{y}_i^{(t-1)} - y_i$ and $h_i = \hat{y}_i^{(t-1)}\big(1 - \hat{y}_i^{(t-1)}\big)$. Hence, it is easy to recover the class label from $g_i$ and $h_i$ once we obtain the value of $\hat{y}_i^{(t-1)}$.
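Observation (2) is easy to check numerically: with logistic loss, anyone who holds $g_i$ together with the previous prediction can read the label off directly. A toy illustration with hypothetical values:

```python
def recover_label(g, y_hat_prev):
    """With logistic loss, g = y_hat_prev - y, so y = y_hat_prev - g."""
    return round(y_hat_prev - g)
```

This is precisely why $g_i$ and $h_i$ must be treated as sensitive data in the protocol that follows.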
With the guidance of the above observations, we now discuss how to adapt a non-federated gradient boosted tree model to the federated learning setting. Following observation (1), each passive party can determine its locally optimal split independently using only its local data once it obtains $g_i$ and $h_i$. Thus, a naive solution is to require the active party to send $g_i$ and $h_i$ to each passive party. However, according to observation (2), $g_i$ and $h_i$ must be regarded as sensitive data as well, since they can be used to discover the class label information. To ensure security, passive parties must not access $g_i$ and $h_i$ directly; to keep them confidential, we require the active party to encrypt $g_i$ and $h_i$ before sending them to the passive parties. The remaining challenge is how each passive party can determine its locally optimal split given only the encrypted $g_i$ and $h_i$.
According to Eq.(5), the optimal split can be found if $\sum_{i \in I_L} g_i$ and $\sum_{i \in I_L} h_i$ can be calculated for every possible split, where $I_L$ is the instance space of the left node after the split. Next, we show how to obtain these sums from the encrypted $g_i$ and $h_i$ using an additively homomorphic encryption scheme [Paillier1999].
First, we denote the encryption of a number $u$ under the additively homomorphic encryption scheme as $[\![u]\!]$. Recalling the main property of an additively homomorphic encryption scheme, for any two numbers $u$ and $v$, we have $[\![u]\!] + [\![v]\!] = [\![u + v]\!]$. Therefore, $\sum_{i \in I_L} [\![g_i]\!]$ is equivalent to $[\![\sum_{i \in I_L} g_i]\!]$ and, similarly, $[\![\sum_{i \in I_L} h_i]\!]$ can be computed as $\sum_{i \in I_L} [\![h_i]\!]$. By taking advantage of the additively homomorphic encryption scheme, the best split can be found in the following way. First, each passive party computes $[\![\sum_{i \in I_L} g_i]\!]$ and $[\![\sum_{i \in I_L} h_i]\!]$ for all possible splits locally, and sends the values back to the active party. After collecting the values from all passive parties, the active party deciphers the sums and calculates the globally optimal split according to Eq.(5). In this case, the communication cost between the active party and each passive party is $O(d \cdot n \cdot c)$ for a single split, where $c$ denotes the size of a ciphertext, $n$ represents the number of instances associated with the node to be split, and $d$ is the number of features held by the passive party.
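To make the additive property concrete, below is a minimal textbook Paillier sketch with toy key sizes (and no fixed-point encoding, so real-valued gradients would need scaling to integers first). It illustrates that multiplying ciphertexts decrypts to the sum of plaintexts, which is exactly what the aggregation above relies on:

```python
import random

def _is_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def _random_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if _is_prime(p):
            return p

def keygen(bits=128):
    """Toy Paillier keypair: public n = p*q, private (phi, phi^-1 mod n)."""
    p = _random_prime(bits)
    q = _random_prime(bits)
    while q == p:
        q = _random_prime(bits)
    n = p * q
    phi = (p - 1) * (q - 1)
    return n, (phi, pow(phi, -1, n))

def encrypt(n, m):
    """c = (1 + n)^m * r^n mod n^2 for a fresh random r."""
    r = random.randrange(1, n)
    n2 = n * n
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def he_add(n, c1, c2):
    """Additive homomorphism: Enc(u) * Enc(v) mod n^2 = Enc(u + v)."""
    return c1 * c2 % (n * n)

def decrypt(n, priv, c):
    phi, mu = priv
    return (pow(c, phi, n * n) - 1) // n * mu % n
```

Production systems use vetted implementations with large keys; this sketch only demonstrates why ciphertext sums suffice for the protocol.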
We can observe that this solution is not efficient, since it requires transferring $[\![\sum_{i \in I_L} g_i]\!]$ and $[\![\sum_{i \in I_L} h_i]\!]$ for all possible split candidates. To construct the tree with lower communication cost, we take advantage of the approximate framework proposed by [Chen and Guestrin2016]; the detailed calculation is shown in Algorithm 1. Instead of computing the sums for every candidate directly, each passive party maps its features into buckets and then aggregates the encrypted gradient statistics per bucket. In this way, the active party only needs to collect the aggregated encrypted gradient statistics from all passive parties, and it can determine the globally optimal split as described in Algorithm 2. In this case, the communication cost for constructing a regression tree can be reduced to $O(d \cdot \frac{n}{b} \cdot c)$, where $b$ denotes the number of instances in one bucket. Clearly, $\frac{n}{b} \ll n$, so we can indeed decrease the communication cost. After the active party obtains the global optimal split, described as [party id ($i$), feature id ($k$), threshold id ($v$)], it returns the feature id and threshold id to the corresponding passive party $i$. Passive party $i$ determines the selected attribute's value based on the values of $k$ and $v$, and then partitions the current instance space according to that value. In addition, it builds a lookup table locally to record the selected attribute's value, [feature, threshold value], as shown in Figure 3. After that, it returns the index of the record and the instance space of the left node after the split ($I_L$) back to the active party. The active party splits the current node according to the received instance space and associates the current node with [party id, record id], until a stopping criterion or the maximum depth is reached. All leaf nodes are stored at the active party.
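In the clear, the bucket-aggregation step of Algorithm 1 amounts to the following sketch; in the actual protocol, the additions into `G` and `H` would be homomorphic additions over ciphertexts of $g_i$ and $h_i$, and the bucket boundaries would come from quantile sketches:

```python
import bisect

def bucket_gradient_stats(feature_values, g, h, split_points):
    """Aggregate per-instance gradient statistics into buckets defined by
    sorted split_points: one (G, H) pair per bucket is transmitted instead
    of per-split-candidate sums."""
    G = [0.0] * (len(split_points) + 1)
    H = [0.0] * (len(split_points) + 1)
    for x, gi, hi in zip(feature_values, g, h):
        b = bisect.bisect_left(split_points, x)  # bucket index for value x
        G[b] += gi
        H[b] += hi
    return G, H
```

The active party can then recover the left-side sums for any candidate split by prefix-summing the decrypted bucket aggregates.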
Federated Inference based on the Learned Model
In this section, we describe how to use the learned model (distributed among the parties) to classify a new instance, even though the features of the instance to be classified are private and distributed among the parties. Since each site knows its own features (and can thus evaluate its own branching decisions) but knows nothing of the others', we need a secure distributed protocol that controls how evaluation passes from site to site, based on the decisions made.
To illustrate the inference process, we consider a system with three parties, as depicted in Figure 3. Specifically, party 1 is the active party, which collects information about the user's monthly bill payment and level of education, as well as the label information. Party 2 and party 3 are passive parties, which hold the features [age, gender, marriage status] and [amount of given credit], respectively. Suppose we wish to know whether a user would make a payment on time. All sites have to collaborate to make the prediction, and the whole process is coordinated by the active party. Starting from the root, by referring to the associated record [party id, record id], the active party learns which party holds the root node and requires that party to retrieve the corresponding attribute, Bill Payment, from its lookup table based on the record id. Since the classifying attribute is the bill payment, and the party holding it knows that the user's bill payment is less than the threshold value, it makes the decision to move down to the left child node. Then, the active party refers to the record [party id: 3, record id: 1] associated with that node and requires party 3 to conduct the same operations. This process continues until a leaf is reached.
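The walk can be mimicked with ordinary dictionaries. The tree layout, feature names, and threshold values below are hypothetical stand-ins for Figure 3; in the real protocol, each lookup-table access is performed locally by the owning party, never by the coordinator:

```python
# Hypothetical structures: the active party stores only the tree skeleton
# as (party_id, record_id) references plus child pointers; each party
# keeps its own private lookup table mapping record_id -> (feature, threshold).
TREE = {
    0: (1, 1, 1, 2),          # root: resolved by party 1, record 1
    1: (3, 1, 3, 4),          # inner node: resolved by party 3, record 1
    2: "leaf:0.9", 3: "leaf:0.2", 4: "leaf:0.7",
}
LOOKUP = {
    1: {1: ("bill_payment", 5000)},    # party 1's private table
    3: {1: ("credit_amount", 10000)},  # party 3's private table
}

def predict(features_by_party):
    """Coordinator walk: at each node, the owning party evaluates its own
    split locally; only the branch decision is shared."""
    node = 0
    while isinstance(TREE[node], tuple):
        party, record, left, right = TREE[node]
        feature, threshold = LOOKUP[party][record]  # resolved locally by `party`
        node = left if features_by_party[party][feature] < threshold else right
    return float(TREE[node].split(":")[1])
```

The coordinator never sees feature values or thresholds, only which child to visit next.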
Theoretical Assessment for Lossless Property
SecureBoost is lossless, as defined in the Problem Statement section, provided that the federated model and the corresponding non-federated model built on the union of all data share the same initialization and hyperparameters.
The loss of the model under federated learning is the same as the loss of the model built on the union of all data, because the two models are identical. According to Eq.(5), $g_i$ and $h_i$ are the only information needed for the calculation of the best split. Provided the same initialization, in each iteration every instance has the same values of $g_i$ and $h_i$ under both settings, so the two models always reach the same best split during the construction of each tree. Thereby, the resulting models are identical, which ensures the lossless property. ∎
In this section, we discuss the security of our proposed SecureBoost framework. In particular, we will provide detailed analysis of information leakage of the framework and discuss the security of our framework in the presence of semi-honest adversaries. In addition, along with a proof of security, we discuss what would be required to make the protocols completely secure.
Analysis of Information Leakage
As SecureBoost consists of two components, we discuss the information leakage of these two components separately.
During privacy-preserving entity alignment, the encryption techniques guarantee that nothing is revealed except the IDs of the commonly shared users across the parties. Although revealing the IDs of the shared users might cause some potential risk, this level of leakage is acceptable in most scenarios.
For the construction of the tree ensemble model, all that is revealed is the following: (1) each party knows the instance space for each split; (2) each party knows the tree nodes held by itself; (3) the active party knows the number of features held by each passive party; (4) the active party knows the actual values of $g_i$ and $h_i$; (5) the active party knows which site is responsible for the decision made at each node. Considering a system with one passive party and one active party, we now discuss the potential security risk caused by the leaked information.
First, we study how much information the passive party can learn about the active party. As we know, SecureBoost is essentially a decision tree model. Although its leaf nodes do not hold a class label, the instances associated with the same leaf still strongly indicate that they may belong to the same class or yield similar regression results. Thereby, in SecureBoost, we require the leaf nodes to be unknown to the passive party in order to prevent the label information from being disclosed. However, such protection is not enough to guarantee security. Consider the situation in which a passive party holds the parent node of two leaf nodes. In this case, the instance spaces of those leaf nodes are no longer hidden from the passive party, which can guess that all instances associated with the same leaf belong to the same class. The confidence of this inference is determined by the leaf purity, i.e., the proportion of samples that belong to the majority class. Thus, we take leaf purity as the metric for a quantitative information-leakage analysis of SecureBoost. More precisely, we consider the scenario of binary classification, as it potentially causes the greatest security risk.
According to Eq.(2), to learn the SecureBoost model, we greedily add a decision tree $f_t$ at the $t$-th iteration to fit the residual $y_i - \hat{y}_i^{(t-1)}$. Therefore, when $t > 1$, the instances associated with the same leaf only indicate that they may have similar residuals, which cannot be directly used to infer the label information. However, when $t = 1$, $f_1$ tries to fit the label $y_i$ itself. In this case, the instance spaces of the leaf nodes may reveal the label information. Thereby, our security concern mainly focuses on how much information can be inferred from the first tree, $f_1$. Let us start our analysis with Theorem 2.
For a learned SecureBoost model, the information leakage is given by the weights of the first tree's leaves.
The loss function for the binary classification problem is the logistic loss:
$$l\big(y_i, z_i\big) = y_i \ln\big(1 + e^{-z_i}\big) + (1 - y_i)\ln\big(1 + e^{z_i}\big),$$
where $z_i$ is the predicted log-odds and $\hat{y}_i = s(z_i)$ is the corresponding probability, with $s(\cdot)$ the sigmoid function. Based on this loss function, we have $g_i = \hat{y}_i^{(0)} - y_i$ and $h_i = \hat{y}_i^{(0)}\big(1 - \hat{y}_i^{(0)}\big)$ during the construction of the decision tree at the first iteration. Specifically, $\hat{y}_i^{(0)}$ is given as an initialized value: suppose we initialize all predictions as $\hat{y}_i^{(0)} = s(c)$ for a constant $c$. According to Eq.(4), for the instances associated with a specific leaf $j$,
$$w_j = -\frac{\sum_{i \in I_j}\big(s(c) - y_i\big)}{\sum_{i \in I_j} s(c)\big(1 - s(c)\big) + \lambda}.$$
Suppose the number of instances associated with leaf $j$ is $n_j$ and the percentage of positive samples among them is $p_j$. When $n_j$ is relatively big, we can ignore $\lambda$. Therefore, we have
$$w_j \approx \frac{p_j - s(c)}{s(c)\big(1 - s(c)\big)}.$$
Notice that $p_j$ gives the leaf purity of leaf $j$. In other words, given a learned SecureBoost model, the information leakage can be inferred from the weights of the first tree's leaves.
According to Theorem 2, as long as the weights of the first tree's leaves are close enough to 0, the protocol is considered secure.
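The leakage described by Theorem 2 can be made concrete: assuming the logistic-loss leaf weight with a constant initial prediction $s(c)$ and negligible $\lambda$, the approximate relation $w_j \approx (p_j - s(c))/(s(c)(1 - s(c)))$ is invertible, so an observer of a first-tree leaf weight recovers the leaf's positive fraction. A sketch:

```python
def first_tree_leaf_weight(p_j, s_c=0.5):
    """Approximate first-tree leaf weight as a function of the leaf's
    positive-sample fraction p_j: w ~= (p_j - s(c)) / (s(c)(1 - s(c)))."""
    return (p_j - s_c) / (s_c * (1.0 - s_c))

def purity_from_weight(w_j, s_c=0.5):
    """An observer of w_j inverts the relation to recover p_j."""
    return w_j * s_c * (1.0 - s_c) + s_c
```

A weight near 0 inverts to $p_j \approx s(c)$, i.e., the prior, which is why near-zero first-tree leaf weights reveal essentially nothing.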
Second, we focus on whether the active party can learn private information about the passive party; specifically, whether the active party can recover a portion of the features held by the passive parties with some confidence. During training, the active party learns (1) the instance space for each split; (2) the tree nodes held by itself; (3) the number of features held by each passive party; (4) the actual values of $g_i$ and $h_i$; and (5) which site is responsible for the decision made at each node. To recover the features, the active party would have to learn the partial order relation among all instances with regard to a specific feature. However, the only information it obtains is how the best splits partition the instance space, which is obviously not enough to learn the partial order relation.
Generally speaking, the level of information leakage for SecureBoost is acceptable based on our analysis.
In this subsection, we discuss the security of our framework under the semi-honest assumption. In our security definition, all parties are honest-but-curious, and some corrupt parties might cooperate with each other in order to gather private information. Specifically, we require that the active party not collude with any passive party. We now prove that SecureBoost is secure under this security definition.
Our SecureBoost system can be split into two parts: the first part includes only the active party, and the second part includes all passive parties. When all passive parties collude, the system is equivalent to a system with one active party and one super passive party holding all the features of the passive parties. As discussed in the section Analysis of Information Leakage, when our system has only one active party and one passive party, the level of information leakage is acceptable. Therefore, our system is secure under the semi-honest assumption. ∎
As discussed in the section Analysis of Information Leakage, our main security concern is that the instance spaces of the leaf nodes may reveal too much information, and the passive party indeed has a chance to learn the instance spaces of the leaf nodes when collaboratively constructing the tree ensemble model with the active party. To alleviate this problem, we propose Completely SecureBoost, which keeps the passive parties out of the construction of the first tree. Unlike in SecureBoost, the active party of Completely SecureBoost learns the first tree independently, based on its own features, rather than in collaboration with the passive parties. Thereby, the instance spaces of the leaf nodes of the first tree are protected, and all that a passive party can learn is the residuals. Although we have intuitively illustrated that the residuals will not reveal much information once the first tree is protected, to make this more plausible we now give a theoretical proof, presented as Theorem 3.
The residuals of the subsequent trees will not reveal much information when the leaf purity of the previous tree is high.
As mentioned before, for the binary classification problem we have $g_i = \hat{y}_i^{(t-1)} - y_i$ and $h_i = \hat{y}_i^{(t-1)}\big(1 - \hat{y}_i^{(t-1)}\big)$, where $\hat{y}_i^{(t-1)} = s\big(z_i^{(t-1)}\big)$ is the probability predicted by the previous trees and $s(\cdot)$ is the sigmoid function. Hence, the residual fitted at the $t$-th iteration is $y_i - \hat{y}_i^{(t-1)} = -g_i$.
When we construct the decision tree at the $t$-th iteration with $T$ leaves to fit the residuals of the previous tree, in essence we split the data into $T$ clusters to minimize the following loss:
$$\mathcal{L}^{(t)} = -\frac{1}{2}\sum_{j=1}^{T}\frac{\big(\sum_{i \in I_j} g_i\big)^2}{\sum_{i \in I_j} h_i + \lambda}.$$
We know $g_i = \hat{y}_i^{(t-1)} - y_i$ and $\hat{y}_i^{(t-1)} \in (0, 1)$. Thus, we have $g_i \in (-1, 0)$ for positive samples and $g_i \in (0, 1)$ for negative samples. Taking the range of $g_i$ into consideration, we rewrite the above equation as follows:
$$\mathcal{L}^{(t)} = -\frac{1}{2}\sum_{j=1}^{T}\frac{\Big(\sum_{i \in N_j} g_i + \sum_{i \in P_j} g_i\Big)^2}{\sum_{i \in I_j} h_i + \lambda}, \qquad (8)$$
where $N_j$ and $P_j$ denote the sets of negative and positive samples associated with leaf $j$, respectively. We denote the expectation of $g_i$ for positive samples as $\mu^{+}$ and the expectation of $g_i$ for negative samples as $\mu^{-}$. When we have a large number of samples but a small number of leaf nodes $T$, we can use the following equation to approximate Eq.(8):
$$\mathcal{L}^{(t)} \approx \sum_{j=1}^{T}\frac{\big(n_j^{-}\mu^{-} + n_j^{+}\mu^{+}\big)^2}{-2\big(\sum_{i \in I_j} h_i + \lambda\big)}, \qquad (9)$$
where $n_j^{-}$ and $n_j^{+}$ represent the numbers of negative and positive samples associated with leaf $j$. Since the numerator is a square and $h_i > 0$, the numerator has to be positive and the denominator has to be negative; thus, the whole expression has to be negative. Minimizing Eq.(9) is equal to maximizing the numerator while minimizing the magnitude of the denominator. Notice that the denominator grows linearly in $n_j = n_j^{-} + n_j^{+}$ while the numerator grows quadratically in it, so the expression is dominated by the numerator. Thereby, minimizing Eq.(9) can be regarded as maximizing the numerator $\big(n_j^{-}\mu^{-} + n_j^{+}\mu^{+}\big)^2$. Ideally, we require $|\mu^{-}|$ and $|\mu^{+}|$ to be small, so that this maximization gains little from separating the two classes, in order to prevent the label information from divulging; the smaller they are, the more likely we can achieve this goal. And we know $g_i = \hat{y}_i^{(t-1)}$ for negative samples and $g_i = \hat{y}_i^{(t-1)} - 1$ for positive samples. Thereby, $\mu^{-} = E\big[\hat{y}^{(t-1)} \mid y = 0\big]$ and $\mu^{+} = E\big[\hat{y}^{(t-1)} \mid y = 1\big] - 1$. $\mu^{-}$ can be calculated as follows:
$$\mu^{-} = \frac{1}{N}\sum_{k} n_k \big(1 - p_k\big)\, s\big(w_k + c\big), \qquad (10)$$
where $N$ and $P$ correspond to the numbers of negative and positive samples in total, $p_k$ is the percentage of positive samples associated with leaf $k$ of the decision tree at the $(t-1)$-th iteration (the previous decision tree), $n_k$ denotes the number of instances associated with leaf $k$ of the previous tree, $w_k$ represents the weight of the $k$-th leaf of the previous tree, and $c$ is the initialization constant. When the positive samples and negative samples are balanced, $N = P = \frac{n}{2}$, we have
$$\mu^{-} = \frac{2}{n}\sum_{k} n_k \big(1 - p_k\big)\, s\big(w_k + c\big). \qquad (11)$$
As observed from Eq.(11), each term $n_k(1 - p_k)s(w_k + c)$ is small whenever the leaf contains few negatives ($p_k$ close to 1) or its prediction is close to 0 ($p_k$ close to 0, by Theorem 2); in both cases the leaf is pure, and the term attains its maximum around $p_k = \frac{1}{2}$. In order to achieve a smaller $\mu^{-}$, we therefore want the deviation of $s(w_k + c)$ from $s(c)$ to be as big as possible in every leaf. When we have a proper initialization of $c$, for instance $c = 0$ with $s(c) = \frac{1}{2}$, maximizing this deviation is the same as maximizing $\max(p_k, 1 - p_k)$, which exactly is the leaf purity; the symmetric argument bounds $|\mu^{+}|$. Therefore, we have proved that high leaf purity guarantees small $|\mu^{-}|$ and $|\mu^{+}|$, which finally results in less information leakage. We complete our proof. ∎
Given Theorem 3, we can conclude that Completely SecureBoost is secure when its first tree learns enough information to mask the actual labels with residuals.
| # samples A | # samples B |
In this section, we conduct experiments on two public datasets, summarized as follows.
Credit 1 (https://www.kaggle.com/c/GiveMeSomeCredit/data): It involves the problem of classifying whether a user will suffer from serious financial problems.
Credit 2 (https://www.kaggle.com/uciml/default-of-credit-card-clients-dataset): It is also a credit-scoring dataset, correlated with the task of predicting whether a user will make payment on time.
In our experiments, we use part of each dataset for training and the remainder for testing. We split the data vertically into two halves and distribute them to two parties. To fairly compare the different methods, we use the same maximum depth for the individual regression trees, the same subsampling fraction for fitting them, and the same learning rate for all methods. The Paillier encryption scheme is taken as our additively homomorphic scheme. All experiments are conducted on a single machine with an Intel Core CPU.
As SecureBoost consists of two components, the privacy-preserving entity alignment and the secure federated tree boosting system, we study the scalability of each component separately.
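For reference, once the active party has decrypted the passive party's per-bucket aggregates of gradients $G$ and Hessians $H$, it can evaluate each candidate split with the standard second-order gain of [Chen and Guestrin2016]. A minimal sketch (our own illustration; the function name and regularization default are ours):

```python
def split_gain(G_L, H_L, G_R, H_R, reg_lambda=1.0):
    # Second-order (XGBoost-style) split gain from left/right gradient sums G
    # and Hessian sums H; in SecureBoost these sums arrive as decrypted
    # bucket aggregates from the passive party.
    def score(G, H):
        return G * G / (H + reg_lambda)
    return 0.5 * (score(G_L, H_L) + score(G_R, H_R) - score(G_L + G_R, H_L + H_R))

split_gain(10.0, 5.0, -10.0, 5.0)  # large gain: the split separates the gradients well
```

Because only aggregated sums are decrypted, the passive party's per-instance feature values never leave its site.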
Efficiency of Privacy-Preserving Entity Alignment
We consider a system with only two parties when evaluating the scalability of the privacy-preserving entity alignment algorithm. The numbers of samples held by parties A and B are the important factors. To investigate the effects of these two factors, we vary the number of samples on parties A and B on a log scale from to , studying the effect of each factor while fixing the other, and record how the change affects the running time. The results are shown in Table 1, with the following observations.
In general, the runtime varies with the number of samples on party A in much the same way as with the number of samples on party B, which suggests that the two sample sizes contribute equally to the running time.
The running time depends strongly on max(# samples A, # samples B). When the two parties hold equally many samples, the runtime increases almost linearly with the sample size.
It takes only around minutes of computation to align entities when both parties hold samples, which is fairly efficient. This observation validates the scalability of our entity alignment algorithm.
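Our alignment follows the privacy-preserving inter-database protocol of [Liang and Chawathe2004]. As a rough illustration of the general idea only (not the exact protocol, and not a hardened implementation; the modulus, hashing, and function names below are illustrative), a commutative-encryption intersection can be sketched as follows: each party masks its hashed IDs with a secret exponent, the parties exchange and re-mask, and because modular exponentiation commutes, the doubly-masked values collide exactly on common IDs:

```python
import hashlib
import random

P = 2**61 - 1  # illustrative prime modulus; far too small for real security

def h(x):
    # Hash an ID string into the multiplicative group mod P.
    return int.from_bytes(hashlib.sha256(x.encode()).digest(), "big") % P

def psi(ids_a, ids_b):
    a = random.randrange(2, P - 1)  # party A's secret exponent
    b = random.randrange(2, P - 1)  # party B's secret exponent
    masked_a = {pow(h(x), a, P): x for x in ids_a}
    masked_b = [pow(h(x), b, P) for x in ids_b]
    # Each side applies its own exponent to the other's masked values;
    # h(x)^(ab) == h(x)^(ba), so equal IDs collide.
    double_a = {pow(v, b, P): x for v, x in masked_a.items()}
    double_b = {pow(v, a, P) for v in masked_b}
    return {x for v, x in double_a.items() if v in double_b}

psi(["u1", "u2", "u3"], ["u2", "u3", "u4"])  # {'u2', 'u3'}
```

Neither side learns anything about the other's non-intersecting IDs beyond their masked images.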
|Mean Purity||Credit 1||Credit 2|
| Accuracy | Credit 1 | Credit 2 |
| --- | --- | --- |
| 1st Tree of SecureBoost | 0.9298 | 0.7806 |
| 1st Tree of Completely SecureBoost | 0.9186 | 0.7793 |
| Overall Performance of SecureBoost | 0.9345 | 0.8180 |
| Overall Performance of Completely SecureBoost | 0.9331 | 0.8179 |

| F1-score | Credit 1 | Credit 2 |
| --- | --- | --- |
| 1st Tree of SecureBoost | 0.012 | 0 |
| 1st Tree of Completely SecureBoost | 0 | 0 |
| Overall Performance of SecureBoost | 0.2576 | 0.4634 |
| Overall Performance of Completely SecureBoost | 0.2549 | 0.4650 |

| AUC | Credit 1 | Credit 2 |
| --- | --- | --- |
| 1st Tree of SecureBoost | 0.7002 | 0.6381 |
| 1st Tree of Completely SecureBoost | 0.6912 | 0.6320 |
| Overall Performance of SecureBoost | 0.8461 | 0.7701 |
| Overall Performance of Completely SecureBoost | 0.8423 | 0.7682 |
Efficiency of Secure Federated Tree Boosting System
We note that the efficiency of the secure federated tree boosting system may be influenced by (1) the convergence rate; (2) the maximum depth of the individual regression trees; (3) the sample size of the dataset; and (4) the feature size of the dataset. In this subsection, we study the impact of each of these four variables on the runtime of learning. All experiments are conducted on the Credit dataset.
First, we are interested in the convergence rate of our proposed system. We compare the convergence rate of SecureBoost with non-federated tree boosting implementations, namely GBDT (http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html) and XGBoost (https://github.com/dmlc/xgboost). As can be observed from Figure 4, SecureBoost shows a learning curve on the training dataset similar to those of the non-federated baselines, and performs slightly better than them on the test dataset. In addition, as the number of boosting stages increases, both the training loss and the test loss drop rapidly at first; when the boosting stages keep increasing from to , the loss does not vary much on either the training or the test dataset. To sum up, the algorithm converges quickly, which is appealing in practice as it significantly reduces the computational cost.
Next, to investigate how the maximum depth of the individual trees affects the runtime of learning, we vary the maximum depth of each individual tree over and record the runtime of one boosting stage. As depicted in Figure 5 (a), the runtime increases almost linearly with the maximum depth of each individual tree. This indicates that we can train a relatively deep tree in comparatively little time, which is very appealing in practice, especially in big-data scenarios.
Finally, we study the impact of data size on the scalability of our proposed system. We augment the feature set by taking feature products. As shown in Figure 5 (b) and Figure 5 (c), we investigate the effects of the feature number and the sample number, respectively: we vary the feature number in the range of and the sample number in , while fixing the maximum depth of the individual regression trees to . We compare the runtime of one boosting stage to see how each variable affects the efficiency of the algorithm. We make similar observations in Figure 5 (b) and Figure 5 (c): the feature number and the sample number contribute equally to the running time. In addition, our proposed framework scales well even with relatively big data.
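The feature-product augmentation can be sketched as follows (a plausible reading of the augmentation step; the exact construction is not spelled out in the text, and the function name is ours):

```python
from itertools import combinations

def feature_products(row):
    # Augment a feature vector with all pairwise products of its entries,
    # growing the feature dimension quadratically for the scalability study.
    return list(row) + [a * b for a, b in combinations(row, 2)]

feature_products([1.0, 2.0, 3.0])  # [1.0, 2.0, 3.0, 2.0, 3.0, 6.0]
```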
Performance of Completely SecureBoost
To investigate the performance of Completely SecureBoost in terms of both security and prediction accuracy, we aim to answer the following two questions: (1) Does the first tree, built upon only the features held by the active party, learn enough information to mask the actual labels with residuals? (2) Does Completely SecureBoost suffer a great loss of performance compared with SecureBoost?
First, we study the security of Completely SecureBoost. Following the analysis in Section Analysis of Information Leakage, we evaluate information leakage in terms of leaf purity. As discussed in Theorem 3, when the first tree of Completely SecureBoost fits the label information well, the residuals will not reveal much label information. Therefore, to verify the security of Completely SecureBoost, we must show that its first tree indeed masks the actual labels well. We conduct experiments on the two real-world datasets, Credit 1 and Credit 2. As shown in Table 2, we compare the mean leaf purity of the first tree with that of the second tree. In particular, the mean leaf purity is the weighted average $\sum_{k=1}^{L} \frac{n_k}{n} \lambda_k$, where $L$ is the total number of leaves, $\lambda_k$ and $n_k$ are the leaf purity of and the number of instances associated with leaf $k$, and $n$ is the total number of instances. According to Table 2, the mean leaf purity decreases significantly from the first tree to the second tree on both datasets, which validates the effectiveness of Completely SecureBoost in information protection. Moreover, the mean leaf purity of the second tree is just over on both datasets, which is enough to prevent the label information from being revealed.
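The weighted mean leaf purity can be computed as in the following sketch (our own illustration; here we take the per-leaf purity to be the majority-class fraction, so that an uninformative leaf scores close to 0.5):

```python
def mean_leaf_purity(labels_per_leaf):
    # labels_per_leaf: one list of binary labels per leaf of the tree.
    n = sum(len(leaf) for leaf in labels_per_leaf)  # total number of instances
    total = 0.0
    for leaf in labels_per_leaf:
        lam_k = sum(leaf) / len(leaf)       # positive fraction in leaf k
        purity_k = max(lam_k, 1.0 - lam_k)  # majority-class purity
        total += (len(leaf) / n) * purity_k # weight by leaf size n_k / n
    return total

mean_leaf_purity([[1, 1, 1, 0], [0, 0]])  # (4/6)*0.75 + (2/6)*1.0 = 5/6
```

A first tree with purity near 1 has absorbed most of the label signal, while a second tree with purity near 0.5 confirms that the residuals passed onward carry little of it.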
Next, to investigate the prediction accuracy of Completely SecureBoost, we compare it with SecureBoost with respect to both the first tree's performance and the overall performance. We conduct experiments on the two datasets, Credit 1 and Credit 2. Both involve binary classification tasks, so we adopt the commonly used accuracy, area under the ROC curve (AUC), and F1-score as evaluation metrics; for all three, higher is better. The results are presented in Table 3. As can be observed, Completely SecureBoost performs equally well compared to SecureBoost in almost all cases. We also conduct a pairwise Wilcoxon signed-rank test between Completely SecureBoost and SecureBoost. The comparison results indicate that Completely SecureBoost is as accurate as SecureBoost at a significance level of . The lossless property can thus still be guaranteed for Completely SecureBoost.
In this paper, we proposed SecureBoost, a novel lossless privacy-preserving algorithm for training a high-quality tree boosting model while the training data remains private across multiple parties. We theoretically prove that our framework is as accurate as non-federated gradient tree boosting algorithms that naively bring all the data into one place. Along with a proof of security, we discuss what would be required to make the protocols completely secure. The experimental results show that SecureBoost scales well even with relatively big data.
We believe that research in federated learning is just beginning. While this paper showed how to adapt a boosted tree algorithm to the federated learning setting, much remains to be done to make other machine-learning algorithms privacy-preserving and lossless. Other encryption schemes that ensure the above properties can be considered as well.
- [Abadi et al.2016] Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H. B.; Mironov, I.; Talwar, K.; and Zhang, L. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–318. ACM.
- [Albrecht2016] Albrecht, J. P. 2016. How the GDPR will change the world. Eur. Data Prot. L. Rev. 2:287.
- [Chen and Guestrin2016] Chen, T., and Guestrin, C. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794. ACM.
- [Djatmiko et al.2017] Djatmiko, M.; Hardy, S.; Henecka, W.; Ivey-Law, H.; Ott, M.; Patrini, G.; Smith, G.; Thorne, B.; and Wu, D. 2017. Privacy-preserving entity resolution and logistic regression on encrypted data. Private and Secure Machine Learning (PSML).
- [Dwork, Roth, and others2014] Dwork, C.; Roth, A.; et al. 2014. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science 9(3–4):211–407.
- [Dwork2008] Dwork, C. 2008. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation, 1–19. Springer.
- [Friedman et al.2000] Friedman, J.; Hastie, T.; Tibshirani, R.; et al. 2000. Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). The annals of statistics 28(2):337–407.
- [Gilad-Bachrach et al.2016] Gilad-Bachrach, R.; Dowlin, N.; Laine, K.; Lauter, K.; Naehrig, M.; and Wernsing, J. 2016. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In International Conference on Machine Learning, 201–210.
- [Goodman and Flaxman2016] Goodman, B., and Flaxman, S. 2016. European Union regulations on algorithmic decision-making and a "right to explanation". arXiv preprint arXiv:1606.08813.
- [Hardy et al.2017] Hardy, S.; Henecka, W.; Ivey-Law, H.; Nock, R.; Patrini, G.; Smith, G.; and Thorne, B. 2017. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv preprint arXiv:1711.10677.
- [He et al.2014] He, X.; Pan, J.; Jin, O.; Xu, T.; Liu, B.; Xu, T.; Shi, Y.; Atallah, A.; Herbrich, R.; Bowers, S.; et al. 2014. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, 1–9. ACM.
- [Konečnỳ et al.2016] Konečnỳ, J.; McMahan, H. B.; Yu, F. X.; Richtárik, P.; Suresh, A. T.; and Bacon, D. 2016. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
- [Li et al.2017] Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R. P.; Tang, J.; and Liu, H. 2017. Feature selection: A data perspective. ACM Computing Surveys (CSUR) 50(6):94.
- [Liang and Chawathe2004] Liang, G., and Chawathe, S. S. 2004. Privacy-preserving inter-database operations. In International Conference on Intelligence and Security Informatics, 66–82. Springer.
- [Mayer-Schonberger and Padova2015] Mayer-Schonberger, V., and Padova, Y. 2015. Regime change: Enabling big data through europe’s new data protection regulation. Colum. Sci. & Tech. L. Rev. 17:315.
- [Mohassel and Zhang2017] Mohassel, P., and Zhang, Y. 2017. Secureml: A system for scalable privacy-preserving machine learning. In 2017 38th IEEE Symposium on Security and Privacy (SP), 19–38. IEEE.
- [Oentaryo et al.2014] Oentaryo, R. J.; Lim, E.-P.; Finegold, M.; Lo, D.; Zhu, F.; Phua, C.; Cheu, E.-Y.; Yap, G.-E.; Sim, K.; Nguyen, M. N.; et al. 2014. Detecting click fraud in online advertising: a data mining approach. Journal of Machine Learning Research 15(1):99–140.
- [Paillier1999] Paillier, P. 1999. Public-key cryptosystems based on composite degree residuosity classes. In International Conference on the Theory and Applications of Cryptographic Techniques, 223–238. Springer.
- [Regulation2016] Regulation, P. 2016. The general data protection regulation. European Commission. Available at: https://eur-lex. europa. eu/legal-content/EN/TXT.
- [Rouhani, Riazi, and Koushanfar2017] Rouhani, B. D.; Riazi, M. S.; and Koushanfar, F. 2017. Deepsecure: Scalable provably-secure deep learning. arXiv preprint arXiv:1705.08963.
- [Shokri and Shmatikov2015] Shokri, R., and Shmatikov, V. 2015. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, 1310–1321. ACM.
- [Vaidya and Clifton2005] Vaidya, J., and Clifton, C. 2005. Privacy-preserving decision trees over vertically partitioned data. In IFIP Annual Conference on Data and Applications Security and Privacy, 139–152. Springer.
- [Vaidya et al.2008] Vaidya, J.; Clifton, C.; Kantarcioglu, M.; and Patterson, A. S. 2008. Privacy-preserving decision trees over vertically partitioned data. ACM Transactions on Knowledge Discovery from Data (TKDD) 2(3):14.
- [Vaidya2008] Vaidya, J. 2008. A survey of privacy-preserving methods across vertically partitioned data. In Privacy-preserving data mining. Springer. 337–358.