Many attractive applications of modern machine-learning techniques involve training models using highly sensitive data. For example, models trained on people’s personal messages or detailed medical information can offer invaluable insights into real-world language usage or the diagnoses and treatment of human diseases (McMahan et al., 2017; Liu et al., 2017). A key challenge in such applications is to prevent models from revealing inappropriate details of the sensitive data—a non-trivial task, since models are known to implicitly memorize such details during training and also to inadvertently reveal them during inference (Zhang et al., 2017; Shokri et al., 2017).
Recently, two promising, new model-training approaches have offered the hope that practical, high-utility machine learning may be compatible with strong privacy-protection guarantees for sensitive training data (Abadi et al., 2017). This paper revisits one of these approaches, Private Aggregation of Teacher Ensembles, or PATE (Papernot et al., 2017), and develops techniques that improve its scalability and practical applicability. PATE has the advantage of being able to learn from the aggregated consensus of separate “teacher” models trained on disjoint data, in a manner that both provides intuitive privacy guarantees and is agnostic to the underlying machine-learning techniques (cf. the approach of differentially private stochastic gradient descent (Abadi et al., 2016)). In the PATE approach, multiple teachers are trained on disjoint sensitive data (e.g., different users’ data), and the teachers’ aggregate consensus answers are used in a black-box fashion to supervise the training of a “student” model. By publishing only the student model (keeping the teachers private) and by adding carefully calibrated Laplacian noise to the aggregate answers used to train the student, the original PATE work showed how to establish rigorous differential-privacy guarantees (Papernot et al., 2017), a gold standard of privacy (Dwork et al., 2006). However, to date, PATE has been applied only to simple tasks, like MNIST, without any realistic, larger-scale evaluation.
The techniques presented in this paper allow PATE to be applied on a larger scale to build more accurate models, in a manner that improves both on PATE’s intuitive privacy-protection due to the teachers’ independent consensus as well as its differential-privacy guarantees. As shown in our experiments, the result is a gain in privacy, utility, and practicality—an uncommon joint improvement.
The primary technical contributions of this paper are new mechanisms for aggregating teachers’ answers that are more selective and add less noise. On all measures, our techniques improve on the original PATE mechanism when evaluated on the same tasks using the same datasets, as described in Section 5. Furthermore, we evaluate both variants of PATE on a new, large-scale character recognition task with 150 output classes, inspired by MNIST. The results show that PATE can be successfully applied even to uncurated datasets (with significant class imbalance as well as erroneous class labels) and that our new aggregation mechanisms improve both privacy and model accuracy.
To be more selective, our new mechanisms leverage some pleasant synergies between privacy and utility in PATE aggregation. For example, when teachers disagree and there is no real consensus, the privacy cost is much higher; however, since such disagreement also suggests that the teachers may not give a correct answer, the answer may simply be omitted. Similarly, the teachers need not answer queries for which the student is already confidently predicting the right answer. Additionally, we ensure that these selection steps are themselves done in a private manner.
To add less noise, our new PATE aggregation mechanisms sample Gaussian noise, since the tails of that distribution diminish far more rapidly than those of the Laplacian noise used in the original PATE work. This reduction greatly increases the chance that the noisy aggregation of teachers’ votes results in the correct consensus answer, which is especially important when PATE is scaled to learning tasks with large numbers of output classes. However, changing the sampled noise requires redoing the entire PATE privacy analysis from scratch (see Section 4 and details in Appendix A).
Finally, of independent interest are the details of our evaluation extending that of the original PATE work. In particular, we find that the virtual adversarial training (VAT) technique of Miyato et al. (2017) is a good basis for semi-supervised learning on tasks with many classes, outperforming the improved GANs by Salimans et al. (2016) used in the original PATE work. Furthermore, we explain how to tune the PATE approach to achieve very strong privacy along with high utility for our real-world character recognition learning task.
This paper is structured as follows: Section 2 surveys related work; Section 3 gives background on PATE and an overview of our work; Section 4 describes our improved aggregation mechanisms; Section 5 details our experimental evaluation; Section 6 offers conclusions; and proofs are deferred to the Appendices.
2 Related Work
Differential privacy is by now the gold standard of privacy. It offers a rigorous framework whose threat model makes few assumptions about the adversary’s capabilities, allowing differentially private algorithms to remain effective against strong adversaries. This is not the case for all privacy definitions, as demonstrated by successful attacks against anonymization techniques (Aggarwal, 2005; Narayanan & Shmatikov, 2008; Bindschaedler et al., 2017).
The first learning algorithms adapted to provide differential privacy with respect to their training data were often linear and convex (Pathak et al., 2010; Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014; Hamm et al., 2016). More recently, successful developments in deep learning called for differentially private stochastic gradient descent algorithms (Abadi et al., 2016), some of which have been tailored to learn in federated settings (McMahan et al., 2017).
Differentially private selection mechanisms like GNMax (Section 4.1) are commonly used in hypothesis testing, frequent itemset mining, and as building blocks of more complicated private mechanisms. The most commonly used differentially private selection mechanisms are the exponential mechanism (McSherry & Talwar, 2007) and LNMax (Bhaskar et al., 2010). Recent works offer lower bounds on the sample complexity of such problems (Steinke & Ullman, 2017; Bafna & Ullman, 2017).
Several approaches use the intuition that selecting samples under certain constraints can result in better training than using samples uniformly at random. In machine learning theory, active learning (Cohn et al., 1994) has been shown to allow learning from fewer labeled examples than the passive case (see, e.g., Hanneke (2014)). Similarly, in model stealing (Tramèr et al., 2016), a goal is to learn a model from limited access to a teacher network. There is previous work in the differential privacy literature (Hardt & Rothblum, 2010; Roth & Roughgarden, 2010) where the mechanism first decides whether or not to answer a query, and then privately answers the queries it chooses to answer using a traditional noise-addition mechanism. In these cases, the sparse vector technique (Dwork & Roth, 2014, Chapter 3.6) helps bound the privacy cost in terms of the number of answered queries. This is in contrast to our work, where a constant fraction of queries get answered and the sparse vector technique does not seem to help reduce the privacy cost. Closer to our work, Bun et al. (2017) consider a setting where the answer to a query of interest is often either very large or very small. They show that a sparse-vector-like analysis applies in this case, where one pays only for queries that fall in the middle.
3 Background and Overview
We introduce essential components of our approach towards a generic and flexible framework for machine learning with provable privacy guarantees for training data.
3.1 The PATE Framework
Here, we provide an overview of the PATE framework. To protect the privacy of training data during learning, PATE transfers knowledge from an ensemble of teacher models trained on partitions of the data to a student model. Privacy guarantees may be understood intuitively and expressed rigorously in terms of differential privacy.
Illustrated in Figure 2, the PATE framework consists of three key parts: (1) an ensemble of teacher models, (2) an aggregation mechanism and (3) a student model.
Teacher models: Each teacher is a model trained independently on a subset of the data whose privacy one wishes to protect. The data is partitioned to ensure no pair of teachers will have trained on overlapping data. Any learning technique suitable for the data can be used for any teacher. Training each teacher on a partition of the sensitive data produces different models solving the same task. At inference, teachers independently predict labels.
Aggregation mechanism: When there is a strong consensus among teachers, the label they almost all agree on does not depend on the model learned by any given teacher. Hence, this collective decision is intuitively private with respect to any given training point, because such a point could have been included in only one teacher’s training set. To provide rigorous guarantees of differential privacy, the aggregation mechanism of the original PATE framework counts votes assigned to each class, adds carefully calibrated Laplacian noise to the resulting vote histogram, and outputs the class with the most noisy votes as the ensemble’s prediction. This mechanism is referred to as the max-of-Laplacian mechanism, or LNMax, going forward.
For a sample $x$ and classes $1$ to $m$, let $f_j(x) \in [m]$ denote the $j$-th teacher model’s prediction on $x$ and $n_i(x)$ denote the vote count for the $i$-th class (i.e., $n_i(x) = |\{j : f_j(x) = i\}|$). The output of the mechanism is $\arg\max_i \left( n_i(x) + \mathrm{Lap}(1/\gamma) \right)$, where $\mathrm{Lap}(b)$ is the Laplace distribution with location $0$ and scale $b$. Through a rigorous analysis of this mechanism, the PATE framework provides a differentially private API: the privacy cost of each aggregated prediction made by the teacher ensemble is known.
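As a concrete illustration, the LNMax aggregation above can be sketched in a few lines. This is our own illustrative code, not the authors’ implementation; function and parameter names are ours, and the teacher votes are made up.

```python
import numpy as np

def lnmax_aggregate(teacher_preds, num_classes, gamma, rng=None):
    """LNMax sketch: count votes per class, add Lap(1/gamma) noise to each
    count, and output the noisy argmax."""
    rng = np.random.default_rng() if rng is None else rng
    # n_i = number of teachers voting for class i
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    noisy = counts + rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy))

# Ten hypothetical teachers, seven of which vote for class 2.
votes = np.array([2, 2, 2, 2, 2, 2, 2, 1, 0, 1])
label = lnmax_aggregate(votes, num_classes=3, gamma=0.5)
```

With a strong consensus (seven of ten votes), the noisy argmax is very likely, though not guaranteed, to return the plurality class; that residual randomness is exactly what the privacy analysis accounts for.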
Student model: PATE’s final step involves training a student model by knowledge transfer from the teacher ensemble using access to public, but unlabeled, data. To limit the privacy cost of labeling this data, queries are made to the aggregation mechanism for only a subset of the public data, and the student is trained in a semi-supervised way from a fixed number of queries. Bounding the number of queries fixes the total privacy cost, since every additional ensemble prediction increases it; it also diminishes the value of attacks analyzing model parameters to recover training data (Zhang et al., 2017). The student sees only public data and privacy-preserving labels.
3.2 Differential Privacy
Differential privacy (Dwork et al., 2006) requires that the sensitivity of the distribution of an algorithm’s output to small perturbations of its input be limited. The following variant of the definition captures this intuition formally:
A randomized mechanism $\mathcal{M}$ with domain $\mathcal{D}$ and range $\mathcal{R}$ satisfies $(\varepsilon, \delta)$-differential privacy if for any two adjacent inputs $d, d' \in \mathcal{D}$ and for any subset of outputs $S \subseteq \mathcal{R}$ it holds that:
$$\Pr[\mathcal{M}(d) \in S] \le e^{\varepsilon} \Pr[\mathcal{M}(d') \in S] + \delta.$$
For our application of differential privacy to ML, adjacent inputs are defined as two datasets that differ by only one training example, and the randomized mechanism is the model training algorithm. The privacy parameters have the following natural interpretation: $\varepsilon$ is an upper bound on the loss of privacy, and $\delta$ is the probability with which this guarantee may not hold. Composition theorems (Dwork & Roth, 2014) allow us to keep track of the privacy cost when we run a sequence of mechanisms.
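To make the definition concrete, the $(\varepsilon, \delta)$ inequality can be checked exhaustively for a mechanism with a small, discrete output space. The sketch below (our own illustrative code) does this for classic randomized response, which reports the true bit with probability 3/4 and satisfies pure $\varepsilon$-DP with $\varepsilon = \ln 3$.

```python
import math

def dp_holds(p_out_d, p_out_dprime, eps, delta):
    """Exhaustively check Pr[M(d) in S] <= e^eps * Pr[M(d') in S] + delta
    over every subset S of a small discrete output space, in both directions."""
    n = len(p_out_d)
    for a, b in [(p_out_d, p_out_dprime), (p_out_dprime, p_out_d)]:
        for mask in range(1, 2 ** n):
            s_a = sum(a[i] for i in range(n) if mask >> i & 1)
            s_b = sum(b[i] for i in range(n) if mask >> i & 1)
            if s_a > math.exp(eps) * s_b + delta + 1e-12:
                return False
    return True

# Output distributions of randomized response on two adjacent inputs.
p, q = [0.75, 0.25], [0.25, 0.75]
```

The check passes at $\varepsilon = \ln 3$ and fails at any smaller $\varepsilon$ with $\delta = 0$, matching the usual analysis of randomized response.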
3.3 Rényi Differential Privacy
Papernot et al. (2017) note that the natural approach to bounding PATE’s privacy loss, namely bounding the privacy cost of each label queried and using strong composition (Dwork et al., 2010) to derive the total cost, yields loose privacy guarantees. Instead, their approach uses a data-dependent privacy analysis. This takes advantage of the fact that when the consensus among the teachers is very strong, the plurality outcome has overwhelming likelihood, leading to a very small privacy cost whenever this strong consensus occurs. To capture this effect quantitatively, Papernot et al. (2017) rely on the moments accountant, introduced by Abadi et al. (2016) and building on previous work (Bun & Steinke, 2016; Dwork & Rothblum, 2016).
In this section, we recall the language of Rényi Differential Privacy, or RDP (Mironov, 2017). RDP generalizes pure differential privacy ($\delta = 0$) and is closely related to the moments accountant. We choose RDP as a more natural analysis framework when dealing with our mechanisms that use Gaussian noise. Defined below, the RDP of a mechanism is stated in terms of the Rényi divergence.
Definition (Rényi Divergence).
The Rényi divergence of order $\alpha > 1$ between two distributions $P$ and $Q$ is defined as:
$$D_\alpha(P \| Q) \triangleq \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim Q}\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right].$$
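For discrete distributions, the divergence above is a one-line computation. The helper below is our own illustrative code and assumes the support of $P$ is contained in the support of $Q$ (otherwise the divergence is infinite).

```python
import math

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = log( E_{x~Q}[(P(x)/Q(x))^alpha] ) / (alpha - 1)
    for discrete distributions given as probability lists, with alpha > 1."""
    total = sum(qi * (pi / qi) ** alpha for pi, qi in zip(p, q) if qi > 0)
    return math.log(total) / (alpha - 1)
```

Identical distributions have divergence 0, and a point mass compared against the uniform distribution on two outcomes has divergence $\log 2$ at $\alpha = 2$, as a quick sanity check.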
Definition (Rényi Differential Privacy (RDP)).
A randomized mechanism $\mathcal{M}$ is said to guarantee $(\alpha, \varepsilon)$-RDP with $\alpha > 1$ if for any neighboring datasets $d$ and $d'$,
$$D_\alpha\left(\mathcal{M}(d) \,\|\, \mathcal{M}(d')\right) \le \varepsilon.$$
RDP generalizes pure differential privacy in the sense that $(\varepsilon, 0)$-differential privacy is equivalent to $(\infty, \varepsilon)$-RDP. Mironov (2017) proves the following key facts that allow easy composition of RDP guarantees and their conversion to $(\varepsilon, \delta)$-differential privacy bounds.
Theorem 1 (Composition).
If a mechanism $\mathcal{M}$ consists of a sequence of adaptive mechanisms $\mathcal{M}_1, \ldots, \mathcal{M}_k$ such that for any $i \in [k]$, $\mathcal{M}_i$ guarantees $(\alpha, \varepsilon_i)$-RDP, then $\mathcal{M}$ guarantees $\left(\alpha, \sum_{i=1}^{k} \varepsilon_i\right)$-RDP.
Theorem 2 (From RDP to DP).
If a mechanism $\mathcal{M}$ guarantees $(\alpha, \varepsilon)$-RDP, then $\mathcal{M}$ guarantees $\left(\varepsilon + \frac{\log(1/\delta)}{\alpha - 1}, \delta\right)$-differential privacy for any $\delta \in (0, 1)$.
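Theorems 1 and 2 combine into the standard RDP accounting recipe: compose per-query guarantees at each order, convert each to $(\varepsilon, \delta)$-DP, and keep the tightest bound. The sketch below is our own illustration; the noise scale, query count, and candidate orders are made-up values.

```python
import math

def rdp_to_dp(alpha, eps_rdp, delta):
    """Theorem 2: (alpha, eps)-RDP implies (eps + log(1/delta)/(alpha-1), delta)-DP."""
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1)

def compose_and_convert(eps_per_query, num_queries, alphas, delta):
    """Compose num_queries mechanisms at each order alpha (Theorem 1),
    convert each composed guarantee to (eps, delta)-DP (Theorem 2),
    and return the tightest resulting eps over the candidate orders."""
    return min(rdp_to_dp(a, num_queries * eps_per_query(a), delta) for a in alphas)

# Illustrative use with a GNMax-style data-independent guarantee
# eps(alpha) = alpha / sigma^2 (see Section 4.1); sigma is arbitrary here.
sigma = 40.0
eps = compose_and_convert(lambda a: a / sigma ** 2, 1000, range(2, 200), 1e-8)
```

Optimizing over the order $\alpha$ matters: small orders compose poorly, while large orders pay little for the $\log(1/\delta)$ term but accumulate more per-query cost.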
While both $(\varepsilon, \delta)$-differential privacy and RDP are relaxations of pure $\varepsilon$-differential privacy, RDP has two main advantages. First, it composes nicely; second, it captures the privacy guarantee of Gaussian noise much more cleanly than $(\varepsilon, \delta)$-differential privacy does. This lets us carry out a careful privacy analysis of the GNMax mechanism, as stated in Theorem 3. While the analysis of Papernot et al. (2017) leverages only the first aspect of such frameworks, with Laplace noise (the LNMax mechanism), our analysis of the GNMax mechanism relies on both.
3.4 PATE Aggregation Mechanisms
The aggregation step is a crucial component of PATE. It enables knowledge transfer from the teachers to the student while enforcing privacy. We improve on the LNMax mechanism used by Papernot et al. (2017), which adds Laplace noise to teacher votes and outputs the class with the highest noisy count. Our changes are threefold.
First, we add Gaussian noise with an accompanying privacy analysis in the RDP framework. This modification effectively reduces the noise needed to achieve the same privacy cost per student query.
Second, the aggregation mechanism is now selective: teacher votes are analyzed to decide which student queries are worth answering. This takes into account both the privacy cost of each query and its payoff in improving the student’s utility. Surprisingly, our analysis shows that these two metrics are not at odds and in fact align with each other: the privacy cost is smallest when teachers agree, and when teachers agree, the label is more likely to be correct and thus more useful to the student.
Third, we propose and study an interactive mechanism that takes into account not only the teacher votes on a queried example but also the student’s predictions on that query. Now, queries worth answering are those where the teachers agree on a class but the student is not confident in its prediction of that class. This third modification aligns the two metrics discussed above even further: queries where the student already agrees with the consensus of teachers are not worth expending our privacy budget on, while queries where the student is less confident are useful and answered at a small privacy cost.
3.5 Data-dependent Privacy in PATE
A direct privacy analysis of the aggregation mechanism, for reasonable values of the noise parameter, allows answering only a few queries before the privacy cost becomes prohibitive. The original PATE proposal used a data-dependent analysis, exploiting the fact that when the teachers are in strong agreement, the privacy cost is usually much smaller than the data-independent bound would suggest.
In our work, we perform a data-dependent privacy analysis of the aggregation mechanism with Gaussian noise. This change of noise distribution turns out to be technically much more challenging than the Laplace case, and we defer the details to Appendix A. The increased complexity of the analysis, however, does not make the algorithm itself any more complicated, and thus allows us to improve the privacy-utility tradeoff without complicating the mechanism.
Sanitizing the privacy cost via smooth sensitivity analysis.
An additional challenge with data-dependent privacy analyses arises from the fact that the privacy cost itself is now a function of the private data. Further, the data-dependent bound on the privacy cost has large global sensitivity (a metric used in differential privacy to calibrate the noise injected) and is therefore difficult to sanitize. To remedy this, we use the smooth sensitivity framework proposed by Nissim et al. (2007).
Appendix B describes how we add noise to the computed privacy cost using this framework to publish a sanitized version of the privacy cost. Section B.1 defines smooth sensitivity and outlines Algorithms 3–5 that compute it. The rest of Appendix B argues the correctness of these algorithms. The final analysis shows that the incremental cost of sanitizing our privacy estimates is modest (less than 50% of the raw estimates), thus enabling us to use precise data-dependent privacy analysis while taking into account its privacy implications.
4 Improved Aggregation Mechanisms for PATE
The privacy guarantees provided by PATE stem from the design and analysis of the aggregation step. Here, we detail our improvements to the mechanism used by Papernot et al. (2017). As outlined in Section 3.4, we first replace the Laplace noise added to teacher votes with Gaussian noise and adapt the data-dependent privacy analysis. Next, we describe the Confident and Interactive Aggregators, which select queries worth answering in a privacy-preserving way: the privacy budget is shared between query selection and answer computation. The two aggregators use different heuristics to select queries: the former does not take student predictions into account, while the latter does.
4.1 The GNMax Aggregator and Its Privacy Guarantee
This section uses the following notation. For a sample $x$ and classes $1$ to $m$, let $f_j(x) \in [m]$ denote the $j$-th teacher model’s prediction on $x$ and $n_i(x)$ denote the vote count for the $i$-th class (i.e., $n_i(x) = |\{j : f_j(x) = i\}|$). We define a Gaussian NoisyMax (GNMax) aggregation mechanism as:
$$\mathcal{M}_\sigma(x) \triangleq \arg\max_i \left\{ n_i(x) + \mathcal{N}(0, \sigma^2) \right\},$$
where $\mathcal{N}(0, \sigma^2)$ is the Gaussian distribution with mean $0$ and variance $\sigma^2$. The aggregator outputs the class with the noisy plurality after adding Gaussian noise to each vote count. In what follows, plurality refers more generally to the highest number of teacher votes assigned among the classes.
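The GNMax mechanism above can be sketched directly (our own illustrative code; the vote counts and noise scale are made up):

```python
import numpy as np

def gnmax_aggregate(vote_counts, sigma, rng=None):
    """GNMax: return argmax_i { n_i + N(0, sigma^2) } over the vote histogram."""
    rng = np.random.default_rng() if rng is None else rng
    votes = np.asarray(vote_counts, dtype=float)
    noisy = votes + rng.normal(0.0, sigma, size=votes.shape)
    return int(np.argmax(noisy))

# 250 hypothetical teachers with a strong consensus on class 2.
counts = [5, 2, 243]
label = gnmax_aggregate(counts, sigma=40.0)
```

Compared to the LNMax sketch, only the noise distribution changes; the privacy analysis, however, must be redone from scratch (Appendix A).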
The Gaussian distribution is more concentrated than the Laplace distribution used by Papernot et al. (2017). This concentration directly improves the aggregation’s utility when the number of classes $m$ is large. The GNMax mechanism satisfies $(\alpha, \alpha/\sigma^2)$-RDP, which holds for all inputs and all $\alpha \ge 1$ (precise statements and proofs of claims in this section are deferred to Appendix A). A straightforward application of composition theorems leads to loose privacy bounds; for example, the standard advanced composition theorem applied to the experiments in the last two rows of Table 1 would give substantially larger values of $\varepsilon$ for the Glyph dataset.
To refine these bounds, we work out a careful data-dependent analysis that yields much smaller values of $\varepsilon$ for the same $\delta$. The following theorem translates data-independent RDP guarantees at higher orders into a data-dependent RDP guarantee at a smaller order $\alpha$. We use it in conjunction with the tail bound below to bound the privacy cost of each query to the GNMax algorithm as a function of $\tilde{q}$, the probability that the most common answer will not be output by the mechanism.
Theorem 3 (informal).
Let $\mathcal{M}$ be a randomized algorithm with $(\mu_1, \varepsilon_1)$-RDP and $(\mu_2, \varepsilon_2)$-RDP guarantees, and suppose that given a dataset $D$, there exists a likely outcome $i^*$ such that $\Pr[\mathcal{M}(D) \ne i^*] \le \tilde{q}$. Then the data-dependent Rényi differential privacy of $\mathcal{M}$ of order $\alpha \le \mu_1, \mu_2$ at $D$ is bounded by a function of $\tilde{q}, \mu_1, \varepsilon_1, \mu_2, \varepsilon_2$, which approaches $0$ as $\tilde{q} \to 0$.
The new bound improves on the data-independent privacy guarantee for $\alpha$ as long as the distribution of the algorithm’s output on that input has a strong peak (i.e., $\tilde{q} \ll 1$). Values of $\tilde{q}$ close to $1$ could result in a looser bound; therefore, in practice we take the minimum of this bound and $\alpha/\sigma^2$ (the data-independent one). The theorem generalizes Theorem 3 from Papernot et al. (2017), where it was shown for a mechanism satisfying pure $\varepsilon$-differential privacy.
The final step in our analysis uses the following lemma to bound the probability when corresponds to the class with the true plurality of teacher votes.
For any $i^*$, we have $\Pr[\mathcal{M}_\sigma(x) \ne i^*] \le \frac{1}{2} \sum_{i \ne i^*} \operatorname{erfc}\!\left(\frac{n_{i^*} - n_i}{2\sigma}\right)$, where $\operatorname{erfc}$ is the complementary error function.
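The bound is a direct computation from a vote histogram. The helper below is our own transcription (the helper name and the example counts are illustrative); note the sum can exceed 1 for weak consensus, in which case the trivial bound of 1 applies.

```python
import math

def q_bound(counts, sigma):
    """Upper bound on Pr[GNMax output != true plurality]:
    q <= 1/2 * sum_{i != i*} erfc((n_{i*} - n_i) / (2*sigma))."""
    i_star = max(range(len(counts)), key=lambda i: counts[i])
    return 0.5 * sum(
        math.erfc((counts[i_star] - counts[i]) / (2.0 * sigma))
        for i in range(len(counts)) if i != i_star
    )
```

A strong consensus yields a tiny $\tilde{q}$ (and hence a tiny data-dependent privacy cost), while near-ties yield a $\tilde{q}$ close to 1.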
In Appendix A, we detail how these results translate to privacy bounds. In short, for each query to the GNMax aggregator, given the teacher votes $n_i$ and the class $i^*$ with maximal support, the lemma above gives us the value of $\tilde{q}$ to use in Theorem 3. We optimize over $\mu_1$ and $\mu_2$ to get a data-dependent RDP guarantee for any order $\alpha$. Finally, we use composition properties of RDP to analyze a sequence of queries, and translate the RDP bound back to an $(\varepsilon, \delta)$-DP bound.
This data-dependent privacy analysis leads us to the concept of a query that is expensive in terms of its privacy cost. When the teacher votes largely disagree, some of the gaps $n_{i^*} - n_i$ may be small, leading to a large value for $\tilde{q}$: i.e., the lack of consensus among teachers indicates that the aggregator is likely to output a wrong label. Thus queries that are expensive from a privacy perspective are often bad for training too. Conversely, queries with strong consensus enable tight privacy bounds. This synergy motivates the aggregation mechanisms discussed in the following sections: they evaluate the strength of the consensus before answering a query.
4.2 The Confident-GNMax Aggregator
In this section, we propose a refinement of the GNMax aggregator that filters out queries for which the teachers do not have a sufficiently strong consensus. This filtering enables the teachers to avoid answering expensive queries. We also take care to perform this selection step itself in a private manner.
The proposed Confident Aggregator is described in Algorithm 1. To select queries with overwhelming consensus, the algorithm checks if the plurality vote crosses a threshold $T$. To enforce privacy in this step, the comparison is done after adding Gaussian noise with variance $\sigma_1^2$. Then, for queries that pass this noisy threshold check, the aggregator proceeds with the usual GNMax mechanism with a smaller variance $\sigma_2^2$. For queries that do not pass the noisy threshold check, the aggregator simply returns $\perp$ and the student discards the example in its training.
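The two-step structure of the algorithm can be sketched as follows (a minimal sketch of Algorithm 1 in our own code; names and values are illustrative, and no privacy accounting is shown):

```python
import numpy as np

def confident_gnmax(vote_counts, threshold, sigma1, sigma2, rng=None):
    """Confident-GNMax sketch: answer a query only when the noisy plurality
    count crosses the threshold T; otherwise reject it."""
    rng = np.random.default_rng() if rng is None else rng
    votes = np.asarray(vote_counts, dtype=float)
    # Step 1: noisy threshold check, paid for on every query (noise sigma1).
    if votes.max() + rng.normal(0.0, sigma1) >= threshold:
        # Step 2: usual GNMax with smaller noise sigma2, paid only when answering.
        return int(np.argmax(votes + rng.normal(0.0, sigma2, size=votes.shape)))
    return None  # query rejected; the student discards this example
```

Note that only the maximum count is compared against the threshold, so the check itself reveals very little and can tolerate the larger noise scale $\sigma_1$.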
In practice, we often choose significantly higher values for $\sigma_1$ than for $\sigma_2$. This is because we always pay the cost of the noisy threshold check, without the benefit of knowing that the consensus is strong. We pick $T$ so that queries where the plurality gets less than half the votes (often very expensive) are unlikely to pass the threshold after adding noise, while we still have a high enough yield among the queries with a strong consensus. This tradeoff guides our choice of $T$, typically a large fraction of the number of teachers.
The privacy cost of this aggregator is intuitive: we pay for the threshold check for every query, and for the GNMax step only for queries that pass the check. In the work of Papernot et al. (2017), the mechanism paid a privacy cost for every query, expensive or otherwise. In comparison, the Confident Aggregator expends a much smaller privacy cost to check against the threshold, and by answering a significantly smaller fraction of expensive queries, it expends a lower privacy cost overall.
4.3 The Interactive-GNMax Aggregator
While the Confident Aggregator excludes expensive queries, it ignores the possibility that the student might receive labels that contribute little to learning, and in turn to its utility. By incorporating the student’s current predictions for its public training data, we design an Interactive Aggregator that discards queries where the student already confidently predicts the same label as the teachers.
Given a set of queries, the Interactive Aggregator (Algorithm 2) selects which ones to answer by comparing student predictions to teacher votes for each class. Similar to Step 1 of the Confident Aggregator, queries where the plurality of these noised differences crosses a threshold are answered with GNMax. This noisy threshold suffices to enforce privacy of the first step because student predictions can be considered public information (the student is trained in a differentially private manner).
For queries that fail this check, the mechanism reinforces the predicted student label if the student is confident enough, and does this without looking at the teacher votes again. This limited form of supervision comes at a small privacy cost. Moreover, the order of the checks ensures that a student falsely confident in its prediction on a query is not accidentally reinforced if it disagrees with the teacher consensus. The privacy accounting is identical to that of the Confident Aggregator, except that it considers the difference between the teachers and the student instead of only the teacher votes.
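The control flow just described can be sketched as follows (our own illustrative rendering of Algorithm 2; the gap computation, parameter names, and the student-confidence threshold are assumptions, and no privacy accounting is shown):

```python
import numpy as np

def interactive_gnmax(vote_counts, student_probs, n_teachers, threshold,
                      sigma1, sigma2, student_conf=0.9, rng=None):
    """Interactive-GNMax sketch: answer with GNMax only when teachers and
    student disagree strongly; otherwise optionally reinforce a confident
    student prediction; otherwise discard the query."""
    rng = np.random.default_rng() if rng is None else rng
    votes = np.asarray(vote_counts, dtype=float)
    probs = np.asarray(student_probs, dtype=float)
    # Per-class gap between teacher votes and the (scaled) student prediction:
    # large where the two disagree.
    gap = votes - n_teachers * probs
    if gap.max() + rng.normal(0.0, sigma1) >= threshold:
        # Teachers supervise the student via the usual GNMax step.
        return int(np.argmax(votes + rng.normal(0.0, sigma2, size=votes.shape)))
    if probs.max() >= student_conf:
        # Reinforce the student's own label without touching teacher votes again.
        return int(np.argmax(probs))
    return None  # discard the query
```

The ordering of the two checks matters: disagreement with the teachers is tested first, so a falsely confident student cannot reinforce a label the teachers reject.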
In practice, the Confident Aggregator can be used to start training a student when it can make no meaningful predictions, and training can be finished with the Interactive Aggregator after the student gains some proficiency.
5 Experimental Evaluation
Our goal is first to show that the improved aggregators introduced in Section 4 enable the application of PATE to uncurated data, thus departing from previous results on tasks with balanced and well-separated classes. We experiment with the Glyph dataset described below to address two aspects left open by Papernot et al. (2017): (a) the performance of PATE on a task with a larger number of classes (the framework was only evaluated on datasets with at most 10 classes) and (b) the privacy-utility tradeoffs offered by PATE on data that is class imbalanced and partly mislabeled. In Section 5.2, we evaluate the improvements given by the GNMax aggregator over its Laplace counterpart (LNMax) and demonstrate the necessity of the Gaussian mechanism for uncurated tasks.
In Section 5.3, we then evaluate the performance of PATE with both the Confident and Interactive Aggregators on all datasets used to benchmark the original PATE framework, in addition to Glyph. With the right teacher and student training, the two mechanisms from Section 4 achieve high accuracy with very tight privacy bounds. Not answering queries for which teacher consensus is too low (Confident-GNMax) or for which the student’s predictions already agree with the teacher votes (Interactive-GNMax) better aligns utility and privacy: queries are answered at a significantly reduced cost.
5.1 Experimental Setup
MNIST, SVHN, and the UCI Adult databases.
We evaluate on two computer vision tasks (MNIST and Street View House Numbers (Netzer et al., 2011)) and census data from the UCI Adult dataset (Kohavi, 1996). This enables a comparative analysis of the utility-privacy tradeoff achieved with our Confident-GNMax aggregator and the LNMax aggregator originally used in PATE. We replicate the experimental setup and results of Papernot et al. (2017), with code and teacher votes made available online. The source code for the privacy analysis in this paper, as well as the supporting data required to run it, is available on GitHub: https://github.com/tensorflow/models/tree/master/research/differential_privacy
A detailed description of the experimental setup can be found in Papernot et al. (2017); we provide here only a brief overview. For MNIST and SVHN, the teachers are convolutional networks trained on partitions of the training set. For UCI Adult, each teacher is a random forest. The test set is split in two halves: the first is used as unlabeled input to simulate the student’s public data and the second is used as a holdout to evaluate test performance. The MNIST and SVHN students are convolutional networks trained using semi-supervised learning with GANs à la Salimans et al. (2016). The student for the Adult dataset is a fully supervised random forest.
Glyph. This optical character recognition task has an order of magnitude more classes than all previous applications of PATE. The Glyph dataset also possesses many characteristics shared by real-world tasks: e.g., it is imbalanced and some inputs are mislabeled. Each input is a grayscale image containing a single glyph generated synthetically from a collection of over 500K computer fonts. (Glyph data is not public, but similar data is publicly available as part of the notMNIST dataset.) Samples representative of the difficulties raised by the data are depicted in Figure 3. The task is to classify inputs as one of the 150 Unicode symbols used to generate them.
This set of 150 classes results from pre-processing efforts. We discarded additional classes that had few samples; some classes had at least 50 times fewer inputs than the most popular classes, and these were almost exclusively incorrectly labeled inputs. We also merged classes that were too ambiguous for even a human to differentiate. Nevertheless, a manual inspection of samples grouped by class (favorably to the human observer) led to the conservative estimate that some classes remain 5 times more frequent than others, and that a non-negligible fraction of inputs are mislabeled.
To simulate the availability of private and public data (see Section 3.1), we split the data originally marked as the training set (about 65M points) into partitions given to the teachers. Each teacher is a ResNet (He et al., 2016) made of 32 leaky ReLU layers. We train on batches of 100 inputs for 40K steps using SGD with momentum. The learning rate is decayed twice, after 10K and again after 20K steps; the initial value and decay schedule were found with a grid search.
We split the holdout data in two subsets of 100K and 400K samples: the first acts as public data to train the student and the second as its testing data. The student architecture is a convolutional network learned in a semi-supervised fashion with virtual adversarial training (VAT) from Miyato et al. (2017). Using unlabeled data, VAT regularizes the student by making predictions constant in adversarial directions. (In this context, the adversarial component refers to the phenomenon commonly known as adversarial examples (Biggio et al., 2013; Szegedy et al., 2014), not to the adversarial training approach taken in GANs.) Indeed, we found that GANs did not yield as much utility for Glyph as for MNIST or SVHN. We train with Adam for 400 epochs.
5.2 Comparing the LNMax and GNMax Mechanisms
Section 4.1 introduces the GNMax mechanism and the accompanying privacy analysis. With a Gaussian distribution, whose tail diminishes more rapidly than that of the Laplace distribution, we expect better utility from the new mechanism (albeit with a more involved privacy analysis).
To study the tradeoff between privacy and accuracy of the two mechanisms, we train several ensembles of $n$ teachers on the Glyph data, for varying $n$. Recall that the 65 million training inputs are partitioned and distributed among the $n$ teachers, with each teacher receiving between 650K and 13K inputs for the values of $n$ considered. The test data is used to query the teacher ensemble, and the resulting labels (after the LNMax and GNMax mechanisms) are compared with the ground-truth labels provided in the dataset. This predictive performance of the teachers is essential to good student training with accurate labels and is a useful proxy for utility.
For each mechanism, we compute $(\varepsilon, \delta)$-differential privacy guarantees. As is common in the literature, we choose $\delta$ to be smaller than the inverse of the number of training samples and denote the corresponding $\varepsilon$ as the privacy cost. The total $\varepsilon$ is calculated over a subset of 4,000 queries, which is representative of the number of labels needed by a student for accurate training (see Section 5.3). We visualize in Figure 4 the effect of the noise distribution (left) and of the number of teachers (right) on the tradeoff between privacy cost and label accuracy.
| Dataset | Aggregator | Queries answered | Privacy cost ε | Student accuracy | Non-private baseline |
|---|---|---|---|---|---|
| MNIST | LNMax (Papernot et al., 2017) | 100 | 2.04 | 98.0% | 99.2% |
| MNIST | LNMax (Papernot et al., 2017) | 1,000 | 8.03 | 98.1% | 99.2% |
| SVHN | LNMax (Papernot et al., 2017) | 500 | 5.04 | 82.7% | 92.8% |
| SVHN | LNMax (Papernot et al., 2017) | 1,000 | 8.19 | 90.7% | 92.8% |
| Adult | LNMax (Papernot et al., 2017) | 500 | 2.66 | 83.0% | 85.0% |
| Glyph | Interactive-GNMax, two rounds | 4,341 | 0.837 | 73.2% | 82.2% |
On the left of Figure 4, we compare our GNMax aggregator to the LNMax aggregator used by the original PATE proposal, on a fixed ensemble of teachers and for varying noise scales. At fixed test accuracy, the GNMax algorithm consistently outperforms the LNMax mechanism in terms of privacy cost. To explain this improved performance, recall the notation from Section 4.1. For both mechanisms, the data-dependent privacy cost scales linearly with q̃, the likelihood of an answer other than the true plurality. The value of q̃ falls off as e^(−x²) for GNMax and e^(−x) for LNMax, where x is the ratio of the margin between the top two vote counts to the noise scale. Thus, when x is (say) 4, LNMax would have q̃ ≈ e^(−4) ≈ 0.018, whereas GNMax would have q̃ ≈ e^(−16) ≈ 10^(−7), leading to a much higher likelihood of returning the true plurality. Moreover, this reduced q̃ translates to a smaller privacy cost for a given answer, leading to a better utility-privacy tradeoff.
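The tail comparison can be checked numerically. The helper below contrasts the standard Gaussian and unit-scale Laplace tail probabilities at the same margin-to-noise ratio x; the normalization conventions here are illustrative assumptions, chosen only to exhibit the e^(−x²)-versus-e^(−x) decay.

```python
import math

def gaussian_tail(x):
    # Pr[N(0, 1) > x] = erfc(x / sqrt(2)) / 2
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def laplace_tail(x):
    # Pr[Lap(0, 1) > x] = exp(-x) / 2 for x >= 0
    return 0.5 * math.exp(-x)

# At the same margin-to-noise ratio x, the Gaussian tail decays quadratically
# in the exponent while the Laplacian tail decays only linearly:
for x in [1.0, 2.0, 4.0]:
    print(x, gaussian_tail(x), laplace_tail(x))
```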
As long as each teacher has sufficient data to learn a good-enough model, increasing the number of teachers improves the tradeoff, as illustrated on the right of Figure 4 with GNMax. The larger ensembles lower the privacy cost of answering queries by tolerating larger noise scales σ. Combining the two observations made in this figure: for a fixed label accuracy, we lower privacy costs by switching to the GNMax aggregator and training a larger number of teachers.
5.3 Student Training with the GNMax Aggregation Mechanisms
As outlined in Section 3, we train a student on public data labeled by the aggregation mechanisms. We take advantage of PATE's flexibility and apply the technique that performs best on each dataset: semi-supervised learning with Generative Adversarial Networks (Salimans et al., 2016) for MNIST and SVHN, Virtual Adversarial Training (Miyato et al., 2017) for Glyph, and fully-supervised random forests for UCI Adult. In addition to evaluating the total privacy cost associated with training the student model, we compare its utility to a non-private baseline obtained by training on the sensitive data (used to train teachers in PATE): we use the baselines of 99.2%, 92.8%, and 85.0% reported by Papernot et al. (2017) respectively for MNIST, SVHN, and UCI Adult, and we measure a baseline of 82.2% for Glyph. We compute (ε, δ)-privacy bounds and denote the privacy cost as the ε value at a value of δ set according to the number of training samples.
Given a pool of 500 to 12,000 samples to learn from (depending on the dataset), the student submits queries to the teacher ensemble running the Confident-GNMax aggregator from Section 4.2. A grid search over a range of plausible values for the parameters T, σ1, and σ2 yielded the values reported in Table 1, illustrating the tradeoff between utility and privacy achieved. We additionally measure the number of queries the noisy threshold check selects to be answered and compare student utility to a non-private baseline.
The Confident-GNMax aggregator outperforms LNMax on the four datasets considered in the original PATE proposal: it reduces the privacy cost ε, increases student accuracy, or both simultaneously. On the uncurated Glyph data, despite the imbalance of classes and mislabeled data (as evidenced by the 82.2% baseline), the Confident Aggregator achieves 73.5% accuracy at a small privacy cost. Roughly 1,300 out of the 12,000 queries made are not answered, indicating that several expensive queries were successfully avoided. This selectivity is analyzed in more detail in Section 5.4.
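For concreteness, a minimal sketch of the Confident-GNMax selection logic described in Section 4.2 is given below, operating on a single query's vote histogram. The parameter values in the example call are hypothetical, not the ones found by our grid search.

```python
import numpy as np

def confident_gnmax(votes, T, sigma1, sigma2, rng):
    """Sketch of the Confident-GNMax aggregator: answer a query only if a
    noisy check suggests strong teacher consensus.
    votes: length-m array of per-class teacher vote counts for one query."""
    votes = np.asarray(votes, dtype=float)
    # Step 1: noisy threshold check on the plurality count (itself private).
    if votes.max() + rng.normal(0.0, sigma1) >= T:
        # Step 2: GNMax -- argmax of the Gaussian-noised vote histogram.
        return int(np.argmax(votes + rng.normal(0.0, sigma2, size=votes.shape)))
    return None  # query rejected: no label released, small privacy cost

rng = np.random.default_rng(0)
# Hypothetical query with 5,000 teachers and strong consensus on class 2:
label = confident_gnmax([100, 400, 4500], T=3000, sigma1=1500, sigma2=100, rng=rng)
```

Rejected queries still consume some budget through the noisy check, but far less than releasing a noisy label for a contentious query would.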
On Glyph, we evaluate the utility and privacy of an interactive training routine that proceeds in two rounds. Round one runs student training with a Confident Aggregator. A grid search targeting the best privacy for roughly 3,400 answered queries (out of 6,000), sufficient to bootstrap a student, led to our choice of threshold and noise parameters and the corresponding privacy cost for this round.
In round two, this student was then trained with 10,000 more queries made with the Interactive-GNMax aggregator. We computed the resulting (total) privacy cost and utility at an exemplar data point through another grid search of plausible parameter values. The result appears in the last row of Table 1: with just over 10,422 answered queries in total at a privacy cost of ε = 0.837, the trained student was able to achieve 73.2% accuracy. Note that this student required fewer answered queries compared to the Confident Aggregator. The best overall cost of student training occurred when the privacy costs for the first and second rounds of training were roughly the same. (The total ε is less than the sum of the two rounds' costs due to better composition, via Theorems 1 and 2.)
Comparison with Baseline.
Note that the Glyph student's accuracy remains seven percentage points below the non-private model's accuracy achieved by training on the 65M sensitive inputs. We hypothesize that this gap is due to the uncurated nature of the data considered. Indeed, the class imbalance naturally requires more queries to return labels from the less-represented classes. For instance, a model trained on 200K queries is only 77% accurate on test data. In addition, the large fraction of mislabeled inputs is likely to have a large privacy cost: these inputs are sensitive because they are outliers of the distribution, which is reflected by the weak consensus among teachers on these inputs.
5.4 Noisy Threshold Checks and Privacy Costs
Sections 4.1 and 4.2 motivated the need for a noisy threshold-checking step before having the teachers answer queries: it prevents most of the privacy budget from being consumed by a few queries that are expensive and also likely to be answered incorrectly. In Figure 5, we compare the privacy cost of answering all queries to that of answering only confident queries, for a fixed number of queries.
We run additional experiments to support the evaluation from Section 5.3. With the votes of 5,000 teachers on the Glyph dataset, we plot in Figure 5 the histogram of the plurality vote counts (max_i n_i in the notation of Section 4.1) across 25,000 student queries. We compare these values to the vote counts of queries that passed the noisy threshold check for two settings of the parameters T and σ1 in Algorithm 1. Smaller plurality counts imply weaker teacher agreement and consequently more expensive queries.
With the moderate check, we capture a significant fraction of queries where teachers have a strong consensus (nearly all of the 5,000 votes) while managing to filter out many queries with poor consensus. This moderate check ensures that although many queries with plurality votes between 2,500 and 3,500 are answered (i.e., only 50–70% of teachers agree on a label), the most expensive ones are most likely discarded. With the aggressive check, queries with poor consensus are culled out entirely. This selectivity comes at the expense of a noticeable drop in the number of answered queries, including some that might have had a strong consensus and little-to-no privacy cost. Thus, the aggressive check answers fewer queries, with very strong privacy guarantees. We reiterate that this threshold-checking step is itself done in a private manner. Empirically, in our Interactive Aggregator experiments, we expend about a third to a half of our privacy budget on this step, which still yields a very small cost per query across 6,000 queries.
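The selectivity of the check can be computed in closed form for a given plurality count, since the check adds a single Gaussian sample to the maximum count. The sketch below evaluates this pass probability; the two (T, σ1) settings are hypothetical stand-ins for the moderate and aggressive configurations.

```python
import math

def pass_probability(plurality_count, T, sigma1):
    """Probability that a query with the given plurality vote count passes the
    noisy threshold check: plurality_count + N(0, sigma1**2) >= T."""
    return 0.5 * math.erfc((T - plurality_count) / (sigma1 * math.sqrt(2.0)))

# Hypothetical settings for 5,000 teachers: a moderate and an aggressive check.
for T, s1 in [(3000, 1500), (4500, 600)]:
    weak = pass_probability(2000, T, s1)    # poor consensus query
    strong = pass_probability(4800, T, s1)  # near-unanimous query
    print(T, s1, round(weak, 3), round(strong, 3))
```

A larger T with smaller σ1 sharpens the filter: near-unanimous queries still pass with high probability while contentious ones are almost surely rejected.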
The key insight motivating the addition of a noisy thresholding step to the two aggregation mechanisms proposed in our work is that there is a form of synergy between the privacy and accuracy of the labels output by the aggregation: labels that come at a small privacy cost also happen to be more likely to be correct. As a consequence, we are able to provide higher-quality supervision to the student by choosing not to output labels when the consensus among teachers is too low to provide an aggregated prediction at a small cost in privacy. This observation was further confirmed in some of our experiments: for a fixed number of labels, a student trained on privately selected labels almost always performed better than one trained on non-private labels.
Complementary to these aggregation mechanisms is the use of a Gaussian (rather than Laplace) distribution to perturb teacher votes. In our experiments with Glyph data, these changes proved essential to preserving the accuracy of the aggregated labels, because of the large number of classes. The analysis presented in Section 4 details the delicate but necessary adaptation of results analogous to those for the Laplace NoisyMax.
As was the case for the original PATE proposal, semi-supervised learning was instrumental to ensure the student achieves strong utility given a limited set of labels from the aggregation mechanism. However, we found that virtual adversarial training outperforms the approach from Salimans et al. (2016) in our experiments with Glyph data. These results establish lower bounds on the performance that a student can achieve when supervised with our aggregation mechanisms; future work may continue to investigate virtual adversarial training, semi-supervised generative adversarial networks and other techniques for learning the student in these particular settings with restricted supervision.
We are grateful to Martín Abadi, Vincent Vanhoucke, and Daniel Levy for their useful inputs and discussions towards this paper.
- Abadi et al. (2016) Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318. ACM, 2016.
- Abadi et al. (2017) Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Nicolas Papernot, Ilya Mironov, Kunal Talwar, and Li Zhang. On the protection of private information in machine learning systems: Two recent approaches. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 1–6, 2017.
- Aggarwal (2005) Charu C. Aggarwal. On k-anonymity and the curse of dimensionality. In Proceedings of the 31st International Conference on Very Large Data Bases, pp. 901–909. VLDB Endowment, 2005.
- Bafna & Ullman (2017) Mitali Bafna and Jonathan Ullman. The price of selection in differential privacy. In Proceedings of the 2017 Conference on Learning Theory (COLT), volume 65 of Proceedings of Machine Learning Research, pp. 151–168, July 2017.
- Bassily et al. (2014) Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), pp. 464–473, 2014. ISBN 978-1-4799-6517-5.
- Bhaskar et al. (2010) Raghav Bhaskar, Srivatsan Laxman, Adam Smith, and Abhradeep Thakurta. Discovering frequent patterns in sensitive data. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 503–512. ACM, 2010.
- Biggio et al. (2013) Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387–402, 2013.
- Bindschaedler et al. (2017) Vincent Bindschaedler, Reza Shokri, and Carl A Gunter. Plausible deniability for privacy-preserving data synthesis. Proceedings of the VLDB Endowment, 10(5), 2017.
- Bun & Steinke (2016) Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography Conference (TCC), pp. 635–658, 2016.
- Bun et al. (2017) Mark Bun, Thomas Steinke, and Jonathan Ullman. Make up your mind: The price of online queries in differential privacy. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1306–1325. SIAM, 2017.
- Chaudhuri et al. (2011) Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(Mar):1069–1109, 2011.
- Cohn et al. (1994) David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine learning, 15(2):201–221, 1994.
- Dwork & Roth (2014) Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
- Dwork & Rothblum (2016) Cynthia Dwork and Guy N Rothblum. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016.
- Dwork et al. (2006) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Third Conference on Theory of Cryptography (TCC), volume 3876, pp. 265–284, 2006.
- Dwork et al. (2010) Cynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 51–60, 2010.
- Hamm et al. (2016) Jihun Hamm, Yingjun Cao, and Mikhail Belkin. Learning privately from multiparty data. In International Conference on Machine Learning (ICML), pp. 555–563, 2016.
- Hanneke (2014) Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3):131–309, 2014.
- Hardt & Rothblum (2010) Moritz Hardt and Guy N Rothblum. A multiplicative weights mechanism for privacy-preserving data analysis. In 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 61–70, 2010.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
- Kohavi (1996) Ron Kohavi. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pp. 202–207, 1996.
- Liu et al. (2017) Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q Nelson, Greg S Corrado, et al. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442, 2017.
- McMahan et al. (2017) H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private language models without losing accuracy. arXiv preprint arXiv:1710.06963, 2017.
- McSherry & Talwar (2007) Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 94–103, 2007.
- Mironov (2017) Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 263–275, 2017.
- Miyato et al. (2017) Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.
- Narayanan & Shmatikov (2008) Arvind Narayanan and Vitaly Shmatikov. Robust de-anonymization of large sparse datasets. In Proceedings of the 2008 IEEE Symposium on Security and Privacy, pp. 111–125. IEEE, 2008.
- Netzer et al. (2011) Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, pp. 5, 2011.
- Nissim et al. (2007) Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In Proceedings of the Thirty-ninth Annual ACM Symposium on Theory of Computing (STOC), pp. 75–84, 2007.
- Papernot et al. (2017) Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
- Pathak et al. (2010) Manas Pathak, Shantanu Rane, and Bhiksha Raj. Multiparty differential privacy via aggregation of locally trained classifiers. In Advances in Neural Information Processing Systems, pp. 1876–1884, 2010.
- Roth & Roughgarden (2010) Aaron Roth and Tim Roughgarden. Interactive privacy via the median mechanism. In Proceedings of the Forty-second ACM Symposium on Theory of Computing (STOC), pp. 765–774, 2010.
- Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242, 2016.
- Shokri et al. (2017) Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy, pp. 3–18. IEEE, 2017.
- Song et al. (2013) Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In 2013 IEEE Global Conference on Signal and Information Processing, pp. 245–248, 2013.
- Steinke & Ullman (2017) Thomas Steinke and Jonathan Ullman. Tight lower bounds for differentially private selection. In 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 552–563, 2017.
- Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
- Tramèr et al. (2016) Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction APIs. In USENIX Security Symposium, pp. 601–618, 2016.
- van Erven & Harremoës (2014) Tim van Erven and Peter Harremoës. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797–3820, July 2014.
- Zhang et al. (2017) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
Appendix A Appendix: Privacy Analysis
In this appendix, we provide the proofs of Theorem 3 and of the propositions stated in Section 4.1. Moreover, we present a proposition that provides optimal values of μ1 and μ2 to apply towards Theorem 3 for the GNMax mechanism. We start off with a statement about the Rényi differential privacy guarantee of GNMax.
The GNMax aggregator M_σ guarantees (λ, λ/σ²)-RDP for all λ ≥ 1.
The result follows from observing that M_σ can be decomposed into applying the argmax operator to a noisy histogram resulting from adding Gaussian noise to each dimension of the original histogram. The Gaussian mechanism satisfies (λ, λ/(2σ²))-RDP for queries of unit ℓ2-sensitivity (Mironov, 2017), and since each teacher may change two counts (incrementing one and decrementing the other), the squared ℓ2-sensitivity of the histogram is 2, so the overall RDP guarantee is (λ, λ/σ²), as claimed. ∎
For a GNMax aggregator M_σ, the teachers' votes histogram n̄ = (n_1, …, n_m), and for any i* ∈ [m], we have Pr[M_σ(D) ≠ i*] ≤ q(n̄), where q(n̄) ≜ (1/2) Σ_{i ≠ i*} erfc((n_{i*} − n_i)/(2σ)).
Recall that M_σ(D) = argmax_i (n_i + Z_i), where the Z_i are independent N(0, σ²) random variables. Then for any i* ∈ [m], we have

Pr[M_σ(D) ≠ i*] = Pr[∃ i ≠ i* : n_i + Z_i > n_{i*} + Z_{i*}]
≤ Σ_{i ≠ i*} Pr[n_i + Z_i > n_{i*} + Z_{i*}]
= Σ_{i ≠ i*} Pr[Z_i − Z_{i*} > n_{i*} − n_i]
= (1/2) Σ_{i ≠ i*} erfc((n_{i*} − n_i)/(2σ)),

where the last equality follows from the fact that Z_i − Z_{i*} is a Gaussian random variable with mean zero and variance 2σ². ∎
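This union bound is easy to sanity-check numerically. The sketch below computes q(n̄) via erfc and compares it against a Monte Carlo estimate of the GNMax error probability; the vote counts and σ are arbitrary illustrative values.

```python
import math
import numpy as np

def q_upper_bound(votes, sigma):
    """Union bound on Pr[GNMax output != plurality]:
    q(n) <= 0.5 * sum_{i != i*} erfc((n_{i*} - n_i) / (2 * sigma))."""
    votes = np.asarray(votes, dtype=float)
    i_star = int(np.argmax(votes))
    gaps = votes[i_star] - np.delete(votes, i_star)
    return 0.5 * sum(math.erfc(g / (2.0 * sigma)) for g in gaps)

def q_monte_carlo(votes, sigma, trials=20000, seed=0):
    """Empirical estimate of the same error probability."""
    rng = np.random.default_rng(seed)
    votes = np.asarray(votes, dtype=float)
    noisy = votes + rng.normal(0.0, sigma, size=(trials, votes.size))
    return float(np.mean(np.argmax(noisy, axis=1) != np.argmax(votes)))
```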
We now present a precise statement of Theorem 3.
Let M be a randomized algorithm with (μ1, ε1)-RDP and (μ2, ε2)-RDP guarantees and suppose that there exists a likely outcome i* given a dataset D and a bound q̃ ≤ 1 such that q̃ ≥ Pr[M(D) ≠ i*]. Additionally suppose that λ ≤ μ1 and q̃ ≤ e^((μ2−1)ε2) / (μ1/(μ1−1) · μ2/(μ2−1))^(μ2). Then, for any neighboring dataset D′ of D, we have:

D_λ(M(D) ‖ M(D′)) ≤ (1/(λ−1)) · log((1−q̃) · A(q̃, μ2, ε2)^(λ−1) + q̃ · B(q̃, μ1, ε1)^(λ−1)),

where A(q̃, μ2, ε2) ≜ (1−q̃) / (1 − (q̃ e^(ε2))^((μ2−1)/μ2)) and B(q̃, μ1, ε1) ≜ e^(ε1) / q̃^(1/(μ1−1)).
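Assuming A and B take the standard forms from the PATE analysis, the data-dependent bound can be evaluated directly. The sketch below does so; the caller-supplied values are hypothetical and must satisfy the theorem's preconditions (λ ≤ μ1 and a sufficiently small q̃).

```python
import math

def data_dependent_rdp(q, lam, mu1, eps1, mu2, eps2):
    """Data-dependent RDP bound at order lam, given an upper bound q on
    Pr[M(D) != i*] and auxiliary RDP guarantees at orders mu1 and mu2.
    Valid only when lam <= mu1 and q satisfies the theorem's precondition."""
    A = (1 - q) / (1 - (q * math.exp(eps2)) ** ((mu2 - 1) / mu2))
    B = math.exp(eps1) / q ** (1 / (mu1 - 1))
    return (1 / (lam - 1)) * math.log((1 - q) * A ** (lam - 1)
                                      + q * B ** (lam - 1))
```

When teacher consensus is strong (q tiny), this bound is far below the data-independent cost at the same order, which is what makes the aggregate answers cheap.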
Before we proceed to the proof, we introduce some simplifying notation. For a randomized mechanism M and neighboring datasets D and D′, we define p_i ≜ Pr[M(D) = i] and p′_i ≜ Pr[M(D′) = i] for each outcome i, as well as q ≜ Pr[M(D) ≠ i*] and q′ ≜ Pr[M(D′) ≠ i*]. As the proof involves working with the RDP bounds in the exponent, we use the guarantees in the form exp((μ1−1)ε1) and exp((μ2−1)ε2). Finally, we define the following shortcuts: A ≜ A(q; μ2, ε2) and B ≜ B(q; μ1, ε1), and note that q = Σ_{i≠i*} p_i and q ≤ q̃.
From the definition of Rényi differential privacy, the (μ1, ε1)-RDP guarantee implies

Σ_{i≠i*} p_i (p_i/p′_i)^(μ1−1) ≤ exp((μ1−1)ε1).

Since λ ≤ μ1, the function x ↦ x^((μ1−1)/(λ−1)) is convex. Applying Jensen's Inequality we have the following:

Σ_{i≠i*} p_i (p_i/p′_i)^(λ−1) ≤ q · (exp((μ1−1)ε1)/q)^((λ−1)/(μ1−1)) = q · B^(λ−1).

Next, by the bound at order μ2, we have

exp((μ2−1) D_{μ2}(M(D′) ‖ M(D))) ≤ exp((μ2−1)ε2).

By the data processing inequality of Rényi divergence, applied to the event {M(·) ≠ i*}, we have

(q′)^(μ2) q^(1−μ2) ≤ exp((μ2−1) D_{μ2}(M(D′) ‖ M(D))),

which implies q′ ≤ (q e^(ε2))^((μ2−1)/μ2) and thus

p_{i*} (p_{i*}/p′_{i*})^(λ−1) = (1−q) ((1−q)/(1−q′))^(λ−1) ≤ (1−q) · A^(λ−1).

Summing the two contributions yields

exp((λ−1) D_λ(M(D) ‖ M(D′))) = Σ_i p_i^λ (p′_i)^(1−λ) ≤ (1−q) A^(λ−1) + q B^(λ−1).     (6)
Although Equation (6) is very close to the corresponding statement in the theorem's claim, one subtlety remains. The bound (6) applies to the exact probability q ≜ Pr[M(D) ≠ i*]. In the theorem statement, and in practice, we can only derive an upper bound q̃ ≥ q. The last step of the proof requires showing that the expression in Equation (6) is monotone in the range of values of q that we care about.
Lemma (Monotonicity of the bound).
Let the functions f1 and f2 be defined by f1(q) ≜ (1−q) · A(q; μ2, ε2)^(λ−1) and f2(q) ≜ q · B(q; μ1, ε1)^(λ−1).
Then f ≜ f1 + f2 is increasing in q over the range where the theorem's preconditions hold.
Taking the derivative of f1, we have:
We intend to show that:
For and , define as:
We claim that is increasing in and therefore , and prove it by showing the partial derivative of with respect to is non-negative. Take a derivative with respect to as:
To see why is non-negative in the respective ranges of and , note that:
(in the respective ranges of the two variables).
Consider . Since and , we have and hence
Therefore we can set and apply the fact that for all to get
as required by (7).
Taking the derivative of f2, we have:
Combining the two terms together, we have:
For this derivative to be non-negative we need:
So f is increasing over the relevant range of q. This means that for q ≤ q̃, we have f(q) ≤ f(q̃). This completes the proof of the lemma and that of the theorem. ∎
Theorem 3 yields data-dependent Rényi differential privacy bounds for any values of μ1 and μ2 larger than λ. The following proposition simplifies the search by calculating optimal higher moments μ1 and μ2 to apply towards Theorem 3 for the GNMax mechanism with variance σ².