Differentially Private Variational Dropout

11/30/2017 ∙ by Beyza Ermis, et al. ∙ Boğaziçi University

Deep neural networks, with their large number of parameters, are highly flexible learning systems. This high flexibility brings with it serious problems such as overfitting, and regularization is used to address this problem. A currently popular and effective regularization technique for controlling overfitting is dropout. Often, the large data collections required for neural networks contain sensitive information such as the medical histories of patients, and the privacy of the training data should be protected. In this paper, we modify the recently proposed variational dropout technique, which provides an elegant Bayesian interpretation of dropout, and show that the intrinsic noise of variational dropout can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation, since multiple iterations increase the amount of noise that must be added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates of the overall privacy loss. We demonstrate the accuracy of our privacy-preserving variational dropout algorithm on benchmark datasets.


1 Introduction

Deep neural networks (DNN) have recently generated significant interest, largely due to their successes in several important learning applications, including image classification, language modeling, and many more (e.g., [1, 2, 3]). The success of neural networks is directly related to the availability of large and representative datasets for training. However, these datasets are often collected from people and describe their tastes, behavior, and medical histories, which presents obvious privacy issues. Their usage requires methods that provide precise privacy guarantees while meeting the demands of the applications.

Overfitting is another challenge in deep neural networks, since DNNs can model complex prediction functions using a large number of parameters. It is often difficult to optimize these functions due to the potentially large number of local minima in the parameter space, and standard optimization techniques are prone to getting stuck in a local minimum that might be far from the global optimum. A popular regularization technique to avoid such local minima is dropout [4, 5, 6], which introduces noise into the model and optimizes the loss function in a stochastic setting. Recently, it was shown that variational dropout can be treated as a generalization of Gaussian dropout [7] and that this method can be used to tune each weight's individual dropout rate [8]. Moreover, regularization techniques, and dropout in particular, may hide details of the training data. These features of dropout inspire us to use it to provide a theoretical guarantee of privacy protection in neural networks.

We provide a general framework for a privacy-preserving variational dropout algorithm by exploiting the inherent randomization of dropout. Differential privacy (DP) is currently a widely accepted privacy definition [9, 10], and we use it to formalize the privacy protection of the proposed algorithm. The main principle of DP is to ensure that an adversary cannot reliably infer whether or not a particular individual is participating in a database, even with unlimited computational power and access to every entry except that individual's data. This can be accomplished by adding noise to the algorithm at different stages, such as perturbing the data itself or changing the objective function being optimized. To design efficient differentially private algorithms, one needs a noise injection mechanism that offers a good trade-off between privacy and utility. However, iterative algorithms such as variational inference accumulate privacy loss at each access to the training data, and the number of iterations required to guarantee accurate posterior estimates leads to a high cumulative privacy loss. Therefore, our algorithm uses the zCDP composition analysis [11], inspired by concentrated differential privacy (CDP) [12], a recently proposed relaxation of differential privacy. CDP is well suited to iterative algorithms since it provides high-probability bounds on the cumulative privacy loss and requires much less added noise for the same expected privacy guarantee compared to standard DP.

In this paper, we study variational dropout in the setting where individual dropout rates are tuned for each weight of the neural network, in order to provide a measurable privacy guarantee. Our main contributions can be summarized as follows:

  • We first extend the dropout algorithm to protect the privacy of the training data of DNNs. We use the intrinsic additive noise of the recently proposed variational dropout algorithm [8], and then analyze under what conditions dropout helps in training DNNs within a modest privacy budget.

  • In order to use the privacy budget more efficiently over many iterations, our approach uses the zCDP composition combined with the privacy amplification effect due to subsampling of the data, which significantly decreases the amount of additive noise required for the same expected privacy guarantee compared to the standard DP analysis.

  • We empirically show that, for general single-hidden-layer neural network models, dropout helps to regularize the network and improves accuracy while providing (ε, δ)-DP and zCDP. As our experiments illustrate, variational dropout with zCDP outperforms both the standard DP analysis and state-of-the-art algorithms, especially when the privacy budget is low.

The rest of the paper is organized as follows. In Section 2, we survey the related studies on differentially private deep neural networks and we provide an overview of the relevant ingredients in Section 3. We formalize our setup, present the differentially private variational dropout algorithm and provide a theoretical guarantee for differential privacy on the presented model in Section 4. Section 5 presents our experimental results on some real datasets. We conclude with some open directions in Section 6.

2 Related Work

Differential privacy has been actively studied in many machine learning and data mining problems. Dwork et al. [10] cover much of the earlier theoretical work, and Sarwate et al. [13] review differentially private signal processing and machine learning studies. Problems studied in the literature range from recommender systems [14] and classification [15] to empirical risk minimization applications [16] such as logistic regression [17] and support vector machines [18].

There are a number of works that address deep learning under differential privacy. Recently, Shokri and Shmatikov [19] designed a system that enables multiple parties to train a neural network model without sharing their datasets. Their key contribution is the selective sharing of model parameters during training, based on perturbing the gradients of SGD. Phan et al. [20] proposed a different approach towards differentially private deep learning that focuses on learning autoencoders; privacy relies on perturbing the objective functions of these autoencoders. Most recently, Papernot et al. [21] proposed a method where privacy-preserving models are learned locally from disjoint datasets and then combined in a privacy-preserving fashion. Our work is most closely related to that of Abadi et al. [22]. They developed the moments accountant method to accumulate the privacy cost, which provides a tighter bound on the privacy loss than previous composition methods. Using the moments accountant, they propose a differentially private SGD algorithm that trains a neural network by perturbing the gradients in SGD.

Two distinct types of mechanisms have been proposed for differentially private variational inference: perturbing the sufficient statistics of an exponential family model [23] and perturbing the gradients in the optimization of variational inference [24]. The second mechanism [24] uses the moments accountant to perturb the gradients. Our goal is to apply concentrated DP [12, 11], which is closely related to the moments accountant, to the gradient perturbation mechanism. We exploit the intrinsic randomized noise of variational dropout and derive bounds similar to the moments accountant to strengthen the privacy guarantee in neural networks.

3 Background

3.1 Differential Privacy

A natural notion of privacy protection prevents inference about specific records by requiring a randomized query response mechanism that yields similar distributions on responses of similar datasets. Formally, for any two possible input datasets D and D' with edit (Hamming) distance d(D, D'), and any subset of possible responses R, a randomized algorithm M satisfies (ε, δ)-differential privacy if:

\[ \Pr[\mathcal{M}(D) \in R] \le e^{\epsilon\, d(D, D')} \Pr[\mathcal{M}(D') \in R] + \delta. \tag{1} \]

If D and D' are the same except for one data point, then d(D, D') = 1. (ε, δ)-differential privacy ensures that for all adjacent D, D', the absolute value of the privacy loss is bounded by ε with probability at least 1 − δ. Here, ε controls the maximum amount of information gain about an individual's data given the output of the algorithm. When the positive parameter ε is smaller, the mechanism provides a stronger privacy guarantee [9].
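One common way to satisfy this definition for a vector-valued query is the Gaussian mechanism, which adds calibrated normal noise to the query answer. The sketch below, in NumPy, uses the classical calibration σ = √(2 ln(1.25/δ)) · Δ / ε, which is valid for ε < 1; it is given purely for illustration of the definition and is not the calibration used later in the paper, which relies on zCDP composition instead.

```python
import numpy as np

def gaussian_mechanism(query_value, l2_sensitivity, eps, delta, rng):
    """Release a vector-valued query under (eps, delta)-DP via the Gaussian mechanism.

    Uses the classical calibration sigma = sqrt(2 * ln(1.25/delta)) * Delta / eps,
    valid for eps < 1. The paper's own calibration (via zCDP) differs.
    """
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return query_value + rng.normal(0.0, sigma, size=np.shape(query_value))

rng = np.random.default_rng(0)
avg_grad = np.array([0.2, -0.1, 0.4])   # toy query with L2 sensitivity 1
print(gaussian_mechanism(avg_grad, 1.0, 0.5, 1e-5, rng))
```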

Concentrated Differential Privacy:

Concentrated differential privacy (CDP) is a recent variant of differential privacy, proposed to make privacy-preserving iterative algorithms more practical than under DP while still providing strong privacy guarantees. The CDP framework treats the privacy loss of an outcome o,

\[ L^{(o)} = \log \frac{\Pr[\mathcal{M}(D) = o]}{\Pr[\mathcal{M}(D') = o]}, \tag{2} \]

as a random variable. Two CDP notions have been proposed in the literature. The first is (μ, τ)-mCDP [12], where μ is the mean of this privacy loss; the second is ρ-zCDP, proposed by Bun and Steinke [11]. ρ-zCDP arises from a connection between the moment generating function of the privacy loss and the Rényi divergence between the distributions of M(D) and M(D'). Most DP mechanisms and applications can be characterized in terms of zCDP, but not in terms of mCDP; so we use zCDP as a tool for analyzing composition under the (ε, δ)-DP privacy definition, for a fair comparison between the CDP and DP analyses.

3.2 Variational Inference

Assume we are given data D = {d_1, ..., d_N}, where d_n = (x_n, y_n), with input object/feature x_n and output label y_n ∈ Y, with Y being the discrete output label space. A model characterizes the relationship from x to y with parameters (or weights) w. Our goal is to tune the parameters of a model p(y | x, w) that predicts y given x and w. Bayesian inference in such a model consists of updating some initial belief over the parameters w, in the form of a prior distribution p(w), after observing the data D, into an updated belief over these parameters in the form of the posterior distribution p(w | D). The posterior distribution of a set of N items is p(w | D) ∝ p(w) p(D | w), where the corresponding data likelihood is p(D | w) = ∏_{n=1}^{N} p(y_n | x_n, w).

Computing the posterior distribution is often difficult in practice, as it requires the computation of analytically intractable integrals, so we need approximation techniques. One such technique is variational inference [25], which turns the inference problem into an optimization problem that is often easier to tackle and whose convergence is easier to monitor. In this approach, the true posterior p(w | D) is approximated with a variational distribution q_φ(w) that has a simpler form than the posterior. The optimal value of the variational parameters φ is obtained by minimizing the Kullback-Leibler (KL) divergence between q_φ(w) and p(w | D). This is equivalent to maximizing the so-called evidence lower bound (ELBO). Given the joint distribution p(D, w) = p(D | w) p(w), the ELBO of q_φ(w) is given as follows:

\[ \mathcal{L}(\phi) = \mathbb{E}_{q_\phi(w)}[\log p(\mathcal{D} \mid w)] - D_{KL}(q_\phi(w) \,\|\, p(w)) \tag{3} \]
\[ \phantom{\mathcal{L}(\phi)} = L_{\mathcal{D}}(\phi) - D_{KL}(q_\phi(w) \,\|\, p(w)), \tag{4} \]

where the expectation is taken w.r.t. q_φ(w) and L_D(φ) = E_{q_φ(w)}[log p(D | w)] is the expected log-likelihood.

3.2.1 Stochastic Variational Inference

An efficient method for minibatch-based optimization of the variational lower bound is stochastic variational inference (SVI), introduced in [26]. The basic trick in SVI is to parameterize the random weights w as a differentiable function w = f(φ, ε), where ε is a random noise variable, e.g. ε ~ N(0, 1). This parameterization allows us to obtain an unbiased, differentiable, minibatch-based Monte Carlo estimator of the expected log-likelihood and of the lower bound:

\[ L_{\mathcal{D}}(\phi) \simeq L_{\mathcal{D}}^{SGVB}(\phi) = \frac{N}{M} \sum_{m=1}^{M} \log p(y_m \mid x_m, w = f(\phi, \epsilon_m)), \tag{5} \]
\[ \mathcal{L}(\phi) \simeq \mathcal{L}^{SGVB}(\phi) = L_{\mathcal{D}}^{SGVB}(\phi) - D_{KL}(q_\phi(w) \,\|\, p(w)), \tag{6} \]

where (x_m, y_m) ranges over a minibatch of M random data points. The theory of stochastic approximation tells us that the performance of stochastic gradient optimization crucially depends on the variance of the gradients [27]. We follow [7] and use the local reparameterization trick, which reduces the variance of this gradient estimator. The idea is to (implicitly) sample separate weight matrices for each data point inside the minibatch by moving the noise from the weights to the activations [5, 7].
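As a concrete illustration of the local reparameterization trick, the sketch below samples the pre-activations of one dense layer directly from their implied distribution rather than sampling a weight matrix per example. It assumes a fully factorized Gaussian posterior over the weights with per-weight mean theta and variance sigma2 (the posterior form introduced in Section 3.3); the shapes and numeric values are illustrative only.

```python
import numpy as np

def local_reparam_activations(A, theta, sigma2, rng):
    """Sample pre-activations of a dense layer via the local reparameterization trick.

    With a factorized Gaussian posterior over the weights (mean `theta`, variance
    `sigma2`, both of shape (d_in, d_out)), the pre-activations are Gaussian with
    mean A @ theta and variance (A**2) @ sigma2, so they can be sampled directly
    instead of sampling a separate weight matrix for each example.
    """
    mean = A @ theta
    var = (A ** 2) @ sigma2
    return mean + np.sqrt(var) * rng.standard_normal(mean.shape)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))          # toy minibatch of 5 examples
theta = rng.standard_normal((4, 3))
sigma2 = np.full((4, 3), 0.1)
print(local_reparam_activations(A, theta, sigma2, rng).shape)   # (5, 3)
```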

3.3 Variational Dropout

Dropout is one of the most popular regularization techniques for neural networks; it injects multiplicative random noise into the input of each layer during the training procedure. Dropout is formalized as:

\[ B = (A \odot \Xi)\, W, \tag{7} \]

where A denotes the matrix of input features, W is the weight matrix for the current layer, and B denotes the matrix of activations. The symbol ⊙ denotes the elementwise (Hadamard) product of the input matrix with a matrix of independent noise variables Ξ = (ξ_ij). Previous publications [4, 28, 6] show that, by adding noise to the input during optimization, the weight parameters are less likely to overfit to the training data.

At first, Hinton et al. [29] proposed binary dropout, where the elements of Ξ are drawn from a Bernoulli distribution with parameter 1 − p; hence each element of the input matrix is set to zero with probability p, also known as the dropout rate. Afterwards, the same authors proposed Gaussian dropout, showing that continuous noise with the same relative mean and variance works as well or better [6]. It is important to use continuous noise instead of discrete noise, because adding Gaussian noise to the inputs corresponds to putting Gaussian noise on the weights.

Then, Kingma et al. [7] proposed variational dropout, which generalizes Gaussian dropout [6] with continuous noise as a variational method. It allows setting individual dropout rates for each neuron or layer and tuning them with a simple gradient-based method. The main idea of this procedure is to search for the posterior approximation in a specific family of distributions: q(w_ij | θ_ij, α) = N(w_ij | θ_ij, α θ_ij²). That is, putting multiplicative Gaussian noise ξ_ij ~ N(1, α) on a weight w_ij is equivalent to sampling w_ij from q(w_ij | θ_ij, α). Now w_ij becomes a random variable parameterized by θ_ij:

\[ w_{ij} = \theta_{ij}\, \xi_{ij}, \quad \xi_{ij} \sim \mathcal{N}(1, \alpha) \;\Longleftrightarrow\; w_{ij} \sim \mathcal{N}(\theta_{ij}, \alpha\, \theta_{ij}^2). \tag{8} \]

Variational dropout uses q(w | θ, α) as an approximate posterior distribution for a model with a special prior on the weights. The variational parameters θ and α of the distribution q(w | θ, α) are tuned via stochastic variational inference, as described in Section 3.2.1. During dropout training, θ is adapted to maximize the expected log-likelihood (4). For this to be consistent with the optimization of the variational lower bound, the prior on the weights has to be such that D_KL(q(w | θ, α) || p(w)) does not depend on θ. The prior distribution that meets this requirement is the scale-invariant log-uniform prior [7]:

\[ p(\log |w_{ij}|) = \mathrm{const} \;\Longleftrightarrow\; p(|w_{ij}|) \propto \frac{1}{|w_{ij}|}. \tag{9} \]

With this prior, maximization of the variational lower bound (3) becomes equivalent to maximization of the expected log-likelihood with the parameter α fixed. This means that Gaussian dropout training is exactly equivalent to variational dropout with fixed α. However, variational dropout provides a way to train the dropout rate by optimizing the ELBO. Interestingly, the dropout rate α now becomes a variational parameter rather than a hyperparameter, which allows us to train individual dropout rates for each layer, neuron, or even weight [7].
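Under the log-uniform prior, the KL term in (4) has no closed form. A widely used tight approximation is the one fitted by Molchanov et al. [8]; it is reproduced below as a sketch for completeness, and we do not claim it is the exact expression used in this paper's experiments.

```python
import numpy as np

def neg_kl_approx(log_alpha):
    """Approximate -KL(q(w | theta, alpha) || p(w)) per weight as a function of
    log(alpha), using the fit from Molchanov et al. [8] for the log-uniform prior."""
    k1, k2, k3 = 0.63576, 1.87320, 1.48695
    sigmoid = 1.0 / (1.0 + np.exp(-(k2 + k3 * log_alpha)))
    return k1 * sigmoid - 0.5 * np.log1p(np.exp(-log_alpha)) - k1

# Large alpha (heavy dropout) makes the KL penalty vanish; small alpha is penalized.
print(neg_kl_approx(np.array([-4.0, 0.0, 4.0])))
```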

Additive Noise Reparameterization:

Although the local reparameterization trick reduces the variance of the stochastic gradients, the original multiplicative noise ξ_ij = 1 + √α_ij · ε_ij still yields large-variance gradients, and very large values of α_ij correspond to local optima from which it is hard to escape. To avoid such local optima, Kingma et al. [7] only considered the case α ≤ 1, which corresponds to a binary dropout rate p ≤ 0.5. However, the case of large α_ij is very interesting (here we mean a separate α_ij per weight or neuron). A high dropout rate (α_ij → +∞) corresponds to a binary dropout rate that approaches 1; it effectively means that the corresponding weight or neuron is always ignored and can be removed from the model. Molchanov et al. [8] introduced a trick that can train the model within the full range of α_ij by reducing the variance of the gradients even further. The idea is to replace the multiplicative noise term 1 + √α_ij · ε_ij with an exactly equivalent additive noise term σ_ij ε_ij, where σ_ij² = α_ij θ_ij² is treated as a new independent variable:

\[ w_{ij} = \theta_{ij}\,(1 + \sqrt{\alpha_{ij}}\, \epsilon_{ij}) = \theta_{ij} + \sigma_{ij}\, \epsilon_{ij}, \quad \epsilon_{ij} \sim \mathcal{N}(0, 1). \tag{10} \]

After this trick, the ELBO is optimized w.r.t. θ and σ. However, α is still kept and used throughout the paper, since it has a nice interpretation as a dropout rate and can be recovered as α_ij = σ_ij² / θ_ij². In addition to reducing the variance of the gradients, this trick also helps us to propose a novel variational dropout algorithm that satisfies the differential privacy definition by adding independent Gaussian noise to the updates of each weight.
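A minimal sketch of the additive parameterization in (10) follows: weights are sampled as theta + sigma * eps, and alpha keeps its dropout-rate interpretation through alpha = sigma^2 / theta^2. Optimizing log(sigma^2) rather than sigma, and the small constant guarding the division, are illustrative implementation choices rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal((4, 3))     # variational means
log_sigma2 = np.full((4, 3), -2.0)      # optimize log(sigma^2) directly
sigma = np.exp(0.5 * log_sigma2)

# Additive noise parameterization, eq. (10): w = theta + sigma * eps, eps ~ N(0, 1).
eps = rng.standard_normal(theta.shape)
w = theta + sigma * eps

# alpha is recovered from (theta, sigma) and retains its dropout-rate meaning.
alpha = sigma**2 / (theta**2 + 1e-8)
print(w.shape, alpha.mean())
```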

4 Differentially Private Variational Dropout

In this section, we describe our approach toward differentially private training of neural networks and introduce the proposed differentially private variational dropout (DPVD) algorithm. Algorithm 1 outlines our basic method for training a model with parameters φ = (θ, σ) by optimizing the ELBO (3). At each iteration of the training scheme, DPVD takes a minibatch of data, samples the activations using the local reparameterization trick [7], and computes an estimate of the lower bound (3) and its gradient using the additive noise parameterization (10). This stochastic gradient is then used to update the model parameters via an SGD-based optimization method by iteratively applying the following update equation at iteration t:

\[ \phi_{t+1} = \phi_t + \eta_t\, \hat{g}_t, \tag{11} \]

where η_t is the learning rate and ĝ_t is the gradient of the lower bound L(φ_t). Here, the goal is to estimate the variational parameters φ = (θ, σ), where θ_ij is the mean of the weight w_ij as given in Section 3.3. The weights are then computed with the following update rule:

\[ w_{ij} = \theta_{ij} + \sigma_{ij}\, \epsilon_{ij}, \quad \epsilon_{ij} \sim \mathcal{N}(0, 1). \tag{12} \]

The approach presented here regularizes the neural network by adding random noise ε_ij to the weights. To protect the privacy of the training data, the gradients need to be perturbed with Gaussian noise at each iteration; we therefore use the existing random noise to provide privacy.

At each step, we perturb the parameter updates with zero-mean multivariate normal noise with covariance matrix σ²C²I. The algorithm requires several parameters that determine the privacy budget: the sampling frequency q for subsampling within the dataset, the total number of iterations T, and the clipping threshold C are important design decisions. The noise scale σ determines our total ε and depends on the total δ in the privacy budget, and clipping the gradients at the threshold C bounds the sensitivity of the gradient sum by C. The amount of noise is chosen to be equal to the σ in (12).

1:  Inputs: training data D with N examples, number of data passes E, minibatch size m, learning rate η_t, noise scale σ, gradient norm bound C.
2:  Initialize φ_0 randomly
3:  for t = 0 to T − 1 do
4:     Take a random sample B_t of size m with sampling probability q = m/N
5:     Compute gradient for each x_i ∈ B_t: g_t(x_i) ← ∇_φ L(φ_t; x_i)
6:     Clip gradient: ḡ_t(x_i) ← g_t(x_i) / max(1, ‖g_t(x_i)‖₂ / C)
7:     Compute noise: z_t ~ N(0, σ²C²I)
8:     Update parameter: φ_{t+1} ← φ_t + η_t · (1/m) (Σ_{x_i ∈ B_t} ḡ_t(x_i) + z_t)
9:  end for
10:  Output: φ_T.
Algorithm 1 Differentially Private Variational Dropout (DPVD)
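To make Algorithm 1 concrete, the sketch below implements a single DPVD-style update step in NumPy, assuming the per-example gradients of the lower bound are already available as an array and that one noise scale is shared across all parameters. It follows the clip-sum-perturb pattern of Algorithm 1 and is not the paper's reference implementation.

```python
import numpy as np

def dpvd_step(phi, per_example_grads, lr, C, sigma, rng):
    """One DPVD update (sketch): clip per-example gradients of the lower bound,
    sum them, add Gaussian noise with std sigma * C, and take a gradient step.

    phi: flat parameter vector (theta and log sigma^2 stacked), shape (d,).
    per_example_grads: one gradient row per minibatch example, shape (m, d).
    """
    # Clip each per-example gradient to L2 norm at most C (line 6 of Algorithm 1).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)

    # Noisy average of the clipped gradients (lines 7 and 8 of Algorithm 1).
    m = per_example_grads.shape[0]
    noise = sigma * C * rng.standard_normal(phi.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / m

    # Gradient ascent step on the lower bound, as in eq. (11).
    return phi + lr * noisy_grad

rng = np.random.default_rng(0)
phi = np.zeros(10)
grads = rng.standard_normal((32, 10))    # toy per-example gradients
phi = dpvd_step(phi, grads, lr=0.1, C=2.0, sigma=8.0, rng=rng)
```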

In this work, we first calculate the per-iteration privacy budget using the key properties of the advanced composition theorem (Theorem 3.20 of [30]); this method is called DPVD-AC in the experiments. Then, we use a relaxed notion of differential privacy, called zCDP [11], which bounds the moments of the privacy loss random variable; we call this method DPVD-zCDP in the experiments. The moments bound yields a tighter tail bound and consequently allows a higher per-iteration budget than standard DP methods for a given total privacy budget. The description of how we choose the privacy design parameters and calculate the privacy budget is given in Section 4.3 [31].

5 Experiments and Results

We evaluate our approach on two standard benchmark datasets. The MNIST dataset [32] contains 70K hand-written digits (60K for training and 10K for testing). The DIGITS dataset [33] consists of 1797 grayscale images of handwritten digits (1439 for training and 360 for testing). We use a simple feed-forward neural network with ReLU units and a 10-class softmax output for both datasets. All experiments are implemented in Theano [34].

Baseline:

Our baseline models use a single hidden layer with 1000 hidden units. For the MNIST dataset, we use a minibatch size of 600 and reach the baseline accuracy in about 200 epochs. For the DIGITS dataset, we use a minibatch size of 100 and reach the baseline accuracy in about 100 epochs. All of the results in this section are averages over 10 runs.

(a) DPVD-AC  (b) DPVD-zCDP
Figure 1: Comparison of the test accuracies on MNIST for different noise levels and the NP (non-private) case.
Differentially private models:

For the differentially private version, we experiment with the same architecture. To limit sensitivity, we clip the gradient norm of each layer at the threshold C. We report results for three different noise levels σ. For any fixed ε, δ is varied over a range of values; the resulting differences in accuracy are slight, but we still choose the best-performing δ for both datasets. We set the initial learning rate separately for MNIST and for DIGITS and decay it per round, with the decay rate fixed to 1 for both datasets.

(a) DPVD-AC  (b) DPVD-zCDP
Figure 2: Comparison of the test accuracies on DIGITS for different noise levels and the NP (non-private) case.

In the first set of experiments, we investigate the influence of the privacy loss on accuracy. The minibatch sizes were set to 600 and 100 for MNIST and DIGITS, respectively. We ran the algorithm for 200 passes over the data for MNIST and 100 passes for DIGITS. Figures 1(a) and 2(a) report the performance of the DPVD-AC method, and Figures 1(b) and 2(b) report the performance of the DPVD-zCDP method, for different noise levels. These results support the theoretical expectation that prediction accuracy drops as the privacy protection is increased by decreasing ε. A further observation is that variational dropout with the zCDP composition can reach an accuracy very close to the non-private level, especially under reasonably strong privacy guarantees.

(a) MNIST  (b) DIGITS
Figure 3: The noise level σ as a function of the privacy loss ε.

Then, we compare the classification accuracy of models learned using the two variants of the variational dropout algorithm: DPVD-AC and DPVD-zCDP. As mentioned in Section 4.3, zCDP composition provides a tighter bound on the privacy loss compared to the advanced composition theorem. Here we compare them using some concrete values. The noise level σ can be computed from the overall privacy loss ε, the sampling ratio of each minibatch q = m/N, and the number of epochs E (so the number of iterations is T = E/q). For our MNIST and DIGITS experiments, we set q = 0.01, E = 200 and q = 0.05, E = 100, respectively. Then, we compute the value of σ. For example, when ε = 1, the σ values are 8.24 for DPVD-AC and 2.68 for DPVD-zCDP on MNIST, and 9.73 for DPVD-AC and 2.97 for DPVD-zCDP on DIGITS. We can see from Figure 3 that we obtain much lower noise by using zCDP for a fixed privacy loss ε.

(a) MNIST  (b) DIGITS
Figure 4: Test accuracy results of DPVD-AC and DPVD-zCDP for ε = 0.1.

Therefore, for our neural network models with a fixed total privacy budget ε, the amount of noise added is smaller for zCDP, and the test accuracy is higher. Figure 4 compares the DPVD-AC and DPVD-zCDP methods when ε = 0.1 against the non-private model. Both results clearly show that using the zCDP composition further helps in obtaining more accurate results at a comparable level of privacy.

We compare our methods to the most closely related algorithm, proposed by Abadi et al. [22], and to the case where no dropout is used. For the no-dropout case, we use SVI to update the weights of the neural network. We ran all the methods on MNIST and DIGITS with varying ε. Table 1 reports the test accuracies of all methods. The previous experiments already demonstrated that DPVD-zCDP significantly improves the classification accuracy. These results further support this and show that dropout improves the prediction accuracy of differentially private neural networks, especially when the privacy budget is low.

                        MNIST                         DIGITS
                        ε = 10   ε = 1    ε = 0.1     ε = 10   ε = 1    ε = 0.1
DPVD-AC                 0.9462   0.9102   0.8419      0.9361   0.9139   0.8217
DPVD-zCDP               0.9687   0.9327   0.9026      0.9417   0.9278   0.9038
SVI-zCDP (no dropout)   0.9518   0.9126   0.8791      0.9375   0.9107   0.8712
Abadi et al. [22]       0.9701   0.9305   0.8875      0.9450   0.9265   0.8943
Table 1: Comparison of the methods for varying ε. Bold values indicate the best results.
Effect of the parameters:

The classification accuracy of neural networks depends on a number of parameters that must be carefully tuned to optimize performance. For our differentially private models, these factors include the number of hidden units, the number of iterations, the gradient clipping threshold, the minibatch size, and the noise level. In the previous section, we compared the effect of different noise levels on the classification accuracy. Here, we demonstrate the effects of the remaining parameters, varying one parameter at a time while keeping the rest fixed. For the MNIST experiments, we set the parameters as follows: 1000 hidden units, minibatch size of 600, gradient norm bound of 2, initial learning rate of 0.1, 200 epochs, and a privacy budget ε of 1. The results are presented in Figure 5.

(a) Number of hidden units  (b) Number of epochs  (c) Gradient clipping norm  (d) Minibatch size
Figure 5: Effect of the model parameters on the MNIST dataset.

In standard, non-private neural networks, using more hidden units is often preferable and increases the prediction accuracy of the trained model. For differentially private training, using more hidden units leads to more noise being added at each update, due to the increase in the sensitivity of the gradient. However, increasing the number of hidden units does not always decrease accuracy, since larger networks are more tolerant to noise. Figure 5(a) shows that accuracy is very similar across the range of hidden-unit counts we tried and peaks at 1000 units.

The number of epochs E (and hence the number of iterations T = E/q) needs to be sufficient but not too large. The privacy cost under zCDP grows with the number of iterations, but this growth is more tolerable than under standard DP composition. We tried several values of E and observed that we obtained the best results when E is between 100 and 200 for MNIST.

Tuning the gradient clipping threshold C depends on the details of the model. If the threshold is too small, the clipped gradient may point in a very different direction from the true gradient; on the other hand, increasing the threshold forces us to add a larger amount of noise to the gradients. Abadi et al. [22] proposed that a good way to choose a value for C is to take the median of the norms of the unclipped gradients. In our experiments, we tried several values of C. Figure 5(c) shows that our model tolerates the noise up to C = 4, after which the accuracy decreases marginally.
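The median heuristic from Abadi et al. [22] mentioned above can be sketched in one line; note that in practice this statistic should itself be estimated in a privacy-aware way (for example on public or held-out data), since the gradient norms depend on the private training set. The helper name below is hypothetical.

```python
import numpy as np

def suggest_clip_threshold(per_example_grads):
    # Heuristic from Abadi et al. [22]: use the median of the unclipped
    # per-example gradient L2 norms as the clipping threshold C.
    return float(np.median(np.linalg.norm(per_example_grads, axis=1)))
```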

Finally, we monitor the effect of the minibatch size. In DP settings, choosing a smaller minibatch size leads to running more iterations; however, the added noise has a smaller relative effect for a larger minibatch. Figure 5(d) shows that relatively larger minibatch sizes give better results. Empirically, we obtain the best accuracy when the minibatch size m is around 600 (so the sampling frequency q = 0.01). Due to space limitations, we only present the results on the MNIST data; the results on the DIGITS data show very similar behavior.

6 Conclusion

We introduced a differentially private variational dropout method that outputs privatized results with accuracy close to the non-private inference results, especially under reasonably strong privacy guarantees. To make effective use of the privacy budget over multiple iterations, we proposed calculating the cumulative privacy cost using zCDP. We then showed how to perform the variational dropout method in a private setting. We illustrated the effectiveness of our algorithm on several benchmark datasets. A natural next step is to extend the approach to distributed training of neural networks. The algorithm proposed in this paper is generic and can be applied to any neural network model; we leave its application to other variants of neural networks, such as convolutional and recurrent neural networks, to future work.

References