On Connecting Stochastic Gradient MCMC and Differential Privacy

Bai Li et al., December 25, 2017

Significant success has been realized recently in applying machine learning to real-world applications. There are also corresponding concerns about the privacy of training data, which relates to data security and confidentiality. Differential privacy provides a principled and rigorous privacy guarantee for machine learning models. While it is common to design a model satisfying a required differential-privacy property by injecting noise, it is generally hard to balance the trade-off between privacy and utility. We show that stochastic gradient Markov chain Monte Carlo (SG-MCMC) -- a class of scalable Bayesian posterior sampling algorithms proposed recently -- satisfies strong differential privacy with carefully chosen step sizes. We develop theory on the performance of the proposed differentially-private SG-MCMC method. We conduct experiments to support our analysis, and show that a standard SG-MCMC sampler without any modification (under a default setting) can reach state-of-the-art performance in terms of both privacy and utility for Bayesian learning.


1 Introduction

Utilizing large amounts of data has helped machine learning algorithms achieve significant success in many real applications. However, such success also raises privacy concerns. For example, a diagnostic system based on machine learning algorithms may be trained on a large quantity of patient data, such as medical images, and it is important to protect such training data from adversarial attackers (Shokri et al., 2017). Even the most widely used machine learning algorithms, such as deep learning, can implicitly memorize the training data (Papernot et al., 2016), meaning that the learned model parameters implicitly contain information that could violate the privacy of the training data. Such algorithms may therefore be readily attacked.

The above potential model vulnerability can be addressed by differential privacy (DP), a general notion of algorithm privacy (Dwork, 2008; Dwork et al., 2006). It is designed to provide a strong privacy guarantee for general learning procedures, such as statistical analysis and machine learning algorithms, that involve private information.

Among popular machine learning approaches, Bayesian inference has seen significant recent success, due to its capacity to leverage expert knowledge and provide uncertainty estimates. Notably, the recently developed stochastic gradient Markov chain Monte Carlo (SG-MCMC) technique enables scalable Bayesian inference in a big-data setting. While there have been many extensions of SG-MCMC, little work has studied the privacy properties of such algorithms. Specifically, Wang et al. (2015) showed that an SG-MCMC algorithm with appropriately chosen step sizes preserves differential privacy. In practice, however, their analysis requires the step size to be extremely small to limit the risk of violating privacy. Such a small step size is not practically useful for training models with non-convex posterior landscapes, which is the most common case for recent machine learning models. More details of this issue are discussed in Section 3.1.

On the other hand, Abadi et al. (2016) introduced a new privacy-accounting method, which allows one to keep better track of the privacy loss (defined in Section 2.1) of sequential algorithms. Further, they proposed a differentially-private stochastic gradient descent (DP-SGD) method for training machine learning models privately. Although they showed a significant improvement in calculating the privacy loss, there is no theory showing that DP-SGD has guaranteed performance under privacy constraints.

In this paper, building on the privacy-accounting method, we show that using SG-MCMC to train large-scale machine learning models is sufficient to achieve strong differential privacy. Specifically, we combine the advantages of the aforementioned works and prove that SG-MCMC methods naturally satisfy the definition of differential privacy, even without changing their default step sizes, thus allowing both good utility and strong privacy in practice.

2 Preliminaries

The following notation is used throughout the paper. An input database containing N data points is represented as D = {x_1, ..., x_N}, where x_i ∈ X. The parameters of interest in the model are denoted θ ∈ R^r; e.g., these may be the weights of a deep neural network. The r × r identity matrix is denoted I_r.

2.1 Differential Privacy

The concept of differential privacy was proposed by Dwork (2008) to characterize the privacy-preserving property of a randomized mechanism (algorithm) applied to two adjacent datasets.

Definition 1 (Adjacent Datasets)

Two datasets D and D′ are called adjacent if they differ by only one record, e.g., D = {x_1, ..., x_i, ..., x_N} and D′ = {x_1, ..., x′_i, ..., x_N} for some i, where x_i ∈ D and x′_i ∈ D′ are the differing records.

Definition 2 (Differential Privacy)

Given a pair of adjacent datasets D and D′, a randomized mechanism M mapping from the data space to its range R satisfies (ε, δ)-differential privacy if, for all measurable S ⊆ R and all adjacent D and D′,

    Pr[M(D) ∈ S] ≤ e^ε Pr[M(D′) ∈ S] + δ,

where Pr[E] denotes the probability of event E, and ε and δ are two positive real numbers that quantify the loss of privacy. When δ = 0, we say the mechanism satisfies ε-differential privacy.

Differential privacy places constraints on the difference between the outputs of a random mechanism on two adjacent inputs D and D′. If D and D′ differ by only one record x_i, then, by observing the outputs, an outside attacker is unable to tell whether an output resulted from D or from D′, as long as ε and δ are small enough (making the two probabilities close to each other). Thus, the existence of the record x_i is protected. Since the record in which the two datasets differ is arbitrary, the privacy protection applies to all records.
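
For a concrete sense of scale (an illustrative calculation, with numbers chosen only for this example): with ε = 0.1 and δ = 10⁻⁵, the definition requires

    Pr[M(D) ∈ S] ≤ e^{0.1} Pr[M(D′) ∈ S] + 10⁻⁵ ≈ 1.105 · Pr[M(D′) ∈ S] + 10⁻⁵,

so observing the output can shift an attacker's relative belief about the presence of x_i by at most roughly 10%, up to the small additive slack δ.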

To better describe the randomness of the outputs of M on inputs D and D′, we define a random variable called the privacy loss.

Definition 3 (Privacy Loss)

Let M be a randomized mechanism, and let D and D′ be a pair of adjacent datasets. Let aux denote any auxiliary input that does not depend on D or D′. For an outcome o from the mechanism M, the privacy loss at o is defined as

    c(o; M, aux, D, D′) ≜ log ( Pr[M(aux, D) = o] / Pr[M(aux, D′) = o] ).

It can be shown that (ε, δ)-DP is equivalent to a tail bound on the distribution of the corresponding privacy loss random variable (Abadi et al., 2016) (see Theorem 1 in the next section); this random variable is therefore an important tool for quantifying the privacy loss of a mechanism.

2.2 Moments Accountant Method

A common approach for achieving differential privacy is to introduce random noise to hide the existence of any particular data point. For example, the Laplace and Gaussian mechanisms (Dwork et al., 2014) add i.i.d. Laplace and Gaussian noise, respectively, to the output of an algorithm. While a large amount of noise makes an algorithm differentially private, it may sacrifice the utility of the algorithm. Therefore, in such paradigms, it is important to calculate the smallest amount of noise required to achieve a given level of differential privacy.
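
As an illustrative sketch (ours, not part of the original paper), the Gaussian mechanism can be calibrated to the L2 sensitivity of the released statistic using the classical bound σ ≥ √(2 ln(1.25/δ)) Δ₂f / ε from Dwork et al. (2014), valid for ε ∈ (0, 1); the function and variable names below are our own:

```python
import math
import numpy as np

def gaussian_mechanism(f_value, l2_sensitivity, eps, delta):
    """Release f(D) with Gaussian noise calibrated to its L2 sensitivity,
    using sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps (Dwork et al., 2014)."""
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * l2_sensitivity / eps
    return f_value + np.random.normal(0.0, sigma, size=np.shape(f_value))

# Example: privately release the mean of N records bounded in [0, 1];
# the L2 sensitivity of the mean is 1/N.
records = np.random.rand(1000)
noisy_mean = gaussian_mechanism(records.mean(), 1.0 / len(records), eps=0.5, delta=1e-5)
print(noisy_mean)
```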

The moments accountant method proposed in Abadi et al. (2016) keeps track of a bound on the moments of the privacy loss random variable defined above. As a result, it allows one to calculate the amount of noise needed to keep the privacy loss under a given threshold.

Definition 4 (Moments Accountant)

Let M be a randomized mechanism, and let D and D′ be a pair of adjacent datasets. Let aux denote any auxiliary input that is independent of both D and D′. The moments accountant with an integer parameter λ is defined as

    α_M(λ) ≜ max_{aux, D, D′} α_M(λ; aux, D, D′),

where α_M(λ; aux, D, D′) ≜ log E[exp(λ c(o; M, aux, D, D′))] is the log of the moment generating function evaluated at λ, i.e., the λ-th moment of the privacy loss random variable.

The following results on the moments accountant and on the Gaussian mechanism with random sampling are proved in (Abadi et al., 2016).

Theorem 1

[Composability] Suppose that a mechanism M consists of a sequence of adaptive mechanisms M_1, ..., M_k, where M_i : ∏_{j=1}^{i−1} R_j × X^N → R_i and R_i is the range of the i-th mechanism, i.e., M = M_k ∘ ⋯ ∘ M_1, with ∘ the composition operator. Then, for any λ,

    α_M(λ) ≤ Σ_{i=1}^k α_{M_i}(λ),

where the auxiliary input for α_{M_i} is defined as all the previous mechanisms' outputs (o_1, ..., o_{i−1}) for i ≥ 2, and α_M takes the outputs (o_1, ..., o_{k−1}) as its auxiliary input.

[Tail bound] For any ε > 0, the mechanism M is (ε, δ)-DP for

    δ = min_λ exp( α_M(λ) − λ ε ).

For the rest of this paper, for simplicity we only consider mechanisms that output a real-valued vector; that is, M : X^N → R^p.

Using the properties above, the following lemma about the moments accountant has been proven in (Abadi et al., 2016):

Lemma 2

Suppose that f : X → R^p with ‖f(·)‖₂ ≤ 1. Let σ ≥ 1 and let J be a mini-batch sample with sampling probability q, i.e., q = n/N with minibatch size n. If q < 1/(16σ), then for any positive integer λ ≤ σ² ln(1/(qσ)), the mechanism M(D) = Σ_{i∈J} f(x_i) + N(0, σ² I_p) satisfies

    α_M(λ) ≤ q² λ(λ+1) / ((1−q) σ²) + O(q³ λ³ / σ³).

In the following, we build our analysis of the differentially-private SG-MCMC based on this lemma.
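
To make the accounting concrete, the following sketch (our illustration, not the authors' code) composes a per-iteration log-moment bound over T iterations via the composability property of Theorem 1, then converts it to δ with the tail bound; it uses only the leading-order term of the Lemma 2 bound for the subsampled Gaussian mechanism, whereas the exact accountant of Abadi et al. (2016) computes tighter numerical bounds:

```python
import math

def step_log_moment(lam, q, sigma):
    """Leading-order per-step log-moment bound for the subsampled Gaussian
    mechanism; the higher-order O(q^3 lam^3 / sigma^3) term is dropped here."""
    return q * q * lam * (lam + 1) / ((1.0 - q) * sigma * sigma)

def delta_after_T_steps(T, q, sigma, eps, max_lambda=64):
    """Composability: log-moments add over T iterations.
    Tail bound: delta = min over integer lambda of exp(alpha(lambda) - lambda * eps),
    restricted to lambda <= sigma^2 ln(1/(q sigma)), where the Lemma 2 bound applies."""
    assert q < 1.0 / (16.0 * sigma), "Lemma 2 requires q < 1/(16 sigma)"
    best = float("inf")
    for lam in range(1, max_lambda + 1):
        if lam > sigma * sigma * math.log(1.0 / (q * sigma)):
            break
        alpha_total = T * step_log_moment(lam, q, sigma)
        best = min(best, math.exp(alpha_total - lam * eps))
    return best

# e.g., 10,000 iterations with sampling probability 0.01 and noise scale sigma = 4
print(delta_after_T_steps(T=10_000, q=0.01, sigma=4.0, eps=2.0))
```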

2.3 Stochastic Gradient Markov Chain Monte Carlo

SG-MCMC is a family of scalable Bayesian sampling algorithms, developed recently to generate approximate samples from a posterior distribution p(θ|D), with θ a model parameter vector. SG-MCMC mitigates the slow mixing and non-scalability issues encountered by traditional MCMC algorithms by using gradient information of the posterior distribution and minibatches of the data in each iteration. It is particularly suitable for large-scale Bayesian learning, and thus is becoming increasingly popular.

SG-MCMC algorithms are discretized numerical approximations of continuous-time Itô diffusions (Chen et al., 2015; Ma et al., 2015), whose stationary distributions are designed to coincide with the target posterior p(θ|D). Formally, an Itô diffusion is written as

    d X_t = F(X_t) dt + g(X_t) d W_t,   (1)

where t is the time index; X_t ∈ R^p represents the full set of variables in the system, and is typically an augmentation of the model parameters θ (thus p ≥ r); and W_t is p-dimensional Brownian motion. The functions F : R^p → R^p and g : R^p → R^{p×p} are assumed to satisfy the Lipschitz continuity condition (Ghosh, 2011).

Based on the Itô diffusion, SG-MCMC algorithms rely on three components for scalable inference: (i) define appropriate functions F and g in (1) so that the (marginal) stationary distribution coincides with the target posterior distribution p(θ|D); (ii) replace F or g with unbiased stochastic approximations to reduce the computational complexity, e.g., approximating F with a random subset of the data points instead of the full data; and (iii) solve the generally intractable continuous-time Itô diffusion with a numerical method, which typically introduces estimation errors that are controllable.

Stochastic gradient Langevin dynamics (SGLD) defines X_t = θ_t, F(X_t) = −∇_θ U(θ), and g(X_t) = √2 I_r, where U(θ) ≜ −Σ_{i=1}^N log p(x_i | θ) − log p(θ) denotes the unnormalized negative log-posterior and p(θ) is the prior distribution of θ. The stochastic gradient Hamiltonian Monte Carlo (SGHMC) method (Chen et al., 2014) is based on second-order Langevin dynamics, which defines X_t = (θ_t, q_t), F(X_t) = (q_t, −B q_t − ∇_θ U(θ)), and g(X_t) = √(2B) diag(0, I_r) for a scalar B > 0; q is an auxiliary variable known as the momentum (Chen et al., 2014; Ding et al., 2014). Similar formulae can be defined for other SG-MCMC algorithms, such as the stochastic gradient thermostat (Ding et al., 2014), and variants with Riemannian information geometry (Patterson and Teh, 2013; Ma et al., 2015; Li et al., 2016).

To make the algorithms scalable in a large-data setting, i.e., when N is large, an unbiased estimate of ∇_θ U(θ) is computed on a random subset of the full data, denoted ∇_θ Ũ(θ) and defined as

    ∇_θ Ũ(θ) ≜ −(N/n) Σ_{i∈S} ∇_θ log p(x_i | θ) − ∇_θ log p(θ),

where S is a random minibatch of the data with size n (typically n ≪ N).

We typically adopt the popular Euler method to solve the continuous-time diffusion with a discretization of step size h_t. The Euler method is a first-order numerical integrator, and thus induces an approximation error (Chen et al., 2015). Algorithm 1 illustrates the SGLD algorithm with the Euler integrator for differential privacy; it is almost the same as the original SGLD, except for the gradient norm clipping in Step 4. The norm-clipping step ensures that the computed gradients satisfy the Lipschitz condition, a common assumption on loss functions in the differential-privacy setting (Song et al., 2013; Bassily et al., 2014; Wang et al., 2015). The reasoning is intuitively clear: since differential privacy requires the output to be insensitive to changes in any single data point, it is crucial to bound the impact of a single data point on the target function. The Lipschitz condition is easily met by clipping the gradient norm, a common technique in gradient-based algorithms to prevent gradient explosion (Pascanu et al., 2013). The clipping is equivalent to using an adaptive step size, as in preconditioned SGLD (Li et al., 2016), and thus does not affect the convergence rate in terms of the estimation accuracy discussed in Section 3.2.

0:  Data of size , size of mini-batch , number of iterations , prior , privacy parameter , gradient norm bound . A decreasing/fixed-step-size sequence . Set .
1:  for  do
2:     Take a random sample with sampling probability .
3:     Calculate
4:     Clip norm:
5:     Sample each coordinate of iid from
6:     Update
7:     Return as a posterior sample (after a predefined burn-in period).
8:     Increment .
9:  end for
10:   Compute the overall privacy cost using the moments accountant method.
Algorithm 1 Stochastic Gradient Langevin Dynamics with Differential Privacy
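
The following is a minimal Python sketch of Algorithm 1 (ours, not the authors' implementation). It assumes common conventions: the per-example clipping rule g / max(1, ‖g‖₂/C) from Abadi et al. (2016) for Step 4, and the standard SGLD update θ ← θ + (h_t/2) ∇ log p(θ|D) + N(0, h_t I) for Steps 5-6; the callables grad_log_lik and grad_log_prior are hypothetical placeholders, and the step-size sequence must additionally satisfy the bounds of Theorems 3 and 4 for the DP guarantee to hold:

```python
import numpy as np

def dp_sgld(data, grad_log_lik, grad_log_prior, theta0, T, n, C, step_size):
    """Sketch of Algorithm 1 (DP-SGLD) under the conventions stated above.

    grad_log_lik(x, theta): per-example gradient of log p(x | theta)   [hypothetical]
    grad_log_prior(theta):  gradient of log p(theta)                   [hypothetical]
    step_size(t):           h_t, a decreasing (e.g. ~ t**(-1/3)) or fixed sequence
    C:                      gradient norm clipping bound (Step 4)
    """
    N = len(data)
    q = n / N                                    # mini-batch sampling probability
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for t in range(1, T + 1):
        idx = np.nonzero(np.random.rand(N) < q)[0]         # Step 2: sample with prob. q
        g_sum = np.zeros_like(theta)
        for i in idx:                                      # Step 3: per-example gradients
            g = grad_log_lik(data[i], theta)
            g_sum += g / max(1.0, np.linalg.norm(g) / C)   # Step 4: clip each gradient norm
        grad_log_post = g_sum / q + grad_log_prior(theta)  # unbiased log-posterior gradient
        h = step_size(t)
        noise = np.random.normal(0.0, np.sqrt(h), size=theta.shape)  # Step 5: Gaussian noise
        theta = theta + 0.5 * h * grad_log_post + noise              # Step 6: SGLD update
        samples.append(theta.copy())   # Step 7: keep as a sample (discard burn-in in practice)
    return samples
```

The overall privacy cost (Step 10) would then be computed with the moments accountant, e.g., as in the accounting sketch following Lemma 2.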

3 Privacy Analysis for Stochastic Gradient Langevin Dynamics

We first develop theory to prove that Algorithm 1 is (ε, δ)-DP under certain conditions. Our theory shows a significant improvement in the differential privacy obtained by SGLD over the most closely related work, Wang et al. (2015). To study the estimation accuracy (utility) of the algorithm, corresponding mean-square-error bounds are then proved under such differential-privacy settings.

3.1 Step size bounds for differentially-private SGLD

Previous work on SG-MCMC has shown that an appropriately chosen decreasing step-size sequence can be adopted for an SG-MCMC algorithm (Teh et al., 2016; Chen et al., 2015). For a sequence of the form h_t ∝ t^{−α}, the optimal value is α = 1/3 in order to obtain the optimal mean-square-error bound (defined in Section 3.2). Consequently, we first consider this rate in our analysis below, where the constant of the step size can be specified via the parameters of the DP setting, as shown in Theorem 3. The differential-privacy property under a fixed step size is discussed subsequently.

Theorem 3

If we let the step size decrease at a rate proportional to t^{−1/3}, there exist positive constants c₁ and c₂ such that, given the sampling probability q = n/N and the number of iterations T, Algorithm 1 satisfies (ε, δ)-DP as long as the step size satisfies three conditions (stated explicitly in Section A of the SM), the third of which gives an upper bound on the step size.

Proof  See Section A of the SM.  

Remark 1

In practice, the first condition is easy to satisfy as is often much larger than the step size. The second condition is also easy to satisfy with properly chosen and , and we will verify this condition in our experiments. In the rest of this section, we only focus on the third condition as an upper bound to the step size.

It is now clear that with the optimal decreasing step-size sequence (in terms of the MSE defined in Section 3.2), Algorithm 1 maintains (ε, δ)-DP. There are other variants of SG-MCMC that use fixed step sizes. We show in Theorem 4 that in this case the algorithm still satisfies (ε, δ)-DP.

Theorem 4

Under the same setting as Theorem 3, but using a fixed step size, Algorithm 1 satisfies (ε, δ)-DP whenever the step size is below an upper bound involving another constant (given explicitly in Section D of the SM).

Proof  See Section D of the SM.  
In (Wang et al., 2015), the authors proved that the SGLD method is (ε, δ)-DP if the step size is small enough to satisfy a certain upper bound. This bound is relatively small compared to ours (explained below), and thus is not practical in real applications. To address this problem, Wang et al. (2015) proposed the Hybrid Posterior Sampling algorithm, which uses the One Posterior Sample (OPS) estimator for the "burn-in" period, followed by SGLD with a small step size to guarantee the differential-privacy property. We note that for complicated models, especially those with non-convex posterior landscapes, such an upper bound on the step size still causes practical problems, even with OPS. One issue is that the Markov chain will mix very slowly with a small step size, leading to highly correlated samples.

By contrast, our new upper bound for the step size in Theorem 3 improves the bound in Wang et al. (2015) by a substantial factor at the first iteration. Although the constant in our bound is empirically small (see the calculation method in Section C of the SM), the overall bound is still larger.

To provide intuition on how our bound compares with that in Wang et al. (2015), consider the MNIST dataset with N = 60,000 training examples. With the remaining parameters set to typical values, our upper bound is consistent with the default step size used when training MNIST (Li et al., 2016). More importantly, our theory indicates that using SGLD with the default step size achieves (ε, δ)-DP with a small privacy loss on the MNIST dataset. As a comparison, Wang et al. (2015) give a much smaller upper bound, which is too small to be practically used. A more detailed comparison of these two bounds is given in Section 4.1, in the context of the experimental results.

Finally, note that as in (Wang et al., 2015), our analysis can be easily extended to other SG-MCMC methods such as SGHMC (Chen et al., 2014) and SGNHT (Ding et al., 2014). We do not specify the results here for conciseness.
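
As an illustration of this extension (our sketch, with the same caveats as for the DP-SGLD sketch above), one SGHMC iteration in the common "SGD with momentum" parameterization of Chen et al. (2014) would combine the same norm-clipped stochastic gradient with momentum and injected noise; the exact DP step-size conditions for SGHMC are not reproduced here:

```python
import numpy as np

def dp_sghmc_step(theta, v, stoch_grad_neg_log_post, h, alpha=0.1, beta_hat=0.0, C=1.0):
    """One SGHMC update (Chen et al., 2014), with gradient norm clipping as in Algorithm 1:
         v     <- v - h * grad - alpha * v + N(0, 2 (alpha - beta_hat) h)
         theta <- theta + v
    stoch_grad_neg_log_post(theta) returns a minibatch gradient of the negative
    log-posterior (a hypothetical callable)."""
    g = stoch_grad_neg_log_post(theta)
    g = g / max(1.0, np.linalg.norm(g) / C)             # clip the gradient norm
    noise = np.random.normal(0.0, np.sqrt(2.0 * (alpha - beta_hat) * h), size=v.shape)
    v = v - h * g - alpha * v + noise
    theta = theta + v
    return theta, v
```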

3.2 Utility Bounds

The above theory indicates that with a smaller step size an SG-MCMC algorithm preserves more privacy, with the privacy loss vanishing in the limit of a zero step size. On the other hand, when the step size approaches zero, we obtain (theoretically) exact samples from the posterior distribution. In this case, the implication for privacy becomes transparent, because changing one data point typically does not affect prediction under the posterior distribution of a Bayesian model. However, as noted above, this does not mean we can choose arbitrarily small step sizes, because doing so would hinder exploration of the parameter space, leading to slow mixing.

To measure the mixing and utility properties, we investigate estimation-accuracy bounds under the differential-privacy setting. Following standard settings for SG-MCMC (Chen et al., 2015; Vollmer et al., 2016), we use the mean square error (MSE) under a target posterior distribution to measure the estimation accuracy of a Bayesian model. Specifically, our utility goal is to evaluate the posterior average of a test function φ(θ), defined as Ā ≜ ∫ φ(θ) p(θ|D) dθ. The posterior average is typically infeasible to compute, so we use the sample average Â ≜ (1/T) Σ_{t=1}^T φ(θ_t) to approximate Ā, where {θ_t} are the samples drawn by an SG-MCMC algorithm. The MSE we consider is defined as E(Â − Ā)².
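
As a small illustration of this utility measure (ours, under the simple unweighted-average convention above), the sample average of a test function and its squared error against a known posterior average can be computed as follows; in practice the samples would come from DP-SGLD rather than the toy Gaussian used here:

```python
import numpy as np

def posterior_average_estimate(samples, phi):
    """Sample-average approximation of the posterior average of a test function phi."""
    return float(np.mean([phi(theta) for theta in samples]))

# Toy check: for a 1-D Gaussian posterior N(mu, s^2) and phi(theta) = theta, the true
# posterior average is mu, so the squared error of the estimate is easy to inspect.
mu, s = 1.5, 0.7
samples = np.random.normal(mu, s, size=5000)
sq_err = (posterior_average_estimate(samples, lambda th: th) - mu) ** 2
print(sq_err)   # one realization of the squared error; the MSE averages this over runs
```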

Our result is summarized in Proposition 5, an extension of Theorem 3 in (Chen et al., 2017) to differentially-private SG-MCMC with decreasing step sizes. In this section we impose the same assumptions on an SG-MCMC algorithm as in previous work (Vollmer et al., 2016; Chen et al., 2015), detailed in Section B of the SM. We assume that both the corresponding Itô diffusion (in terms of its coefficients) and the numerical method of the SG-MCMC algorithm are well behaved.

Proposition 5

Under Assumption 1 in the SM, the MSE of SGLD with a decreasing step size sequence as in Theorem 3 is bounded, for a constant independent of and a constant depending on and , as

where

The bound in Proposition 5 indicates how the MSE decreases to zero w.r.t. the number of iterations and the other parameters. It is consistent with standard SG-MCMC, leading to a similar convergence rate. Interestingly, we can also derive the optimal bounds w.r.t. the privacy parameters; for example, the optimal value of the noise-related parameter when fixing the others can be derived, giving the optimal MSE setting. Different from the bound for standard SG-MCMC (Chen et al., 2015), when considering an (ε, δ)-DP setting, the MSE bound induces an additional asymptotic bias term whenever the privacy-related quantities are nonzero.

We also wish to study the MSE in the fixed-step-size case. Consider a general situation for which Chen et al. (2017) proved the following MSE bound for a fixed step size, rephrased in Lemma 6.

Lemma 6

With the same assumptions as in Proposition 5, the MSE of SGLD is bounded as follows (with a slight abuse of notation, the constant below might differ from that in Proposition 5):

Furthermore, the optimal MSE w.r.t. the step size is bounded by

with the optimal step size being .

From Lemma 6, the optimal step size is of a lower order than both that of our differential-privacy-based algorithm and that of the algorithm in Wang et al. (2015). This means that, for a large enough number of iterations, neither our method nor that of Wang et al. (2015) may run at the optimal step-size setting. A remedy is to increase the step size, at the cost of increased privacy loss. Because, for the same privacy loss, our step sizes are typically larger than those in Wang et al. (2015), our algorithm is able to obtain both higher approximation accuracy and differential privacy. Specifically, to guarantee the differential-privacy property stated in Theorem 4, we substitute the corresponding step-size bound into the MSE formula in Lemma 6; the resulting MSE bound is smaller than that for the method in Wang et al. (2015).

4 Experiments

Figure 1: Upper bounds for the fixed-step-size and decreasing-step-size cases with different privacy losses ε, as well as the upper bound from Wang et al. (2015).

We test the proposed differentially-private SG-MCMC algorithms by considering several tasks, including logistic regression and deep neural networks, and compare with related Bayesian and optimization methods in terms of both algorithm privacy and utility.

4.1 Upper Bound

We first compare our upper bound for the step size from Section 3.1 with the bound of Wang et al. (2015). Note that this upper bound denotes the largest step size allowed to preserve (ε, δ)-DP.

In this simulated experiment, we fix the remaining parameters and vary ε across different differential-privacy settings, for both our bounds (fixed and decreasing step-size cases) and the bound in Wang et al. (2015), with results in Figure 1. It is clear that our bounds allow much larger step sizes than Wang et al. (2015) at the same privacy loss, and thus appear much more practical in real applications.

In the rest of our experiments, we focus on the decreasing-step-size SGLD, as it gives the nicer MSE bound shown in Proposition 5. For the parameters in our bounds, default settings are typically used. In this experiment, we investigate the sensitivity of the proposed upper bound w.r.t. the data size and one other parameter, fixing the rest. The results are plotted in Figure 2, from which we observe that the proposed step-size bound is stable in terms of the data size, and approximately proportional to the second parameter varied in Figure 2. Such a conclusion is not a direct implication of the upper-bound formula in Theorem 3, as the constant in the bound also depends on these quantities.

Figure 2: Step-size upper bounds when varying the data size with the second parameter fixed (top), and when varying the second parameter with the data size fixed (bottom); the remaining parameters are held fixed in both simulations.

The result also suggests a practical rule for choosing step sizes via our upper bound. When using such step sizes, we observe that the standard SGLD automatically preserves (ε, δ)-DP even when ε is small.

4.2 Logistic Regression

In the remaining experiments, we compare our proposed differentially-private SGLD (DP-SGLD) with other methods. The Private Aggregation of Teacher Ensembles (PATE) model proposed in Papernot et al. (2016) is the state-of-the-art framework for differentially-private training of machine learning models. PATE takes advantage of the moments accountant method for privacy-loss calculation and uses a knowledge-transfer technique via semi-supervised learning to build a teacher-student model. The framework first trains multiple teachers on private data; these teachers then release aggregated knowledge in a differentially private manner, such as label assignments on several public data points, to multiple students. The students then use the released knowledge to train their models in a supervised-learning setting, or they can incorporate unlabeled data in a semi-supervised setting. The semi-supervised setting generally works for many machine learning models, yet it requires a large amount of non-private unlabeled data for training, which is not always available in practice. Thus, we do not consider this setting in our experiments.

We compare DP-SGLD with PATE and the Hybrid Posterior Sampling algorithm on the Adult dataset from the UCI Machine Learning Repository (Lichman, 2013), for a binary classification task with Bayesian logistic regression under the DP setting. We fix δ and compare the classification accuracy while varying ε. We repeat each experiment ten times, and report the averages and standard deviations, as illustrated in Figure 3.

Figure 3: Test accuracies on a classification task based on Bayesian logistic regression for One-Posterior Sample (OPS), Hybrid Posterior Sampling based on SGLD, and our proposed DP-SGLD, for different choices of the privacy loss ε. The non-private baseline is obtained with standard SGLD.

Our proposed DP-SGLD achieves higher accuracy than the other methods and is close to the baseline that uses plain SGLD. In fact, for sufficiently large ε, our DP-SGLD reduces to the standard SGLD, and therefore has the same test accuracy as the baseline. Note that PATE obtains the worst performance in this experiment. This might be because, when ε is small and no unlabeled data are available, the students in this framework are restricted to supervised learning with an extremely small amount of training data.

4.3 Deep Neural Networks

We test our methods for training deep neural networks under differentially-private settings. We compare with PATE and the DP-SGD method proposed in Abadi et al. (2016). Since the performance of PATE depends highly on the availability of public unlabeled data, we allow it to access a certain amount of unlabeled data, even though this is not a fair comparison to our method. We do not include results for Hybrid Posterior Sampling, as it did not converge in these experiments due to its small step sizes.

We use two datasets: (i) the standard MNIST dataset for handwritten-digit recognition, consisting of 60,000 training examples and 10,000 test examples (LeCun and Cortes, 2010); and (ii) the Street View House Numbers (SVHN) dataset, which contains 600,000 RGB images of printed digits obtained from pictures of house numbers in Street View (Netzer et al.). We use the same network structure as the PATE model: two stacked convolutional layers and one fully connected layer with ReLUs for MNIST, with two additional convolutional layers for SVHN. We use standard Gaussian priors for the weights of the DNN. For the MNIST dataset, the standard SGLD with its default step size satisfies (ε, δ)-DP with small ε and δ; the same holds for the SVHN dataset with its corresponding step size. In both settings, the remaining parameters are set to satisfy the second condition in Theorem 3. In addition, we also ran a differentially-private version of SGHMC for comparison. The test accuracies are shown in Table 1. SGLD and SGHMC obtain better test accuracy than the state-of-the-art differentially-private methods, remarkably with much less privacy loss. They even outperform the non-private baseline trained with Adam, due to the advantages of Bayesian modeling.

Dataset   Method        Accuracy
MNIST     Non-Private   99.23%
MNIST     PATE(100)     98.00%
MNIST     PATE(1000)    98.10%
MNIST     DP-SGLD       99.12%
MNIST     DP-SGHMC      %
SVHN      Non-Private   92.80%
SVHN      PATE(100)     82.76%
SVHN      PATE(1000)    90.66%
SVHN      DP-SGLD       92.14%
SVHN      DP-SGHMC      %
Table 1: Test accuracies on MNIST and SVHN for different methods.

5 Related Work

There have been several papers considering differentially-private stochastic-gradient-based methods. For example, Song et al. (2013) proposed a differentially-private stochastic gradient descent (SGD) algorithm, which requires a large amount of noise when mini-batches are randomly sampled. The theoretical performance of noisy SGD was studied in Bassily et al. (2014) for the special case of convex loss functions; for non-convex loss functions, a common setting for many machine learning models, there is no theoretical guarantee on performance. In Abadi et al. (2016), another differentially-private SGD was proposed, requiring a smaller variance for the added Gaussian noise, yet it still did not provide theoretical guarantees on utility. On the other hand, standard SG-MCMC has been shown to converge to the target posterior distribution in theory. In this paper, we discuss the effect of our differential-privacy modification on the performance of SG-MCMC, which provides theoretical guarantees in the form of bounds on the mean square error of the posterior estimate.

Bayesian modeling provides an effective framework for privacy-preserving data analysis, as posterior sampling naturally introduces noise into the system, leading to differential privacy (Dimitrakakis et al., 2014; Wang et al., 2015). Foulds et al. (2016) studied the privacy of sampling from exponential families with a Gibbs sampler. Wang et al. (2015) provided a comprehensive analysis of the differential privacy of SG-MCMC methods. In comparison, we have derived a tighter bound on the amount of noise required to guarantee a given level of differential privacy, yielding a more practical upper bound for the step size.

6 Conclusion

Previous work on differential privacy has modified existing algorithms, or has built complicated frameworks that sacrifice a certain amount of performance for privacy; in some cases, the privacy loss may still be relatively large. This paper has presented a privacy analysis for SG-MCMC, a standard class of methods for scalable posterior sampling in Bayesian models. We have significantly relaxed the conditions under which SG-MCMC methods are differentially private, compared to previous work. Our results indicate that standard SG-MCMC methods have strong privacy guarantees for large-scale problems. In addition, we have provided a theoretical analysis of the estimation performance of differentially-private SG-MCMC methods. Our results show that, even under a strong privacy constraint, differentially-private SG-MCMC still provides a guarantee on model performance. Our experiments show that, with our analysis, standard SG-MCMC methods achieve both state-of-the-art utility and strong privacy compared with related methods on multiple tasks, such as logistic regression and deep neural networks.

Our results also shed light on how SG-MCMC methods can help improve the generalization of trained models, as it is well acknowledged that there is a connection between differential privacy and generalization (see, e.g., "Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle"). For example, Saatchi and Wilson (2017) proposed a Bayesian GAN model trained with SGHMC, which shows promising performance in avoiding the mode-collapse problem in GAN training. According to Arora et al. (2017), the mode-collapse problem is potentially caused by weak generalization. Therefore, it is likely that the Bayesian GAN mitigates mode collapse because SGHMC naturally leads to better generalization.

References

  • Abadi et al. (2016) Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318. ACM, 2016.
  • Arora et al. (2017) S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In ICML, 2017.
  • Bassily et al. (2014) Raef Bassily, Adam Smith, and Abhradeep Thakurta. Differentially private empirical risk minimization: Efficient algorithms and tight error bounds. arXiv preprint arXiv:1405.7085, 2014.
  • Chen et al. (2015) C. Chen, N. Ding, and L. Carin. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In NIPS, 2015.
  • Chen et al. (2017) C. Chen, W. Wang, Y. Zhang, Q. Su, and L. Carin. A convergence analysis for a class of practical variance-reduction stochastic gradient mcmc. (arXiv:1709.01180), 2017. URL https://arxiv.org/abs/1709.01180.
  • Chen et al. (2014) Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1683–1691, Bejing, China, 22–24 Jun 2014. PMLR.
  • Dimitrakakis et al. (2014) Christos Dimitrakakis, Blaine Nelson, Aikaterini Mitrokotsa, and Benjamin IP Rubinstein. Robust and private bayesian inference. In International Conference on Algorithmic Learning Theory, pages 291–305. Springer, 2014.
  • Ding et al. (2014) N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian sampling using stochastic gradient thermostats. In NIPS, 2014.
  • Dwork (2008) Cynthia Dwork. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation, pages 1–19. Springer, 2008.
  • Dwork et al. (2006) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. Springer, 2006.
  • Dwork et al. (2014) Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
  • Foulds et al. (2016) James Foulds, Joseph Geumlek, Max Welling, and Kamalika Chaudhuri. On the theory and practice of privacy-preserving bayesian data analysis. arXiv preprint arXiv:1603.07294, 2016.
  • Ghosh (2011) A. P. Ghosh. Backward and Forward Equations for Diffusion Processes. Wiley Encyclopedia of Operations Research and Management Science, 2011.
  • LeCun and Cortes (2010) Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.
  • Li et al. (2016) C. Li, C. Chen, D. Carlson, and L. Carin. Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In AAAI, 2016.
  • Lichman (2013) M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
  • Ma et al. (2015) Y. A. Ma, T. Chen, and E. B. Fox. A complete recipe for stochastic gradient MCMC. In NIPS, 2015.
  • Mattingly et al. (2010) J. C. Mattingly, A. M. Stuart, and M. V. Tretyakov. Construction of numerical time-average and stationary measures via Poisson equations. SIAM J. NUMER. ANAL., 48(2):552–577, 2010.
  • (19) Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning.
  • Papernot et al. (2016) Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016.
  • Pascanu et al. (2013) Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318, 2013.
  • Patterson and Teh (2013) S. Patterson and Y. W. Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In NIPS, 2013.
  • Saatchi and Wilson (2017) Y. Saatchi and A. G. Wilson. Bayesian GAN. In NIPS, 2017.
  • Shokri et al. (2017) Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 3–18. IEEE, 2017.
  • Song et al. (2013) Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE, pages 245–248. IEEE, 2013.
  • Teh et al. (2016) Y. W. Teh, A. H. Thiery, and S. J. Vollmer. Consistency and fluctuations for stochastic gradient Langevin dynamics. JMLR, (17):1–33, 2016.
  • Vollmer et al. (2016) S. J. Vollmer, K. C. Zygalakis, and Y. W. Teh. Exploration of the (Non-)Asymptotic bias and variance of stochastic gradient Langevin dynamics. JMLR, 2016.
  • Wang et al. (2015) Yu-Xiang Wang, Stephen Fienberg, and Alex Smola. Privacy for free: Posterior sampling and stochastic gradient monte carlo. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2493–2502, 2015.

Appendix A Proof of Theorem 3

We first prove that Algorithm 1 is (ε, δ)-DP if we change the variance of the injected Gaussian noise to an appropriately scaled value involving a constant.

It is easy to see that SGLD in Algorithm 1 consists of a sequence of updates for the model parameter θ. Each update corresponds to a random mechanism as defined in Theorem 1, so we first derive the moments accountant for each iteration. In each iteration, the only data access is in Step 6. Therefore, in the following, we focus only on the interaction between the data-dependent gradient term and the injected noise. (In this paper, we only consider priors that do not depend on the data, as is common in the Bayesian setting.)

To simplify the notation, we introduce a change of variables so that the variance of the injected noise can be written in a convenient form. (Later we will show that the optimal decreasing rate for the step size is proportional to t^{−1/3}.) Then we have:

We can then apply Lemma 2 to obtain an upper bound on the log-moment of the privacy loss random variable for each iteration, as long as the conditions in Lemma 2 are satisfied, i.e., λ ≤ σ² ln(1/(qσ)) and the mini-batch sampling probability q < 1/(16σ).

Using the composability property of the moments accountant in Theorem 1, over T iterations the log-moment of the privacy loss random variable is bounded by

According to the tail-bound property in Theorem 1, δ is the minimum of exp(α_M(λ) − λε) w.r.t. λ. However, since λ is an integer, a closed form for this minimum is generally intractable. Nevertheless, to guarantee (ε, δ)-DP, it suffices that

(2)

We also require that our choice of parameters satisfies the conditions of Lemma 2. Consequently, we have

(3)

We can then use a technique similar to that of Abadi et al. [2016] (further explained in Section C of the SM) to find explicit constants c₁ and c₂ such that conditions (2) and (3) are satisfied; plugging these in proves that Algorithm 1 is (ε, δ)-DP under the corresponding step-size condition.

For the second step of the proof, we show that Algorithm 1 is (ε, δ)-DP when the original noise variance of SGLD is used. This is straightforward, because the original variance is at least as large as the one required above whenever the step size is positive, and adding more noise only decreases the privacy loss. It therefore suffices to use the noise variance of the original Algorithm 1, a variant of the standard SGLD algorithm with decreasing step size.

In addition, Lemma 2 requires λ ≤ σ² ln(1/(qσ)) and q < 1/(16σ); translated into the parameters of Algorithm 1, these give the remaining conditions of Theorem 3.

Appendix B Assumptions on SG-MCMC Algorithms

For the diffusion in (1), we first define the generator L as

    L f(X) ≜ ( F(X) · ∇_X + (1/2) ( g(X) g(X)^⊤ ) : ∇_X ∇_X^⊤ ) f(X),   (4)

where f is a measurable function, ∇_X denotes the derivative w.r.t. X, ⊤ denotes the transpose, a · b ≜ a^⊤ b for two vectors a and b, and A : B ≜ tr(A^⊤ B) for two matrices A and B. Under certain assumptions, there exists a function ψ such that the following Poisson equation is satisfied Mattingly et al. [2010]:

    L ψ(X) = φ(X) − φ̄,   (5)

where φ̄ denotes the model average of the test function φ under the equilibrium distribution of the diffusion (1), which is assumed to coincide with the posterior distribution p(θ|D). The following assumptions are made for SG-MCMC algorithms [Vollmer et al., 2016, Chen et al., 2015].

Assumption 1

The diffusion (1) is ergodic. Furthermore, the solution of (5) exists, and the solution functional ψ satisfies the following properties:

  • ψ and its derivatives up to 3rd order, D^k ψ for k = 1, 2, 3, are bounded by a function V, i.e., ‖D^k ψ‖ ≤ C_k V^{p_k} for constants C_k, p_k > 0, k = 0, 1, 2, 3.

  • The expectation of V on the sample path {θ_t} is bounded: sup_t E V^p(θ_t) < ∞.

  • V is smooth, such that sup_{s∈(0,1)} V^p( s X + (1−s) Y ) ≤ C ( V^p(X) + V^p(Y) ) for all X, Y and p ≤ 2 max_k p_k, for some constant C > 0.

Appendix C Calculating Constants in Moment Accountant Methods

For calculating the constants c₁ and c₂ used in the moments accountant method, we refer to https://github.com/tensorflow/models (released under the Apache License, Version 2.0), which contains an implementation of the moments accountant. A comprehensive description of the implementation can be found in Abadi et al. [2016].

This code allows one to calculate the corresponding δ given ε by enumerating all integers below a certain threshold as candidate values of λ and selecting the one that minimizes the resulting bound. Once λ is determined, it is easy to calculate c₁ and c₂ for evaluating the upper bound on the step size.

Appendix D Proof of Theorem 4

Claim: Under the same setting as Theorem 3, but using a fixed step size, Algorithm 1 satisfies (ε, δ)-DP whenever the step size is below an upper bound involving another constant.

Proof  The only change in the proof for the fixed-step-size case is that the expression for the variance of the Gaussian noise changes accordingly. We again apply Theorem 1 and Lemma 2 to find the conditions required for (ε, δ)-DP:

Using the method described in the previous section, one can find constants such that the above conditions are satisfied. Plugging in the fixed step size and comparing with the required bound, it is easy to see that Algorithm 1 satisfies (ε, δ)-DP under the stated step-size condition.  

Appendix E Proof of Proposition 5

Claim: Under Assumption 1 in Section B, the MSE of SGLD with a decreasing step size sequence as in Theorem 3 is bounded, for a constant independent of and a constant depending on and , as

where

Proof 

First, we adopt the MSE formula for decreasing-step-size SG-MCMC with the Euler integrator (a first-order integrator) from Theorem 5 of Chen et al. [2015], which is written as

(6)

where , and is a term related to , which, according to Theorem 3 of Chen et al. [2017], can be simplified as

(7)

Substituting (7) into (6), we have

(8)

Now, assuming the decreasing step-size sequence of Theorem 3, we can rewrite the corresponding sums. Plugging this into the bound in (8), we obtain: