On the Differential Privacy of Bayesian Inference

12/22/2015 · Zuhe Zhang et al. · The University of Melbourne

We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on probabilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds in the specific but broadly applicable case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian naïve Bayes and Bayesian linear regression illustrate the application of our mechanisms.


Code repository: PriBayesian-NB (experiments in AAAI-16).

1 Introduction

We consider the problem faced by a statistician who analyses data and communicates her findings to a third party. While she wants the third party to learn as much as possible from the data, she does not want it to learn anything about any individual datum. This is, for example, the case where the third party is an insurance agency, the data are medical records, and the statistician wants to convey the efficacy of drugs to the agency without revealing the specific illnesses of individuals in the population. Such requirements of privacy are of growing interest in the learning Chaudhuri and Hsu (2012); Duchi, Jordan, and Wainwright (2013), theoretical computer science Dwork and Smith (2009); McSherry and Talwar (2007) and databases Barak et al. (2007); Zhang et al. (2014) communities, due to the impact of real-world data analytics on individual privacy.

In our setting, we assume that the statistician is using Bayesian inference to draw conclusions from observations of a system of random variables, by updating a prior distribution on parameters (i.e., latent variables) to a posterior. Our goal is to release an approximation to the posterior that preserves privacy. We adopt the formalism of differential privacy to characterise how easy it is for the third party to discover facts about the individual data from the aggregate posterior. Releasing the posterior permits external parties to make further inferences at will. For example, a third-party pharmaceutical company might use the released posterior as a prior on the efficacy of drugs, and update it with their own patient data. Or they could form a predictive posterior for classification or regression, all while preserving differential privacy of the original data.

Our focus in this paper is Bayesian inference in probabilistic graphical models (PGMs), which are popular as a tool for modelling conditional independence assumptions. Just as independence structure affects the statistical and computational efficiency of non-private inference, a central tenet of this paper is that it should also impact privacy. Our mechanisms and theoretical bounds are the first to establish such a link between PGM graph structure and privacy.

Main Contributions.

We develop the first mechanisms for Bayesian inference on the flexible PGM framework (cf. Table 1). We propose two posterior perturbation mechanisms for networks with likelihood functions from exponential families and conjugate priors, which add Laplace noise Dwork et al. (2006) to the posterior parameters (or their Fourier coefficients) to preserve privacy. The latter achieves stealth through consistent posterior updates. For general Bayesian networks, posteriors may be non-parametric. In this case, we explore a mechanism Dimitrakakis et al. (2014) that samples from the posterior to answer queries; no additional noise is injected. We complement our study with a maximum-a-posteriori estimator that leverages the exponential mechanism McSherry and Talwar (2007). Our utility and privacy bounds connect privacy and graph/dependency structure, and are complemented by illustrative experiments with Bayesian naïve Bayes and linear regression.

Related Work.

Many individual learning algorithms have been adapted to maintain differential privacy, including regularised logistic regression Chaudhuri and Monteleoni (2008), the SVM Rubinstein et al. (2012); Chaudhuri, Monteleoni, and Sarwate (2011), PCA Chaudhuri, Sarwate, and Sinha (2012), the functional mechanism Zhang et al. (2012) and trees Jagannathan, Pillaipakkamnatt, and Wright (2009).

Probabilistic graphical models have been used to preserve privacy. Zhang et al. (2014) learned a graphical model from data in order to generate surrogate data for release, while Williams and McSherry (2010) fit a model to the response of private mechanisms to clean up output and improve accuracy. Xiao and Xiong (2012) similarly used Bayesian credible intervals to increase the utility of query responses.

Little attention has been paid to private inference in the Bayesian setting. We seek to adapt Bayesian inference to preserve differential privacy when releasing posteriors. Dimitrakakis et al. (2014; 2015) introduce a differentially-private mechanism for Bayesian inference based on posterior sampling—a mechanism on which we build—while Zheng (2015) considers further refinements. Wang, Fienberg, and Smola (2015) explore Monte Carlo approaches to Bayesian inference using the same mechanism, while Mir (2012) was the first to establish differential privacy of the Gibbs estimator McSherry and Talwar (2007) by minimising risk bounds.

This paper is the first to develop mechanisms for differential privacy under the general framework of Bayesian inference on multiple, dependent r.v.’s. Our mechanisms consider graph structure and include a purely Bayesian approach that only places conditions on the prior. We show how the (stochastic) Lipschitz assumptions of Dimitrakakis et al. (2014) lift to graphs of r.v.’s, and bound KL-divergence when releasing an empirical posterior based on a modified prior. While Chaudhuri, Monteleoni, and Sarwate (2011) achieve privacy in regularised Empirical Risk Minimisation through objective randomisation, we do so through conditions on priors. We develop an alternate approach that uses the additive-noise mechanism of Dwork et al. (2006) to perturb posterior parameterisations; and we apply techniques due to Barak et al. (2007), who released marginal tables that maintain consistency in addition to privacy, by adding Laplace noise in the Fourier domain. Our motivation is novel: we wish to guarantee privacy against omniscient attackers and stealth against unsuspecting third parties.

Mechanism | DBN only | Privacy | Utility type | Utility bound
Laplace | | | closeness of posterior |
Fourier | | | closeness of posterior parameters |
Sampler | | (if Lipschitz; or stochastic Lipschitz) | expected utility functional w.r.t. posterior, Dimitrakakis et al. (2015) |
MAP | | | closeness of MAP |
Table 1: Summary of the privacy/utility guarantees for this paper's mechanisms. See below for parameter definitions.

2 Problem Setting

Consider a Bayesian statistician estimating the parameters of some family of distributions on a system of r.v.'s indexed by an index set, with observations drawn from the corresponding sample spaces. She has a prior distribution (precisely, a probability measure on a σ-algebra over the parameter space) reflecting her prior belief, which she updates on an observation to obtain a posterior via Bayes' rule. Posterior updates are iterated over an i.i.d. dataset.

The statistician's goal is to communicate her posterior distribution to a third party, while limiting the information revealed about the original data. From the point of view of the data provider, the statistician is a trusted party (cryptographic tools for an untrusted analyst do not prevent information leakage to the third party; cf. e.g., Pagnin et al. 2014). However, she may still inadvertently reveal information. We assume that the third party is computationally unbounded, and has knowledge of the prior and the family of distributions. To guarantee that the third party can gain little additional information about the data from their communication, the statistician uses Bayesian inference to learn from the data, and a differentially-private posterior to ensure that disclosure is carefully controlled.

2.1 Probabilistic Graphical Models

Our main results focus on PGMs, which model conditional independence assumptions via the joint factorisation

$$\Pr(X_1,\dots,X_n) \;=\; \prod_{i=1}^{n} \Pr\!\left(X_i \mid X_{\pi_i}\right),$$

where $X_{\pi_i}$ are the parents of the $i$-th variable in a Bayesian network, a directed acyclic graph with r.v.'s as nodes.

Example 1.

For concreteness, we illustrate some of our mechanisms on systems of Bernoulli r.v.'s $X_1,\dots,X_n$. In that case, we represent the conditional distribution of $X_i$ given its parents $X_{\pi_i} = x_{\pi_i}$ as Bernoulli with parameter $\theta_{i \mid x_{\pi_i}}$:

$$\Pr(X_i = 1 \mid X_{\pi_i} = x_{\pi_i}) \;=\; \theta_{i \mid x_{\pi_i}}.$$

The choice of conjugate prior has Beta marginals with parameters $(\alpha_{i \mid x_{\pi_i}}, \beta_{i \mid x_{\pi_i}})$, so that

$$\theta_{i \mid x_{\pi_i}} \;\sim\; \mathrm{Beta}\!\left(\alpha_{i \mid x_{\pi_i}}, \beta_{i \mid x_{\pi_i}}\right).$$

Given an observation $x$, the updated posterior Beta parameters are $\alpha_{i \mid x_{\pi_i}} + 1$ if $x_i = 1$ and $\beta_{i \mid x_{\pi_i}} + 1$ if $x_i = 0$.
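As a concrete illustration, the following is a minimal sketch of this conjugate update for a toy two-node network (the network structure, parameter container, and helper names are assumptions made for illustration only):

```python
import itertools

def init_params(parents, n):
    """Uniform Beta(1, 1) prior for every (node, parent-configuration) pair."""
    return {(i, pa): [1.0, 1.0]                       # [alpha, beta]
            for i in range(n)
            for pa in itertools.product([0, 1], repeat=len(parents[i]))}

def update(params, parents, x):
    """Exact (non-private) conjugate update on one full observation x."""
    for i, xi in enumerate(x):
        pa = tuple(x[j] for j in parents[i])          # observed parent configuration
        params[(i, pa)][0 if xi == 1 else 1] += 1.0   # alpha if x_i = 1, beta otherwise
    return params

# Toy network X0 -> X1 with two observations.
parents = {0: [], 1: [0]}
params = init_params(parents, n=2)
for x in [(1, 0), (1, 1)]:
    params = update(params, parents, x)
print(params)  # e.g. node 0 ends with alpha = 3 after observing X0 = 1 twice
```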

2.2 Differential Privacy

The statistician communicates with the third party by releasing information about the posterior distribution, via a randomised mechanism that maps a dataset to a response in some set. Dwork et al. (2006) characterise when such a mechanism is private:

Definition 1 (Differential Privacy).

A randomised mechanism $M$ is $(\epsilon,\delta)$-differentially private ($(\epsilon,\delta)$-DP) if, for any neighbouring datasets $D, D'$ and any measurable set $S$ of responses,

$$\Pr\left(M(D) \in S\right) \;\le\; e^{\epsilon}\,\Pr\left(M(D') \in S\right) + \delta,$$

where $D, D'$ are neighbouring if they differ in at most one record. When $\delta = 0$ we simply write $\epsilon$-DP.

This definition requires that neighbouring datasets induce similar response distributions. Consequently, it is impossible for the third party to reliably identify the true dataset from a bounded number of mechanism query responses. Differential privacy assumes no bounds on the adversary's computation or auxiliary knowledge.

3 Privacy by Posterior Perturbation

One approach to differential privacy is to use additive Laplace noise (Dwork et al., 2006). Previous work has focused on the addition of noise directly to the outputs of a non-private mechanism. We are the first to apply Laplace noise to the posterior parameter updates.

3.1 Laplace Mechanism on Posterior Updates

Under the setting of Example 1, we can add Laplace noise to the posterior parameters. Algorithm 1 releases perturbed parameter updates for the Beta posteriors, calculated simply by counting.

1:  Input: data; graph; privacy parameter
2:  calculate posterior update counts for every node and parent configuration
3:  perturb updates: add zero-mean Laplace noise to each count
4:  truncate the perturbed updates to the valid range
5:  output the truncated updates
Algorithm 1 Laplace Mechanism on Posterior Updates

The algorithm then adds zero-mean Laplace-distributed noise to the updates; this is its final dependence on the data. Finally, the perturbed updates are truncated below at zero, to rule out invalid Beta parameters, and are truncated above. The upper truncation yields an upper bound on the released updates and facilitates an application of McDiarmid's bounded-differences inequality (cf. Lemma A.1 in the Appendix) in our utility analysis. Note that this truncation only improves utility (relative to the utility pre-truncation), and does not affect privacy.
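A minimal sketch of Algorithm 1 in the Beta–Bernoulli setting of Example 1 follows; the noise scale and the upper truncation point are illustrative parameters here, and the paper's exact calibration of the scale to the privacy level is not reproduced:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def laplace_posterior_updates(data, parents, scale, upper):
    """Algorithm 1 sketch: noisy Beta posterior update counts, one
    (delta-alpha, delta-beta) pair per node and parent configuration."""
    n = data.shape[1]
    updates = {}
    for i in range(n):
        for pa_val in itertools.product([0, 1], repeat=len(parents[i])):
            if parents[i]:
                rows = data[np.all(data[:, parents[i]] == pa_val, axis=1)]
            else:
                rows = data
            d_alpha = float(np.sum(rows[:, i] == 1))   # count of X_i = 1
            d_beta = float(np.sum(rows[:, i] == 0))    # count of X_i = 0
            noisy = np.array([d_alpha, d_beta]) + rng.laplace(0.0, scale, size=2)
            updates[(i, pa_val)] = np.clip(noisy, 0.0, upper)  # truncate both ends
    return updates

# Toy network X0 -> X1 with 50 Boolean records (structure assumed for illustration).
data = rng.integers(0, 2, size=(50, 2))
parents = {0: [], 1: [0]}
print(laplace_posterior_updates(data, parents, scale=2.0, upper=50.0))
```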

Privacy.

To establish differential privacy of our mechanism, we must calculate a Lipschitz condition for the vector of posterior updates, called its global sensitivity Dwork et al. (2006).

Lemma 1.

For any neighbouring datasets, the corresponding vectors of posterior updates are within an $\ell_1$ distance of twice the number of nodes.

Proof.

By changing the observations of one datum, at most two of the counts associated with each node can change, each by 1. ∎

Corollary 1.

Algorithm 1 preserves ε-differential privacy.

Proof.

Based on Lemma 1, the intermediate perturbed updates preserve ε-differential privacy Dwork et al. (2006). Since the truncation depends only on the perturbed updates, the truncated output preserves the same privacy. ∎

Utility on Updates.

Before bounding the effect of the Laplace mechanism on the posterior, we demonstrate a utility bound on the posterior update counts.

Proposition 1.

With probability at least , for , the update counts computed by Algorithm 1 are close to the non-private counts

where

This bound states that, w.h.p., none of the updates is perturbed beyond the stated threshold. The same bound therefore holds for the deviation between the non-private counts and the released truncated counts.

Utility on Posterior.

We derive our main utility bounds for Algorithm 1 in terms of posteriors; proofs appear in the Appendix. We abuse notation and use the same symbol for the prior density; its meaning will be apparent from context. Given the priors, the posteriors on the observations are

The privacy-preserving posterior parametrised by the output of Algorithm 1 is

It is natural to measure utility by the KL-divergence between the joint product posteriors, which is the sum of the component-wise divergences, each of which has a known closed form. In our analysis the divergence is a random quantity, expressible as the sum of per-node terms, where the randomness is due to the added noise. We show that this random variable is small with high probability.
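For reference, a minimal sketch of that closed form for a single Beta component, using standard SciPy special functions (the example parameter values are illustrative only):

```python
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL( Beta(a1, b1) || Beta(a2, b2) ) in closed form."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

# Divergence between a non-private Beta posterior and a noised one; the joint
# divergence over the whole network is the sum of such per-component terms.
print(kl_beta(40.0, 12.0, 41.7, 10.4))
```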

Theorem 1.

Let . Assume that are independent and is a mapping from to : . Given , we have

where and .
Moreover, when , the bound on the expectation can be refined as follows.

The loss of utility measured by KL-divergence is no more than

with probability at least .

Note that the bound depends on the structure of the network: bounds are better for networks whose underlying graph has smaller average in-degree.

3.2 Laplace Mechanism in the Fourier Domain

Algorithm 1 follows Kerckhoffs's principle Kerckhoffs (1883) of "no security through obscurity": differential privacy defends against a mechanism-aware attacker. However, additional stealth may be required in certain circumstances. An oblivious observer would be tipped off to our privacy-preserving activities by independent perturbations, which are likely inconsistent with one another (e.g., noisy counts for two overlapping marginals will say different things about the variables they share). To achieve both differential privacy and stealth, we turn to Barak et al. (2007)'s study of consistent marginal contingency table release. This section presents a particularly natural application to Bayesian posterior updates.

Denote by $x$ the contingency table over the r.v.'s induced by the data: i.e., for each combination of variable values $\alpha$, the component or cell $x(\alpha)$ is a non-negative count of the observations with characteristic $\alpha$. Geometrically, $x$ is a real-valued function over the $n$-dimensional Boolean hypercube. The parameter deltas of our first mechanism then correspond to cells of marginal contingency tables, where the projection/marginalisation operator onto a subset of variables $j$ is defined as

$$(C^{j}x)(\beta) \;=\; \sum_{\alpha \in \{0,1\}^{n} :\, \alpha_{j} = \beta} x(\alpha), \qquad \beta \in \{0,1\}^{|j|}. \qquad (1)$$

We wish to release these statistics as before; however, we will represent them not in their Euclidean coordinates but in the Fourier basis, whose basis vectors are

$$f^{\alpha}_{\beta} \;=\; \frac{(-1)^{\langle \alpha, \beta \rangle}}{2^{n/2}}, \qquad \alpha, \beta \in \{0,1\}^{n}.$$

Due to this basis structure and the linearity of the projection operator, any marginal contingency table must lie in the span of a few projections of Fourier basis vectors Barak et al. (2007):

Theorem 2.

For any table $x$ and set of variables $j$, the marginal table on $j$ satisfies $C^{j}x \in \operatorname{span}\{C^{j}f^{\alpha} : \alpha \preceq j\}$, where $\alpha \preceq j$ means the support of $\alpha$ is contained in $j$.

This states that the marginal on $j$ lies in the span of only those (projected) basis vectors $f^{\alpha}$ with support contained in $j$. The number of values needed to update the marginal on $j$ is then $2^{|j|}$, potentially far fewer than the $2^{n}$ cells involved in (1). When releasing updates for two r.v.'s there may well be significant overlap between the required coefficient sets; we need release each coefficient only once, for every index in the downward closure of the variable neighbourhoods (each node together with its parents).
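A quick numerical check of this span property (a sketch only; the table size, the variable set j, and the helper functions are illustrative assumptions):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # number of Boolean variables
cells = list(itertools.product([0, 1], repeat=n))

# A random contingency table x over {0,1}^n, e.g. counts of 200 records.
x = {c: 0 for c in cells}
for _ in range(200):
    x[tuple(int(v) for v in rng.integers(0, 2, size=n))] += 1

def fourier_coeff(alpha):
    """<f^alpha, x> with f^alpha(beta) = (-1)^{<alpha, beta>} / 2^{n/2}."""
    return sum(v * (-1) ** sum(a * b for a, b in zip(alpha, c))
               for c, v in x.items()) / 2 ** (n / 2)

def marginal_direct(j, gamma):
    """Sum the full table over all cells consistent with gamma on coordinates j."""
    return sum(v for c, v in x.items() if tuple(c[k] for k in j) == gamma)

def marginal_from_coeffs(j, gamma):
    """Rebuild the marginal using only coefficients whose support lies inside j."""
    total = 0.0
    for alpha in cells:
        if any(alpha[k] == 1 and k not in j for k in range(n)):
            continue                     # support(alpha) not contained in j
        sign = (-1) ** sum(alpha[k] * g for k, g in zip(j, gamma))
        total += fourier_coeff(alpha) * sign
    return total * 2 ** (n - len(j)) / 2 ** (n / 2)

j = (0, 2)                               # e.g. a node together with its parent
for gamma in itertools.product([0, 1], repeat=len(j)):
    assert np.isclose(marginal_direct(j, gamma), marginal_from_coeffs(j, gamma))
```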

Privacy.

By (Barak et al., 2007, Theorem 6) we can apply Laplace additive noise to release these Fourier coefficients.

Corollary 2.

For any , releasing for each the Fourier coefficient (and Algorithm 2) preserves -differential privacy.

Remark 1.

At worst, the resulting noise scale compares favourably with Algorithm 1's noise scale, provided no r.v. is child to more than half the graph. Moreover, the denser the graph (the more overlap between nodes' parents, and the less conditional independence assumed), the greater the reduction in scale. This is intuitively appealing.

Consistency.

What is gained by passing to the Fourier domain is that the perturbed marginal tables of Corollary 2 are consistent: anything in the span of projected Fourier basis vectors corresponds to some valid contingency table with (possibly negative) real-valued cells Barak et al. (2007).

1:  Input: data; graph; prior parameters; privacy and stealth parameters
2:  define the contingency table of the data
3:  define the downward closure of the variable neighbourhoods
4:  for each index in the downward closure do
5:     compute and perturb the corresponding Fourier coefficient
6:  end for
7:  increment the first (constant) coefficient
8:  for each node do
9:     project the perturbed marginal for the node and its parents
10:     for each cell of the marginal do
11:        output the corresponding posterior parameter
12:     end for
13:  end for
Algorithm 2 Laplace Mechanism in the Fourier Domain
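A compact end-to-end sketch of Algorithm 2 on a toy network follows. The network structure, the noise scale, the lift of the constant coefficient, and the uniform priors are all illustrative assumptions; the paper's exact calibration of these quantities is not reproduced here:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 3
parents = {0: (), 1: (0,), 2: (0,)}      # toy DAG: X0 -> X1, X0 -> X2
families = [tuple(sorted((i,) + parents[i])) for i in range(n)]

# Downward closure: every subset of every family {i} union parents(i).
closure = set()
for fam in families:
    for r in range(len(fam) + 1):
        closure.update(itertools.combinations(fam, r))

cells = list(itertools.product([0, 1], repeat=n))
table = {c: 0 for c in cells}
for row in rng.integers(0, 2, size=(100, n)):
    table[tuple(int(v) for v in row)] += 1

def coeff(subset):
    """Fourier coefficient for the indicator vector of `subset`."""
    alpha = tuple(1 if k in subset else 0 for k in range(n))
    return sum(v * (-1) ** sum(a * b for a, b in zip(alpha, c))
               for c, v in table.items()) / 2 ** (n / 2)

# Perturb only the coefficients in the downward closure, then lift the constant
# coefficient: adding c to it raises every reconstructed cell by c / 2^{n/2}.
scale, lift = 1.0, 4.0
noisy = {s: coeff(s) + rng.laplace(0.0, scale) for s in closure}
noisy[()] += lift

def marginal(fam, gamma):
    """Project the perturbed marginal for a family from its noisy coefficients."""
    total = sum(noisy[s] * (-1) ** sum(g for k, g in zip(fam, gamma) if k in s)
                for s in closure if set(s) <= set(fam))
    return total * 2 ** (n - len(fam)) / 2 ** (n / 2)

# Posterior Beta parameters: uniform priors plus the perturbed counts.
for i in range(n):
    fam = families[i]
    for gamma in itertools.product([0, 1], repeat=len(fam)):
        print(i, gamma, 1.0 + max(marginal(fam, gamma), 0.0))
```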

Non-negativity.

So far we have described the first stage of Algorithm 2. The remainder yields stealth by guaranteeing releases that are non-negative w.h.p. We adapt an idea of Barak et al. (2007): increasing the coefficient of the constant Fourier basis vector effects a small increment to each cell of the contingency table. While there is an exact minimal amount that would guarantee non-negativity, it is data dependent; thus our efficient approach is randomised.

Corollary 3.

For , adding to ’s coefficient induces a non-negative table w.p. .

The parameter trades off the probability of non-negativity against the resulting (minor) loss of utility. In the rare event of negativity, re-running Algorithm 2 affords another chance of stealth at the cost of additional privacy budget. We could alternatively truncate to achieve validity, sacrificing stealth but not privacy.

Utility.

Analogous to Proposition 1, each perturbed marginal is close to its unperturbed version w.h.p.

Theorem 3.

For each and , the perturbed tables in Algorithm 2 satisfy with probability at least :

Note that the scaling of this bound is reasonable since the table involves cells.

4 Privacy by Posterior Sampling

For general Bayesian networks, the statistician can release samples from the posterior Dimitrakakis et al. (2014) instead of perturbed samples of the posterior's parametrisation. We now develop a calculus for building up (stochastic) Lipschitz properties of systems of r.v.'s that are locally (stochastic) Lipschitz. Given smoothness of the entire network, differential privacy and utility of posterior sampling follow.

4.1 (Stochastic) Lipschitz Smoothness of Networks

The distribution family on the outcome space, equipped with a pseudo-metric (a pseudo-metric in that zero distance does not necessarily imply equality), is Lipschitz continuous if:

Assumption 1 (Lipschitz Continuity).

Let be a metric on . There exists such that, for any :

We fix the distance function to be the absolute log-ratio (cf. differential privacy). Consider a general Bayesian network. The following lemma shows that individual Lipschitz continuity of the conditional likelihood at every node implies global Lipschitz continuity of the network.
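For reference, in the notation of Dimitrakakis et al. (2014), a sketch of this Lipschitz condition (with the symbols $\theta$, $L$ and the dataset metric $\rho$ assumed here rather than taken verbatim from the original) reads

$$\bigl|\,\ln p_\theta(x) - \ln p_\theta(y)\,\bigr| \;\le\; L\,\rho(x, y) \qquad \text{for all parameters } \theta \text{ and all datasets } x, y,$$

so that close datasets induce log-likelihoods that differ by a proportionally small amount, which is exactly what the posterior-sampling analysis below exploits.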

Lemma 2.

If there exists such that , we have , then where .

Note that while Lipschitz continuity holds uniformly for some families, e.g., the exponential distribution, this is not so for many useful distributions such as the Bernoulli. In such cases a relaxed assumption requires that the prior be concentrated on smooth regions.

Assumption 2 (Stochastic Lipschitz Continuity).

Let the set of -Lipschitz be

Then there exist constants such that:

Lemma 3.

For the conditional likelihood at each node , define the set of parameters for which Lipschitz continuity holds with Lipschitz constant . If such that , , then where when .

Therefore, (Dimitrakakis et al., 2015, Theorem 7) asserts differential privacy of the Bayesian network’s posterior.

Theorem 4.

Differential privacy is satisfied using the log-ratio distance, for all and :

  1. Under the conditions in Lemma 2:

    i.e., the posterior is -differentially private under pseudo-metric .

  2. Under the conditions in Lemma 3, if uniformly for all for some :

    where ; constants and ; ; and

    the ratio between the maximum and marginal likelihoods of each likelihood function. Note that i.e., the posterior is -differentially private under pseudo-metric for .

4.2 MAP by the Exponential Mechanism

As an application of the posterior sampler, we now turn to releasing MAP point estimates via the exponential mechanism McSherry and Talwar (2007), which samples responses from a distribution exponential in some score function. By selecting a utility function that is maximised by a target non-private estimate, the exponential mechanism can be used to privately approximate that target with high utility. It is natural, then, to select as our utility the posterior likelihood, which is maximised by the MAP estimate.

1:  Input: data; prior; appropriate smoothness parameters; distance and privacy parameters
2:  calculate the posterior
3:  set the exponential-mechanism scale
4:  output a sampled response
Algorithm 3 Mechanism for MAP Point Estimates

Formally, Algorithm 3, under the assumptions of Theorem 4, outputs a response with probability proportional to an exponential in the posterior score, times a base measure. Here the scale involves a Lipschitz coefficient for the score, with the sup-norm on responses and the pseudo-metric on datasets as in the previous section. Providing the base measure is non-trivial in general, but for discrete finite outcome spaces it can be uniform McSherry and Talwar (2007). For our mechanism to be broadly applicable, the base measure can be chosen so as to guarantee a proper density function: if the score is bounded, then the normalising constant is finite.
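A minimal sketch of such an exponential-mechanism release over a discretised parameter space (the grid, the sensitivity constant, and the Beta posterior are illustrative assumptions; the uniform base measure is the discrete-case choice noted above):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)

def private_map_beta(a, b, eps, sensitivity, grid_size=1000):
    """Sample a parameter value with probability exponential in the log
    posterior density of Beta(a, b); approximates the MAP privately."""
    grid = np.linspace(1e-3, 1 - 1e-3, grid_size)     # discretised parameter space
    score = beta.logpdf(grid, a, b)                   # utility u(theta, D)
    logits = eps * score / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                              # uniform base measure on the grid
    return rng.choice(grid, p=probs)

# With small eps the sample spreads away from the MAP (~0.78 for Beta(40, 12));
# with large eps it concentrates near it.
for eps in (0.1, 1.0, 10.0):
    print(eps, private_map_beta(40, 12, eps, sensitivity=1.0))
```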

Figure 1: Effect on Bayesian naïve Bayes predictive-posterior accuracy of varying the privacy level.
Figure 2: Effect on linear regression of varying prior concentration. Bands indicate standard error over repeats.


Corollary 4.

Algorithm 3 preserves -differential privacy wrt pseudo-metric up to distance .

Proof.

The sensitivity of the posterior score function corresponds to the Lipschitz coefficient computed under either Lipschitz assumption (Dimitrakakis et al., 2015, Theorem 6). The result then follows from (McSherry and Talwar, 2007, Theorem 6). ∎

Utility for Algorithm 3 follows from McSherry and Talwar (2007), and states that the posterior likelihood of responses is likely to be close to that of the MAP.

Lemma 4.

Let with maximizer the MAP estimate, and let for . Then .

5 Experiments

Having proposed a number of mechanisms for approximating exact Bayesian inference in the general framework of probabilistic graphical models, we now demonstrate our approaches on two simple, well-known PGMs: the (generative) naïve Bayes classifier, and (discriminative) linear regression. This section, with derivations in the Appendix, illustrates how our approaches are applied, and supports our extensive theoretical results with experimental observation. We focus on the trade-off between privacy and utility (accuracy and MSE respectively), which involves the (private) posterior via a predictive posterior distribution in both case studies.

5.1 Bayesian Discrete Naïve Bayes

An illustrative example for our mechanisms is a Bayesian naïve Bayes model on Bernoulli class and attribute variables, with full conjugate Beta priors. This PGM directly specialises the running Example 1. We synthesised data from a naïve Bayes model and trained our mechanisms on only a fraction of the examples, with uniform Beta priors. We formed predictive posteriors, which we thresholded at 0.5 to make classification predictions on the remaining, unseen test data, so as to evaluate classification accuracy. The results are reported in Figure 1, where average performance is taken over 100 repeats to account for randomness in the train/test split and in the randomised mechanisms.

The small size of this dataset represents a challenge in our setting, since privacy is more difficult to preserve with smaller samples Dwork et al. (2006). As expected, privacy incurs a sacrifice in accuracy for all private mechanisms.

For both Laplace mechanisms that perturb posterior updates, note that the Boolean attributes and the class label (the sole parent of each attribute) yield a fixed number of nodes and a fixed downward-closure size. Following our generic mechanisms, the noise added to the sufficient statistics is independent of training-set size, and is similar in scale for both. The stealth parameter was set for the Fourier approach so that stealth was achieved 90% of the time; only those runs contributed to the plot. Due to the small increments to cell counts required by the Fourier approach to achieve its additional stealth property, we expect a small decrease in utility, which is borne out in Figure 1.

For the posterior sampler mechanism, while we can apply Assumption 2 to a Bernoulli–Beta pair to obtain a generalised form of differential privacy, we wish to compare with our other differentially-private mechanisms and so choose a route that satisfies Assumption 1, as detailed in the Appendix. We trim the posterior before sampling, so that probabilities are bounded away from zero. Figure 1 demonstrates that for small privacy budgets, the minimal probability at which to trim is relatively large, resulting in a poor approximate posterior; but past a certain threshold, the posterior sampler eventually outperforms the other private mechanisms.
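For reference, a minimal sketch of how (possibly noised) Beta posterior parameters induce the predictive posterior that is thresholded at 0.5 above (the parameter containers and numerical values are illustrative assumptions):

```python
def predictive_posterior(x, class_params, cond_params):
    """P(Y = 1 | x) for Bernoulli naive Bayes with independent Beta posteriors.
    Integrating each Beta out independently reduces to its posterior mean."""
    a_y, b_y = class_params
    prior = {1: a_y / (a_y + b_y), 0: b_y / (a_y + b_y)}
    joint = {}
    for y in (0, 1):
        p = prior[y]
        for j, xj in enumerate(x):
            a, b = cond_params[y][j]
            theta = a / (a + b)                       # posterior mean of theta_{j|y}
            p *= theta if xj == 1 else (1.0 - theta)
        joint[y] = p
    return joint[1] / (joint[0] + joint[1])

# Toy usage with two features and (possibly perturbed) posterior parameters.
cond = {1: [(8.0, 3.0), (2.0, 9.0)], 0: [(3.0, 8.0), (7.0, 4.0)]}
print(predictive_posterior([1, 0], class_params=(12.0, 10.0), cond_params=cond) > 0.5)
```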

5.2 Bayesian Linear Regression

We next explore a system of continuous r.v.'s in Bayesian linear regression, for which our posterior sampler is most appropriate. We model the label as i.i.d. Gaussian with known variance and with mean a linear function of the features, and endow the linear weights with a multivariate Gaussian prior with zero mean and spherical covariance. To satisfy Assumption 1 we conservatively truncate the Gaussian prior (cf. the Appendix), sample from the resulting truncated posterior, form a predictive posterior, and then compute mean squared error. To evaluate our approach we used U.S. census records from the Integrated Public Use Microdata Series Minnesota Population Center (2009). To predict Annual Income, we train on part of the data, holding out the remainder for testing. Figure 2 displays MSE under varying prior precision (the inverse of the prior covariance), with weights of bounded norm (chosen conservatively). As expected, a more concentrated prior (larger precision) leads to worse MSE for both mechanisms, as stronger priors reduce the influence of the data. Compared with non-private linear regression, private regression suffers only slightly worse MSE. At the same time the posterior sampler enjoys increasing privacy (proportional to the bounded norm, as given in the Appendix).
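A minimal sketch of the regression posterior and the posterior-sampler release described above (the synthetic data, noise variance, and prior precision are illustrative; the norm-bounded truncation used in the paper is noted but omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

def weight_posterior(X, y, noise_var, prior_precision):
    """Gaussian posterior over weights for Bayesian linear regression with
    known noise variance and a zero-mean spherical Gaussian prior."""
    d = X.shape[1]
    cov = np.linalg.inv(prior_precision * np.eye(d) + X.T @ X / noise_var)
    mean = cov @ X.T @ y / noise_var
    return mean, cov

# Toy data standing in for the census features.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + rng.normal(scale=0.5, size=200)
mean, cov = weight_posterior(X, y, noise_var=0.25, prior_precision=10.0)

# Posterior-sampler release: draw weights from the posterior (the paper further
# truncates to a bounded-norm region) and score the prediction by test MSE.
w_sample = rng.multivariate_normal(mean, cov)
X_test = rng.normal(size=(100, 5))
y_test = X_test @ w_true + rng.normal(scale=0.5, size=100)
print("test MSE:", np.mean((X_test @ w_sample - y_test) ** 2))
```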

6 Conclusions

We have presented a suite of mechanisms for differentially-private inference in graphical models, within a Bayesian framework. The first two perturb posterior parameters to achieve privacy, either in the original parameter domain or in the frequency domain via a Fourier transform. Our third mechanism relies on the choice of a prior, in combination with posterior sampling. We complement our mechanisms for releasing the posterior with private MAP point estimators. Throughout, we have proved utility and privacy bounds for our mechanisms, which in most cases depend on the graph structure of the Bayesian network: naturally, conditional independence affects privacy. We support our new mechanisms and analysis with applications to two concrete models, with experiments exploring the privacy-utility trade-off.

Acknowledgements.

This work was partially supported by the Swiss National Foundation grant “Swiss Sense Synergy” CRSII2-154458.

Appendix A Proofs for Laplace Mechanism on Posterior Updates

A.1 Proof of Proposition 1

Let us denote by the event that a Laplace sample exceeds the threshold in absolute value. Consider the probability that none of the i.i.d. Laplace noise variables we add to the counts exceeds the threshold in absolute value. To make sure this probability is no smaller than the target level, it suffices to choose the threshold as in the statement of the proposition.
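In sketch form, the calculation behind this choice (with $k$ the number of perturbed counts, $b$ the Laplace scale, $t$ the threshold, and $\delta$ the failure probability; these symbols are assumed here):

$$\Pr(|\eta| > t) = e^{-t/b} \;\text{ for } \eta \sim \mathrm{Lap}(b), \qquad \Pr\Bigl(\max_{i \le k} |\eta_i| > t\Bigr) \;\le\; k\,e^{-t/b},$$

so taking $t \ge b \ln(k/\delta)$ ensures that every perturbed count is within $t$ of its non-private value with probability at least $1 - \delta$.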

A.2 Proof of Theorem 1

Lemma A.1.

(McDiarmid's inequality) Suppose that the random variables $X_1,\dots,X_m$ are independent, and that $f$ is a mapping from their product space to $\mathbb{R}$ satisfying the bounded-differences property: for each $i$, changing only the $i$-th argument changes $f$ by at most $c_i$. Then, for all $t > 0$,

$$\Pr\bigl(f(X_1,\dots,X_m) - \mathbb{E}\,f(X_1,\dots,X_m) \ge t\bigr) \;\le\; \exp\!\left(\frac{-2t^2}{\sum_{i=1}^{m} c_i^2}\right).$$

To prove Theorem 1, we need the following statements.

Lemma A.2.

For constants and , .

Proof.

This follows from applying the mean value theorem to the function on the interval

We need to assume that and are larger than the only turning point of the function, which lies between and ; is sufficient. (To cover more priors, we could assume that is bounded away from zero, and that at this parameter the maximum is below , and proceed from there for the second case.)

Lemma A.3.

For ,

Proof.

By monotonicity of the function,

and by inequalities , we have

The last inequality follows from

Lemma A.4.

For , satisfies

Proof.

The distribution of is given by,

Then we have

By the same argument, the expectation of is given by .

By plugging and into Lemma A.3 we have