Wishart Mechanism for Differentially Private Principal Components Analysis

11/18/2015 · Wuxuan Jiang, et al. · Shanghai Jiao Tong University

We propose a new input perturbation mechanism for publishing a covariance matrix that achieves (ϵ,0)-differential privacy. Our mechanism uses a Wishart distribution to generate matrix noise. In particular, we apply this mechanism to principal component analysis (PCA). Our mechanism preserves the positive semi-definiteness of the published covariance matrix, and thus it gives rise to a general publishing framework for input perturbation of symmetric positive semidefinite matrices. Moreover, compared with the classic Laplace mechanism, our method has a better utility guarantee. To the best of our knowledge, the Wishart mechanism is the best input perturbation approach for (ϵ,0)-differentially private PCA. We also compare our work with previous exponential mechanism algorithms in the literature, providing a near-optimal bound with more flexibility and lower computational cost.


1 Introduction

Plenty of machine learning tasks deal with sensitive information such as financial and medical data. A common concern about data security has arisen from the rapid development of data mining techniques. Several definitions of data privacy have been proposed in the literature, among which differential privacy (DP) has been the most widely used [Dwork et al.2006]. Differential privacy bounds the amount of information that can be revealed by changing one individual in the dataset. Beyond a concept in database security, differential privacy has been used by many researchers to develop privacy-preserving learning algorithms [Chaudhuri and Monteleoni2009, Chaudhuri, Monteleoni, and Sarwate2011, Bojarski et al.2014]. Indeed, this class of algorithms has been applied to a large number of machine learning models, including logistic regression [Chaudhuri and Monteleoni2009], support vector machines [Chaudhuri, Monteleoni, and Sarwate2011], and random decision trees [Bojarski et al.2014]. Accordingly, these methods can protect the raw data even when the output and the algorithm itself are published.

Differential privacy (DP) aims to hide individual information while preserving basic statistics of the whole dataset. A simple way to achieve this is to add specially calibrated noise to the original model. An attacker who observes two outputs generated from slightly different inputs then cannot tell whether the change in output comes from the artificial noise or from the difference in inputs. However, the noise may degrade the model's regular performance, so one must carefully trade off privacy against utility.

Whatever the procedure is, whether a query, a learning algorithm, a game strategy, or something else, we can define differential privacy for it as long as the procedure takes a dataset as input and returns a corresponding output. In this paper, we study the problem of designing differentially private principal component analysis (PCA). PCA reduces the data dimension while retaining the maximal variance. More specifically, it finds a projection matrix by computing a low-rank approximation to the sample covariance matrix of the given data points.

Privacy-preserving PCA is a well-studied problem in the literature [Dwork et al.2014, Hardt and Roth2012, Hardt and Roth2013, Hardt and Price2014, Blum et al.2005, Chaudhuri, Sarwate, and Sinha2012, Kapralov and Talwar2013]. Such algorithms output a noisy projection matrix for dimension reduction while preserving the privacy of every single data point. The existing privacy-preserving PCA algorithms differ in two major features: the notion of differential privacy they satisfy and the stage at which randomization is injected. Accordingly, they can be divided into distinct categories.

The notion of differential privacy has two types: (ϵ,0)-DP (also called pure DP) and (ϵ,δ)-DP (also called approximate DP). (ϵ,δ)-DP is a weaker version of (ϵ,0)-DP, as the former allows the privacy guarantee to be broken with tiny probability (more precisely, δ). In the seminal work on privacy-preserving PCA [Dwork et al.2014, Hardt and Roth2012, Hardt and Roth2013, Hardt and Price2014, Blum et al.2005], the authors used the notion of (ϵ,δ)-DP. In contrast, only a few works [Chaudhuri, Sarwate, and Sinha2012, Kapralov and Talwar2013] are based on (ϵ,0)-DP.

In terms of the stage of randomization, there are two mainstream classes of approaches. The first randomly computes the eigenspace [Hardt and Roth2013, Hardt and Price2014, Chaudhuri, Sarwate, and Sinha2012, Kapralov and Talwar2013]: the noise is added during the computing procedure. An alternative way is to add noise directly to the covariance matrix and then run a non-private eigenspace algorithm to produce the output. This class of approaches is called input perturbation [Blum et al.2005, Dwork et al.2014]. Input perturbation algorithms publish a noisy sample covariance matrix before computing the eigenspace; thus, any further operation on the noisy covariance matrix does not violate the privacy guarantee. As far as flexibility is concerned, input perturbation performs better because it is not limited to computing the eigenspace. Besides, input perturbation is efficient because the only extra effort is generating the noise. In view of these advantages, our mechanism for privacy-preserving PCA is also based on input perturbation.

Related Work

Blum et al. (2005) proposed an early input perturbation framework (named SuLQ), and the parameters of its noise were refined by Dwork et al. (2006). Dwork et al. (2014) proved the state-of-the-art utility bounds for (ϵ,δ)-DP. Hardt and Roth (2012) provided a better bound under a coherence assumption. In [Hardt and Roth2013, Hardt and Price2014], the authors used a noisy power method to produce the principal eigenvectors iteratively, removing the previously generated ones at each round. Hardt and Price (2014) provided a special case for (ϵ,0)-DP as well.

Chaudhuri, Sarwate, and Sinha (2012) proposed the first useful privacy-preserving PCA algorithm for (ϵ,0)-DP, based on the exponential mechanism [McSherry and Talwar2007]. Kapralov and Talwar (2013) argued that the algorithm in [Chaudhuri, Sarwate, and Sinha2012] lacks a convergence-time guarantee and uses heuristic tests to check convergence of the sampling chain, which may affect the privacy guarantee. They also devised a mixed algorithm for low-rank matrix approximation. However, their algorithm is quite complicated to implement, and its running time grows as a high-order polynomial in d, the dimension of the data points.

Our work is mainly inspired by Dwork et al. (2014). Since they provided algorithms for (ϵ,δ)-DP, we seek a similar approach for (ϵ,0)-DP with a different design of the noise matrix. As input perturbation methods, Blum et al. (2005) and Dwork et al. (2014) both used a symmetric Gaussian noise matrix for privately publishing a noisy covariance matrix. A reasonable worry is that the published matrix may no longer be positive semidefinite, a natural property of a covariance matrix.

Contribution and Organization

In this paper we propose a new mechanism for privacy-preserving PCA that we call the Wishart mechanism. The key idea is to add a Wishart noise matrix to the original sample covariance matrix. A Wishart matrix is always positive semidefinite, which in turn keeps the perturbed covariance matrix positive semidefinite. Additionally, a Wishart matrix can be regarded as the scatter matrix of a set of random Gaussian vectors [Gupta and Nagar2000]. Consequently, our Wishart mechanism equivalently adds Gaussian noise to the original data points.

By setting appropriate parameters of the Wishart distribution, we derive the (ϵ,0)-privacy guarantee (Theorem 4). Compared with the classic Laplace mechanism, our Wishart mechanism adds less noise (Section 4), which implies that our mechanism always has a better utility bound. We also provide a general framework for choosing between Laplace and Wishart input perturbation for (ϵ,0)-DP in Section 4.

Besides using the Laplace mechanism as a baseline, we also conduct theoretical analysis to compare our work with other privacy-preserving PCA algorithms based on (ϵ,0)-DP. With respect to the different criteria, we provide a sample complexity bound (Theorem 7) for comparison with Chaudhuri, Sarwate, and Sinha (2012) and derive low-rank approximation closeness when comparing with Kapralov and Talwar (2013). Beyond the principal-eigenvector guarantee in [Chaudhuri, Sarwate, and Sinha2012], we give a guarantee for rank-k subspace closeness (Theorem 6). Using a stronger definition of adjacent matrices, we achieve a k-free utility bound (Theorem 9). Converting the lower bound construction in [Chaudhuri, Sarwate, and Sinha2012, Kapralov and Talwar2013] into our setting, we can see that the Wishart mechanism is near-optimal.

The remainder of the paper is organized as follows. Section 2 gives the notation and definitions used in the paper. Section 3 presents the baseline and our proposed algorithms. Section 4 provides a thorough analysis of the privacy and utility guarantees of our mechanism, together with comparisons to several highly related works. Finally, we conclude the work in Section 5. Some proofs and further explanation are deferred to the supplementary material.

2 Preliminaries

We first give some notation that will be used in this paper. Let I_d denote the d×d identity matrix. Given a d×n real matrix A, let its full singular value decomposition (SVD) be A = U Σ V^T, where U ∈ R^{d×d} and V ∈ R^{n×n} are orthogonal (i.e., U^T U = I_d and V^T V = I_n), and Σ ∈ R^{d×n} is a diagonal matrix with the i-th diagonal entry σ_i being the i-th largest singular value of A. Assume that the rank of A is ρ (≤ min{d, n}). This implies that A has ρ nonzero singular values. Let U_k and V_k be the first k (< ρ) columns of U and V, respectively, and Σ_k be the top k×k sub-block of Σ. Then the matrix A_k = U_k Σ_k V_k^T is the best rank-k approximation to A.

The Frobenius norm of A is defined as ‖A‖_F = (Σ_i σ_i²)^{1/2}, the spectral norm is defined as ‖A‖_2 = σ_1, the nuclear norm is defined as ‖A‖_* = Σ_i σ_i, and the ℓ1 norm is defined as ‖A‖_1 = Σ_{i,j} |A_{ij}|.

Given a set of raw data points {x_1, …, x_n} where x_i ∈ R^d, we consider the problem of publishing a noisy empirical sample covariance matrix for doing PCA. Following previous work on privacy-preserving PCA, we also assume ‖x_i‖_2 ≤ 1 for all i. The standard PCA computes the sample covariance matrix A = (1/n) Σ_{i=1}^n x_i x_i^T = (1/n) X X^T of the raw data matrix X = [x_1, …, x_n]. Since A is a symmetric positive semidefinite matrix, its SVD is equivalent to its spectral decomposition; that is, A = U Σ U^T. PCA uses U_k as the projection matrix to compute the low-dimensional representation of the raw data: Y = U_k^T X.
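For concreteness, the non-private PCA pipeline just described can be written in a few lines of NumPy. This is our own illustrative sketch (the function name pca_projection is ours, not the paper's):

```python
import numpy as np

def pca_projection(X, k):
    """Top-k PCA of a d x n data matrix X whose columns are data points."""
    n = X.shape[1]
    A = (X @ X.T) / n                  # sample covariance matrix
    # A is symmetric PSD, so its SVD coincides with its spectral decomposition.
    eigvals, eigvecs = np.linalg.eigh(A)
    order = np.argsort(eigvals)[::-1]  # eigenvalues in decreasing order
    U_k = eigvecs[:, order[:k]]        # top-k projection matrix
    return U_k, U_k.T @ X              # subspace and low-dimensional data
```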

In this work we use Laplace and Wishart distributions, which are defined as follows.

Definition 1.

A random variable z is said to have a Laplace distribution Lap(μ, b) if its probability density function is

f(z) = (1/(2b)) exp(−|z − μ|/b).

Definition 2 ([Gupta and Nagar2000]).

A d×d random symmetric positive definite matrix W is said to have a Wishart distribution W_d(m, C) if its probability density function is

f(W) = |W|^{(m−d−1)/2} exp(−tr(C^{−1}W)/2) / (2^{md/2} |C|^{m/2} Γ_d(m/2)),

where m > d − 1 and C is a d×d positive definite matrix.
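When m is an integer, sampling from W_d(m, C) is straightforward because a Wishart matrix is the scatter matrix of m i.i.d. Gaussian vectors with covariance C. Below is a minimal sketch of this standard construction (our own code, assuming C is positive definite):

```python
import numpy as np

def sample_wishart(m, C, rng):
    """Draw W ~ W_d(m, C) as a scatter matrix of m Gaussian vectors."""
    d = C.shape[0]
    L = np.linalg.cholesky(C)              # C = L L^T
    G = L @ rng.standard_normal((d, m))    # columns are i.i.d. N(0, C)
    return G @ G.T                         # symmetric positive semidefinite
```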

Now we introduce the formal definition of differential privacy.

Definition 3.

A randomized mechanism M takes a dataset D as input and outputs a structure s ∈ R, where R is the range of M. For any two adjacent datasets D and D′ (differing in only one entry), M is said to be (ϵ,0)-differentially private if for all measurable subsets S ⊆ R we have

Pr[M(D) ∈ S] ≤ e^ϵ · Pr[M(D′) ∈ S],

where ϵ is a small parameter controlling the strength of the privacy requirement.

This definition places a limit on how dissimilar the output probability distributions can be for similar inputs. The notion of adjacent datasets admits several different interpretations. In the scenario of privacy-preserving PCA, our definition is as follows. Two data matrices X = [x_1, …, x_n] and X′ = [x′_1, …, x′_n] are adjacent provided that x_i ≠ x′_i for exactly one index i and x_j = x′_j for all j ≠ i. It should be pointed out that our definition of adjacent datasets is slightly different from that of [Kapralov and Talwar2013], which leads to a significant difference in utility bounds. We give a more specific discussion in Section 4.

We also give the definition of (ϵ,δ)-differential privacy. This notion requires less privacy protection, so it often brings a better utility guarantee.

Definition 4.

A randomized mechanism M takes a dataset D as input and outputs a structure s ∈ R, where R is the range of M. For any two adjacent datasets D and D′ (differing in only one entry), M is said to be (ϵ,δ)-differentially private if for all measurable subsets S ⊆ R we have

Pr[M(D) ∈ S] ≤ e^ϵ · Pr[M(D′) ∈ S] + δ.

Sensitivity analysis is a general approach to achieving differential privacy. The following definitions show the two typical kinds of sensitivity.

Definition 5.

The ℓ1 sensitivity of a function f is defined as

Δ1(f) = max over adjacent datasets D, D′ of ‖f(D) − f(D′)‖_1.

The ℓ2 sensitivity of f is defined as

Δ2(f) = max over adjacent datasets D, D′ of ‖f(D) − f(D′)‖_2.

The sensitivity describes the largest possible change caused by replacing one individual data entry. The ℓ1 sensitivity is used in the Laplace mechanism for (ϵ,0)-differential privacy, while the ℓ2 sensitivity is used in the Gaussian mechanism for (ϵ,δ)-differential privacy. We list the two mechanisms for comparison.

Theorem 1 (Laplace Mechanism).

Let b = Δ1(f)/ϵ. Adding Laplace noise Lap(0, b) to each dimension of f(D) provides (ϵ,0)-differential privacy.

Theorem 2 (Gaussian Mechanism).

For c² > 2 ln(1.25/δ), let σ ≥ c Δ2(f)/ϵ. Adding Gaussian noise N(0, σ²) to each dimension of f(D) provides (ϵ,δ)-differential privacy.
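As a quick illustration of Theorems 1 and 2, here is a minimal sketch of both mechanisms for a vector-valued query (our own code; delta1 and delta2 stand for the sensitivities of Definition 5, which the caller must bound):

```python
import numpy as np

def laplace_mechanism(value, delta1, eps, rng):
    """(eps, 0)-DP: add Laplace noise with scale delta1/eps per coordinate."""
    return value + rng.laplace(scale=delta1 / eps, size=value.shape)

def gaussian_mechanism(value, delta2, eps, delta, rng):
    """(eps, delta)-DP: Gaussian noise, sigma = sqrt(2 ln(1.25/delta)) delta2/eps."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * delta2 / eps
    return value + rng.normal(scale=sigma, size=value.shape)
```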

The above mechanisms are both perturbation methods. Another widely used method is the exponential mechanism [McSherry and Talwar2007], which is based on sampling techniques.

3 Algorithms

First we look at the general framework of privacy-preserving PCA. According to the definition of differential privacy, a privacy-preserving PCA algorithm takes the raw data matrix X as input, calculates the sample covariance matrix A = (1/n) X X^T, and computes the top-k subspace of A as the output.

The traditional approach adds noise during the computing procedure. For example, Chaudhuri, Sarwate, and Sinha (2012) and Kapralov and Talwar (2013) used a sampling-based mechanism while computing eigenvectors to obtain approximate results. Our mechanism instead adds noise in the first stage, publishing A in a differentially private manner. Thus, our mechanism takes X as input and outputs a noisy covariance matrix Â. Afterwards we follow the standard PCA procedure to compute the top-k subspace. This can be seen as a differentially private preprocessing procedure.

Our baseline is the Laplace mechanism (Algorithm 1 and Theorem 1). To the best of our knowledge, the Laplace mechanism is the only existing input perturbation method for (ϵ,0)-DP PCA. Since this private procedure ends before the subspace is computed, the function appearing in the sensitivity definition is f(X) = (1/n) X X^T.

Algorithm 1 Laplace input perturbation

Input: raw data matrix X ∈ R^{d×n}; privacy parameter ϵ; number of data points n.
1: Draw d(d+1)/2 i.i.d. samples from Lap(0, 2d/(nϵ)), then form a symmetric matrix Z: the samples fill the upper triangular part, and each entry in the lower triangular part is copied from the opposite position.
2: Compute A = (1/n) X X^T.
3: Add noise: Â = A + Z.
Output: Â.

Note that to make Z symmetric, we use a symmetric matrix-variate Laplace distribution in Algorithm 1. However, this mechanism cannot guarantee the positive semi-definiteness of Â, a desirable attribute for a covariance matrix. This motivates us to use Wishart noise instead, giving rise to the Wishart mechanism in Algorithm 2.
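The following NumPy sketch renders Algorithm 1 (our own code; the noise scale 2d/(nϵ) comes from the ℓ1-sensitivity bound Δ1 ≤ 2d/n proved in the appendix):

```python
import numpy as np

def laplace_input_perturbation(X, eps, rng):
    """Algorithm 1: publish a noisy covariance matrix under (eps, 0)-DP."""
    d, n = X.shape
    # Sample the upper triangle (d(d+1)/2 entries) and mirror it down.
    Z = np.zeros((d, d))
    iu = np.triu_indices(d)
    Z[iu] = rng.laplace(scale=2.0 * d / (n * eps), size=len(iu[0]))
    Z = Z + Z.T - np.diag(np.diag(Z))
    A = (X @ X.T) / n
    return A + Z        # symmetric, but possibly not positive semidefinite
```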

Algorithm 2 Wishart input perturbation

Input: raw data matrix X ∈ R^{d×n}; privacy parameter ϵ; number of data points n.
1: Draw a sample W from the Wishart distribution W_d(d+1, C), where the scale matrix C has d identical eigenvalues equal to 3/(2nϵ).
2: Compute A = (1/n) X X^T.
3: Add noise: Â = A + W.
Output: Â.
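Under the same conventions, Algorithm 2 can be sketched as follows (our own code; the scale matrix C = (3/(2nϵ)) I_d matches the parameter choice above, and W is generated as a Gaussian scatter matrix):

```python
import numpy as np

def wishart_input_perturbation(X, eps, rng):
    """Algorithm 2: publish a PSD noisy covariance matrix under (eps, 0)-DP."""
    d, n = X.shape
    scale = 3.0 / (2.0 * n * eps)                # common eigenvalue of C
    # W ~ W_d(d+1, scale * I_d): scatter matrix of d+1 Gaussian vectors.
    G = np.sqrt(scale) * rng.standard_normal((d, d + 1))
    W = G @ G.T                                  # positive semidefinite
    A = (X @ X.T) / n
    return A + W                                 # remains positive semidefinite

# Any downstream computation on the published matrix costs no extra privacy,
# e.g., running the standard non-private PCA of Section 2 on the output.
```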

4 Analysis

In this section, we conduct theoretical analysis of Algorithms 1 and 2 under the framework of differentially private matrix publishing. The theoretical support has two parts: the privacy guarantee and the utility guarantee. The former is the essential requirement for privacy-preserving algorithms, while the latter tells how well the algorithm works compared with a non-private version. We mainly present the key theorems and analysis; all omitted technical proofs can be found in the supplementary material.

Privacy guarantee

We first show that both algorithms satisfy the privacy guarantee. Suppose there are two adjacent datasets X = [x_1, …, x_n] and X′ = [x′_1, …, x′_n] that differ only in the i-th data point (i.e., only x_i and x′_i are distinct). Without loss of generality, we further assume that each data vector has ℓ2 norm at most 1.

Theorem 3.

Algorithm 1 provides (ϵ,0)-differential privacy.

This theorem can be proved by some simple derivations, so we defer the proof to the supplementary material.

Theorem 4.

Algorithm 2 provides (ϵ,0)-differential privacy.

Proof.

Assume the outputs for the adjacent inputs X and X′ are identical (denoted Â). Here Â = A + W = A′ + W′, where A = (1/n) X X^T and A′ = (1/n) X′ X′^T. We define the difference matrix Δ = A − A′ = (1/n)(x_i x_i^T − x′_i x′_i^T). The privacy guarantee amounts to bounding the ratio

P(W = Â − A) / P(W′ = Â − A′).

As W ~ W_d(d+1, C), the exponent (m − d − 1)/2 = 0 makes the determinant factor in Definition 2 vanish, and we have

P(W = Â − A) / P(W′ = Â − A′) = exp( (1/2) tr(C^{−1}(Â − A′)) − (1/2) tr(C^{−1}(Â − A)) ) = exp( (1/2) tr(C^{−1} Δ) ).

Then apply Von Neumann's trace inequality: for matrices P and Q with i-th largest singular values σ_i(P) and σ_i(Q), it holds that |tr(PQ)| ≤ Σ_i σ_i(P) σ_i(Q). So

(1/2) tr(C^{−1} Δ) ≤ (1/2) σ_1(C^{−1}) (σ_1(Δ) + σ_2(Δ)).    (1)

Since Δ has rank at most 2, and by the singular value inequality σ_{i+j−1}(P + Q) ≤ σ_i(P) + σ_j(Q) together with ‖x_i‖_2, ‖x′_i‖_2 ≤ 1, we can bound σ_1(Δ) + σ_2(Δ) ≤ 3/n.

In Algorithm 2, the scale matrix C in the Wishart distribution has d identical eigenvalues equal to 3/(2nϵ), which implies σ_1(C^{−1}) = 2nϵ/3. Substituting these terms into Eq. (1) yields

(1/2) tr(C^{−1} Δ) ≤ (1/2) · (2nϵ/3) · (3/n) = ϵ,

which completes the proof. ∎
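The key step of the proof, the bound on (1/2) tr(C^{−1} Δ), is easy to check numerically. The script below (our own sanity test, not from the paper) verifies that the exponent of the density ratio never exceeds ϵ for random adjacent datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 20, 500, 0.5
lam = 3.0 / (2.0 * n * eps)        # common eigenvalue of the scale matrix C

for _ in range(1000):
    # Two data vectors with L2 norm at most 1, differing between datasets.
    x = rng.normal(size=d);  x *= rng.uniform() / np.linalg.norm(x)
    xp = rng.normal(size=d); xp *= rng.uniform() / np.linalg.norm(xp)
    Delta = (np.outer(x, x) - np.outer(xp, xp)) / n      # rank <= 2
    exponent = 0.5 * abs(np.trace(Delta)) / lam          # (1/2) tr(C^{-1} Delta)
    assert exponent <= eps + 1e-12
```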

Utility guarantee

Next we give bounds on how far the noisy results are from the optimal ones. Since the Laplace and Wishart mechanisms are both input perturbation methods, their analyses are similar.

To ensure the privacy guarantee, we add a noise matrix to the input data. Such noise may affect the properties of the original matrix, and for input perturbation methods the magnitude of the noise matrix directly determines how large the effect is. For example, if the magnitude of the noise matrix is even larger than the data, the matrix after perturbation is surely dominated by noise. A better utility bound means less noise added. We choose the spectral norm of the noise matrix to measure its magnitude; since we are investigating the privacy-preserving PCA problem, we mainly care about the usefulness of the subspace of the top-k singular vectors.

The noise matrix Z in the Laplace mechanism is constructed from i.i.d. random variables of Lap(0, 2d/(nϵ)). Using the tail bound for such a matrix ensemble in [Tao2012], the spectral norm of the noise matrix in Algorithm 1 satisfies ‖Z‖_2 = O(d√d/(nϵ)) with high probability.

Then we turn to the analysis of the Wishart mechanism. We use the tail bound of the Wishart distribution from [Zhu2012]:

Lemma 1 (Tail Bound of Wishart Distribution).

Let W ~ W_d(m, C). Then for θ ≥ 0, with probability at most e^{−θ},

λ_1(W) ≥ λ_1(C) · (√m + √d + √(2θ))²,

where λ_1(·) denotes the largest eigenvalue.

In our setting, m = d + 1 and λ_1(C) = 3/(2nϵ). We thus have that with probability at most e^{−θ},

λ_1(W) ≥ (3/(2nϵ)) · (√(d+1) + √d + √(2θ))².

Let θ = d. Then the failure probability e^{−d} is negligible, so we can say that with high probability,

‖W‖_2 ≤ (3/(2nϵ)) · (√(d+1) + √d + √(2d))² = O(d/(nϵ)).

For convenience, we write ‖W‖_2 = O(d/(nϵ)) in what follows.

We can see that the spectral norm of the noise matrix generated by the Wishart mechanism is O(d/(nϵ)), while the Laplace mechanism requires O(d√d/(nϵ)). This implies that the Wishart mechanism adds less noise to obtain the same privacy guarantee. We list the four existing input perturbation approaches for comparison in Table 1. Compared with the state-of-the-art results for the (ϵ,δ) case [Dwork et al.2014], our noise magnitude of O(d/(nϵ)) is obviously worse than their O(√d/(nϵ)); this can be seen as the utility gap between (ϵ,0)-DP and (ϵ,δ)-DP.

Approach | Noise magnitude | Privacy
Laplace (Algorithm 1) | O(d√d/(nϵ)) | (ϵ,0)-DP
SuLQ [Blum et al.2005] | O(d√d/(nϵ)) (up to log factors) | (ϵ,δ)-DP
Wishart (Algorithm 2) | O(d/(nϵ)) | (ϵ,0)-DP
Analyze Gauss [Dwork et al.2014] | O(√d/(nϵ)) | (ϵ,δ)-DP

Table 1: Spectral norm of the noise matrix in input perturbation approaches.
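The gap between the two (ϵ,0) rows in Table 1 is easy to observe empirically. The following script (our own experiment, reusing the constructions from Section 3) compares the spectral norms of the two noise matrices as d grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 10_000, 0.5

for d in (10, 50, 100):
    # Laplace noise matrix (Algorithm 1).
    Z = np.zeros((d, d))
    iu = np.triu_indices(d)
    Z[iu] = rng.laplace(scale=2.0 * d / (n * eps), size=len(iu[0]))
    Z = Z + Z.T - np.diag(np.diag(Z))
    # Wishart noise matrix (Algorithm 2).
    G = np.sqrt(3.0 / (2.0 * n * eps)) * rng.standard_normal((d, d + 1))
    W = G @ G.T
    print(d, np.linalg.norm(Z, 2), np.linalg.norm(W, 2))
# The Laplace norm grows roughly like d*sqrt(d)/(n*eps), the Wishart norm like d/(n*eps).
```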

General framework

Let us examine the intrinsic difference between the Laplace and Wishart mechanisms. The key element is the difference matrix E between two adjacent input matrices. The Laplace mechanism adds a noise matrix calibrated to the ℓ1 sensitivity, which equals max ‖E‖_1 over adjacent inputs; thus the spectral norm of its noise matrix is O(√d · max‖E‖_1/ϵ). When it comes to the Wishart mechanism, the magnitude of the noise is determined by the nuclear norm ‖E‖_*: to satisfy the privacy guarantee we take the eigenvalues of the scale matrix C on the order of max‖E‖_*/ϵ, and then the spectral norm of the noise matrix is O(d · max‖E‖_*/ϵ). Consequently, we obtain the following theorem.

Theorem 5.

Let M be a symmetric matrix generated from some input, and for two arbitrary adjacent inputs let the generated matrices be M and M′, with E = M − M′. Using the Wishart mechanism to publish M in a differentially private manner works better (up to constants) if

√d · max‖E‖_* < max‖E‖_1;

otherwise the Laplace mechanism works better.
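A hypothetical helper implementing this comparison (our own code; e1_max and enuc_max stand for the worst-case ℓ1 norm and nuclear norm of E over adjacent inputs, which the caller must supply for their notion of adjacency):

```python
import numpy as np

def choose_mechanism(d, e1_max, enuc_max):
    """Pick the perturbation whose noise has the smaller spectral norm.

    Up to constants (Theorem 5): Laplace noise scales like sqrt(d)*e1_max/eps,
    Wishart noise like d*enuc_max/eps, so only their ratio matters.
    """
    return "wishart" if np.sqrt(d) * enuc_max < e1_max else "laplace"

# Covariance publishing with unit-norm data points (Section 4):
# e1_max <= 2d/n and enuc_max <= 3/n, so Wishart wins for all d >= 3.
d, n = 50, 1000
print(choose_mechanism(d, 2.0 * d / n, 3.0 / n))   # -> "wishart"
```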

Top-k subspace closeness

We now compare our mechanism with the algorithm in [Chaudhuri, Sarwate, and Sinha2012]. Chaudhuri, Sarwate, and Sinha (2012) proposed an exponential-mechanism-based method, which outputs the top-k subspace by drawing a sample from the matrix Bingham-von Mises-Fisher distribution. Wang, Wu, and Wu (2013) applied this algorithm to private spectral analysis on graphs and showed that it outperforms the Laplace mechanism for output perturbation. Because of the scoring function used, it is hard to sample directly from the original Bingham-von Mises-Fisher distribution. Instead, Chaudhuri, Sarwate, and Sinha (2012) used Gibbs sampling techniques to reach an approximate solution. However, there is no guarantee of convergence; they check convergence heuristically, which may affect the basic privacy guarantee.

First we provide our result on top-k subspace closeness:

Theorem 6.

Let Û_k be the top-k subspace of Â in Algorithm 2, and denote the corresponding non-noisy subspace of A by U_k. Let σ_1 ≥ σ_2 ≥ … ≥ σ_d be the singular values of A. If the eigengap satisfies σ_k − σ_{k+1} = Ω(d/(nϵ)), then with high probability

‖Û_k Û_k^T − U_k U_k^T‖_2 = O( d / (nϵ (σ_k − σ_{k+1})) ).

We apply the well-known Davis-Kahan theorem [Davis1963] to obtain this result, which characterizes the usefulness of our noisy top-k subspace. Chaudhuri, Sarwate, and Sinha (2012), however, only provided a utility guarantee for the principal eigenvector, so we can only compare top-1 subspace closeness with them.
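The quantity bounded in Theorem 6 is the spectral norm distance between the two projection matrices, which is simple to measure empirically. A sketch (our own code, reusing the hypothetical pca_projection and wishart_input_perturbation helpers from above):

```python
import numpy as np

def subspace_distance(U_k, U_k_hat):
    """Spectral norm distance between projectors onto two k-dim subspaces."""
    return np.linalg.norm(U_k @ U_k.T - U_k_hat @ U_k_hat.T, 2)

# Compare private and non-private top-k subspaces on random data.
rng = np.random.default_rng(2)
d, n, k, eps = 30, 5000, 5, 1.0
X = rng.normal(size=(d, n))
X /= np.maximum(1.0, np.linalg.norm(X, axis=0))    # enforce ||x_i||_2 <= 1
U_k, _ = pca_projection(X, k)
A_hat = wishart_input_perturbation(X, eps, rng)
eigvals, eigvecs = np.linalg.eigh(A_hat)
U_k_hat = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
print(subspace_distance(U_k, U_k_hat))
```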

Before the comparison, we introduce the closeness measure from [Chaudhuri, Sarwate, and Sinha2012].

Definition 6.

A randomized algorithm is a (ρ, η)-close approximation to the top eigenvector if, for all datasets of n points, it outputs a vector v̂_1 such that

Pr( |⟨v̂_1, v_1⟩| ≥ ρ ) ≥ 1 − η,

where v_1 is the true principal eigenvector.

Under this measure, we derive the sample complexity of the Wishart mechanism.

Theorem 7.

If σ_1 − σ_2 > 0 and (up to logarithmic factors)

n = Ω( d / (ϵ (1 − ρ) (σ_1 − σ_2)) ),

then the Wishart mechanism is a (ρ, η)-close approximation to PCA.

Because a useful algorithm should output an eigenvector making ρ close to 1, our condition on n is quite weak. We compare with the sample complexity bound of the algorithm in [Chaudhuri, Sarwate, and Sinha2012]:

Theorem 8.

If (up to logarithmic factors)

n = Ω( (d / (ϵ (1 − ρ))) · (σ_1 / (σ_1 − σ_2)) ),

then the algorithm in [Chaudhuri, Sarwate, and Sinha2012] is a (ρ, η)-close approximation to PCA.

Our result replaces the data-dependent factor σ_1/(σ_1 − σ_2) by 1/(σ_1 − σ_2); the relationship between the two bounds therefore depends heavily on the data. Thus, as a special case of top-k subspace closeness, our bound for the top-1 subspace is comparable to that of Chaudhuri, Sarwate, and Sinha (2012).

Low rank approximation

Here we compare the Wishart mechanism with the privacy-preserving rank-k approximation algorithms proposed in [Kapralov and Talwar2013, Hardt and Price2014]. PCA can be seen as a special case of low-rank approximation problems. Kapralov and Talwar (2013) combined the exponential and Laplace mechanisms to design a low-rank approximation algorithm for a symmetric matrix, providing a strict guarantee on convergence. However, the implementation of their algorithm contains many approximation techniques, and its running time is a high-order polynomial in d, whereas the cost of our algorithm is dominated by generating the noise matrix and one eigendecomposition. Hardt and Price (2014) proposed an efficient meta-algorithm which can be applied to (ϵ,δ)-differentially private PCA; additionally, they provided an (ϵ,0)-differentially private version.

We need to point out that the definition of adjacent matrices in privacy-preserving low-rank approximation is different from ours (our definition is the same as in [Dwork et al.2014, Chaudhuri, Sarwate, and Sinha2012]). In the definition of [Kapralov and Talwar2013, Hardt and Price2014], two matrices A and A′ are called adjacent if ‖A − A′‖_2 ≤ 1, while we restrict the difference to the specific form x_i x_i^T − x′_i x′_i^T. In fact, we make a stronger assumption, so we are dealing with a case of lower sensitivity. This difference affects the lower bound provided in [Kapralov and Talwar2013].

For consistency of the comparison, we remove the 1/n factor in Algorithm 2, which means we use A = X X^T for PCA instead of (1/n) X X^T. This convention is also used by Dwork et al. (2014).

Applying Lemma 1 of [Achlioptas and McSherry2001], we immediately have the following theorem:

Theorem 9.

Suppose the original matrix is A, and let Â_k be the rank-k approximation of the matrix Â output by the Wishart mechanism. Denote the k-th largest eigenvalue of A by λ_k(A). Then

‖A − Â_k‖_2 ≤ λ_{k+1}(A) + 2‖W‖_2 = λ_{k+1}(A) + O(d/ϵ).

Kapralov and Talwar (2013) and Hardt and Price (2014) provided bounds for the same scenario whose additive error terms grow with k; whenever those k-dependent terms exceed O(d/ϵ), our algorithm works better. Moreover, our mechanism has a better bound than that of Hardt and Price (2014) while both algorithms are computationally efficient. Kapralov and Talwar (2013) also established a lower bound according to their definition of adjacent matrices; if their definition is replaced with ours, the lower bound becomes Ω(d/ϵ). The details are given in the supplementary material. So our mechanism is near-optimal.

5 Concluding Remarks

We have studied the problem of privately publishing a symmetric matrix and provided an approach for choosing between Laplace and Wishart noise. In the scenario of PCA, our Wishart mechanism adds less noise than the Laplace mechanism, which leads to a better utility guarantee. Compared with the privacy-preserving PCA algorithm in [Chaudhuri, Sarwate, and Sinha2012], our mechanism has a reliable rank-k utility guarantee while theirs covers only rank-1; for rank-1 approximation we have comparable performance in terms of sample complexity. Compared with the low-rank approximation algorithm in [Kapralov and Talwar2013], the bound of our mechanism does not depend on k, and our method is computationally more tractable. Compared with the tractable algorithm in [Hardt and Price2014], our utility bound is better.

Since input perturbation publishes the matrix used for PCA rather than only the final subspace, any other procedure can take the noisy matrix as input; thus, our approach has more flexibility. Moreover, while other entry-wise input perturbation techniques can make the covariance matrix lose positive semi-definiteness, in our case the noisy covariance matrix still preserves this property.

Acknowledgments

We thank Luo Luo for meaningful technical discussions. We also thank Yujun Li and Tianfan Fu for their support in the early stage of this work. This work is supported by the National Natural Science Foundation of China (No. 61572017) and the Natural Science Foundation of Shanghai City (No. 15ZR1424200).

References

  • [Achlioptas and McSherry2001] Achlioptas, D., and McSherry, F. 2001. Fast computation of low rank matrix approximations. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, 611–618. ACM.
  • [Blum et al.2005] Blum, A.; Dwork, C.; McSherry, F.; and Nissim, K. 2005. Practical privacy: the SuLQ framework. In Proceedings of the twenty-fourth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, 128–138. ACM.
  • [Bojarski et al.2014] Bojarski, M.; Choromanska, A.; Choromanski, K.; and LeCun, Y. 2014. Differentially-and non-differentially-private random decision trees. arXiv preprint arXiv:1410.6973.
  • [Chaudhuri and Monteleoni2009] Chaudhuri, K., and Monteleoni, C. 2009. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems, 289–296.
  • [Chaudhuri, Monteleoni, and Sarwate2011] Chaudhuri, K.; Monteleoni, C.; and Sarwate, A. D. 2011. Differentially private empirical risk minimization. The Journal of Machine Learning Research 12:1069–1109.
  • [Chaudhuri, Sarwate, and Sinha2012] Chaudhuri, K.; Sarwate, A.; and Sinha, K. 2012. Near-optimal differentially private principal components. In Advances in Neural Information Processing Systems, 989–997.
  • [Davis1963] Davis, C. 1963. The rotation of eigenvectors by a perturbation. Journal of Mathematical Analysis and Applications 6(2):159–173.
  • [Dwork et al.2006] Dwork, C.; McSherry, F.; Nissim, K.; and Smith, A. 2006. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography. Springer. 265–284.
  • [Dwork et al.2014] Dwork, C.; Talwar, K.; Thakurta, A.; and Zhang, L. 2014. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, 11–20. ACM.
  • [Gupta and Nagar2000] Gupta, A. K., and Nagar, D. K. 2000. Matrix Variate Distributions. Chapman & Hall/CRC.
  • [Hardt and Price2014] Hardt, M., and Price, E. 2014. The noisy power method: A meta algorithm with applications. In Advances in Neural Information Processing Systems, 2861–2869.
  • [Hardt and Roth2012] Hardt, M., and Roth, A. 2012. Beating randomized response on incoherent matrices. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, 1255–1268. ACM.
  • [Hardt and Roth2013] Hardt, M., and Roth, A. 2013. Beyond worst-case analysis in private singular vector computation. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, 331–340. ACM.
  • [Kapralov and Talwar2013] Kapralov, M., and Talwar, K. 2013. On differentially private low rank approximation. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 1395–1414. SIAM.
  • [McSherry and Talwar2007] McSherry, F., and Talwar, K. 2007. Mechanism design via differential privacy. In Foundations of Computer Science, 2007. FOCS’07. 48th Annual IEEE Symposium on, 94–103. IEEE.
  • [Tao2012] Tao, T. 2012. Topics in Random Matrix Theory, volume 132. American Mathematical Society.
  • [Wang, Wu, and Wu2013] Wang, Y.; Wu, X.; and Wu, L. 2013. Differential privacy preserving spectral graph analysis. In Advances in Knowledge Discovery and Data Mining. Springer. 329–340.
  • [Zhu2012] Zhu, S. 2012. A short note on the tail bound of wishart distribution. arXiv preprint arXiv:1212.5860.

Appendix A Proof of privacy guarantee

The basic settings are the same as in Section 4.

Proof of Theorem 3

In order to prove Theorem 3, we first give the following lemma.

Lemma 2.

For the mechanism f(X) = (1/n) X X^T, the ℓ1 sensitivity satisfies

d/n ≤ Δ1(f) ≤ 2d/n.

Proof.

Suppose X = [x_1, …, x_n] and X′ = [x_1, …, x_{n−1}, x′_n] differ only in the last data point. Then the ℓ1 sensitivity of f can be converted to the following optimization problem:

Δ1(f) = max over ‖x_n‖_2 ≤ 1 and ‖x′_n‖_2 ≤ 1 of (1/n) ‖x_n x_n^T − x′_n x′_n^T‖_1.

Setting x_n = (1/√d, …, 1/√d)^T and x′_n = 0, we have ‖x_n x_n^T‖_1 = d, which gives the lower bound Δ1(f) ≥ d/n. Then applying the triangle inequality together with ‖x x^T‖_1 = ‖x‖_1² ≤ d‖x‖_2² ≤ d, we have the upper bound:

Δ1(f) ≤ (1/n) (‖x_n x_n^T‖_1 + ‖x′_n x′_n^T‖_1) ≤ 2d/n. ∎

Now applying Lemma 2 to Theorem 1 immediately yields the privacy guarantee for the Laplace mechanism with noise scale 2d/(nϵ).

Appendix B Proof of utility guarantee

Proof of Theorem 6

Proof.

We use the following two lemmas.

Lemma 3 (Davis-Kahan theorem [Davis1963]).

Let U_k and Û_k consist of the top-k eigenvectors of the symmetric matrices A and Â = A + W, respectively. Denote P = U_k U_k^T and P̂ = Û_k Û_k^T. If δ = σ_k(A) − σ_{k+1}(Â) > 0, then

‖P̂ − P‖_2 ≤ ‖W‖_2 / δ.

Lemma 4 (Weyl's inequality).

Suppose A, Â and W are Hermitian matrices such that Â = A + W, and let the i-th largest eigenvalues of A, Â and W be σ_i, σ̂_i and w_i, respectively. For i = 1, …, d, we have

σ_i + w_d ≤ σ̂_i ≤ σ_i + w_1.

In our case, A and W are both symmetric positive semidefinite (the latter by the property of the Wishart distribution), so the eigenvalues equal the singular values. We use Lemma 4 with i = k + 1 and w_1 = ‖W‖_2 to obtain

σ̂_{k+1} ≤ σ_{k+1} + ‖W‖_2.

Applying Lemma 3 with δ = σ_k − σ̂_{k+1} ≥ σ_k − σ_{k+1} − ‖W‖_2 leads to

‖Û_k Û_k^T − U_k U_k^T‖_2 ≤ ‖W‖_2 / (σ_k − σ_{k+1} − ‖W‖_2).

Under the assumption σ_k − σ_{k+1} ≥ 2‖W‖_2, we finally have

‖Û_k Û_k^T − U_k U_k^T‖_2 ≤ 2‖W‖_2 / (σ_k − σ_{k+1}).

Using the property

‖W‖_2 = O(d/(nϵ)) with high probability,

we finish the proof. ∎

Proof of Theorem 7

We are going to find the condition on the sample size n that guarantees a (ρ, η)-close approximation.

Proof.

Set k = 1 in Theorem 6. Then

‖û_1 û_1^T − u_1 u_1^T‖_2 ≤ 2‖W‖_2 / (σ_1 − σ_2).

The condition of Theorem 6 requires the last term to have an upper bound of 1, which implies n = Ω(d/(ϵ(σ_1 − σ_2))). Let θ = d + log(1/η) in Lemma 1; then with probability 1 − η,

‖W‖_2 = O( (d + log(1/η)) / (nϵ) ).

Since ‖û_1 û_1^T − u_1 u_1^T‖_2 = √(1 − ⟨û_1, u_1⟩²), the requirement |⟨û_1, u_1⟩| ≥ ρ is implied by bounding the last term by √(1 − ρ²) ≥ √(1 − ρ). Under the condition

n = Ω( (d + log(1/η)) / (ϵ (1 − ρ) (σ_1 − σ_2)) ),

this bound holds, which yields the theorem. ∎

Appendix C Lower bound for low rank approximation

We mainly follow the construction of Kapralov and Talwar (2013) and make a slight modification to fit our definition of adjacent matrices.

Lemma 5.

For each k ≤ d/2 there exists a family G of rank-k projection matrices with log |G| = Ω(k(d − k)) such that ‖P − Q‖_F = Ω(√k) for any two distinct P, Q ∈ G.

Theorem 10.

Suppose the original matrix is A, and let Â_k be the rank-k approximation of the output of any (ϵ,0)-differentially private mechanism. Denote the k-th largest eigenvalue of A by λ_k(A). Then there exist inputs on which, with constant probability,

‖A − Â_k‖_2 ≥ λ_{k+1}(A) + Ω(d/ϵ).

Proof.

Take a family G as in Lemma 5 and construct a series of matrices A_P = β P for P ∈ G, where the scaling β is chosen on the order of d/ϵ. Then, for distinct P, Q ∈ G, the matrices A_P and A_Q are far apart, so any output that is a good rank-k approximation for A_P cannot be a good rank-k approximation for A_Q. Let B_P denote the set of outputs that approximate A_P well; the sets B_P are pairwise disjoint. By Markov's inequality, a mechanism M achieving the claimed accuracy in expectation must put at least half of its probability mass on B_P when the input is A_P.

Here is the main difference between our definition and that of Kapralov and Talwar (2013). They consider the distance (in adjacency steps) from A_P to A_Q to be at most ‖A_P − A_Q‖_2, since their adjacent matrices may differ by an arbitrary matrix of spectral norm 1. In our framework, the input is a dataset of data points, and changing A_P to A_Q means replacing a number of data points with brand-new ones, so we consider the distance to be at most the number of replaced points, which is larger. This is what weakens the lower bound when it is translated into our setting.

Meanwhile, to satisfy the privacy guarantee, for any P, Q ∈ G,

Pr[M(A_Q) ∈ B_P] ≥ e^{−ϵ · dist(A_P, A_Q)} · Pr[M(A_P) ∈ B_P] ≥ (1/2) e^{−ϵ · dist(A_P, A_Q)}.

Summing these lower bounds over the 2^{Ω(k(d−k))} pairwise disjoint sets B_P gives a total probability mass exceeding 1 unless β = Ω(d/ϵ), which implies the stated bound and completes our proof. ∎