1 Introduction
Differential privacy (DP) algorithms provide strong privacy guarantees, typically by perturbing those statistics of a given dataset that appear in the outputs of an algorithm [1]. The amount of added noise is set to compensate for any change in the probability of any outcome of the algorithm caused by adding a single individual's datapoint to, or removing it from, the data. Hence, to develop a DP algorithm, one first needs to analyse the maximum change that a single datapoint can induce, often called the sensitivity, in order to set the level of additive noise.

In this note, we are interested in developing a privacy preserving iteratively reweighted least squares (IRLS) method. In the compressed sensing literature [2], IRLS is used for solving the L1 minimisation problem, where a closed-form update of the parameters is available in each step. This IRLS solution in each step is a simple multiplication between the inverse of the weighted second moment matrix and the weighted first moment vector. Due to the inverse of the second moment matrix, analysing the sensitivity becomes challenging. Previous work [3] assumes that each feature of each datapoint is drawn from a standard normal distribution, and analyses the sensitivity of the inverse of the second moment matrix. Unfortunately, the assumption that the features are independent is often not realistic.
Another challenge in developing a privacy preserving IRLS method comes from the iterative nature of the IRLS algorithm. The conventional DP composition theorem (Theorem 3.16 in [1]) states that repeating an $\epsilon$-DP algorithm degrades privacy linearly, which yields $J\epsilon$-DP after $J$ iterations. A more advanced composition theorem (Theorem 3.20 in [1]) yields $(\epsilon', \delta)$-DP. The new variable $\delta$ (stating the mechanism's failure probability) needs to be set to a very small value, which keeps the cumulative privacy loss still relatively high. To compensate for the privacy loss, one needs to add a significant amount of noise to the IRLS solution to avoid revealing any individual information from the output of the algorithm.
In this note, we tackle these challenges as follows. (1) We analyse the sensitivity of the weighted second moment matrix and the weighted first moment vector separately, and perturb each moment by adding noise consistent with its own sensitivity. We then multiply the inverse of the perturbed second moment matrix by the perturbed first moment vector. This inversion and multiplication can be viewed as a post-processing step, which does not alter the privacy level. Since we perturb each moment separately, this method does not require any restrictive assumptions on the data. In addition, the noise variance naturally scales with the amount of data. (2) We apply the concentrated differential privacy formalism, a more relaxed version of differential privacy, to obtain more accurate estimates for the same cumulative privacy loss, compared to DP and its $(\epsilon, \delta)$-relaxation. In the following, we start by describing our privacy preserving IRLS algorithm.

2 Privacy preserving IRLS
Given a dataset $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^N$ consisting of input-output pairs, where we assume $\|\mathbf{x}_i\|_2 \leq 1$ and $|y_i| \leq 1$. The iteratively reweighted least squares solution has the form:

$$\boldsymbol{\theta} = (\mathbf{X}^\top \mathbf{S} \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{S} \mathbf{y}, \qquad (1)$$

where $\mathbf{X}$ is a design matrix in which the $i$th row is the transposed $i$th input (of length $d$), and $\mathbf{y}$ is a column vector of outputs. We denote $\mathbf{M} = \mathbf{X}^\top \mathbf{S} \mathbf{X}$ and $\mathbf{v} = \mathbf{X}^\top \mathbf{S} \mathbf{y}$. Here $\mathbf{S}$ is a diagonal matrix with diagonal $\mathbf{s} = [s_1, \ldots, s_N]$. Here we set $p = 1$ and compute $L_1$-norm constrained least squares. To avoid dividing by $0$, we set

$$s_i = \frac{1}{\max(\xi, |y_i - \boldsymbol{\theta}^\top \mathbf{x}_i|)}, \qquad (2)$$

where $\mathbf{x}_i$ is the $i$th row of $\mathbf{X}$. The floor $\xi$ sets the sparsity (number of non-zero values) of the IRLS solution.
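As a reference point, the non-private update in Eqs. (1)-(2) can be sketched in NumPy as follows. This is a minimal sketch: the residual-based weights with floor `xi` follow our reading of the weight definition above, and the data below are made up for illustration.

```python
import numpy as np

def irls_update(X, y, theta, xi=1e-3):
    """One IRLS step: theta = (X^T S X)^{-1} X^T S y with residual-based weights."""
    residuals = np.abs(y - X @ theta)
    s = 1.0 / np.maximum(xi, residuals)   # the floor xi avoids division by zero
    M = X.T @ (s[:, None] * X)            # weighted second moment matrix
    v = X.T @ (s * y)                     # weighted first moment vector
    return np.linalg.solve(M, v)

# Synthetic example: sparse true parameter, small observation noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
theta_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = X @ theta_true + 0.01 * rng.normal(size=200)

theta = np.zeros(5)
for _ in range(30):
    theta = irls_update(X, y, theta)
```

After a few dozen iterations the iterate approaches the true parameter; the private variants below replace `M` and `v` with their perturbed counterparts before the solve.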
We will perturb each of these statistics, $\mathbf{M}$ and $\mathbf{v}$, by certain amounts, such that each statistic is differentially private in each iteration.
Differentially private moment $\mathbf{v}$ by the Laplace mechanism.
For perturbing $\mathbf{v}$, we use the Laplace mechanism. To use the Laplace mechanism, we first need to analyse the following $L_1$-sensitivity:

$$\Delta_1 \mathbf{v} = \max_{|\mathcal{D} \,\triangle\, \mathcal{D}'| = 1} \|\mathbf{v}(\mathcal{D}) - \mathbf{v}(\mathcal{D}')\|_1 \leq \max_i s_i \, |y_i| \, \|\mathbf{x}_i\|_1 \leq \frac{\sqrt{d}}{\xi}, \qquad (3)$$

using $\|\mathbf{x}_i\|_1 \leq \sqrt{d}\,\|\mathbf{x}_i\|_2 \leq \sqrt{d}$, $|y_i| \leq 1$, and $s_i \leq 1/\xi$.
Hence, the following Laplace mechanism produces an $\epsilon$-differentially private moment $\mathbf{v}$:

$$\tilde{\mathbf{v}} = \mathbf{v} + (Y_1, \ldots, Y_d)^\top, \qquad (4)$$

where $Y_j \sim \mathrm{Lap}(\Delta_1 \mathbf{v} / \epsilon)$ i.i.d. for $j = 1, \ldots, d$.
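The Laplace perturbation of $\mathbf{v}$ can be sketched as follows; the sensitivity value passed in is a placeholder for the bound derived above.

```python
import numpy as np

def laplace_perturb(v, sensitivity, eps, rng):
    # Add i.i.d. Laplace noise with scale sensitivity/eps to each coordinate,
    # making the released vector eps-differentially private.
    return v + rng.laplace(scale=sensitivity / eps, size=v.shape)

rng = np.random.default_rng(1)
v = np.array([0.3, -1.2, 0.7])              # stand-in for X^T S y
v_tilde = laplace_perturb(v, sensitivity=0.5, eps=1.0, rng=rng)
```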
Differentially private moment $\mathbf{v}$ by the Gaussian mechanism.
One could instead perturb the first moment by using the Gaussian mechanism. To use the Gaussian mechanism, one needs to analyse the $L_2$-sensitivity, which follows straightforwardly from Eq. (3):

$$\Delta_2 \mathbf{v} = \max_{|\mathcal{D} \,\triangle\, \mathcal{D}'| = 1} \|\mathbf{v}(\mathcal{D}) - \mathbf{v}(\mathcal{D}')\|_2 \leq \frac{1}{\xi}, \qquad (5)$$

where the mechanism outputs $\tilde{\mathbf{v}} = \mathbf{v} + \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_d)$, where $\sigma \geq \sqrt{2 \ln(1.25/\delta)}\, \Delta_2 \mathbf{v} / \epsilon$ for $\epsilon \in (0, 1)$.
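A sketch of the Gaussian variant, using the standard calibration above; the sensitivity, $\epsilon$, and $\delta$ values are placeholders.

```python
import numpy as np

def gaussian_perturb(v, l2_sensitivity, eps, delta, rng):
    # Gaussian mechanism: sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / eps
    # yields (eps, delta)-DP for eps in (0, 1).
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return v + rng.normal(scale=sigma, size=v.shape)

rng = np.random.default_rng(2)
v = np.array([0.3, -1.2, 0.7])              # stand-in for X^T S y
v_tilde = gaussian_perturb(v, l2_sensitivity=0.5, eps=0.5, delta=1e-5, rng=rng)
```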
Differentially private moment $\mathbf{M}$.
We perturb $\mathbf{M}$ by adding Wishart noise following [4], which provides strong privacy guarantees and significantly higher utility than other methods (e.g., [5, 6, 7]), as illustrated in [4], when perturbing positive definite matrices.
To draw Wishart noise, we first draw $d+1$ Gaussian random vectors:

$$\mathbf{z}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{C}), \quad i = 1, \ldots, d+1, \quad \text{with } \mathbf{C} = \frac{\Delta \mathbf{M}}{2\epsilon} \mathbf{I}_d, \qquad (6)$$

and construct a matrix

$$\mathbf{Z} = [\mathbf{z}_1, \ldots, \mathbf{z}_{d+1}], \qquad \mathbf{W} = \mathbf{Z}\mathbf{Z}^\top, \qquad (7)$$

then $\tilde{\mathbf{M}} = \mathbf{M} + \mathbf{W}$ is an $\epsilon$-differentially private second moment matrix. The proof follows the paper [4]. The matrix $\mathbf{W}$ is a sample from a Wishart distribution $\mathcal{W}_d(d+1, \mathbf{C})$. The probability ratio between a noised-up version $\tilde{\mathbf{M}}$ given a dataset $\mathcal{D}$ (where $\mathbf{M}$ is the exact second moment matrix given $\mathcal{D}$) and given a neighbouring dataset $\mathcal{D}'$ (where $\mathbf{M}'$ is the exact second moment matrix given $\mathcal{D}'$) is given by

$$\frac{p(\tilde{\mathbf{M}} \mid \mathcal{D})}{p(\tilde{\mathbf{M}} \mid \mathcal{D}')} = \frac{\exp\left(-\tfrac{1}{2} \mathrm{tr}\left(\mathbf{C}^{-1} (\tilde{\mathbf{M}} - \mathbf{M})\right)\right)}{\exp\left(-\tfrac{1}{2} \mathrm{tr}\left(\mathbf{C}^{-1} (\tilde{\mathbf{M}} - \mathbf{M}')\right)\right)} \qquad (8)$$
$$= \exp\left(\tfrac{1}{2} \mathrm{tr}\left(\mathbf{C}^{-1} (\mathbf{M} - \mathbf{M}')\right)\right) \qquad (9)$$
$$\leq \exp\left(\frac{\epsilon}{\Delta \mathbf{M}} \left| \mathrm{tr}(\mathbf{M} - \mathbf{M}') \right| \right) \leq \exp(\epsilon), \qquad (10)$$

where $\Delta \mathbf{M} = \max_{|\mathcal{D} \,\triangle\, \mathcal{D}'| = 1} |\mathrm{tr}(\mathbf{M} - \mathbf{M}')| \leq \max_i s_i \|\mathbf{x}_i\|_2^2 \leq 1/\xi$.
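The Wishart perturbation of $\mathbf{M}$ can be sketched as follows; the covariance scale follows our reading of the construction above and should be treated as an assumption.

```python
import numpy as np

def wishart_perturb(M, sensitivity, eps, rng):
    """Add Wishart noise Z Z^T to the positive definite matrix M, where Z has
    d+1 Gaussian columns with covariance (sensitivity / (2 eps)) * I."""
    d = M.shape[0]
    std = np.sqrt(sensitivity / (2.0 * eps))
    Z = rng.normal(scale=std, size=(d, d + 1))
    return M + Z @ Z.T              # Z Z^T is a Wishart(d+1, C) sample

rng = np.random.default_rng(3)
M = np.eye(4)                       # stand-in for X^T S X
M_tilde = wishart_perturb(M, sensitivity=1.0, eps=1.0, rng=rng)
```

Since the noise $\mathbf{Z}\mathbf{Z}^\top$ is positive semi-definite, the perturbed matrix stays positive definite, so the downstream inversion in the IRLS update remains well defined.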
3 Concentrated differential privacy for IRLS
Here we adopt a relaxed version of DP, the so-called concentrated differential privacy (CDP), in order to significantly lower the amount of noise added to the moments without compromising the cumulative privacy loss over several iterations.
According to Theorem 3.5 in [8], any $\epsilon$-DP algorithm is $(\epsilon(e^\epsilon - 1)/2, \epsilon)$-CDP. Furthermore, Theorem 3.4 states that the $J$-fold composition of $(\mu, \tau)$-CDP mechanisms guarantees $(J\mu, \sqrt{J}\tau)$-CDP. Suppose we perturb some key statistic in each IRLS iteration using the Laplace mechanism, and denote the maximum difference in the statistic given neighbouring datasets $x$ and $y$ by $\Delta$. The conventional composition theorem says that we should add $\mathrm{Lap}(J\Delta/\epsilon_{tot})$ noise in each iteration to ensure $\epsilon_{tot}$-DP after $J$ iterations. Now suppose we instead perturb the key statistic in each iteration by adding Laplace noise drawn from $\mathrm{Lap}(\Delta/\epsilon)$, which, according to Theorem 3.5 in [8], gives us an $(\epsilon(e^\epsilon - 1)/2, \epsilon)$-CDP solution. According to Theorem 3.4 in [8], after $J$ iterations, we obtain a $(J\epsilon(e^\epsilon - 1)/2, \sqrt{J}\epsilon)$-CDP solution. What we want to ensure is that the expected privacy loss equals our privacy budget $\epsilon_{tot}$, i.e., $J\epsilon(e^\epsilon - 1)/2 = \epsilon_{tot}$. Using Taylor's expansion, we can rewrite the left-hand side as $J(\epsilon^2/2 + \epsilon^3/4 + \cdots)$, which we can lower bound by ignoring the infinite sum of higher-order terms: $J\epsilon^2/2 \leq \epsilon_{tot}$. Hence, the largest $\epsilon$ should be less than or equal to $\sqrt{2\epsilon_{tot}/J}$.
This says that, in each iteration, the key statistic should be perturbed by adding Laplace noise drawn from $\mathrm{Lap}(\Delta\sqrt{J/(2\epsilon_{tot})})$, in order to obtain an $(\epsilon_{tot}, \sqrt{2\epsilon_{tot}})$-CDP solution after $J$ iterations. In the IRLS algorithm, we have two statistics to perturb in each iteration. Suppose we perturb each statistic to ensure $\epsilon$-DP. Then, we can modify the result above by replacing $J$ with $2J$ for the IRLS algorithm. Hence, each perturbation should ensure $\epsilon$-DP with parameter $\epsilon = \sqrt{\epsilon_{tot}/J}$. This gives us the CDP IRLS algorithm below.
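The budget calculation can be checked numerically: with a total budget $\epsilon_{tot}$ and $J$ iterations releasing two perturbed statistics each (hence $2J$ releases), the CDP per-iteration budget is larger than under conventional composition, so less noise is needed per release. Function and variable names here are ours.

```python
import math

def per_iter_eps_linear(eps_tot, J):
    # Conventional composition over 2J releases: each gets eps_tot / (2J).
    return eps_tot / (2 * J)

def per_iter_eps_cdp(eps_tot, J):
    # CDP calculation with J replaced by 2J: eps = sqrt(eps_tot / J).
    return math.sqrt(eps_tot / J)

eps_tot, J = 1.0, 20
eps_lin = per_iter_eps_linear(eps_tot, J)   # 0.025
eps_cdp = per_iter_eps_cdp(eps_tot, J)      # about 0.224
```

Since the Laplace scale is inversely proportional to the per-iteration $\epsilon$, the CDP accounting here shrinks the per-release noise scale by roughly a factor of $\sqrt{J}$.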
4 Experiments
Our simulated dataset consists of datapoints with $d$-dimensional covariates, generated using i.i.d. draws and then normalised such that the largest squared $L_2$ norm is $1$. We generated the true parameter $\boldsymbol{\theta}$ randomly, and generated each observation as $y_i = \boldsymbol{\theta}^\top \mathbf{x}_i + e_i$ with additive noise $e_i$. We also normalised $\mathbf{y}$ such that the largest squared $L_2$-norm is $1$.
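The data generation can be sketched as follows; the sizes, distributions, and noise level are placeholders, since the exact experimental values are not recoverable here, and we interpret the output normalisation as scaling $\mathbf{y}$ to unit $L_2$ norm (an assumption).

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 1000, 10                                # placeholder sizes
X = rng.normal(size=(N, d))
X /= np.sqrt((X ** 2).sum(axis=1).max())       # largest squared row L2 norm is 1
theta_true = rng.normal(size=d)                # placeholder parameter draw
e = 0.1 * rng.normal(size=N)                   # placeholder noise level
y = X @ theta_true + e
y /= np.linalg.norm(y)                         # assumed: scale y to unit L2 norm
```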
Acknowledgements
This work is supported by Qualcomm.
References
 [1] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9:211–407, August 2014.
 [2] R. Chartrand and Wotao Yin. Iteratively reweighted algorithms for compressive sensing. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3869–3872, March 2008.
[3] Or Sheffet. Private approximations of the 2nd-moment matrix using existing techniques in linear regression. CoRR, abs/1507.00056, 2015.

[4] Hafiz Imtiaz and Anand D. Sarwate. Symmetric matrix perturbation for differentially-private principal component analysis. In ICASSP, 2016.
[5] Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, and Li Zhang. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 11–20, 2014.
[6] Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2861–2869. Curran Associates, Inc., 2014.
 [7] Kamalika Chaudhuri, Anand Sarwate, and Kaushik Sinha. Nearoptimal differentially private principal components. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 989–997. Curran Associates, Inc., 2012.
[8] C. Dwork and G. N. Rothblum. Concentrated differential privacy. arXiv e-prints, March 2016.