A note on privacy preserving iteratively reweighted least squares

05/24/2016 ∙ by Mijung Park, et al.

Iteratively reweighted least squares (IRLS) is a widely used method in machine learning for estimating the parameters of generalised linear models. In particular, IRLS for L1 minimisation under the linear model provides a closed-form solution in each step: a simple multiplication between the inverse of the weighted second moment matrix and the weighted first moment vector. When dealing with privacy-sensitive data, however, developing a privacy preserving IRLS algorithm faces two challenges. First, due to the inversion of the second moment matrix, the usual sensitivity analysis in differential privacy, which considers the perturbation of a single datapoint, becomes complicated and often requires unrealistic assumptions. Second, due to its iterative nature, a significant cumulative privacy loss occurs; yet adding a high level of noise to compensate for this loss prevents accurate estimates. Here, we develop a practical algorithm that overcomes these challenges and outputs privatised and accurate IRLS solutions. In our method, we analyse the sensitivity of each moment separately and treat the matrix inversion and multiplication as a post-processing step, which simplifies the sensitivity analysis. Furthermore, we apply the concentrated differential privacy formalism, a relaxed version of differential privacy, which requires adding significantly less noise for the same level of privacy guarantee, compared to the conventional and advanced compositions of differentially private mechanisms.




1 Introduction

Differential privacy (DP) algorithms provide strong privacy guarantees, typically by perturbing some statistics of a given dataset that appear in the outputs of an algorithm [1]. The amount of additive noise is set to mask any difference in the probability of any outcome of the algorithm caused by adding a single individual’s datapoint to, or removing it from, the data. So, in order to develop a DP algorithm, one first needs to analyse the maximum difference in the algorithm’s output induced by a single datapoint, often called the sensitivity, to set the level of additive noise.

In this note, we are interested in developing a privacy preserving iteratively reweighted least squares (IRLS) method. In the compressed sensing literature [2], IRLS is used for solving the L1 minimisation problem, for which a closed-form update of the parameters is available in each step. This IRLS solution in each step is a simple multiplication between the inverse of the weighted second moment matrix and the weighted first moment vector. Due to the inverse of the second moment matrix, analysing the sensitivity becomes challenging. Previous work [3] assumed that each feature of each datapoint is drawn from a standard normal distribution, and analysed the sensitivity of the inverse of the second moment matrix. Unfortunately, the assumption that the features are independent is often not realistic.

Another challenge in developing a privacy preserving IRLS method comes from the iterative nature of the algorithm. The conventional DP composition theorem (Theorem 3.16 in [1]) states that multiple iterations of an ε-DP algorithm suffer linearly degrading privacy, yielding Jε-DP after J iterations. A more advanced composition theorem (Theorem 3.20 in [1]) yields (ε√(2J log(1/δ)) + Jε(e^ε − 1), δ)-DP after J iterations. The new variable δ (stating the mechanism’s failure probability) needs to be set to a very small value, which keeps the cumulative privacy loss relatively high. To compensate for this privacy loss, one needs to add a significant amount of noise to the IRLS solution to avoid revealing any individual information from the output of the algorithm.

In this note, we tackle these challenges as follows. (1) We analyse the sensitivity of the weighted second moment matrix and the weighted first moment vector separately, and perturb each moment by adding noise consistent with its own sensitivity. We then multiply the inverse of the perturbed second moment matrix by the perturbed first moment vector. This inversion and multiplication can be viewed as a post-processing step, which does not alter the privacy level. Since we perturb each moment separately, this method does not require any restrictive assumptions on the data. In addition, the noise variance naturally scales with the amount of data. (2) We apply the

concentrated differential privacy formalism, a more relaxed version of differential privacy, to obtain more accurate estimates for the same cumulative privacy loss, compared to DP and its (ε, δ)-relaxation. In the following, we start by describing our privacy preserving IRLS algorithm.

2 Privacy preserving IRLS

Given a dataset D = {(x_i, y_i)}_{i=1}^N of input-output pairs, where we assume x_i ∈ R^d and y_i ∈ R, the iteratively reweighted least squares solution has the form

    θ^(t) = (X^⊤ S X)^{-1} X^⊤ S y,    (1)

where X is a design matrix in which the ith row is the transposed ith input x_i^⊤ (of length d), and y = [y_1, …, y_N]^⊤ is a column vector of outputs. We denote the weighted moments M = X^⊤ S X / N and m = X^⊤ S y / N. Here S is a diagonal matrix with diagonal entries s_i = |y_i − x_i^⊤ θ^(t−1)|^(p−2). Here we set p = 1 and compute L1-norm constrained least squares. To avoid dividing by 0, we set

    s_i = 1 / max(δ, |y_i − x_i^⊤ θ^(t−1)|),    (2)

where x_i^⊤ is the ith row of X. The threshold δ sets the sparsity (number of non-zero values) of the IRLS solution.
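As a concrete sketch, this (non-private) IRLS update can be written in a few lines of NumPy; the function name and the default values of the iteration count and δ are our own choices, not the note's:

```python
import numpy as np

def irls_l1(X, y, n_iters=20, delta=1e-4):
    """Non-private IRLS for L1-norm linear regression.

    X: (N, d) design matrix; y: (N,) output vector.
    delta is the threshold that avoids division by zero in the weights.
    """
    N, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iters):
        # s_i = 1 / max(delta, |y_i - x_i^T theta|)
        s = 1.0 / np.maximum(delta, np.abs(y - X @ theta))
        # theta = (X^T S X)^{-1} X^T S y
        XtSX = X.T @ (s[:, None] * X)
        XtSy = X.T @ (s * y)
        theta = np.linalg.solve(XtSX, XtSy)
    return theta
```

Solving the linear system directly (rather than forming the inverse explicitly) is the standard numerically stable choice; the private variants described next perturb the two moments before this solve.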

We will perturb each of the statistics m and M by an appropriate amount of noise, such that each statistic is differentially private in each iteration.

ε-differentially private moment m by the Laplace mechanism.

For perturbing m, we use the Laplace mechanism, which requires the following L1-sensitivity. Assuming the data are normalised such that ||x_i||_2 ≤ 1 and |y_i| ≤ 1 (as in our experiments), and noting that s_i ≤ 1/δ by Eq.(2), for any neighbouring datasets D and D′ differing in a single datapoint,

    Δ_1(m) = max_{D, D′} ||m(D) − m(D′)||_1 ≤ 2√d / (Nδ).    (3)

Hence, the following Laplace mechanism produces an ε-differentially private moment m̃:

    m̃ = m + (Y_1, …, Y_d)^⊤,    (4)

where Y_j ∼ Lap(Δ_1(m)/ε), drawn i.i.d. for j = 1, …, d.
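A minimal NumPy sketch of this Laplace perturbation, under the normalisation assumptions above (||x_i||_2 ≤ 1, |y_i| ≤ 1) and the 2√d/(Nδ) sensitivity bound; the function name and signature are our own:

```python
import numpy as np

def laplace_private_first_moment(X, y, s, eps, delta=1e-4):
    """Release m = X^T S y / N with eps-DP via the Laplace mechanism.

    Assumes inputs are normalised so that ||x_i||_2 <= 1 and |y_i| <= 1,
    hence each weight s_i <= 1/delta and the L1 sensitivity of m is
    bounded by 2*sqrt(d)/(N*delta) (a sketch; the constant is our bound).
    """
    N, d = X.shape
    m = X.T @ (s * y) / N
    sensitivity = 2.0 * np.sqrt(d) / (N * delta)
    # Add i.i.d. Laplace noise calibrated to the L1 sensitivity.
    return m + np.random.laplace(scale=sensitivity / eps, size=d)
```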

(ε, δ′)-differentially private moment m by the Gaussian mechanism.

One could instead perturb the first moment using the Gaussian mechanism. To use the Gaussian mechanism, one needs to analyse the L2-sensitivity, which follows straightforwardly from the same bound as Eq.(3):

    Δ_2(m) = max_{D, D′} ||m(D) − m(D′)||_2 ≤ 2 / (Nδ),

so that

    m̃ = m + (Y_1, …, Y_d)^⊤,    (5)

where Y_j ∼ N(0, σ²) with σ ≥ c Δ_2(m)/ε and c² > 2 log(1.25/δ′), for ε ∈ (0, 1).
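The Gaussian variant can be sketched in the same way; the calibration of σ below follows the classical Gaussian mechanism (Theorem A.1 in [1]), and the function name and the 2/(Nδ) sensitivity bound are our own:

```python
import numpy as np

def gaussian_private_first_moment(X, y, s, eps, dp_delta=1e-5, delta=1e-4):
    """Release m = X^T S y / N with (eps, dp_delta)-DP via the Gaussian mechanism.

    Under the same normalisation assumptions as the Laplace version,
    the L2 sensitivity of m is bounded by 2/(N*delta); sigma follows
    the classical Gaussian mechanism calibration for eps in (0, 1).
    """
    N, d = X.shape
    m = X.T @ (s * y) / N
    l2_sensitivity = 2.0 / (N * delta)
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / dp_delta)) / eps
    return m + np.random.normal(scale=sigma, size=d)
```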

ε-differentially private moment M.

We perturb M by adding Wishart noise following [4], which provides strong privacy guarantees and, as illustrated in [4], significantly higher utility than other methods (e.g., [5, 6, 7]) when perturbing positive definite matrices.

To draw Wishart noise, we first draw d + 1 i.i.d. Gaussian random vectors

    z_i ∼ N(0, C), for i = 1, …, d + 1,    (6)

where the scale matrix C is set, following [4], in proportion to the per-datapoint sensitivity of M, and construct the matrix

    Z = Σ_{i=1}^{d+1} z_i z_i^⊤,  with  M̃ = M + Z;    (7)

then M̃ is an ε-differentially private second moment matrix. The proof follows [4]. The matrix Z is a sample from the Wishart distribution W_d(d + 1, C). The probability ratio between the noised-up version M̃ given a dataset D (where M is the exact second moment matrix given D) and given a neighbouring dataset D′ (where M′ is the exact second moment matrix given D′) is bounded as

    p(M̃ | D) / p(M̃ | D′) = W_d(M̃ − M; d + 1, C) / W_d(M̃ − M′; d + 1, C) ≤ exp(ε).    (8)
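A NumPy sketch of the Wishart perturbation follows. The concrete scale C = (3/(2Nδε)) I_d is our assumption, adapting the scale used in [4] (for matrices with per-datapoint norm at most 1) to the weighted moment, whose per-datapoint contribution is bounded by 1/(Nδ); the note's exact choice is not recoverable here:

```python
import numpy as np

def wishart_private_second_moment(X, s, eps, delta=1e-4):
    """Release M = X^T S X / N with eps-DP by adding Wishart noise,
    following the construction in [4] (sketch; the scale below is an
    assumption adapted to the weighted second moment matrix)."""
    N, d = X.shape
    M = X.T @ (s[:, None] * X) / N
    # Draw d+1 Gaussian vectors z_i ~ N(0, c * I_d) and form
    # Z = sum_i z_i z_i^T, i.e. a sample from W_d(d+1, c * I_d).
    c = 3.0 / (2.0 * N * delta * eps)
    Z_vectors = np.random.normal(scale=np.sqrt(c), size=(d + 1, d))
    Z = Z_vectors.T @ Z_vectors
    return M + Z
```

Because Z is positive semi-definite by construction, M̃ stays positive definite, which keeps the post-processing inversion well defined.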
3 Concentrated differential privacy for IRLS

Here we adopt a relaxed version of DP, the so-called concentrated differential privacy (CDP), in order to significantly lower the amount of noise added to the moments without increasing the cumulative privacy loss over several iterations.

According to Theorem 3.5 in [8], any ε-DP algorithm is (ε(e^ε − 1)/2, ε)-CDP. Furthermore, Theorem 3.4 states that the J-fold composition of (μ, τ)-CDP mechanisms guarantees (Jμ, τ√J)-CDP. Suppose we perturb some key statistic in each IRLS iteration using the Laplace mechanism, and denote the maximum difference in the statistic given neighbouring datasets x and y by Δ. The conventional composition theorem says that one should add Lap(JΔ/ε_tot) noise in each iteration to ensure ε_tot-DP after J iterations. Now suppose we instead perturb the key statistic in each iteration by adding Laplace noise drawn from Lap(Δ/ε), which, according to Theorem 3.5 in [8], gives us an (ε(e^ε − 1)/2, ε)-CDP solution. According to Theorem 3.4 in [8], after J iterations, we obtain a (Jε(e^ε − 1)/2, ε√J)-CDP solution. What we want to ensure is that the expected privacy loss equals our privacy budget ε_tot, i.e., Jε(e^ε − 1)/2 = ε_tot. Using the Taylor expansion e^ε − 1 = ε + ε²/2! + ε³/3! + ⋯, we can rewrite the left-hand side as J(ε²/2 + ε³/4 + ⋯), which we can lower bound by ignoring the higher-order terms: Jε²/2 ≤ ε_tot. Hence, the largest ε should be less than or equal to √(2ε_tot/J).

This says that, in each iteration, the key statistic should be perturbed by adding Laplace noise drawn from Lap(Δ√(J/(2ε_tot))), in order to obtain an (ε_tot, √(2ε_tot))-CDP solution after J iterations. In the IRLS algorithm, we have two statistics to perturb in each iteration. Suppose we perturb each statistic to ensure ε′-DP. Then, we can modify the result above by replacing J with 2J for the IRLS algorithm. Hence, each perturbation should result in an ε′-DP statistic, where ε′ = √(ε_tot/J). This gives us the (ε_tot, √(2ε_tot))-CDP IRLS algorithm below.
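This budget calibration is easy to mechanise. The helper below (our own naming) returns the per-perturbation ε implied by a total budget ε_tot, J iterations, and a given number of perturbed statistics per iteration:

```python
import numpy as np

def per_iteration_eps(eps_tot, J, n_stats=2):
    """Per-perturbation epsilon such that n_stats * J compositions of
    (eps(e^eps - 1)/2, eps)-CDP mechanisms keep the expected privacy
    loss within eps_tot, using the lower bound (n_stats * J) * eps^2 / 2."""
    return np.sqrt(2.0 * eps_tot / (n_stats * J))
```

For example, with ε_tot = 1 and J = 100, each of the 2J perturbations may use ε = 0.1, whereas the conventional composition theorem would allow only ε_tot/(2J) = 0.005 per perturbation.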

Require:  Dataset D = {(x_i, y_i)}_{i=1}^N, total privacy budget ε_tot, number of iterations J
Ensure:  (ε_tot, √(2ε_tot))-CDP least squares solution after J iterations
  (1) Compute m and add either Laplace or Gaussian noise by Eq.(4) or Eq.(5)
  (2) Compute M and add Wishart noise by Eq.(7)
  (3) Compute the CDP least squares solution by θ̃^(t) = M̃^{-1} m̃.
Algorithm 1 (ε_tot, √(2ε_tot))-CDP IRLS algorithm via moment perturbation
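Putting the pieces together, a compact NumPy sketch of the full loop follows. The Laplace scale uses the 2√d/(Nδ) sensitivity bound sketched earlier, and the Wishart scale 3/(2Nδε′) adapts the construction of [4] to the weighted moment; both constants are our assumptions rather than the note's exact choices:

```python
import numpy as np

def cdp_irls(X, y, eps_tot, J=10, delta=1e-4):
    """Sketch of moment-perturbed CDP IRLS with total budget eps_tot over
    J iterations (2J perturbations, each with eps = sqrt(eps_tot / J))."""
    N, d = X.shape
    eps = np.sqrt(eps_tot / J)  # per-perturbation budget
    theta = np.zeros(d)
    for _ in range(J):
        s = 1.0 / np.maximum(delta, np.abs(y - X @ theta))
        # (1) perturbed first moment via the Laplace mechanism
        m = X.T @ (s * y) / N
        m_tilde = m + np.random.laplace(
            scale=2.0 * np.sqrt(d) / (N * delta * eps), size=d)
        # (2) perturbed second moment via Wishart noise
        M = X.T @ (s[:, None] * X) / N
        Z = np.random.normal(
            scale=np.sqrt(3.0 / (2.0 * N * delta * eps)), size=(d + 1, d))
        M_tilde = M + Z.T @ Z
        # (3) post-processing: invert and multiply
        theta = np.linalg.solve(M_tilde, m_tilde)
    return theta
```

Since the Wishart sample Z.T @ Z is positive semi-definite, M_tilde remains invertible almost surely, so the solve in step (3) is well defined.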

4 Experiments

Our simulated dataset consists of N datapoints, each with d-dimensional covariates, generated using i.i.d. draws and then normalised such that the largest squared L2 norm is 1. We generated the true parameter θ at random, and each observation as y_i = x_i^⊤ θ + e_i, where e_i is i.i.d. additive noise. We also normalised y such that the largest squared value is 1.
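A sketch of this data-generating process is below; the sizes N and d, the Gaussian draws, and the noise level are our assumptions, since the note's exact values are not recoverable here:

```python
import numpy as np

# Hypothetical sizes; the note's exact N and d are not recoverable here.
N, d = 1000, 10
rng = np.random.default_rng(0)

X = rng.normal(size=(N, d))                    # i.i.d. covariate draws
X /= np.sqrt((X ** 2).sum(axis=1).max())       # largest squared L2 norm -> 1
theta_true = rng.normal(size=d)                # random true parameter
y = X @ theta_true + 0.1 * rng.normal(size=N)  # noise level is an assumption
y /= np.sqrt((y ** 2).max())                   # largest squared value -> 1
```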

Figure 1: We tested (ε_tot, √(2ε_tot))-CDP-IRLS (gau: Gaussian mechanism for mean perturbation; lap: Laplace mechanism for mean perturbation), ε_tot-DP-IRLS (using the conventional composition theorem), and (ε_tot, δ)-DP-IRLS (using the advanced composition theorem), with varying N; for each setting we generated independent datasets. For each IRLS solution, we computed the log-likelihood of held-out test data (a fixed fraction of the training data), then divided by the number of test points to show the log-likelihood per test point. CDP-IRLS requires significantly less data than DP-IRLS for the same level of expected privacy.


This work is supported by Qualcomm.


  • [1] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9:211–407, August 2014.
  • [2] R. Chartrand and Wotao Yin. Iteratively reweighted algorithms for compressive sensing. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3869–3872, March 2008.
  • [3] Or Sheffet. Private approximations of the 2nd-moment matrix using existing techniques in linear regression. CoRR, abs/1507.00056, 2015.
  • [4] Hafiz Imtiaz and Anand D. Sarwate. Symmetric matrix perturbation for differentially-private principal component analysis. In ICASSP, 2016.
  • [5] Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, and Li Zhang. Analyze gauss: optimal bounds for privacy-preserving principal component analysis. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 11–20, 2014.
  • [6] Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2861–2869. Curran Associates, Inc., 2014.
  • [7] Kamalika Chaudhuri, Anand Sarwate, and Kaushik Sinha. Near-optimal differentially private principal components. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 989–997. Curran Associates, Inc., 2012.
  • [8] C. Dwork and G. N. Rothblum. Concentrated Differential Privacy. ArXiv e-prints, March 2016.