DPVIm: Differentially Private Variational Inference Improved

10/28/2022
by Joonas Jälkö, et al.

Differentially private (DP) release of multidimensional statistics typically considers an aggregate sensitivity, e.g. the vector norm of a high-dimensional vector. However, different dimensions of that vector might have widely different magnitudes, so DP perturbation affects the signal disproportionately across dimensions. We observe this problem in the gradient release of the DP-SGD algorithm when it is used for variational inference (VI), where it manifests as poor convergence and high variance in the outputs for certain variational parameters, and make the following contributions: (i) We mathematically isolate the cause of the difference in magnitudes between gradient parts corresponding to different variational parameters. Using this as prior knowledge, we establish a link between the gradients of the variational parameters and propose a simple yet efficient fix that yields a less noisy gradient estimator, which we call aligned gradients. This approach allows us to obtain the updates for the covariance parameter of a Gaussian posterior approximation without a privacy cost. We compare it to alternative approaches that scale the gradients using analytically derived preconditioning, e.g. natural gradients. (ii) We suggest using iterate averaging over the DP parameter traces recovered during training to reduce the DP-induced noise in parameter estimates at no additional cost in privacy. Finally, (iii) to accurately capture the additional uncertainty DP introduces to the model parameters, we infer the DP-induced noise from the parameter traces and include it in the learned posteriors to make them noise aware. We demonstrate the efficacy of our proposed improvements through various experiments on real data.
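
To make contribution (i) concrete, the sketch below illustrates one way the aligned-gradients idea can be realized for a mean-field Gaussian approximation q(z) = N(m, softplus(s)^2 I). It is a minimal illustration under the standard reparameterization z = m + softplus(s) * eta, not the authors' code; the function and parameter names (dp_aligned_vi_step, grad_data_term, clip_norm, noise_mult) are hypothetical, and the data-independent prior and entropy terms of the ELBO are omitted for brevity. The key point is that only the mean gradient is clipped and noised, while the scale gradient is reconstructed from data-independent quantities, which is why the covariance updates come at no extra privacy cost.

```python
import numpy as np

def softplus(x):
    # Plain softplus; adequate for a sketch.
    return np.log1p(np.exp(x))

def dp_aligned_vi_step(m, s, grad_data_term, minibatch, clip_norm, noise_mult, lr, rng):
    """One DP-SGD step for q(z) = N(m, softplus(s)^2 I), privatizing only the mean gradient.

    grad_data_term(z, x): per-example gradient (w.r.t. z) of the data part of the
    negative ELBO. Hypothetical interface, for illustration only.
    """
    d = m.shape[0]
    eta = rng.standard_normal(d)          # shared reparameterization draw
    z = m + softplus(s) * eta             # z = m + sigma * eta, so dz/dm = 1

    # Per-example gradients w.r.t. z (equivalently w.r.t. m, by the chain rule).
    per_example = np.stack([grad_data_term(z, x) for x in minibatch])

    # Standard DP-SGD treatment of the mean gradient: clip per-example norms,
    # sum, add Gaussian noise, and average.
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip_norm)
    noisy_grad_m = (clipped.sum(axis=0)
                    + noise_mult * clip_norm * rng.standard_normal(d)) / len(minibatch)

    # "Aligned" scale gradient: dL/ds = dL/dm * eta * softplus'(s). Since eta and
    # softplus'(s) = sigmoid(s) involve no data, reusing the privatized mean
    # gradient here is post-processing and costs no extra privacy budget.
    noisy_grad_s = noisy_grad_m * eta * (1.0 / (1.0 + np.exp(-s)))

    return m - lr * noisy_grad_m, s - lr * noisy_grad_s
```

Contribution (ii), iterate averaging, is then plain post-processing of the saved (m, s) traces, e.g. averaging the iterates from the latter part of training, and likewise consumes no additional privacy budget.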


Related research

- Differentially Private SGD with Sparse Gradients (12/01/2021)
- Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation (06/19/2020)
- Make Landscape Flatter in Differentially Private Federated Learning (03/20/2023)
- Gradient Perturbation is Underrated for Differentially Private Convex Optimization (11/26/2019)
- Arbitrary Decisions are a Hidden Cost of Differentially-Private Training (02/28/2023)
- DP-SEP! Differentially Private Stochastic Expectation Propagation (11/25/2021)
- DP-Adam: Correcting DP Bias in Adam's Second Moment Estimation (04/21/2023)
