Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning

11/30/2020
by Farhad Farokhi, et al.

We use gradient sparsification to reduce the adverse effect of differential-privacy noise on the performance of private machine learning models. To this end, we employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients. The noisy privacy-preserving gradients are then used to perform stochastic gradient descent for training machine learning models. Sparsification, achieved by setting the smallest gradient entries to zero, can slow the convergence of the training algorithm. However, sparsification and compressed sensing reduce both the dimension of the communicated gradient and the magnitude of the additive noise. The interplay between these two effects determines whether gradient sparsification improves the performance of differentially-private machine learning models, and we investigate this analytically in the paper. We prove that, for small privacy budgets, compression can improve the performance of privacy-preserving machine learning models. For large privacy budgets, however, compression does not necessarily improve performance. Intuitively, this is because the effect of privacy-preserving noise is minimal in the large-privacy-budget regime, so the gains from gradient sparsification cannot compensate for its slower convergence.
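The core trade-off can be illustrated with a minimal sketch: keep only the k largest-magnitude gradient entries, clip, and add Laplace noise whose scale is calibrated to the (smaller) sensitivity of the sparsified gradient. This sketch is illustrative only; the paper's actual mechanism additionally applies a compressed-sensing measurement and reconstruction step, which is omitted here, and the function names and the L1-sensitivity bound are assumptions for the example, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify_top_k(grad, k):
    """Keep the k largest-magnitude entries of the gradient; zero the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def private_sparse_gradient(grad, k, epsilon, clip_norm=1.0):
    """Clip, sparsify, and add Laplace noise (illustrative sketch).

    After L2-clipping to clip_norm, a k-sparse vector has L1 norm at most
    clip_norm * sqrt(k) (Cauchy-Schwarz), so the Laplace scale needed for a
    given epsilon shrinks with k, compared with the full d-dimensional case.
    """
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    sparse = sparsify_top_k(grad, k)
    # Laplace mechanism: scale = (L1 sensitivity) / epsilon.
    scale = clip_norm * np.sqrt(k) / epsilon
    noise = rng.laplace(0.0, scale, size=grad.shape)
    return sparse + noise
```

A model would then be trained by feeding `private_sparse_gradient` into an ordinary SGD update; the analysis in the paper weighs the smaller noise scale against the convergence slowdown caused by zeroing entries.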


Related research

10/12/2021
Not all noise is accounted equally: How differentially private learning benefits from large sampling rates
Learning often involves sensitive data and as such, privacy preserving e...

08/20/2019
AdaCliP: Adaptive Clipping for Private SGD
Privacy preserving machine learning algorithms are crucial for learning ...

01/16/2023
Enforcing Privacy in Distributed Learning with Performance Guarantees
We study the privatization of distributed learning and optimization stra...

03/18/2020
Predicting Performance of Asynchronous Differentially-Private Learning
We consider training machine learning models using training data located...

06/24/2019
The Value of Collaboration in Convex Machine Learning with Differential Privacy
In this paper, we apply machine learning to distributed private data own...

01/31/2022
Lessons from the AdKDD'21 Privacy-Preserving ML Challenge
Designing data sharing mechanisms providing performance and strong priva...

11/01/2022
On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning
While differential privacy and gradient compression are separately well-...
