Input Perturbation: A New Paradigm between Central and Local Differential Privacy

02/20/2020
by Yilin Kang, et al.

Traditionally, there are two models of differential privacy: the central model, which protects the machine learning model, and the local model, which protects the training data. In this paper, we study the input perturbation method in differentially private empirical risk minimization (DP-ERM), which preserves privacy in the central model. By adding noise to the original training data and training on the perturbed data, we achieve (ϵ,δ)-differential privacy for the final model, together with a measure of privacy for the original data. We observe an interesting connection between the local and central models: perturbing the original data perturbs the gradients, and ultimately the model parameters. This observation means that our method builds a bridge between the local and central models, protecting the data, the gradients, and the model simultaneously, which previous central methods do not. Detailed theoretical analysis and experiments show that our method achieves almost the same (or even better) performance as some of the best previous central methods while providing stronger privacy protection, which is an attractive result. Moreover, we extend our method to a more general setting in which the loss function satisfies the Polyak-Łojasiewicz condition, which is weaker than the strong convexity assumed in most previous work.
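The idea described above can be sketched in a few lines: noise is added once to the training data, and ordinary training is then run on the perturbed data, so the input noise propagates to the gradients and to the final model. The sketch below is illustrative only; the noise level `sigma`, the ridge-regression loss, and all hyperparameters are assumptions for the example, not the paper's calibration (which derives the noise from ϵ, δ, and the data sensitivity).

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_inputs(X, y, sigma):
    """Add i.i.d. Gaussian noise to the features (illustrative only:
    a real DP-ERM guarantee calibrates sigma from epsilon, delta,
    and the sensitivity of the data)."""
    return X + rng.normal(0.0, sigma, size=X.shape), y

def train_ridge(X, y, lam=0.1, lr=0.1, steps=500):
    """Plain gradient descent on a strongly convex ridge-regression
    loss; nothing here is privacy-aware -- all privacy comes from
    the input perturbation applied beforehand."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * w
        w -= lr * grad
    return w

# Toy data: y is approximately X @ [1, -2].
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=200)

# Train only on the perturbed data; the noisy inputs perturb every
# gradient step, and hence the final parameters.
Xp, yp = perturb_inputs(X, y, sigma=0.1)
w_priv = train_ridge(Xp, yp)
w_plain = train_ridge(X, y)
```

For small `sigma` the perturbed-data model stays close to the non-private one, which mirrors the abstract's claim that input perturbation can match central-model utility while also protecting the raw data and the gradients.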


Related research

02/20/2020  Differentially Private ERM Based on Data Perturbation
In this paper, after observing that different training data instances af...

10/23/2019  Weighted Distributed Differential Privacy ERM: Convex and Non-convex
Distributed machine learning is an approach allowing different parties t...

11/03/2021  Privately Publishable Per-instance Privacy
We consider how to privately share the personalized privacy losses incur...

01/08/2019  Data Masking with Privacy Guarantees
We study the problem of data release with privacy, where data is made av...

05/07/2021  Differential Privacy for Pairwise Learning: Non-convex Analysis
Pairwise learning focuses on learning tasks with pairwise loss functions...

03/24/2023  On the connection between the ABS perturbation methodology and differential privacy
This paper explores analytical connections between the perturbation meth...

02/13/2020  BiSample: Bidirectional Sampling for Handling Missing Data with Local Differential Privacy
Local differential privacy (LDP) has received much interest recently. In...
