Label differential privacy via clustering

10/05/2021
by Hossein Esfandiari, et al.

We present new mechanisms for label differential privacy, a relaxation of differentially private machine learning that only protects the privacy of the labels in the training set. Our mechanisms cluster the examples in the training set using their (non-private) feature vectors, randomly re-sample each label from examples in the same cluster, and output a training set with noisy labels as well as a modified version of the true loss function. We prove that when the clusters are both large and high-quality, the model that minimizes the modified loss on the noisy training set converges to small excess risk at a rate that is comparable to the rate for non-private learning. We describe both a centralized mechanism in which the entire training set is stored by a trusted curator, and a distributed mechanism where each user stores a single labeled example and replaces her label with the label of a randomly selected user from the same cluster. We also describe a learning problem in which large clusters are necessary to achieve both strong privacy and either good precision or good recall. Our experiments show that randomizing the labels within each cluster significantly improves the privacy vs. accuracy trade-off compared to applying uniform randomized response to the labels, and also compared to learning a model via DP-SGD.
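As a hedged illustration of the centralized mechanism described above (not the paper's exact calibration), the label-randomization step might look like the following sketch: given cluster assignments computed from the non-private feature vectors, perturb each cluster's label histogram with Laplace noise to protect the labels, then re-sample every label in the cluster from the resulting distribution. The function name, the sensitivity argument, and the noise scale here are assumptions for illustration only.

```python
import numpy as np

def resample_labels_by_cluster(labels, clusters, epsilon, num_classes, rng=None):
    """Illustrative sketch of per-cluster label randomization.

    For each cluster: build the empirical label histogram, add Laplace
    noise to each count (changing one user's label moves two counts by 1
    each, so the L1 sensitivity of the histogram is 2), clip negatives,
    normalize, and re-sample every label in the cluster i.i.d. from the
    noisy distribution.
    """
    rng = rng or np.random.default_rng(0)
    noisy_labels = np.empty_like(labels)
    for c in np.unique(clusters):
        idx = np.where(clusters == c)[0]
        counts = np.bincount(labels[idx], minlength=num_classes).astype(float)
        # Laplace mechanism on the histogram: scale = sensitivity / epsilon.
        counts += rng.laplace(scale=2.0 / epsilon, size=num_classes)
        probs = np.clip(counts, 0.0, None)
        if probs.sum() == 0.0:
            probs = np.ones(num_classes)  # degenerate case: fall back to uniform
        probs /= probs.sum()
        noisy_labels[idx] = rng.choice(num_classes, size=len(idx), p=probs)
    return noisy_labels
```

With large, high-quality clusters the noisy per-cluster distribution stays close to the true one, which is the intuition behind the improved privacy/accuracy trade-off over uniform randomized response reported in the abstract.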
