Differentially Private Label Protection in Split Learning

03/04/2022
by Xin Yang, et al.

Split learning is a distributed training framework that allows multiple parties to jointly train a machine learning model over vertically partitioned data (data partitioned by attributes). The idea is that only intermediate computation results, rather than private features and labels, are shared between parties, so the raw training data remains private. Nevertheless, recent work has shown that plaintext implementations of split learning suffer from severe privacy risks: a semi-honest adversary can easily reconstruct the labels. In this work, we propose TPSL (Transcript Private Split Learning), a generic gradient-perturbation-based split learning framework that provides a provable differential privacy guarantee. Differential privacy is enforced not only on the model weights but also on the messages communicated in the distributed computation setting. Our experiments on large-scale real-world datasets demonstrate the robustness and effectiveness of TPSL against label leakage attacks. We also find that TPSL has a better utility-privacy trade-off than the baselines.
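The gradient-perturbation idea behind such frameworks can be illustrated with a minimal sketch: the label-holding party clips the gradient it would send to the other party and adds Gaussian noise before communicating it, so the transcript itself satisfies a differential privacy guarantee. The sketch below is a hypothetical illustration of this general clip-and-noise mechanism (the names `perturb_gradient`, `clip_norm`, and `noise_multiplier` are our own), not the paper's exact TPSL algorithm.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a gradient to a bounded L2 norm and add Gaussian noise.

    Illustrative DP-style perturbation of the message a label party
    would send in split learning; parameter names are hypothetical.
    """
    rng = rng or np.random.default_rng(0)
    # Scale the gradient down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian noise calibrated to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

The larger `noise_multiplier` is relative to `clip_norm`, the stronger the privacy guarantee on the communicated message, at the cost of a noisier training signal, which is the utility-privacy trade-off the abstract refers to.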


Related research

- Differentially Private AUC Computation in Vertical Federated Learning (05/24/2022)
- Does Label Differential Privacy Prevent Label Inference Attacks? (02/25/2022)
- Clustering Label Inference Attack against Practical Split Learning (03/10/2022)
- Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning (11/25/2021)
- Label differential privacy via clustering (10/05/2021)
- Feature Space Hijacking Attacks against Differentially Private Split Learning (01/11/2022)
- EXACT: Extensive Attack for Split Learning (05/22/2023)
