Clustering Label Inference Attack against Practical Split Learning

03/10/2022
by Junlin Liu, et al.

Split learning is deemed a promising paradigm for privacy-preserving distributed learning, in which the learning model is cut into multiple portions that the participants train collaboratively. The participants exchange only the intermediate learning results at the cut layer: smashed data during the forward pass (i.e., features extracted from the raw data) and gradients during backward propagation. Understanding the security of split learning is critical for privacy-sensitive applications. Focusing on private labels, this paper proposes a passive clustering label inference attack on practical split learning. The adversary (either a client or a server) can accurately recover the private labels by collecting the exchanged gradients and smashed data. We mathematically analyse potential label leakage in split learning and propose cosine and Euclidean similarity measurements for the clustering attack. Experimental results validate that the proposed approach is scalable and robust under different settings (e.g., cut layer positions, epochs, and batch sizes) of practical split learning. The adversary can still achieve accurate predictions even when differential privacy and gradient compression are adopted for label protection.
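The intuition behind the clustering attack is that per-example gradients exchanged at the cut layer point in similar directions for examples sharing a label, so an adversary can group them by cosine similarity without ever seeing the labels. The sketch below illustrates this on synthetic gradients; the greedy clustering routine, the data-generation scheme, and all names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def cosine_sim(a, b):
    # Cosine similarity between two gradient vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def cluster_by_cosine(grads, threshold=0.5):
    """Greedy clustering: assign each gradient to the first existing
    seed whose cosine similarity exceeds `threshold`; otherwise the
    gradient starts a new cluster. (Illustrative stand-in for the
    paper's clustering step.)"""
    seeds, assignments = [], []
    for g in grads:
        for idx, s in enumerate(seeds):
            if cosine_sim(g, s) >= threshold:
                assignments.append(idx)
                break
        else:
            seeds.append(g)
            assignments.append(len(seeds) - 1)
    return assignments


# Synthetic per-example cut-layer gradients: each label induces a
# characteristic gradient direction plus small noise (an assumption
# made here purely to demonstrate the clustering idea).
rng = np.random.default_rng(0)
true_labels = rng.integers(0, 2, size=200)
base_dirs = rng.normal(size=(2, 16))          # one direction per label
grads = base_dirs[true_labels] + 0.05 * rng.normal(size=(200, 16))

pred = cluster_by_cosine(grads)
```

Because each cluster is discovered, not labeled, the adversary recovers the label partition only up to a permutation of label names; mapping clusters to concrete labels needs a handful of known examples or auxiliary knowledge.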

Related research:

- Differentially Private Label Protection in Split Learning (03/04/2022): Split learning is a distributed training framework that allows multiple ...
- EXACT: Extensive Attack for Split Learning (05/22/2023): Privacy-Preserving machine learning (PPML) can help us train and deploy ...
- Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks (04/19/2023): Split learning of deep neural networks (SplitNN) has provided a promisin...
- Defending Label Inference Attacks in Split Learning under Regression Setting (08/18/2023): As a privacy-preserving method for implementing Vertical Federated Learn...
- Label Leakage and Protection from Forward Embedding in Vertical Federated Learning (03/02/2022): Vertical federated learning (vFL) has gained much attention and been dep...
- Making Split Learning Resilient to Label Leakage by Potential Energy Loss (10/18/2022): As a practical privacy-preserving learning method, split learning has dr...
- Binarizing Split Learning for Data Privacy Enhancement and Computation Reduction (06/10/2022): Split learning (SL) enables data privacy preservation by allowing client...
