Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks

04/19/2023
by Yunlong Mao, et al.

Split learning of deep neural networks (SplitNN) offers a promising solution for a guest and a host, who may come from different backgrounds and hold vertically partitioned features, to learn jointly for their mutual benefit. However, SplitNN creates a new attack surface for an adversarial participant, holding back its practical deployment. By investigating the adversarial effects of highly threatening attacks, including property inference, data reconstruction, and feature space hijacking attacks, we identify the underlying vulnerability of SplitNN and propose a countermeasure. To prevent potential threats and preserve the learning guarantees of SplitNN, we design a privacy-preserving tunnel for information exchange between the guest and the host. The intuition is to perturb the knowledge propagated in each direction with a controllable, unified mechanism. To this end, we propose a new activation function named R3eLU, which transforms private smashed data and partial losses into randomized responses during forward and backward propagation, respectively. We make the first attempt to secure split learning against all three attacks and present a fine-grained privacy budget allocation scheme. Our analysis proves that the privacy-preserving SplitNN solution achieves a tight privacy budget, and experiments show that it outperforms existing solutions in most cases and strikes a good tradeoff between defense strength and model usability.
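To make the idea concrete, below is a minimal sketch of how a randomized activation in the spirit of R3eLU might perturb both directions of the SplitNN exchange. The exact R3eLU formulation and budget allocation are defined in the paper; the Laplace mechanism, the clipping bound `clip`, the per-direction `epsilon` parameters, and the function names `r3elu_forward`/`r3elu_backward` here are illustrative assumptions, not the authors' implementation.

```python
import torch


def r3elu_forward(smashed: torch.Tensor, epsilon: float, clip: float = 1.0) -> torch.Tensor:
    """Randomize the guest's smashed data before it is sent to the host.

    Illustrative only: clips ReLU outputs to bound per-coordinate
    sensitivity, then adds Laplace noise calibrated to clip / epsilon.
    """
    out = torch.relu(smashed).clamp(max=clip)
    noise = torch.distributions.Laplace(0.0, clip / epsilon).sample(out.shape)
    return out + noise


def r3elu_backward(partial_loss: torch.Tensor, epsilon: float, clip: float = 1.0) -> torch.Tensor:
    """Randomize the partial loss (gradients) returned from host to guest."""
    grad = partial_loss.clamp(min=-clip, max=clip)
    noise = torch.distributions.Laplace(0.0, clip / epsilon).sample(grad.shape)
    return grad + noise
```

Under this reading, each training round spends part of a privacy budget on the forward response and part on the backward response, which is where a fine-grained allocation scheme such as the one the paper proposes would decide how to split epsilon across directions and rounds.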

Related research

03/10/2022
Clustering Label Inference Attack against Practical Split Learning
Split learning is deemed as a promising paradigm for privacy-preserving ...

08/18/2023
Defending Label Inference Attacks in Split Learning under Regression Setting
As a privacy-preserving method for implementing Vertical Federated Learn...

06/20/2020
Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks
This paper investigates capabilities of Privacy-Preserving Deep Learning...

04/16/2023
Shuffled Transformer for Privacy-Preserving Split Learning
In conventional split learning, training and testing data often face sev...

03/02/2022
EnclaveTree: Privacy-preserving Data Stream Training and Inference Using TEE
The classification service over a stream of data is becoming an importan...

10/03/2022
Privacy-Preserving Feature Coding for Machines
Automated machine vision pipelines do not need the exact visual content ...

04/06/2023
Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding
Knowledge Graph Embedding (KGE) is a fundamental technique that extracts...
