UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning

08/20/2021
by Ege Erdogan, et al.

Training deep neural networks requires large-scale data, which often forces users to work in distributed or outsourced settings, accompanied by privacy concerns. The split learning framework aims to address these concerns by splitting the model between the client and the server. The idea is that since the server does not have access to the client's part of the model, the scheme supposedly provides privacy. We show via two novel attacks that this is not the case. (1) We show that an honest-but-curious split learning server, equipped only with knowledge of the client's neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without the client being able to detect the attack. (2) Furthermore, we show that if split learning is used naively to protect the training labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks on three benchmark datasets and investigate various properties of the overall system that affect the attacks' effectiveness. Our results show that the plaintext split learning paradigm can pose serious security risks and provide no more than a false sense of security.
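The core idea behind the first attack can be illustrated with a toy sketch: the server observes the intermediate activations the client sends, and, knowing only the client-side architecture, jointly optimizes a surrogate copy of the client model and a guess of the input until the surrogate reproduces the observed activations. The sketch below is a minimal illustration under strong simplifying assumptions (a single linear layer as the "client model", plain gradient descent with hand-derived gradients); it is not the paper's implementation, and all variable names are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Client side (hidden from the server) ---
# Secret input and secret client weights. The server knows only the
# architecture: here, an assumed 4x8 linear layer.
x_secret = rng.normal(size=8)
W_secret = rng.normal(size=(4, 8))
z = W_secret @ x_secret  # intermediate activation sent to the server

# --- Honest-but-curious server ---
# Jointly optimize a surrogate model W_hat and an input guess x_hat so
# that the surrogate's output on the guessed input matches the observed
# activation z, minimizing ||W_hat @ x_hat - z||^2 by gradient descent.
W_hat = rng.normal(size=(4, 8)) * 0.1
x_hat = np.zeros(8)
lr = 0.01
for _ in range(5000):
    residual = W_hat @ x_hat - z            # mismatch with observation
    grad_W = 2.0 * np.outer(residual, x_hat)  # d loss / d W_hat
    grad_x = 2.0 * W_hat.T @ residual         # d loss / d x_hat
    W_hat -= lr * grad_W
    x_hat -= lr * grad_x

loss = float(np.sum((W_hat @ x_hat - z) ** 2))
print(f"final activation-matching loss: {loss:.6f}")
```

With a single linear layer the factorization is not unique, so the recovered input and weights need not equal the secret ones individually; the sketch only shows that the server can drive the activation mismatch toward zero, which is the optimization objective the attack builds on. The paper's actual attacks operate on deep convolutional clients and additionally exploit this to steal a functionally similar model.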

