Unleashing the Tiger: Inference Attacks on Split Learning

12/04/2020
by   Dario Pasquini, et al.

We investigate the security of split learning, a novel collaborative machine learning framework that enables peak performance while requiring minimal resource consumption. In this paper, we expose the vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. Most prominently, we demonstrate that a malicious server can actively hijack the learning process of the distributed model and bring it into an insecure state that enables inference attacks on clients' data. We implement different adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. To make our results reproducible, we have made our code available at https://github.com/pasquini-dario/SplitNN_FSHA.
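
To make the server-side hijack concrete, the sketch below gives a simplified, hypothetical PyTorch rendering of the feature-space hijacking idea that the linked repository is named after. It is not the authors' implementation, and all module names, layer sizes, and dummy tensors (f, f_tilde, decoder, disc, x_priv, x_pub) are placeholders: the malicious server trains a pilot encoder and a decoder on data it controls, and answers the client's cut-layer activations with gradients of an adversarial objective rather than the honest task loss, so the client's encoder is gradually pushed into a feature space the server can invert.

# Simplified, hypothetical sketch of a server-side feature-space hijack in
# split learning (PyTorch). Names, architectures, and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim_in, dim_z = 28 * 28, 64

# Client: private data and the layers before the cut (the encoder f).
f = nn.Sequential(nn.Flatten(), nn.Linear(dim_in, dim_z), nn.ReLU())

# Malicious server: a pilot encoder trained on data it owns, a decoder that
# inverts the pilot's feature space, and a discriminator over that space.
f_tilde = nn.Sequential(nn.Flatten(), nn.Linear(dim_in, dim_z), nn.ReLU())
decoder = nn.Sequential(nn.Linear(dim_z, dim_in), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(dim_z, 1))

opt_client = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_server = torch.optim.Adam(
    list(f_tilde.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

x_priv = torch.rand(32, 1, 28, 28)  # client's private batch (dummy data)
x_pub = torch.rand(32, 1, 28, 28)   # server's own batch (dummy data)

# One training round of the protocol (repeated in practice):
# (a) Server: learn to invert its own pilot feature space.
rec_loss = F.mse_loss(decoder(f_tilde(x_pub)), x_pub.flatten(1))
opt_server.zero_grad(); rec_loss.backward(); opt_server.step()

# (b) Client: compute and send the cut-layer activations ("smashed data").
z_priv = f(x_priv)

# (c) Server: train the discriminator to separate pilot features from the
#     client's features.
d_loss = (F.binary_cross_entropy_with_logits(disc(f_tilde(x_pub).detach()),
                                              torch.ones(32, 1))
          + F.binary_cross_entropy_with_logits(disc(z_priv.detach()),
                                                torch.zeros(32, 1)))
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

# (d) Server: instead of the task loss, backpropagate an adversarial loss
#     that pushes the client's features toward the pilot feature space.
#     The gradient w.r.t. z_priv is what would travel back over the wire,
#     and the client applies it as if it came from the honest objective.
hijack_loss = F.binary_cross_entropy_with_logits(disc(z_priv),
                                                 torch.ones(32, 1))
opt_client.zero_grad(); hijack_loss.backward(); opt_client.step()

# After enough rounds the two feature spaces align, and the decoder
# approximately inverts the client's encoder: private inputs can be
# reconstructed from the smashed data alone.
with torch.no_grad():
    x_rec = decoder(f(x_priv))

From the client's point of view, the gradients received in step (d) look like any other training signal, which is why such a hijack can run throughout training without an obvious tell; detecting or resisting it is the goal of the defense works listed below.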


Related research

07/04/2023 · Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
Distributed Collaborative Machine Learning (DCML) is a potential alterna...

02/19/2023 · On Feasibility of Server-side Backdoor Attacks on Split Learning
Split learning is a collaborative learning design that allows several pa...

02/16/2023 · Defense Mechanisms Against Training-Hijacking Attacks in Split Learning
Distributed deep learning frameworks enable more efficient and privacy-a...

05/09/2022 · ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning
This work aims to tackle Model Inversion (MI) attack on Split Federated ...

06/10/2022 · Blades: A Simulator for Attacks and Defenses in Federated Learning
Federated learning enables distributed training across a set of clients,...

02/20/2023 · Poisoning Web-Scale Training Datasets is Practical
Deep learning models are often trained on distributed, webscale datasets...

06/03/2021 · Defending against Backdoor Attacks in Natural Language Generation
The frustratingly fragile nature of neural network models make current n...
