On Feasibility of Server-side Backdoor Attacks on Split Learning

02/19/2023
by Behrad Tajalli, et al.

Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private. Recent studies demonstrate that collaborative learning models, specifically federated learning, are vulnerable to security and privacy attacks such as model inference and backdoor attacks. Backdoor attacks are a group of poisoning attacks in which the attacker tries to control the model's output by manipulating its training process. While there have been studies on inference attacks against split learning, its vulnerability to backdoor attacks has not yet been tested. This paper performs a novel backdoor attack on split learning and studies its effectiveness. Unlike traditional backdoor attacks, which are carried out on the client side, we inject the backdoor trigger from the server side. For this purpose, we provide two attack methods: one using a surrogate client and another using an autoencoder to poison the model via the incoming smashed data and its outgoing gradient toward the innocent participants. We performed our experiments using three model architectures and three publicly available datasets in the image domain, running a total of 761 experiments to evaluate our attack methods. The results show that despite using strong patterns and injection methods, split learning is highly robust and resistant to such poisoning attacks. While we achieve an attack success rate of 100% in our best case on one dataset, in most of the other cases our attack shows little success when increasing the cut layer.
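To make the threat model concrete, the sketch below shows one training step of vanilla split learning (label-sharing configuration) in PyTorch-style code: the client computes the smashed data up to the cut layer, the server finishes the forward and backward pass, and the gradient of the smashed data flows back to the client. This is a minimal illustrative sketch; the layer sizes, cut-layer placement, optimizers, and the function name split_training_step are assumptions for exposition, not the architectures or attack code from the paper. The comment at step 3 only marks the point where a malicious server could tamper with what it returns.

```python
# Minimal sketch of one split-learning training step (illustrative only).
import torch
import torch.nn as nn

# Client holds the layers before the cut layer; server holds the rest.
client_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                           nn.MaxPool2d(2))
server_net = nn.Sequential(nn.Flatten(), nn.Linear(16 * 14 * 14, 10))

opt_client = torch.optim.SGD(client_net.parameters(), lr=0.01)
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    # 1. Client forward pass up to the cut layer; only the "smashed data"
    #    (intermediate activations) leave the client, never the raw inputs.
    smashed = client_net(x)
    smashed_for_server = smashed.detach().requires_grad_(True)

    # 2. Server forward/backward pass on its share of the model
    #    (labels are shared with the server in this configuration).
    out = server_net(smashed_for_server)
    loss = loss_fn(out, y)
    opt_server.zero_grad()
    loss.backward()
    opt_server.step()

    # 3. Server returns the gradient w.r.t. the smashed data. A malicious
    #    server could manipulate the activations it receives or the gradient
    #    it sends back, which is the server-side threat model studied here.
    grad_to_client = smashed_for_server.grad

    # 4. Client finishes backpropagation through its own layers.
    opt_client.zero_grad()
    smashed.backward(grad_to_client)
    opt_client.step()
    return loss.item()
```

A benign run would call split_training_step(images, labels) once per batch; a server-side attacker operates entirely within steps 2 and 3, without ever seeing the client's raw training data.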

