Defense Mechanisms Against Training-Hijacking Attacks in Split Learning

02/16/2023
by Ege Erdogan, et al.

Distributed deep learning frameworks enable more efficient and privacy-aware training of deep neural networks across multiple clients. Split learning achieves this by splitting a neural network between a client and a server, such that the client computes the initial set of layers and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to recover the client's private inputs: the server can direct the client model toward learning any task of its choice, e.g., toward outputting easily invertible values. With a concrete attack already demonstrated (Pasquini et al., ACM CCS '21), such training-hijacking attacks present a significant risk to the data privacy of split learning clients. We propose two methods by which a split learning client can detect whether it is being targeted by a training-hijacking attack. We experimentally evaluate the effectiveness of our methods, compare them with other potential solutions, and discuss practical considerations around their use. We conclude that by using the method best suited to their use case, split learning clients can consistently detect training-hijacking attacks and thus keep the information gained by the attacker to a minimum.
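To make the attack surface concrete, the following is a minimal sketch of one split learning training step in PyTorch. The architecture, split point, loss, and optimizer settings are illustrative assumptions, not taken from the paper; the key point is that the client only ever sees the gradients the server returns for its intermediate activations, which is exactly the channel a training-hijacking server abuses.

```python
# Minimal sketch of a split learning training step (PyTorch).
# The model, split point, and loss below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Client holds the initial layers; server holds the rest.
client_model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
server_model = nn.Sequential(nn.Linear(256, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)

def training_step(x, y):
    # Client-side forward pass up to the split layer. Only these
    # intermediate activations ("smashed data") leave the client.
    smashed = client_model(x)
    server_in = smashed.detach().requires_grad_()

    # Server completes the forward pass and computes its loss.
    loss = F.cross_entropy(server_model(server_in), y)
    server_opt.zero_grad()
    loss.backward()
    server_opt.step()

    # The server sends back gradients w.r.t. the activations. A
    # training-hijacking server can instead return gradients from an
    # attacker-chosen loss, steering the client model toward a task
    # that makes the smashed data easy to invert.
    client_opt.zero_grad()
    smashed.backward(server_in.grad)
    client_opt.step()
    return loss.item()

# Toy usage with random MNIST-shaped data.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```

Because the client cannot inspect the server's loss function, it can only judge the returned gradients themselves, which is what motivates the detection methods proposed in the paper.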


Related Research

08/20/2021
SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning
Distributed deep learning frameworks, such as split learning, have recen...

08/20/2021
UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Training deep neural networks requires large scale data, which often for...

02/19/2023
On Feasibility of Server-side Backdoor Attacks on Split Learning
Split learning is a collaborative learning design that allows several pa...

10/09/2019
ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations
Recently, there has been the development of Split Learning, a framework ...

07/31/2018
Revisiting Client Puzzles for State Exhaustion Attacks Resilience
In this paper, we address the challenges facing the adoption of client p...

12/01/2022
Split Learning without Local Weight Sharing to Enhance Client-side Data Privacy
Split learning (SL) aims to protect user data privacy by splitting deep ...

12/04/2020
Unleashing the Tiger: Inference Attacks on Split Learning
We investigate the security of split learning – a novel collaborative ma...
