On the Robustness of Split Learning against Adversarial Attacks

07/16/2023
by Mingyuan Fan et al.

Split learning enables collaborative training of deep learning models while preserving data privacy and model security: raw data and full model details are never shared directly, since the server and clients each hold only a partial sub-network and exchange intermediate computations. However, existing research has mainly examined its reliability for privacy protection, with little investigation into model security. Specifically, attackers with access to a full model can launch adversarial attacks, and split learning can mitigate this severe threat by disclosing only part of the model to an untrusted server. This paper aims to evaluate the robustness of split learning against adversarial attacks, particularly in the most challenging setting where the untrusted server only has access to the intermediate layers of the model. Existing adversarial attacks mostly target the centralized setting rather than the collaborative setting; thus, to better evaluate the robustness of split learning, we develop a tailored attack called SPADV, which comprises two stages: 1) shadow model training, which compensates for the attacker's missing part of the model, and 2) a local adversarial attack, which produces the adversarial examples used for evaluation. The first stage requires only a small amount of unlabeled non-IID data; in the second stage, SPADV perturbs the intermediate outputs of natural samples to craft the adversarial ones. The overall cost of the proposed attack is relatively low, yet its empirical effectiveness is significantly high, demonstrating the surprising vulnerability of split learning to adversarial attacks.
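To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of an SPADV-style attack from the untrusted server's perspective. It is an illustration under stated assumptions, not the paper's exact procedure: the shadow architecture, the entropy-minimization objective used to fit the shadow bottom network in stage 1, and the feature-drift loss used to perturb intermediate outputs in stage 2 are all hypothetical choices.

```python
# Hypothetical SPADV-style sketch (PyTorch). The architecture and both losses
# are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShadowBottom(nn.Module):
    """Attacker's stand-in for the client-held bottom sub-network (assumed)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def train_shadow(shadow, server_top, unlabeled_loader, epochs=5, lr=1e-3):
    """Stage 1: fit the shadow bottom on a few unlabeled non-IID samples.
    Proxy objective (assumption): minimize the prediction entropy of the
    composed network, so shadow features become usable by the server part."""
    for p in server_top.parameters():   # the server-held layers stay frozen
        p.requires_grad_(False)
    server_top.eval()
    opt = torch.optim.Adam(shadow.parameters(), lr=lr)
    for _ in range(epochs):
        for x in unlabeled_loader:
            probs = F.softmax(server_top(shadow(x)), dim=1)
            loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return shadow

def spadv_attack(shadow, x_nat, eps=8/255, alpha=2/255, steps=10):
    """Stage 2: PGD-style local attack. Perturb the natural input so that its
    intermediate output (smashed data) drifts away from the natural one."""
    with torch.no_grad():
        z_nat = shadow(x_nat)            # intermediate output of natural samples
    x_adv = (x_nat + torch.empty_like(x_nat).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Maximize the drift of the intermediate representation.
        loss = F.mse_loss(shadow(x_adv), z_nat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x_nat + (x_adv - x_nat).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```

In an actual evaluation, the crafted x_adv would then be fed through the real split pipeline (client-held bottom plus server-held layers) to measure how often the adversarial examples transfer and flip predictions.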


Related research

06/29/2018 · Adversarial Examples in Deep Learning: Characterization and Divergence
The burgeoning success of deep learning has raised the security and priv...

02/19/2023 · On Feasibility of Server-side Backdoor Attacks on Split Learning
Split learning is a collaborative learning design that allows several pa...

03/26/2021 · Vulnerability Due to Training Order in Split Learning
Split learning (SL) is a privacy-preserving distributed deep learning me...

09/23/2019 · Adversarial Examples for Deep Learning Cyber Security Analytics
As advances in Deep Neural Networks demonstrate unprecedented levels of ...

12/04/2022 · Security Analysis of SplitFed Learning
Split Learning (SL) and Federated Learning (FL) are two prominent distri...

12/10/2020 · Composite Adversarial Attacks
Adversarial attack is a technique for deceiving Machine Learning (ML) mo...

11/21/2021 · Denoised Internal Models: a Brain-Inspired Autoencoder against Adversarial Attacks
Despite its great success, deep learning severely suffers from robustnes...
