Feature Space Hijacking Attacks against Differentially Private Split Learning

01/11/2022
by   Grzegorz Gawron, et al.

Split learning and differential privacy are technologies with growing potential for enabling privacy-compliant advanced analytics on distributed datasets. Attacks against split learning are an important evaluation tool and have recently received increased research attention. This work's contribution is to apply a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with differential privacy (DP) via a client-side off-the-shelf DP optimizer. The FSHA attack reconstructs the client's private data with low error rates at arbitrarily set DP epsilon levels. We also experiment with dimensionality reduction as a potential mitigation of the attack risk and show that it may help to some extent. We discuss why differential privacy is not an effective protection in this setting and mention other potential risk mitigation methods.
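
To make the setup concrete, the following is a minimal sketch of the client side of such a pipeline. It assumes PyTorch with the Opacus PrivacyEngine standing in for the "off-the-shelf DP optimizer" (the abstract does not name the library); the architecture, hyperparameters, and data are placeholders, and the server's gradient feedback is stubbed with random noise rather than a real FSHA adversary.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Client-side half of the split network (placeholder architecture).
client_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 64),  # 64-dim "smashed" activations sent to the server
)
optimizer = torch.optim.SGD(client_model.parameters(), lr=0.05)

# Stand-in for the client's private dataset.
loader = DataLoader(TensorDataset(torch.randn(512, 1, 28, 28)), batch_size=32)

# Off-the-shelf DP wrapper: per-sample gradient clipping plus Gaussian
# noise (DP-SGD). noise_multiplier / max_grad_norm are illustrative values.
privacy_engine = PrivacyEngine()
client_model, optimizer, loader = privacy_engine.make_private(
    module=client_model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

for (x,) in loader:
    optimizer.zero_grad()
    smashed = client_model(x)  # activations crossing the split boundary
    # A real deployment sends `smashed` to the server and receives gradients
    # back. Under FSHA the server is malicious and crafts this feedback to
    # steer the client's feature space toward one it can invert; a random
    # stub stands in for that feedback here.
    grad_from_server = torch.randn_like(smashed)
    smashed.backward(grad_from_server)
    optimizer.step()  # DP noise is applied to parameter gradients only

Note the design point the sketch surfaces: DP-SGD perturbs the client's parameter updates, while the activations transmitted across the split are never themselves noised, which is consistent with the paper's finding that optimizer-level DP does not stop the attack.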

