Love or Hate? Share or Split? Privacy-Preserving Training Using Split Learning and Homomorphic Encryption

09/19/2023
by Tanveer Khan, et al.

Split learning (SL) is a new collaborative learning technique that allows participants, e.g., a client and a server, to train machine learning models without the client sharing its raw data. In this setting, the client first applies its part of the model to the raw data to generate activation maps and then sends them to the server, which continues the training process. Previous work has shown that reconstructing these activation maps can leak private client data. In addition, existing techniques that mitigate this leakage in SL suffer a significant loss in accuracy. In this paper, we improve upon previous work by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data. More precisely, in our approach the client applies homomorphic encryption (HE) to the activation maps before sending them to the server, thus protecting user privacy. This is an important improvement that reduces privacy leakage in comparison to other SL-based works. Finally, our results show that, with the optimal set of parameters, training on HE-encrypted data in the U-shaped SL setting reduces accuracy by only 2.65% compared to training on plaintext. In addition, the privacy of the raw training data is preserved.
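To make the protocol concrete, below is a minimal sketch of one U-shaped split-learning forward pass with CKKS-encrypted activation maps. It assumes the TenSEAL library for homomorphic encryption; the layer sizes, variable names, and the single encrypted linear layer on the server side are illustrative placeholders, not the paper's actual architecture or parameters.

```python
# Sketch of a U-shaped split-learning forward pass with CKKS-encrypted
# activation maps (assumes TenSEAL; shapes and layers are illustrative).
import tenseal as ts
import torch
import torch.nn as nn

# --- Client-side setup: CKKS context (the secret key stays with the client) ---
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

# Client holds the first and last model segments; the server holds the middle one.
client_front = nn.Sequential(nn.Conv1d(1, 4, kernel_size=3), nn.ReLU(), nn.Flatten())
server_weight = torch.randn(4 * 126, 32)   # server's plaintext linear layer
server_bias = torch.randn(32)
client_back = nn.Sequential(nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 1, 128)                 # raw sample; it never leaves the client

# 1) Client: run its front segment and encrypt the resulting activation map.
act = client_front(x).squeeze(0).detach()
enc_act = ts.ckks_vector(ctx, act.tolist())   # ciphertext sent to the server

# 2) Server: applies its linear layer directly on the ciphertext.
enc_out = enc_act.mm(server_weight.tolist()) + server_bias.tolist()

# 3) Client: decrypts the server's output and finishes the forward pass locally,
#    so labels and predictions also stay on the client (the "U" shape).
out = torch.tensor(enc_out.decrypt())
logits = client_back(out.unsqueeze(0))
print(logits)
```

In this sketch the raw sample, the labels, and the secret key remain on the client, and the server only ever sees ciphertexts; the homomorphic arithmetic on the server side is where the small accuracy and performance overhead reported above would come from.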


Related research

01/20/2023 · Split Ways: Privacy-Preserving Training of Encrypted Data Using Split Learning
Split Learning (SL) is a new collaborative learning technique that allow...

09/15/2023 · A More Secure Split: Enhancing the Security of Privacy-Preserving Split Learning
Split learning (SL) is a new collaborative learning technique that allow...

08/30/2023 · Split Without a Leak: Reducing Privacy Leakage in Split Learning
The popularity of Deep Learning (DL) makes the privacy of sensitive data...

03/16/2020 · Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?
A new collaborative learning, called split learning, was recently introd...

08/17/2023 · Optimal Resource Allocation for U-Shaped Parallel Split Learning
Split learning (SL) has emerged as a promising approach for model traini...

04/17/2023 · SplitAMC: Split Learning for Robust Automatic Modulation Classification
Automatic modulation classification (AMC) is a technology that identifie...

06/27/2019 · Privacy-Preserving Distributed Learning with Secret Gradient Descent
In many important application domains of machine learning, data is a pri...
