Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?

03/16/2020
by   Sharif Abuadbba, et al.

A new collaborative learning technique, called split learning, was recently introduced with the aim of protecting user data privacy without revealing raw input data to a server. It collaboratively runs a deep neural network model that is split into two parts, one held by the client and the other by the server, so the server has no direct access to the raw data processed at the client. Until now, split learning has been believed to be a promising approach for protecting the client's raw data; for example, clients' data was protected in healthcare image applications using 2D convolutional neural network (CNN) models. However, it is still unclear whether split learning can be applied to other deep learning models, in particular 1D CNNs. In this paper, we examine whether split learning can be used to perform privacy-preserving training for 1D CNN models. To answer this, we first design and implement a 1D CNN model under split learning and validate its efficacy in detecting heart abnormalities using medical ECG data. We observe that the 1D CNN model under split learning achieves the same 98.9% accuracy as the original (non-split) model. However, our evaluation demonstrates that split learning may fail to protect the privacy of raw data in 1D CNN models. To address the observed privacy leakage, we adopt two mitigation techniques: 1) adding more hidden layers to the client side and 2) applying differential privacy. Although these mitigation techniques help reduce privacy leakage, they have a significant impact on model accuracy. Based on these results, we conclude that split learning alone is not sufficient to maintain the confidentiality of raw sequential data in 1D CNN models.
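
Concretely, the split described above places the first few layers of the network on the client and the remaining layers on the server; only the intermediate activations at the cut layer cross the network. Below is a minimal, hypothetical PyTorch sketch of a 1D CNN cut into client and server halves for ECG classification. The layer sizes, cut point, five-class output, and 128-sample segments are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Minimal split-learning sketch with a 1D CNN (illustrative assumptions,
# not the paper's exact architecture).
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    """Client-side layers: raw ECG samples never leave the client."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )

    def forward(self, x):
        # Only these cut-layer activations are transmitted to the server.
        return self.features(x)

class ServerNet(nn.Module):
    """Server-side layers: sees intermediate activations, not raw data."""
    def __init__(self, num_classes=5, seq_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * (seq_len // 4), num_classes)

    def forward(self, a):
        return self.classifier(self.features(a))

# One collaborative training step (both halves simulated in one process).
client, server = ClientNet(), ServerNet()
opt = torch.optim.Adam(list(client.parameters()) + list(server.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 128)        # batch of single-lead ECG segments (dummy data)
y = torch.randint(0, 5, (8,))     # dummy class labels

smashed = client(x)               # "smashed data" sent across the split
logits = server(smashed)          # server completes the forward pass
loss = criterion(logits, y)
opt.zero_grad()
loss.backward()                   # gradients flow back through the cut to the client
opt.step()
```

In an actual deployment the two halves run on separate machines, exchanging only the cut-layer activations on the forward pass and their gradients on the backward pass. The leakage reported in the paper arises because those activations can still reveal the raw ECG signal, and both studied mitigations (more client-side hidden layers and differential-privacy noise) act at this cut point, at a significant cost in model accuracy.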

