Vulnerability Due to Training Order in Split Learning

03/26/2021
by Harshit Madaan, et al.

Split learning (SL) is a privacy-preserving distributed deep learning method for training a collaborative model without clients having to share patients' raw data. Split learning can also incorporate an additional privacy-preserving technique, the NoPeek algorithm, which is robust to adversarial attacks. These privacy benefits make split learning attractive for the healthcare domain. However, the split learning algorithm has a weakness: the collaborative model is trained sequentially, with one client training after another. We point out that a model trained with split learning becomes biased towards the data of the clients that train towards the end of a round, which makes SL highly sensitive to the order in which clients are visited during training. We demonstrate that a model trained on the data of all clients performs poorly on the data of the clients that trained earliest in a round, and we show that this effect becomes more pronounced as the number of clients increases. We also demonstrate that the SplitFedv3 algorithm mitigates this problem while retaining the privacy benefits of split learning.
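To make the order sensitivity concrete, here is a minimal sketch of one sequential split-learning round, assuming a toy PyTorch setup: a model cut into a client-side half (`client_net`) and a server-side half (`server_net`), synthetic data for three clients, and plain SGD. All names, layer sizes, and hyperparameters are illustrative rather than taken from the paper, and both halves run in a single process purely for readability; in a real deployment only the cut-layer activations and their gradients would cross the network.

```python
# Minimal sketch of one sequential split-learning round (illustrative, not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two halves of the collaborative model: client side up to the cut layer,
# server side after it.
client_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
server_net = nn.Sequential(nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# Synthetic data for 3 clients; in practice each client keeps its raw data local.
clients = [(torch.randn(64, 16), torch.randint(0, 2, (64,))) for _ in range(3)]

opt = torch.optim.SGD(
    list(client_net.parameters()) + list(server_net.parameters()), lr=0.1
)

# One round: clients are visited in a fixed order, strictly one after the other.
for client_id, (x, y) in enumerate(clients):
    for _ in range(5):                   # a few local steps per client
        smashed = client_net(x)          # client computes cut-layer activations
        logits = server_net(smashed)     # server finishes the forward pass
        loss = loss_fn(logits, y)        # gradients flow back through both halves
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"finished client {client_id}, loss={loss.item():.3f}")
```

Because the shared weights are updated in place as each client takes its turn, the final model is pulled towards the data of the last clients visited, and reordering the `clients` list yields a different model; this is the order dependence the abstract describes.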

