Security Analysis of SplitFed Learning

12/04/2022
by Momin Ahmad Khan, et al.

Split Learning (SL) and Federated Learning (FL) are two prominent distributed collaborative learning techniques that preserve data privacy by ensuring that clients never share their private data with other clients or with the server; both have found extensive IoT applications in smart healthcare, smart cities, and smart industry. Prior work has extensively explored the security vulnerabilities of FL in the form of poisoning attacks, and several defenses have been proposed to mitigate their effect. Recently, a hybrid of the two techniques, commonly known as SplitFed, has emerged; it capitalizes on their advantages (fast training) and eliminates their intrinsic disadvantages (centralized model updates). In this paper, we perform the first empirical analysis of SplitFed's robustness to strong model poisoning attacks. We observe that the model updates in SplitFed have significantly smaller dimensionality than in FL, which is known to suffer from the curse of dimensionality. We show that large models with higher dimensionality are more susceptible to privacy and security attacks, whereas clients in SplitFed hold only a portion of the model, so their updates have lower dimensionality, making them more robust to existing model poisoning attacks. Our results show that the accuracy reduction caused by a model poisoning attack is 5x lower for SplitFed than for FL.
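The dimensionality argument can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration, not the paper's implementation: the toy architecture, the cut layer, and all layer sizes are assumptions chosen only to show how the shared update shrinks when a client holds just the layers before the cut.

```python
import torch.nn as nn

# Illustrative model: the architecture and sizes are assumptions for
# this sketch, not the paper's exact experimental setup.
full_model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def num_params(module: nn.Module) -> int:
    """Total number of trainable scalars a participant would share."""
    return sum(p.numel() for p in module.parameters())

# FL: every client trains, and shares updates for, the full model.
fl_update_dim = num_params(full_model)

# SplitFed: the network is cut after an early layer; a client keeps only
# the client-side portion and shares only those (far fewer) parameters,
# while the server-side portion never leaves the server.
CUT_LAYER = 2  # hypothetical cut point after the first conv block
client_side = full_model[:CUT_LAYER]
server_side = full_model[CUT_LAYER:]

print(f"FL update dimensionality:          {fl_update_dim}")
print(f"SplitFed client update dim:        {num_params(client_side)}")
print(f"Parameters kept on the server:     {num_params(server_side)}")
```

Because a malicious client can only manipulate the parameters it actually shares, the much smaller client-side update in SplitFed directly shrinks the surface available to a model poisoning attack.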


Related Research

12/12/2020 - Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions
Federated learning (FL) allows a server to learn a machine learning (ML)...

04/26/2023 - Blockchain-based Federated Learning with SMPC Model Verification Against Poisoning Attack for Healthcare Systems
Due to the rising awareness of privacy and security in machine learning ...

07/04/2023 - Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
Distributed Collaborative Machine Learning (DCML) is a potential alterna...

09/19/2023 - SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks
While Federated learning (FL) is attractive for pulling privacy-preservi...

01/03/2022 - DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
Federated Learning (FL) allows multiple clients to collaboratively train...

03/21/2023 - STDLens: Model Hijacking-Resilient Federated Learning for Object Detection
Federated Learning (FL) has been gaining popularity as a collaborative l...

07/16/2023 - On the Robustness of Split Learning against Adversarial Attacks
Split learning enables collaborative deep learning model training while ...
