Suppressing Poisoning Attacks on Federated Learning for Medical Imaging

07/15/2022
by Naif Alkhunaizi, et al.

Collaboration among multiple data-owning entities (e.g., hospitals) can accelerate the training process and yield better machine learning models due to the availability and diversity of data. However, privacy concerns make it challenging to exchange data while preserving confidentiality. Federated Learning (FL) is a promising solution that enables collaborative training through the exchange of model parameters instead of raw data. However, most existing FL solutions work under the assumption that participating clients are honest and can therefore fail against poisoning attacks from malicious parties, whose goal is to degrade the global model's performance. In this work, we propose a robust aggregation rule called Distance-based Outlier Suppression (DOS) that is resilient to Byzantine failures. The proposed method computes the distances between the local parameter updates of different clients and obtains an outlier score for each client using Copula-based Outlier Detection (COPOD). The resulting outlier scores are converted into normalized weights using a softmax function, and a weighted average of the local parameters is used to update the global model. DOS aggregation can effectively suppress parameter updates from malicious clients without requiring any hyperparameter selection, even when the data distributions are heterogeneous. Evaluation on two medical imaging datasets (CheXpert and HAM10000) demonstrates the higher robustness of the DOS method to a variety of poisoning attacks compared with other state-of-the-art methods. The code is available at https://github.com/Naiftt/SPAFD.
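To make the aggregation rule concrete, the following is a minimal sketch of DOS in Python. It assumes flattened client updates and Euclidean distances between them (the method may also combine other distance measures, e.g., cosine), and it relies on the COPOD implementation from the pyod library; the function name dos_aggregate and its exact interface are illustrative, not taken from the authors' repository.

```python
import numpy as np
from scipy.spatial.distance import cdist
from pyod.models.copod import COPOD

def dos_aggregate(client_updates):
    """Sketch of Distance-based Outlier Suppression (DOS) aggregation.

    client_updates: (n_clients, n_params) array of flattened local
    parameter updates. Returns the aggregated global update.
    """
    # Pairwise Euclidean distances: row i profiles how far client i's
    # update is from every other client's update.
    dists = cdist(client_updates, client_updates)

    # COPOD assigns each client an outlier score from its distance profile.
    detector = COPOD()
    detector.fit(dists)
    scores = detector.decision_scores_  # higher = more anomalous

    # Softmax over negated scores: anomalous clients get weights near zero.
    logits = -scores
    logits -= logits.max()  # shift for numerical stability
    weights = np.exp(logits)
    weights /= weights.sum()

    # Weighted average of local updates forms the global update.
    return weights @ client_updates
```

Because the softmax turns outlier scores into smooth weights that sum to one, suspicious clients are down-weighted rather than rejected by a hard threshold, which is consistent with the abstract's claim that DOS needs no attack-specific hyperparameter selection.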
