Defending against the Label-flipping Attack in Federated Learning

07/05/2022
by Najeeb Moharram Jebreel, et al.

Federated learning (FL) provides autonomy and privacy by design to participating peers, who cooperatively build a machine learning (ML) model while keeping their private data on their devices. However, that same autonomy opens the door for malicious peers to poison the model through untargeted or targeted poisoning attacks. The label-flipping (LF) attack is a targeted poisoning attack in which attackers poison their training data by flipping the labels of some examples from one class (the source class) to another (the target class). Unfortunately, this attack is easy to perform, hard to detect, and it negatively impacts the performance of the global model. Existing defenses against LF are limited by assumptions on the distribution of the peers' data and/or do not perform well with high-dimensional models. In this paper, we deeply investigate the behavior of the LF attack and find that the contradicting objectives of attackers and honest peers on the source class examples are reflected in the parameter gradients corresponding to the neurons of the source and target classes in the output layer, which makes those gradients good discriminative features for attack detection. Accordingly, we propose a novel defense that first dynamically extracts those gradients from the peers' local updates, then clusters the extracted gradients, analyzes the resulting clusters, and filters out potential bad updates before model aggregation. An extensive empirical analysis on three data sets shows the proposed defense's effectiveness against the LF attack regardless of the data distribution or model dimensionality. The proposed defense also outperforms several state-of-the-art defenses by offering a lower test error, a higher overall accuracy, a higher source class accuracy, a lower attack success rate, and a higher stability of the source class accuracy.
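To make the attack and the detection pipeline concrete, below is a minimal Python sketch of both steps. The helper names (flip_labels, extract_output_layer_grads, filter_updates), the assumption that each peer update exposes its output-layer gradients under the key "output.weight", the choice of two-cluster k-means, and the majority-cluster filtering heuristic are all illustrative assumptions, not the paper's exact implementation.

import numpy as np
from sklearn.cluster import KMeans

def flip_labels(y, source_class, target_class):
    # Label-flipping attack: relabel every source-class example as the target class.
    y = np.asarray(y).copy()
    y[y == source_class] = target_class
    return y

def extract_output_layer_grads(update, suspect_classes):
    # Keep only the output-layer gradient rows (neurons) of the suspected
    # source and target classes and flatten them into a feature vector.
    # `update` is assumed to map layer names to gradient arrays, with
    # "output.weight" of shape (num_classes, hidden_dim).
    grads = np.asarray(update["output.weight"])
    return grads[list(suspect_classes)].ravel()

def filter_updates(updates, suspect_classes):
    # Cluster the extracted gradient features into two groups and keep the
    # larger cluster, on the assumption that honest peers form the majority.
    feats = np.stack([extract_output_layer_grads(u, suspect_classes) for u in updates])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    majority = np.bincount(labels).argmax()
    return [u for u, label in zip(updates, labels) if label == majority]

Aggregation (e.g., FedAvg) would then run on the filtered updates only; in the defense described above, the suspected source/target classes are identified dynamically from the gradients rather than supplied by hand as done here for simplicity.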
