Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection

02/21/2022
by Yein Kim, et al.

The goal of federated learning (FL) is to train one global model by aggregating model parameters updated independently on edge devices, without accessing users' private data. However, FL is susceptible to backdoor attacks, in which a small fraction of malicious agents injects a targeted misclassification behavior into the global model by uploading polluted model updates to the server. In this work, we propose DifFense, an automated defense framework that protects an FL system from backdoor attacks by leveraging differential testing and two-step median absolute deviation (MAD) outlier detection, without requiring any prior knowledge of attack scenarios or direct access to local model parameters. We empirically show that our detection method defends against varying numbers of potential attackers while consistently achieving global-model convergence comparable to that of a model trained under federated averaging (FedAvg). We further corroborate the effectiveness and generalizability of our method by comparing it against prior defense techniques such as Multi-Krum and coordinate-wise median aggregation. Our detection method reduces the average backdoor accuracy of the global model to below 4%.
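
The abstract describes the defense only at a high level; as a rough illustration of the kind of two-step MAD (median absolute deviation) filtering it mentions, here is a minimal sketch in Python. All names (mad_outlier_filter, two_step_mad_filter, federated_average) and the scalar "suspicion scores" are assumptions for illustration, not the authors' implementation: DifFense derives its signal from differential testing of the submitted models, which is not reproduced here.

```python
import numpy as np

def mad_outlier_filter(scores, threshold=3.0):
    """Keep indices whose modified z-score (based on MAD) is within the threshold."""
    scores = np.asarray(scores, dtype=float)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median))
    if mad == 0:
        return np.arange(len(scores))  # no spread at all: keep everyone
    modified_z = 0.6745 * (scores - median) / mad
    return np.where(np.abs(modified_z) <= threshold)[0]

def two_step_mad_filter(scores, threshold=3.0):
    """Apply the MAD filter twice, re-estimating median/MAD on the first pass's survivors."""
    first_pass = mad_outlier_filter(scores, threshold)
    second_pass = mad_outlier_filter(np.asarray(scores)[first_pass], threshold)
    return first_pass[second_pass]

def federated_average(updates, kept_indices):
    """FedAvg-style aggregation over the client updates that passed the filter."""
    kept = [updates[i] for i in kept_indices]
    return np.mean(kept, axis=0)

# Toy example (hypothetical numbers): 10 benign clients plus 2 whose
# suspicion scores stand in for whatever signal differential testing produces.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.01, size=100) for _ in range(12)]
scores = list(rng.normal(1.0, 0.05, size=10)) + [3.0, 2.8]  # last two look anomalous

kept = two_step_mad_filter(scores)
global_update = federated_average(updates, kept)
print("kept clients:", kept)
```

Running the filter a second time on the survivors lets the median and MAD be re-estimated without the most extreme updates, which is one plausible reading of the "two-step" detection the abstract refers to.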

Related research

07/02/2023 – FedDefender: Backdoor Attack Defense in Federated Learning
Federated Learning (FL) is a privacy-preserving distributed machine lear...

01/24/2021 – Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addr...

11/08/2021 – BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination
In federated learning, each participant trains its local model with its ...

08/21/2023 – Federated Learning Robust to Byzantine Attacks: Achieving Zero Optimality Gap
In this paper, we propose a robust aggregation method for federated lear...

07/05/2022 – Defending against the Label-flipping Attack in Federated Learning
Federated learning (FL) provides autonomy and privacy by design to parti...

03/22/2022 – Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis
Model poisoning attacks on federated learning (FL) intrude in the entire...

09/14/2022 – Federated Learning based on Defending Against Data Poisoning Attacks in IoT
The rapidly expanding number of Internet of Things (IoT) devices is gene...
