BaFFLe: Backdoor detection via Feedback-based Federated Learning

11/04/2020
by   Sebastien Andreina, et al.

Recent studies have shown that federated learning (FL) is vulnerable to poisoning attacks that aim to inject a backdoor into the global model. These attacks are effective even when performed by a single client, and undetectable by most existing defensive techniques. In this paper, we propose a novel defense, dubbed BaFFLe—Backdoor detection via Feedback-based Federated Learning—to secure FL against backdoor attacks. The core idea behind BaFFLe is to leverage the data of multiple clients not only for training but also for uncovering model poisoning. Namely, we exploit the availability of multiple, rich datasets at the various clients by incorporating a feedback loop into the FL process, integrating the views of those clients when deciding whether a given model update is genuine or not. We show that this powerful construct can achieve very high detection rates against state-of-the-art backdoor attacks, even when relying on straightforward methods to validate the model. Specifically, we show by means of evaluation on the CIFAR-10 and FEMNIST datasets that, by combining the feedback loop with a method that suspects poisoning attempts by assessing the per-class classification performance of the updated model, BaFFLe reliably detects state-of-the-art semantic-backdoor attacks with a detection accuracy of 100%. Moreover, our solution can detect an adaptive attack which is tuned to bypass the defense.
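The validation method mentioned in the abstract can be illustrated with a minimal sketch: each validating client compares the per-class error rates of the proposed update against those of the last accepted global model on its local data, and flags the update if any class's performance shifts sharply. The function names and the `threshold` parameter below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def per_class_error_rates(predictions, labels, num_classes):
    """Error rate of a model on each class of a local validation set."""
    rates = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() == 0:
            continue  # no local samples for this class
        rates[c] = np.mean(predictions[mask] != labels[mask])
    return rates

def suspects_poisoning(update_preds, reference_preds, labels,
                       num_classes, threshold=0.2):
    """A client votes 'suspicious' if any class's error rate shifts
    by more than `threshold` (illustrative value) between the last
    accepted global model and the proposed update."""
    ref = per_class_error_rates(reference_preds, labels, num_classes)
    new = per_class_error_rates(update_preds, labels, num_classes)
    return bool(np.any(np.abs(new - ref) > threshold))
```

In the feedback loop, the server would collect such votes from a set of validating clients each round and reject the update once the number of "suspicious" votes exceeds a quorum.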
