FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients

07/19/2022
by Zaixi Zhang, et al.

Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients corrupt the global model by sending manipulated model updates to the server. Existing defenses mainly rely on Byzantine-robust FL methods, which aim to learn an accurate global model even if some clients are malicious. However, they can only resist a small number of malicious clients in practice; defending against model poisoning attacks mounted by a large number of malicious clients remains an open challenge. Our FLDetector addresses this challenge by detecting malicious clients. FLDetector aims to detect and remove the majority of the malicious clients so that a Byzantine-robust FL method can learn an accurate global model using the remaining clients. Our key observation is that, in model poisoning attacks, the model updates a client sends across multiple iterations are inconsistent. Therefore, FLDetector detects malicious clients by checking the consistency of their model updates. Roughly speaking, the server predicts a client's model update in each iteration from its historical model updates using the Cauchy mean value theorem and L-BFGS, and flags a client as malicious if the received and predicted model updates are inconsistent over multiple iterations. Our extensive experiments on three benchmark datasets show that FLDetector accurately detects malicious clients under multiple state-of-the-art model poisoning attacks. After removing the detected malicious clients, existing Byzantine-robust FL methods can learn accurate global models. Our code is available at https://github.com/zaixizhang/FLDetector.
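For intuition, below is a minimal sketch (in Python with numpy and scikit-learn; this is not the authors' released code, see the repository linked above for the actual implementation) of the detection loop the abstract describes: the server approximates the integrated Hessian from recent global-model and aggregated-update differences via L-BFGS, predicts each client's update as its previous update plus a Hessian-vector product (following the Cauchy mean value theorem), scores clients by how far their submitted updates deviate from these predictions over a sliding window, and flags the cluster with the larger scores. The function names, window size, and clustering step shown here are illustrative assumptions.

```python
# Hedged sketch of FLDetector's core idea; not the authors' code.
import numpy as np
from sklearn.cluster import KMeans


def lbfgs_hessian_vector_product(S, Y, v):
    """Approximate H @ v with the L-BFGS compact representation of the Hessian,
    where column k of S is a past global-model difference w_{t-k} - w_{t-k-1}
    and column k of Y is the corresponding aggregated-update difference."""
    sigma = (Y[:, -1] @ Y[:, -1]) / (S[:, -1] @ Y[:, -1])  # B0 = sigma * I
    SY = S.T @ Y
    D = np.diag(np.diag(SY))           # diag(s_k^T y_k)
    L = np.tril(SY, k=-1)              # strictly lower-triangular part of S^T Y
    M = np.block([[sigma * (S.T @ S), L],
                  [L.T, -D]])
    rhs = np.concatenate([sigma * (S.T @ v), Y.T @ v])
    return sigma * v - np.hstack([sigma * S, Y]) @ np.linalg.solve(M, rhs)


def predicted_updates(prev_updates, S, Y, global_model_diff):
    """Predict each client's current update from its previous update plus the
    estimated Hessian-vector product (Cauchy mean value theorem)."""
    hvp = lbfgs_hessian_vector_product(S, Y, global_model_diff)
    return [g_prev + hvp for g_prev in prev_updates]


def update_suspicious_scores(pred, received, score_history, window=10):
    """Append this round's normalized prediction errors and return each client's
    score averaged over the last `window` rounds."""
    dist = np.array([np.linalg.norm(p - r) for p, r in zip(pred, received)])
    score_history.append(dist / dist.sum())
    return np.mean(score_history[-window:], axis=0)


def flag_malicious(scores):
    """Split clients into two clusters by suspicious score and flag the cluster
    with the larger scores (the paper additionally runs a gap-statistics test
    first to confirm two clusters exist; omitted here)."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores.reshape(-1, 1))
    return np.where(labels == labels[np.argmax(scores)])[0]
```

The sliding-window averaging is what makes the score robust to a single noisy round: an honest client whose update deviates once is not flagged, while a client whose updates are consistently inconsistent with its own history accumulates a high score.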


Related research:

- AFLGuard: Byzantine-robust Asynchronous Federated Learning (12/13/2022)
- FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information (10/20/2022)
- Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection (04/28/2022)
- Secure Federated Learning against Model Poisoning Attacks via Client Filtering (03/31/2023)
- BayBFed: Bayesian Backdoor Defense for Federated Learning (01/23/2023)
- Mitigating Byzantine Attacks in Federated Learning (10/15/2020)
- Suppressing Poisoning Attacks on Federated Learning for Medical Imaging (07/15/2022)
