Learning to Detect Malicious Clients for Robust Federated Learning

02/01/2020
by Suyi Li et al.

Federated learning systems are vulnerable to attacks from malicious clients. Because the central server cannot govern the behavior of the clients, a rogue client may attack the system by sending malicious model updates to the server, either to degrade the learning performance or to mount targeted model poisoning attacks (a.k.a. backdoor attacks). Detecting such malicious model updates and the attackers behind them in a timely manner is therefore critically important. In this work, we propose a new framework for robust federated learning in which the central server learns to detect and remove malicious model updates using a powerful detection model, enabling targeted defense. We evaluate our solution on both image classification and sentiment analysis tasks with a variety of machine learning models. Experimental results show that our solution yields robust federated learning that is resilient to both Byzantine attacks and targeted model poisoning attacks.
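The paper's learned detection model is not reproduced here, but the server-side filtering step it enables can be illustrated with a minimal sketch: score each client update against a robust center of all updates, drop high-scoring outliers, and average the rest. The function name, the median-distance score, and the threshold below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def filter_and_aggregate(updates, threshold=3.0):
    """Drop suspicious client updates before averaging.

    Illustrative sketch only: the paper trains a detection model, whereas
    this scores each update by its distance to the coordinate-wise median
    of all updates, relative to the median of those distances.
    `updates` is a list of flattened 1-D model-update vectors.
    """
    U = np.stack(updates)                      # shape: (n_clients, dim)
    center = np.median(U, axis=0)              # robust estimate of an honest update
    dists = np.linalg.norm(U - center, axis=1)
    scores = dists / (np.median(dists) + 1e-12)
    keep = scores < threshold                  # flag far-out updates as malicious
    return U[keep].mean(axis=0), keep
```

With nine benign updates clustered near 1.0 and one Byzantine update at 100.0, the Byzantine vector is scored far above the threshold and excluded, so the aggregate stays close to the benign mean.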


Related research

- FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information (10/20/2022)
- RobustFed: A Truth Inference Approach for Robust Federated Learning (07/18/2021)
- Abnormal Client Behavior Detection in Federated Learning (10/22/2019)
- TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks (10/19/2021)
- Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning (08/14/2022)
- Mitigating Backdoor Attacks in Federated Learning (10/28/2020)
- Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning (10/17/2022)
