Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning

08/14/2022
by Ashish Gupta, et al.

Federated learning offers a framework for training a machine learning model in a distributed fashion while preserving the privacy of the participants. Since the server cannot govern the clients' actions, nefarious clients may attack the global model by sending malicious local gradients. At the same time, there may also be unreliable clients who are benign but hold a portion of low-quality training data (e.g., blurry or low-resolution images), and thus may appear similar to malicious clients. A defense mechanism therefore needs to perform a three-fold differentiation, which is much more challenging than the conventional (two-fold) case. This paper introduces MUD-HoG, a novel defense algorithm that addresses this challenge in federated learning using a long-short history of gradients, and treats the detected malicious and unreliable clients differently. In addition, MUD-HoG can distinguish between targeted and untargeted attacks among malicious clients, unlike most prior works which consider only one type of attack. Specifically, we take into account sign-flipping, additive-noise, label-flipping, and multi-label-flipping attacks under a non-IID setting. We evaluate MUD-HoG against six state-of-the-art methods on two datasets. The results show that MUD-HoG outperforms all of them in terms of accuracy, precision, and recall in the presence of a mixture of multiple (four) types of attackers as well as unreliable clients. Moreover, unlike most prior works, which can only tolerate a low population of harmful users, MUD-HoG can work with, and successfully detect, malicious and unreliable clients comprising up to 47.5% of the total population. Our code is open-sourced at https://github.com/LabSAINT/MUD-HoG_Federated_Learning.
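To make the core idea more concrete, here is a minimal, hypothetical sketch of how a server might use long and short histories of client gradients to flag suspicious participants. This is not the MUD-HoG algorithm itself: the window length, threshold, coordinate-wise median reference, and cosine-similarity rule are illustrative assumptions only; the actual detection and treatment logic is described in the paper and the open-source repository linked above.

```python
# Illustrative sketch of a "long-short history of gradients" check,
# NOT the authors' MUD-HoG implementation. The names long_hogs/short_hogs,
# WINDOW, THRESHOLD, and the median/cosine scoring rule are assumptions.
import numpy as np

NUM_CLIENTS = 10
DIM = 5           # flattened model-update dimension (toy size)
WINDOW = 3        # length of the "short" history (assumed)
THRESHOLD = 0.0   # cosine-similarity cutoff (assumed)

rng = np.random.default_rng(0)

# Per-client gradient histories collected by the server over rounds.
histories = [[] for _ in range(NUM_CLIENTS)]

def cosine(a, b):
    """Cosine similarity between two flattened gradient vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def detect_suspicious(histories):
    """Flag clients whose accumulated (long) and recent (short) gradient
    directions both deviate from the coordinate-wise median of all clients."""
    long_hogs = np.array([np.sum(h, axis=0) for h in histories])
    short_hogs = np.array([np.sum(h[-WINDOW:], axis=0) for h in histories])
    ref_long = np.median(long_hogs, axis=0)
    ref_short = np.median(short_hogs, axis=0)
    flagged = []
    for cid in range(len(histories)):
        if cosine(long_hogs[cid], ref_long) < THRESHOLD and \
           cosine(short_hogs[cid], ref_short) < THRESHOLD:
            flagged.append(cid)
    return flagged

# Simulate a few rounds in which client 0 performs a sign-flipping attack.
for _ in range(6):
    honest_direction = rng.normal(size=DIM)
    for cid in range(NUM_CLIENTS):
        grad = honest_direction + 0.1 * rng.normal(size=DIM)
        if cid == 0:
            grad = -grad  # sign-flipping attack
        histories[cid].append(grad)

print("Flagged clients:", detect_suspicious(histories))  # expected: [0]
```

The sketch only illustrates why combining a long-run accumulation with a short recent window can help: a sign-flipping client disagrees with the majority in both views, whereas a benign but unreliable client with noisy data would typically deviate less consistently.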
