TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks

10/19/2021
by Atul Sharma, et al.

Federated learning—multi-party, distributed learning in a decentralized environment—is vulnerable to model poisoning attacks, even more so than centralized learning approaches. This is because malicious clients can collude and send in carefully tailored model updates to make the global model inaccurate. This motivated the development of Byzantine-resilient federated learning algorithms, such as Krum, Bulyan, FABA, and FoolsGold. However, a recently developed untargeted model poisoning attack showed that all prior defenses can be bypassed. The attack rests on the intuition that simply flipping the sign of the gradient updates computed by the optimizer, at a set of malicious clients, can divert the model away from the optimum and increase the test error rate. In this work, we develop TESSERACT—a defense against this directed deviation attack, a state-of-the-art model poisoning attack. TESSERACT is based on a simple intuition: in a federated learning setting, certain patterns of gradient flips are indicative of an attack. This intuition is remarkably stable across different learning algorithms, models, and datasets. TESSERACT assigns reputation scores to the participating clients based on their behavior during the training phase and then takes a weighted contribution of the clients. We show that TESSERACT provides robustness against even a white-box version of the attack.
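The reputation-weighted aggregation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual algorithm: the functions `flip_fraction` and `aggregate_with_reputation`, the deviation-from-median scoring, and the softmax temperature are all assumptions made for the sake of the example. The idea it demonstrates is the same, though: measure each client's gradient sign-flip behavior across rounds, down-weight clients whose flip pattern deviates from the cohort, and take a weighted sum of updates.

```python
import numpy as np

def flip_fraction(prev_update, curr_update):
    """Fraction of coordinates whose gradient sign flipped between
    two consecutive rounds for one client."""
    return np.mean(np.sign(prev_update) != np.sign(curr_update))

def aggregate_with_reputation(prev_updates, curr_updates, temperature=10.0):
    """Hypothetical reputation-weighted aggregation (illustrative only).

    Clients whose flip fraction deviates strongly from the cohort
    median are assigned small weights via a softmax over negative
    deviation; the global update is the weighted sum of client updates.
    """
    flips = np.array([flip_fraction(p, c)
                      for p, c in zip(prev_updates, curr_updates)])
    deviation = np.abs(flips - np.median(flips))
    # Suspicious clients (large deviation) receive near-zero weight.
    weights = np.exp(-temperature * deviation)
    weights /= weights.sum()
    return np.sum([w * c for w, c in zip(weights, curr_updates)], axis=0)

# Example: three benign clients keep gradient signs stable; one
# malicious client flips every sign (the directed deviation attack).
prev = [np.array([1.0, 1.0, 1.0])] * 4
curr = [np.array([0.9, 1.1, 1.0]),
        np.array([1.0, 0.8, 1.2]),
        np.array([1.1, 1.0, 0.9]),
        np.array([-1.0, -1.0, -1.0])]  # malicious: all signs flipped
agg = aggregate_with_reputation(prev, curr)
```

With the malicious client down-weighted, the aggregate stays aligned with the benign clients' direction (all positive coordinates) instead of being dragged toward the sign-flipped update.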


