Invariant Aggregator for Defending Federated Backdoor Attacks

10/04/2022
by Xiaoyang Wang et al.

Federated learning is gaining popularity as it enables training high-utility models across several clients without directly sharing their private data. As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients. Specifically, an adversary can perform backdoor attacks to control model predictions by poisoning the training dataset with a trigger. In this work, we propose a mitigation for backdoor attacks in a federated learning setup. Our solution forces the model optimization trajectory to focus on invariant directions that are generally useful for utility, and to avoid directions that favor a few, possibly malicious, clients. Concretely, we use the sign consistency of the pseudo-gradient (the client update) as an estimate of invariance. Our approach then performs dimension-wise filtering to remove pseudo-gradient elements with low sign consistency, after which a robust mean estimator eliminates outliers among the remaining dimensions. Our theoretical analysis further shows why both defenses are necessary and illustrates how the proposed solution protects the federated learning model. Empirical results on three datasets with different modalities and varying numbers of clients show that our approach mitigates backdoor attacks with negligible cost to model utility.
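To make the aggregation idea concrete, below is a minimal sketch of sign-consistency filtering followed by a robust mean, assuming client pseudo-gradients arrive as flat NumPy arrays of equal length. The threshold tau, the trim ratio, and the choice of a coordinate-wise trimmed mean as the robust estimator are illustrative assumptions, not the paper's exact algorithm or hyper-parameters.

import numpy as np

def invariant_aggregate(updates, tau=0.8, trim_ratio=0.1):
    """Aggregate client pseudo-gradients: keep only dimensions with high
    sign consistency, then take a robust (trimmed) mean over clients.

    updates:    list of 1-D np.ndarray, one pseudo-gradient per client
    tau:        minimum fraction of clients that must agree on the sign
    trim_ratio: fraction of extreme values trimmed from each end per dimension
    (all parameter names and defaults here are hypothetical)
    """
    U = np.stack(updates)                        # shape: (num_clients, dim)
    n = U.shape[0]

    # Dimension-wise sign consistency: how strongly clients agree on direction.
    signs = np.sign(U)
    consistency = np.abs(signs.sum(axis=0)) / n  # value in [0, 1] per dimension

    # Keep only "invariant" dimensions whose sign agreement exceeds tau.
    mask = consistency >= tau

    # Robust mean on the remaining dimensions: a coordinate-wise trimmed mean
    # to suppress outlier magnitudes from (possibly malicious) clients.
    k = int(np.floor(trim_ratio * n))
    sorted_U = np.sort(U, axis=0)
    trimmed = sorted_U[k:n - k] if n - 2 * k > 0 else sorted_U
    robust_mean = trimmed.mean(axis=0)

    # Filtered dimensions contribute nothing to the aggregated update.
    return np.where(mask, robust_mean, 0.0)

# Example usage with synthetic updates from 10 clients.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=128) + 0.5 for _ in range(10)]
aggregated_update = invariant_aggregate(client_updates, tau=0.7)

In this sketch, a backdoor update that pushes a few dimensions in a direction most clients disagree on is zeroed out by the consistency mask, while the trimmed mean bounds the influence of unusually large values on the dimensions that survive.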

Related research

07/29/2020 · Dynamic Federated Learning Model for Identifying Adversarial Clients
Federated learning, as a distributed learning that conducts the training...

09/13/2021 · SignGuard: Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering
Gradient-based training in federated learning is known to be vulnerable ...

10/19/2021 · TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Federated learning—multi-party, distributed learning in a decentralized ...

10/20/2022 · FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Federated learning is vulnerable to poisoning attacks in which malicious...

10/24/2022 · Detection and Prevention Against Poisoning Attacks in Federated Learning
This paper proposes and investigates a new approach for detecting and pr...

04/29/2022 · Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling
Recent advances in federated learning have demonstrated its promising ca...

08/24/2023 · A Huber Loss Minimization Approach to Byzantine Robust Federated Learning
Federated learning systems are susceptible to adversarial attacks. To co...
