Multi-metrics adaptively identifies backdoors in Federated learning

by Siquan Huang et al.

The decentralized and privacy-preserving nature of federated learning (FL) makes it vulnerable to backdoor attacks that aim to manipulate the behavior of the resulting model on specific adversary-chosen inputs. However, most existing defenses based on statistical differences take effect only against specific attacks, especially when the malicious gradients are similar to benign ones or the data are highly non-independent and identically distributed (non-IID). In this paper, we revisit distance-based defense methods and discover that i) Euclidean distance becomes meaningless in high dimensions and ii) malicious gradients with diverse characteristics cannot be identified by a single metric. To this end, we present a simple yet effective defense strategy with multiple metrics and dynamic weighting to identify backdoors adaptively. Furthermore, our novel defense does not rely on predefined assumptions about attack settings or data distributions and has little impact on benign performance. To evaluate the effectiveness of our approach, we conduct comprehensive experiments on different datasets under various attack settings, where our method achieves the best defensive performance. For instance, we achieve the lowest backdoor accuracy of 3.06%, demonstrating significant superiority over previous defenses. The results also demonstrate that our method adapts well to a wide range of non-IID degrees without sacrificing benign performance.
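The core idea of combining several distance metrics with dynamic weights can be illustrated with a minimal sketch. Note that this is a hypothetical illustration, not the authors' exact algorithm: the metric set (Euclidean, Manhattan, cosine distance to the coordinate-wise median) and the variance-based weighting below are assumptions made for demonstration.

```python
import numpy as np

def multi_metric_scores(updates, eps=1e-12):
    """Score each client update by combining several distance metrics.

    Hypothetical sketch of a multi-metric defense: the paper's actual
    metric set and dynamic weighting scheme may differ. Each update is
    compared against the coordinate-wise median under L2, L1, and cosine
    distance, and each metric is weighted by how strongly it separates
    the clients (its variance after normalization).
    """
    updates = np.asarray(updates, dtype=float)
    center = np.median(updates, axis=0)  # robust reference update

    # Per-client distance to the reference under each metric.
    l2 = np.linalg.norm(updates - center, axis=1)
    l1 = np.abs(updates - center).sum(axis=1)
    cos = 1.0 - (updates @ center) / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(center) + eps)

    metrics = np.stack([l2, l1, cos])  # shape: (3, n_clients)

    # Min-max normalize each metric so the three scales are comparable.
    mn = metrics.min(axis=1, keepdims=True)
    rng = metrics.max(axis=1, keepdims=True) - mn + eps
    norm = (metrics - mn) / rng

    # Dynamic weighting: metrics that spread clients apart more
    # (higher variance) get more influence on the combined score.
    w = norm.var(axis=1)
    w = w / (w.sum() + eps)
    return w @ norm  # one combined anomaly score per client

def filter_and_average(updates, keep_frac=0.5):
    """Aggregate only the lowest-scoring (most typical) updates."""
    scores = multi_metric_scores(updates)
    n_keep = max(1, int(len(scores) * keep_frac))
    keep = np.argsort(scores)[:n_keep]
    return np.mean(np.asarray(updates, dtype=float)[keep], axis=0)
```

Because the weights are recomputed from the current round's updates, a metric that fails to distinguish a particular attack contributes little, which is one way to realize the adaptive behavior the abstract describes.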




