XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning

12/28/2022
by   Jianyi Zhang, et al.

Federated Learning (FL) has received increasing attention due to its privacy-protection capability. However, its base algorithm, FedAvg, is vulnerable to so-called backdoor attacks. Prior researchers have proposed several robust aggregation methods; unfortunately, many of them are unable to defend against backdoor attacks. Moreover, attackers have recently devised hiding techniques that further improve the stealthiness of backdoor attacks, defeating all existing robust aggregation methods. To tackle this threat, we propose a new aggregation method, X-raying Models with A Matrix (XMAM), to reveal malicious local model updates submitted by backdoor attackers. Since we observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates, we focus on the Softmax layer's output, in which backdoor attackers find it difficult to hide their malicious behavior. Specifically, like an X-ray examination, we probe the local model updates by feeding a matrix as input and collecting their Softmax-layer outputs; we then exclude updates whose outputs are abnormal by clustering. Without requiring any training dataset on the server, extensive evaluations show that XMAM can effectively distinguish malicious local model updates from benign ones. For instance, when other methods fail to defend against backdoor attacks at no more than 20% malicious clients, our method can tolerate 45% malicious clients in the black-box mode and about 30% in Projected Gradient Descent (PGD) mode. Besides, under adaptive attacks, the results demonstrate that XMAM can still complete the global model training task even when 40% of clients are malicious. Finally, we analyze our method's screening complexity, and the results show that XMAM is about 10-10000 times faster than existing methods.
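The screening idea described above can be sketched in a few lines. Everything here is an illustrative assumption: the "local models" are single linear layers, the probe is an all-ones matrix, and a simple median-distance rule stands in for the paper's clustering step; the point is only to show that a shared probe input exposes anomalous Softmax outputs without any server-side training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def model_softmax_output(W, probe):
    # "X-ray": feed the same probe matrix to a (toy) linear model
    # and read out its Softmax-layer response as a flat vector.
    return softmax(probe @ W.T).ravel()

# Simulate 8 benign local updates (small deviations from the global
# model) and 2 heavily perturbed "backdoored" updates.
n_features, n_classes = 16, 5
W_global = rng.normal(size=(n_classes, n_features))
benign = [W_global + 0.01 * rng.normal(size=W_global.shape) for _ in range(8)]
malicious = [W_global + rng.normal(size=W_global.shape) for _ in range(2)]
updates = benign + malicious  # indices 8 and 9 are malicious

# One shared probe matrix; no client data is needed on the server.
probe = np.ones((1, n_features))
outputs = np.stack([model_softmax_output(W, probe) for W in updates])

# Toy screening rule: keep updates whose Softmax output stays close to
# the element-wise median output (XMAM itself clusters the outputs).
center = np.median(outputs, axis=0)
dist = np.linalg.norm(outputs - center, axis=1)
kept = [i for i, d in enumerate(dist) if d <= 3 * np.median(dist)]
print(kept)
```

With this setup the two heavily perturbed updates produce Softmax responses far from the benign cluster and are screened out, while the benign majority is retained for aggregation.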

