Mitigating Backdoor Attacks in Federated Learning

10/28/2020
by Chen Wu, et al.

Malicious clients can attack federated learning systems by injecting malicious data, including backdoor samples, during the training phase. The compromised global model performs well on the validation dataset designed for the task, but a small subset of inputs carrying the backdoor pattern can trigger it into making wrong predictions. Previously, there was an arms race: attackers tried to conceal their attacks, while defenders tried to detect them during the aggregation stage of training on the server side of the federated learning system. In this work, we propose a new method to mitigate backdoor attacks after the training phase. Specifically, we design a federated pruning method that removes redundant neurons from the network and then adjusts the model's extreme weight values. Experiments conducted on distributed Fashion-MNIST show that our method can reduce the average attack success rate from 99.7% with only a small loss of test accuracy on the validation dataset. To minimize the influence of pruning on test accuracy, we can fine-tune after pruning, and the attack success rate drops to 6.4%.
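
The abstract describes the defense only at a high level; the sketch below is a minimal, hypothetical PyTorch illustration of the idea, not the authors' released implementation. Each simulated client measures the average per-channel activation of a convolutional layer on its local data, the server averages these scores, prunes the least-active channels, and then clamps extreme weight values. All helper names and the prune_frac / clip_std parameters are illustrative assumptions.

    # Hypothetical sketch of the post-training federated pruning defense.
    # Clients rank channels by activation; the server prunes the least-used
    # ones and clips extreme weights. Names and thresholds are illustrative.
    import torch
    import torch.nn as nn

    @torch.no_grad()
    def client_channel_activity(model, layer, data):
        # Average absolute activation per output channel on one client's data.
        acts = []
        hook = layer.register_forward_hook(
            lambda mod, inp, out: acts.append(out.abs().mean(dim=(0, 2, 3))))
        model(data)
        hook.remove()
        return acts[0]

    @torch.no_grad()
    def prune_and_clip(layer, activity, prune_frac=0.2, clip_std=3.0):
        # Zero out the least-active channels, then clamp surviving weights
        # so that extreme values are tamed.
        k = int(prune_frac * activity.numel())
        idx = torch.argsort(activity)[:k]   # least-active channels
        layer.weight[idx] = 0.0
        if layer.bias is not None:
            layer.bias[idx] = 0.0
        bound = float(clip_std * layer.weight.std())
        layer.weight.clamp_(-bound, bound)

    def federated_prune(model, layer, client_batches, prune_frac=0.2):
        # Server side: average per-client activity scores, then prune once.
        activity = torch.stack([client_channel_activity(model, layer, x)
                                for x in client_batches]).mean(dim=0)
        prune_and_clip(layer, activity, prune_frac)

    if __name__ == "__main__":
        # Toy Fashion-MNIST-shaped example with three simulated clients.
        model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(16, 10))
        batches = [torch.randn(8, 1, 28, 28) for _ in range(3)]
        federated_prune(model, model[0], batches, prune_frac=0.25)

In the paper's setting, clients would report activation statistics rather than raw data; averaging per-client scores on the server stands in for that aggregation step here, and fine-tuning after pruning would follow as a separate training round.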

Related research

02/01/2020
Learning to Detect Malicious Clients for Robust Federated Learning
Federated learning systems are vulnerable to attacks from malicious clie...

03/16/2022
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
Existing model poisoning attacks to federated learning assume that an at...

10/19/2021
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Federated learning—multi-party, distributed learning in a decentralized ...

02/03/2021
Provably Secure Federated Learning against Malicious Clients
Federated learning enables clients to collaboratively learn a shared glo...

01/18/2022
Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning
This paper explores previously unknown backdoor risks in HyperNet-based ...

01/31/2022
Securing Federated Sensitive Topic Classification against Poisoning Attacks
We present a Federated Learning (FL) based solution for building a distr...

11/29/2018
Analyzing Federated Learning through an Adversarial Lens
Federated learning distributes model training among a multitude of agent...
