Defending Against Backdoors in Federated Learning with Robust Learning Rate

07/07/2020
by   Mustafa Safa Ozdayi, et al.

Federated Learning (FL) allows a set of agents to collaboratively train a model in a decentralized fashion without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to its decentralized and unvetted data. One important line of attack against FL is the backdoor attack. In a backdoor attack, an adversary tries to embed a backdoor trigger functionality into the model during training, which can later be activated to cause a desired misclassification. To prevent such backdoor attacks, we propose a lightweight defense that requires no change to the FL structure. At a high level, our defense is based on carefully adjusting the server's learning rate, per dimension, at each round based on the sign information of the agents' updates. We first conjecture the necessary steps to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence in support of our conjecture. We test our defense against backdoor attacks under different settings and observe that either the backdoor is completely eliminated, or its accuracy is significantly reduced. Overall, our experiments suggest that our approach significantly outperforms some of the recently proposed defenses in the literature, while having minimal influence on the accuracy of the trained models.
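The per-dimension sign-based adjustment described above can be sketched as follows. This is a minimal, hypothetical reading of the defense, not the authors' implementation: the function name, the agreement threshold `theta`, and the use of a plain average are illustrative assumptions. The idea is that for each parameter dimension, the server keeps a positive learning rate only where enough agents agree on the update direction, and flips its sign otherwise.

```python
import numpy as np

def robust_lr_aggregate(updates, server_lr=1.0, theta=4):
    """Aggregate agent updates with a sign-based robust learning rate.

    Illustrative sketch (names and threshold are assumptions):
    - updates: list of flattened per-agent update vectors
    - server_lr: magnitude of the server learning rate
    - theta: minimum number of agents that must agree on a
      dimension's sign for the learning rate to stay positive
    """
    updates = np.stack(updates)                    # (n_agents, dim)
    sign_agreement = np.abs(np.sign(updates).sum(axis=0))
    # +server_lr where agreement meets the threshold, -server_lr otherwise
    lr = np.where(sign_agreement >= theta, server_lr, -server_lr)
    return lr * updates.mean(axis=0)               # per-dimension scaled average
```

For example, with five agents all pushing a dimension in the same direction and `theta=4`, that dimension is updated normally; if only three of five agree, the sign agreement (|3 - 2| = 1) falls below the threshold and the averaged update for that dimension is negated, counteracting a potential backdoor direction.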


