Get Rid Of Your Trail: Remotely Erasing Backdoors in Federated Learning

04/20/2023
by   Manaar Alam, et al.

Federated Learning (FL) enables collaborative deep learning across multiple participants without exposing their sensitive personal data. However, the distributed nature of FL and the unvetted data of its participants make it vulnerable to backdoor attacks: adversaries inject malicious functionality into the centralized model during training, causing intentional misclassifications of specific adversary-chosen inputs. While previous research has demonstrated successful injection of persistent backdoors in FL, that persistence is also a liability, since a backdoor's continued presence in the centralized model can prompt the central aggregation server to take preventive measures and penalize the adversaries. This paper therefore proposes a methodology that enables adversaries to effectively remove backdoors from the centralized model once their objectives are met or detection seems likely. The proposed approach extends the concept of machine unlearning and presents strategies that preserve the performance of the centralized model while preventing over-unlearning of information unrelated to the backdoor patterns, keeping the adversaries stealthy as the backdoors are removed. To the best of our knowledge, this is the first work to explore machine unlearning in FL to remove backdoors for the benefit of adversaries. An exhaustive evaluation on image classification scenarios demonstrates that the proposed method efficiently removes backdoors injected into the centralized model by state-of-the-art attacks across multiple configurations.


Related research

- PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations (05/26/2022)
  Federated Learning (FL) enables numerous participants to train deep lear...
- FLGUARD: Secure and Private Federated Learning (01/06/2021)
  Recently, a number of backdoor attacks against Federated Learning (FL) h...
- FedDICE: A ransomware spread detection in a distributed integrated clinical environment using federated learning and SDN based mitigation (06/09/2021)
  An integrated clinical environment (ICE) enables the connection and coor...
- Secure and Efficient Decentralized Federated Learning with Data Representation Protection (05/21/2022)
  Federated learning (FL) is a promising technical support to the vision o...
- Towards Building a Robust and Fair Federated Learning System (11/20/2020)
  Federated Learning (FL) has emerged as a promising practical framework f...
- Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning (04/21/2023)
  Federated learning (FL) is vulnerable to poisoning attacks, where advers...
- Accumulative Poisoning Attacks on Real-time Data (06/18/2021)
  Collecting training data from untrusted sources exposes machine learning...
