Learning to Backdoor Federated Learning

03/06/2023
by Henger Li, et al.

In a federated learning (FL) system, malicious participants can easily embed backdoors into the aggregated model while maintaining the model's performance on the main task. In response, various defenses, including training-stage aggregation-based defenses and post-training mitigation defenses, have been proposed recently. While these defenses achieve reasonable performance against existing backdoor attacks, which are mainly heuristic-based, we show that they are insufficient in the face of more advanced attacks. In particular, we propose a general reinforcement learning-based backdoor attack framework in which the attacker first trains a (non-myopic) attack policy using a simulator built upon its local data and common knowledge of the FL system, and then applies that policy during actual FL training. Our attack framework is both adaptive and flexible, and it achieves strong attack performance and durability even under state-of-the-art defenses.
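To make the idea concrete, the sketch below simulates the attacker's planning stage under simplifying assumptions that are not taken from the paper: a logistic-regression global model on synthetic data, a server that norm-clips client updates, an attack policy that only chooses a per-round scaling factor for the poisoned update, and plain REINFORCE as the policy-gradient learner. The attacker trains the policy entirely inside its own simulator and then deploys the learned scaling strategy.

    # Minimal sketch (NOT the authors' implementation): an attacker learns, in a local
    # simulator of FL training, how to scale its poisoned update so the backdoor
    # survives a norm-clipping defense. All modelling choices are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM, CLIENTS, ROUNDS, CLIP = 20, 10, 15, 1.0   # toy FL configuration (assumed)
    ACTIONS = np.array([1.0, 2.0, 4.0, 8.0])       # candidate scaling factors for the poisoned update

    def client_grad(w, poisoned=False, scale=1.0):
        """One local logistic-regression step on synthetic data; the malicious client optimizes a backdoor."""
        X = rng.normal(size=(32, DIM))
        y = (X[:, 0] > 0).astype(float)            # benign task: predict the sign of feature 0
        if poisoned:
            X[:, 1] = 3.0                          # trigger: feature 1 forced to a constant
            y[:] = 1.0                             # backdoor target label
        p = 1.0 / (1.0 + np.exp(-X @ w))
        return scale * (X.T @ (p - y) / len(y))

    def fl_round(w, scale):
        """Server averages norm-clipped client updates (a simple aggregation defense)."""
        updates = [client_grad(w) for _ in range(CLIENTS - 1)]
        updates.append(client_grad(w, poisoned=True, scale=scale))
        clipped = [u * min(1.0, CLIP / (np.linalg.norm(u) + 1e-12)) for u in updates]
        return w - 0.5 * np.mean(clipped, axis=0)

    def backdoor_acc(w):
        """Fraction of triggered inputs classified as the backdoor target label."""
        X = rng.normal(size=(256, DIM))
        X[:, 1] = 3.0
        return float(np.mean(X @ w > 0))

    # Attacker's planning stage: REINFORCE over a softmax policy on the discrete
    # scaling actions, trained entirely inside the attacker's local simulator.
    theta = np.zeros(len(ACTIONS))
    for episode in range(300):
        w, grad_log = np.zeros(DIM), np.zeros_like(theta)
        probs = np.exp(theta - theta.max()); probs /= probs.sum()
        for _ in range(ROUNDS):
            a = rng.choice(len(ACTIONS), p=probs)
            w = fl_round(w, ACTIONS[a])
            step = -probs.copy(); step[a] += 1.0   # d log pi(a) / d theta
            grad_log += step
        theta += 0.1 * backdoor_acc(w) * grad_log  # episode return = backdoor success

    # Deployment: apply the learned (greedy) policy in a fresh FL run.
    best = ACTIONS[int(np.argmax(theta))]
    w = np.zeros(DIM)
    for _ in range(ROUNDS):
        w = fl_round(w, best)
    print(f"learned scaling factor: {best}, simulated backdoor accuracy: {backdoor_acc(w):.3f}")

In the paper's framework the policy can control richer aspects of the malicious update and is trained against the anticipated defense; the sketch only illustrates how a simulator built from the attacker's local data and common knowledge of the FL system lets it optimize a non-myopic attack policy before the real FL training begins.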

Related research

Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning (10/19/2022)
Gradient inversion attack enables recovery of training samples from mode...

Backdoor Federated Learning by Poisoning Backdoor-Critical Layers (08/08/2023)
Federated learning (FL) has been widely deployed to enable machine learn...

A Little Is Enough: Circumventing Defenses For Distributed Learning (02/16/2019)
Distributed learning is central for large-scale training of deep-learnin...

Blind leads Blind: A Zero-Knowledge Attack on Federated Learning (02/07/2022)
Attacks on Federated Learning (FL) can severely reduce the quality of th...

FL-Defender: Combating Targeted Attacks in Federated Learning (07/02/2022)
Federated learning (FL) enables learning a global machine learning model...

Towards a Defense against Backdoor Attacks in Continual Federated Learning (05/24/2022)
Backdoor attacks are a major concern in federated learning (FL) pipeline...

Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning (04/21/2023)
Federated learning (FL) is vulnerable to poisoning attacks, where advers...
