Neurotoxin: Durable Backdoors in Federated Learning

06/12/2022
by Zhengming Zhang, et al.

Due to their decentralized nature, federated learning (FL) systems have an inherent vulnerability to adversarial backdoor attacks during training. In this type of attack, the attacker's goal is to use poisoned updates to implant so-called backdoors into the learned model such that, at test time, the model's outputs can be fixed to a given target for certain inputs. (As a simple toy example, if a user types "people from New York" into a mobile keyboard app that uses a backdoored next-word-prediction model, the model could autocomplete the sentence to "people from New York are rude".) Prior work has shown that backdoors can be inserted into FL models, but these backdoors are often not durable, i.e., they do not remain in the model after the attacker stops uploading poisoned updates. Thus, since training typically continues progressively in production FL systems, an inserted backdoor may not survive until deployment. Here, we propose Neurotoxin, a simple one-line modification to existing backdoor attacks that works by attacking parameters that change less in magnitude during training. We conduct an exhaustive evaluation across ten natural language processing and computer vision tasks, and we find that Neurotoxin can double the durability of state-of-the-art backdoors.
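To make the mechanism concrete, below is a minimal PyTorch sketch of the masking idea, written from the abstract's description rather than from the authors' released code. The function name neurotoxin_mask, the mask_ratio parameter, and the use of the previous round's observed global update as a proxy for the benign gradient direction are illustrative assumptions.

```python
import torch

def neurotoxin_mask(benign_update: torch.Tensor,
                    mask_ratio: float = 0.99) -> torch.Tensor:
    """Return a 0/1 mask that keeps the bottom `mask_ratio` fraction of
    coordinates ranked by |benign_update|, i.e., the parameters that
    benign clients change the least."""
    flat = benign_update.abs().flatten()
    num_top = flat.numel() - int(mask_ratio * flat.numel())
    mask = torch.ones_like(flat)
    if num_top > 0:
        # zero out the most heavily updated coordinates
        _, top_idx = torch.topk(flat, num_top)
        mask[top_idx] = 0.0
    return mask.view_as(benign_update)

# Attacker's local step (sketch): project the poisoned gradient onto the
# rarely-updated coordinates before taking the optimizer step.
# for p, g_benign in zip(model.parameters(), observed_global_update):
#     p.grad.mul_(neurotoxin_mask(g_benign, mask_ratio=0.99))
```

The intuition is that a backdoor written into coordinates the benign gradient barely touches is overwritten much more slowly by subsequent honest rounds, which is why the attack remains after the attacker stops participating.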
