Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling

04/29/2022
by   KiYoon Yoo, et al.
0

Recent advances in federated learning have demonstrated its promising capability to learn on decentralized datasets. However, a considerable amount of work has raised concerns due to the potential risks of adversaries participating in the framework to poison the global model for an adversarial purpose. This paper investigates the feasibility of model poisoning for backdoor attacks through rare word embeddings of NLP models in text classification and sequence-to-sequence tasks. In text classification, less than 1% of adversary clients suffices to manipulate the model output without any drop in the performance of clean sentences. For a less complex dataset, a mere 0.1% of adversary clients is enough to poison the global model effectively. We also propose a technique specialized in the federated learning scheme called gradient ensemble, which enhances the backdoor performance in all experimental settings.

READ FULL TEXT

page 5

page 13

research
07/29/2020

Dynamic Federated Learning Model for Identifying Adversarial Clients

Federated learning, as a distributed learning that conducts the training...
research
10/24/2022

Detection and Prevention Against Poisoning Attacks in Federated Learning

This paper proposes and investigates a new approach for detecting and pr...
research
11/18/2019

Can You Really Backdoor Federated Learning?

The decentralized nature of federated learning makes detecting and defen...
research
05/10/2023

FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature

Federated learning allows multiple parties to collaborate in learning a ...
research
10/04/2022

Invariant Aggregator for Defending Federated Backdoor Attacks

Federated learning is gaining popularity as it enables training of high-...
research
10/30/2022

Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction

In this paper we present new attacks against federated learning when use...
research
07/11/2020

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification

It has been proved that deep neural networks are facing a new threat cal...

Please sign up or login with your details

Forgot password? Click here to reset