Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

07/09/2020
by Hongyi Wang, et al.

Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs). A range of FL backdoor attacks have been introduced in the literature, along with methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this work, we provide evidence to the contrary. We first establish that, in the general case, robustness to backdoors implies model robustness to adversarial examples, a major open problem in itself. Furthermore, detecting the presence of a backdoor in an FL model is infeasible assuming access only to first-order oracles or polynomial-time computation. We couple our theoretical results with a new family of backdoor attacks, which we refer to as edge-case backdoors. An edge-case backdoor forces a model to misclassify seemingly easy inputs that are, however, unlikely to be part of the training or test data, i.e., they live on the tail of the input distribution. We explain how these edge-case backdoors can lead to unsavory failures and may have serious repercussions on fairness, and demonstrate that with careful tuning on the adversary's side, one can insert them across a range of machine learning tasks (e.g., image classification, OCR, text prediction, sentiment analysis).
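
The abstract describes edge-case backdoors only at a high level. As a rough illustration (not the authors' code), the sketch below shows how a malicious FL client could mount such an attack: it mixes tail-of-distribution inputs, relabeled with an attacker-chosen target class, into its local training batches, and optionally scales its update so the backdoor survives server-side averaging. The function name, its arguments (global_model, clean_loader, edge_case_inputs, target_label, boost), and the PyTorch setting are illustrative assumptions.

```python
# Minimal sketch of an edge-case data-poisoning client in FL (illustrative only).
# Assumes: global_model is a torch.nn.Module classifier, clean_loader yields the
# client's benign (x, y) batches, and edge_case_inputs is a small list of
# tail-of-distribution input tensors to be relabeled with target_label.
import copy
import torch
import torch.nn.functional as F

def poisoned_client_update(global_model, clean_loader, edge_case_inputs,
                           target_label, lr=0.01, epochs=2, boost=1.0):
    """Train locally on a mix of clean and relabeled edge-case data, then
    optionally boost the update before returning it to the server."""
    model = copy.deepcopy(global_model)
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    # Relabel the edge-case inputs with the attacker-chosen target class.
    edge_x = torch.stack(edge_case_inputs)
    edge_y = torch.full((edge_x.size(0),), target_label, dtype=torch.long)

    for _ in range(epochs):
        for x, y in clean_loader:
            xb = torch.cat([x, edge_x])  # mix a few poisoned tail examples
            yb = torch.cat([y, edge_y])  # into every clean batch
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()

    # Optionally amplify the deviation from the global model ("model replacement")
    # so the backdoored update dominates averaging with honest clients.
    g_state, l_state = global_model.state_dict(), model.state_dict()
    new_state = {}
    for name in g_state:
        if g_state[name].dtype.is_floating_point:
            new_state[name] = g_state[name] + boost * (l_state[name] - g_state[name])
        else:
            new_state[name] = l_state[name]  # integer buffers are passed through
    return new_state
```

With boost=1.0 this is plain data poisoning; larger values correspond to the model-replacement style of attack, at the cost of being easier for norm-based defenses to flag.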

Related research

Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario (10/20/2022)
Federated learning (FL) allows participants to collaboratively train mac...

FLAD: Adaptive Federated Learning for DDoS Attack Detection (05/13/2022)
Federated Learning (FL) has been recently receiving increasing considera...

Meta Federated Learning (02/10/2021)
Due to its distributed methodology alongside its privacy-preserving feat...

FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs (06/08/2023)
This paper introduces FedMLSecurity, a benchmark that simulates adversar...

Neurotoxin: Durable Backdoors in Federated Learning (06/12/2022)
Due to their decentralized nature, federated learning (FL) systems have ...

Dynamic backdoor attacks against federated learning (11/15/2020)
Federated Learning (FL) is a new machine learning framework, which enabl...

PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations (05/26/2022)
Federated Learning (FL) enables numerous participants to train deep lear...
