Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning

10/17/2022
by Yuxin Wen, et al.

Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates. At the same time, the attack power of an individual user is limited because their updates are quickly drowned out by those of many other users. Existing attacks do not account for the future behavior of other users, so they require many sequential updates whose effects are quickly erased. We propose an attack that anticipates and accounts for the entire federated learning pipeline, including the behavior of other clients, and ensures that backdoors take effect quickly and persist even after multiple rounds of community updates. We show that this new attack is effective in realistic scenarios where the attacker contributes to only a small fraction of randomly sampled rounds, and demonstrate the attack on image classification, next-word prediction, and sentiment analysis.
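To make the core idea concrete, here is a minimal sketch, not the authors' implementation: a toy logistic-regression task in which the attacker backpropagates through several simulated future FedAvg rounds, so its malicious update is optimized to leave a backdoor that survives the anticipated benign aggregations. The benign-client model, the trigger (a spike in one input feature), and all hyperparameters (N_CLIENTS, ROUNDS_AHEAD, LR, the step counts) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

D, LR, N_CLIENTS, ROUNDS_AHEAD = 20, 0.5, 10, 3  # illustrative values

def task_loss(theta, x, y):
    # clean-task loss of a toy logistic-regression "global model"
    return F.binary_cross_entropy_with_logits(x @ theta, y)

def benign_step(theta, x, y):
    # one anticipated local SGD step by a benign client; the closed-form
    # logistic-regression gradient keeps the step differentiable in theta
    grad = x.t() @ (torch.sigmoid(x @ theta) - y) / len(y)
    return theta - LR * grad

def loss_after_anticipated_rounds(theta_mal, theta_global, benign_data,
                                  trigger_x, trigger_y):
    # round t: the attacker's update is averaged with the benign updates
    # it anticipates from the other sampled clients
    clients = [benign_step(theta_global, x, y)
               for x, y in benign_data[:N_CLIENTS - 1]]
    theta = torch.stack(clients + [theta_mal]).mean(dim=0)
    # rounds t+1 .. t+ROUNDS_AHEAD: only benign clients participate, so the
    # backdoor must survive their aggregated updates
    for _ in range(ROUNDS_AHEAD):
        theta = torch.stack([benign_step(theta, x, y)
                             for x, y in benign_data[:N_CLIENTS]]).mean(dim=0)
    # backdoor objective: triggered inputs should still map to the target label
    return task_loss(theta, trigger_x, trigger_y)

torch.manual_seed(0)
theta_global = torch.zeros(D)
benign_data = [(torch.randn(32, D), torch.randint(0, 2, (32,)).float())
               for _ in range(N_CLIENTS)]
trigger_x = torch.randn(16, D)
trigger_x[:, 0] = 5.0       # hypothetical trigger: spike in feature 0
trigger_y = torch.ones(16)  # attacker-chosen target label

# outer loop: optimize the malicious update by backpropagating through
# the simulated future rounds
theta_mal = theta_global.clone().requires_grad_(True)
opt = torch.optim.Adam([theta_mal], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = loss_after_anticipated_rounds(theta_mal, theta_global, benign_data,
                                         trigger_x, trigger_y)
    loss.backward()
    opt.step()
print(f"backdoor loss after {ROUNDS_AHEAD} anticipated rounds: {loss.item():.4f}")
```

Differentiating through the anticipated rounds is what distinguishes this from attacks that optimize only against the current global model: the malicious update is shaped so that the backdoor still fires after several rounds of purely benign averaging, rather than being drowned out.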
