Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning

08/23/2021
by Virat Shejwalkar et al.

While recent works have indicated that federated learning (FL) is vulnerable to poisoning attacks by compromised clients, we show that these works make a number of unrealistic assumptions and arrive at somewhat misleading conclusions. For instance, they often use impractically high percentages of compromised clients or assume unrealistic capabilities for the adversary. We perform the first critical analysis of poisoning attacks under practical production FL environments by carefully characterizing the set of realistic threat models and adversarial capabilities. Our findings are rather surprising: contrary to the established belief, we show that FL, even without any defenses, is highly robust in practice. In fact, we go even further and propose novel, state-of-the-art poisoning attacks under two realistic threat models, and show via an extensive set of experiments across three benchmark datasets how (in)effective poisoning attacks are, especially when simple defense mechanisms are used. We correct previous misconceptions and give concrete guidelines that we hope will encourage our community to conduct more accurate research in this space and build stronger (and more realistic) attacks and defenses.
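The abstract notes that even simple defense mechanisms can blunt poisoning attacks. As an illustrative sketch (not the paper's actual method), the toy example below contrasts plain federated averaging with coordinate-wise median aggregation, a well-known simple robust rule; the client updates and the 10% compromise rate are hypothetical values chosen only to show how a single scaled-up malicious update skews the mean but not the median.

```python
from statistics import median

def fedavg(updates):
    # Plain federated averaging: element-wise mean of client updates.
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def coordinate_median(updates):
    # Simple robust aggregation: element-wise median bounds the
    # influence of a small fraction of poisoned updates.
    return [median(vals) for vals in zip(*updates)]

# Hypothetical toy round: 9 benign clients, 1 compromised client (10%).
benign = [[0.1, -0.2, 0.05] for _ in range(9)]
poisoned = [[10.0, -10.0, 10.0]]  # malicious update, scaled up by the attacker
updates = benign + poisoned

avg = fedavg(updates)            # mean is dragged toward the poisoned update
med = coordinate_median(updates) # median matches the benign updates
```

Here the first coordinate of the mean jumps from 0.1 to 1.09 because of one attacker, while the median stays at 0.1, which is the kind of effect that makes low-compromise-rate attacks hard in practice.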


Related research

- 11/05/2021 · Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups
  Federated learning (FL) enables a set of entities to collaboratively tra...

- 11/04/2020 · BaFFLe: Backdoor detection via Feedback-based Federated Learning
  Recent studies have shown that federated learning (FL) is vulnerable to ...

- 02/10/2021 · Meta Federated Learning
  Due to its distributed methodology alongside its privacy-preserving feat...

- 06/08/2023 · FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs
  This paper introduces FedMLSecurity, a benchmark that simulates adversar...

- 08/08/2023 · Backdoor Federated Learning by Poisoning Backdoor-Critical Layers
  Federated learning (FL) has been widely deployed to enable machine learn...

- 05/16/2023 · Keep It Simple: Fault Tolerance Evaluation of Federated Learning with Unreliable Clients
  Federated learning (FL), as an emerging artificial intelligence (AI) app...

- 06/06/2023 · Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations
  Federated Learning (FL) trains machine learning models on data distribut...
