MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

03/16/2022
by Xiaoyu Cao, et al.

Existing model poisoning attacks on federated learning assume that an attacker controls a large fraction of compromised, genuine clients. However, such an assumption is not realistic in production federated learning systems that involve millions of clients. In this work, we propose the first Model Poisoning Attack based on Fake clients, called MPAF. Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learned global model has low accuracy on many indiscriminate test inputs. Toward this goal, our attack drags the global model toward an attacker-chosen base model that has low accuracy. Concretely, in each round of federated learning, the fake clients craft fake local model updates that point to the base model and scale them up to amplify their impact before sending them to the cloud server. Our experiments show that MPAF can significantly decrease the test accuracy of the global model, even when classical defenses and norm clipping are adopted, highlighting the need for more advanced defenses.
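As a rough illustration of the attack described in the abstract, the sketch below crafts a fake client's local model update that points from the current global model toward an attacker-chosen base model and scales it up before submission. The model representation (a dict of numpy arrays), the function name, and the scaling factor `scale` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def mpaf_fake_update(global_model, base_model, scale=1e6):
    """Craft a fake local model update that points from the current global
    model toward the attacker-chosen (low-accuracy) base model, scaled up
    to amplify its effect after server-side aggregation.
    Hypothetical sketch: parameter names and the scale value are assumptions."""
    return {name: scale * (base_model[name] - global_model[name])
            for name in global_model}

# Toy usage with a single "layer" of 10 parameters (shapes are arbitrary).
global_model = {"w": np.zeros(10)}
base_model = {"w": np.random.randn(10)}  # attacker-chosen low-accuracy model
fake_update = mpaf_fake_update(global_model, base_model)
```

In this reading, every fake client submits the same scaled direction each round, so plain averaging at the server pulls the global model toward the base model; norm clipping bounds each update's magnitude but, per the abstract, does not fully neutralize the attack.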


Related research

10/02/2022  FLCert: Provably Secure Federated Learning against Poisoning Attacks
07/02/2018  How To Backdoor Federated Learning
11/28/2019  Free-riders in Federated Learning: Attacks and Defenses
10/28/2020  Mitigating Backdoor Attacks in Federated Learning
10/17/2022  Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
12/27/2020  FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
11/22/2021  Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data
