Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

04/21/2023
by   Hangtao Zhang, et al.

Federated learning (FL) is vulnerable to poisoning attacks, in which adversaries corrupt the global aggregation results and cause denial-of-service (DoS). Unlike recent model poisoning attacks that optimize the magnitude of malicious perturbations along prescribed directions to cause DoS, we propose a Flexible Model Poisoning Attack (FMPA) that can achieve versatile attack goals. We consider a practical threat scenario in which adversaries have no extra knowledge about the FL system (e.g., aggregation rules or updates on benign devices). FMPA exploits global historical information to construct an estimator that predicts the next round's global model as a benign reference. It then fine-tunes this reference model to obtain the desired poisoned model, which has low accuracy yet only small perturbations. Beyond causing DoS, FMPA naturally extends to a fine-grained controllable attack, making it possible to precisely reduce the global accuracy. Armed with such precise control, malicious FL service providers can gain advantages over their competitors without being noticed, opening a new attack surface in FL beyond DoS. Even for the DoS objective, experiments show that FMPA significantly decreases the global accuracy, outperforming six state-of-the-art attacks. The code can be found at https://github.com/ZhangHangTao/Poisoning-Attack-on-FL.
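The attack pipeline described above (predict a benign reference from historical global models, then nudge it by a small bounded perturbation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the momentum-style extrapolation, the function names, and the fixed perturbation budget `epsilon` are all assumptions for the sake of the example.

```python
import numpy as np

def predict_next_global(history, beta=0.9):
    """Estimate the next-round global model from historical snapshots.

    Momentum-style extrapolation (an assumption; the paper's estimator
    may differ): w_hat = w_t + beta * (w_t - w_{t-1}).
    """
    w_t, w_prev = history[-1], history[-2]
    return w_t + beta * (w_t - w_prev)

def craft_poisoned_model(reference, direction, epsilon=0.1):
    """Fine-tune the benign reference into a poisoned model.

    Moves the reference a bounded distance epsilon along a chosen
    malicious direction (e.g., a loss-increasing gradient), keeping
    the perturbation small so it stays close to benign updates.
    """
    unit = direction / (np.linalg.norm(direction) + 1e-12)
    return reference - epsilon * unit

# Toy usage: random parameter vectors stand in for global-model snapshots.
rng = np.random.default_rng(0)
history = [rng.normal(size=10) for _ in range(3)]
reference = predict_next_global(history)
poisoned = craft_poisoned_model(reference, rng.normal(size=10), epsilon=0.1)
print(np.linalg.norm(poisoned - reference))  # perturbation stays at epsilon
```

Bounding the perturbation norm is what lets the poisoned model evade magnitude-based defenses, while the choice of `epsilon` (or direction) gives the fine-grained control over how far global accuracy drops.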

