Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

11/26/2019
by   Minghong Fang, et al.

In federated learning, multiple client devices jointly learn a machine learning model: each client device maintains a local model for its local training dataset, while a master device maintains a global model by aggregating the local models from the client devices. The machine learning community recently proposed several federated learning methods that were claimed to be robust against Byzantine failures (e.g., system failures, adversarial manipulations) of certain client devices. In this work, we perform the first systematic study on local model poisoning attacks to federated learning. We assume an attacker has compromised some client devices, and that the attacker manipulates the local model parameters on the compromised client devices during the learning process such that the global model has a high testing error rate. We formulate our attacks as optimization problems and apply them to four recent Byzantine-robust federated learning methods. Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by federated learning methods that were claimed to be robust against Byzantine failures of some client devices. We generalize two defenses for data poisoning attacks to defend against our local model poisoning attacks. Our evaluation results show that one defense can effectively defend against our attacks in some cases, but the defenses are not effective enough in others, highlighting the need for new defenses against local model poisoning attacks to federated learning.
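To make the setting concrete, the sketch below contrasts plain mean aggregation with coordinate-wise median, one well-known Byzantine-robust aggregation rule. The client counts, model dimensions, and the crafted malicious updates are illustrative assumptions, not the paper's exact attack formulation: the paper's attacks are optimized to evade such robust rules, whereas this naive poisoning is exactly what a median defeats.

```python
import numpy as np

# Illustrative sketch of the federated aggregation setting (assumed setup,
# not the authors' exact attack or any specific defended method).

def fed_avg(local_models):
    """Mean aggregation: the non-robust baseline, skewed by Byzantine clients."""
    return np.mean(local_models, axis=0)

def coordinate_median(local_models):
    """Coordinate-wise median: a common Byzantine-robust aggregation rule."""
    return np.median(local_models, axis=0)

rng = np.random.default_rng(0)
true_model = np.zeros(5)

# 8 benign clients send noisy copies of the true model ...
benign = true_model + 0.01 * rng.standard_normal((8, 5))
# ... while 2 compromised clients send naively crafted, far-off updates.
malicious = np.full((2, 5), 10.0)

all_updates = np.vstack([benign, malicious])

# The mean is dragged far from the true model; the median stays close,
# since a majority of clients are benign on every coordinate.
print(np.linalg.norm(fed_avg(all_updates) - true_model))            # large
print(np.linalg.norm(coordinate_median(all_updates) - true_model))  # small
```

The paper's point is that this picture is incomplete: with knowledge of the aggregation rule, an attacker can solve for local models that stay inside what the robust rule tolerates while still steering the global model toward high error.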


research
09/06/2021

Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data

Federated learning is a novel framework that enables resource-constraine...
research
01/19/2023

On the Vulnerability of Backdoor Defenses for Federated Learning

Federated Learning (FL) is a popular distributed machine learning paradi...
research
09/11/2019

Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

Federated learning enables training collaborative machine learning model...
research
11/27/2022

Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning

Federated embodied agent learning protects the data privacy of individua...
research
11/28/2019

Free-riders in Federated Learning: Attacks and Defenses

Federated learning is a recently proposed paradigm that enables multiple...
research
03/07/2023

Can Decentralized Learning be more robust than Federated Learning?

Decentralized Learning (DL) is a peer-to-peer learning approach that all...
research
08/14/2018

Mitigating Sybils in Federated Learning Poisoning

Machine learning (ML) over distributed data is relevant to a variety of ...
