Analyzing Federated Learning through an Adversarial Lens

11/29/2018
by Arjun Nitin Bhagoji, et al.

Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates for iterative aggregation at the server. In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent, where the adversary's objective is to cause the model to misclassify a set of chosen inputs with high confidence. We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates. To increase attack stealth, we propose an alternating minimization strategy that alternately optimizes for the training loss and the adversarial objective. We then use parameter estimation for the benign agents' updates to further improve attack success. Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models, and show that the explanations are nearly visually indistinguishable. Our results indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies.
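To make the two attack strategies named in the abstract concrete, here is a minimal NumPy sketch of explicit boosting combined with alternating minimization on a toy quadratic model. Everything specific in it (the quadratic stand-in losses, the number of agents, the learning rate, equal-weight FedAvg-style averaging) is an illustrative assumption, not the paper's actual models or code.

```python
import numpy as np

# Toy sketch, not the paper's implementation: one malicious agent among
# num_agents, with quadratic stand-ins for the training loss and the
# adversarial objective.
rng = np.random.default_rng(0)
dim, num_agents, lr = 10, 10, 0.1

benign_optimum = rng.normal(size=dim)       # minimizer of the "training loss"
adversarial_optimum = rng.normal(size=dim)  # weights achieving the attack goal
global_w = np.zeros(dim)

def benign_update(w):
    """One benign agent's local step: a gradient step on the toy training loss."""
    return -lr * (w - benign_optimum)

def malicious_update(w, steps=50):
    """Alternating minimization: interleave steps on the training loss
    (to preserve stealth) with steps on the adversarial objective."""
    local = w.copy()
    for _ in range(steps):
        local -= lr * (local - benign_optimum)        # training-loss step
        local -= lr * (local - adversarial_optimum)   # adversarial step
    return local - w

for _ in range(5):  # a few federated rounds
    updates = [benign_update(global_w) for _ in range(num_agents - 1)]
    # Explicit boosting: scale the malicious update by num_agents so it
    # survives the server's 1/num_agents averaging.
    updates.append(num_agents * malicious_update(global_w))
    global_w = global_w + np.mean(updates, axis=0)
```

The boosting factor equals the number of agents here only because the server weights all updates equally; under weighted aggregation the adversary would instead scale by the inverse of its own aggregation weight.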


Related research

02/07/2022
Preserving Privacy and Security in Federated Learning
Federated learning is known to be vulnerable to security and privacy iss...

06/21/2020
Free-rider Attacks on Model Aggregation in Federated Learning
Free-rider attacks on federated learning consist in dissimulating partic...

11/27/2022
Navigation as the Attacker Wishes? Towards Building Byzantine-Robust Embodied Agents under Federated Learning
Federated embodied agent learning protects the data privacy of individua...

05/08/2019
Robust Federated Training via Collaborative Machine Teaching using Trusted Instances
Federated learning performs distributed model training using local data ...

11/08/2021
BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination
In federated learning, each participant trains its local model with its ...

10/28/2020
Mitigating Backdoor Attacks in Federated Learning
Malicious clients can attack federated learning systems by using malicio...

10/17/2022
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Federated learning is particularly susceptible to model poisoning and ba...
