Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning

01/18/2022
by Phung Lai, et al.

This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks. Building on that analysis, we propose a novel model transferring attack (called HNTROJ), the first of its kind, which transfers a locally backdoor-infected model to all of the legitimate, personalized local models generated by the HyperNetFL model. It does so through consistent and effective malicious local gradients computed across all compromised clients over the whole training process. As a result, HNTROJ reduces the number of compromised clients needed to launch the attack successfully, without any observable sudden shift or degradation in model utility on legitimate data samples, making the attack stealthy. To defend against HNTROJ, we adapt several backdoor-resistant FL training algorithms to HyperNetFL. Extensive experiments on several benchmark datasets show that HNTROJ significantly outperforms data poisoning and model replacement attacks and bypasses robust training algorithms.
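To make the attack idea concrete, here is a minimal, hypothetical PyTorch sketch of the mechanism the abstract describes: a hypernetwork generates personalized weights per client, and a compromised client reports gradients that pull those generated weights toward a fixed backdoor-infected model. The HyperNet architecture, the w_troj target, and the squared-error transfer loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a client embedding to a flat parameter vector for that client's
    personalized model (a toy stand-in for the HyperNetFL generator)."""
    def __init__(self, embed_dim: int, target_params: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, target_params),
        )

    def forward(self, client_embedding: torch.Tensor) -> torch.Tensor:
        return self.body(client_embedding)

def malicious_client_grads(hnet: HyperNet,
                           client_embedding: torch.Tensor,
                           w_troj: torch.Tensor):
    """Gradients a compromised client reports: instead of a benign task loss,
    it minimizes the distance between the weights the hypernetwork generates
    for it and a fixed backdoor-infected model w_troj (hypothetical name)."""
    hnet.zero_grad()
    w_i = hnet(client_embedding)              # personalized weights for client i
    transfer_loss = ((w_i - w_troj) ** 2).mean()
    transfer_loss.backward()                  # grads w.r.t. hypernetwork params
    return [p.grad.clone() for p in hnet.parameters()]

# Toy usage: the server would aggregate these with benign clients' gradients.
hnet = HyperNet(embed_dim=16, target_params=1000)
e_i = torch.randn(16)                         # this client's embedding
w_troj = torch.randn(1000)                    # stand-in for a backdoored model
grads = malicious_client_grads(hnet, e_i, w_troj)
```

Because every compromised client would report gradients for the same transfer objective in every round, their combined effect on the hypernetwork is consistent across training, which is what the abstract credits for letting HNTROJ succeed with fewer compromised clients and no sudden utility shift.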

