Local Model Reconstruction Attacks in Federated Learning and their Uses

10/28/2022
by Ilias Driouich, et al.

In this paper, we initiate the study of local model reconstruction attacks in federated learning, where an honest-but-curious adversary eavesdrops on the messages exchanged between a targeted client and the server and then reconstructs the local/personalized model of the victim. Local model reconstruction allows the adversary to mount other classical attacks more effectively, since the local model depends only on the client's data and can leak more private information than the global model learned by the server. Building on this, we propose a novel model-based attribute inference attack in federated learning that leverages the local model reconstruction attack, and we provide an analytical lower bound for it. Empirical results on real-world datasets confirm that our local reconstruction attack works well for both regression and classification tasks. Moreover, we benchmark our attribute inference attack against state-of-the-art attacks in federated learning and achieve higher reconstruction accuracy, especially when the clients' datasets are heterogeneous. Our work provides a new angle for designing powerful and explainable attacks to effectively quantify the privacy risk in FL.
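The eavesdropping setup can be illustrated with a toy sketch. This is my own minimal illustration under simplifying assumptions, not the paper's algorithm: a single linear-regression client, synthetic data, and a passive adversary who observes one round's downlink (global model) and uplink (client update). The follow-up inference step likewise only sketches the model-based idea of scoring candidate attribute values against the reconstructed local model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression client (hypothetical data; not from the paper).
X = rng.normal(size=(32, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.01 * rng.normal(size=32)

def local_sgd(w_global, X, y, lr=0.05, steps=200):
    """Client-side local training: gradient steps on local data from the global model."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (2.0 / len(y)) * X.T @ (X @ w - y)  # MSE gradient step
    return w

# One FL round: the server broadcasts w_global, the client replies with its update.
w_global = np.zeros(4)
w_local = local_sgd(w_global, X, y)
update = w_local - w_global  # uplink message

# An honest-but-curious eavesdropper observing both messages (downlink
# w_global, uplink update) recovers the victim's local model.
w_reconstructed = w_global + update

# Model-based attribute inference (illustrative follow-up): for a victim
# record with one unknown feature, score candidate values by the
# reconstructed model's squared error and keep the best fit.
x_victim, y_victim = X[0].copy(), y[0]
true_value = x_victim[3]
candidates = np.linspace(-4.0, 4.0, 81)
errors = []
for c in candidates:
    x_victim[3] = c
    errors.append((y_victim - x_victim @ w_reconstructed) ** 2)
guess = candidates[int(np.argmin(errors))]
```

In this simplified one-round setting the reconstruction is exact by construction; the paper's contribution lies in making reconstruction work across realistic multi-round FL protocols and in the analytical bound, neither of which this sketch attempts.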

