Source Inference Attacks in Federated Learning

by Hongsheng Hu, et al.

Federated learning (FL) has emerged as a promising privacy-aware paradigm that allows multiple clients to jointly train a model without sharing their private data. Recently, many studies have shown that FL is vulnerable to membership inference attacks (MIAs), which distinguish the training members of a given model from non-members. However, existing MIAs ignore the source of a training member, i.e., which client owns it, although it is essential to explore source privacy in FL beyond the membership privacy of examples from all clients. Leakage of source information can lead to severe privacy issues. For example, identifying the hospital that contributed to the training of an FL model for the COVID-19 pandemic can make the owner of a data record from that hospital more prone to discrimination if the hospital is in a high-risk region. In this paper, we propose a new inference attack called the source inference attack (SIA), which can derive an optimal estimation of the source of a training member. Specifically, we adopt a Bayesian perspective to demonstrate that an honest-but-curious server can launch an SIA to steal non-trivial source information about the training members without violating the FL protocol. The server leverages the prediction loss of local models on the training members to carry out the attack effectively and non-intrusively. We conduct extensive experiments on one synthetic and five real datasets to evaluate the key factors in an SIA, and the results show the efficacy of the proposed attack.
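The core intuition behind the loss-based attack can be sketched in a few lines: a client's local model tends to have a lower prediction loss on its own training records than the other clients' models do, so the server can guess the source as the client with the smallest loss. The sketch below is illustrative only (it is not the authors' code; the function name, the cross-entropy loss, and the example numbers are assumptions for demonstration):

```python
import math

def cross_entropy(probs, label):
    """Per-example cross-entropy loss given predicted class probabilities."""
    return -math.log(probs[label])

def source_inference(per_client_probs, label):
    """Hypothetical SIA sketch: given each client's local-model prediction
    on a known training member, guess the source as the client whose model
    incurs the smallest loss on that record."""
    losses = [cross_entropy(p, label) for p in per_client_probs]
    return losses.index(min(losses))

# Assumed predictions of three local models on one record with true label 1.
preds = [
    [0.70, 0.20, 0.10],  # client 0: high loss on this record
    [0.02, 0.95, 0.03],  # client 1: very confident and correct
    [0.30, 0.40, 0.30],  # client 2: moderate loss
]
print(source_inference(preds, label=1))  # → 1
```

In the paper's threat model the server observes the clients' local model updates each round, so it can evaluate these losses itself without deviating from the FL protocol.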




Related papers:

- Active Membership Inference Attack under Local Differential Privacy in Federated Learning
- Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios
- Efficient Passive Membership Inference Attack in Federated Learning
- Federated Uncertainty-Aware Learning for Distributed Hospital EHR Data
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings
- Data Leakage in Tabular Federated Learning
- Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning
