PPA: Preference Profiling Attack Against Federated Learning

02/10/2022
by Chunyi Zhou, et al.

Federated learning (FL) trains a global model across a number of decentralized participants, each holding a local dataset. Compared to traditional centralized learning, FL does not require direct access to the local datasets and thus mitigates data security and privacy concerns. However, privacy concerns in FL persist due to inference attacks, including membership inference, property inference, and data inversion. In this work, we reveal a new type of privacy inference attack, coined the Preference Profiling Attack (PPA), that accurately profiles the private preferences of a local user. In general, the PPA can profile a user's top-k preferences, in particular the top-1 preference, contingent on the local user's characteristics. Our key insight is that the gradient variation of a local user's model has a distinguishable sensitivity to the sample proportion of a given class, especially a majority or minority class. By observing a user model's gradient sensitivity to a class, the PPA can profile the proportion of that class's samples in the user's local dataset and thus expose the user's preference for the class. The inherent statistical heterogeneity of FL further facilitates the PPA. We have extensively evaluated the PPA's effectiveness using four image datasets: MNIST, CIFAR10, Products-10K, and RAF-DB. Our results show that the PPA achieves 90% and 98% top-1 attack accuracy on MNIST and CIFAR10, respectively. More importantly, in the real-world commercial scenarios of shopping (i.e., Products-10K) and social networking (i.e., RAF-DB), the PPA gains a top-1 attack accuracy of 78% for inferring a user's most ordered items and 88% for inferring a victim user's most common facial expression. Although existing countermeasures such as dropout and differential privacy can lower the PPA's accuracy to some extent, they unavoidably incur notable deterioration of the global model.
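The key insight above lends itself to a small illustration. Below is a minimal PyTorch-style sketch of the intuition only, not the paper's actual attack pipeline: an attacker holding a few probe samples per class measures how strongly the user's uploaded model reacts, in gradient magnitude, to each class. The function name class_gradient_sensitivity, the probe_batches structure, and the heuristic that a well-represented (majority) class yields smaller gradients are all illustrative assumptions.

import torch
import torch.nn.functional as F

def class_gradient_sensitivity(model, probe_batches, num_classes):
    # probe_batches[c] = (inputs, labels): a small attacker-held batch of class c.
    # Returns one scalar per class: the L2 norm of the parameter gradients
    # that the user's model produces on probe samples of that class.
    sensitivity = {}
    for c in range(num_classes):
        inputs, labels = probe_batches[c]
        model.zero_grad()
        loss = F.cross_entropy(model(inputs), labels)
        loss.backward()
        grad_sq = sum((p.grad ** 2).sum() for p in model.parameters()
                      if p.grad is not None)
        sensitivity[c] = torch.sqrt(grad_sq).item()
    return sensitivity

# Heuristic reading (an assumption, not the paper's exact decision rule):
# a class the model has seen in abundance is well fitted, so its probe
# gradients are small; a rare class produces large gradients. The class
# with the smallest sensitivity is then guessed as the top-1 preference.
# sens = class_gradient_sensitivity(user_model, probe_batches, num_classes=10)
# top1_guess = min(sens, key=sens.get)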

Related research

02/24/2023
Active Membership Inference Attack under Local Differential Privacy in Federated Learning
Federated learning (FL) was originally regarded as a framework for colla...

06/13/2023
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios
Federated learning (FL) naturally faces the problem of data heterogeneit...

06/15/2021
Privacy Assessment of Federated Learning using Private Personalized Layers
Federated Learning (FL) is a collaborative scheme to train a learning mo...

10/19/2022
Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Gradient inversion attack enables recovery of training samples from mode...

01/29/2022
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
A central tenet of Federated learning (FL), which trains models without ...

06/18/2022
Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning
Local differential privacy (LDP) gives a strong privacy guarantee to be ...

07/06/2020
Sharing Models or Coresets: A Study based on Membership Inference Attack
Distributed machine learning generally aims at training a global model b...
