Disparate Vulnerability: On the Unfairness of Privacy Attacks Against Machine Learning

06/02/2019
by Mohammad Yaghini, et al.

A membership inference attack (MIA) against a machine learning model enables an attacker to determine whether a given data record was part of the model's training dataset. Such attacks have been shown to be practical in both centralized and federated settings, and they pose a threat in many privacy-sensitive domains such as medicine or law enforcement. In the literature, the effectiveness of these attacks is invariably reported using metrics computed over the whole population. In this paper, we take a closer look at attack performance across the different subgroups present in the data distribution. We introduce a framework that enables us to efficiently analyze the vulnerability of machine learning models to MIA. We discover that even when the accuracy of an MIA looks no better than random guessing over the whole population, subgroups are subject to disparate vulnerability: certain subgroups can be significantly more vulnerable than others. We provide a theoretical definition of MIA vulnerability, which we validate empirically on both synthetic and real data.
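To make the core observation concrete, here is a minimal, hypothetical sketch (not the paper's framework; the group names, sizes, and the coin-flip/perfect-attack split are illustrative assumptions) showing how a membership inference attack can score near chance over the whole population while one subgroup is almost fully exposed:

```python
import numpy as np

def mia_accuracy(guesses, membership):
    """Fraction of records whose membership the attack guesses correctly."""
    return float(np.mean(guesses == membership))

def per_subgroup_vulnerability(guesses, membership, subgroups):
    """MIA accuracy computed separately for each subgroup; the spread
    across groups is the disparate vulnerability."""
    return {g: mia_accuracy(guesses[subgroups == g], membership[subgroups == g])
            for g in np.unique(subgroups)}

# Synthetic illustration: the attack is a coin flip on the majority group A
# (95% of records) but guesses perfectly on the minority group B (5%).
rng = np.random.default_rng(0)
membership = rng.integers(0, 2, size=10_000)          # 1 = training member
subgroups = rng.choice(["A", "B"], size=10_000, p=[0.95, 0.05])
guesses = np.where(subgroups == "A",
                   rng.integers(0, 2, size=10_000),   # random guessing on A
                   membership)                        # perfect inference on B

print(f"overall accuracy: {mia_accuracy(guesses, membership):.3f}")  # ~0.525
print(per_subgroup_vulnerability(guesses, membership, subgroups))    # A ~0.5, B = 1.0
```

A population-level accuracy of roughly 0.52 would ordinarily be dismissed as a failed attack, yet in this toy setup every record in group B is perfectly distinguishable, which is exactly the disparity the abstract describes.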


Related Research

03/12/2021
On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models
With an increase in low-cost machine learning APIs, advanced machine lea...

06/28/2018
Towards Demystifying Membership Inference Attacks
Membership inference attacks seek to infer membership of individual trai...

10/06/2021
On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks
We study the privacy implications of deploying recurrent neural networks...

12/12/2017
Vulnerability of Complex Networks in Center-Based Attacks
We study the vulnerability of synthetic as well as real-world networks i...

02/07/2019
Shoulder Surfing: From An Experimental Study to a Comparative Framework
Shoulder surfing is an attack vector widely recognized as a real threat ...

05/06/2019
Cognitive Triaging of Phishing Attacks
In this paper we employ quantitative measurements of cognitive vulnerabi...

06/15/2022
Architectural Backdoors in Neural Networks
Machine learning is vulnerable to adversarial manipulation. Previous lit...
