Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack

01/24/2020
by   Bo Zhang, et al.

Machine learning algorithms, when applied to sensitive data, pose a potential threat to privacy. A growing body of prior work has demonstrated that membership inference attacks (MIA) can disclose specific private information in the training data to an attacker. Meanwhile, the algorithmic fairness of machine learning has drawn increasing attention from both academia and industry. Algorithmic fairness ensures that machine learning models do not discriminate against particular demographic groups of individuals (e.g., Black or female people). Given that MIA is itself a learning model, a serious concern arises as to whether MIA treats all groups of individuals "fairly," i.e., whether a particular group is more vulnerable to MIA than the others. This paper examines the algorithmic fairness issue in the context of MIA and its defenses. First, for fairness evaluation, it formalizes the notion of vulnerability disparity (VD) to quantify the difference in MIA treatment across demographic groups. Second, it evaluates VD on four real-world datasets and shows that VD indeed exists in them. Third, it examines the impact of differential privacy (DP), as a defense mechanism against MIA, on VD. The results show that although DP changes VD significantly, it cannot eliminate VD completely. Therefore, fourth, it designs a new mitigation algorithm named FAIRPICK to reduce VD. An extensive set of experimental results demonstrates that FAIRPICK can effectively reduce VD both with and without DP deployment.
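
The abstract does not give the paper's exact definition of VD. As a minimal illustrative sketch only (assuming VD is measured as the gap between the highest and lowest per-group MIA success rates; the function and variable names below are hypothetical, not from the paper), the metric could be computed as:

    import numpy as np

    def vulnerability_disparity(pred_member, true_member, group):
        # pred_member: MIA's member/non-member prediction per record (0/1)
        # true_member: ground-truth training-set membership per record (0/1)
        # group:       demographic group label per record
        # Per-group MIA success rate: fraction of records whose
        # membership the attack infers correctly within that group.
        rates = {g: float(np.mean(pred_member[group == g] == true_member[group == g]))
                 for g in np.unique(group)}
        # Assumed VD: largest gap between any two groups' success rates.
        return max(rates.values()) - min(rates.values()), rates

    # Toy usage: the attack succeeds more often on group 1 than on group 0.
    pred = np.array([1, 0, 0, 1, 1, 0])
    true = np.array([1, 0, 1, 1, 1, 0])
    grp  = np.array([0, 0, 0, 1, 1, 1])
    vd, per_group = vulnerability_disparity(pred, true, grp)
    # per_group -> {0: 0.667, 1: 1.0}; vd -> 0.333

Under this reading, VD = 0 would mean the attack is equally effective on every group; a large VD flags a group that is disproportionately exposed.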

Related research

11/21/2019 · Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
Membership inference attacks seek to infer the membership of individual ...

10/23/2020 · Differentially Private Learning Does Not Bound Membership Inference
Training machine learning models on privacy-sensitive data has become a ...

11/01/2020 · Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack
Public intelligent services enabled by machine learning algorithms are v...

03/05/2022 · The Impact of Differential Privacy on Group Disparity Mitigation
The performance cost of differential privacy has, for some applications,...

02/06/2020 · Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism
Public intelligent services enabled by machine learning algorithms are v...

05/31/2021 · Model Mis-specification and Algorithmic Bias
Machine learning algorithms are increasingly used to inform critical dec...

03/01/2022 · Explainability for identification of vulnerable groups in machine learning models
If a prediction model identifies vulnerable individuals or groups, the u...
