Differentially Private and Fair Classification via Calibrated Functional Mechanism

01/14/2020
by Jiahao Ding, et al.

Machine learning is increasingly becoming a powerful tool for decision-making in a wide variety of applications, such as medical diagnosis and autonomous driving. Privacy concerns about the training data, and the unfairness of some decisions with respect to sensitive attributes (e.g., sex, race), are becoming more critical. Constructing a fair machine learning model while simultaneously providing privacy protection is therefore a challenging problem. In this paper, we focus on the design of a classification model with fairness and differential privacy guarantees by combining the functional mechanism with decision boundary fairness. To enforce ϵ-differential privacy and fairness, we leverage the functional mechanism to add different amounts of Laplace noise to the polynomial coefficients of the objective function for different attributes, taking the fairness constraint into account. We further propose a utility-enhancement scheme, called the relaxed functional mechanism, which adds Gaussian noise instead of Laplace noise and thus achieves (ϵ,δ)-differential privacy. Based on the relaxed functional mechanism, we then design an (ϵ,δ)-differentially private and fair classification model. Our theoretical analysis and empirical results demonstrate that both approaches achieve fairness and differential privacy while preserving good utility, and that they outperform state-of-the-art algorithms.
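
As a rough illustration of the noise calibration behind the two mechanisms, the sketch below shows functional-mechanism-style coefficient perturbation in Python. This is a minimal sketch, not the paper's exact algorithm: the function name perturb_coefficients, the single scalar sensitivity argument, and the use of the classical Gaussian-mechanism calibration are illustrative assumptions, whereas the paper calibrates different noise amounts per attribute under the fairness constraint.

```python
import numpy as np

def perturb_coefficients(coeffs, sensitivity, epsilon, delta=None, rng=None):
    """Perturb polynomial objective coefficients for differential privacy.

    delta=None:   Laplace noise with scale sensitivity/epsilon (epsilon-DP),
                  as in the standard functional mechanism.
    delta given:  Gaussian noise with the classical calibration
                  sigma = sqrt(2*ln(1.25/delta)) * sensitivity / epsilon,
                  giving (epsilon, delta)-DP for epsilon < 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    coeffs = np.asarray(coeffs, dtype=float)
    if delta is None:
        # epsilon-DP via the Laplace mechanism on the coefficients.
        noise = rng.laplace(0.0, sensitivity / epsilon, size=coeffs.shape)
    else:
        # (epsilon, delta)-DP via Gaussian noise ("relaxed" variant).
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
        noise = rng.normal(0.0, sigma, size=coeffs.shape)
    return coeffs + noise

# Hypothetical usage: perturb the coefficients of a polynomial (e.g., a
# Taylor-approximated logistic loss) once, then train on the noisy objective.
noisy_laplace = perturb_coefficients([0.5, -1.2, 0.3], sensitivity=2.0, epsilon=1.0)
noisy_gauss = perturb_coefficients([0.5, -1.2, 0.3], sensitivity=2.0,
                                   epsilon=0.5, delta=1e-5)
```

The key property this sketch relies on is that the functional mechanism injects noise once into the coefficients of a polynomial representation of the loss; minimizing the resulting noisy objective then incurs no additional privacy cost, which is what makes the approach compatible with adding a decision-boundary fairness constraint to the optimization.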

Related research

- 09/10/2020: Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy
  Deployment of deep learning in different fields and industries is growin...

- 12/03/2018: Differentially Private Obfuscation Mechanisms for Hiding Probability Distributions
  We propose a formal model for the privacy of user attributes in terms of...

- 06/02/2019: Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness
  In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM)...

- 11/23/2022: Differentially Private Fair Division
  Fairness and privacy are two important concerns in social decision-makin...

- 06/23/2019: The Cost of a Reductions Approach to Private Fair Optimization
  We examine a reductions approach to fair optimization and learning where...

- 01/28/2019: Utility Preserving Secure Private Data Release
  Differential privacy mechanisms that also make reconstruction of the dat...

- 09/22/2022: Improving Utility for Privacy-Preserving Analysis of Correlated Columns using Pufferfish Privacy
  Surveys are an important tool for many areas of social science research,...
