Differentially Private Fair Learning

12/06/2018
by Matthew Jagielski, et al.

We design two learning algorithms that simultaneously guarantee differential privacy and equalized odds, a 'fairness' condition that requires equal false positive and false negative rates across protected groups. Our first algorithm is a private implementation of the post-processing approach of [Hardt et al. 2016]. It has the merit of being exceedingly simple, but it must use protected group membership explicitly at test time, which can be viewed as 'disparate treatment'. The second algorithm is a differentially private version of the algorithm of [Agarwal et al. 2018], an oracle-efficient method that finds the optimal fair classifier given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm does not need access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when all three properties are required, and show that these tradeoffs can be milder when group membership may be used at test time.
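As a concrete illustration of the equalized-odds condition described above, the sketch below computes per-group false positive and false negative rates and their maximum disparity. This is not the paper's algorithm, only a minimal metric sketch; the function names (`group_rates`, `equalized_odds_gap`) and the toy data are our own assumptions.

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """False positive rate and false negative rate per protected group."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        neg = y_true[m] == 0
        pos = y_true[m] == 1
        fpr = np.mean(y_pred[m][neg] == 1) if neg.any() else 0.0
        fnr = np.mean(y_pred[m][pos] == 0) if pos.any() else 0.0
        rates[g] = (fpr, fnr)
    return rates

def equalized_odds_gap(rates):
    """Maximum disparity in FPR and FNR across groups (0 = equalized odds)."""
    fprs = [r[0] for r in rates.values()]
    fnrs = [r[1] for r in rates.values()]
    return max(max(fprs) - min(fprs), max(fnrs) - min(fnrs))

# Toy data: group 1 receives far more false positives than group 0.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1, 1, 1, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

rates = group_rates(y_true, y_pred, groups)
print(equalized_odds_gap(rates))  # → 1.0

# A differentially private release of these rates would perturb the
# underlying counts, e.g. with Laplace noise scaled to their sensitivity;
# the paper analyzes how that noise interacts with the fairness constraint.
```

Note that the rates are computed from raw counts, so a private version pays an accuracy cost on small groups; this is one source of the fairness/accuracy/privacy tension the abstract identifies.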


Related research

- The Cost of a Reductions Approach to Private Fair Optimization (06/23/2019)
  We examine a reductions approach to fair optimization and learning where...

- Differentially-Private Sublinear-Time Clustering (12/27/2021)
  Clustering is an essential primitive in unsupervised machine learning. W...

- Fair Decision Making using Privacy-Protected Data (05/29/2019)
  Data collected about individuals is regularly used to make decisions tha...

- Investigating Trade-offs in Utility, Fairness and Differential Privacy in Neural Networks (02/11/2021)
  To enable an ethical and legal use of machine learning algorithms, they ...

- FAIROD: Fairness-aware Outlier Detection (12/05/2020)
  Fairness and Outlier Detection (OD) are closely related, as it is exactl...

- Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems (05/23/2017)
  Security, privacy, and fairness have become critical in the era of data ...

- Improving Fairness and Privacy in Selection Problems (12/07/2020)
  Supervised learning models have been increasingly used for making decisi...
