Differentially Private Fair Learning

by Matthew Jagielski, et al.

We design two learning algorithms that simultaneously guarantee differential privacy and equalized odds, a 'fairness' condition that corresponds to equalizing false positive and false negative rates across protected groups. Our first algorithm is a simple private implementation of the post-processing approach of [Hardt et al. 2016]. This algorithm has the merit of being exceedingly simple, but it must be able to use protected group membership explicitly at test time, which can be viewed as 'disparate treatment'. The second algorithm is a differentially private version of the algorithm of [Agarwal et al. 2018], an oracle-efficient algorithm that can be used to find the optimal fair classifier given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time.
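To make the post-processing idea concrete, here is a minimal sketch of one way a private variant could look: per-group false positive rates are estimated with the Laplace mechanism (sensitivity-1 counting queries), and a group-specific threshold is chosen whose noisy rate is closest to a common target. This is an illustrative simplification, not the paper's algorithm: the function names, the candidate-threshold grid, and the (loose) privacy accounting are all assumptions made for the example.

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_rate(numerator, denominator, epsilon, rng):
    # Privatize a rate by noising each count (sensitivity 1 per count).
    # NOTE: a careful implementation must compose the budget across all
    # queries issued below; this sketch ignores that accounting.
    n = max(numerator + laplace_noise(1.0 / epsilon, rng), 0.0)
    d = max(denominator + laplace_noise(1.0 / epsilon, rng), 1.0)
    return min(n / d, 1.0)

def pick_thresholds(scores, labels, groups, target_fpr, epsilon, seed=0):
    """For each protected group, choose the score threshold whose *noisy*
    false positive rate is closest to target_fpr (crude stand-in for the
    linear program of Hardt et al. 2016)."""
    rng = random.Random(seed)
    candidates = [i / 10 for i in range(1, 10)]
    thresholds = {}
    for g in set(groups):
        best_t, best_gap = None, float("inf")
        for t in candidates:
            fp = sum(1 for s, y, gg in zip(scores, labels, groups)
                     if gg == g and y == 0 and s >= t)
            neg = sum(1 for y, gg in zip(labels, groups)
                      if gg == g and y == 0)
            gap = abs(dp_rate(fp, neg, epsilon, rng) - target_fpr)
            if gap < best_gap:
                best_t, best_gap = t, gap
        thresholds[g] = best_t
    return thresholds
```

Because the chosen thresholds depend on group membership, applying them at test time requires knowing each individual's group, which is exactly the 'disparate treatment' concern the abstract raises for this approach.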

