Differentially Private Fair Learning

12/06/2018
by Matthew Jagielski, et al.

We design two learning algorithms that simultaneously promise differential privacy and equalized odds, a 'fairness' condition that corresponds to equalizing false positive and false negative rates across protected groups. Our first algorithm is a private implementation of the post-processing approach of [Hardt et al. 2016]. It has the merit of being exceedingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as 'disparate treatment'. The second algorithm is a differentially private version of the algorithm of [Agarwal et al. 2018], an oracle-efficient algorithm that can be used to find the optimal fair classifier, given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time.
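To make the equalized-odds condition mentioned in the abstract concrete, here is a minimal illustrative sketch (not taken from the paper) that measures how far a classifier's predictions are from satisfying it, by computing the largest gap in false positive and false negative rates across protected groups. All names (`equalized_odds_gaps`, the example arrays) are hypothetical.

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Illustrative check of equalized odds: return the largest gap in
    false positive rate (FPR) and false negative rate (FNR) across
    protected groups. The paper defines the condition formally; this is
    only a sketch for intuition."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    fprs, fnrs = [], []
    for g in np.unique(group):
        mask = group == g
        neg = (y_true == 0) & mask   # negatives in group g
        pos = (y_true == 1) & mask   # positives in group g
        fprs.append((y_pred[neg] == 1).mean())  # group FPR
        fnrs.append((y_pred[pos] == 0).mean())  # group FNR
    # Equalized odds holds (approximately) when both gaps are near zero.
    return max(fprs) - min(fprs), max(fnrs) - min(fnrs)

# Toy example with two groups and slightly different error rates
fpr_gap, fnr_gap = equalized_odds_gaps(
    y_true=[0, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 0, 1],
    group=["a", "a", "a", "b", "b", "b"],
)
print(fpr_gap, fnr_gap)  # both gaps are 0.5 in this toy example
```

The paper's algorithms aim to keep both gaps small while also guaranteeing differential privacy for the training data.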
