
Stochastic Differentially Private and Fair Learning

by Andrew Lowy, et al.

Machine learning models are increasingly used in high-stakes decision-making systems. In such applications, a major concern is that these models sometimes discriminate against certain demographic groups, such as individuals of a particular race, gender, or age. Another major concern in these applications is the violation of user privacy. While fair learning algorithms have been developed to mitigate discrimination, these algorithms can still leak sensitive information, such as individuals' health or financial records. Utilizing the notion of differential privacy (DP), prior works have aimed at developing learning algorithms that are both private and fair. However, existing algorithms for DP fair learning are either not guaranteed to converge or require a full batch of data at each iteration in order to converge. In this paper, we provide the first stochastic differentially private algorithm for fair learning that is guaranteed to converge. Here, the term "stochastic" refers to the fact that our proposed algorithm converges even when minibatches of data are used at each iteration (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds. In addition, our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes. As a byproduct of our convergence analysis, we provide the first utility guarantee for a DP algorithm solving nonconvex-strongly-concave min-max problems. Our numerical experiments show that the proposed algorithm consistently offers significant performance gains over state-of-the-art baselines, and can be applied to larger-scale problems with non-binary target/sensitive attributes.
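To make the demographic parity notion mentioned in the abstract concrete, the sketch below computes the demographic parity gap — the maximum difference in positive-prediction rates across sensitive groups — for a toy set of binary predictions. This is only an illustration of the fairness metric, not the authors' algorithm; the function name and the toy data are invented for the example.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Max difference in positive-prediction rates across sensitive groups.

    A gap of 0 means the classifier satisfies demographic parity exactly;
    larger values indicate a larger fairness violation.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy example: binary predictions for members of two sensitive groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Group 0 receives a positive prediction 75% of the time, group 1 only 25%,
# so the demographic parity gap is 0.5.
print(demographic_parity_gap(y_pred, group))  # 0.5
```

Fair learning methods, including the one proposed in this paper, penalize or constrain quantities like this gap during training, while DP mechanisms bound how much any single individual's record can influence the learned model.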


