The Cost of a Reductions Approach to Private Fair Optimization

06/23/2019
by Daniel Alabi, et al.

We examine a reductions approach to fair optimization and learning, where a black-box optimizer is used to learn a fair model for classification or regression [Alabi et al., 2018, Agarwal et al., 2018], and explore the creation of such fair models that adhere to data privacy guarantees (specifically, differential privacy). For this approach, we consider two suites of use cases: the first is optimizing convex performance measures of the confusion matrix (such as G-mean and H-mean); the second is satisfying statistical definitions of algorithmic fairness (such as equalized odds, demographic parity, and the Gini index of inequality). The reductions approach to fair optimization can be abstracted as the constrained group-objective optimization problem, in which we aim to optimize an objective that is a function of losses of individual groups, subject to some constraints. We present two differentially private algorithms: an (ϵ, 0) exponential sampling algorithm and an (ϵ, δ) algorithm that uses a linear optimizer to incrementally move toward the best decision. We analyze the privacy and utility guarantees of these empirical risk minimization algorithms. Compared to a previous method for ensuring differential privacy subject to a relaxed form of the equalized odds fairness constraint, the (ϵ, δ) differentially private algorithm we present provides asymptotically better sample complexity guarantees. The technique of using an approximate linear optimizer oracle to achieve privacy might be applicable to other problems not considered in this paper. Finally, we show an algorithm-agnostic lower bound on the accuracy of any solution to the problem of (ϵ, 0) or (ϵ, δ) private constrained group-objective optimization.
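To make the (ϵ, 0) exponential sampling idea concrete, the following is a minimal sketch of the exponential mechanism applied to a finite candidate set of models, scored by a group-objective (here, worst-group empirical loss). The candidate models, the particular score function, and the `sensitivity` parameter are illustrative assumptions, not the paper's exact construction; in practice the sensitivity must be derived from the score function and the dataset size.

```python
# Hedged sketch: (epsilon, 0)-DP selection of a model from a finite
# candidate set via the exponential mechanism, scored by a function of
# per-group losses (here, the worst-group empirical 0/1 loss).
# All names and the score function are illustrative assumptions.
import math
import random

def group_objective(candidate, group_datasets):
    """Worst-group empirical loss: one convex function of group losses."""
    return max(
        sum(candidate(x) != y for x, y in d) / len(d)
        for d in group_datasets
    )

def exponential_sample(candidates, group_datasets, epsilon, sensitivity):
    """Sample a candidate with probability proportional to
    exp(-epsilon * score / (2 * sensitivity)): lower group-objective
    scores are exponentially more likely to be selected."""
    scores = [group_objective(c, group_datasets) for c in candidates]
    weights = [math.exp(-epsilon * s / (2.0 * sensitivity)) for s in scores]
    r = random.random() * sum(weights)
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]  # guard against floating-point underflow
```

With 0/1 losses averaged over a group of size n, changing one record moves the score by at most 1/n, so a sensitivity of 1/n would be the natural choice in that setting.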
