Rawlsian Fair Adaptation of Deep Learning Classifiers

05/31/2021
by Kulin Shah et al.

Group-fairness in classification aims for equality of a predictive utility across different sensitive sub-populations, e.g., race or gender. Equality or near-equality constraints in group-fairness often worsen not only the aggregate utility but also the utility for the least advantaged sub-population. In this paper, we apply the principles of Pareto-efficiency and least difference, taking accuracy as an illustrative utility, and arrive at the Rawls classifier that minimizes the error rate on the worst-off sensitive sub-population. Our mathematical characterization shows that the Rawls classifier uniformly applies a threshold to an ideal score of the features, in the spirit of fair equality of opportunity. In practice, such a score or feature representation is often computed by a black-box model that is useful but unfair. Our second contribution is a practical Rawlsian fair adaptation of any given black-box deep learning model, without changing the score or feature representation it computes. Given any score function or feature representation, and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score, or a linear threshold classifier on the given feature representation, that achieves the Rawls error rate restricted to this hypothesis class. Our technical contribution is to formulate these problems using ambiguous chance constraints and to provide efficient algorithms for Rawlsian fair adaptation, along with provable upper bounds on the Rawls error rate. Our empirical results show significant improvement over state-of-the-art group-fair algorithms, even without retraining for fairness.
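To make the adaptation step concrete, here is a minimal sketch (not the paper's algorithm) of the two ingredients the abstract describes: choosing a threshold on a fixed black-box score to minimize the worst sub-population's error, and bounding a sub-population's error from second-order score statistics alone via Cantelli's one-sided inequality, the standard moment-based route to ambiguous chance constraints. The names `scores`, `labels`, `groups`, and both helper functions are illustrative assumptions, not the paper's API.

```python
import numpy as np

def worst_group_error(threshold, scores, labels, groups):
    """Error rate of the worst-off sub-population for the threshold
    classifier y_hat = 1[score >= threshold]."""
    preds = (scores >= threshold).astype(int)
    return max(
        np.mean(preds[groups == g] != labels[groups == g])
        for g in np.unique(groups)
    )

def rawls_threshold(scores, labels, groups):
    """Empirical Rawls classifier restricted to threshold classifiers on
    a fixed black-box score: pick the cutoff minimizing worst-group error.
    (A full sweep would also try a cutoff above the max score, i.e. the
    all-negative classifier.)"""
    candidates = np.unique(scores)
    return min(candidates,
               key=lambda t: worst_group_error(t, scores, labels, groups))

def cantelli_group_error_bound(threshold, mu_pos, var_pos, mu_neg, var_neg):
    """Distribution-free upper bound on one sub-population's error using
    only second-order statistics of the score among its positives and
    negatives, via Cantelli's inequality. This mirrors the moment-based
    relaxation behind ambiguous chance constraints; the paper's exact
    formulation may differ."""
    # False negatives: P(score < t | y=1) <= var / (var + (mu - t)^2) when t < mu.
    fn = 1.0 if threshold >= mu_pos else var_pos / (var_pos + (mu_pos - threshold) ** 2)
    # False positives: P(score >= t | y=0) <= var / (var + (t - mu)^2) when t > mu.
    fp = 1.0 if threshold <= mu_neg else var_neg / (var_neg + (threshold - mu_neg) ** 2)
    return max(fn, fp)  # crude combination; a class-prior-weighted sum is tighter
```

A fuller treatment along the paper's lines would minimize the worst group's Cantelli-style bound over thresholds (so that only per-group means and variances, not individual examples, are needed), and, for a feature representation, search over linear threshold classifiers rather than a cutoff on a single score.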

