Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification

10/02/2021
by Guy N. Rothblum et al.

In many machine learning settings there is an inherent tension between fairness and accuracy desiderata. How should one proceed in light of such tradeoffs? In this work we introduce and study γ-disqualification, a new framework for reasoning about fairness-accuracy tradeoffs with respect to a benchmark class H in the context of supervised learning. Our requirement stipulates that a classifier should be disqualified if its fairness can be improved by switching to another classifier from H without paying "too much" in accuracy. The notion of "too much" is quantified via a parameter γ that serves as a vehicle for specifying acceptable tradeoffs between accuracy and fairness, in a way that is independent of the specific metrics used to quantify fairness and accuracy in a given task. Towards this objective, we establish principled translations between units of accuracy and units of (un)fairness for different accuracy measures. We show that γ-disqualification can be used to easily compare different learning strategies in terms of how they trade off fairness and accuracy, and we give an efficient reduction from the problem of finding the optimal classifier satisfying our requirement to the problem of approximating the Pareto frontier of H.
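
To make the abstract's criterion concrete, here is a minimal Python sketch of a disqualification check. It assumes the trade-off condition takes the form "accuracy cost at most γ times the fairness gain"; the paper's exact exchange rate between units of accuracy and units of (un)fairness may differ. All function and variable names (is_gamma_disqualified, unfairness, error, groups) are hypothetical illustrations, not the authors' API.

```python
import numpy as np

def is_gamma_disqualified(h, benchmark_H, X, y, groups,
                          unfairness, error, gamma):
    """Check whether classifier h is gamma-disqualified by the benchmark
    class H: h is disqualified if some h' in H is strictly fairer and the
    accuracy paid for that improvement is at most gamma units of accuracy
    per unit of fairness gained. (Illustrative form of the criterion only.)
    """
    err_h = error(h, X, y)
    unf_h = unfairness(h, X, groups)
    for h_alt in benchmark_H:
        fairness_gain = unf_h - unfairness(h_alt, X, groups)
        accuracy_cost = error(h_alt, X, y) - err_h
        # h' must strictly improve fairness, and its accuracy cost must
        # stay within the budget set by gamma.
        if fairness_gain > 0 and accuracy_cost <= gamma * fairness_gain:
            return True
    return False

# Example plug-in metrics (also hypothetical): 0/1 error, and the gap in
# positive-prediction rates across groups (a demographic-parity-style gap).
def error(h, X, y):
    return np.mean(h(X) != y)

def unfairness(h, X, groups):
    preds = h(X)
    rates = [np.mean(preds[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)
```

Under this reading, larger γ tolerates a higher accuracy price for a given fairness improvement, so more classifiers become disqualified; γ = 0 disqualifies h only when some h' in H improves fairness at no accuracy cost at all.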
