Fairness for Whom? Critically reframing fairness with Nash Welfare Product

by Ansh Patel et al.

Recent studies on disparate impact in machine learning applications have sparked a debate around the concept of fairness, along with attempts to formalize its different criteria. Many of these approaches focus on reducing prediction errors while maximizing the sole utility of the institution. This work seeks to reconceptualize and critically reframe the existing discourse on fairness by underlining the implicit biases embedded in common understandings of fairness in the literature and showing how they contrast with the corresponding economic and legal definitions. We expand the concepts of utility and fairness by drawing on established literature in welfare economics and game theory. We then translate these concepts to the algorithmic prediction domain by formalizing a Nash Welfare Product that broadens utility by collapsing that of the institution using the prediction tool and that of the individual subject to the prediction into a single function. We further apply a modulating function that makes the fairness and welfare trade-offs explicit under designated policy goals, and embed it in a temporal model to account for the effects of decisions beyond the scope of one-shot predictions. We apply this framework to a binary classification problem and present results of a multi-epoch simulation on the UCI Adult Income dataset and a test-case analysis of the ProPublica recidivism dataset, showing that expanding the concept of utility yields a fairer distribution that corrects for biases embedded in the dataset without sacrificing classifier accuracy.
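The core idea of collapsing institutional and individual utility into one Nash-product-style objective can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the assumption of strictly positive per-decision utilities, and the modulating exponent `alpha` are all ours.

```python
import numpy as np

def nash_welfare_product(u_inst, u_indiv, alpha=0.5):
    """Illustrative sketch of a Nash-welfare-style objective.

    u_inst  : per-decision utilities of the institution (assumed > 0)
    u_indiv : per-decision utilities of the individuals (assumed > 0)
    alpha   : hypothetical modulating weight trading off institutional
              versus individual welfare (a stand-in for the paper's
              modulating function)
    """
    u_inst = np.asarray(u_inst, dtype=float)
    u_indiv = np.asarray(u_indiv, dtype=float)
    # Collapse the two utilities into one per-decision term via a
    # weighted geometric combination, then take the Nash product
    # across all decisions.
    combined = (u_inst ** alpha) * (u_indiv ** (1.0 - alpha))
    return float(np.prod(combined))
```

Because the objective is multiplicative, any single decision that drives an individual's utility toward zero drives the whole product toward zero, which is what makes the Nash product sensitive to the worst-off party rather than only to aggregate institutional gain.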


