A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs

06/06/2022
by Corinna Hertweck, et al.

In prediction-based decision-making systems, different perspectives can be at odds: the short-term business goals of the decision makers often conflict with the decision subjects' wish to be treated fairly. Balancing these two perspectives is a question of values. We provide a framework that makes these value-laden choices clearly visible. We assume that a trained model is given and that we want to find decision rules that balance the perspectives of the decision maker and the decision subjects. We formalize both perspectives, i.e., we assess the utility of the decision maker and the fairness towards the decision subjects. In both cases, the idea is to elicit values from decision makers and decision subjects and to turn them into something measurable. For the fairness evaluation, we build on the literature on welfare-based fairness and ask what a fair distribution of utility (or welfare) looks like, drawing on well-known theories of distributive justice. This allows us to derive a fairness score that we then compare to the decision maker's utility across many different decision rules. In this way, we obtain an approach for balancing the utility of the decision maker and the fairness towards the decision subjects in a prediction-based decision-making system.
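The abstract stays at the conceptual level, so the following sketch only illustrates what such a comparison could look like in practice; it is not the authors' implementation. It assumes a simple setting: decision rules are thresholds on a trained model's scores, the decision maker's utility is a benefit/cost balance over true and false positives, and the fairness score is a Rawlsian maximin welfare over the average utility of two groups of decision subjects. All function names (decision_maker_utility, maximin_fairness, sweep_decision_rules), the welfare function, and the numerical weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): decision rules are
# thresholds on a trained model's scores. For each threshold we compute
# (1) the decision maker's utility and (2) a welfare-based fairness score,
# here a Rawlsian maximin welfare over the average utility of two groups
# of decision subjects. All weights and names are illustrative assumptions.

def decision_maker_utility(y_true, accepted, u_tp=1.0, u_fp=-1.0):
    """Average utility per decision: benefit for true positives, cost for false positives."""
    tp = np.sum(accepted & (y_true == 1))
    fp = np.sum(accepted & (y_true == 0))
    return (u_tp * tp + u_fp * fp) / len(y_true)

def group_utilities(accepted, groups, u_accept=1.0, u_reject=0.0):
    """Average utility that the decision subjects in each group receive."""
    subject_utility = np.where(accepted, u_accept, u_reject)
    return {g: subject_utility[groups == g].mean() for g in np.unique(groups)}

def maximin_fairness(utility_by_group):
    """Rawlsian welfare: the utility of the worst-off group (higher means fairer here)."""
    return min(utility_by_group.values())

def sweep_decision_rules(scores, y_true, groups, thresholds):
    """Evaluate decision-maker utility and fairness for a range of decision rules."""
    results = []
    for t in thresholds:
        accepted = scores >= t
        u_dm = decision_maker_utility(y_true, accepted)
        fairness = maximin_fairness(group_utilities(accepted, groups))
        results.append((t, u_dm, fairness))
    return results

# Toy data standing in for a trained model's scores, labels, and two groups.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.binomial(1, np.where(groups == "A", 0.6, 0.4))
scores = np.clip(0.3 * y_true + rng.normal(0.4, 0.2, size=1000), 0.0, 1.0)

for t, u_dm, fair in sweep_decision_rules(scores, y_true, groups, np.linspace(0.2, 0.8, 7)):
    print(f"threshold={t:.2f}  decision-maker utility={u_dm:.3f}  maximin fairness={fair:.3f}")
```

Sweeping over the decision rules and reporting both scores side by side makes the trade-off explicit: one can see which decision rules are dominated on both objectives and where increasing the decision maker's utility starts to cost fairness towards the decision subjects.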
