RAGUEL: Recourse-Aware Group Unfairness Elimination

08/30/2022
by Aparajita Haldar et al.

While machine learning and ranking-based systems are in widespread use for sensitive decision-making processes (e.g., determining job candidates, assigning credit scores), they are rife with concerns over unintended biases in their outcomes, which makes algorithmic fairness (e.g., demographic parity, equal opportunity) an objective of interest. 'Algorithmic recourse' offers feasible recovery actions to change unwanted outcomes through the modification of attributes. We introduce the notion of ranked group-level recourse fairness, and develop a 'recourse-aware ranking' solution that satisfies ranked recourse fairness constraints while minimizing the cost of suggested modifications. Our solution suggests interventions that can reorder the ranked list of database records and mitigate group-level unfairness; specifically, disproportionate representation of sub-groups and recourse cost imbalance. This re-ranking identifies the minimum modifications to data points, with these attribute modifications weighted according to their ease of recourse. We then present an efficient block-based extension that enables re-ranking at any granularity (e.g., multiple brackets of bank loan interest rates, multiple pages of search engine results). Evaluation on real datasets shows that, while existing methods may even exacerbate recourse unfairness, our solution – RAGUEL – significantly improves recourse-aware fairness. RAGUEL outperforms alternatives at improving recourse fairness, through a combined process of counterfactual generation and re-ranking, whilst remaining efficient for large-scale datasets.
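The core idea of recourse-aware re-ranking can be illustrated with a toy sketch. This is not the authors' RAGUEL algorithm, just a hypothetical greedy baseline under assumed inputs: each record carries a score, a group label (`"A"` denotes the protected group), and an illustrative recourse cost. The sketch enforces a minimum protected-group share in every prefix of the ranking and, when choosing among candidates, breaks ties toward records that would be cheapest to modify.

```python
# Toy sketch (NOT the RAGUEL algorithm): greedy re-ranking that keeps a
# minimum protected-group share in every top-k prefix, preferring records
# with lower hypothetical recourse cost when scores tie.

def rerank(records, min_share=0.4):
    """records: list of dicts with 'score', 'group' ('A' = protected),
    and 'recourse_cost'. Returns a re-ranked list in which each prefix
    contains at least min_share protected records whenever possible."""
    pool = list(records)
    ranked = []
    while pool:
        protected_so_far = sum(1 for r in ranked if r["group"] == "A")
        # Would the next prefix fall below the required protected share?
        need_protected = protected_so_far < min_share * (len(ranked) + 1)
        candidates = [r for r in pool if r["group"] == "A"] if need_protected else pool
        if not candidates:  # no protected records left; fall back to all
            candidates = pool
        # Highest score first; among equal scores, cheaper recourse wins.
        best = min(candidates, key=lambda r: (-r["score"], r["recourse_cost"]))
        ranked.append(best)
        pool.remove(best)
    return ranked


if __name__ == "__main__":
    records = [
        {"score": 0.9, "group": "B", "recourse_cost": 2},
        {"score": 0.8, "group": "B", "recourse_cost": 1},
        {"score": 0.7, "group": "A", "recourse_cost": 3},
        {"score": 0.6, "group": "A", "recourse_cost": 1},
    ]
    for r in rerank(records):
        print(r["group"], r["score"])
```

On this example the output order alternates A, B, A, B: the protected-group record with score 0.7 is promoted above a higher-scoring record so that every prefix keeps at least a 40% protected share. A real system would couple this with counterfactual generation to compute the actual attribute modifications, as the paper describes.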


