A Pre-processing Method for Fairness in Ranking

10/29/2021
by Ryosuke Sonoda, et al.

Fair ranking problems arise in many decision-making processes and often necessitate a trade-off between accuracy and fairness. Many existing studies propose in-processing corrections, such as adding fairness constraints to a ranking model's loss. However, correcting the bias in the training data itself remains a challenge, and the accuracy and fairness trade-off of existing ranking models leaves room for improvement. In this paper, we propose a fair ranking framework that evaluates the order of the training data in a pairwise manner and supports various fairness measures for ranking. To the best of our knowledge, this is the first pre-processing method that solves fair ranking problems via pairwise ordering. Fair pairwise ordering is well suited to training fair ranking models because it ensures that the resulting ranking is likely to achieve parity across groups. We prove that, as long as the fairness measures can be expressed as linear constraints on the ranking model, minimizing the loss function subject to those constraints reduces to a closed-form solution of the minimization problem augmented with weights on the training data. This closed-form solution motivates a practical and stable algorithm that alternates between optimizing the weights and the model parameters. Empirical results on real-world datasets demonstrate that our method outperforms existing methods in the trade-off between accuracy and fairness across various fairness measures.
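The abstract describes an alternating procedure: fit a pairwise ranking model on weighted training pairs, then update the per-pair weights using a linear fairness statistic, and repeat. The sketch below is a minimal illustration of that idea, not the authors' algorithm: it assumes a linear scoring function, a logistic pairwise loss, and a simple multiplicative weight update in place of the paper's closed-form solution; all names (fit_linear_scorer, fair_step, group_sign, and so on) are illustrative.

import numpy as np

def fit_linear_scorer(X_i, X_j, weights, lr=0.1, n_steps=200):
    # Gradient descent on a weighted logistic pairwise loss for a linear
    # scorer w.x, where item i is preferred over item j in each pair.
    w = np.zeros(X_i.shape[1])
    for _ in range(n_steps):
        margins = (X_i - X_j) @ w
        sigma = 1.0 / (1.0 + np.exp(margins))  # derivative factor of log(1 + exp(-margin))
        grad = -((weights * sigma)[:, None] * (X_i - X_j)).mean(axis=0)
        w -= lr * grad
    return w

def fairness_violation(X_i, X_j, group_sign, w):
    # A linear fairness statistic: signed mean pairwise margin, where
    # group_sign is +1 for pairs whose preferred item belongs to the
    # protected group and -1 otherwise (an illustrative choice).
    return np.mean(group_sign * ((X_i - X_j) @ w))

def fair_preprocess_rank(X_i, X_j, group_sign, n_rounds=10, fair_step=0.5):
    # Alternate between fitting the scorer and re-weighting the training
    # pairs (sketch only; the paper derives a closed-form weight update).
    weights = np.ones(len(X_i))
    w = np.zeros(X_i.shape[1])
    for _ in range(n_rounds):
        w = fit_linear_scorer(X_i, X_j, weights)
        v = fairness_violation(X_i, X_j, group_sign, w)
        # Up-weight pairs whose ordering reduces the violation, down-weight
        # the rest, then renormalize to keep the effective sample size.
        weights *= np.exp(-fair_step * v * group_sign)
        weights *= len(weights) / weights.sum()
    return w, weights

# Illustrative usage on synthetic pairs:
rng = np.random.default_rng(0)
X_i = rng.normal(size=(500, 5))                  # features of the preferred item in each pair
X_j = rng.normal(size=(500, 5))                  # features of the other item
group_sign = rng.choice([-1.0, 1.0], size=500)   # group membership signal per pair
w, weights = fair_preprocess_rank(X_i, X_j, group_sign)

Because the weights are attached to the data rather than to the model's loss, the same re-weighted pairs could in principle be fed to any pairwise learning-to-rank method, which is the appeal of a pre-processing approach.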

Related research

Pairwise Fairness for Ranking and Regression (06/12/2019)
We present pairwise metrics of fairness for ranking and regression model...

Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness (08/25/2023)
In learning-to-rank (LTR), optimizing only the relevance (or the expecte...

Ranking for Individual and Group Fairness Simultaneously (09/24/2020)
Search and recommendation systems, such as search engines, recruiting to...

Debiasing Neural Retrieval via In-batch Balancing Regularization (05/18/2022)
People frequently interact with information retrieval (IR) systems, howe...

Improving Fair Training under Correlation Shifts (02/05/2023)
Model fairness is an essential element for Trustworthy AI. While many te...

FairShap: A Data Re-weighting Approach for Algorithmic Fairness based on Shapley Values (03/03/2023)
In this paper, we propose FairShap, a novel and interpretable pre-proces...

Explainable Disparity Compensation for Efficient Fair Ranking (07/25/2023)
Ranking functions that are used in decision systems often produce dispar...
