Individually Fair Ranking

03/19/2021
by Amanda Bower, et al.

We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures that items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than prior fair LTR approaches, which simply ensure that the ranking model provides underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness, together with an efficient algorithm for optimizing it. We show that our approach leads to certifiably individually fair LTR models and demonstrate its efficacy on ranking tasks subject to demographic biases.
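
To make the idea concrete, here is a minimal PyTorch sketch of an optimal transport-based fairness regularizer for a scoring model. It is an illustrative stand-in, not the paper's actual algorithm: the functions sinkhorn_plan and ot_fairness_penalty, the squared-Euclidean "fair" cost on item features, and all hyperparameters (eps, lam, iteration counts) are assumptions made for this example. The regularizer softly matches each minority item to similar majority items via entropic OT on item features, then penalizes score gaps along that matching, so that comparable items end up ranked near one another.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn.functional as F


def sinkhorn_plan(cost, eps=0.1, n_iters=100):
    """Entropic-OT transport plan between two uniform distributions,
    computed with numerically stable log-domain Sinkhorn iterations."""
    n, m = cost.shape
    log_a = torch.full((n,), 1.0 / n).log()   # uniform source marginal
    log_b = torch.full((m,), 1.0 / m).log()   # uniform target marginal
    log_K = -cost / eps                       # log Gibbs kernel
    u, v = torch.zeros(n), torch.zeros(m)
    for _ in range(n_iters):                  # alternating dual updates
        u = log_a - torch.logsumexp(log_K + v[None, :], dim=1)
        v = log_b - torch.logsumexp(log_K + u[:, None], dim=0)
    return torch.exp(log_K + u[:, None] + v[None, :])


def ot_fairness_penalty(scores_min, scores_maj, feats_min, feats_maj, eps=0.1):
    """Softly match minority items to comparable majority items via OT on
    a feature metric, then penalize score gaps along the matching."""
    cost = torch.cdist(feats_min, feats_maj) ** 2   # stand-in "fair" metric
    plan = sinkhorn_plan(cost, eps)                 # depends on features only
    gaps = (scores_min[:, None] - scores_maj[None, :]) ** 2
    return (plan * gaps).sum()


# Toy usage: linear scorer, pairwise logistic ranking loss + OT penalty,
# all on synthetic data.
torch.manual_seed(0)
d = 5
w = torch.zeros(d, requires_grad=True)
feats_min, feats_maj = torch.randn(8, d), torch.randn(12, d)
rel_min, rel_maj = torch.rand(8), torch.rand(12)    # synthetic relevance
opt = torch.optim.Adam([w], lr=0.05)
lam = 1.0                                           # fairness strength
for step in range(200):
    s_min, s_maj = feats_min @ w, feats_maj @ w
    scores = torch.cat([s_min, s_maj])
    rel = torch.cat([rel_min, rel_maj])
    diff = scores[:, None] - scores[None, :]        # all score differences
    label = (rel[:, None] > rel[None, :]).float()   # pairwise preferences
    rank_loss = (label * F.softplus(-diff)).mean()
    loss = rank_loss + lam * ot_fairness_penalty(s_min, s_maj,
                                                 feats_min, feats_maj)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

One convenient property of this formulation: the transport plan depends only on the fixed item features, so gradients flow through the score gaps alone, which keeps the penalty cheap to backpropagate.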

Related research:

06/21/2023 · Sampling Individually-Fair Rankings that are Always Group Fair
Rankings on online platforms help their end-users find the relevant info...

06/06/2023 · Matched Pair Calibration for Ranking Fairness
We propose a test of fairness in score-based ranking systems called matc...

09/03/2019 · Avoiding Resentment Via Monotonic Fairness
Classifiers that achieve demographic balance by explicitly using protect...

03/02/2021 · The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer
Learning and reasoning over graphs is increasingly done by means of prob...

07/27/2022 · Should Bank Stress Tests Be Fair?
Regulatory stress tests have become the primary tool for setting capital...

05/05/2021 · When Fair Ranking Meets Uncertain Inference
Existing fair ranking systems, especially those designed to be demograph...

11/01/2021 · Calibrating Explore-Exploit Trade-off for Fair Online Learning to Rank
Online learning to rank (OL2R) has attracted great research interests in...
