Fairness for Robust Learning to Rank

12/12/2021
by Omid Memarrast, et al.

While conventional ranking systems focus solely on maximizing the utility of the ranked items to users, fairness-aware ranking systems additionally try to balance exposure across groups defined by protected attributes such as gender or race. To achieve this type of group fairness for ranking, we derive a new ranking system from the first principles of distributional robustness. We formulate a minimax game between a player choosing a distribution over rankings to maximize utility while satisfying fairness constraints, and an adversary seeking to minimize utility while matching statistics of the training data. We show that our approach provides better utility for highly fair rankings than existing baseline methods.
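To make the exposure notion concrete, below is a minimal sketch, not the paper's implementation: it assumes a standard logarithmic position-bias model for exposure and shows how a distribution over rankings (rather than a single deterministic ranking) can equalize expected group exposure. All function names and the toy data are illustrative assumptions; the paper's actual method derives the ranking distribution from the distributionally robust minimax game rather than fixing it by hand.

```python
import numpy as np

def position_weights(n):
    """Standard position-bias exposure weights: 1 / log2(rank + 1) for ranks 1..n."""
    return 1.0 / np.log2(np.arange(2, n + 2))

def expected_item_exposure(rankings, probs):
    """Expected exposure of each item under a distribution over rankings.

    rankings: list of permutations (each a list of item indices)
    probs:    probability of sampling each ranking
    """
    n = len(rankings[0])
    w = position_weights(n)
    exposure = np.zeros(n)
    for ranking, p in zip(rankings, probs):
        for pos, item in enumerate(ranking):
            exposure[item] += p * w[pos]
    return exposure

def group_exposure_gap(exposure, groups):
    """Absolute difference in mean per-item exposure between two groups (0 and 1)."""
    groups = np.asarray(groups)
    return abs(exposure[groups == 0].mean() - exposure[groups == 1].mean())

# Toy example: 4 items; items 0-1 belong to group 0, items 2-3 to group 1.
rankings = [[0, 1, 2, 3], [2, 3, 0, 1]]
groups = [0, 0, 1, 1]

# A deterministic policy that always puts group 0 on top has a large exposure gap...
print(group_exposure_gap(expected_item_exposure(rankings, [1.0, 0.0]), groups))
# ...while an even mixture of the two rankings drives the gap to zero.
print(group_exposure_gap(expected_item_exposure(rankings, [0.5, 0.5]), groups))
```

In the toy run, the deterministic policy leaves one group systematically under-exposed, while the even mixture equalizes expected group exposure. Randomizing over rankings in this way is the basic lever that a fairness-constrained distribution over rankings provides, and it is the space over which the minimax formulation above optimizes.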


Related research

Fairness of Exposure in Rankings (02/20/2018)
Rankings are ubiquitous in the online world today. As we have transition...

Group Membership Bias (08/05/2023)
When learning to rank from user interactions, search and recommendation ...

Reducing Disparate Exposure in Ranking: A Learning To Rank Approach (05/22/2018)
In this paper we consider a ranking problem in which we would like to or...

Maximizing Marginal Fairness for Dynamic Learning to Rank (02/18/2021)
Rankings, especially those in search and recommendation systems, often d...

FairFuse: Interactive Visual Support for Fair Consensus Ranking (07/15/2022)
Fair consensus building combines the preferences of multiple rankers int...

Towards Measuring Fairness in Grid Layout in Recommender Systems (09/19/2023)
There has been significant research in the last five years on ensuring t...

xOrder: A Model Agnostic Post-Processing Framework for Achieving Ranking Fairness While Maintaining Algorithm Utility (06/15/2020)
Algorithmic fairness has received lots of interests in machine learning ...
