Group Membership Bias

08/05/2023
by Ali Vardasbi, et al.

When learning to rank from user interactions, search and recommendation systems must address biases in user behavior to provide a high-quality ranking. One type of bias that has recently been studied in the ranking literature arises when sensitive attributes, such as gender, influence a user's judgment about an item's utility. For example, in a search for an expertise area, some users may be biased towards clicking on male candidates over female candidates. We call this type of bias group membership bias, or group bias for short. Increasingly, we seek rankings that not only have high utility but are also fair to individuals and sensitive groups. Merit-based fairness measures rely on the estimated merit or utility of the items. Under group bias, the utility of the sensitive groups is underestimated; hence, without correcting for this bias, a supposedly fair ranking is not truly fair. In this paper, we first analyze the impact of group bias on ranking quality as well as on two well-known merit-based fairness metrics, and show that group bias can hurt both ranking quality and fairness. We then provide a correction method for group bias that is based on the assumption that the utility scores of items in different groups come from the same distribution. This assumption raises two potential issues, sparsity and equality-instead-of-equity, which we address with an amortized approach. We show that our correction method consistently compensates for the negative impact of group bias on ranking quality and fairness metrics.
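
To make the equal-distribution assumption more concrete, below is a minimal sketch of one way a per-group correction could be estimated: scale each group's biased utility estimates so that its mean matches the overall mean. The function names, the mean-matching choice, and the toy data are illustrative assumptions for this sketch only; they are not the paper's amortized method, which additionally handles sparsity and equality-instead-of-equity.

```python
import numpy as np

def estimate_group_correction(scores, groups):
    """Estimate per-group multiplicative correction factors under the
    (assumed) equal-distribution condition: after correction, each group's
    mean estimated utility should match the overall mean.
    `scores` are biased utility estimates (e.g., click-through rates);
    `groups` are the corresponding group labels."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    overall_mean = scores.mean()
    factors = {}
    for g in np.unique(groups):
        group_mean = scores[groups == g].mean()
        # Guard against degenerate groups (related to the sparsity issue).
        factors[g] = overall_mean / group_mean if group_mean > 0 else 1.0
    return factors

def correct_scores(scores, groups, factors):
    """Apply the per-group factors to obtain de-biased utility estimates."""
    return np.array([s * factors[g] for s, g in zip(scores, groups)])

# Toy usage: group 'B' is clicked less due to group bias, not lower merit.
scores = [0.30, 0.28, 0.15, 0.14]
groups = ['A', 'A', 'B', 'B']
factors = estimate_group_correction(scores, groups)
print(correct_scores(scores, groups, factors))
```

In this sketch, matching per-group means is the simplest way to enforce the shared-distribution assumption; matching richer statistics (e.g., quantiles) would be a natural extension but is beyond what this example shows.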

Related research

12/12/2021
Fairness for Robust Learning to Rank
While conventional ranking systems focus solely on maximizing the utilit...

06/19/2023
Correcting Underrepresentation and Intersectional Bias for Fair Classification
We consider the problem of learning from data corrupted by underrepresen...

09/24/2020
Ranking for Individual and Group Fairness Simultaneously
Search and recommendation systems, such as search engines, recruiting to...

01/30/2019
Noise-tolerant fair classification
Fair machine learning concerns the analysis and design of learning algor...

09/18/2023
Predictive Uncertainty-based Bias Mitigation in Ranking
Societal biases that are contained in retrieved documents have received ...

03/30/2022
Robust Reputation Independence in Ranking Systems for Multiple Sensitive Attributes
Ranking systems have an unprecedented influence on how and what informat...

05/11/2022
De-biasing "bias" measurement
When a model's performance differs across socially or culturally relevan...
