Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers

04/28/2021
by   Navid Rekabsaz, et al.

Societal biases resonate in the retrieved contents of information retrieval (IR) systems, reinforcing existing stereotypes. Addressing this issue requires established measures of fairness with respect to the representation of various social groups in retrieval results, as well as methods to mitigate such biases, particularly in light of advances in deep ranking models. In this work, we first provide a novel framework to measure fairness in the retrieved text contents of ranking models. By introducing a ranker-agnostic measurement, the framework also enables disentangling the effect of the collection on fairness from that of the rankers. To mitigate these biases, we propose AdvBert, a ranking model that adapts adversarial bias mitigation to IR and jointly learns to predict relevance and to remove protected attributes. We conduct experiments on two passage retrieval collections (MSMARCO Passage Re-ranking and TREC Deep Learning 2019 Passage Re-ranking), which we extend with fairness annotations for a selected subset of queries regarding gender attributes. Our results on the MSMARCO benchmark show that (1) all ranking models are less fair than the ranker-agnostic baselines, and (2) the fairness of BERT rankers significantly improves when using the proposed AdvBert models. Lastly, we investigate the trade-off between fairness and utility, showing that the significant improvements in fairness can be maintained without any significant loss in utility.
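The joint objective described above (predicting relevance while removing protected attributes) is typically realized with a gradient reversal layer: an adversarial head is trained to recover the protected attribute from the encoder's representation, while reversed gradients push the encoder to discard that information. The sketch below is a minimal, hedged illustration of this pattern in PyTorch; the tiny linear encoder, head sizes, and the `AdvRanker` name are placeholder assumptions (the paper's actual model is built on BERT), not the authors' implementation.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambd backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class AdvRanker(nn.Module):
    """Toy adversarial ranker; a small linear encoder stands in for BERT."""

    def __init__(self, dim=32, hidden=16, lambd=1.0):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden)   # placeholder for a BERT encoder
        self.rel_head = nn.Linear(hidden, 1)    # relevance score head
        self.adv_head = nn.Linear(hidden, 2)    # protected-attribute classifier
        self.lambd = lambd

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        rel = self.rel_head(h)
        # The adversary sees h through the reversal layer, so minimizing its
        # loss trains the encoder to *remove* the protected attribute.
        adv = self.adv_head(GradReverse.apply(h, self.lambd))
        return rel, adv


model = AdvRanker()
x = torch.randn(4, 32)
rel, adv = model(x)  # rel: (4, 1) relevance scores, adv: (4, 2) attribute logits
```

In training, one would sum a relevance loss on `rel` and a cross-entropy loss on `adv`; because of the reversal, the single combined objective simultaneously improves ranking utility and suppresses the protected signal, matching the fairness/utility trade-off discussed in the abstract.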


Related research

05/18/2022 · Debiasing Neural Retrieval via In-batch Balancing Regularization
People frequently interact with information retrieval (IR) systems, howe...

09/18/2023 · Predictive Uncertainty-based Bias Mitigation in Ranking
Societal biases that are contained in retrieved documents have received ...

05/01/2020 · Do Neural Ranking Models Intensify Gender Bias?
Concerns regarding the footprint of societal biases in information retri...

11/30/2022 · Fair Ranking with Noisy Protected Attributes
The fair-ranking problem, which asks to rank a given set of items to max...

05/06/2023 · Fairness in Image Search: A Study of Occupational Stereotyping in Image Retrieval and its Debiasing
Multi-modal search engines have experienced significant growth and wides...

07/27/2023 · Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment
Algorithmic fairness has been a serious concern and received lots of int...

11/03/2020 · University of Washington at TREC 2020 Fairness Ranking Track
InfoSeeking Lab's FATE (Fairness Accountability Transparency Ethics) gro...
