Fair Ranking with Noisy Protected Attributes

11/30/2022
by Anay Mehrotra, et al.

The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature. Recent works, however, observe that errors in the socially-salient (including protected) attributes of items can significantly undermine the fairness guarantees of existing fair-ranking algorithms, and they raise the problem of mitigating the effect of such errors. We study the fair-ranking problem under a model where the socially-salient attributes of items are randomly and independently perturbed. We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about the perturbations in socially-salient attributes. We provide provable guarantees on the fairness and utility attainable by our framework and show that it is information-theoretically impossible to significantly beat these guarantees. Our framework works for multiple non-disjoint attributes and a general class of fairness constraints that includes proportional and equal representation. Empirically, we observe that, compared to baselines, our algorithm outputs rankings with higher fairness and achieves a similar or better fairness-utility trade-off.
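To make the perturbation model concrete, below is a minimal sketch (not the authors' framework or algorithm) of the setting the abstract describes: each item's observed group attribute is an independent random perturbation of its true attribute, and probabilistic knowledge of the perturbation lets one reason about group representation in a ranking in expectation. The confusion matrix `P`, the uniform prior, the top-k cutoff, and the tolerance `eps` are all illustrative assumptions.

```python
import numpy as np

# Sketch of the noise model: each item's observed group attribute is an
# independent random perturbation of its true attribute, governed by a
# known confusion matrix P, where P[a, b] = Pr[observed = b | true = a].
# (Illustrative assumption; the paper's framework is more general.)

rng = np.random.default_rng(0)

n_items, n_groups = 10, 2
true_groups = rng.integers(0, n_groups, size=n_items)

# Hypothetical perturbation: flip a binary attribute with probability 0.2.
P = np.array([[0.8, 0.2],
              [0.2, 0.8]])

observed = np.array([rng.choice(n_groups, p=P[g]) for g in true_groups])

# Posterior Pr[true group = a | observed = b] under a uniform prior
# (a simplifying assumption made only for this sketch).
prior = np.full(n_groups, 1.0 / n_groups)
joint = prior[:, None] * P              # joint[a, b] = Pr[true = a, obs = b]
posterior = joint / joint.sum(axis=0)   # columns indexed by observed label

# Expected representation of each true group in the top-k of a
# utility-maximizing ranking, computed from the posterior.
utilities = rng.random(n_items)
top_k = np.argsort(-utilities)[:5]
expected_counts = posterior[:, observed[top_k]].sum(axis=1)
print("expected group counts in top-5:", expected_counts)

# A proportional-representation check in expectation: each group should
# fill at least a (1 - eps) fraction of its prior share of the top-k slots.
eps = 0.25
lower = (1 - eps) * prior * len(top_k)
print("constraints satisfied:", bool(np.all(expected_counts >= lower)))
```

The point of the sketch is the shift in perspective: because the attributes are noisy, a fairness constraint can only be imposed on the *expected* group counts computed from the perturbation probabilities, rather than on the observed labels directly, which is the kind of probabilistic information the paper's framework incorporates.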

Related research

06/21/2023 · Sampling Individually-Fair Rankings that are Always Group Fair
Rankings on online platforms help their end-users find the relevant info...

12/11/2018 · Learning Controllable Fair Representations
Learning data representations that are transferable and fair with respec...

06/10/2021 · Fair Classification with Adversarial Perturbations
We study fair classification in the presence of an omniscient adversary ...

08/11/2021 · Estimation of Fair Ranking Metrics with Incomplete Judgments
There is increasing attention to evaluating the fairness of search syste...

01/29/2019 · Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists
In this work we introduce a novel metric for verifying group fairness in...

04/28/2021 · Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers
Societal biases resonate in the retrieved contents of information retrie...

07/27/2023 · Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment
Algorithmic fairness has been a serious concern and received lots of int...
