Robust Reputation Independence in Ranking Systems for Multiple Sensitive Attributes

03/30/2022
by Guilherme Ramos, et al.

Ranking systems have an unprecedented influence on how and what information people access, and their impact on our society is being analyzed from different perspectives, such as discrimination against users. A notable example is reputation-based ranking systems, a class of systems that rely on users' reputation to generate a non-personalized item ranking, which have been shown to be biased against certain demographic classes. To ensure that a given sensitive user attribute does not systematically affect that user's reputation, prior work has operationalized a reputation independence constraint on this class of systems. In this paper, we uncover that guaranteeing reputation independence for a single sensitive attribute is not enough. When mitigating biases based on one sensitive attribute (e.g., gender), the final ranking might still be biased against demographic groups formed based on another attribute (e.g., age). Hence, we propose a novel approach that introduces reputation independence for multiple sensitive attributes simultaneously. We then analyze the extent to which our approach impacts discrimination and other important properties of the ranking system, such as its quality and robustness against attacks. Experiments on two real-world datasets show that our approach leads to rankings that are less biased with respect to multiple sensitive user attributes, without affecting the system's quality and robustness.
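The abstract does not spell out how multi-attribute reputation independence is enforced; one common way to operationalize such a constraint is to adjust user reputations so that no demographic group's average reputation differs from the global average, alternating over attributes because correcting one (e.g., gender) can reopen a gap on another (e.g., age). The sketch below is an illustrative assumption in that spirit, not the authors' actual algorithm; the function names and the mean-matching criterion are hypothetical.

```python
from collections import defaultdict

def decorrelate(reputation, groups):
    """Shift each group's reputations so every group mean equals the
    global mean, removing the group-level reputation gap for one attribute."""
    global_mean = sum(reputation) / len(reputation)
    by_group = defaultdict(list)
    for rep, grp in zip(reputation, groups):
        by_group[grp].append(rep)
    shift = {g: global_mean - sum(v) / len(v) for g, v in by_group.items()}
    return [rep + shift[grp] for rep, grp in zip(reputation, groups)]

def multi_attribute_independence(reputation, attributes, n_passes=50):
    """Alternate the per-attribute correction over all sensitive attributes.

    Fixing one attribute can reintroduce a group-mean gap for another, so
    a single pass per attribute is generally not enough; iterating the
    corrections drives all gaps toward zero simultaneously.
    """
    rep = list(reputation)
    for _ in range(n_passes):
        for attr in attributes:
            rep = decorrelate(rep, attr)
    return rep
```

Each `decorrelate` call is an orthogonal projection onto the set of reputation vectors whose group means (for that attribute) all equal the global mean, so alternating the projections converges to a vector satisfying the constraint for every attribute at once — illustrating why single-attribute debiasing, stopped after one attribute, can leave the ranking biased on the others.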


research
05/25/2020

Reputation (In)dependence in Ranking Systems: Demographics Influence Over Output Disparities

Recent literature on ranking systems (RS) has considered users' exposure...
research
08/05/2023

Group Membership Bias

When learning to rank from user interactions, search and recommendation ...
research
09/24/2021

Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding

Written language carries explicit and implicit biases that can distract ...
research
10/15/2018

Assessing and Remedying Coverage for a Given Dataset

Data analysis impacts virtually every aspect of our society today. Often...
research
08/17/2021

Identifying Biased Subgroups in Ranking and Classification

When analyzing the behavior of machine learning algorithms, it is import...
research
06/07/2023

M^3Fair: Mitigating Bias in Healthcare Data through Multi-Level and Multi-Sensitive-Attribute Reweighting Method

In the data-driven artificial intelligence paradigm, models heavily rely...
research
08/09/2020

Diverse Group Formation Based on Multiple Demographic Features

The goal of group formation is to build a team to accomplish a specific ...
