Exploring and Mitigating Gender Bias in Recommender Systems with Explicit Feedback

12/05/2021
by Shrikant Saxena, et al.

Recommender systems are indispensable: they shape our day-to-day behavior and decisions by giving us personalized suggestions. Services such as Kindle, YouTube, and Netflix depend heavily on the performance of their recommender systems to ensure that their users have a good experience and to increase revenue. Despite their popularity, recommender systems have been shown to reproduce and amplify biases present in the real world. The resulting feedback creates a self-perpetuating loop that degrades the user experience and homogenizes recommendations over time. Biased recommendations can also reinforce stereotypes based on gender or ethnicity, strengthening the filter bubbles we live in. In this paper, we address the problem of gender bias in recommender systems with explicit feedback. We propose a model to quantify the gender bias present in book rating datasets and in the recommendations produced by recommender systems. Our main contribution is a principled approach to mitigating the bias produced in the recommendations. We show theoretically that the proposed approach provides unbiased recommendations despite biased data. Through empirical evaluation on publicly available book rating datasets, we further show that the proposed model can significantly reduce bias without a significant loss of accuracy. Our method is model agnostic and can be applied to any recommender system. To demonstrate its performance, we present results on four recommender algorithms: two from the k-nearest-neighbors family, UserKNN and ItemKNN, and two from the matrix factorization family, Alternating Least Squares (ALS) and Singular Value Decomposition (SVD).
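The abstract does not spell out the paper's bias metric or mitigation procedure. As a rough, hypothetical illustration of the general idea, the sketch below quantifies gender bias in an explicit-feedback book rating matrix as the mean-rating gap between male- and female-authored books, and removes that gap with a simple per-group mean-centering adjustment. The data, variable names, metric, and adjustment are all assumptions for illustration, not the authors' definitions.

```python
import numpy as np

# Hypothetical toy data (NOT from the paper): a dense ratings matrix
# (users x books) on a 1-5 scale, plus the gender of each book's author
# (0 = male-authored, 1 = female-authored).
rng = np.random.default_rng(0)
n_users, n_books = 50, 20
ratings = rng.integers(1, 6, size=(n_users, n_books)).astype(float)
author_gender = np.array([0, 1] * (n_books // 2))  # alternating for the demo

def rating_gap(ratings, author_gender):
    """Illustrative bias measure: mean-rating gap between the two groups."""
    male_mean = ratings[:, author_gender == 0].mean()
    female_mean = ratings[:, author_gender == 1].mean()
    return male_mean - female_mean

def debias(ratings, author_gender):
    """Illustrative mitigation: shift each group's ratings so that both
    group means coincide with the global mean. This is model agnostic in
    the sense that any recommender can then be trained on the adjusted
    matrix; it is only a stand-in for the paper's actual method."""
    out = ratings.copy()
    global_mean = ratings.mean()
    for g in (0, 1):
        cols = author_gender == g
        out[:, cols] += global_mean - ratings[:, cols].mean()
    return out

print("gap before:", round(rating_gap(ratings, author_gender), 3))
adjusted = debias(ratings, author_gender)
print("gap after:", round(rating_gap(adjusted, author_gender), 3))  # ~0
```

After the adjustment, the group-mean gap is (up to floating-point error) zero, so a recommender trained on the adjusted matrix no longer inherits this particular disparity from the data; the paper's own approach additionally comes with theoretical unbiasedness guarantees that this toy shift does not provide.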


Related research:

- Modeling and Counteracting Exposure Bias in Recommender Systems (01/01/2020): What we discover and see online, and consequently our opinions and decis...
- Recommending Dream Jobs in a Biased Real World (05/10/2019): Machine learning models learn what we teach them to learn. Machine learn...
- Providing Previously Unseen Users Fair Recommendations Using Variational Autoencoders (08/29/2023): An emerging definition of fairness in machine learning requires that mod...
- A Survey of Latent Factor Models for Recommender Systems and Personalization (01/02/2021): Recommender systems aim to personalize the experience of a user and are ...
- Unbiased Learning to Rank with Biased Continuous Feedback (03/08/2023): It is a well-known challenge to learn an unbiased ranker with biased fee...
- Recommendations as Treatments: Debiasing Learning and Evaluation (02/17/2016): Most data for evaluating and training recommender systems is subject to ...
- Equal Experience in Recommender Systems (10/12/2022): We explore the fairness issue that arises in recommender systems. Biased...
