Mitigating Mainstream Bias in Recommendation via Cost-sensitive Learning

07/25/2023
by   Roger Zhe Li, et al.

Mainstream bias, where some users receive poor recommendations because their preferences are uncommon or simply because they are less active, is an important aspect of fairness in recommender systems. Existing methods to mitigate mainstream bias either do not explicitly model the importance of these non-mainstream users or, when they do, model it in a way that is not necessarily compatible with the data and recommendation model at hand. In contrast, we use recommendation utility as a more generic and implicit proxy to quantify mainstreamness, and propose a simple user-weighting approach that incorporates it into the training process while taking the cost of potential recommendation errors into account. We provide extensive experimental results showing that quantifying mainstreamness via utility is better at identifying non-mainstream users, and that these users are indeed better served when the model is trained in a cost-sensitive way. This is achieved with negligible or no loss in overall recommendation accuracy, meaning that the models learn a better balance across users. In addition, we show that research of this kind, which evaluates recommendation quality at the individual user level, may not be reliable unless enough interactions are used when assessing model performance.

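The abstract outlines the core recipe: estimate each user's recommendation utility, turn low utility into a larger training weight, and optimize a cost-sensitive loss so that non-mainstream users count for more. The sketch below only illustrates that idea under assumed choices; the inverse-utility weighting, the toy matrix-factorization loss, and the use of a validation utility score (e.g. NDCG@k) as the mainstreamness proxy are illustrative, not the authors' exact implementation.

```python
import numpy as np

def user_weights_from_utility(utility, gamma=1.0, eps=1e-6):
    """Map per-user utility scores (e.g. validation NDCG@k) to training
    weights: the lower a user's utility, the larger the weight.  The
    inverse-utility form and the gamma exponent are illustrative choices."""
    utility = np.asarray(utility, dtype=float)
    weights = 1.0 / (utility + eps) ** gamma
    return weights / weights.mean()          # normalize: average weight is 1

def cost_sensitive_loss(R, P, Q, weights, mask):
    """Weighted squared reconstruction error for a toy matrix-factorization
    model: each user's error is scaled by that user's cost weight."""
    err = ((R - P @ Q.T) ** 2) * mask        # only observed interactions count
    per_user_err = err.sum(axis=1)
    return float((weights * per_user_err).sum() / mask.sum())

# Toy example: 4 users x 5 items; users 2 and 3 are poorly served.
rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(4, 5)).astype(float)    # implicit feedback matrix
mask = (R > 0).astype(float)
P = rng.normal(scale=0.1, size=(4, 3))                # user factors
Q = rng.normal(scale=0.1, size=(5, 3))                # item factors
utility = np.array([0.9, 0.8, 0.2, 0.1])              # per-user utility proxy
w = user_weights_from_utility(utility)
print("user weights:", np.round(w, 2))
print("cost-sensitive loss:", round(cost_sensitive_loss(R, P, Q, w, mask), 4))
```

In practice the utility signal would come from a held-out validation split, and the resulting weights would rescale each user's contribution to whatever loss the underlying recommender optimizes.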
Related research

08/21/2020 · The Connection Between Popularity Bias, Calibration, and Fairness in Recommendation
02/02/2021 · Leave No User Behind: Towards Improving the Utility of Recommender Systems for Non-mainstream Users
06/25/2021 · Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning
11/16/2022 · Mitigating Frequency Bias in Next-Basket Recommendation via Deconfounders
07/29/2023 · Recommendation Unlearning via Matrix Correction
07/06/2023 · BHEISR: Nudging from Bias to Balance – Promoting Belief Harmony by Eliminating Ideological Segregation in Knowledge-based Recommendations
07/03/2014 · Reducing Offline Evaluation Bias in Recommendation Systems
