Two-Sided Fairness in Non-Personalised Recommendations

11/10/2020
by Aadi Swadipto Mondal, et al.

Recommender systems are among the most widely used services on online platforms, suggesting potential items to end-users. These services often rely on machine learning techniques for which fairness is a serious concern, especially when the downstream services can cause social ramifications. Focusing on non-personalised (global) recommendations on news media platforms (e.g., top-k trending topics on Twitter, top-k news stories on a news platform), we discuss two specific fairness concerns together that have traditionally been studied separately—user fairness and organisational fairness. User fairness captures the idea of representing the choices of all individual users in a global recommendation, while organisational fairness tries to ensure politically/ideologically balanced recommendation sets. This makes user fairness a user-side requirement and organisational fairness a platform-side requirement. For user fairness, we test methods from social choice theory, i.e., various voting rules known to better represent user choices in their outcomes. Even when applying these voting rules to the recommendation setup, we observe high user satisfaction scores. For organisational fairness, we propose a bias metric which measures the aggregate ideological bias of a recommended set of items (articles). Analysing the results obtained from voting rule-based recommendation, we find that while the well-known voting rules perform well from the user side, they show high bias values and are clearly not suitable for the organisational requirements of the platforms. Thus, there is a need to build an encompassing mechanism that cohesively bridges the ideas of user fairness and organisational fairness. In this abstract paper, we frame the elementary ideas and motivate the need for such a mechanism.
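To make the two fairness notions concrete, here is a minimal sketch of the pipeline the abstract describes: aggregating per-user ranked preferences into a global top-k via a voting rule, then scoring the aggregate ideological bias of the result. The abstract does not specify which voting rules or the exact bias formula, so the Borda count and the signed mean leaning below are illustrative assumptions, not the paper's method.

```python
from collections import defaultdict

def borda_top_k(rankings, k):
    """Aggregate per-user ranked preference lists with the Borda rule
    (an illustrative voting rule, not necessarily the one used in the paper).
    Each item earns (n - 1 - position) points from each user's ranking;
    ties are broken alphabetically for determinism."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos
    return sorted(scores, key=lambda it: (-scores[it], it))[:k]

def aggregate_bias(recommended, leaning):
    """Hypothetical bias metric: mean ideological leaning of the recommended
    set, with per-article leanings in [-1, +1] (negative = one camp,
    positive = the other). A value near 0 indicates a balanced set."""
    return sum(leaning[a] for a in recommended) / len(recommended)

# Toy example: three users ranking three articles.
rankings = [["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]
top2 = borda_top_k(rankings, 2)          # -> ["b", "a"]
leaning = {"a": -0.5, "b": 0.8, "c": 0.0}
bias = aggregate_bias(top2, leaning)     # mean leaning of the chosen set
```

The tension the abstract points at shows up even in this toy case: the Borda winner set maximises user satisfaction, but nothing in the rule constrains `aggregate_bias`, so a popular but ideologically skewed set can dominate the recommendation.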

research
07/07/2021

A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems

Fairness is a critical system-level objective in recommender systems tha...
research
11/10/2021

Understanding and Mitigating Multi-Sided Exposure Bias in Recommender Systems

Fairness is a critical system-level objective in recommender systems tha...
research
03/25/2020

Unfair Exposure of Artists in Music Recommendation

Fairness in machine learning has been studied by many researchers. In pa...
research
12/02/2020

FAST: A Fairness Assured Service Recommendation Strategy Considering Service Capacity Constraint

An excessive number of customers often leads to a degradation in service...
research
11/29/2021

What Drives Readership? An Online Study on User Interface Types and Popularity Bias Mitigation in News Article Recommendations

Personalized news recommender systems support readers in finding the rig...
research
09/10/2023

Exploring Social Choice Mechanisms for Recommendation Fairness in SCRUF

Fairness problems in recommender systems often have a complexity in prac...
research
07/08/2022

An Approach to Ensure Fairness in News Articles

Recommender systems, information retrieval, and other information access...
