Unfair Exposure of Artists in Music Recommendation

03/25/2020 ∙ Himan Abdollahpouri, et al. ∙ University of Colorado Boulder, TU Eindhoven

Fairness in machine learning has been studied by many researchers. In particular, fairness in recommender systems has been investigated to ensure that recommendations meet certain criteria with respect to sensitive features such as race and gender. However, recommender systems are often multi-stakeholder environments in which fairness towards all stakeholders should be taken into account. It is well known that recommendation algorithms suffer from popularity bias: a few popular items are over-recommended, which leaves the majority of other items without proportionate attention. This bias has been investigated from the perspective of the users and how it skews the final recommendations towards popular items in general. In this paper, however, we investigate the impact of popularity bias in recommendation algorithms on the providers of the items (i.e., the entities behind the recommended items). Using a music dataset for our experiments, we show that, due to certain biases in the algorithms, groups of artists with different degrees of popularity are systematically and consistently treated differently from one another.


1 Introduction

Recommender systems have been widely used in a variety of domains such as movies, music, and online dating. Their goal is to help users find relevant items that would be difficult or time-consuming to find in the absence of such systems. Music streaming services such as Spotify and Pandora have become extremely popular thanks to effective recommendations that help listeners find relevant songs and artists.

One of the limitations of recommendation algorithms is the problem of popularity bias [2]: popular items are recommended too frequently, while the majority of other items do not get the attention they deserve. This bias, and methods to tackle it, have been studied by many researchers, but its impact on the other stakeholders of a recommender system has yet to be explored.

In this paper, we investigate the impact of popularity bias on the fairness of recommendations from the perspective of the providers of the items. We use a sample of the LastFM music dataset created by Kowald et al. [9], containing users' listening histories over individual song tracks. The providers in this case are the artists, since they are the ones who provide the songs. We analyze how groups of artists with different degrees of popularity are served by recommendation algorithms. For our analysis, we use several well-known algorithms: a neighborhood-based model, user-based collaborative filtering (UserKNN); a matrix factorization model, non-negative matrix factorization (NMF); a simple averaging technique that predicts ratings from the average rating of any given item and user (UserItemAvg); an algorithm that recommends items at random (Random); and most-popular item recommendation (Most-pop), which recommends the same top items to everyone.
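As a rough illustration of how such a baseline suite could be assembled, the sketch below uses the scikit-surprise library. The file name, column names, rating scale, and hyperparameters are our own assumptions rather than the configuration actually used in the experiments; UserItemAvg and Most-pop have no direct scikit-surprise counterpart and would need small custom implementations, which we omit.

import pandas as pd
from surprise import Dataset, KNNBasic, NMF, NormalPredictor, Reader

# Assumed preprocessing: a table of (user, artist, play count) interactions.
df = pd.read_csv("lastfm_user_artist_playcounts.csv")  # hypothetical file name
reader = Reader(rating_scale=(1, 1000))                # assumed rating scale
data = Dataset.load_from_df(df[["user_id", "artist_id", "playcount"]], reader)
trainset = data.build_full_trainset()

algorithms = {
    "UserKNN": KNNBasic(sim_options={"user_based": True}),  # user-based CF
    "NMF": NMF(),                                           # matrix factorization
    "Random": NormalPredictor(),  # random rating draws; only a rough stand-in
}
for name, algo in algorithms.items():
    algo.fit(trainset)  # train each baseline on the full interaction data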

2 Related work

The problem of popularity bias and the challenges it creates for recommender systems have been well studied [6, 7, 11]. These works mainly explore the overall accuracy of recommendations in the presence of a long-tail distribution in rating data. Moreover, other researchers have proposed algorithms that control this bias and give long-tail items a greater chance of being recommended [5, 10, 3].

In addition, the impact of this bias on users has been studied by Abdollahpouri et al. [4], who show that users with niche tastes are affected the most by popularity bias. In this work, however, we focus on the fairness of recommendations with respect to artists' expectations. That is, we want to see how popularity bias in the input data causes the recommendations to deviate from the true expectations of different artists.

3 Popularity bias in recommendation

Figure 1(a) shows the distribution of artist popularity in the LastFM dataset. It has an extreme long-tail shape, indicating that a few popular artists account for the majority of the listening interactions. The same plot on a log scale is shown in Figure 1(b) for a smoother illustration. One might argue that these plots show there is no unfairness in the algorithms, since users are clearly interested in certain popular artists. However, we want to show that the algorithms amplify this already existing bias, and it is this amplification that we call unfair.

Figure 1: Artist popularity in the LastFM data. (a) Original scale; (b) log scale.
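As a concrete illustration, a distribution like the one in Figure 1 can be recomputed directly from the interaction log. The sketch below assumes the preprocessed LastFM sample is available as a user/artist/play-count table; the file and column names are our own.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("lastfm_user_artist_playcounts.csv")  # hypothetical file name
# Total play count per artist, sorted from most to least popular.
popularity = (df.groupby("artist_id")["playcount"]
                .sum()
                .sort_values(ascending=False))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(popularity.to_numpy())  # (a) original scale
ax1.set(xlabel="artist rank", ylabel="play count", title="Original")
ax2.plot(popularity.to_numpy())  # (b) log scale for a smoother illustration
ax2.set_yscale("log")
ax2.set(xlabel="artist rank", ylabel="play count (log)", title="Log-scale")
plt.tight_layout()
plt.show()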

4 Fairness towards artists

In this paper, we call a disproportionate exposure of songs from different artists, relative to what their potential listener pool could be, unfair recommendation. In other words, if songs from a certain artist could have been recommended to n users but are only recommended to m users (m < n), some kind of unfair treatment exists in the recommendation algorithm. We define three groups of artists based on their degree of popularity: 1) High-P (i.e., Mainstream), 2) Mid-P (i.e., Middle), and 3) Low-P (i.e., Niche). We used the method of [8] to find the cut-off points that split the artists into these groups. The numbers of artists falling within each group are 389, 7,292, and 345,124 (345K), respectively. This shows that the majority of artists have low popularity and only a few artists (roughly 0.1% of the artists) fall into the Mainstream category based on the number of times their songs are played by listeners (songs from these few artists account for more than 30% of the total listening counts). To measure the unfairness towards each group, we first define the Group Average Popularity (GAP) of each group, which indicates how popular the artists in a group are on average:

$GAP(g) = \frac{\sum_{a \in g} \phi(a)}{|g|}$   (1)

where φ(a) is the popularity of artist a, i.e., the number of times her songs are played by listeners. To measure unfairness, we then calculate the change in GAP for each group between its group average popularity in the training data and in the recommendations:

$\Delta GAP(g) = \frac{GAP(g)_r - GAP(g)_t}{GAP(g)_t}$   (2)

where subscripts r and t represent the recommendations and the training data, respectively.

Positive values of ΔGAP show over-promotion of songs from artists belonging to a certain group, while negative values indicate under-representation.
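To make Equations (1) and (2) concrete, the following minimal sketch computes GAP and ΔGAP for one artist group. The data structures (a popularity mapping, a group as a set of artist ids, and the flat list of artists appearing across all recommendation lists) are our own assumptions about how a pipeline might be organized, and GAP(g)_r here weights each artist by how often they appear in the recommendations, which is one plausible reading of Eq. (2).

def gap(artists, popularity):
    """Group Average Popularity (Eq. 1): mean popularity φ(a) over the artists."""
    artists = list(artists)
    return sum(popularity[a] for a in artists) / len(artists)

def delta_gap(group, recommended_artists, popularity):
    """Relative change in GAP between recommendations and training data (Eq. 2)."""
    gap_t = gap(group, popularity)  # GAP(g)_t, from the training data
    in_group = [a for a in recommended_artists if a in group]
    gap_r = gap(in_group, popularity)  # GAP(g)_r, from the recommendation lists
    return (gap_r - gap_t) / gap_t

# Hypothetical usage: a positive value indicates over-promotion of the group.
popularity = {"a1": 900, "a2": 850, "a3": 12, "a4": 5}
mainstream = {"a1", "a2"}
recs = ["a1", "a1", "a2", "a3"]  # artists appearing across all users' lists
print(delta_gap(mainstream, recs, popularity))  # ≈ 0.0095 (slight over-promotion)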

Figure 2 shows ΔGAP for several algorithms across the three groups of artists. The Mainstream group has the highest positive ΔGAP for all algorithms except Random, indicating over-promotion of songs from already popular artists. The Niche and Middle groups both have negative ΔGAP, showing that the algorithms suppress songs from these groups. This is indeed something that needs to be addressed, since the vast majority of artists fall within these two groups. We used the Random and most-popular (Most-pop) algorithms mainly for comparison, to see how the other algorithms perform relative to these two extremes: Random has the least bias and Most-pop the highest possible bias. The Random algorithm favors the Niche group but hurts the two more popular groups, which is expected since Random treats all groups equally while the real proportions of interactions for these groups in the data are not equal. On the other hand, the Most-pop algorithm, which recommends only the top songs to everybody, shows the maximum unfairness towards the Niche group and the highest bias in favor of the Mainstream group. The other algorithms fall somewhere in between, but they all discriminate against the Niche and Middle groups while over-promoting the Mainstream group.

Figure 2: ΔGAP values for the three groups of artists: Mainstream (High-P), Middle (Mid-P), and Niche (Low-P)

5 Conclusion and Future Work

In this paper, we argued that fairness in recommendation needs to be investigated from the perspective of all the stakeholders involved, as recommender systems are often multi-stakeholder environments [1]. We investigated the unfairness of popularity bias from the perspective of the item providers: the artists. We showed that the existing popularity bias in rating data can cause unfair exposure of songs from artists with different levels of popularity. Generally, in recommender systems, little attention is given to the provider side of the products (in this case, the artists). However, for multi-sided platforms such as Spotify (listeners vs. artists), AirBnB (travellers vs. hosts), and eBay (buyers vs. sellers) to sustain their businesses, it is crucial to study how these algorithms affect all of the stakeholders involved, in addition to their impact on users. For future work, we will extend our analysis to more datasets and more algorithms, and we will investigate other metrics to quantify unfairness against item providers.

References

  • [1] H. Abdollahpouri, G. Adomavicius, R. Burke, I. Guy, D. Jannach, T. Kamishima, J. Krasnodebski, and L. Pizzato (2020) Multistakeholder recommendation: survey and research directions. User Modeling and User-Adapted Interaction, pp. 1–32.
  • [2] H. Abdollahpouri, R. Burke, and B. Mobasher (2017) Controlling popularity bias in learning to rank recommendation. In Proceedings of the 11th ACM Conference on Recommender Systems, pp. 42–46.
  • [3] H. Abdollahpouri, R. Burke, and B. Mobasher (2019) Managing popularity bias in recommender systems with personalized re-ranking. In Florida AI Research Symposium (FLAIRS), to appear.
  • [4] H. Abdollahpouri, M. Mansoury, R. Burke, and B. Mobasher (2019) The unfairness of popularity bias in recommendation. In RecSys Workshop on Recommendation in Multi-Stakeholder Environments.
  • [5] G. Adomavicius and Y. Kwon (2012) Improving aggregate recommendation diversity using ranking-based techniques. IEEE Transactions on Knowledge and Data Engineering 24 (5), pp. 896–911.
  • [6] C. Anderson (2006) The long tail: why the future of business is selling less of more. Hyperion.
  • [7] E. Brynjolfsson, Y. J. Hu, and M. D. Smith (2006) From niches to riches: anatomy of the long tail. Sloan Management Review, pp. 67–71.
  • [8] Ò. Celma and P. Cano (2008) From hits to niches? Or how popular artists can bias music recommendation and discovery. In Proceedings of the 2nd KDD Workshop on Large-Scale Recommender Systems and the Netflix Prize Competition, pp. 1–8.
  • [9] D. Kowald, M. Schedl, and E. Lex (2019) The unfairness of popularity bias in music recommendation: a reproducibility study. arXiv preprint arXiv:1912.04696.
  • [10] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma (2014) Correcting popularity bias by enhancing recommendation neutrality. In Poster Proceedings of the 8th ACM Conference on Recommender Systems (RecSys 2014), Foster City, CA, USA.
  • [11] Y. Park and A. Tuzhilin (2008) The long tail of recommender systems and how to leverage it. In Proceedings of the 2008 ACM Conference on Recommender Systems, pp. 11–18.