Popularity bias in recommender systems refers to a disparity of item popularities in the recommendation lists. Most commonly, this means that a disproportionately higher number of popular items than less popular ones are recommended (Ekstrand et al., 2018). The existence of such a popularity bias has already been evidenced in different domains, e.g., movies (Abdollahpouri et al., 2019b), music (Kowald et al., 2020), or product reviews (Abdollahpouri et al., 2017). Collaborative filtering recommenders are particularly prone to popularity biases because the data they are trained on already exhibit an imbalance towards popular items, i.e., more user–item interactions are available for popular items than for less popular ones (Abdollahpouri et al., 2019a).
The distribution of item popularities in most domains, in particular in the music domain, which we target in this work, shows a long-tail characteristic (Celma, 2010). A recommendation algorithm introduces no further algorithmic bias when the distribution of popularity values of recommended items (tracks) exactly matches the distribution of popularity values of already consumed items (listening history) for each user.
We identify two shortcomings of existing studies of popularity bias: First, popularity bias is commonly quantified using simple statistical aggregation metrics, predominantly comparing arithmetic means computed on some count of the user–item interactions (Abdollahpouri et al., 2019b; Kowald et al., 2020). These metrics are not robust against the outliers often present in music listening data. Second, popularity bias is typically studied irrespective of user characteristics. Therefore, the extent to which users of different groups (e.g., age, gender, or cultural background) are affected remains unclear. We set out to approach these shortcomings in the music domain by posing the following research questions:
RQ1: Which novel insights into popularity bias can be obtained by quantifying algorithmic popularity bias based on the median, a variety of statistical moments, and similarity measures between popularity distributions?
RQ2: Do algorithmic popularity biases affect users of different genders in the same way?
We find that users of different genders are affected differently by algorithm-induced bias: the majority of the models expose female users to more biased results. Also, algorithms less inclined towards popularity bias amplification do not necessarily perform worse in terms of utility (NDCG). Finally, the studied metrics provide novel insights into popularity bias compared to using average differences alone.
2. Related Work
We focus on popularity bias, a well-studied form of bias in recommender systems research. This form of bias refers to the underrepresentation of less popular items in the produced recommendations and can lead to significantly worse recommendation quality for consumers of long-tail or niche items (Kowald et al., 2020; Lex et al., 2020; Abdollahpouri et al., 2019b; Jannach et al., 2015). Abdollahpouri et al. (2019b) show that state-of-the-art movie recommendation algorithms suffer from popularity bias, and introduce the delta-GAP metric to quantify the level of underrepresentation. As shown in Kowald et al. (2020), in particular users interested in niche, unpopular items suffer from worse recommendation quality. The authors apply the delta-GAP metric in the domain of music recommendation and find that it does not show a difference between “niche” and “mainstream” users. The reason for this could be that a group-based metric is not suitable for the complexity of music styles, as user groups can be quite diverse within themselves (Kowald et al., 2021). Zhu et al. (2020) address the related problem of item under-recommendation bias, expressing it with ranking-based statistical parity and ranking-based equal opportunity metrics. Boratto et al. (2021) propose metrics quantifying the degree to which a recommender treats items along the popularity tail equally.
In contrast to these works, we study differences between popularity distributions of consumed and recommended items for each user. We express them in terms of the median as well as several statistical moments and similarity measures. In addition, we combine research strands on popularity bias and gender bias by analyzing how female and male listeners are affected by popularity bias.
3. Measuring Popularity Bias
We introduce ways to express popularity bias as quantified dissimilarity between popularity distributions of recommended and consumed items for each user.
3.1. Track Popularity Distributions
We define the popularity of a track $t$ as the sum of its play counts over all users in the dataset, namely $pop(t) = \sum_{u \in U} PC(u, t)$, where $PC(u, t)$ denotes the play count of track $t$ by user $u$. We then use these popularity estimates to derive the popularity distribution over each user’s listening history and recommendation list. In order to make the popularity distribution over a user’s listening history $LH_u$ comparable to the respective distribution over the recommendation list $RL_u$, we consider only the top of the recommendation list so that its length (number of tracks) matches the length $|LH_u|$ of the user’s listening history. Therefore, we define the popularity distributions over the listening history and the recommendation list of user $u$ as $P^{LH}_u = \{pop(t) : t \in LH_u\}$ and $P^{RL}_u = \{pop(t) : t \in RL_u^{|LH_u|}\}$, where $RL_u^{|LH_u|}$ denotes the top $|LH_u|$ tracks of $RL_u$.
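As a concrete sketch of these definitions, the following Python snippet (a minimal illustration; the function and variable names are ours, not from the paper) computes track popularities as summed play counts and derives the two per-user popularity distributions, truncating the recommendation list to the history length:

```python
from collections import Counter

def track_popularity(interactions):
    """Popularity of a track: sum of its play counts over all users.

    `interactions` is an iterable of (user, track, play_count) triples
    (a hypothetical input format)."""
    pop = Counter()
    for _user, track, pc in interactions:
        pop[track] += pc
    return pop

def popularity_distributions(history, recommendations, pop):
    """Popularity values over a user's listening history and over the
    top-|history| part of their recommendation list."""
    top_rl = recommendations[:len(history)]  # truncate RL to match |LH|
    p_lh = [pop[t] for t in history]
    p_rl = [pop[t] for t in top_rl]
    return p_lh, p_rl
```

A `Counter` is used so that tracks absent from the interaction log simply get popularity 0.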
To gain a better understanding of these distributions, Figure 1(a) shows an example of the popularity distributions over a user’s listening history and the corresponding recommendation list produced by the SLIM recommender algorithm.
3.2.1. Delta Metrics of Popularity Bias
In order to measure the differences between these distributions, we first introduce a series of delta metrics to calculate the discrepancies between the listening-history and recommendation-list popularity distributions of each user, and then aggregate them to achieve per-system results. We study five percent-delta metrics $\Delta M$, where the metric $M$ is one of the following: $Mean$, $Median$, $Var$ (variance), $Skew$ (skewness), $Kurtosis$. If $M^{LH}_u$ and $M^{RL}_u$ are the results of applying the same metric to the two respective distributions, the respective $\Delta M$ for user $u$ is calculated as: $\Delta M_u = \frac{M^{RL}_u - M^{LH}_u}{M^{LH}_u} \cdot 100\%$
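The five percent-delta metrics can be sketched as follows (assuming NumPy/SciPy; the choice of scipy's sample `skew`/`kurtosis` estimators and the names are our illustration, since the paper does not specify the exact estimators; the delta is undefined when the history statistic is zero):

```python
import numpy as np
from scipy.stats import kurtosis, skew

# The five statistics of a popularity distribution.
STATS = {
    "mean": np.mean,
    "median": np.median,
    "var": np.var,
    "skew": skew,
    "kurtosis": kurtosis,
}

def percent_deltas(p_lh, p_rl):
    """Percent delta of each statistic M between the recommendation-list
    and listening-history popularity values:
    delta_M = (M_rl - M_lh) / |M_lh| * 100  (undefined if M_lh == 0)."""
    deltas = {}
    for name, stat in STATS.items():
        m_lh, m_rl = stat(p_lh), stat(p_rl)
        # abs() in the denominator preserves the sign of the change
        deltas[name] = (m_rl - m_lh) / abs(m_lh) * 100.0
    return deltas
```

Per-system values would then be obtained by taking the median of these per-user deltas, as described below.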
Positive $\Delta Mean$ and $\Delta Median$ indicate that overall more popular tracks are recommended to the user. Since the mean is sensitive to outliers, the interplay between these two metrics provides additional information about the changes in popularity. A positive $\Delta Var$ means that the list of recommended items is more diverse in terms of different popularity values than the user’s history. This can also mean an increase in bias towards more popular items, as the most popular items are sparsely distributed across the popularity range. A positive $\Delta Skew$ denotes that the right tail of the recommendation-list distribution is heavier (with respect to the left tail) than that of the user-history distribution; a positive value therefore means that more items tend to have lower popularity within the range of the distribution. Finally, a positive $\Delta Kurtosis$ shows that the tails of the recommended tracks’ popularity distribution are heavier than those of its counterpart, and the distribution itself is, in a way, closer to a uniform distribution.
The discussed metrics describe the difference between the distributions for a particular user. In order to represent the change across all users, we take the median of the per-user values.
3.2.2. Kullback–Leibler Divergence and Kendall’s $\tau$ as Measures for Popularity Bias
In order to compare the entire popularity distributions, we utilize Kullback–Leibler divergence ($KL$) and Kendall’s $\tau$. For each user, we apply these metrics to the corresponding listening-history and recommendation-list distributions, decile-binned with respect to the popularity distribution over the whole collection. The bins are chosen in such a way that the cumulative popularity of all tracks of the collection belonging to one bin constitutes approximately 10% of the total popularity of all tracks of the whole collection. Figure 1(b) shows the distributions from Figure 1(a) binned this way. In our dataset, the bin corresponding to the most popular tracks contains only a small number of items, whose popularity ranges from about 7k to 47k total play counts. Each bin covers items that are roughly half as popular as those in the next decile bin and twice as popular as those in the previous decile bin. Such binning allows the two metrics to be less sensitive to minor differences between the distributions and to concentrate on the shifts between different popularity categories.
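A possible implementation of this cumulative-popularity binning is sketched below (our own illustration; the exact edge-handling details are assumptions, as the paper does not specify them):

```python
import numpy as np

def decile_bin_edges(track_pops, n_bins=10):
    """Popularity thresholds splitting the (sorted) tracks into bins such
    that each bin holds roughly 1/n_bins of the total cumulative popularity."""
    pops = np.sort(np.asarray(track_pops))
    cum = np.cumsum(pops) / pops.sum()
    # popularity value at which each cumulative-share threshold is crossed
    return [pops[np.searchsorted(cum, q)]
            for q in np.linspace(1.0 / n_bins, 1.0 - 1.0 / n_bins, n_bins - 1)]

def bin_counts(pop_values, edges):
    """Histogram of a user's popularity values over the global bins."""
    idx = np.searchsorted(edges, pop_values, side="right")
    return np.bincount(idx, minlength=len(edges) + 1)
```

Because bins are defined by cumulative popularity rather than item counts, the top bin can contain very few, extremely popular tracks, matching the observation in the text.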
$KL$ divergence estimates the dissimilarity of two distributions, in our case, between the user’s listening-history and recommendation-list popularity distributions. It is defined as $KL(P^{LH} \,\|\, P^{RL}) = \sum_{i=1}^{10} P^{LH}(i) \log \frac{P^{LH}(i)}{P^{RL}(i)}$, where $P^{LH}$ and $P^{RL}$ are decile-binned and normalized versions of the distributions and $i$ indexes the ten bins. $KL$ compares the two distributions and increases with every mismatch in the item counts. It is particularly sensitive to the case when, for a bin, the user gets recommended fewer tracks than they have in their listening history.
While $KL$ divergence is sensitive to actual count changes, Kendall’s $\tau$ reflects whether the order of the bins is the same for the two distributions when ranked according to the respective counts. Kendall’s $\tau$ is calculated as $\tau = \frac{C - D}{C + D}$, where $C$ represents the number of pairs of bins that have the same respective ranking in both distributions (concordant pairs) and $D$ the number of pairs of bins that have a different respective ranking in the two distributions (discordant pairs). For example, looking at Figure 1(b), the first two bins form a concordant pair, as in both cases more items fall into the second bin, while the first and the last bins form a discordant pair: in the listening-history distribution, the first bin has more items, whereas the recommended distribution shows the opposite. This way, $\tau$ shows whether there are common patterns (correlations) in the two distributions, and it reaches its maximum value of 1 when the two distributions are identical from the bin-ranking point of view. Similar to the $\Delta$ metrics, we use the median of the per-user values to measure the differences across all users for $KL$ and $\tau$.
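Both distribution-level measures can be sketched on the binned counts as follows (a minimal illustration; SciPy's `kendalltau` computes the tie-corrected tau-b variant rather than the plain (C − D)/(C + D), and the eps smoothing for empty bins is our assumption):

```python
import numpy as np
from scipy.stats import kendalltau

def kl_divergence(lh_counts, rl_counts, eps=1e-10):
    """KL(LH || RL) over decile-binned, normalized popularity histograms.
    eps smoothing avoids log(0) and division by zero for empty bins."""
    p = np.asarray(lh_counts, dtype=float) + eps
    q = np.asarray(rl_counts, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def bin_rank_tau(lh_counts, rl_counts):
    """Kendall's tau over the per-bin counts of the two histograms."""
    tau, _pvalue = kendalltau(lh_counts, rl_counts)
    return tau
```

Identical histograms yield a KL divergence of (approximately) 0 and a tau of 1, matching the interpretations given above.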
Statistics of the dataset. Numbers of users, tracks, and listening events (LEs) are reported for F(emale) and M(ale) users separately as well as together (All). Mean and standard deviation (the latter indicated after the mean) of users’ interactions in terms of tracks and LEs are given in the last three columns, respectively.
4. Experiment Setup
4.1. Recommendation Algorithms
Random Item (RAND): A baseline algorithm that recommends random items to each user while avoiding items the user has already consumed.
Most Popular Items (POP): A heuristic baseline that recommends the same set of overall most popular items to every user.
Item k-Nearest Neighbors (ItemKNN) (Deshpande and Karypis, 2004): A neighborhood-based algorithm that recommends items based on item-to-item similarity. Specifically, an item is recommended to a user if the item is similar to the items previously selected by the user. ItemKNN uses statistical measures to compute the item-to-item similarities.
Sparse Linear Method (SLIM) (Ning and Karypis, 2011): Also a neighborhood-based algorithm, but instead of using predefined similarity metrics, the item-to-item similarity is learned directly from the data with a regression model.
Alternating Least Squares (ALS) (Hu et al., 2008): A matrix factorization approach that learns user and item embeddings such that the dot product of these two approximates the original user-item interaction matrix.
Matrix factorization with Bayesian Personalized Ranking (BPR) (Rendle et al., 2012): Learns user and item embeddings, but with an optimization criterion that aims to rank the items consumed by a user according to their preferences (hence, personalized ranking) instead of predicting the rating for a specific user–item pair.
Variational Autoencoder (VAE) (Liang et al., 2018): An autoencoder-based model that learns a compressed latent representation of the user–item interaction matrix and reconstructs interaction probabilities from it, trained with a variational objective.
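To illustrate the neighborhood-based idea behind ItemKNN, here is a minimal scoring sketch (cosine item-item similarity on a binary interaction matrix; this is our simplified illustration, not the evaluated implementation):

```python
import numpy as np

def itemknn_scores(X, user_vec, k=2):
    """Minimal ItemKNN-style scoring sketch.

    X: binary user-item interaction matrix (n_users x n_items);
    user_vec: binary history vector of the target user;
    k: number of nearest item neighbours to keep per item."""
    # cosine similarity between item columns
    norms = np.linalg.norm(X, axis=0) + 1e-12
    sim = (X.T @ X) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0.0)            # ignore self-similarity
    # keep only the k most similar neighbours per item
    for j in range(sim.shape[1]):
        weakest = np.argsort(sim[:, j])[:-k]
        sim[weakest, j] = 0.0
    scores = user_vec @ sim               # aggregate similarity to history
    scores[user_vec > 0] = -np.inf        # never re-recommend consumed items
    return scores
```

Items most similar to the user's history receive the highest scores; the top of the resulting ranking forms the recommendation list.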
For training the models, we use the same hyperparameter settings as provided by Melchiorre et al. (2021).
Note for Table 2: a $\Delta Var$ value denotes the median increase in popularity variance (between listening history and recommendation list) over all users; the values reported for each gender group express that group’s deviation from the All value.
4.2. Dataset and Evaluation Protocol
We perform experiments on LFM-2b-DemoBias (Melchiorre et al., 2021), a subset of the LFM-2b dataset (http://www.cp.jku.at/datasets/LFM-2b). As in (Melchiorre et al., 2021), we only consider user-track interactions with a play count (PC) > 1, thereby avoiding spurious interactions likely introduced by noise. Furthermore, we only consider tracks listened to by at least 5 different users and, likewise, only users who listened to at least 5 different tracks. Moreover, we only consider listening events within the last 5 years, letting us focus on possible popularity biases in recent years. Lastly, we consider binary user-track interactions, i.e., 1 if the user has listened to the track at least once, 0 otherwise.
The procedure described above results in a subset of 23k users and 1.6 million items. We finalize data preparation by sampling 100k tracks uniformly at random, which ensures that tracks of different popularity levels are equally likely to be included in the final dataset. The statistics of the final dataset are reported in Table 1. We find that males represent the majority group in the dataset and that they account for the majority of all listening events.
As evaluation protocol, we employ a user-based split strategy (Liang et al., 2018; Marlin, 2004), i.e., we split the 19,972 users in the dataset into train, validation, and test user groups via a 60-20-20 ratio split. We carry out 5-fold cross validation and change these user groups in a round-robin fashion. The users in the training set and all their interactions are used to train the recommendation algorithms. For testing and validation, we follow standard setups (Liang et al., 2018; Steck, 2019)
and randomly sample 80% of the users’ items as input for the recommendation models and use the remaining 20% to calculate the evaluation metric.
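The split protocol can be sketched as follows (our illustration of the described 60-20-20 user split and the 80/20 input/held-out split per evaluation user; seeding and shuffling details are assumptions):

```python
import numpy as np

def user_based_split(user_ids, seed=0):
    """60-20-20 user-based split into train/validation/test user groups."""
    rng = np.random.default_rng(seed)
    users = rng.permutation(user_ids)
    n = len(users)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return (users[:n_train],
            users[n_train:n_train + n_val],
            users[n_train + n_val:])

def fold_in_split(items, seed=0):
    """For a validation/test user: 80% of their interactions serve as input
    to the recommendation model, 20% are held out for the evaluation metric."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(items)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```

Under 5-fold cross-validation, the three user groups would then be rotated in a round-robin fashion, as described above.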
5. Results and Discussion
The results are shown in Table 2. For the popularity bias metrics, each value in the All rows shows the median value of the distribution of a given metric over all users. For instance, a $\Delta Var$ of 72.6% for ALS denotes that the median increase in popularity variance between each user’s listening history and the items recommended to them is 72.6% across all users. A $KL$ value of 1.66 for SLIM expresses that the median difference between user-history popularity distributions and the corresponding recommended-tracks popularity distributions is 1.66 in terms of $KL$ divergence. The results reported for the gender groups indicate the changes in values with respect to the All values.
Both baseline algorithms (RAND and POP) show poor results on the accuracy metrics. Notably, on the popularity bias metrics, they show divergent behavior. Decreasing $\Delta Mean$, $\Delta Median$, and $\Delta Skew$, together with increasing $\Delta Var$ and $\Delta Kurtosis$, indicate that RAND provides a list of tracks whose popularity distribution is closer to uniform than the distributions of users’ listening histories. POP
has an opposite trend, as the recommended tracks’ popularity distribution has a more pronounced peak and is skewed and shifted towards more popular items. It also shows a substantial median increase of variance in popularity, which can be explained by the fact that, in our dataset, the most popular tracks are sparsely distributed across a wide range of popularity values (tracks in the popularity range between 7k and 47k total play counts). Thus, recommending tracks from this category leads to a high variance. High $KL$ values for both baselines also indicate that the overall popularity distributions of the recommended items are highly different from those of the users’ listening histories. The random recommender demonstrates a higher median Kendall’s $\tau$, which means that its output correlates better with users’ histories in terms of popularity distribution. Both neighborhood-based models (i.e., ItemKNN and SLIM) show a high performance in terms of NDCG and a moderate popularity bias in their recommendations according to the metrics, lower compared to VAE and ALS. In particular, SLIM shows higher $\Delta Mean$ and $\Delta Median$ values compared to ItemKNN, suggesting that the item-to-item similarities learned by SLIM favor more popular items in the recommendations. ItemKNN displays a lower $KL$ divergence and a higher Kendall’s $\tau$ than SLIM, which means that its results better approximate users’ listening histories (we attribute this to ItemKNN being less sensitive to bias in the data, as it has no trainable parameters). These observations indicate that a decrease in popularity bias does not necessarily lead to a significant performance drop. Comparing ALS with BPR, we observe the opposite behavior: while providing less biased results, BPR shows the poorest performance among all non-baseline algorithms. While VAE is, in terms of all metrics, similarly biased as POP, it achieves a higher performance according to NDCG.
Comparing the metrics between the two gender groups, we note that $\Delta Mean$ and $\Delta Median$ are higher for female users. That means that their recommendations contain more popular items and/or items of higher popularity than the ones they usually listen to, and for this user group, that effect is more pronounced (hence the larger values). Considering that $\Delta Var$ is lower for female users, we conclude that their recommendations are less diverse in terms of track popularity while consisting of more popular items. Judging by $KL$ divergence as well as Kendall’s $\tau$, we can suggest that most recommender algorithms provide recommendations with comparable popularity distributions to both male and female users. At the same time, a slightly larger $KL$ divergence may mean a larger shift towards popular items for female users. ItemKNN is the least biased algorithm in our study. It features low absolute values of $\Delta Mean$, $\Delta Median$, and $\Delta Var$, meaning that its recommendations consist of tracks comparable to the user’s listening history in terms of average popularity and variety. A high Kendall’s $\tau$ means that the shape of the popularity distribution of its recommendations matches the user’s history best among all tested algorithms. Still, it is slightly biased towards more popular items, as shown by negative $\Delta Skew$ and $\Delta Kurtosis$ (which, combined with a high Kendall’s $\tau$, signals a shift of the distribution).
6. Conclusions and Future Direction
In this paper, we examine to what extent various music recommender systems amplify item popularity bias. We study seven metrics of popularity bias deviation and analyze the results of seven recommender algorithms for users of different genders and for the overall population in the dataset. Addressing RQ1, we observe that the studied metrics capture considerably different aspects of the difference between popularity distributions of consumed and recommended items. While $\Delta Mean$ and $\Delta Median$ tell us about overall trends (are recommended tracks more or less popular than consumed ones?), $\Delta Var$ expresses the change in diversity between listening histories and recommendation lists, and $\Delta Skew$ and $\Delta Kurtosis$ hint at the difference in shape between the two distributions. Finally, $KL$ divergence and Kendall’s $\tau$ allow insight into how well the distributions match on a more granular level. With regard to RQ2, we found that while the investigated algorithms display various levels of popularity bias, the majority of them (VAE, ItemKNN, BPR, ALS
) expose female users to more popularity-biased results. In the future, we will approach mitigating model-imposed popularity bias, e.g., through adversarial training or by incorporating bias into the loss function of the recommenders, as well as finding more expressive metrics describing differences in the popularity distributions. Additionally, we plan to split our users into groups according to mainstreaminess, as in (Kowald et al., 2020), to compare our metrics with the group-based delta-GAP metric used in that work.
Acknowledgements. This work was funded by the H2020 project AI4EU (GA: 825619), the Austrian Science Fund (FWF): P33526, and the FFG COMET program.
References

- Abdollahpouri, H., Burke, R., and Mobasher, B. (2017). Controlling popularity bias in learning-to-rank recommendation. In Proceedings of the Eleventh ACM Conference on Recommender Systems (RecSys '17), pp. 42–46.
- Abdollahpouri, H., Burke, R., and Mobasher, B. (2019a). Managing popularity bias in recommender systems with personalized re-ranking. In The Thirty-Second International FLAIRS Conference.
- Abdollahpouri, H., Mansoury, M., Burke, R., and Mobasher, B. (2019b). The unfairness of popularity bias in recommendation. In Proceedings of the Workshop on Recommendation in Multi-stakeholder Environments, co-located with the 13th ACM Conference on Recommender Systems (RecSys 2019), CEUR Workshop Proceedings, Vol. 2440.
- Boratto, L., Fenu, G., and Marras, M. (2021). Connecting user and item perspectives in popularity debiasing for collaborative recommendation. Information Processing & Management 58(1), 102387.
- Celma, Ò. (2010). Music Recommendation and Discovery: The Long Tail, Long Fail, and Long Play in the Digital Music Space. Springer.
- Ferrari Dacrema, M., Cremonesi, P., and Jannach, D. (2019). Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In Proceedings of the 13th ACM Conference on Recommender Systems, pp. 101–109.
- Deshpande, M. and Karypis, G. (2004). Item-based top-N recommendation algorithms. ACM Transactions on Information Systems (TOIS) 22(1), 143–177.
- Ekstrand, M. D., et al. (2018). All the cool kids, how do they fit in? Popularity and demographic biases in recommender evaluation and effectiveness. In Conference on Fairness, Accountability and Transparency, pp. 172–186.
- Hu, Y., Koren, Y., and Volinsky, C. (2008). Collaborative filtering for implicit feedback datasets. In 2008 Eighth IEEE International Conference on Data Mining, pp. 263–272.
- Jannach, D., et al. (2015). Beyond “hitting the hits”: Generating coherent music playlist continuations with the right tracks. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys '15), pp. 187–194.
- Kowald, D., et al. (2021). Support the underground: Characteristics of beyond-mainstream music listeners. EPJ Data Science 10(1), 1–26.
- Kowald, D., Schedl, M., and Lex, E. (2020). The unfairness of popularity bias in music recommendation: A reproducibility study. In Advances in Information Retrieval: 42nd European Conference on IR Research (ECIR 2020), Lecture Notes in Computer Science, Vol. 12036, pp. 35–42.
- Lex, E., Kowald, D., and Schedl, M. (2020). Modeling popularity and temporal drift of music genre preferences. Transactions of the International Society for Music Information Retrieval 3(1).
- Liang, D., Krishnan, R. G., Hoffman, M. D., and Jebara, T. (2018). Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pp. 689–698.
- Marlin, B. (2004). Collaborative Filtering: A Machine Learning Perspective. University of Toronto.
- Melchiorre, A. B., et al. (2021). Investigating gender fairness of recommendation algorithms in the music domain. Information Processing & Management 58(5), 102666.
- Ning, X. and Karypis, G. (2011). SLIM: Sparse linear methods for top-N recommender systems. In 2011 IEEE 11th International Conference on Data Mining, pp. 497–506.
- Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. (2012). BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618.
- Steck, H. (2019). Embarrassingly shallow autoencoders for sparse data. In The World Wide Web Conference (WWW 2019), pp. 3251–3257.
- Zhu, Z., Wang, J., and Caverlee, J. (2020). Measuring and mitigating item under-recommendation bias in personalized ranking systems. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 449–458.