Bias Disparity in Collaborative Recommendation: Algorithmic Evaluation and Comparison

08/02/2019 · Masoud Mansoury et al. · University of Colorado Boulder, DePaul University, TU Eindhoven

Research on fairness in machine learning has been recently extended to recommender systems. One of the factors that may impact fairness is bias disparity, the degree to which a group's preferences on various item categories fail to be reflected in the recommendations they receive. In some cases biases in the original data may be amplified or reversed by the underlying recommendation algorithm. In this paper, we explore how different recommendation algorithms reflect the tradeoff between ranking quality and bias disparity. Our experiments include neighborhood-based, model-based, and trust-aware recommendation algorithms.


1. Introduction

Recommender systems are powerful tools for extracting users' preferences and suggesting desired items. These systems, while accurate, may suffer from a lack of fairness toward specific groups of users. Research in fairness-aware recommender systems has shown that the outputs of recommendation algorithms are, in some cases, biased against protected groups ekstrand2018. Such discrimination among users degrades users' satisfaction and loyalty and the effectiveness of recommender systems, and at worst it can lead to or perpetuate undesirable social dynamics.

Discrimination in recommendation output can originate from different sources. It may stem from underlying biases in the input data used for training virginia2018; burke2017. Alternatively, the discriminative behavior may be the result of the recommendation algorithms themselves kamishima2011; zembel2013; yao2017.

In this paper, we examine the effectiveness of recommendation algorithms in capturing different groups' interests across item categories. We compare recommendation algorithms in terms of how well they capture the categorical preferences of users and reflect them in the recommendations delivered.

It is important to note that, although we do not directly measure the fairness of recommendation algorithms in this paper, we study the bias disparity of recommendation algorithms as an important factor that affects fairness. The benefit of studying bias disparity in recommender systems is that, depending on the domain, knowing which algorithms produce more or less disparity from users' stated preferences allows system designers to better control the recommendation output. In our analysis of bias disparity, we also take into account item coverage in the recommended lists. Higher item coverage signifies that the majority of item providers in the system have an equal chance to be shown to users.

Our analysis includes a variety of recommendation algorithms: neighborhood models, factorization models, and trust-aware recommendation algorithms. In particular, we investigate the performance of trust-aware recommendation algorithms. In these algorithms, besides item ratings, explicit trust ratings are used as side information to enhance the quality of the input to the recommender. Using explicit trust ratings has been shown to provide two advantages Massa:2007a. First, since trust ratings can be propagated, they help overcome the cold-start issue in recommender systems. Second, trust-aware methods are robust against shilling attacks Lam:2004a. In this paper, we also analyze how well these algorithms address bias disparity.

The motivation behind this research is to analyze how recommendation algorithms deviate from the preferences of specific groups of users (e.g., male vs. female) across item categories. Given protected and unprotected groups, we aim to compare the ability of recommendation algorithms to generate recommendations equally well for each group based on their preferences in the training data. Therefore, no matter what the context of the dataset is, given protected/unprotected groups and item categories, we are interested in comparing recommendation algorithms on their ability to recommend preferred item categories to these groups of users.

For experiments, we prepared a sample of the publicly available Yelp dataset for research on fairness-aware recommender systems. Our experiments are performed on multiple recommendation algorithms, and the results are evaluated in terms of bias disparity and average disparity along with ranking quality and item coverage.

2. Background

The problem of unfair outputs in machine learning applications is well studied kamiran2010; dwork2012; bozdag:2013, and it has also been extended to recommender systems. Various studies have considered fairness in recommendation results burke2017.

One research direction in fairness-aware recommender systems is providing fair recommendations for consumers. Burke et al. burke2017 have shown that adding a balanced neighborhood regularization to the SLIM algorithm can improve the equity of recommendations for protected and unprotected groups. Based on their definition of protected and unprotected groups, their solution takes into account the group fairness of recommendation outputs. Analogously, Yao and Huang yao2017 improved the equity of recommendation results by adding fairness terms to the objective function of model-based recommendation algorithms. They proposed four fairness metrics that capture the degree of unfairness in recommendation outputs and added these metrics to the learning objective to optimize for fair results.

Zhu et al. zhu2018 proposed a fairness-aware tensor-based recommender system to improve the equity of recommendations while maintaining recommendation quality. The idea in their paper is to isolate sensitive information from the latent factor matrices of the tensor model and then use this information to generate fairness-aware recommendations.

Besides consumer fairness, provider fairness is another research direction in fairness-aware recommender systems. Provider fairness refers to items belonging to each provider having an equal chance to be shown in the recommended lists. Lack of provider fairness is closely tied to popularity bias and is usually measured by item coverage.

Abdollahpouri et al. himan2017 addressed popularity bias in learning-to-rank algorithms by including a fairness-aware regularization term in the objective function. They showed that this regularization term controls the extent to which recommendations skew toward popular items.

Jannach et al. Jannach2015 conducted a comprehensive set of analyses of the popularity bias of several recommendation algorithms. They analyzed the items recommended by different algorithms in terms of their average ratings and their popularity. While the outcome depends heavily on the characteristics of the datasets, they found that some algorithms (e.g., SlopeOne, KNN techniques, and the ALS variant of factorization models) focus mostly on high-rated items, which biases them toward a small set of items (low coverage). They also found that some algorithms (e.g., ALS variants of factorization models) tend to recommend popular items, while others (e.g., UserKNN and SlopeOne) tend to recommend less popular items.

Multi-stakeholder recommender systems simultaneously take into account the fairness of all stakeholders or entities in a multi-sided platform. The main goal of multi-stakeholder recommendations is maximizing the fairness of all stakeholders. Consumers and providers are the major stakeholders in most multi-sided platforms burke2016; himan2019.

Surer et al. surer2018 proposed a multi-stakeholder optimization model that works as a post-processing approach for standard recommendation algorithms. In this model, a set of constraints for providers is considered when generating recommendation lists for end users. Also, Liu and Burke liu2018 proposed a fairness-aware re-ranking approach that iteratively balances ranking quality and provider fairness. In this post-processing approach, users' tolerance for list diversity is also considered to find a trade-off between accuracy and provider fairness.

3. Fairness Metrics

In this paper, we compare the performance of state-of-the-art recommendation algorithms in terms of bias disparity in recommended lists. We also consider ranking quality and item coverage of recommendation algorithms as two important additional metrics.

We use two metrics to measure changes in bias for groups of users given item categories: bias disparity and average disparity.

Bias disparity measures how much an individual's recommendation list deviates from his or her original preferences in the training set virginia2018. Given a group of users G and an item category C, bias disparity is defined as follows:

BD(G,C) = \frac{B_R(G,C) - B_T(G,C)}{B_T(G,C)}    (1)

where B_T(G,C) (B_R(G,C)) is the bias value of group G on category C in the training data (recommendation lists). B_T(G,C) is defined by:

B_T(G,C) = \frac{PR_T(G,C)}{P(C)}    (2)

where P(C) is the fraction of item category C in the dataset, defined as P(C) = |C| / |I|, and PR_T(G,C) is the preference ratio of group G on category C, calculated as:

PR_T(G,C) = \frac{\sum_{u \in G} \sum_{i \in C} T(u,i)}{\sum_{u \in G} \sum_{i \in I} T(u,i)}    (3)

where T is the binarized user-item matrix: if user u has rated item i, then T(u,i) = 1, otherwise T(u,i) = 0.

The bias value of group G on category C in the recommendation lists, B_R(G,C), is defined analogously over the recommended items.
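To make these definitions concrete, the following is a minimal Python sketch of Equations 1-3 over a binarized user-item matrix. The function and variable names (preference_ratio, bias, bias_disparity) are ours for illustration, not from the paper's implementation.

```python
import numpy as np

def preference_ratio(T, group, category):
    """PR(G, C): fraction of group G's interactions that fall in category C (Equation 3).

    T        -- binarized user-item matrix (n_users x n_items), T[u, i] = 1 if u rated i
    group    -- indices of the users in group G
    category -- indices of the items in category C
    """
    group_rows = T[group, :]
    return group_rows[:, category].sum() / group_rows.sum()

def bias(T, group, category):
    """B(G, C) = PR(G, C) / P(C), where P(C) = |C| / |I| (Equation 2)."""
    p_c = len(category) / T.shape[1]
    return preference_ratio(T, group, category) / p_c

def bias_disparity(T_train, R, group, category):
    """BD(G, C) = (B_R(G, C) - B_T(G, C)) / B_T(G, C) (Equation 1).

    R is a binarized matrix of the same shape as T_train, with R[u, i] = 1
    if item i appears in user u's recommendation list.
    """
    b_t = bias(T_train, group, category)
    b_r = bias(R, group, category)
    return (b_r - b_t) / b_t
```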

On the other hand, average disparity measures how much the preference disparity between training data and recommendation lists for one group of users (e.g., the unprotected group) differs from that for another group of users (e.g., the protected group). Inspired by the value unfairness metric proposed by Yao and Huang yao2017, we introduce average disparity as:

AD(G_1, G_2) = \frac{1}{|\mathcal{C}|} \sum_{c \in \mathcal{C}} \left| \frac{R_c(G_1)}{T_c(G_1)} - \frac{R_c(G_2)}{T_c(G_2)} \right|    (4)

where G_1 and G_2 are the unprotected and protected groups, respectively, \mathcal{C} is the set of item categories, and R_c(G) and T_c(G) return the number of items from category c, rated by users in group G, in the recommendation lists and in the training data, respectively.
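A sketch of Equation 4 under the same matrix representation as above; averaging over all categories follows our reading of the description, and the helper names are illustrative.

```python
def average_disparity(T_train, R, g1, g2, categories):
    """Average, over all categories, of the absolute difference between the
    recommendation/training interaction ratios of the two groups (Equation 4).

    g1, g2     -- indices of the unprotected and protected users, respectively
    categories -- list of item-index arrays, one per category
    """
    def ratio(group, category):
        r_c = R[group, :][:, category].sum()        # category-c items recommended to the group
        t_c = T_train[group, :][:, category].sum()  # category-c items rated by the group in training
        return r_c / t_c

    disparities = [abs(ratio(g1, c) - ratio(g2, c)) for c in categories]
    return sum(disparities) / len(disparities)
```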

As part of our analysis, we also measure the item coverage of the recommended lists, which is an important consideration in provider-side fairness. Given the whole set of items in the system, I, and the set of all recommendation lists delivered to users, L, item coverage measures the percentage of items in the system that appear in the recommendation lists and can be calculated as:

Coverage = \frac{|\{i \in I : i \in L\}|}{|I|} \times 100\%    (5)
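Item coverage (Equation 5) reduces to counting the distinct recommended items; a minimal sketch:

```python
def item_coverage(rec_lists, n_items):
    """Percentage of catalog items that appear in at least one recommendation list.

    rec_lists -- iterable of per-user lists of recommended item indices
    n_items   -- total number of items in the system, |I|
    """
    recommended = {item for rec in rec_lists for item in rec}
    return 100.0 * len(recommended) / n_items
```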

4. Experiments

4.1. Experimental setup

To compare the effects of recommendation algorithms on bias and on item coverage, we performed an extensive set of experiments with state-of-the-art recommendation algorithms. Experiments are performed on model-based, neighborhood-based, and trust-aware recommendation algorithms.

Our experiments on neighborhood-based recommendation algorithms include user-based collaborative filtering (UserKNN) Resnick:1994a and item-based collaborative filtering (ItemKNN) sarwar2001. Our experiments on model-based recommendation algorithms include biased matrix factorization (BiasedMF) Koren:2009a, the combined explicit and implicit model (SVD++) Koren:2008a, list-wise matrix factorization (ListRankMF) shi2010, and the sparse linear method (SLIM) Ning2011. Finally, our experiments on trust-aware recommendation algorithms include the trust-aware neighborhood model (TrustKNN) Massa:2007a, trust-based singular value decomposition (TrustSVD) Guibing2015, the social regularization-based method (SoReg) ma2011, trust-based matrix factorization (TrustMF) Bo:2017, and social matrix factorization (SocialMF) jamali2010. Besides these well-known recommendation algorithms, we also performed experiments on two naive algorithms: random and most popular.

For sensitivity analysis, we performed extensive experiments with different parameter configurations for each algorithm. Table 1 shows the parameter configurations we used in our experiments; a sketch of how such a grid can be enumerated follows the table.

parameter values
#neighbors {10,20,30,40,50,70,100,200}
shrinkage {10,30,50,100,200}
similarity {pcc,cos}
user regularization {0.0001,0.001,0.005,0.01}
item regularization {0.0001,0.001,0.005,0.01}
bias regularization {0.0001,0.001,0.005,0.01}
implicit regularization {0.0001,0.001,0.005,0.01}
learning rate {0.0001,0.001,0.005,0.01}
#iterations {10,30,50,100}
#factors {10,30,50,100,150,200,300}
ℓ1-norm {0.005,0.05,0.5,2,5}
ℓ2-norm {0.005,0.05,0.5,2,5}
Table 1. Parameter configuration
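As a rough illustration of the sensitivity analysis, the grid in Table 1 can be enumerated per algorithm as a Cartesian product of the parameters that algorithm actually uses. The dictionary below is an illustrative subset, not the exact grid passed to librec-auto.

```python
from itertools import product

# Illustrative subset of Table 1 for a factorization model (hypothetical, not the exact grid used).
grid = {
    "learning_rate": [0.0001, 0.001, 0.005, 0.01],
    "num_factors": [10, 30, 50, 100, 150, 200, 300],
    "user_reg": [0.0001, 0.001, 0.005, 0.01],
    "num_iterations": [10, 30, 50, 100],
}

# Enumerate every combination of the listed values.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs), "configurations for this algorithm")
```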

We performed 5-fold cross-validation and, in the test condition, generated recommendation lists of size 10 for each user. We then evaluated nDCG, item coverage, bias disparity, and average disparity at list size 10. Results were averaged over all users and then over all folds. We used librec-auto and LibRec 2.0 for all experiments mansoury2018automating; Guo2015.
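For reference, the following is a hedged sketch of an nDCG@10 computation with binary relevance against the held-out test items; LibRec's internal implementation may differ in detail (e.g., graded relevance).

```python
import math

def ndcg_at_10(recommended, relevant):
    """nDCG@10 with binary relevance.

    recommended -- ranked list of item ids for one user (top 10)
    relevant    -- set of held-out test items the user actually rated
    """
    # Discounted gain for each recommended item that appears in the test set.
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(recommended[:10]) if item in relevant)
    # Ideal DCG: all relevant items ranked at the top, capped at list size 10.
    ideal_hits = min(len(relevant), 10)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```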

Figure 1. Bias disparity for model-based recommendation algorithms; panels: (a) male, (b) female. The x-axis shows the top 10 most preferred categories for males and females in the training data, and the y-axis shows the bias value computed by Equation 2. The numbers on each bar show the bias disparity computed by Equation 1; numbers in bold mark the lowest bias disparity for each category.

Figure 2. Bias disparity for memory-based recommendation algorithms; panels: (a) male, (b) female. The x-axis shows the top 10 most preferred categories for males and females in the training data, and the y-axis shows the bias value computed by Equation 2. The numbers on each bar show the bias disparity computed by Equation 1; numbers in bold mark the lowest bias disparity for each category.

Figure 3. Comparison of recommendation algorithms by ranking quality and average disparity (a) and by ranking quality and item coverage (b).

4.2. Yelp dataset

For our experiments, we use a subset of the Yelp dataset from round 12 of the Yelp Challenge (https://www.yelp.com/dataset). In this sample, each user has rated at least 40 businesses and each business is rated by at least 40 users. This leaves 1,355 users who provided 100,409 ratings on 1,272 businesses. Ratings range from 1 (not preferred) to 5 (preferred). The density of the rating matrix is 5.826%.

This Yelp dataset also contains information about user friendships: each user has selected a set of other users as friends. We interpret these relationships as a trust network: when user u selects user v as a friend, it means that user u trusts user v with respect to the corresponding domain or category. In this dataset, 919 users have expressed trust toward 1,172 users, and there are 26,453 trust relationships between users. With regard to these user counts, the density of the trust matrix is 2.456%.
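For clarity, both density figures follow directly from the counts above, assuming density is the number of observed interactions divided by the number of possible pairs:

\text{rating density} = \frac{100{,}409}{1{,}355 \times 1{,}272} \approx 5.826\%, \qquad \text{trust density} = \frac{26{,}453}{919 \times 1{,}172} \approx 2.456\%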

To evaluate the recommendation outputs in terms of bias disparity and average disparity, specific information about users and items is needed. First, we need to define user groups based on demographic information and item categories based on item content. The Yelp dataset contains no user attribute suitable for defining user groups. To overcome this issue, we prepared the dataset by inferring users' gender from their names using an existing online tool (https://gender-api.com). Given a user name as input, the tool returns the predicted gender, the number of samples used for the prediction, and the prediction accuracy. This enables us to increase the reliability of the extracted genders by keeping only outputs with high accuracy and a sufficient number of samples.
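A hypothetical sketch of this filtering step is shown below. The endpoint and the response field names (gender, samples, accuracy) are assumptions about the tool's JSON output rather than verified API details, and the thresholds are illustrative, not the ones used for the released dataset.

```python
import requests  # assumed HTTP client; any client works

GENDER_API_URL = "https://gender-api.com/get"  # assumed endpoint; verify against the tool's docs

def predicted_gender(name, api_key, min_accuracy=90, min_samples=100):
    """Return a gender label for a user name, or None if the prediction looks weak.

    The fields 'gender', 'samples', and 'accuracy' are assumptions about the
    tool's response format; the thresholds are illustrative only.
    """
    resp = requests.get(GENDER_API_URL, params={"name": name, "key": api_key}, timeout=10)
    data = resp.json()
    if data.get("accuracy", 0) >= min_accuracy and data.get("samples", 0) >= min_samples:
        return data.get("gender")
    return None
```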

Moreover, information about item categories is provided in the dataset: each business in the Yelp dataset is assigned multiple relevant categories.

Overall, the prepared dataset has four separate sets:

  1. Rating data: the ratings each user provided to businesses.

  2. Explicit trust data: the users each user has selected as trusted (friend) users.

  3. User information: each user's gender.

  4. Item categories: the categories assigned to each business.

Using this dataset, we define the user groups based on gender and the item categories as the categories assigned to each business. The dataset is available at https://github.com/masoudmansoury/yelp_core40.

4.3. Experimental results

In this section, we compare the performance of the recommendation algorithms across the metrics discussed earlier. First, we show the bias disparity of the recommendation results on the top 10 most preferred item categories. Second, we show the average disparity of each algorithm over all categories. For a sensible comparison, we also take into account ranking quality and item coverage.

4.3.1. Bias disparity

Results for model-based recommendation algorithms on the top 10 most preferred item categories for males and females are shown in Figure 1. Figure 1(a) shows the bias disparity for males and Figure 1(b) shows the bias disparity for females. Since there is always a trade-off between accuracy and beyond-accuracy metrics (e.g., nDCG vs. fairness), the fairness analysis is conducted on recommendation outputs tuned to the same (highest achievable) nDCG across algorithms; for the model-based recommendation algorithms, the nDCG value is fixed to a common level. This setting guarantees that the fairness of the algorithms is compared under the same conditions.

As shown in Figure 1, in most cases SoReg provides lower bias disparity on the top 10 most preferred categories for both the male and female groups. For males, in Figure 1(a), SoReg and SLIM generated more stable outputs than the other algorithms, each achieving the lowest bias disparity in 40% of the categories. For females, SoReg and ListRankMF generated recommendations with the lowest bias disparity in 50% and 40% of the categories, respectively, compared to the other recommendation algorithms.

In Figure 1, we did not report results for BiasedMF, SVD++, SocialMF, TrustMF, or the random and most popular item recommendations because these algorithms either did not recommend any items from the top 10 most preferred categories or their ranking quality was lower than the level used for the other algorithms.

Results for neighborhood-based recommendation algorithms for the male and female groups are shown in Figure 2. The nDCG values for the neighborhood algorithms are all fixed to a common level. Figure 2(a) shows the bias disparity of the neighborhood models for males. TrustKNN generated more stable recommendations than the other algorithms, achieving the lowest bias disparity in 50% of the top 10 most preferred categories; for the remaining categories, its output is very close to the best one. An even better result in terms of bias disparity can be observed in Figure 2(b) for females: on 60% of the top 10 most preferred categories, TrustKNN worked better than the other neighborhood algorithms.

4.3.2. Average disparity

Figure 3 compares the performance of the recommendation algorithms with respect to two criteria: 1) how well the algorithms generate stable (i.e., low-disparity) recommendations for the unprotected and protected groups, and 2) how well the algorithms cover items belonging to all providers when generating recommendations (provider-side fairness).

For all experiments that we performed with different hyperparameters, the best and worst nDCG for each algorithm are reported in Figure 3.

The random guess algorithm is a naive approach that recommends a random list of items to each user. Although this algorithm has low accuracy, it has the highest item coverage and lower average disparity than the other recommendation algorithms. It does not take any preferences into account and is unlikely to provide good results for any user. Most popular item recommendation is another naive, non-personalized algorithm that recommends only the items with the highest number of ratings to each user. Although it has high ranking quality and an average disparity similar to the model-based algorithms, it has the lowest item coverage. These two algorithms provide baselines that the other algorithms should be expected to beat.

Among the neighborhood models, TrustKNN showed the best performance. Although it has lower ranking quality than UserKNN and ItemKNN, it has significantly better item coverage and average disparity. One possible reason for the low nDCG of TrustKNN is the high sparsity of the trust matrix; using a propagation model to reduce this sparsity may increase its ranking quality. Overall, the neighborhood algorithms worked better than the model-based algorithms on all metrics. This is likely because the rating data in these experiments is very dense and all users are heavy raters.

Among the model-based algorithms, SLIM shows the best performance. As shown in Figure 3(a), while achieving high nDCG, it has the lowest average disparity, and its item coverage is comparable to the other model-based algorithms. This result is consistent with the formulation of SLIM, which is an extension of ItemKNN; analogous to the neighborhood algorithms, it performed well.

ListRankMF is another model-based algorithm that, despite high accuracy and item coverage, has an average disparity as high as the other algorithms. Also, among the model-based trust-aware recommendation algorithms, although SoReg showed a significant reduction in bias disparity on the top 10 most preferred categories, it did not improve the average disparity over all categories.

5. Conclusion

In this paper, we examined the effectiveness of recommendation algorithms in generating outputs with low bias disparity for different groups of users across item categories. We measured the performance of recommendation algorithms in terms of bias disparity on the top 10 most preferred item categories, average disparity, ranking quality, and item coverage. A comprehensive set of experiments showed that neighborhood models work significantly better than the other algorithms, with the trust-aware neighborhood model outperforming all others. We also observed that, in most cases, having additional information alongside the rating data can enhance the performance of recommender systems.

For future work, we would like to investigate individual fairness by considering how well recommendation algorithms capture individual users' interests across different item categories. We are also interested in repeating these experiments on another sample of the Yelp dataset with sparser rating data and denser trust data to see how well the recommendation algorithms control bias disparity.

References