New Fairness Metrics for Recommendation that Embrace Differences

06/29/2017 ∙ by Sirui Yao, et al. ∙ Virginia Polytechnic Institute and State University

We study fairness in collaborative-filtering recommender systems, which are sensitive to discrimination that exists in historical data. Biased data can lead collaborative filtering methods to make unfair predictions against minority groups of users. We identify the insufficiency of existing fairness metrics and propose four new metrics that address different forms of unfairness. These fairness metrics can be optimized by adding fairness terms to the learning objective. Experiments on synthetic and real data show that our new metrics can better measure fairness than the baseline, and that the fairness objectives effectively help reduce unfairness.




1. Introduction

This paper introduces new measures of unfairness in algorithmic recommendation and demonstrates how to optimize these metrics to reduce different forms of unfairness. Since recommender systems make predictions based on observed data, they can easily inherit bias that may already exist. To address this issue, we first describe a process that leads to unfairness in recommender systems and identify the insufficiency of demographic parity for this setting. We then propose four new unfairness metrics that address different forms of unfairness. To improve model fairness, we provide five fairness objectives that can be optimized as regularizers.

We focus on a frequently practiced approach for recommendation called collaborative filtering. With this approach, predictions are made based on co-occurrence statistics, and most methods assume that the missing ratings are missing at random. Unfortunately, research has shown that sampled ratings have markedly different properties from the users’ true preferences (marlin2012collaborative, ; marlin:recsys09, ), which is a potential source of unfairness.

We consider a running example of unfair recommendation in education (sacin2009recommendation, ; thai2010recommender, ; dascalu2016educational, ), in which the underrepresentation of women in science, technology, engineering, and mathematics (STEM) topics (beede2011women, ; smith2011women, ; griffith2010persistence, ) causes the learned model to underestimate women’s preferences and be biased towards men. We find this setting a serious motivation to advance understanding of unfairness—and methods to reduce unfairness—in recommendation.

Related Work

Various studies have considered algorithmic fairness in problems such as classification (pedreshi2008discrimination, ; lum2016statistical, ; zafar2017fairness, ). Removing sensitive features (e.g., gender, race, or age) is often insufficient for fairness. Features are often correlated, so other unprotected attributes can be related to the sensitive features (kamishima2011fairness, ; zemel2013learning, ). Moreover, in problems such as collaborative filtering, algorithms do not directly consider measured features and instead infer latent user attributes from their behavior.

Another frequently practiced strategy for encouraging fairness is to enforce demographic parity, which ensures that the overall proportion of members of the protected group receiving positive (negative) classifications is identical to the proportion of the population as a whole (zemel2013learning, ). Based on this non-parity unfairness concept, Kamishima et al. (kamishima2011fairness, ; kamishima2012enhancement, ; kamishima2013efficiency, ) try to solve the unfairness issue in recommender systems by adding a regularization term that enforces demographic parity. However, demographic parity is only appropriate when preferences are unrelated to the sensitive features. In recommendation, user preferences are indeed influenced by sensitive features such as gender, race, and age (chausson2010watches, ; daymont1984job, ).
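As a concrete illustration of the demographic-parity criterion (a sketch of the general idea, not code from any of the cited papers; the function name and input conventions are our own), the parity gap of a binary classifier can be computed as:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-classification rates between the two groups.

    y_pred: 0/1 decisions; group: 0/1 protected-attribute values.
    A gap of zero means the decisions satisfy demographic parity exactly.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
```

Demographic parity constrains only these marginal rates; as noted above, that is inappropriate whenever true preferences genuinely differ across groups.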

To address the issues of demographic parity, Hardt et al. (hardt2016equality, ) measure unfairness with the true positive rate and true negative rate. They propose that, in a binary setting with a decision $\hat{y} \in \{0, 1\}$, a protected attribute $a \in \{0, 1\}$, and the true label $y \in \{0, 1\}$, the predictor should satisfy the constraints $\Pr(\hat{y} = 1 \mid a = 0, y) = \Pr(\hat{y} = 1 \mid a = 1, y)$ for $y \in \{0, 1\}$ (hardt2016equality, ). This idea encourages equal opportunity and no longer relies on the assumption behind demographic parity, that the target variable is independent of the sensitive features. Similarly, Calders et al. (calders2013controlling, ) propose to impose constraints on the residuals of linear regression models, requiring not only the mean prediction but also the mean residuals to be the same across groups. These ideas form the basis of the unfairness metrics we propose for recommendation.

2. Fairness Objectives for Collaborative Filtering

This section introduces fairness objectives for collaborative filtering. We begin by reviewing the matrix factorization method. We then describe the various fairness objectives we consider, providing formal definitions and discussion of their motivations.

2.1. Matrix Factorization

We consider the task of collaborative filtering using matrix factorization (koren2009matrix, ). We have a set of users indexed from $1$ to $m$ and a set of items indexed from $1$ to $n$. For the $i$th user, let $g_i$ be a variable indicating which group the user belongs to. For the $j$th item, let $h_j$ indicate the item group it belongs to. Let $r_{ij}$ be the preference score of the $i$th user for the $j$th item.

The matrix-factorization formulation assumes that each rating can be represented as $r_{ij} \approx \mathbf{p}_i^\top \mathbf{q}_j + u_i + v_j$, where $\mathbf{p}_i$ is a $d$-dimensional vector representing the $i$th user, $\mathbf{q}_j$ is a $d$-dimensional vector representing the $j$th item, and $u_i$ and $v_j$ are scalar bias terms for the user and item, respectively. The matrix-factorization learning algorithm seeks to learn these parameters from the set of observed ratings $X$, typically by minimizing a regularized, squared reconstruction error:

$$
J(\mathbf{P}, \mathbf{Q}, \mathbf{u}, \mathbf{v}) = \sum_{(i,j) \in X} \left( \mathbf{p}_i^\top \mathbf{q}_j + u_i + v_j - r_{ij} \right)^2 + \frac{\lambda}{2} \left( \|\mathbf{P}\|_F^2 + \|\mathbf{Q}\|_F^2 \right),
$$

where $\mathbf{u}$ and $\mathbf{v}$ are the vectors of bias terms and $\| \cdot \|_F$ represents the Frobenius norm.
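To make the objective concrete, here is a minimal NumPy sketch of full-batch gradient descent on this loss (the variable names `P`, `Q`, `u`, `v` follow the text; the learning rate, epoch count, and initialization scale are illustrative choices, not values from the paper):

```python
import numpy as np

def train_mf(R, mask, d=2, lam=0.0, lr=0.01, epochs=1000, seed=0):
    """Fit r_ij ~ p_i . q_j + u_i + v_j on observed entries (mask == 1) by
    minimizing squared error plus Frobenius-norm regularization on P and Q."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.standard_normal((m, d))
    Q = 0.1 * rng.standard_normal((n, d))
    u, v = np.zeros(m), np.zeros(n)
    for _ in range(epochs):
        err = mask * (P @ Q.T + u[:, None] + v[None, :] - R)  # zero on unobserved
        gP = 2 * err @ Q + lam * P       # gradient of the objective w.r.t. P
        gQ = 2 * err.T @ P + lam * Q
        gu = 2 * err.sum(axis=1)
        gv = 2 * err.sum(axis=0)
        P -= lr * gP
        Q -= lr * gQ
        u -= lr * gu
        v -= lr * gv
    return P, Q, u, v
```

In practice a stochastic optimizer such as Adam (used in Section 3) would replace the plain gradient steps, but the gradients themselves are the same.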

2.2. Fairness Metrics

We consider a binary group feature distinguishing disadvantaged and advantaged groups. In the STEM example, the disadvantaged group may be women and people of non-binary gender identities, and the advantaged group may be men.

The first metric is value unfairness, which measures inconsistency in signed estimation error across the user types, computed as

$$
U_{\text{val}} = \frac{1}{n} \sum_{j=1}^{n} \Big| \left( \mathrm{E}_g[y]_j - \mathrm{E}_g[r]_j \right) - \left( \mathrm{E}_{\neg g}[y]_j - \mathrm{E}_{\neg g}[r]_j \right) \Big|,
$$

where $\mathrm{E}_g[y]_j$ is the average predicted score for the $j$th item from disadvantaged users, $\mathrm{E}_{\neg g}[y]_j$ is the average predicted score for advantaged users, and $\mathrm{E}_g[r]_j$ and $\mathrm{E}_{\neg g}[r]_j$ are the average ratings for the disadvantaged and advantaged users, respectively. Value unfairness occurs when one class of users is consistently given higher or lower predictions than their true preferences.
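A direct NumPy translation of this metric (the helper below is our own; `disadv` marks the disadvantaged users, and per-item group averages are taken over observed ratings only, assuming every item has at least one observed rating in each group):

```python
import numpy as np

def value_unfairness(pred, rating, observed, disadv):
    """U_val: mean over items of |(E_g[y]_j - E_g[r]_j) - (E_!g[y]_j - E_!g[r]_j)|.

    pred, rating: m x n arrays; observed: m x n boolean mask;
    disadv: length-m boolean array marking disadvantaged users."""
    def item_means(mat, rows):
        # per-item average over one user group, restricted to observed entries
        return np.nanmean(np.where(observed[rows], mat[rows], np.nan), axis=0)
    gy, gr = item_means(pred, disadv), item_means(rating, disadv)
    ay, ar = item_means(pred, ~disadv), item_means(rating, ~disadv)
    return np.mean(np.abs((gy - gr) - (ay - ar)))
```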

The second metric is absolute unfairness, which measures inconsistency in absolute estimation error across user types, computed as

$$
U_{\text{abs}} = \frac{1}{n} \sum_{j=1}^{n} \Big| \, \big| \mathrm{E}_g[y]_j - \mathrm{E}_g[r]_j \big| - \big| \mathrm{E}_{\neg g}[y]_j - \mathrm{E}_{\neg g}[r]_j \big| \, \Big|.
$$

Absolute unfairness is unsigned, so it captures the quality of prediction for each user type.

The third metric is underestimation unfairness, which measures inconsistency in how much the predictions underestimate the true ratings:

$$
U_{\text{under}} = \frac{1}{n} \sum_{j=1}^{n} \Big| \max\{0, \mathrm{E}_g[r]_j - \mathrm{E}_g[y]_j\} - \max\{0, \mathrm{E}_{\neg g}[r]_j - \mathrm{E}_{\neg g}[y]_j\} \Big|,
$$

where $\max\{0, x\}$ is the hinge function, i.e., $x$ if $x > 0$ and $0$ otherwise. Underestimation unfairness is important in settings where missing recommendations are more critical than extra recommendations.

Conversely, the fourth new metric is overestimation unfairness, which measures inconsistency in how much the predictions overestimate the true ratings:

$$
U_{\text{over}} = \frac{1}{n} \sum_{j=1}^{n} \Big| \max\{0, \mathrm{E}_g[y]_j - \mathrm{E}_g[r]_j\} - \max\{0, \mathrm{E}_{\neg g}[y]_j - \mathrm{E}_{\neg g}[r]_j\} \Big|.
$$
Finally, a non-parity unfairness measure based on the regularization term introduced by Kamishima et al. (kamishima2011fairness, ) can be computed as the absolute difference between the overall average predicted ratings of disadvantaged users and that of advantaged users: $U_{\text{par}} = \big| \mathrm{E}_g[y] - \mathrm{E}_{\neg g}[y] \big|$.
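The remaining metrics differ from value unfairness only in how the per-item group errors are combined, so they can be computed together (again a sketch with our own naming; `disadv` marks disadvantaged users and averages are taken over observed entries only):

```python
import numpy as np

def group_item_means(mat, observed, rows):
    # per-item average over one user group, restricted to observed entries
    return np.nanmean(np.where(observed[rows], mat[rows], np.nan), axis=0)

def absolute_under_over_parity(pred, rating, observed, disadv):
    """Return (U_abs, U_under, U_over, U_par) for a binary user-group split."""
    gy = group_item_means(pred, observed, disadv)
    gr = group_item_means(rating, observed, disadv)
    ay = group_item_means(pred, observed, ~disadv)
    ar = group_item_means(rating, observed, ~disadv)
    u_abs = np.mean(np.abs(np.abs(gy - gr) - np.abs(ay - ar)))
    u_under = np.mean(np.abs(np.maximum(0, gr - gy) - np.maximum(0, ar - ay)))
    u_over = np.mean(np.abs(np.maximum(0, gy - gr) - np.maximum(0, ay - ar)))
    u_par = abs(pred[observed & disadv[:, None]].mean()
                - pred[observed & ~disadv[:, None]].mean())
    return u_abs, u_under, u_over, u_par
```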

To optimize one or more of these metrics, we add them to the learning objective and solve for a local minimum of $J(\mathbf{P}, \mathbf{Q}, \mathbf{u}, \mathbf{v}) + U$, where $U$ is the chosen unfairness penalty.
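As an illustration of how a fairness term enters the gradient, the non-parity penalty has a particularly simple (sub)gradient with respect to the predictions, which can then be chained through $\mathbf{p}_i^\top \mathbf{q}_j + u_i + v_j$ exactly like the squared-error term. (The four new metrics can be differentiated through the per-item group means in the same way; this sketch and its names are ours, not the paper's implementation.)

```python
import numpy as np

def nonparity_subgrad(pred, observed, disadv):
    """(Sub)gradient of U_par = |mean_g(pred) - mean_!g(pred)| over observed
    entries, taken with respect to the full prediction matrix."""
    obs_g = observed & disadv[:, None]     # observed entries of disadvantaged users
    obs_a = observed & ~disadv[:, None]    # observed entries of advantaged users
    diff = pred[obs_g].mean() - pred[obs_a].mean()
    g = np.zeros_like(pred)
    g[obs_g] = np.sign(diff) / obs_g.sum()   # derivative of each group mean
    g[obs_a] = -np.sign(diff) / obs_a.sum()
    return g
```

A step in the direction of minus this gradient reduces the penalty, so it can simply be scaled by a trade-off weight and added to the reconstruction-error gradient before updating the factors and biases.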

3. Experiments

We run experiments on simulated course-recommendation data and real movie rating data (harper2016movielens, ).

3.1. Synthetic data

In our synthetic experiments, we consider four user groups and three item groups. The user groups represent women who do not enjoy STEM topics (W), women who do enjoy STEM topics (WS), men who do not enjoy STEM topics (M), and men who do (MS). The item groups represent courses that tend to appeal to women (Fem), STEM courses, and courses that tend to appeal to men (Masc). We generate simulated course-recommendation data with two stochastic block models (holland1976local, ). Our rating block model determines the probability that a user in a given user group likes an item in a given item group. We use two observation block models that determine the probability that a user in a user group rates an item in an item group: one with a uniform observation probability for all groups and one with unbalanced observation probabilities inspired by real-world biases.
We define two different user group distributions: one in which each of the four groups is exactly a quarter of the population, and an imbalanced setting in which 0.4 of the population is in W, 0.1 in WS, 0.4 in MS, and 0.1 in M. This heavy imbalance is inspired by the severe gender imbalance in certain STEM areas today.
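The two-stage sampling above can be sketched as follows. Note that the block-model probability matrices `L_ill` and `O_ill` below are illustrative placeholders, not the actual matrices used in our experiments:

```python
import numpy as np

def sample_block_models(user_groups, item_groups, L, O, seed=0):
    """Sample binary preferences and an observation mask from two stochastic
    block models: L[g, h] = P(user in group g likes item in group h) and
    O[g, h] = P(that user-item pair's rating is observed)."""
    rng = np.random.default_rng(seed)
    probs_like = L[np.ix_(user_groups, item_groups)]   # per-pair probabilities
    probs_obs = O[np.ix_(user_groups, item_groups)]
    shape = probs_like.shape
    likes = (rng.random(shape) < probs_like).astype(float)
    observed = rng.random(shape) < probs_obs
    return likes, observed

# Illustrative probabilities (rows: W, WS, M, MS; columns: Fem, STEM, Masc).
L_ill = np.array([[0.8, 0.2, 0.2],
                  [0.8, 0.8, 0.2],
                  [0.2, 0.2, 0.8],
                  [0.2, 0.8, 0.8]])
O_ill = np.full((4, 3), 0.4)   # the "uniform observation" setting
```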

Unfairness from different types of underrepresentation

Using standard matrix factorization, we measure the various unfairness metrics under the different sampling settings. We average over five random trials and plot the average scores in Fig. 1. In each trial, we generate ratings from 400 users on 300 items with the block models. We label the settings as follows: uniform user groups and uniform observation probabilities (U), uniform groups and biased observation probabilities (O), biased user group populations and uniform observations (P), and biased populations and observations (P+O).

Figure 1. Average unfairness scores for standard matrix factorization on synthetic data generated from different underrepresentation schemes.
Unfairness | Error | Value | Absolute | Underestimation | Overestimation | Non-Parity
None | 0.317 ± 1.3e-02 | 0.649 ± 1.8e-02 | 0.443 ± 2.2e-02 | 0.107 ± 6.5e-03 | 0.544 ± 2.0e-02 | 0.362 ± 1.6e-02
Value | 0.130 ± 1.0e-02 | 0.245 ± 1.4e-02 | 0.177 ± 1.5e-02 | 0.063 ± 4.1e-03 | 0.199 ± 1.5e-02 | 0.324 ± 1.2e-02
Absolute | 0.205 ± 8.8e-03 | 0.535 ± 1.6e-02 | 0.267 ± 1.3e-02 | 0.135 ± 6.2e-03 | 0.400 ± 1.4e-02 | 0.294 ± 1.0e-02
Under | 0.269 ± 1.6e-02 | 0.512 ± 2.3e-02 | 0.401 ± 2.4e-02 | 0.060 ± 3.5e-03 | 0.456 ± 2.3e-02 | 0.357 ± 1.6e-02
Over | 0.130 ± 6.5e-03 | 0.296 ± 1.2e-02 | 0.172 ± 1.3e-02 | 0.074 ± 6.0e-03 | 0.228 ± 1.1e-02 | 0.321 ± 1.2e-02
Non-Parity | 0.324 ± 1.3e-02 | 0.697 ± 1.8e-02 | 0.453 ± 2.2e-02 | 0.124 ± 6.9e-03 | 0.573 ± 1.9e-02 | 0.251 ± 1.0e-02
Table 1. Average error and unfairness metrics for synthetic data using different fairness objectives; each cell shows the mean and, after "±", its variation across trials. Each row represents a different unfairness penalty, and each column is the measured metric on the expected value of unseen ratings; lower is better in every column.
Unfairness | Error | Value | Absolute | Underestimation | Overestimation | Non-Parity
None | 0.887 ± 1.9e-03 | 0.234 ± 6.3e-03 | 0.126 ± 1.7e-03 | 0.107 ± 1.6e-03 | 0.153 ± 3.9e-03 | 0.036 ± 1.3e-03
Value | 0.886 ± 2.2e-03 | 0.223 ± 6.9e-03 | 0.128 ± 2.2e-03 | 0.102 ± 1.9e-03 | 0.148 ± 4.9e-03 | 0.041 ± 1.6e-03
Absolute | 0.887 ± 2.0e-03 | 0.235 ± 6.2e-03 | 0.124 ± 1.7e-03 | 0.110 ± 1.8e-03 | 0.151 ± 4.2e-03 | 0.023 ± 2.7e-03
Under | 0.888 ± 2.2e-03 | 0.233 ± 6.8e-03 | 0.128 ± 1.8e-03 | 0.102 ± 1.7e-03 | 0.156 ± 4.2e-03 | 0.058 ± 9.3e-04
Over | 0.885 ± 1.9e-03 | 0.234 ± 5.8e-03 | 0.125 ± 1.6e-03 | 0.112 ± 1.9e-03 | 0.148 ± 4.1e-03 | 0.015 ± 2.0e-03
Non-Parity | 0.887 ± 1.9e-03 | 0.236 ± 6.0e-03 | 0.126 ± 1.6e-03 | 0.110 ± 1.7e-03 | 0.152 ± 3.9e-03 | 0.010 ± 1.5e-03
Table 2. Average error and unfairness metrics for movie-rating data using different fairness objectives; each cell shows the mean and, after "±", its variation across trials.

The statistics suggest that each underrepresentation type contributes to various forms of unfairness. For all metrics except parity, there is a strict order of unfairness, where uniform data is the most fair and biasing the populations and observations causes the most unfairness. Because of the observation bias, there is actually non-parity in the labeled ratings, so a high non-parity score does not necessarily indicate an unfair situation. These tests verify that unfairness can occur with imbalanced populations or observations even when the measured ratings accurately represent user preferences.

Optimization of unfairness metrics

We optimize fairness objectives under the most imbalanced setting: the user populations are imbalanced, and the sampling rate is skewed. We optimize for 500 iterations of Adam (kingma2014adam, ).

The results are listed in Table 1. The learning algorithm successfully minimizes the unfairness penalties, generalizing to unseen, held-out user-item pairs, and reducing any unfairness metric does not lead to a significant increase in reconstruction error. A combined "Over+Under" objective leads to scores that are close to the minimum of each metric except parity.

3.2. Real data

We use the MovieLens 1M dataset (harper2016movielens, ), which contains ratings from 1 to 5 by 6,040 users for 3,883 movies. We manually selected five genres (action, crime, musical, romance, and sci-fi) that each exhibit different forms of gender imbalance and consider only movies that list at least one of these genres. We then filtered the users to those who rated at least 50 of the selected movies. After filtering by genre and rating frequency, we have 2,953 users and 1,006 movies in the dataset.
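This preprocessing can be sketched with plain Python (the function and its tuple layout are our own; a real pipeline would first parse the MovieLens files into these structures):

```python
from collections import Counter

SELECTED_GENRES = {"Action", "Crime", "Musical", "Romance", "Sci-Fi"}

def filter_ratings(ratings, movie_genres, min_ratings=50):
    """Keep only ratings of movies listing a selected genre, then drop users
    with fewer than min_ratings remaining ratings.

    ratings: iterable of (user_id, movie_id, rating) tuples;
    movie_genres: dict mapping movie_id to a set of genre names."""
    kept_movies = {m for m, genres in movie_genres.items()
                   if genres & SELECTED_GENRES}
    genre_ratings = [(u, m, r) for (u, m, r) in ratings if m in kept_movies]
    counts = Counter(u for (u, _, _) in genre_ratings)
    return [(u, m, r) for (u, m, r) in genre_ratings
            if counts[u] >= min_ratings]
```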

We run three trials in which we randomly split the ratings into training and testing sets; the average scores are listed in Table 2. As in the synthetic setting, the results show that optimizing each unfairness metric leads to the best performance on that metric without a significant change in the reconstruction error.

4. Conclusion

In this paper, we discussed various types of unfairness that can occur in collaborative filtering. We demonstrated that these forms of unfairness can occur even when the observed rating data accurately reflect the users’ preferences. We proposed four fairness metrics and showed that augmenting matrix-factorization objectives with these metrics as penalty functions enables their minimization. Our experiments on synthetic and real data show that minimizing these unfairness metrics is possible with no significant increase in reconstruction error. However, no single objective was best for all unfairness metrics, so practitioners must consider precisely which form of unfairness is most important in their application and optimize that specific objective.

Future Work

While our work here focused on improving fairness among user groups, we did not address fair treatment of different item groups. The model could be biased toward certain items, e.g., performing better for some items than for others. Achieving fairness for both users and items may be important when considering that items may also suffer from discrimination or bias, e.g., when courses are taught by instructors with different demographics.

Moreover, our fairness metrics assume that users rate items according to their true preferences. This assumption is likely violated in real data, since ratings can also be influenced by environmental factors. For example, in education, a student’s rating for a course also depends on whether the course has an inclusive and welcoming learning environment. Addressing this type of bias may require additional information or external interventions beyond the provided rating data.

Finally, we are investigating methods to reduce unfairness by directly modeling the two-stage sampling process we used in Section 3.1. Explicitly modeling the rating and observation probabilities as separate variables may enable a principled, probabilistic approach to address these forms of data imbalance.


  • [1] David N Beede, Tiffany A Julian, David Langdon, George McKittrick, Beethika Khan, and Mark E Doms. Women in STEM: A gender gap to innovation. U.S. Department of Commerce, Economics and Statistics Administration, 2011.
  • [2] Toon Calders, Asim Karim, Faisal Kamiran, Wasif Ali, and Xiangliang Zhang. Controlling attribute effect in linear regression. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 71–80. IEEE, 2013.
  • [3] Olivia Chausson. Who watches what? Assessing the impact of gender and personality on film preferences. 2010.
  • [4] Maria-Iuliana Dascalu, Constanta-Nicoleta Bodea, Monica Nastasia Mihailescu, Elena Alice Tanase, and Patricia Ordoñez de Pablos. Educational recommender systems and their application in lifelong learning. Behaviour & Information Technology, 35(4):290–297, 2016.
  • [5] Thomas N. Daymont and Paul J. Andrisani. Job preferences, college major, and the gender gap in earnings. Journal of Human Resources, pages 408–428, 1984.
  • [6] Amanda L. Griffith. Persistence of women and minorities in STEM field majors: Is it the school that matters? Economics of Education Review, 29(6):911–922, 2010.
  • [7] Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315–3323, 2016.
  • [8] F Maxwell Harper and Joseph A Konstan. The Movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2016.
  • [9] Paul W. Holland and Samuel Leinhardt. Local structure in social networks. Sociological Methodology, 7:1–45, 1976.
  • [10] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Enhancement of the neutrality in recommendation. In Proceedings of the 2nd Workshop on Human Decision Making in Recommender Systems (Decisions@RecSys), pages 8–14, 2012.
  • [11] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Efficiency improvement of neutrality-enhanced recommendation. In Proceedings of the 3rd Workshop on Human Decision Making in Recommender Systems (Decisions@RecSys), pages 1–8, 2013.
  • [12] Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In 11th International Conference on Data Mining Workshops (ICDMW), pages 643–650. IEEE, 2011.
  • [13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [14] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
  • [15] Kristian Lum and James Johndrow. A statistical framework for fair predictive algorithms. arXiv preprint arXiv:1610.08077, 2016.
  • [16] Benjamin Marlin, Richard S Zemel, Sam Roweis, and Malcolm Slaney. Collaborative filtering and the missing at random assumption. arXiv preprint arXiv:1206.5267, 2012.
  • [17] Benjamin M. Marlin and Richard S. Zemel. Collaborative prediction and ranking with non-random missing data. In Proceedings of the Third ACM Conference on Recommender Systems, pages 5–12. ACM, 2009.
  • [18] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 560–568. ACM, 2008.
  • [19] Cesar Vialardi Sacin, Javier Bravo Agapito, Leila Shafti, and Alvaro Ortigosa. Recommendation in higher education using data mining techniques. In Educational Data Mining, 2009.
  • [20] Emma Smith. Women into science and engineering? Gendered participation in higher education STEM subjects. British Educational Research Journal, 37(6):993–1014, 2011.
  • [21] Nguyen Thai-Nghe, Lucas Drumond, Artus Krohn-Grimberghe, and Lars Schmidt-Thieme. Recommender system for predicting student performance. Procedia Computer Science, 1(2):2811–2819, 2010.
  • [22] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. arXiv preprint arXiv:1507.05259, 2017.
  • [23] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, pages 325–333, 2013.