Leveraging Multi-Method Evaluation for Multi-Stakeholder Settings

12/14/2019 ∙ by Christine Bauer, et al. ∙ Leopold-Franzens-Universität Innsbruck, Johannes Kepler University Linz

In this paper, we focus on recommendation settings with multiple stakeholders with possibly varying goals and interests, and argue that no single evaluation method or measure can capture all relevant aspects of such a complex setting. We reason that employing a multi-method evaluation, where multiple evaluation methods or measures are combined and integrated, yields a richer picture and prevents blind spots in the evaluation outcome.




1. Introduction

In recommender systems (RS) research, we observe a strong focus on advancing systems such that they accurately predict items that an individual user may be interested in. The approach of evaluating an RS is thereby largely focused on system-centric methods and metrics (e.g., recall and precision in leave-n-out analyses (Knijnenburg and Willemsen, 2015)). By employing such an evaluation approach and aiming at optimizing these metrics, the following crucial components in the ecosystem are neglected (Burke, 2017; Ekstrand and Willemsen, 2016): (i) multiple stakeholders are embedded in the ecosystem, but current research largely considers merely the role of the end consumer; (ii) the stakeholders typically have diverging interests and objectives for an RS, yet accurately predicting a user’s interests is the predominant focus of current RS research; and (iii) by taking a mainly accuracy-driven, system-centric approach to evaluation, many aspects that determine a user’s experience with an RS are not considered (Knijnenburg and Willemsen, 2015). This results in an incomplete picture of user experience, leaving “blind spots” that are not captured in the quality evaluation of an RS. Although studies (Azaria et al., 2013) have shown that lower accuracy may increase business utility (e.g., revenue) without a significant drop in user satisfaction, the objectives and interests of stakeholders other than the user are typically not the focus of academic research in the RS community.
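The system-centric evaluation style referred to above can be made concrete with a small sketch. The following is a minimal, illustrative computation of precision@k and recall@k for a single user in a leave-n-out split; the function and data are hypothetical and not taken from any of the cited works.

```python
from typing import List, Set


def precision_recall_at_k(recommended: List[str], held_out: Set[str], k: int):
    """Precision@k and recall@k for one user in a leave-n-out split."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in held_out)
    precision = hits / k
    recall = hits / len(held_out) if held_out else 0.0
    return precision, recall


# Two of the user's four held-out tracks appear in the top-5 list:
p, r = precision_recall_at_k(["a", "b", "c", "d", "e"], {"b", "d", "x", "y"}, k=5)
# p == 0.4 (2 hits / 5 recommended), r == 0.5 (2 hits / 4 held out)
```

Optimizing such scores alone is precisely the accuracy-driven focus criticized here: a system can score well on them while the experience of users and other stakeholders remains unmeasured.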

In this paper, we call for considering the multiple stakeholders in RS evaluation and postulate that only taking a multi-method evaluation approach allows for capturing and assessing the various interests, objectives, and experiences of these very stakeholders; thus, contributing to eliminating the blind spots in RS evaluation.

To illustrate the opportunities of multi-method evaluations for assessing an RS from multiple stakeholders’ perspectives, we use the music domain as an example. In Section 2, we outline related work. Section 3 points out the multitude and diversity of stakeholders to be considered in the evaluation of music RS. In Section 4, we illustrate, for selected stakeholders, how the integration of multiple evaluation methods contributes to eliminating blind spots in RS evaluation while balancing the stakeholders’ interests.

2. Related Work

The idea of combining different research methods is not a new one. The concept of mixed methods research (Creswell, 2003), for instance, combines quantitative and qualitative research approaches. It has been termed the third methodological paradigm, with quantitative and qualitative methods representing the first and second paradigms, respectively (Teddlie and Tashakkori, 2009). Yet, mixed methods research appears to attract considerable interest but is rarely put into practice (Ågerfalk, 2013). From a practical point of view, the reasons for the low adoption of evaluations leveraging multiple methods are manifold, including higher costs, higher complexity, and wider skill requirements compared to adopting a single method (Celik et al., 2018).

For RS research, Gunawardana and Shani (2015) point out that there is an extensive number of aspects that may be considered when assessing the performance of a recommendation algorithm. Indeed, already early research on RS pointed towards the wide variety of metrics available for system-centric RS evaluation, including classification metrics, predictive metrics, coverage metrics, confidence metrics, and learning rate metrics (Said et al., 2012). As accuracy-driven evaluation has been shown not to be able to capture all the aspects that are relevant for user satisfaction (Konstan and Riedl, 2012), more user-relevant metrics and measures have been introduced and considered over time (Kaminskas and Bridge, 2016) (so-called “quality factors beyond accuracy” (Jannach et al., 2016)). This wider range of objectives includes qualities such as novelty, serendipity (Herlocker et al., 2004), or diversity (Kaminskas and Bridge, 2016).
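To make the “beyond accuracy” qualities tangible, the following sketch computes two such measures for a recommendation list: intra-list diversity as the mean pairwise Jaccard distance over item genre sets, and novelty as the mean self-information of item play probabilities. These follow common formulations in the literature; the data structures and names are illustrative assumptions, not the cited authors’ definitions.

```python
import math
from itertools import combinations
from typing import Dict, List, Set


def intra_list_diversity(items: List[str], genres: Dict[str, Set[str]]) -> float:
    """Mean pairwise Jaccard distance between the genre sets of the listed items."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0

    def jaccard_distance(a: str, b: str) -> float:
        ga, gb = genres[a], genres[b]
        return 1.0 - len(ga & gb) / len(ga | gb)

    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)


def novelty(items: List[str], popularity: Dict[str, float]) -> float:
    """Mean self-information -log2(p) of each item's play probability:
    recommending rare items yields a higher (more novel) score."""
    return sum(-math.log2(popularity[i]) for i in items) / len(items)
```

For example, a list of two tracks sharing the same single genre gets diversity 0.0, while two tracks with disjoint genres get 1.0; a track played by a quarter of all users contributes a novelty of 2 bits.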

Kohavi et al. (2013) stress the importance of applying multiple metrics also in the field of A/B testing and online experiments, pointing out that different metrics reflect different concerns. For A/B testing in RS research, Ekstrand and Willemsen (2016) emphasize the need to include methods and metrics that go beyond the typical A/B behavior metrics. They argue that the currently dominating RS evaluation based on implicit feedback and A/B testing (they refer to this combination as “behaviorism”) is often very limited in its ability to explain why users acted in a particular way. They emphasize that experiments need to be thoroughly grounded in theory and point to the advantages of collecting subjective responses from users which may help to explain their behavior.

Jannach and Adomavicius (2017) point out that academic research in the field of RS tends to focus on the consumer’s perspective with the goal to maximize the consumer’s utility (measured in terms of the most accurate items for a user), while maximizing the provider’s utility (e.g., in terms of profit) appears to be neglected. While industry research on RS will naturally build around the provider’s perspective, publications in this area are scarce (Zanker et al., 2019).

3. Digital Music Stakeholders

Various stakeholders are involved in the digital music value chain (Abdollahpouri and Essinger, 2017): songwriters who create songs; performers (e.g., solo artists, bands); music producers, who take a broad role in the production of a track; record companies, including the three major labels; music platform providers with huge repositories of music tracks, acting at the interface to music consumers; hundreds of millions of music consumers with differing music preferences and various objectives for using an RS (e.g., discovering previously unknown items, or rediscovering items not listened to in a while); and society at large, with its social, economic, and political objectives and needs.

Some stakeholders focus on user experience, where the goal is to propose “the right music, to the right user, at the right moment” (Laplante, 2014). Other stakeholders have business-oriented utility functions (Abdollahpouri and Essinger, 2017). For instance, artists will most likely want their own songs recommended to consumers. While some artists may be fine with any of their songs being recommended, others may prefer to increase the playcount of a particular song (e.g., to reach the top charts, which would open an opportunity to draw an even broader audience; or because some songs generate higher revenues than others due to contract rules). An additional 1,000 playcounts will hardly be noticeable for highly popular artists with yearly playcounts in the billions, but could be an important milestone for a comparatively less popular (local) artist.

4. Balancing Stakeholder Interests in Evaluation

In the following, we make the case for multi-method evaluations that contribute to identifying the strengths and weaknesses of a music RS for the stakeholders involved; in this section, we focus on the users’ and artists’ perspectives.

From a user perspective, recommendations that are adequate in terms of system-centric measures—e.g., the predictive accuracy of recommendation algorithms—do not necessarily meet a user’s expectations (Konstan and Riedl, 2012). User-centric evaluation methods, in contrast, involve users who interact with an RS (Knijnenburg and Willemsen, 2015) to gather user feedback (Beel et al., 2013) either implicitly or explicitly (depending on the concrete evaluation design). Such methods measure a user’s perceived quality of the RS at the time of recommendation, e.g., by established questionnaires (Pu et al., 2011). Still, relying solely on user-centric methods does not reveal the accuracy of the recommendations, because, given the vast amount of items, users are not able to judge whether a given recommendation was indeed the most relevant one (Beel et al., 2013).

Measuring accuracy does not capture the recommendations’ usefulness for users, because higher accuracy scores do not necessarily imply higher user satisfaction (McNee et al., 2006). For instance, a user’s favorite song is an accurate prediction; still, repeating the same song five times is, though accurate, likely not a satisfying experience. Hence, we argue that for evaluating the user’s perspective on an RS (the user being only one of the many stakeholders involved), multiple evaluation methods and measures are required. This may include combining a set of different measures (ranging from recall and precision to serendipity, list diversity, or novelty) or integrating different evaluation methods (ranging from leave-n-out offline experiments to user studies and A/B testing). Furthermore, although A/B testing based on users’ implicit feedback is effective for testing the impact of different algorithms or designs on user behavior, and is thus frequently considered the “gold standard” of recommender evaluation, it is limited in its ability to explain why users acted in a particular way (Ekstrand and Willemsen, 2016). Additional information (e.g., users’ subjective responses) is necessary to explain behavior.

In short, sticking to a single evaluation method narrows our view of the RS, akin to having blinders on while devising and evaluating RS. We can borrow from the social and behavioral sciences, where, e.g., mixed-methods research combines quantitative and qualitative evaluations using different designs (Creswell, 2003). Creswell’s proposed designs include, among others, the convergent parallel design and the sequential design. In the convergent parallel design, two evaluation methods are first applied in parallel and finally integrated into a single interpretation. The sequential design uses sequential timing, employing the methods in distinct phases: the second phase of the study, using the second method, is designed such that it follows from the results of the first phase. Depending on the research goal and the concrete choice of methods, researchers may either interpret how the second phase’s results help to explain the initial results (explanatory design), or they build on the exploratory results of the first phase and subsequently employ a different method in the second phase to test or generalize the initial findings (exploratory design).

For instance, Kamehkhosh and Jannach (2017) showed that, in the field of music RS, the results of an offline evaluation could be reproduced with online studies assessing the users’ perceived quality of recommendations. Similarly, for the Recommender Systems Challenge 2017, participants first evaluated their prototypes in offline evaluations before actually deploying them and evaluating them in the live system using A/B tests (Abel et al., 2017); many RS that performed well in offline evaluations were able to repeat this in online experiments. However, some of the devised RS performed substantially worse in online experiments, highlighting a shortcoming that would not have been revealed by a solely offline evaluation.
Along the same lines, Ekstrand and Willemsen (2016) state that relying on behaviorism for evaluation purposes (e.g., through A/B tests) is not sufficient to understand why users act in a particular way and, for instance, like a particular recommendation.
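As one way the two strands of a convergent parallel design could be integrated, the following sketch compares the ranking of candidate algorithms produced by an offline metric with the ranking produced by a user questionnaire. All algorithm names and scores are invented for illustration; this is a minimal sketch, not a prescribed integration procedure.

```python
# Convergent parallel design, sketched: an offline strand and a user-study
# strand are collected independently per algorithm, then integrated by
# checking whether both strands rank the algorithms the same way.


def rank_order(scores: dict) -> list:
    """Return algorithm names sorted from best to worst score."""
    return sorted(scores, key=scores.get, reverse=True)


offline_ndcg = {"cf": 0.41, "content": 0.35, "hybrid": 0.44}    # offline strand
perceived_quality = {"cf": 3.8, "content": 4.1, "hybrid": 4.3}  # questionnaire strand (1-5)

converges = rank_order(offline_ndcg) == rank_order(perceived_quality)
# Here the strands disagree on second place (cf vs. content), so converges
# is False: exactly the kind of blind spot a single method would miss.
```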

While academic research in the field of RS tends to focus on maximizing the users’ utility, some authors (e.g., (Jannach and Adomavicius, 2017)) emphasize the importance of profit (or value) maximization. Profit maximization may not only be a goal for platform providers, but also for artists, who are the content providers for music platforms. From an artist’s perspective, a good RS recommends the respective artist’s songs sufficiently frequently, which may ultimately lead to higher playcounts, likes, purchases, and profit. Evaluating for profit alone may, though, leave blind spots. For example, depending on the chosen strategy, an artist may want to emphasize other values, such as expanding the audience (thus reaching new listeners) or increasing the listening or purchase volume within the current fan base. Hence, metrics such as the number of unique listeners per artist, the sum of playcounts over all songs of an artist, or profit per audience type may be valuable for RS optimization and need to be considered in the RS evaluation strategy. Accordingly, the artists’ goals and preferences need to be elicited and integrated into the evaluation efforts. While evaluation on a per-artist basis might be interesting for individual artists (e.g., for a comparison between platforms and their integrated RS), it may not be adequate for an overall RS evaluation. Still, an RS needs to be evaluated for its ability to serve the various strategies and for revealing potential tendencies towards one or the other strategy. As the targeted strategy might correlate with artist characteristics (e.g., top-of-the-top vs. “long tail” artists; early career vs. comeback phase vs. long-term career; mainstream artists vs. niche genres), it might be in society’s interest to evaluate for and ensure a balance.
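Artist-side metrics such as unique listeners per artist and total playcounts over all of an artist’s songs could be derived from a play log along the following lines; the log format and all values are a hypothetical sketch.

```python
from collections import defaultdict

# Hypothetical play log of (user_id, artist, track) events.
plays = [
    ("u1", "artist_a", "song_1"),
    ("u2", "artist_a", "song_1"),
    ("u1", "artist_a", "song_2"),
    ("u1", "artist_b", "song_3"),
]

listeners = defaultdict(set)   # artist -> set of distinct users (audience reach)
playcounts = defaultdict(int)  # artist -> total plays over all songs (volume)
for user, artist, _track in plays:
    listeners[artist].add(user)
    playcounts[artist] += 1

unique_listeners = {artist: len(users) for artist, users in listeners.items()}
# unique_listeners == {"artist_a": 2, "artist_b": 1}
# playcounts      == {"artist_a": 3, "artist_b": 1}
```

The two metrics capture the two artist strategies mentioned above: audience expansion shows up in `unique_listeners`, volume within the existing fan base in `playcounts`.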

Having given these examples, we emphasize that, due to the interdependencies between the RS and the various stakeholders’ actions, the entire RS ecosystem has to be taken into account in the evaluation. For instance, low recommendation accuracy and poor user experience are unlikely to continuously increase profits for the platform provider and for artists of all kinds; conversely, high accuracy does not automatically imply a good user experience and may not contribute to profit maximization.

5. Conclusions

In this position paper, we used the digital music ecosystem as an example to illustrate that multiple stakeholders are impacted by music RS, and discussed the opportunities of multi-method evaluations to consider these multiple perspectives. We emphasize that, irrespective of the application domain, there are always multiple stakeholders involved in recommendation settings. Hence, there are always multiple, and possibly diverging, perspectives and goals of these very stakeholders that need to be considered when evaluating an RS. Consequently, multiple evaluation methods and criteria have to be combined and possibly also weighted.

Multi-method evaluations allow for gathering a richer and more integrated picture of the quality of an RS and contribute to understanding the various phenomena involved in a multi-stakeholder setting, for which one method in isolation would be insufficient (Venkatesh et al., 2013).

This research is supported by the Austrian Science Fund (FWF): V579.


  • H. Abdollahpouri and S. Essinger (2017) Multiple stakeholders in music recommender systems. In 1st International Workshop on Value-Aware and Multistakeholder Recommendation at RecSys 2017, VAMS ’17. External Links: 1708.00120 Cited by: §3, §3.
  • F. Abel, Y. Deldjoo, M. Elahi, and D. Kohlsdorf (2017) Recsys challenge 2017: offline and online evaluation. In Proceedings of the Eleventh ACM Conference on Recommender Systems, New York, NY, USA, pp. 372–373. Cited by: §4.
  • P. J. Ågerfalk (2013) Embracing diversity through mixed methods research. European Journal of Information Systems 22 (3), pp. 251–256. External Links: ISSN 1476-9344, Document, Link Cited by: §2.
  • A. Azaria, A. Hassidim, S. Kraus, A. Eshkol, O. Weintraub, and I. Netanely (2013) Movie recommender system for profit maximization. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys ’13, New York, NY, USA, pp. 121–128. External Links: ISBN 978-1-4503-2409-0, Link, Document Cited by: §1.
  • J. Beel, M. Genzmehr, S. Langer, A. Nürnberger, and B. Gipp (2013) A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation. In Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation, RepSys ’13, New York, NY, USA, pp. 7–14. External Links: ISBN 978-1-4503-2465-6, Document Cited by: §4.
  • R. Burke (2017) Multisided fairness for recommendation. In 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning, FAT/ML ’17. External Links: 1707.00093 Cited by: §1.
  • I. Celik, I. Torre, F. Koceva, C. Bauer, E. Zangerle, and B. Knijnenburg (2018) UMAP 2018 intelligent user-adapted interfaces: design and multi-modal evaluation (iuadaptme). In Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, UMAP ’18, New York, NY, USA, pp. 137–139. External Links: ISBN 978-1-4503-5784-5, Link, Document Cited by: §2.
  • J. W. Creswell (2003) Research design: qualitative, quantitative, and mixed methods approaches. 2nd edition, Sage Publications, Thousand Oaks, CA, USA. Cited by: §2, §4.
  • M. D. Ekstrand and M. C. Willemsen (2016) Behaviorism is not enough: better recommendations through listening to users. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys ’16, New York, NY, USA, pp. 221–224. External Links: Document, ISBN 978-1-4503-4035-9 Cited by: §1, §2, §4, §4.
  • A. Gunawardana and G. Shani (2015) Evaluating recommender systems. In Recommender Systems Handbook, F. Ricci, L. Rokach, and B. Shapira (Eds.), pp. 265–308. External Links: ISBN 978-1-4899-7637-6, Document, Link Cited by: §2.
  • J. L. Herlocker, J. A. Konstan, L. G. Terveen, and J. T. Riedl (2004) Evaluating collaborative filtering recommender systems. ACM Transaction on Information Systems 22 (1), pp. 5–53. External Links: ISSN 1046-8188, Link, Document Cited by: §2.
  • D. Jannach and G. Adomavicius (2017) Price and profit awareness in recommender systems. In 1st International Workshop on Value-Aware and Multistakeholder Recommendation at RecSys 2017, VAMS ’17. External Links: 1707.08029 Cited by: §2, §4.
  • D. Jannach, P. Resnick, A. Tuzhilin, and M. Zanker (2016) Recommender systems — beyond matrix completion. Commun. ACM 59 (11), pp. 94–102. External Links: ISSN 0001-0782, Link, Document Cited by: §2.
  • I. Kamehkhosh and D. Jannach (2017) User perception of next-track music recommendations. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization, UMAP ’17, New York, NY, USA, pp. 113–121. External Links: ISBN 978-1-4503-4635-1, Link, Document Cited by: §4.
  • M. Kaminskas and D. Bridge (2016) Diversity, serendipity, novelty, and coverage: a survey and empirical analysis of beyond-accuracy objectives in recommender systems. ACM Transactions on Interactive Intelligent Systems 7 (1), pp. 2:1–2:42. External Links: ISSN 2160-6455, Link, Document Cited by: §2.
  • B. P. Knijnenburg and M. C. Willemsen (2015) Evaluating recommender systems with user experiments. In Recommender Systems Handbook, F. Ricci, L. Rokach, and B. Shapira (Eds.), pp. 309–352. External Links: ISBN 978-1-4899-7637-6, Document, Link Cited by: §1, §4.
  • R. Kohavi, A. Deng, B. Frasca, T. Walker, Y. Xu, and N. Pohlmann (2013) Online controlled experiments at large scale. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, New York, NY, USA, pp. 1168–1176. External Links: ISBN 978-1-4503-2174-7, Link, Document Cited by: §2.
  • J. A. Konstan and J. Riedl (2012) Recommender systems: from algorithms to user experience. User Modeling and User-Adapted Interaction 22 (1), pp. 101–123. External Links: ISSN 1573-1391, Document, Link Cited by: §2, §4.
  • A. Laplante (2014) Improving music recommender systems: what can we learn from research on music tags?. In 15th International Society for Music Information Retrieval Conference, ISMIR ’14, pp. 451–456. Cited by: §3.
  • S. M. McNee, J. Riedl, and J. A. Konstan (2006) Being accurate is not enough: how accuracy metrics have hurt recommender systems. In CHI ’06 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’06, New York, NY, USA, pp. 1097–1101. External Links: ISBN 1-59593-298-4, Link, Document Cited by: §4.
  • P. Pu, L. Chen, and R. Hu (2011) A user-centric evaluation framework for recommender systems. In Proceedings of the 5th ACM Conference on Recommender Systems, RecSys ’11, New York, NY, USA, pp. 157–164. External Links: ISBN 978-1-4503-0683-6, Link, Document Cited by: §4.
  • A. Said, D. Tikk, K. Stumpf, Y. Shi, M. Larson, and P. Cremonesi (2012) Recommender systems evaluation: a 3d benchmark. In Proceedings of the Workshop on Recommendation Utility Evaluation: Beyond RMSE, RUE ’12, Vol. 910, pp. 21–23. External Links: Link Cited by: §2.
  • C. Teddlie and A. Tashakkori (2009) Foundations of mixed methods research: integrating quantitative and qualitative approaches in the social and behavioral sciences. Sage Publications, Thousand Oaks, CA, USA. Cited by: §2.
  • V. Venkatesh, S. A. Brown, and H. Bala (2013) Bridging the qualitative-quantitative divide: guidelines for conducting mixed methods research in information systems. MIS Quarterly 37 (1), pp. 21–54. External Links: ISSN 0276-7783 Cited by: §5.
  • M. Zanker, L. Rook, and D. Jannach (2019) Measuring the impact of online personalisation: past, present and future. International Journal of Human-Computer Studies. External Links: Document, ISSN 1071-5819, Link Cited by: §2.