Exploring Users' Perception of Collaborative Explanation Styles

05/02/2018 ∙ by Ludovik Coba, et al. ∙ Free University of Bozen-Bolzano ∙ Delft University of Technology

Collaborative filtering systems heavily depend on user feedback expressed in product ratings to select and rank items to recommend. In this study we explore how users value different collaborative explanation styles following the user-based or item-based paradigm. Furthermore, we explore how the characteristics of these rating summarizations, such as the total number of ratings and the mean rating value, influence the decisions of online users. Results, based on a choice-based conjoint experimental design, show that the mean rating value has a higher impact than the total number of ratings. Finally, we discuss how these empirical results can serve as input for developing ranking algorithms that promote items with a higher probability of choice, based on their rating summarizations or on their explainability due to these ratings.







1. Introduction

User ratings are one of the key ingredients of collaborative filtering algorithms, which automatically assess how likely items are to match users’ tastes.

Although implicit signals on users’ actual behavior have recently turned out to possess even more predictive power in practical systems (gomez2016netflix), ratings still play a dominant role in constructing the value and quality perception of an item in the eyes of online consumers (duan2008online).

Collaborative explanations (Friedrich2011ASystems) provide justifications for recommendations by displaying information about the rating behavior of a user’s or item’s neighborhood, as already identified by Herlocker et al. (Herlocker2000ExplainingRecommendations). Moreover, e-commerce sites usually provide at least rating summary statistics for the products in their catalogs, along with information about the origin of the ratings.

In this paper we therefore present a choice-based conjoint study that investigates two aspects of these collaborative explanations. The first aspect regards the users’ perception of three different origins for collaborative rating summarizations, i.e.:

  • summaries derived from ratings of users with similar online behavior to the current user in terms of ratings, purchases or clicks (user-style explanations),

  • summarizations based on the ratings from the social-network friends of the current user (social explanations), and

  • ratings of the current user given to similar items, e.g., this is how you rated similar movies to this one (item-style explanations).

The second aspect of our study relates to the two dominant characteristics of a rating summarization, namely the number of ratings and the mean rating value, and how they impact the choice behavior of users. When investigating preferences for the origin of ratings, our results show that users clearly prefer rating summarizations justified by similar users (user-style explanations) or similar items (item-style explanations) over ratings from social network friends. Results on the characteristics of the rating summarizations show that, all things being equal, users are clearly biased towards selecting items with higher means as opposed to larger numbers of ratings. Thus, this study provides clear indications about the degree of persuasiveness (yoo2012persuasive) of these different aspects of collaborative explanations.

After outlining related work in Section 2, we give details on the choice-based conjoint methodology used for performing the user study in Section 3. We then outline the obtained results and finally discuss implications for recommender systems research.

2. Related work

Explanations for recommendations have received considerable research attention over the past years, as summarized by (Tintarev2015ExplainingEvaluation) and (Nunes2017ASystems). There are different ways of explaining recommendations based on collaborative filtering mechanisms as presented in Herlocker et al. (Herlocker2000ExplainingRecommendations). They explored 21 different interfaces and demonstrated that specifically the “user” style (see Figure 1) improves the acceptance of recommendations.

Figure 1. Collaborative user-style explanation from (Herlocker2000ExplainingRecommendations).

The “user” style of explanation provides information about the neighborhood, which is determined based on a generic notion of similarity between users when analyzing their observed behavior or expressed opinions (i.e., buys, clicks, ratings, etc.). Please notice that social links (e.g., Facebook friends, see Figure 3) can be considered a special case of the user style of justification (Papadimitriou2012ASystems). As far as the user style of explanation is concerned, several collaborative filtering recommender systems, such as Amazon, adopted the following style of justification: “Customers who bought this item also bought these items”.

Figure 2. Items style explanation example from the Netflix system.

In the so-called item style of explanation, the justifications are of the following form: “Item X is recommended because you highly rated/bought item Y”. Thus, the system depicts those items that influenced the recommendation of an item the most. Bilgic et al. (Bilgic2005ExplainingPromotion) claimed that the item style is better than the user style, because it allows users to accurately formulate their true opinion of an item.

Several works researched the effectiveness of this explanation strategy (Cosley2003; Bilgic2005ExplainingPromotion; Papadimitriou2012ASystems). Rating summary statistics have become a common pattern for explaining recommendations in many domains (Cremonesi2017UserApplications).

In this line of research, Cosley et al. (Cosley2003) noticed, for instance, that presenting fine-grained rating information in a recommendation is highly desirable. However, it might bias the users’ opinion, i.e., promote items rather than increase the effectiveness of decisions.

Figure 3. Example of justification using Facebook friends.

In contrast to the aforementioned works, however, we are interested in shedding light on users’ trade-off between rating numbers and their mean values when they have to make a choice.

Conjoint analysis is a market research technique suitable for revealing user preferences and trade-offs in the decision making process (Rao2014ChoiceAnalysis). It has successfully been employed in a wide range of areas, such as education, health, tourism, and human computer interaction.

Cho et al. (Cho2015TheStyle) conducted a conjoint experiment to investigate elders’ preferences for smartphone application icons. The authors explored the dynamics of two attributes (degree of realism and level of abstraction), one with four levels and one with two levels, and ran their user study with a total of 30 respondents.

In the field of recommender systems and online decision support, Zanker and Schoberegger (Zanker2014AnSystems) employed a ranking-based conjoint experiment to understand the persuasive power of different explanation styles on users’ preferences. More recently, Carbonell et al. (Carbonell2018ChoosingPhysician) observed that users select physicians based on considerations of user-generated content, such as ratings and comments, rather than the official descriptions of the physicians’ qualifications. The authors used a choice-based conjoint design to understand which features influenced the users’ choice, and suggested that incorporating these results into recommender systems would improve the decision making process.

However, to the best of our knowledge, the persuasive effect of the characteristics of rating summarizations has not yet been studied. The conjoint methodology, as employed in market research for decades, represents a best practice for quantifying the perceived utility of the characteristics of different rating summarizations.

3. Methodology and design

Choice-based Conjoint (CBC) analysis is a frequently used approach to determine users’ preferences over a wide range of attributes characterizing products or services (Chu2009AYahoo; Kuhfeld2010DiscreteChoice). The CBC methodology is also denoted as a Discrete Choice Experiment by several authors (louviere2010discrete). In this section, we explain the approach used to investigate users’ perception of rating summarizations and describe how we developed and deployed the CBC questionnaire.

The study is divided into two tasks, one designed to investigate how users perceive different origins of ratings, and the other to investigate the trade-off mechanisms between different characteristics of rating summarizations. We created two separate experimental designs in order to consider and test attribute levels that are representative for both the item-style and the user-style of explanations in the movie domain.

3.1. Acceptance of the origin of ratings

We measured users’ preference for three different origins of rating summarizations: two variations of the user-style of explanations (i.e., similar users and friends on social networks), and the item-style of explanations (i.e., the user’s ratings of similar items). We designed three profiles, each introduced with one of the sentences presented in Table 1 and followed by an identical rating summarization; thus, only the origin of the summarized ratings differed.

Figure 4. Example of choice between two different origins of ratings. The movie poster was adopted from: https://peach.blender.org/.

Users made three binary choices, each between two of the three different categories of rating origins, as depicted in Figure 4.

For this task respondents were confronted with the following choice scenario:

“Assume that you find yourself in the situation that you want to make a choice between two different movies to watch. Furthermore, assume that you only care about the origin of the ratings that are presented for each movie, i.e. ratings from other users that had similar preferences like you in the past, ratings of your friends on Facebook or your own ratings for movies that are similar to the one you look at. We therefore would like to ask you about your preference if solely based on this origin of the ratings.”

Origin of Ratings
1 This is how users with similar ratings like you rated this item
2 This is how your friend on Facebook rated this item
3 This is how you rated similar movies on our platform
Table 1. The stimuli presented in the origin of ratings experiment.

This task round was completed with a manipulation check to validate respondents’ correct perception of our stimuli. In the manipulation check, we asked participants about the strategy they had employed in making their choices. Based on their answers, we only included those participants who reportedly had noticed that the origin of the summarized ratings (e.g., ratings from similar users or of similar items) differed between choices. Thus, we removed those respondents who reportedly relied solely on their gut feeling when making their decision.

3.2. Choice-Based Conjoint (CBC) methodology

By collecting answers from different choice sets, researchers can quantify the impact of an attribute level on the preference of respondents (Hauber2016StatisticalForce). In conjoint designs, products (a.k.a., profiles) are modeled by sets of categorical or quantitative attributes, which can have different levels. In CBC experiments, participants have to repeatedly select one profile from different sets of choices, which nicely matches real-world settings when users are confronted with recommendation lists.
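As an illustration of how such answers are quantified, CBC responses are commonly analyzed with a multinomial logit (MNL) model, in which each attribute level contributes a part-worth utility and a profile's choice probability is the softmax of the profile utilities. The sketch below is illustrative only; the part-worth values are hypothetical and not taken from our study.

```python
import math

def mnl_choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(u_i) / sum_j exp(u_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical part-worth utilities for two attributes
# (number-of-ratings level, mean-rating level), summed per profile.
partworths = {
    ("large", "high"): 1.2 + 0.9,
    ("large", "low"): 1.2 - 0.7,
    ("small", "high"): -0.4 + 0.9,
}
probs = mnl_choice_probabilities(list(partworths.values()))
```

Fitting the part-worths to observed choices (e.g., by maximum likelihood) is what allows researchers to quantify the impact of each attribute level.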

3.2.1. Selection of attributes

The first step in building a conjoint design is determining the attributes and their corresponding levels. The most striking characteristics of rating summarizations (see, for instance, Figure 5) are the number of ratings and the mean rating value, which we selected as the attributes in our conjoint choice design.

Figure 5. Example plot of a ratings summarization from Amazon.com. Their model is not just a raw data average of the reviews but also considers factors such as the age of the review.

The total number of ratings is often seen as a proxy for an item’s popularity, and many well-known algorithms recommend items that are frequently rated (Jannach2015). Following the argument of (deLanghe2016NavigatingRatings), a large number of ratings with a slightly lower rating mean should be preferred over a higher mean based on a much lower total number of ratings. This leads us to the second attribute of this study, the mean rating value.

Formally, a rating summary statistic is a frequency distribution on the class of discrete rating values. Thus, besides the total number of ratings and the mean, the variance and skewness are also needed for an approximate description of a unimodal rating distribution. (Empirically, one can also observe bimodal rating distributions, as depicted, for instance, in Figure 5.) In our design, we controlled for the variance and skewness of rating distributions by keeping them fixed. In order to ensure a representative choice of attribute levels for the movie domain, we relied on the Netflix dataset (see Table 2) to identify real-world levels for characterizing rating frequency distributions. Note that the Netflix dataset itself is not needed to reproduce our study, but only the attribute levels derived from the dataset as described in this paper. The Netflix dataset consists of 17,770 items, 480,189 users and contains 100,480,507 ratings on a discrete scale ranging from 1 to 5. It has been heavily used in recommender systems research and provides evidence for the relatively high number of ratings on movie items.
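Since a rating summarization is a frequency distribution, the four describing factors can be computed directly from the rating counts. The following sketch (with a made-up left-skewed distribution, not Netflix data) illustrates this:

```python
def summarize(freq):
    """Summary statistics of a rating frequency distribution.

    freq maps each discrete rating value (e.g., 1..5) to its count.
    Returns total count, mean, variance, and skewness.
    """
    n = sum(freq.values())
    mean = sum(r * c for r, c in freq.items()) / n
    var = sum((r - mean) ** 2 * c for r, c in freq.items()) / n
    m3 = sum((r - mean) ** 3 * c for r, c in freq.items()) / n
    return n, mean, var, m3 / var ** 1.5

# Hypothetical distribution on a 1-5 scale, skewed towards high ratings.
n, mean, var, skew = summarize({1: 2, 2: 1, 3: 3, 4: 8, 5: 6})
```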

Number of ratings	100,480,507
Rating domain	[1; 5]
Mean rating value	3.6
Number of items	17,770
Average number of ratings per item	5,654.5
Number of users	480,189
Average number of ratings per user	209.3
Table 2. Summary statistics of the Netflix dataset.
Figure 6. Rank distribution of users based on the (a) number of ratings and (b) mean value, in the Netflix dataset.
Figure 7. Rank distribution of items based on the (a) number of ratings and (b) mean value, in the Netflix dataset.
Attribute	Item-based levels	User-based levels
A1: Number of Ratings	L1: 39 (Small)	L1: 290 (Small)
	L2: 96 (Medium)	L2: 560 (Medium)
	L3: 259 (Large)	L3: 2,970 (Large)
A2: Mean Rating	L1: 3.4 (Low)	L1: 2.9 (Low)
	L2: 3.7 (Average)	L2: 3.3 (Average)
	L3: 4.0 (High)	L3: 3.6 (High)
Table 3. Attributes and attribute levels in the rating values experiment.

User-style rating summarizations are based on ratings given by other users to the same item, while item-style summarizations aggregate the current user’s ratings of similar items. We therefore opted for two different level combinations for the number of ratings and mean attributes, which we tested on two different samples of participants.

In order to determine the attribute levels for the item-style, we analyzed the distribution of ratings per user in the Netflix movie dataset. Figure 6(a) shows the rank distribution of the users based on the total number of ratings. The 25th, 50th and 75th percentiles (i.e., lower quartile, median and upper quartile) of the number of ratings are 39, 96, and 259, which we henceforth denote as the Small, Medium and Large conditions for the number of ratings. Next, Figure 6(b) analogously depicts the rank distribution of the mean rating values. The 25th, 50th and 75th percentiles have rounded mean rating values of 3.4, 3.7, and 4, respectively, which are our Low, Average and High conditions for the mean rating values.

For the user-style run, in contrast, we determined the levels by analyzing the rating distributions per item. Figure 7(a) shows the rank distribution of the items based on the total number of ratings. Again, the 25th, 50th and 75th percentiles of the number of ratings are 290, 560, and 2,970 (denoted as the Small, Medium and Large conditions for the number of ratings). Note that these levels are several times larger than for the item-style, since items obviously attract high numbers of ratings in the movie domain. Next, Figure 7(b) analogously depicts the rank distribution of the mean rating values. As before, the 25th, 50th and 75th percentiles have rounded mean rating values of 2.9, 3.3, and 3.6 (the Low, Average and High conditions for the mean rating values). These are smaller than the analogous conditions in the item-based design, which is explainable by the higher density of interactions per item rather than per user.
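This quartile-based derivation of attribute levels can be reproduced on any vector of per-user (or per-item) rating counts; the sketch below uses synthetic counts rather than the actual Netflix data:

```python
from statistics import quantiles

# Synthetic per-user rating counts (the paper derives the real levels
# from the Netflix dataset).
counts_per_user = list(range(1, 101))

# 25th/50th/75th percentiles -> Small / Medium / Large conditions.
q1, median, q3 = quantiles(counts_per_user, n=4, method="inclusive")
```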

Table 3 summarizes the selected attributes and the selected values for each level. In addition, we controlled for the variance and skewness of the rating frequency distributions by fixing them to the median values from the respective Netflix rank distributions for both runs of the user study (variance: 1, skewness: -0.5).

3.2.2. Study design

Figure 8. An example snapshot of a choice set, with three different rating summary profiles based on different attribute levels. The movie poster was adopted from: https://peach.blender.org/.

Conjoint choice experiments require a set of profiles and a design that distributes the profiles into a number of choice sets.

The identified attribute levels allow us to build a full-factorial design (Zwerina1996ADesigns), which consists of all possible combinations of attributes and levels; thus 2 attributes with 3 levels each result in 3² = 9 different profiles. All profiles represent statistically feasible level combinations, while, for instance, a mean rating of 5 with a variance different from 0 would obviously be infeasible.
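For our two attributes with three levels each, the full-factorial set of profiles can be enumerated directly; this sketch uses the item-based levels from Table 3:

```python
from itertools import product

# Item-based attribute levels (Table 3).
number_of_ratings = [39, 96, 259]
mean_rating = [3.4, 3.7, 4.0]

# Full-factorial design: every combination of levels is a profile,
# so 3 x 3 = 9 profiles in total.
profiles = list(product(number_of_ratings, mean_rating))
```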

In order to build the choice sets and extract the most information on the main and interaction effects, three principles need to be respected: level balance, orthogonality and minimal overlap (Zwerina1996ADesigns). Level balance requires attribute levels to appear with equal frequency in the different choice sets. Orthogonality ensures that main and interaction effects are uncorrelated; this is achieved by having all attribute levels vary independently of each other. Overlap among levels for an attribute (i.e., identical attribute values for two or more profiles within the same choice set) reduces the collected information. We used the D-efficiency metric to measure the statistical effectiveness of our design (Johnson2013ConstructingForce):
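The first and third of these principles can be checked mechanically for any candidate design. The toy design below (a Latin-square arrangement of the item-based levels, not our actual questionnaire) is level-balanced and has no overlap on the mean-rating attribute:

```python
from collections import Counter

def level_balance(choice_sets, attribute):
    """Count how often each level of an attribute occurs across all sets."""
    return Counter(p[attribute] for cs in choice_sets for p in cs)

def has_overlap(choice_set, attribute):
    """True if two profiles in the set share a level for the attribute."""
    levels = [p[attribute] for p in choice_set]
    return len(set(levels)) < len(levels)

# Toy design: 3 choice sets of 3 profiles (number of ratings, mean) each.
sets = [
    [(39, 3.4), (96, 3.7), (259, 4.0)],
    [(39, 3.7), (96, 4.0), (259, 3.4)],
    [(39, 4.0), (96, 3.4), (259, 3.7)],
]
balance = level_balance(sets, 0)                # each level appears 3 times
overlaps = [has_overlap(cs, 1) for cs in sets]  # no mean-level overlap
```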


    D-efficiency = 100 × 1 / (N_D · |(X′X)⁻¹|^(1/p))        (1)

where N_D is the number of observations in the design, p is, as before, the number of parameters, and X is the standardized orthogonal contrast coding of the design matrix D (Kuhfeld2010). In X, the columns correspond to the levels of each attribute, and each row of the design matrix D is a binary representation of a profile in a choice set.

Coding is the process of replacing the design levels by a set of indicator or coded variables. For determining the efficiency of the design we used the standard orthogonal contrast coding as recommended by (Zwerina1996ADesigns). Please notice that the sum of squares of each column in a standard orthogonal coding matrix is equal to the number of levels (e.g., if an attribute has two levels, the sum of squares of the corresponding column of X is 2). Thus, if the design is orthogonal and balanced, X′X = N_D·I, where I is the p×p identity matrix. In this case, |(X′X)⁻¹|^(1/p) = 1/N_D, so the denominator terms in Formula 1 cancel each other and the efficiency is 100.
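To make the computation concrete, the following sketch codes a 3-level attribute with standardized orthogonal contrasts (each column's sum of squares over the levels equals the number of levels) and evaluates the D-efficiency of the 3×3 full-factorial design; since that design is orthogonal and balanced, the efficiency comes out at 100:

```python
import numpy as np
from itertools import product

# Standardized orthogonal contrast coding for one 3-level attribute:
# linear and quadratic contrasts, scaled so each column's sum of
# squares over the three levels equals 3 (the number of levels).
C3 = np.array([[-1.0, 1.0],
               [0.0, -2.0],
               [1.0, 1.0]]) * np.sqrt([3 / 2, 3 / 6])

def d_efficiency(rows):
    """100 x 1 / (N * |(X'X)^-1|^(1/p)) for a coded design matrix X."""
    X = np.vstack(rows)
    n, p = X.shape
    return 100.0 / (n * np.linalg.det(np.linalg.inv(X.T @ X)) ** (1.0 / p))

# Full-factorial 3x3 design: concatenate the codes of both attributes.
design = [np.concatenate([C3[i], C3[j]]) for i, j in product(range(3), repeat=2)]
eff = d_efficiency(design)  # orthogonal and balanced -> 100
```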