Latent Unexpected and Useful Recommendation

05/04/2019 ∙ by Pan Li, et al. ∙ New York University

Providing unexpected recommendations is an important task for recommender systems. To do this, we need to start from the expectations of users and deviate from these expectations when recommending items. Previously proposed approaches model user expectations in the feature space, limiting them to the items that the user has visited or can expect by deduction from association rules, without including the items that the user could also expect from the latent, complex and heterogeneous interactions between users, items and entities. In this paper, we define unexpectedness in the latent space rather than in the feature space and develop a novel Latent Convex Hull (LCH) method to provide unexpected recommendations. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed model, which significantly outperforms alternative state-of-the-art unexpected recommendation methods in terms of unexpectedness measures while achieving the same level of accuracy.


1. Introduction

Recommender systems have been playing an important role in the process of information dissemination and online commerce, assisting users in filtering for the best content while shaping their consumption behavior patterns at the same time. However, many classical recommender systems face the problem of a filter bubble (Pariser, 2011; Nguyen et al., 2014), which means that targeted users only get recommendations for a small portion of the available items, and they tend to get more recommendations for the items that they are most familiar with. For example, a Harry Potter fan may feel unsatisfied if the system keeps recommending the Harry Potter series. This type of filter bubble phenomenon has motivated researchers to introduce several evaluation metrics beyond accuracy, including unexpectedness, serendipity, novelty and diversity (Shani and Gunawardana, 2011).

Previous research introduces multiple alternative definitions of these measures, the goal of which is to provide novel, surprising and not previously seen recommendations. For example, in (Adamopoulos and Tuzhilin, 2015), the authors define unexpectedness as the distance of an item from the set of expectations and show that the proposed approach achieves strong recommendation performance. However, one problem with this approach is that the set of expected items is defined in a limited sense as a closure of the set of previously consumed items, while a more comprehensive approach would look into the latent, complex and heterogeneous relations between users, items and entities and formulate unexpectedness accordingly. These relations can be modeled using the concept of a Heterogeneous Information Network (HIN) (Sun and Han, 2013) that contains multiple types of objects and multiple types of links within a single network.

However, to compute unexpectedness, it is hard to define the distance from the set of expected items in the HIN due to its discrete and complicated structure. In addition, latent relations between users and items are missing from this model, as it is not sufficient to accurately provide recommendations using only the explicit relations between users and items (Zhang et al., 2017).

Therefore, to address these problems, we propose to define unexpectedness as the distance of an item from the expected item set not in the feature space (features and attributes of users and items) but in the latent space (feature and attribute embeddings). We utilize the heterogeneous random walk mechanism to obtain the network embeddings of the HIN. Then we define unexpectedness as the Euclidean distance from the item embedding to the latent convex hull of the embeddings of the expected items. This approach has several advantages, including the guaranteed feasibility of the recommendation optimization and its match with the cognitive theory of conceptual spaces (Gärdenfors, 2004), as we describe in detail in Section 3.

In this paper, we make the following contributions:

(1) We propose to apply a deep-learning based method to the unexpected recommendation task and define unexpectedness in the latent space rather than in the feature space.

(2) We propose to formulate the expectations of a user as a convex hull generated by all the previously consumed items in the latent space. This convex hull approach has strong theoretical foundations, as shown in (Zenker and Gärdenfors, 2015).

(3) Unlike the previously proposed approaches, we model the set of expectations in the feature space of a HIN, which captures the complex and heterogeneous relations between users, items and entities. We subsequently map the HIN into the latent space and construct the convex hull from the latent structure for the unexpectedness computation.

(4) We conduct extensive experiments on two real-world datasets and show that the proposed method consistently and significantly outperforms the other baseline models and the state-of-the-art unexpected recommendation algorithms in terms of various unexpectedness metrics, while achieving the same level of recommendation accuracy. In addition, our method achieves higher maximum convex hull coverage than the baseline models and therefore recommends more semantically diverse items to users.

The rest of the paper is organized as follows. We discuss related work in Section 2 and present our proposed model in Section 3. The experimental design on the Yelp and TripAdvisor datasets is described in Section 4, and the results and discussion are presented in Section 5. Finally, Section 6 summarizes our contributions and concludes the paper.

2. Related Work

In this section, we introduce the prior literature on unexpectedness, alternative definitions of the expected set and state-of-the-art unexpected recommendation algorithms, while pointing out the limitations of the previous models and comparing them with our proposed approach. We also describe prior studies on heterogeneous information networks at the end of this section.

Researchers have addressed the importance of incorporating unexpectedness in recommendations (Kotkov et al., 2018), which can help overcome the overspecialization problem (Adamopoulos and Tuzhilin, 2015; Iaquinta et al., 2010), broaden user preferences (Herlocker et al., 2004; Zhang et al., 2012; Zheng et al., 2015) and increase user satisfaction (Adamopoulos and Tuzhilin, 2015; Zhang et al., 2012; Lu et al., 2012). Note that the concepts of unexpectedness and serendipity are closely related to each other, but still differ in terms of definition and calculation. In particular, serendipity involves a positive emotional response of the user to a previously unknown item and measures how surprising these recommendations are (Shani and Gunawardana, 2011).

Unexpectedness, on the other hand, measures the recommendation to users of those items that are not included in their consideration sets and depart from what they would expect from the recommender system. (Kontonasios et al., 2012) surveys different methods for discovering unexpected patterns using frequent itemsets, tiles, association rules and classification rules; (Murakami et al., 2007; Ge et al., 2010) define unexpectedness as the deviation of a recommender system from the results obtained from a primitive prediction model; (Akiyama et al., 2010) defines unexpectedness as an unlikely combination of item features; and (Adamopoulos and Tuzhilin, 2015) defines unexpectedness as the distance of an item from the set of expected items. However, these definitions do not consider the entity information in user reviews that bridges the expectations between users and items, which is crucial in modeling the expectations and preferences of users, as pointed out in (Yu et al., 2013; Shi et al., 2018; Yu et al., 2014). Besides, these definitions determine unexpectedness in the feature space, so they fail to capture the latent semantic relationships between users and items. In addition, these definitions focus only on the explicit correlations between users and items, without considering that the user could infer expectations from historical behaviors. To address all these limitations, in this paper we propose to define unexpectedness as the distance of an item from the closure of the set of expected items for the user in the latent space.

Researchers have also proposed various unexpected recommendation models, including Serendipitous Personalized Ranking (Lu et al., 2012), which extends traditional personalized ranking methods by considering item popularity in AUC optimization; Auralist (Zhang et al., 2012), which balances the desired goals of accuracy, diversity, novelty and serendipity simultaneously; and HOM-LIN (Adamopoulos and Tuzhilin, 2015), which defines unexpectedness as the distance between items and the expected set of users. However, as pointed out before, these models consider neither the latent interactions between users and items nor the complex and heterogeneous relations among heterogeneous entities, while the proposed Latent Convex Hull approach fills this gap and achieves significantly better performance. We compare the proposed model with the literature in Table 1.

Another body of related work revolves around utilizing heterogeneous information networks (Shi et al., 2017) and their embeddings for modeling complex heterogeneous context information and providing better recommendations. (Shi et al., 2018) transforms the learned node embeddings with a set of fusion functions and subsequently integrates them into an extended matrix factorization model for the rating prediction task. (Han et al., 2018) extracts different aspect-level similarity matrices of users and items through a heterogeneous information network, and then feeds them into a deep neural network to learn aspect-level latent factors for recommendation. (Dong et al., 2017) formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embedding. However, all these previous works focus only on the usefulness and accuracy of recommendations, failing to take unexpectedness into account, which is very important according to the prior literature (Kotkov et al., 2018).

Algorithms        | LCH | SPR | Auralist | HOM-LIN | Random
------------------|-----|-----|----------|---------|-------
Latent Embeddings |     |     |          |         |
HIN               |     |     |          |         |
User Reviews      |     |     |          |         |
Domain Knowledge  |     |     |          |         |
Past Transactions |     |     |          |         |
Ratings           |     |     |          |         |

Table 1. Comparison of Unexpected Recommendation Methods

3. Model

In this section, we describe our proposed Latent Convex Hull (LCH) model and unexpected recommendation algorithms. We will introduce the definition, sources, modeling and understanding of the unexpectedness, the setup of the feature space, intuition and advantages of using latent convex hull, the mapping from the feature space to the latent space and the unexpected utility function based on the proposed definition.

3.1. Definition of Unexpectedness

Following the prior literature (Adamopoulos and Tuzhilin, 2015), the definition of unexpectedness starts with the modeling of the "expected set", the set of items that the user has either previously encountered or that are closely related to them. Intuitively, users should have zero unexpectedness with respect to the items they have visited, purchased or rated before. It is worth noticing, however, that the expected items could go beyond these because of the various interactions and relations between users and items. In particular, the set of expected items contains those that were either viewed by the user or could be expected through their complex relations with the items that the user has viewed before. The closure of the "expected" items forms the "expected set" of the user, and we define unexpectedness as the distance of an item from the closure of the set of expected items for the user.

Note that we could define unexpectedness in the feature space or in the latent space using the same approach. However, due to its discrete and complicated structure, it is difficult to model the expected items in the feature space. Therefore, we propose to define unexpectedness in the latent space, as we describe in detail in the following section. We visualize these concepts in the latent space in Figures 1(a) and 1(b).

Figure 1. Visualization: (a) the Latent Space; (b) the Latent Convex Hull

3.2. Feature Space: Heterogeneous Information Network

In order to model the set of expectations of the user, it is important to select an appropriate data structure capturing the expected relations between users, items and entities in the feature space. Intuitively, in the case of restaurant recommendations, a customer might form an expectation of a certain restaurant because the customer (1) has visited that restaurant before, (2) has visited restaurants that are very similar (e.g., of the same franchise or of the same category), (3) has enjoyed the same cuisines served in that restaurant at other places or (4) gets to know that restaurant from friends. This suggests that we should consider not only the direct interactions between users and items, but also the intermediate information from certain attributes and entities simultaneously. To capture the complex and multi-dimensional relations in the data records, we propose to use a heterogeneous information network (HIN) (Sun and Han, 2013) that contains multiple types of objects and multiple types of links in a single network. Specifically, the heterogeneous information network includes users, items, transactions, ratings, entities extracted from reviews and the meta-data information. We link the associated entities with the corresponding users and items in the network. As an example, Figure 2 demonstrates the HIN for the restaurant application and shows the relations between users, items and entities.
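The typed structure described above can be sketched with plain adjacency lists. This is a minimal illustration, not the paper's implementation; all node names and edge types here are invented for the example.

```python
from collections import defaultdict

class HIN:
    """Heterogeneous information network: each node carries a type
    (user / item / entity), and edges may link any two typed nodes."""
    def __init__(self):
        self.node_type = {}           # node -> type
        self.adj = defaultdict(set)   # node -> set of neighbors

    def add_node(self, node, ntype):
        self.node_type[node] = ntype

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def neighbors_of_type(self, node, ntype):
        """N_t(v): neighbors of `node` having type `ntype`."""
        return {n for n in self.adj[node] if self.node_type[n] == ntype}

# toy restaurant example (hypothetical data)
hin = HIN()
hin.add_node("alice", "user")
hin.add_node("cafe_1", "item")
hin.add_node("espresso", "entity")   # entity extracted from a review
hin.add_edge("alice", "cafe_1")      # transaction link
hin.add_edge("cafe_1", "espresso")   # item-entity link
```

Keeping neighbors queryable by type is what later lets a heterogeneous random walk choose which node type to step to next.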

Figure 2. Visualization of HIN

3.3. Latent Space: Network Embeddings

Note that, due to the discrete and complicated structure of the heterogeneous information network, it is very hard to properly define the concept of "unexpectedness" and a distance metric in the feature space. Motivated by the goal of capturing latent semantic interactions between users and items, we introduce a deep-learning based network embedding approach in this section.

To learn effective node representations for the heterogeneous network $G = (V, E)$, following the setup in (Dong et al., 2017), we enable the skip-gram mechanism to maximize the probability of having the heterogeneous context $N_t(v)$, $t \in T_V$, given a node $v$:

$$\arg\max_{\theta} \sum_{v \in V} \sum_{t \in T_V} \sum_{c_t \in N_t(v)} \log p(c_t \mid v; \theta)$$

where $N_t(v)$ denotes the neighborhood of $v$ with the $t$-th type of nodes and $p(c_t \mid v; \theta)$ defines the conditional probability of having a context node $c_t$ given a node $v$. To transform the structure of the heterogeneous information network into skip-grams for optimization, we follow the natural idea of a heterogeneous random walk to generate paths of multiple types of nodes in the network. Specifically, given a heterogeneous information network $G = (V, E)$, we generate the meta-path scheme as a path denoted in the form $V_1 \xrightarrow{R_1} V_2 \xrightarrow{R_2} \cdots \xrightarrow{R_l} V_{l+1}$, wherein $R = R_1 \circ R_2 \circ \cdots \circ R_l$ defines the composite relations between the start and the end of the heterogeneous random walk. The transition probability within each random walk between two nodes is defined as follows:

$$p(v^{i+1} \mid v^i_t) = \begin{cases} \dfrac{\alpha_{t,t'}}{|N_{t'}(v^i_t)|} & \text{if } (v^{i+1}, v^i_t) \in E \text{ and } \phi(v^{i+1}) = t' \\ 0 & \text{otherwise} \end{cases}$$

where $\alpha_{t,t'}$ stands for the transition coefficient between the $t$-th type of node and the $t'$-th type of node. In our user-item-entity heterogeneous information network, we have 6 different transition coefficients, which sum to one, and $|N_{t'}(v^i_t)|$ stands for the number of nodes of type $t'$ in the neighborhood of $v^i_t$. In the heterogeneous information network, we perform the heterogeneous random walk starting from each node iteratively and obtain the collection of meta-paths.

Note that there are several benefits of utilizing heterogeneous random walks over other graph traversal approaches in a HIN. First, heterogeneous random walks are computationally efficient in terms of both space and time requirements. In addition, a heterogeneous random walk increases the effective sampling rate by reusing samples across different source nodes, as it imposes graph connectivity in the sample generation process. Finally, it provides a convenient way to address the heterogeneous influence of different types of nodes and links in the HIN, as we can apply an optimization algorithm to learn the transition coefficients efficiently. In this way, the embeddings of users and items can be obtained by applying the skip-gram mechanism (Mikolov et al., 2013) to the meta-paths of the heterogeneous random walks.

3.4. Latent Convex Hull as Expected Set

As explained before, we choose to take the closure in the latent space rather than in the original feature space. We utilize the well-defined concept of a convex hull as a natural closure of the expected item embeddings. This approach provides the following advantages:

(1) The convexity property guarantees the feasibility of the recommendation as an optimization problem. Note that in our setting, the objective function, i.e., the utility function, is a linear combination of the rating and unexpectedness measures, so the convexity of the "expected set" automatically implies the convexity of the objective function. The domain set of the optimization is a finite set of available items, so by Slater's Condition (Slater, 2014), the primal problem is guaranteed to be feasible.

(2) The convex hull corresponds to the cognitive theory of conceptual spaces (Gärdenfors, 2004): a conceptual space is a geometric structure representing a number of quality dimensions that denote basic features by which concepts and objects can be compared, such as weight, color, taste, temperature, pitch, and the three ordinary spatial dimensions. In the application to unexpected recommender systems, the conceptual space includes the ratings that users give to items and the unexpectedness that measures the familiarity of users with items. According to the research in (Zenker and Gärdenfors, 2015), natural categories are convex regions in conceptual spaces; therefore, it is natural to model the closure as a convex hull following conceptual space theory.

(3) The convex hull can capture semantic interactions between users and items. Compared to the alternative definitions of an expected set, including the Content-Based Similarity and Association Rule Learning approaches, the convex hull can utilize richer information to discover the relationship between users and items more precisely, including the intermediate effect from entities via the convex extension of expectations.

Based on these properties, the proposed Latent Convex Hull model is a strong approach for defining the expected set for users and the unexpectedness based on those expectations.

3.5. Unexpected Recommendation: Latent Convex Hull

Based on the network embeddings and the continuous structure of the latent space, we can now construct the expected set for each user using the proposed definition of unexpectedness. As described in the previous section, the convex hull has certain advantages over other geometric structures for modeling the closure of the expected set, so in this paper we define the unexpectedness between each user/item pair as the distance between the item embedding and the latent convex hull generated from the user embedding and its neighbors. Specifically, we calculate the Euclidean distance from the given point to the boundary of the convex hull of the expected items. This Euclidean distance is well-defined and unique by the hyperplane separation theorem. Note that the unexpectedness metric takes a negative value if the given item is inside the convex hull (which means that the given item may lie deep within the user's expected set and the user could be overfamiliar with that item), and a positive value if it is indeed outside the convex hull. The unexpectedness is formally defined below, where $CH_u$ stands for the latent convex hull generated by the user and $w_i$ for the embedding of item $i$:

$$\text{unexp}(u, i) = \begin{cases} \ \ d\big(w_i, \partial CH_u\big) & \text{if } w_i \notin CH_u \\ -d\big(w_i, \partial CH_u\big) & \text{if } w_i \in CH_u \end{cases}$$
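A signed hull distance of this kind can be sketched with scipy's `ConvexHull`, whose facet equations encode each supporting hyperplane. For points outside the hull, the maximum facet-plane distance used here is exact only when the nearest hull feature is a facet, so treat this as an illustrative approximation rather than the paper's implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def signed_hull_distance(point, expected_embeddings):
    """Signed Euclidean distance from `point` to the convex hull of the
    expected-item embeddings: negative inside the hull, positive outside."""
    hull = ConvexHull(expected_embeddings)
    # each row of hull.equations is [normal, offset], with
    # normal . x + offset <= 0 for interior points
    normals, offsets = hull.equations[:, :-1], hull.equations[:, -1]
    return float(np.max(normals @ point + offsets))

# toy 2-D "expected set": the unit square
square = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
inside = signed_hull_distance(np.array([0.5, 0.5]), square)   # negative
outside = signed_hull_distance(np.array([2.0, 0.5]), square)  # positive
```

The sign convention matches the definition above: items deep inside the user's expected set score negative unexpectedness, items outside score positive.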

Once we have set up the definition of unexpectedness, we can perform unexpected recommendation based on the hybrid utility function:

$$\text{Utility}(u, i) = \text{Rating}(u, i) + \lambda \cdot \text{unexp}(u, i)$$

which incorporates a linear combination of ratings (which stand for usefulness) and unexpectedness. The key idea is that, instead of recommending similar items that users are already familiar with, as classical recommenders do, we wish to recommend unexpected and relevant items that users might not have thought about, but that indeed fit their tastes well. These two adversarial forces work together toward the optimal solution and thus the best performance in terms of both accuracy and unexpectedness measures.
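Ranking by such a hybrid utility is a one-liner once ratings and unexpectedness scores are available. The trade-off weight `lam` and the candidate scores below are illustrative values, not from the paper.

```python
def hybrid_utility(rating, unexpectedness, lam=0.5):
    """Linear hybrid utility: estimated rating plus a weighted
    unexpectedness term (lam is an illustrative trade-off weight)."""
    return rating + lam * unexpectedness

# hypothetical candidates: item -> (predicted rating, unexpectedness)
candidates = {"i1": (4.5, -0.3), "i2": (4.2, 0.4), "i3": (3.0, 0.9)}
ranked = sorted(candidates,
                key=lambda i: hybrid_utility(*candidates[i]),
                reverse=True)
```

Note how "i2" overtakes the higher-rated but overfamiliar "i1" once unexpectedness enters the utility, which is exactly the intended effect of the second term.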

4. Experiments

To validate the superiority of our approach, we conduct extensive experiments on two distinctive real-world datasets and compare our methods with the state-of-the-art baseline recommendation models. In this section, we will introduce the two datasets, evaluation metrics and baseline models. Specifically, our experiments are designed to address the following research questions:

RQ1: How does the proposed definition of the "expected set" perform compared to the alternative definitions?

RQ2: How does our model perform compared to the state-of-the-art unexpected recommendation models?

RQ3: Can our model reach a higher coverage of the latent space compared to other unexpected recommendation models?

4.1. Datasets

We conduct extensive experiments on two real-world datasets to evaluate the performance of our proposed model: the Yelp Challenge Dataset Round 12 (https://www.yelp.com/dataset/challenge), which consists of 5,996,996 reviews from 1,518,169 users on 188,593 businesses on the Yelp platform and also contains the categories of restaurants, the friendships between users and information about time and location; and the TripAdvisor Dataset (http://www.cs.cmu.edu/~jiweil/html/hotel-review.html), which consists of 878,561 reviews from 576,689 users of 3,945 businesses on the TripAdvisor platform. We list the descriptive statistics of these two datasets in Table 2. To address the cold-start issue, we filter out users and items that appear fewer than 5 times in the dataset.
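This kind of minimum-count filtering is usually applied iteratively, since dropping a sparse user can push one of their items below the threshold and vice versa. A minimal sketch, assuming the data is a list of (user, item) review pairs (the paper does not specify whether the filter is applied once or to a fixed point):

```python
from collections import Counter

def filter_sparse(reviews, min_count=5):
    """Drop reviews whose user or item appears fewer than `min_count`
    times; repeat until the surviving counts stabilize."""
    while True:
        users = Counter(u for u, _ in reviews)
        items = Counter(i for _, i in reviews)
        kept = [(u, i) for u, i in reviews
                if users[u] >= min_count and items[i] >= min_count]
        if len(kept) == len(reviews):   # fixed point reached
            return kept
        reviews = kept
```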

Dataset                      Yelp       TripAdvisor
Number of Reviews            5,996,996  878,561
Number of Unique Businesses  188,593    3,945
Number of Unique Users       1,518,169  576,689
Average Reviews Per User     4          2
Average Reviews Per Business 32         222

Table 2. Descriptive Statistics of Two Datasets

4.2. Evaluation Metrics

To compare the performance of our proposed unexpected recommendation model and the baseline models, we follow (Herlocker et al., 2004) and measure recommendation performance in terms of the RMSE, MAE, Precision@N and Recall@N metrics. Besides, to measure unexpected recommendation performance, we also compute the Serendipity, Diversity and Coverage performance metrics following their definitions in (Ge et al., 2010): Serendipity = (RS & PM) / PM, Diversity = (RS & PM & USEFUL) / PM, where RS stands for the items recommended by the selected model, PM stands for the recommendation results of a primitive prediction algorithm (usually a linear regression) and USEFUL stands for the items whose utility is above a certain threshold. Coverage (Ge et al., 2010) is computed as the percentage of distinct recommended items over all the distinct items in the dataset.
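Of these metrics, Coverage is the simplest to state in code. A minimal sketch over per-user recommendation lists (the set-based Serendipity and Diversity formulas above would follow the same pattern with the PM and USEFUL sets supplied):

```python
def coverage(recommended_lists, all_items):
    """Coverage (Ge et al., 2010): fraction of distinct catalog items
    that appear in at least one user's recommendation list."""
    recommended = set().union(*map(set, recommended_lists))
    return len(recommended & set(all_items)) / len(set(all_items))

# hypothetical top-2 lists for two users over a 4-item catalog
cov = coverage([["a", "b"], ["b", "c"]], ["a", "b", "c", "d"])
```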

4.3. Baseline Models

To validate the effectiveness of our proposed LCH modeling of the expected set, we compare it with alternative approaches in terms of both the expected set and the algorithms. The baseline models for computing the expected set include Base, CBS and ARL, as described in (Adamopoulos and Tuzhilin, 2015). In addition, we also compare to the RO (Rating Only) model that does not include the unexpectedness component.

  • Base. The set of expected recommendations consists only of the items that the user has already rated. In particular, in our datasets, an item is also considered expected if the user has already rated a certain item that belongs to the same franchise or brand.

  • CBS (Content-Based Similarity). The set of expected recommendations consists of the items in the Base set and the items that are sufficiently correlated with them, as measured by the semantic similarity between review texts.

  • ARL (Association Rule Learning). The set of expected recommendations consists of the items in the Base set and the items that are closely related to those in the Base set. Specifically, two restaurants are related if they are in the same category, overlap in more than half of their cuisines or overlap in more than half of their customers.

Besides, we also implement several state-of-the-art unexpected recommendation models and compare their performance with LCH in terms of unexpectedness, serendipity, diversity and the convex hull coverage in the latent space. The baseline models include

  • SPR (Lu et al., 2012). Serendipitous Personalized Ranking is a simple and effective method for serendipitous item recommendation that extends traditional personalized ranking methods by considering item popularity in AUC optimization, which makes the ranking sensitive to the popularity of negative examples.

  • Auralist (Zhang et al., 2012). Auralist is a personalized recommendation system that balances the desired goals of accuracy, diversity, novelty and serendipity simultaneously. Specifically, for music recommendation, the authors combine Artist-based LDA recommendation with two novel components: Listener Diversity and Musical Bubbles. We adjust the algorithm to fit our restaurant and hotel recommendation scenario.

  • HOM-LIN (Adamopoulos and Tuzhilin, 2015). This is the state-of-the-art unexpected recommendation algorithm, where the authors propose to define unexpectedness as the distance between items and the expected set of users. In our experiments, we select HOM-LIN as the baseline model, as it obtains the best performance compared to the other variations according to that paper.

  • Random. Random is the baseline model where we randomly recommend items to users without considering any information about ratings, unexpectedness, utility and so on.

5. Results

In this section, we report the experiment results and give answers to the research questions in Section 4.

Dataset Algorithm Expected Set RMSE MAE Precision@5 Recall@5 Unexpectedness Serendipity Diversity Coverage
Yelp FM RO 0.9197 0.6815 0.7699 0.6123 -0.0326 0.0978 0.0135 0.5369
CH 0.9233 0.6860 0.7642 0.6137 0.0377* 0.2203* 0.1122* 0.5443
Base 0.9389 0.7178 0.7371 0.5880 -0.0002 0.1223 0.0808 0.5430
ARL 0.9364 0.7607 0.7083 0.5833 0.0102 0.1030 0.0906 0.5482
CBS 0.9384 0.7552 0.7282 0.5830 0.0079 0.1232 0.0928 0.5482
CoCluster RO 0.9509 0.7153 0.7239 0.5909 -0.0369 0.1819 0.0508 0.5830
CH 0.9631 0.6968 0.7237 0.5967 0.0447* 0.2278* 0.1224* 0.7601*
Base 0.9795 0.7334 0.7211 0.5941 0.0003 0.1370 0.0908 0.5393
ARL 0.9764 0.7607 0.7083 0.5833 0.0104 0.1030 0.0925 0.5482
CBS 1.0425 0.8069 0.7359 0.5490 0.0104 0.1204 0.0925 0.5482
SVD RO 0.9132 0.7069 0.7680 0.5983 -0.0346 0.1294 0.0395 0.5424
CH 0.9263 0.7094 0.7639 0.6112 0.0351* 0.2326* 0.1126* 0.5351
Base 0.9479 0.7433 0.7640 0.5755 0.0020 0.0999 0.0908 0.5424
ARL 0.9359 0.7303 0.7605 0.5888 0.0079 0.0566 0.0987 0.5424
CBS 1.0152 0.7803 0.7632 0.5344 0.0056 0.0876 0.0728 0.5535
NMF RO 0.9526 0.7171 0.7249 0.5852 -0.0350 0.1909 0.0547 0.5959
CH 0.9632 0.6973 0.7165 0.5852 0.0447* 0.2343* 0.1206* 0.7638
Base 0.9889 0.7772 0.7171 0.5788 0.0037 0.1403 0.0699 0.5830
ARL 0.9793 0.7636 0.7105 0.5795 0.0102 0.1029 0.0728 0.5774
CBS 1.0388 0.8040 0.7322 0.5442 0.0125 0.1207 0.0896 0.5482
KNN RO 0.9123 0.7048 0.7688 0.6085 -0.0336 0.0977 0.0130 0.5369
CH 0.9251 0.7060 0.7632 0.6136 0.0367* 0.2103* 0.1022* 0.5443
Base 0.9476 0.7443 0.7685 0.5805 -0.0004 0.1043 0.0834 0.5442
ARL 0.9352 0.7300 0.7702 0.5985 0.0079 0.0611 0.0856 0.5480
CBS 1.0143 0.7794 0.7710 0.5451 0.0102 0.0885 0.0724 0.5461
TripAdvisor FM RO 1.1105 0.8340 0.6768 0.9590 -0.0922 0.3979 0.0017 0.1798
CH 1.1275 0.8445 0.7040 0.9656 0.0643* 0.4631* 0.0493* 0.1798
Base 1.1550 0.8452 0.6772 0.8715 -0.0266 0.4591 0.0301 0.1807
ARL 1.1512 0.8323 0.6803 0.9001 0.0097 0.4501 0.0365 0.1802
CBS 1.1343 0.8392 0.6809 0.9065 0.0122 0.4485 0.0332 0.1802
CoCluster RO 1.0178 0.7643 0.6845 0.9732 -0.0934 0.3973 0.0015 0.1855
CH 1.0511 0.8048 0.6947 0.9692 0.0652* 0.4619* 0.0471* 0.1798
Base 1.0657 0.8452 0.6917 0.8715 -0.0266 0.4393 0.0210 0.1807
ARL 1.0577 0.8220 0.6801 0.9103 0.0179 0.4401 0.0371 0.1802
CBS 1.0573 0.8292 0.6902 0.9077 0.0122 0.4423 0.0302 0.1802
SVD RO 0.9868 0.7533 0.7210 0.9465 -0.0931 0.3967 0.0006 0.1798
CH 1.0214 0.7890 0.7182 0.8911 0.0644* 0.4621* 0.0499* 0.1798
Base 1.0368 0.8216 0.7087 0.8099 -0.0262 0.4594 0.0298 0.1807
ARL 1.0354 0.8079 0.6992 0.8227 0.0009 0.4499 0.0333 0.1802
CBS 1.0345 0.7998 0.6999 0.8385 0.0207 0.4487 0.0366 0.1802
NMF RO 1.0241 0.7709 0.6850 0.9681 -0.0927 0.3979 0.0010 0.1798
CH 1.0575 0.8111 0.6869 0.9655 0.0644* 0.4627* 0.0499* 0.1798
Base 1.0672 0.8463 0.6922 0.8723 -0.0270 0.4598 0.0261 0.1807
ARL 1.0552 0.8323 0.6902 0.9021 0.0109 0.4501 0.0365 0.5480
CBS 1.0543 0.8392 0.6969 0.9015 0.0222 0.4485 0.0334 0.5461
KNN RO 0.9940 0.7531 0.6969 0.9689 -0.0933 0.3979 0.0019 0.1798
CH 1.0275 0.7945 0.7040 0.9256 0.0643* 0.4631* 0.0492* 0.1798
Base 1.0434 0.8279 0.7012 0.8318 -0.0266 0.4593 0.0200 0.1802
ARL 1.0352 0.8000 0.7002 0.8985 0.0019 0.4511 0.0256 0.1802
CBS 1.0343 0.8094 0.7010 0.8451 0.0002 0.4585 0.0224 0.1802

Table 3. Validation of Unexpected Recommendation on the two datasets. "RO": Rating Only, "CH": Latent Convex Hull, "CBS": Content-Based Similarity, "ARL": Associate Rule Learning. "*" stands for 95% statistical significance.

5.1. RQ1: Comparison of Expected Sets

To validate our proposed definition of unexpectedness, we compare the recommendation performance using the alternative definitions of expected sets introduced in (Adamopoulos and Tuzhilin, 2015) and the corresponding unexpectedness distances. We also include the results of standard recommendation, i.e., using ratings only (RO). In addition, to verify the robustness of the experimental settings, we conduct cross-validation experiments using five popular collaborative filtering algorithms, including the k-Nearest Neighbors approach (KNN) (Altman, 1992), the Singular Value Decomposition approach (SVD) (Sarwar et al., 2000), the Co-Clustering approach (George and Merugu, 2005), the Non-Negative Matrix Factorization approach (NMF) (Lee and Seung, 2001) and the Factorization Machine approach (FM) (Rendle, 2010). We conduct these experiments on two real-world datasets, resulting in 400 experiments in total.

The performance results are reported in Table 3 and also in Figures 3 and 4, which are based on Table 3. The results show that our proposed model consistently and significantly outperforms the baseline models over all the experimental settings. In particular, our model significantly increases the serendipity, unexpectedness and diversity measures, while still performing as well as the baseline models in terms of accuracy measures, including RMSE, MAE, Precision and Recall. More specifically, we observe over a 100% increase in unexpectedness, an 80% increase in serendipity and a 20% increase in diversity on average, while the differences between the proposed and baseline models are statistically insignificant in terms of the RMSE, MAE, Precision and Recall measures. To sum up, the answer to RQ1 is that our proposed definition of the "expected set" using the Latent Convex Hull approach performs consistently and significantly better than all other baseline methods.

It is also worth noting that some of the baseline models obtain negative values of unexpectedness, as reported in Table 3. Based on our definition of unexpectedness in the previous section, the metric takes a negative value if the given item is inside the convex hull (which means that the user could be overfamiliar with that item), and a positive value if it lies outside the convex hull. These negative values indicate that the alternative definitions of the expected set suffer from the filter bubble problem. Our proposed approach, however, achieves superior performance in terms of unexpectedness across all the experimental settings, which supports the claim that it is indeed a powerful tool for addressing the filter bubble problem.

Figure 3. Recommendation Performance on Yelp: (a) Unexpectedness, (b) Serendipity, (c) Diversity
Figure 4. Recommendation Performance on TripAdvisor: (a) Unexpectedness, (b) Serendipity, (c) Diversity

5.2. RQ2: One-Time Recommendation

To show that our model can indeed provide more unexpected recommendations than the state-of-the-art methods, we provide a set of one-time recommendations for all the users in the dataset and compare the unexpectedness performance. In particular, each user is recommended a set of 10 items based on their past transactions, and we use the same measures as in the previous section to evaluate the unexpected recommendation performance. The experimental results are reported in Tables 4 and 5, which show that our proposed model consistently and significantly outperforms all other baselines in terms of the Coverage, Unexpectedness, Serendipity and Diversity measures. To sum up, the answer to RQ2 is that our proposed LCH model performs consistently and significantly better than all the other state-of-the-art unexpected recommendation models.
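The recommendation step described above, ranking items by a utility that combines predicted relevance with unexpectedness, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the trade-off weight `alpha` and the input scores are hypothetical.

```python
import numpy as np

def recommend_top_k(pred_ratings, unexp_scores, alpha=0.5, k=10):
    """Rank items by a hybrid utility: a linear combination of the
    predicted rating and the unexpectedness score (alpha is a
    hypothetical trade-off weight, not a value from the paper)."""
    utility = pred_ratings + alpha * unexp_scores
    return np.argsort(-utility)[:k]  # indices of the k highest-utility items

# Toy scores for four candidate items.
ratings = np.array([4.5, 3.0, 4.0, 2.0])
unexp   = np.array([-0.2, 0.8, 0.1, 0.9])
print(recommend_top_k(ratings, unexp, alpha=1.0, k=2))  # → [0 2]
```

With alpha = 0 this reduces to pure rating-based ranking; larger alpha pushes the recommendations further outside the user's convex hull of expectations.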

Algorithms Coverage Unexpectedness Serendipity Diversity
LCH 0.3524 0.4998 1.0 0.6208
SPR 0.1697 0.4668 0.972 0.4532
Auralist 0.1457 0.4663 0.9637 0.6047
HOM-LIN 0.1365 0.4251 0.8629 0.6000
Random 0.1457 0.3733 0.8848 0.5763

Table 4. Comparison of Unexpected Performance: Yelp
Algorithms Coverage Unexpectedness Serendipity Diversity
LCH 0.2597 0.5582 0.9969 0.864
SPR 0.1834 0.4739 0.9593 0.8175
Auralist 0.1834 0.4728 0.9562 0.8553
HOM-LIN 0.2144 0.4722 0.9629 0.8117
Random 0.2173 0.3733 0.9468 0.835

Table 5. Comparison of Unexpected Performance: TripAdvisor

5.3. RQ3: Maximum Convex Hull

To answer RQ3 and compare the long-term unexpectedness performance of the recommendation models, we first need to define the concept of the maximum convex hull. The Maximum Convex Hull for each user is defined as the convex hull of all the items in the dataset whose utility is above a certain threshold. Intuitively, the maximum convex hull constitutes the upper bound of the expected set for the user. The coverage percentage of the convex hull is computed as the ratio of the area of expectations to that of the maximum convex hull.
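The coverage percentage defined above can be computed directly from hull volumes (areas in two dimensions). A minimal sketch with 2-d toy points, assuming scipy's ConvexHull in place of the paper's latent-space construction:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_coverage(expected_points, max_hull_points):
    """Coverage = volume (area in 2-d) of the user's expected-set hull
    divided by the volume of the maximum convex hull.
    Note: scipy's ConvexHull.volume is the enclosed area for 2-d input."""
    return ConvexHull(expected_points).volume / ConvexHull(max_hull_points).volume

# Maximum convex hull: the unit square; expected set: half of it.
max_hull = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
expected = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)  # a triangle
print(hull_coverage(expected, max_hull))  # → 0.5
```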

As part of our experiment, we repeat the item recommendation for users multiple times while observing how many novel recommendations are generated during the iterative process. We are also interested in whether our proposed unexpected recommendation approach can reach the upper bound of the maximum convex hull for each user and how quickly it expands the area of the expected set. Furthermore, once an item is recommended in one iteration, it becomes expected in the next iteration. We compare the coverage of the Maximum Convex Hull after 1, 5, 10, 20 and 50 iterations, and the performance results are shown in Tables 6 and 7 and Figure 5. Figure 5 shows that our LCH method significantly outperforms the baseline models in terms of convex hull coverage across all iteration counts. Moreover, the convergence rate to the maximum convex hull is much faster for the Yelp dataset than for the TripAdvisor dataset. This happens because we have richer information for the restaurant recommendation: apart from the user-item transactions, ratings and reviews, we also have information about restaurant categories, cuisines and the user friendship network, while we only have reviews and ratings for the TripAdvisor dataset.
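The iterative protocol above, where each iteration's recommendations are absorbed into the expected set, which then grows toward the maximum convex hull, can be sketched as follows. The recommender here is a random sampler purely as a stand-in; a real model would pick high-utility (unexpected yet relevant) items, and the 2-d random embeddings are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
items = rng.random((200, 2))            # stand-in 2-d item embeddings
max_hull_volume = ConvexHull(items).volume
expected = items[:5].copy()             # initial expected set

coverages = []
for it in range(1, 6):
    # Stand-in recommender: sample 10 items at random.
    recs = rng.choice(len(items), size=10, replace=False)
    # Items recommended in this iteration become expected in the next.
    expected = np.vstack([expected, items[recs]])
    coverages.append(ConvexHull(expected).volume / max_hull_volume)
    print(f"iteration {it}: coverage = {coverages[-1]:.3f}")
```

Since points are only ever added to the expected set, the hull volume, and hence the coverage, is non-decreasing and bounded above by 1, mirroring the convergence toward the maximum convex hull reported in Tables 6 and 7.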

To sum up, we have answered all three research questions posed in Section 4 and conclude that the proposed LCH model achieves the best performance in unexpected recommendation compared to all the other state-of-the-art baseline models.

Iterations 1 5 10 20 50
LCH 0.1894 0.6241 0.8526 0.9558 0.9917
SPR 0.1667 0.2677 0.3728 0.4460 0.5072
Auralist 0.1751 0.3843 0.4812 0.5792 0.6890
HOM-LIN 0.1554 0.3660 0.5096 0.6072 0.6972
Random 0.1219 0.2311 0.3432 0.3773 0.4553

Table 6. Comparison of Maximum Convex Hull Coverage: Yelp
Iterations 1 5 10 20 50
LCH 0.1101 0.4434 0.5691 0.7257 0.8477
SPR 0.1081 0.1678 0.2374 0.3798 0.4989
Auralist 0.1082 0.1765 0.2558 0.4002 0.5152
HOM-LIN 0.1054 0.1781 0.2691 0.4094 0.5333
Random 0.1047 0.1087 0.1772 0.2635 0.3859

Table 7. Comparison of Maximum Convex Hull Coverage: TripAdvisor
(a) Yelp
(b) TripAdvisor
Figure 5. Comparison of Latent Convex Hull Coverage

6. Conclusion

In this paper, we propose a novel approach to providing unexpected and useful recommendations based on the concept of the latent convex hull, which constitutes the convex closure of the set of items expected by the user. We define unexpectedness as the distance in the latent space between an item and this closure of expected items, define the utility as a linear combination of unexpectedness and ratings, and recommend items to users based on this utility measure. Furthermore, we demonstrate that the proposed approach consistently and significantly outperforms the baseline models in terms of the unexpectedness, serendipity, diversity and coverage measures, which supports the validity and superiority of the LCH model.

The contributions of this paper are threefold. First, we propose a novel definition of unexpectedness based on the latent convex hull, which captures the latent relationships between users and items via embedding techniques and heterogeneous information networks. Note that we define unexpectedness in the latent space, as opposed to the feature space, as has been done in all previously proposed definitions of unexpectedness. Second, we propose an unexpected recommendation model based on this novel definition of unexpectedness; specifically, its hybrid utility function is a linear combination of unexpectedness and usefulness. Third, we conduct extensive experiments and show that our proposed model consistently and significantly outperforms the baseline models in terms of the serendipity, unexpectedness and diversity metrics, while achieving the same level of accuracy in terms of the RMSE, MAE, Precision and Recall measures. We also show that our proposed model approaches the Maximum Convex Hull significantly faster than the other models.

As future work, we plan to conduct live experiments in a real business environment in order to further evaluate the effectiveness of unexpected recommendations and to analyze both their qualitative and quantitative aspects in a traditional online retail setting, especially through A/B testing. Moreover, we will further explore the convexity property of users' expectations, which was introduced in Section 3. Specifically, we plan to draw on cognitive psychology and field experiments to dig deeper into the theory. Finally, we plan to further explore the interplay of unexpectedness and relevance and investigate how to incorporate unexpectedness into deep-learning-based recommender systems.

References

  • Adamopoulos and Tuzhilin (2015) Panagiotis Adamopoulos and Alexander Tuzhilin. 2015. On unexpectedness in recommender systems: Or how to better expect the unexpected. ACM Transactions on Intelligent Systems and Technology (TIST) 5, 4 (2015), 54.
  • Akiyama et al. (2010) Takayuki Akiyama, Kiyohiro Obara, and Masaaki Tanizaki. 2010. Proposal and Evaluation of Serendipitous Recommendation Method Using General Unexpectedness.. In PRSAT@ RecSys. 3–10.
  • Altman (1992) Naomi S Altman. 1992. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician 46, 3 (1992), 175–185.
  • Dong et al. (2017) Yuxiao Dong, Nitesh V Chawla, and Ananthram Swami. 2017. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 135–144.
  • Gärdenfors (2004) Peter Gärdenfors. 2004. Conceptual spaces: The geometry of thought. MIT press.
  • Ge et al. (2010) Mouzhi Ge, Carla Delgado-Battenfeld, and Dietmar Jannach. 2010. Beyond accuracy: evaluating recommender systems by coverage and serendipity. In Proceedings of the fourth ACM conference on Recommender systems. ACM, 257–260.
  • George and Merugu (2005) Thomas George and Srujana Merugu. 2005. A scalable collaborative filtering framework based on co-clustering. In Data Mining, Fifth IEEE international conference on. IEEE, 4–pp.
  • Han et al. (2018) Xiaotian Han, Chuan Shi, Senzhang Wang, S Yu Philip, and Li Song. 2018. Aspect-Level Deep Collaborative Filtering via Heterogeneous Information Networks.. In IJCAI. 3393–3399.
  • Herlocker et al. (2004) Jonathan L Herlocker, Joseph A Konstan, Loren G Terveen, and John T Riedl. 2004. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 5–53.
  • Iaquinta et al. (2010) Leo Iaquinta, Marco de Gemmis, Pasquale Lops, Giovanni Semeraro, and Piero Molino. 2010. Can a recommender system induce serendipitous encounters? In E-commerce. InTech.
  • Kontonasios et al. (2012) Kleanthis-Nikolaos Kontonasios, Eirini Spyropoulou, and Tijl De Bie. 2012. Knowledge discovery interestingness measures based on unexpectedness. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2, 5 (2012), 386–399.
  • Kotkov et al. (2018) Denis Kotkov, Joseph A Konstan, Qian Zhao, and Jari Veijalainen. 2018. Investigating Serendipity in Recommender Systems Based on Real User Feedback. (2018).
  • Lee and Seung (2001) Daniel D Lee and H Sebastian Seung. 2001. Algorithms for non-negative matrix factorization. In Advances in neural information processing systems. 556–562.
  • Lu et al. (2012) Qiuxia Lu, Tianqi Chen, Weinan Zhang, Diyi Yang, and Yong Yu. 2012. Serendipitous personalized ranking for top-n recommendation. In Web Intelligence and Intelligent Agent Technology (WI-IAT), 2012 IEEE/WIC/ACM International Conferences on, Vol. 1. IEEE, 258–265.
  • Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111–3119.
  • Murakami et al. (2007) Tomoko Murakami, Koichiro Mori, and Ryohei Orihara. 2007. Metrics for evaluating the serendipity of recommendation lists. In Annual Conference of the Japanese Society for Artificial Intelligence. Springer, 40–46.
  • Nguyen et al. (2014) Tien T Nguyen, Pik-Mai Hui, F Maxwell Harper, Loren Terveen, and Joseph A Konstan. 2014. Exploring the filter bubble: the effect of using recommender systems on content diversity. In Proceedings of the 23rd international conference on World wide web. ACM, 677–686.
  • Pariser (2011) Eli Pariser. 2011. The filter bubble: How the new personalized web is changing what we read and how we think. Penguin.
  • Rendle (2010) Steffen Rendle. 2010. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on. IEEE, 995–1000.
  • Sarwar et al. (2002) Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2002. Incremental singular value decomposition algorithms for highly scalable recommender systems. Citeseer.
  • Shani and Gunawardana (2011) Guy Shani and Asela Gunawardana. 2011. Evaluating recommendation systems. In Recommender systems handbook. Springer, 257–297.
  • Shi et al. (2018) Chuan Shi, Binbin Hu, Xin Zhao, and Philip Yu. 2018. Heterogeneous Information Network Embedding for Recommendation. IEEE Transactions on Knowledge and Data Engineering (2018).
  • Shi et al. (2017) Chuan Shi, Yitong Li, Jiawei Zhang, Yizhou Sun, and S Yu Philip. 2017. A survey of heterogeneous information network analysis. IEEE Transactions on Knowledge and Data Engineering 29, 1 (2017), 17–37.
  • Slater (2014) Morton Slater. 2014. Lagrange multipliers revisited. In Traces and Emergence of Nonlinear Programming. Springer, 293–306.
  • Sun and Han (2013) Yizhou Sun and Jiawei Han. 2013. Mining heterogeneous information networks: a structural analysis approach. Acm Sigkdd Explorations Newsletter 14, 2 (2013), 20–28.
  • Yu et al. (2014) Xiao Yu, Xiang Ren, Yizhou Sun, Quanquan Gu, Bradley Sturt, Urvashi Khandelwal, Brandon Norick, and Jiawei Han. 2014. Personalized entity recommendation: A heterogeneous information network approach. In Proceedings of the 7th ACM international conference on Web search and data mining. ACM, 283–292.
  • Yu et al. (2013) Xiao Yu, Xiang Ren, Yizhou Sun, Bradley Sturt, Urvashi Khandelwal, Quanquan Gu, Brandon Norick, and Jiawei Han. 2013. Recommendation in heterogeneous information networks with implicit user feedback. In Proceedings of the 7th ACM conference on Recommender systems. ACM, 347–350.
  • Zenker and Gärdenfors (2015) Frank Zenker and Peter Gärdenfors. 2015. Applications of conceptual spaces. Cited on (2015), 25.
  • Zhang et al. (2017) Shuai Zhang, Lina Yao, and Aixin Sun. 2017. Deep learning based recommender system: A survey and new perspectives. arXiv preprint arXiv:1707.07435 (2017).
  • Zhang et al. (2012) Yuan Cao Zhang, Diarmuid Ó Séaghdha, Daniele Quercia, and Tamas Jambor. 2012. Auralist: introducing serendipity into music recommendation. In Proceedings of the fifth ACM international conference on Web search and data mining. ACM, 13–22.
  • Zheng et al. (2015) Qianru Zheng, Chi-Kong Chan, and Horace HS Ip. 2015. An unexpectedness-augmented utility model for making serendipitous recommendation. In Industrial Conference on Data Mining. Springer, 216–230.