A novel graph-based model for hybrid recommendations in cold-start scenarios

08/31/2018 ∙ by Cesare Bernardis, et al. ∙ Politecnico di Milano

Cold-start is a very common and still open problem in the Recommender Systems literature. Since cold-start items do not have any interaction, collaborative algorithms are not applicable. One of the main strategies is to use pure or hybrid content-based approaches, which usually yield lower recommendation quality than collaborative ones. Some techniques to optimize the performance of this type of approach have been studied in the recent past. One of them is called feature weighting, which assigns to every feature a real value, called weight, that estimates its importance. Statistical techniques for feature weighting commonly used in Information Retrieval, like TF-IDF, have been adapted for Recommender Systems, but they often do not provide sufficient quality improvements. More recent approaches, FBSM and LFW, estimate weights by leveraging collaborative information via machine learning, in order to learn the importance of a feature based on other users' opinions. This type of model has shown promising results compared to the classic statistical analyses cited previously. We propose a novel graph-based, feature-based machine learning model to face the cold-start item scenario, learning the relevance of features from the probabilities of item-based collaborative filtering algorithms.


1. Graph-based models

Define D as the dataset of a Recommender System, composed of the set of users U, the set of items I, the set of item features F and all the relations among them. Defining V = U ∪ I ∪ F as the set of vertices, E as the set of edges, and assigning to every edge a weight equal to the value of the relation between the nodes it connects, we can represent D with a weighted tripartite graph G = (V, E). Recommendations on D can be provided by exploiting random walks over G, accomplished by starting on a vertex and randomly choosing one of its neighbors at each step. We can represent G with a square adjacency matrix A, where every entry a_uv represents the weight of the edge that connects node u to node v. Normalizing A row-wise, we can compute the probability, or Markovian, matrix P, where p_uv represents the probability of moving to node v when standing in node u. To get the probabilities relative to random walks of length k, we can raise P to the power of k. Finally, recommendation lists are provided by selecting, for each user, the items with the highest probabilities. It has been shown in the literature that short random walks achieve the best performance in most situations (Cooper et al., 2014). In particular, if we consider paths of length 3 over G, we can distinguish two types, starting from a user node:

  • user-item-user-item: a collaborative path that exploits only interactions to reach the destination. Its results are obtained by removing the edges from items to features before raising P to the third power. This approach has been called P³ (Cooper et al., 2014).

  • user-item-feature-item: a content-based path that exploits item features instead of other users' interactions to reach the destination. Its results are obtained by removing the edges from items to users before raising P to the third power.
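The two path types can be sketched with a small NumPy example. The toy matrices are illustrative, and renormalizing the rows after removing edges is an implementation choice not spelled out in the text:

```python
import numpy as np

# Hypothetical toy data: 2 users, 3 items, 2 features.
URM = np.array([[1, 1, 0],
                [0, 1, 1]], dtype=float)   # user-item interactions
ICM = np.array([[1, 0],
                [1, 1],
                [0, 1]], dtype=float)      # item-feature assignments

n_u, n_i = URM.shape
n_f = ICM.shape[1]
n = n_u + n_i + n_f

# Adjacency matrix A of the tripartite graph, node order: users | items | features.
A = np.zeros((n, n))
A[:n_u, n_u:n_u + n_i] = URM              # user -> item edges
A[n_u:n_u + n_i, :n_u] = URM.T            # item -> user edges
A[n_u:n_u + n_i, n_u + n_i:] = ICM        # item -> feature edges
A[n_u + n_i:, n_u:n_u + n_i] = ICM.T      # feature -> item edges

def row_normalize(M):
    s = M.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0                        # leave all-zero rows untouched
    return M / s

P = row_normalize(A)                       # Markovian matrix of the full graph

# Collaborative path: drop item->feature edges, then take 3 steps.
A_collab = A.copy()
A_collab[n_u:n_u + n_i, n_u + n_i:] = 0
P_collab3 = np.linalg.matrix_power(row_normalize(A_collab), 3)

# Content-based path: drop item->user edges, then take 3 steps.
A_content = A.copy()
A_content[n_u:n_u + n_i, :n_u] = 0
P_content3 = np.linalg.matrix_power(row_normalize(A_content), 3)

# User-item recommendation scores live in the top user-rows / item-columns block.
scores_collab = P_collab3[:n_u, n_u:n_u + n_i]
scores_content = P_content3[:n_u, n_u:n_u + n_i]
```

Since both restricted graphs alternate strictly between node types, all the probability mass of a user row lands on item nodes after three steps.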

2. Model simplification

In previous sections we stated that a graph-based random walk model uses only the final user-item probabilities to produce recommendation lists. This means that, of the whole 3-step random walk result P³, we only need the part referring to complete paths from users to items, which is represented by a single submatrix of P³. This property allows reducing the exponentiation of P to a multiplication of three of its submatrices. We define four non-zero submatrices of P that can be derived directly from the feedback matrix URM and the binary item content matrix ICM:

  • P_UI: probability to reach an item node from a user node

  • P_IU: probability to reach a user node from an item node

  • P_IF: probability to reach a feature node from an item node

  • P_FI: probability to reach an item node from a feature node

The estimated user-item probabilities used for recommendations, which we will call P̂_UI, can be obtained with two different multiplications of these submatrices, depending on the nature of the path:

  • Collaborative: P̂_UI = P_UI · P_IU · P_UI

  • Content-based: P̂_UI = P_UI · P_IF · P_FI
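The decomposition above can be sketched as follows. The toy matrices are illustrative; each submatrix is row-normalized independently, which mirrors removing the competing edge type before normalizing:

```python
import numpy as np

# Hypothetical toy data: 2 users, 3 items, 2 features.
URM = np.array([[1, 1, 0],
                [0, 1, 1]], dtype=float)   # feedback matrix
ICM = np.array([[1, 0],
                [1, 1],
                [0, 1]], dtype=float)      # binary item content matrix

def row_normalize(M):
    s = M.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return M / s

# The four non-zero submatrices of P derived from URM and ICM.
P_UI = row_normalize(URM)      # user -> item
P_IU = row_normalize(URM.T)    # item -> user
P_IF = row_normalize(ICM)      # item -> feature
P_FI = row_normalize(ICM.T)    # feature -> item

# Estimated user-item probabilities for each path type.
collab = P_UI @ P_IU @ P_UI    # user-item-user-item
content = P_UI @ P_IF @ P_FI   # user-item-feature-item
```

Multiplying three small submatrices is far cheaper than cubing the full (|U|+|I|+|F|)-sized matrix, since each product is row-stochastic and only the user-item block is ever needed.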

3. Feature weighting model

We can introduce feature weights over the edges that connect items to features, obtaining a weighted variant of P_IF that we will call P_IF^W. This way we can influence the probability of reaching feature nodes and, consequently, other item nodes in the last steps of content-based paths. Note that the weights have to be strictly positive, because they directly become probabilities. As stated in previous sections, we want to exploit collaborative information to estimate the feature weights; to do so, we want the weighted content-based path to produce results as similar as possible to the collaborative one. In other words, we want:

P_UI · P_IU · P_UI ≈ P_UI · P_IF^W · P_FI

Now we can define a regression problem over the feature weights to solve this equation, minimizing the residual sum of squares with Stochastic Gradient Descent. Since the first factor P_UI is common to both sides, it suffices to match the item-item probabilities of the last two steps. Given two items i and j, we can formalize the problem as:

min_W Σ_{i,j} ( [P_IU · P_UI]_{ij} − [P_IF^W · P_FI]_{ij} )²
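A minimal SGD sketch of this regression, under simplifying assumptions: the weighted content-based score for an item pair is modeled as s_ij = Σ_f w_f · f_if · f_jf (dropping the row normalization for readability), and a random matrix T stands in for the collaborative item-item probabilities; all names and toy data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_features = 20, 8
ICM = (rng.random((n_items, n_features)) < 0.3).astype(float)

# Hypothetical target: item-item probabilities from a collaborative model.
T = rng.random((n_items, n_items))

w = np.ones(n_features)   # feature weights, kept strictly positive
lr = 0.01

for step in range(200):
    # Sample one item pair (i, j) per SGD step.
    i, j = rng.integers(n_items, size=2)
    s_ij = ICM[i] @ (w * ICM[j])          # weighted content-based score
    err = T[i, j] - s_ij                  # residual
    grad = -2.0 * err * ICM[i] * ICM[j]   # gradient of the squared residual w.r.t. w
    w -= lr * grad
    w = np.maximum(w, 1e-6)               # enforce strictly positive weights
```

Clamping after each update is one simple way to honor the positivity constraint mentioned above; a reparameterization such as w = exp(v) would be an alternative.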

4. Target matrix

We can state that our model is the solution to a regression problem where the target is a probability matrix obtained with 3-step random walks following the collaborative path. We will refer to this solution as the hybrid path. However, we could use as target any probability matrix that contains collaborative information. In particular, we will show the results obtained using the probability matrix of the RP³β approach (Paudel et al., 2017), calculated by adapting its popularity-based re-ranking procedure. We will refer to this alternative as the re-ranked hybrid path.
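A minimal sketch of this popularity-based re-ranking, assuming the common form in which each item's probability is divided by its popularity raised to an exponent β (the β value and toy data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items = 10, 6
URM = (rng.random((n_users, n_items)) < 0.4).astype(float)

def row_normalize(M):
    s = M.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return M / s

# 3-step collaborative probabilities (user-item-user-item).
P_UI = row_normalize(URM)
P_IU = row_normalize(URM.T)
P3 = P_UI @ P_IU @ P_UI

# Popularity-based re-ranking: penalize popular items with exponent beta.
beta = 0.6
popularity = URM.sum(axis=0)          # interactions per item
popularity[popularity == 0] = 1.0     # avoid division by zero for unseen items
P3_reranked = P3 / popularity[None, :] ** beta
```

The re-ranked matrix is no longer row-stochastic, but only the relative per-user ordering matters when it is used as a regression target or for producing recommendation lists.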

5. Datasets

We tested our model on the well-known Movielens 20M dataset, using genres and lemmatized 1-grams and 2-grams of user-provided tags as features, and on The Movies Dataset, publicly available on Kaggle (https://www.kaggle.com/rounakbanik/the-movies-dataset), which adds to the Full Movielens dataset the editorial item metadata available on TMDb. To remove some noise, we applied some filters to both datasets: we removed items and users with too few interactions, items with too few or too many features, and features that were too rare. We split each dataset into train and test sets, in order to obtain a test set that reproduces a cold-start scenario: we randomly selected 20% of the items and moved all their ratings to the test set, while the remaining 80% were used for the training phase. Then we split the training set again with the same 80-20 proportions, into sets for the training and the validation of the model.
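The item cold-start split described above can be sketched as follows (toy data; shapes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items = 100, 50
URM = (rng.random((n_users, n_items)) < 0.1).astype(float)

# Hold out 20% of the items, with all their ratings, to simulate item cold-start.
n_test = int(0.2 * n_items)
test_items = rng.choice(n_items, size=n_test, replace=False)
train_items = np.setdiff1d(np.arange(n_items), test_items)

URM_train = URM.copy()
URM_train[:, test_items] = 0   # test items have no interactions at train time

URM_test = URM.copy()
URM_test[:, train_items] = 0   # evaluation only on the held-out items

# Repeat the 80-20 item split on the training items to carve out a validation set.
n_val = int(0.2 * len(train_items))
val_items = rng.choice(train_items, size=n_val, replace=False)
```

Splitting by item (rather than by rating) is what makes the held-out items truly cold: no collaborative signal about them leaks into training.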

6. Evaluation

For the evaluation we used three metrics that are common in the Recommender Systems literature and that highlight the accuracy of the model in prediction and ranking: the @5 variants of Recall, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG). We compared the results obtained by the following approaches:

  • CBF: Content-based KNN algorithm with cosine similarity

  • CBF-IDF: Content-based KNN algorithm with cosine similarity, using IDF to assign feature weights

  • 3-step random walks following the content-based path

  • 3-step random walks following the hybrid path

  • 3-step random walks following the re-ranked hybrid path
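The @5 metrics above can be sketched as follows. These are common variants; exact definitions differ slightly across the literature, and the functions assume a non-empty set of relevant items:

```python
import numpy as np

def recall_at_k(ranked, relevant, k=5):
    # Fraction of the relevant items retrieved in the top k.
    return len(set(ranked[:k]) & relevant) / len(relevant)

def ap_at_k(ranked, relevant, k=5):
    # Average precision over the top-k positions.
    score, hits = 0.0, 0
    for pos, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / pos
    return score / min(len(relevant), k)

def ndcg_at_k(ranked, relevant, k=5):
    # Binary-relevance DCG over the top k, normalized by the ideal DCG.
    dcg = sum(1.0 / np.log2(pos + 1)
              for pos, item in enumerate(ranked[:k], start=1)
              if item in relevant)
    idcg = sum(1.0 / np.log2(pos + 1)
               for pos in range(1, min(len(relevant), k) + 1))
    return dcg / idcg
```

MAP@5 is then the mean of `ap_at_k` over all test users, and likewise for the other two metrics.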

7. Results

Algorithm               Recall    MAP       NDCG
CBF                     0.10135   0.22026   0.13957
CBF-IDF                 0.10147   0.21813   0.13941
Content-based path      0.08853   0.20617   0.12359
Hybrid path             0.09561   0.21323   0.13247
Re-ranked hybrid path   0.10919   0.22826   0.14782

Table 1. Performance @5 of the different algorithms on the test set of the Movielens 20M dataset; the baselines are on top.
Algorithm               Recall    MAP       NDCG
CBF                     0.04850   0.08727   0.06594
CBF-IDF                 0.05033   0.08988   0.06720
Content-based path      0.05482   0.09987   0.07298
Hybrid path             0.06810   0.12176   0.09118
Re-ranked hybrid path   0.06776   0.11934   0.09017

Table 2. Performance @5 of the different algorithms on the test set of The Movies Dataset; the baselines are on top.

Analyzing the results summarized in Tables 1 and 2, we can see that both hybrid paths outperform the purely content-based one, which means that the collaborative information exploited is useful and increases performance. We can also notice that the target matrix influences the quality of the final model. In particular, the re-ranked path provides more reliable results, and its performance is higher than that of both Content-based KNN approaches on both datasets. The non-re-ranked path, instead, is not able to reach the CBF scores on Movielens 20M, but obtains the best results on The Movies Dataset. In conclusion, the model was able to outperform both the non-weighted and the IDF feature weighting approaches, showing the importance of collaborative information and proving to be a potentially good solution for cold-start scenarios.

8. Conclusion

We proposed a new approach to face the item cold-start problem in Recommender Systems. We have shown that it is possible to build a hybrid graph-based recommender that exploits collaborative information to estimate feature weights and improve the quality of content-based recommendations. Future directions include validating these results on more datasets and against more baselines, as well as learning from other collaborative probability matrices.

References

  • Cella et al. (2017) Leonardo Cella, Stefano Cereda, Massimo Quadrana, and Paolo Cremonesi. 2017. Derive item features relevance from past user interactions. In UMAP.
  • Cooper et al. (2014) Colin Cooper, Sang Hyuk Lee, Tomasz Radzik, and Yiannis Siantos. 2014. Random walks in recommender systems: exact computation and simulations. In Proceedings of the 23rd International Conference on World Wide Web. ACM, 811–816.
  • Paudel et al. (2017) Bibek Paudel, Fabian Christoffel, Chris Newell, and Abraham Bernstein. 2017. Updatable, accurate, diverse, and scalable recommendations for interactive applications. ACM Transactions on Interactive Intelligent Systems (TiiS) (2017).
  • Sharma et al. (2015) Mohit Sharma, Jiayu Zhou, Junling Hu, and George Karypis. 2015. Feature-based factorized Bilinear Similarity Model for Cold-Start Top-n Item Recommendation. In Proceedings of the 2015 SIAM International Conference on Data Mining, Vancouver, BC, Canada, April 30 - May 2, 2015. 190–198.