Explainable Entity-based Recommendations with Knowledge Graphs

07/12/2017
by   Rose Catherine, et al.
Carnegie Mellon University

Explainable recommendation is an important task. Many methods have been proposed which generate explanations from the content and reviews written for items. When review text is unavailable, generating explanations is still a hard problem. In this paper, we illustrate how explanations can be generated in such a scenario by leveraging external knowledge in the form of knowledge graphs. Our method jointly ranks items and knowledge graph entities using a Personalized PageRank procedure to produce recommendations together with their explanations.


06/22/2019

Explainable Knowledge Graph-based Recommendation via Deep Reinforcement Learning

This paper studies recommender systems with knowledge graphs, which can ...
02/24/2021

Teach Me to Explain: A Review of Datasets for Explainable NLP

Explainable NLP (ExNLP) has increasingly focused on collecting human-ann...
11/20/2021

Explainable Biomedical Recommendations via Reinforcement Learning Reasoning on Knowledge Graphs

For Artificial Intelligence to have a greater impact in biology and medi...
07/07/2021

Graphing else matters: exploiting aspect opinions and ratings in explainable graph-based recommendations

The success of neural network embeddings has entailed a renewed interest...
07/07/2021

Rating and aspect-based opinion graph embeddings for explainable recommendations

The success of neural network embeddings has entailed a renewed interest...
09/16/2019

Explainable Product Search with a Dynamic Relation Embedding Model

Product search is one of the most popular methods for customers to disco...
07/18/2018

Improving Explainable Recommendations with Synthetic Reviews

An important task for a recommender system to provide interpretable expl...

1. Introduction

Improving the accuracy of predictions in recommender systems is an important research topic. An equally important task is explaining the predictions to the user. Explanations may serve many different purposes (Tintarev and Masthoff, 2011). They can show how the system works (transparency) or help users make an informed choice (effectiveness). They may be evaluated on whether they convince the user to make a purchase (persuasiveness) or whether they help the user make decisions more rapidly (efficiency). In general, providing an explanation has been shown to build users’ trust in the recommender system (Pu and Chen, 2006).

The focus of this paper is a system that generates explanations for Knowledge Graph (KG)-based recommendation. Users and items are typically associated with factual data, referred to as content. For users, the content may include demographics and other profile data. For items such as movies, it might include the actors, directors, genre, and the like. The KG encodes the interconnections between such facts, and leveraging these links has been shown to improve recommender performance (Catherine and Cohen, 2016; Yu et al., 2014; Chaudhari et al., 2016).

Although a number of explanation schemes have been proposed in the past (Section 2), there has been no work which produces explanations for KG-based recommenders. In this paper, we present a method to jointly rank items and entities in the KG such that the entities can serve as an explanation for the recommendation.

Our technique can be run without training, thereby allowing faster deployment in new domains. Once enough data has been collected, it can then be trained to yield better performance. The proposed method can also be used in a dialog setting, where a user interacts with the system to refine its suggestions.

2. Related Work

Generating explanations for recommendations has been an active area of research for more than a decade. (Herlocker et al., 2000) was an early work that assessed different ways of explaining recommendations in a collaborative filtering (CF) -based recommender system.

In content-based recommenders, the explanations revolve around the content or profile of the user and the item. The system of (Bilgic and Mooney, 2005) simply displayed keyword matches between the user’s profile and the books being recommended. Similarly, (Vig et al., 2009) proposed a method called ‘Tagsplanations’, which showed the degree to which a tag is relevant to the item, and the sentiment of the user towards the tag.

With the advent of social networks, explanations that leverage social connections have also gained attention. For example, (Sharma and Cosley, 2013) produced explanations that showed whether a good friend of the user has liked something, where friendship strength was computed from their interactions on Facebook.

More recent research has focused on providing explanations that are extracted from user-written reviews for the items. (Zhang et al., 2014) extracted phrases and sentiments expressed in the reviews and used them to generate explanations. (McAuley and Leskovec, 2013) used topics learned from the reviews as aspects of the item, and used the topic distribution in the reviews to find useful or representative reviews.

Knowledge Graphs have been shown to improve the performance of recommender systems in the past. (Yu et al., 2014) proposed a meta-path based method that learned paths consisting of node types in a graph. Similarly, (Ostuni et al., 2013) used paths to find the top-N recommendations in a learning-to-rank framework. A few methods, such as (Chaudhari et al., 2016; Musto et al., 2015), rank items using Personalized PageRank; in these methods, the entities present in the text of an item are first mapped to entities in a knowledge graph. (Catherine and Cohen, 2016) proposed probabilistic logic programming models for recommendation on knowledge graphs. None of the above KG-based recommenders attempted to generate explanations.

3. Explanation Method

In this section, we propose our method, which builds on the work of (Catherine and Cohen, 2016) by using ProPPR (Wang et al., 2013) for learning to recommend. ProPPR (Programming with Personalized PageRank) is a first-order logic system. It takes as input a set of rules and a database of facts, and uses these to generate an approximate local grounding of each query in a small graph. Candidate answers to the query are the nodes in the graph that satisfy the rules. The candidates are then ranked by running a Personalized PageRank algorithm on the graph.

Our technique proceeds in two main steps. First, it uses ProPPR to jointly rank items and entities for a user. Second, it consolidates the results into recommendations and explanations.

To use ProPPR to rank items and entities, we first define a notion of similarity between nodes in the graph, using the same similarity rules as (Catherine and Cohen, 2016) (Figure 1). These rules state that two entities X and E are similar if they are the same (Rule 1), or if there is a link in the graph connecting X to another entity Z that is similar to E (Rule 2). Note that this definition of similarity is recursive.

sim(X, E) :- X = E.        (1)
sim(X, E) :- link(X, Z), sim(Z, E).        (2)
Figure 1. Similarity in a graph
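Read operationally, the two rules amount to reachability over link edges. The following Python sketch illustrates the recursion; the toy graph and the depth bound are our own, and ProPPR scores the recursion with Personalized PageRank rather than making a boolean check:

```python
# Toy reading of the similarity rules: X is similar to E if X == E (Rule 1),
# or if some entity Z linked from X is similar to E (Rule 2).
# The graph and depth bound are invented for this sketch.
def sim(x, e, links, depth=3):
    """True if x can reach e by following link edges within `depth` hops."""
    if x == e:                                # Rule 1
        return True
    if depth == 0:                            # cut off the recursion
        return False
    return any(sim(z, e, links, depth - 1)    # Rule 2
               for z in links.get(x, ()))

links = {"inferno": ["tom_hanks"], "tom_hanks": ["bridge_of_spies"]}
sim("inferno", "bridge_of_spies", links)      # True, via tom_hanks
```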

Next, the model has two sets of rules for ranking: one for jointly ranking movies that the user would like together with the most likely reason (Figure 2), and a similar set for movies that the user would not like (Figure 3). In Figure 2, Rule 3 states that a user U will like an entity E and a movie M if the user likes the entity and the entity is related (sim) to the movie. The clause isMovie ensures that the variable M is bound to a movie, since sim admits entities of all types. Rule 3 invokes the predicate likes(U,E), which holds for an entity E if the user has explicitly stated that they like it (Rule 4), or if they have provided positive feedback (e.g., clicked, gave a thumbs-up, or a high star rating) on a movie M containing the entity, via link(M,E) (Rule 5). The method for finding movies and entities that the user will dislike is similar, as given in Figure 3.

willLike(U, E, M) :- likes(U, E), sim(E, M), isMovie(M).        (3)
(4)
(5)
Figure 2. Predicting likes
Figure 3. Predicting dislikes
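As an illustration only, the like rules can be emulated in a few lines of Python over a toy knowledge graph. All fact names and data below are invented for the sketch, and the recursive sim predicate is simplified to a direct link lookup:

```python
# Toy emulation of Rules 3-5 (names and facts are invented for this sketch):
# Rule 4: likes(U,E) if U explicitly stated liking entity E.
# Rule 5: likes(U,E) if U gave positive feedback on a movie M with link(M,E).
# Rule 3: willLike(U,E,M) if likes(U,E) and E is related to movie M.
stated_likes = {("alice", "tom_hanks")}
positive_feedback = {("alice", "da_vinci_code")}
link = {"da_vinci_code": {"tom_hanks", "drama_thriller"},
        "bridge_of_spies": {"tom_hanks", "drama_thriller"},
        "inferno": {"tom_hanks", "crime"},
        "snowden": {"drama_thriller"}}

def likes(user):
    """Entities the user likes, via Rule 4 (explicit) or Rule 5 (feedback)."""
    liked = {e for (u, e) in stated_likes if u == user}
    for (u, movie) in positive_feedback:
        if u == user:
            liked |= link[movie]              # entities of a liked movie
    return liked

def will_like(user):
    """Rule 3: candidate (entity, movie) answers for willLike(user, E, M)."""
    return {(e, m) for e in likes(user) for m in link if e in link[m]}
```

Querying will_like("alice") yields pairs such as (tom_hanks, bridge_of_spies); ProPPR additionally scores each grounded pair with Personalized PageRank rather than returning a flat set.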

To jointly rank the items and entities, we use ProPPR to query the willLike(U,E,M) predicate with the user specified and the other two variables free. The ProPPR engine then grounds the query into a proof graph by recursively replacing each variable with literals from the KG that satisfy the rules (Catherine and Cohen, 2016; Wang et al., 2013). A sample grounding for a user alice who likes tom_hanks and the movie da_vinci_code is shown in Figure 4.

Figure 4. Sample grounding for predicting likes

After constructing the proof graph, ProPPR runs a Personalized PageRank algorithm with willLike(alice, E, M) as the start node. In this simple example, suppose the resulting scores for (tom_hanks, bridge_of_spies), (tom_hanks, inferno), (drama_thriller, bridge_of_spies), and (drama_thriller, snowden) are 0.4, 0.4, 0.3, and 0.3, respectively.
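For readers unfamiliar with the scoring step, a minimal power-iteration version of Personalized PageRank looks like the sketch below. This is not ProPPR's approximate grounding procedure; the proof graph, alpha, and iteration count are illustrative only:

```python
# Minimal personalized PageRank by power iteration. The restart mass always
# returns to the query (start) node; dangling nodes simply drop their mass
# in this sketch.
def personalized_pagerank(graph, start, alpha=0.15, iters=50):
    """graph maps a node to its list of successor nodes."""
    nodes = set(graph) | {s for succs in graph.values() for s in succs}
    scores = {n: float(n == start) for n in nodes}
    for _ in range(iters):
        new = {n: alpha * (n == start) for n in nodes}   # restart at the query
        for n, succs in graph.items():
            for s in succs:
                new[s] += (1 - alpha) * scores[n] / len(succs)
        scores = new
    return scores

proof_graph = {"willLike(alice,E,M)": ["tom_hanks", "drama_thriller"],
               "tom_hanks": ["bridge_of_spies", "inferno"],
               "drama_thriller": ["bridge_of_spies", "snowden"]}
scores = personalized_pagerank(proof_graph, "willLike(alice,E,M)")
# bridge_of_spies collects mass along two paths, so it outranks the others
```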

Now, let us suppose that alice has also specified that she dislikes crime movies. If we follow the grounding procedure for dislikes and rank the answers, we may obtain (crime, inferno) with score 0.2. Our system then proceeds to consolidate the recommendations and the explanations by grouping by movie names, adding together their ‘like’ scores and deducting their ‘dislike’ scores. For each movie, the entities can be ranked according to their joint score. The end result is a list of reasons which can be shown to the user:

  1. bridge_of_spies, score = 0.4 + 0.3 = 0.7, reasons =
    { tom_hanks, drama_thriller }

  2. snowden, score = 0.3, reasons = { drama_thriller }

  3. inferno, score = 0.4 - 0.2 = 0.2, reasons = { tom_hanks, (-ve) crime }
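The consolidation step can be sketched directly, reproducing the example scores above; the (entity, movie, score) tuple format is our own, not ProPPR output:

```python
from collections import defaultdict

# (entity, movie, score) triples from the worked example
like_scores = [("tom_hanks", "bridge_of_spies", 0.4),
               ("tom_hanks", "inferno", 0.4),
               ("drama_thriller", "bridge_of_spies", 0.3),
               ("drama_thriller", "snowden", 0.3)]
dislike_scores = [("crime", "inferno", 0.2)]

def consolidate(like_scores, dislike_scores):
    """Group by movie: add 'like' scores, subtract 'dislike' scores,
    and keep the contributing entities as reasons to show the user."""
    total, reasons = defaultdict(float), defaultdict(list)
    for entity, movie, s in like_scores:
        total[movie] += s
        reasons[movie].append(entity)
    for entity, movie, s in dislike_scores:
        total[movie] -= s
        reasons[movie].append("(-ve) " + entity)
    return sorted(((m, round(total[m], 2), reasons[m]) for m in total),
                  key=lambda item: -item[1])

consolidate(like_scores, dislike_scores)
# [('bridge_of_spies', 0.7, ['tom_hanks', 'drama_thriller']),
#  ('snowden', 0.3, ['drama_thriller']),
#  ('inferno', 0.2, ['tom_hanks', '(-ve) crime'])]
```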

4. Real World Deployment

The proposed method presently serves as the backend of a personal agent for recommending movies that runs on mobile devices (Pecune et al.) and is undergoing beta testing. The knowledge graph for recommendations is constructed from the weekly dump files released by imdb.com. The personal agent interacts with the user through a dialog model. In this setting, users are actively involved in refining the recommendations depending on their mood. For example, for a fun night out with friends, a user may want to watch an action movie, whereas when spending time with their significant other, the same user may be in the mood for a romantic comedy.

5. Conclusions

Knowledge graphs have been shown to improve recommender system accuracy in the past. However, generating explanations to help users make an informed choice in KG-based systems has not been attempted before. In this paper, we proposed a method to produce a ranked list of entities as explanations by jointly ranking them with the corresponding movies.

References

  • Bilgic and Mooney (2005) Mustafa Bilgic and Raymond J. Mooney. 2005. Explaining Recommendations: Satisfaction vs. Promotion. In Beyond Personalization Workshop.
  • Catherine and Cohen (2016) R. Catherine and W. Cohen. 2016. Personalized Recommendations Using Knowledge Graphs: A Probabilistic Logic Programming Approach. In Proc. RecSys ’16. 325–332.
  • Chaudhari et al. (2016) S. Chaudhari, A. Azaria, and T. Mitchell. 2016. An Entity Graph Based Recommender System. In RecSys ’16 Posters.
  • Herlocker et al. (2000) J. Herlocker, J. Konstan, and J. Riedl. 2000. Explaining collaborative filtering recommendations.. In CSCW. 241–250.
  • McAuley and Leskovec (2013) J. McAuley and J. Leskovec. 2013. Hidden Factors and Hidden Topics: Understanding Rating Dimensions with Review Text. In RecSys ’13. 165–172.
  • Musto et al. (2015) C. Musto, P. Basile, M. de Gemmis, P. Lops, G. Semeraro, and S. Rutigliano. 2015. Automatic Selection of Linked Open Data Features in Graph-based Recommender Systems. In CBRecSys 2015.
  • Ostuni et al. (2013) V. Ostuni, T. Di Noia, E. Di Sciascio, and R. Mirizzi. 2013. Top-N Recommendations from Implicit Feedback Leveraging Linked Open Data. In RecSys ’13. 85–92.
  • Pecune et al. F. Pecune, T. Baumann, Y. Matsuyama, O. Romero, S. Akoju, Y. Du, R. Catherine, J. Cassell, M. Eskenazi, A. Black, and W. Cohen. InMind Movie Agent - A Platform for Research (In Preparation).
  • Pu and Chen (2006) P. Pu and L. Chen. 2006. Trust Building with Explanation Interfaces. In IUI ’06. 93–100.
  • Sharma and Cosley (2013) A. Sharma and D. Cosley. 2013. Do Social Explanations Work?: Studying and Modeling the Effects of Social Explanations in Recommender Systems. In WWW ’13. 1133–1144.
  • Tintarev and Masthoff (2011) N. Tintarev and J. Masthoff. 2011. Designing and Evaluating Explanations for Recommender Systems. In Recommender Systems Handbook. 479–510.
  • Vig et al. (2009) J. Vig, S. Sen, and J. Riedl. 2009. Tagsplanations: Explaining Recommendations Using Tags. In IUI ’09. 47–56.
  • Wang et al. (2013) W. Wang, K. Mazaitis, and W. Cohen. 2013. Programming with Personalized Pagerank: A Locally Groundable First-order Probabilistic Logic. In Proc. CIKM ’13.
  • Yu et al. (2014) X. Yu, X. Ren, Y. Sun, Q. Gu, B. Sturt, U. Khandelwal, B. Norick, and J. Han. 2014. Personalized Entity Recommendation: A Heterogeneous Information Network Approach. In WSDM ’14. 283–292.
  • Zhang et al. (2014) Y. Zhang, G. Lai, M. Zhang, Y. Zhang, Y. Liu, and S. Ma. 2014. Explicit Factor Models for Explainable Recommendation Based on Phrase-level Sentiment Analysis. In SIGIR ’14. 83–92.