Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations

01/18/2020
by Huanrui Luo, et al.

Explainable recommendation remains far from well solved, partly due to three challenges. The first is the personalization of preference learning: different items/users should contribute differently to the learning of user preference or item quality. The second is dynamic explanation, which is crucial for the timeliness of recommendation explanations. The third is the granularity of explanations: in practice, aspect-level explanations are more persuasive than item-level or user-level ones. To address these challenges simultaneously, we propose Hybrid Deep Embedding (HDE), a novel model for aspect-based explainable recommendation that makes recommendations with dynamic aspect-level explanations. The main idea of HDE is to learn dynamic embeddings of users and items for rating prediction, and dynamic latent aspect preference/quality vectors for generating aspect-level explanations, by fusing the dynamic implicit feedback extracted from reviews with attentive user-item interactions. In particular, because the aspect preference/quality of users/items is learned automatically, HDE can capture the impact of aspects that are not mentioned in the reviews of a user or an item. Extensive experiments on real-world datasets verify both the recommendation performance and the explainability of HDE. The source code of our work is available at <https://github.com/lola63/HDE-Python>.
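To make the fusion idea concrete, the sketch below shows one plausible way to combine static user/item embeddings with latent aspect preference/quality vectors through an attention layer, producing both a predicted rating and per-aspect weights that can be read off as an aspect-level explanation. This is a minimal PyTorch illustration, not the authors' HDE implementation (see the linked repository for that); the class `HDESketch`, all dimensions, and the specific attention layout are assumptions made for this example.

```python
# Minimal, illustrative sketch (NOT the authors' implementation) of the fusion
# described in the abstract: static ID embeddings are fused with latent aspect
# preference/quality vectors via attention, yielding a rating prediction plus
# per-aspect weights usable as an explanation. All names and sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HDESketch(nn.Module):
    def __init__(self, n_users, n_items, n_aspects, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # static user embedding
        self.item_emb = nn.Embedding(n_items, dim)   # static item embedding
        # Latent aspect preference (users) / quality (items) vectors, standing
        # in for the dynamic, review-derived signals described in the paper.
        self.user_aspect = nn.Parameter(torch.randn(n_users, n_aspects, dim) * 0.01)
        self.item_aspect = nn.Parameter(torch.randn(n_items, n_aspects, dim) * 0.01)
        self.attn = nn.Linear(2 * dim, 1)            # scores each aspect for a (u, i) pair
        self.predict = nn.Linear(2 * dim, 1)         # rating from the fused representation

    def forward(self, u, i):
        pu, qi = self.user_emb(u), self.item_emb(i)          # (B, d)
        au, ai = self.user_aspect[u], self.item_aspect[i]    # (B, A, d)
        # Attention over aspects, conditioned on the joint user/item aspect signals.
        scores = self.attn(torch.cat([au, ai], dim=-1)).squeeze(-1)  # (B, A)
        alpha = F.softmax(scores, dim=-1)                            # aspect weights
        fused_u = pu + (alpha.unsqueeze(-1) * au).sum(dim=1)         # fuse user side
        fused_i = qi + (alpha.unsqueeze(-1) * ai).sum(dim=1)         # fuse item side
        rating = self.predict(torch.cat([fused_u, fused_i], dim=-1)).squeeze(-1)
        return rating, alpha  # alpha serves as an aspect-level explanation

model = HDESketch(n_users=100, n_items=200, n_aspects=5)
rating, alpha = model(torch.tensor([3, 7]), torch.tensor([10, 42]))
print(rating.shape, alpha.shape)  # torch.Size([2]) torch.Size([2, 5])
```

Because the aspect vectors are learned parameters rather than counts of aspect mentions, such a model can in principle assign weight to aspects a given user or item never mentions in reviews, which mirrors the property claimed for HDE above.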


research
10/20/2021

Hierarchical Aspect-guided Explanation Generation for Explainable Recommendation

Explainable recommendation systems provide explanations for recommendati...
research
02/15/2021

ELIXIR: Learning from User Feedback on Explanations to Improve Recommender Models

System-provided explanations for recommendations are an important compon...
research
07/12/2020

Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability

Latent factor collaborative filtering (CF) has been a widely used techni...
research
06/10/2018

Explainable Recommendation via Multi-Task Learning in Opinionated Text Data

Explaining automatically generated recommendations allows users to make ...
research
07/07/2021

Rating and aspect-based opinion graph embeddings for explainable recommendations

The success of neural network embeddings has entailed a renewed interest...
research
03/16/2023

Measuring the Impact of Explanation Bias: A Study of Natural Language Justifications for Recommender Systems

Despite the potential impact of explanations on decision making, there i...
research
11/01/2021

Comparative Explanations of Recommendations

As recommendation is essentially a comparative (or ranking) process, a g...
