PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems

11/19/2019
by Azin Ghazimatin, et al.

Interpretable explanations for recommender systems and other machine learning models are crucial for gaining user trust. Prior work focusing on paths that connect users and items in a heterogeneous network has several limitations: it surfaces mere relationships rather than true explanations, or it disregards other users' privacy. In this work, we take a fresh perspective and present PRINCE: a provider-side mechanism to produce tangible explanations for end users, where an explanation is defined as a minimal set of actions performed by the user that, if removed, changes the recommendation to a different item. Given a recommendation, PRINCE finds this minimal set of the user's actions from an exponential search space using a polynomial-time optimal algorithm based on random walks over dynamic graphs. Experiments on two real-world datasets show that PRINCE provides more compact explanations than intuitive baselines, and insights from a crowdsourced user study demonstrate the viability of such action-based explanations. We thus posit that PRINCE produces scrutable, actionable, and concise explanations, owing to its use of counterfactual evidence, a user's own actions, and minimal sets, respectively.
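
PRINCE's own algorithm is optimal and polynomial-time; by contrast, the Python sketch below illustrates only the counterfactual definition above, via brute-force subset enumeration (exponential) over a toy Personalized-PageRank recommender. All function names, parameters, and the toy data are hypothetical and not taken from the paper.

```python
# A minimal sketch of the counterfactual-explanation idea, NOT PRINCE's
# polynomial-time algorithm: it enumerates action subsets (exponential) and
# recomputes a toy Personalized PageRank (PPR) recommendation after each
# removal. All names and data here are hypothetical.
from itertools import combinations

def personalized_pagerank(nodes, edges, source, alpha=0.15, iters=50):
    """PPR scores by naive power iteration; `edges` is a set of directed pairs."""
    out = {n: [v for (u, v) in edges if u == n] for n in nodes}
    score = {n: 1.0 if n == source else 0.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: alpha if n == source else 0.0 for n in nodes}
        for n in nodes:
            if not out[n]:                       # dangling node: restart
                nxt[source] += (1 - alpha) * score[n]
            else:
                for v in out[n]:
                    nxt[v] += (1 - alpha) * score[n] / len(out[n])
        score = nxt
    return score

def top_item(nodes, edges, user, items, actions):
    """Recommend the highest-PPR item the user has not interacted with."""
    scores = personalized_pagerank(nodes, edges, user)
    seen = {i for (_, i) in actions}
    return max((i for i in items if i not in seen), key=lambda i: scores[i])

def minimal_counterfactual(nodes, edges, user, items, actions):
    """Smallest subset of the user's actions whose removal flips the top item."""
    rec = top_item(nodes, edges, user, items, actions)
    for k in range(1, len(actions) + 1):         # smallest subsets first
        for subset in combinations(sorted(actions), k):
            removed = set(subset) | {(i, u) for (u, i) in subset}
            new_rec = top_item(nodes, edges - removed, user, items,
                               actions - set(subset))
            if new_rec != rec:
                return set(subset), rec, new_rec
    return None, rec, rec

# Toy graph: users and items, interaction edges stored in both directions.
users, items = ["u0", "u1"], ["i1", "i2", "i3", "i4"]
actions = {("u0", "i1"), ("u0", "i2")}                 # u0's own actions
others  = {("u1", "i2"), ("u1", "i3"), ("u1", "i4")}   # another user's actions
edges = {e for (u, i) in actions | others for e in [(u, i), (i, u)]}
subset, old, new = minimal_counterfactual(users + items, edges, "u0",
                                          items, actions)
print(f"recommendation {old} -> {new} after removing {subset}")
```

On this toy graph, removing u0's single interaction with i2 disconnects u0 from the neighborhood that supports the original top item, so the recommendation flips; a one-action set is returned because subsets are tried in order of increasing size. PRINCE avoids this exponential enumeration by reasoning over random walks on dynamic graphs.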

Related research

07/14/2022 · Reinforced Path Reasoning for Counterfactual Explainable Recommendation
Counterfactual explanations interpret the recommendation mechanism via e...

05/28/2023 · Choose your Data Wisely: A Framework for Semantic Counterfactuals
Counterfactual explanations have been argued to be one of the most intui...

03/14/2023 · Explaining Recommendation System Using Counterfactual Textual Explanations
Currently, there is a significant amount of research being conducted in ...

07/09/2022 · On the Relationship Between Counterfactual Explainer and Recommender
Recommender systems employ machine learning models to learn from histori...

03/02/2022 · Counterfactually Evaluating Explanations in Recommender Systems
Modern recommender systems face an increasing need to explain their reco...

05/11/2021 · Counterfactual Explanations for Neural Recommenders
Understanding why specific items are recommended to users can significan...

12/30/2020 · Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
While research on explaining predictions of open-domain QA systems (ODQA...
