Counterfactual Explanations for Neural Recommenders

05/11/2021
by Khanh Hiep Tran, et al.

Understanding why specific items are recommended to users can significantly increase their trust in and satisfaction with the system. While neural recommenders have become the state of the art in recent years, the complexity of deep models still makes generating tangible explanations for end users a challenging problem. Existing methods are usually based on attention distributions over a variety of features, whose suitability as explanations remains questionable, and which are unwieldy for an end user to grasp. Counterfactual explanations based on a small set of the user's own actions have been shown to be an acceptable solution to the tangibility problem. However, current work on such counterfactuals cannot be readily applied to neural models. In this work, we propose ACCENT, the first general framework for finding counterfactual explanations for neural recommenders. It extends recently proposed influence functions for identifying the training points most relevant to a recommendation, from a single item to a pair of items, while deducing a counterfactual set in an iterative process. We use ACCENT to generate counterfactual explanations for two popular neural models, Neural Collaborative Filtering (NCF) and Relational Collaborative Filtering (RCF), and demonstrate its feasibility on a sample of the popular MovieLens 100K dataset.
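The iterative process the abstract describes can be pictured as a greedy search: repeatedly remove the user action with the greatest influence on the score gap between the recommended item and its strongest rival, until the recommendation would flip. The sketch below is illustrative only, assuming a precomputed per-action influence on that gap; the function and variable names are hypothetical and not the paper's API.

```python
def accent_counterfactual(actions, gap, gap_influence):
    """Greedily remove user actions until the recommendation flips.

    actions:       the user's own past actions (candidate counterfactuals)
    gap:           score(recommended item) - score(replacement item), > 0 initially
    gap_influence: estimated reduction in that gap if an action is removed
                   (e.g. obtained via influence functions)
    Returns the removed action set, or None if the gap cannot be closed.
    """
    removed = []
    remaining = set(actions)
    while gap > 0 and remaining:
        # Pick the remaining action whose removal shrinks the gap the most.
        best = max(remaining, key=lambda a: gap_influence[a])
        if gap_influence[best] <= 0:
            return None  # no remaining action helps close the gap
        remaining.discard(best)
        removed.append(best)
        gap -= gap_influence[best]
    return removed if gap <= 0 else None
```

For instance, with a gap of 0.7 and influences {"rated:Matrix": 0.5, "rated:Inception": 0.3, "rated:Titanic": 0.05}, the loop removes "rated:Matrix" (gap 0.2), then "rated:Inception" (gap -0.1), and stops: those two actions form the counterfactual set.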


Related research

02/12/2020  Convex Density Constraints for Computing Plausible Counterfactual Explanations
            The increasing deployment of machine learning as well as legal regulatio...

04/21/2022  Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
            Counterfactual explanations are increasingly used to address interpretab...

07/09/2022  On the Relationship Between Counterfactual Explainer and Recommender
            Recommender systems employ machine learning models to learn from histori...

11/19/2019  PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems
            Interpretable explanations for recommender systems and other machine lea...

05/29/2023  Faithfulness Tests for Natural Language Explanations
            Explanations of neural models aim to reveal a model's decision-making pr...

05/28/2023  Choose your Data Wisely: A Framework for Semantic Counterfactuals
            Counterfactual explanations have been argued to be one of the most intui...

03/01/2021  Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms
            We consider counterfactual explanations, the problem of minimally adjust...
