
Explaining Recommendation System Using Counterfactual Textual Explanations

by Niloofar Ranjbar, et al.

Currently, a significant amount of research in artificial intelligence aims to improve the explainability and interpretability of deep learning models. It has been found that end-users trust a system more easily when they understand why it produced a given output. Recommender systems are one example of systems for which great efforts have been made to make outputs more explainable. One method for producing a more explainable output is counterfactual reasoning, which alters a minimal set of features to generate a counterfactual item that changes the system's output. This process identifies the input features with the greatest impact on the output, leading to effective explanations. In this paper, we present a method for generating counterfactual explanations for both tabular and textual features. We evaluated the performance of our proposed method on three real-world datasets and demonstrated a +5% improvement in finding effective features (based on model-based measures) compared to the baseline method.
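To make the counterfactual idea concrete, here is a minimal illustrative sketch (not the paper's algorithm): a toy recommender scores an item with a fixed linear model, and a greedy search zeroes out the most influential features until the recommendation flips. All feature names, weights, and the threshold are invented for the demo.

```python
# Illustrative counterfactual search over tabular features.
# Assumption: a toy linear "recommender" that suggests an item
# when its score exceeds a threshold.

def score(features, weights):
    """Linear score of an item under the toy model."""
    return sum(weights[k] * v for k, v in features.items())

def greedy_counterfactual(features, weights, threshold):
    """Zero out features one at a time, most influential first,
    until the recommendation flips; the removed set is the explanation."""
    cf = dict(features)
    removed = []
    # Sort features by their contribution to the score, largest first.
    for name in sorted(cf, key=lambda k: weights[k] * cf[k], reverse=True):
        if score(cf, weights) <= threshold:
            break  # output already flipped; stop with a minimal change set
        removed.append(name)
        cf[name] = 0.0
    return removed, cf

weights = {"genre_match": 2.0, "recency": 0.5, "popularity": 1.0}
item = {"genre_match": 1.0, "recency": 1.0, "popularity": 1.0}
threshold = 2.0  # recommend when score > threshold

removed, cf = greedy_counterfactual(item, weights, threshold)
print(removed)  # features whose removal flips the recommendation
```

Here the explanation would read: "this item was recommended mainly because of its genre match; without that feature, it would not have been recommended." The paper's contribution extends this style of reasoning to textual as well as tabular features.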




PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems

Natural Example-Based Explainability: a Survey

From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems

Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification

Real-Time Counterfactual Explanations For Robotic Systems With Multiple Continuous Outputs

For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI

Causal Proxy Models for Concept-Based Model Explanations
