CLEAR: Causal Explanations from Attention in Neural Recommenders

10/07/2022
by Shami Nisimov, et al.

We present CLEAR, a method for learning session-specific causal graphs, in the possible presence of latent confounders, from attention in pre-trained attention-based recommenders. These causal graphs describe user behavior within the context captured by attention and can provide a counterfactual explanation for a recommendation. In essence, they allow answering "why" questions uniquely for any specific session. In empirical evaluations we show that, compared to naively using attention weights to explain input-output relations, the counterfactual explanations found by CLEAR are shorter, and the alternative recommendation they yield is ranked higher in the original top-k recommendations.
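To illustrate what a counterfactual explanation for a recommendation means here, the sketch below searches for the smallest subset of session items whose removal changes the recommender's output. This is only an illustration of the counterfactual objective: CLEAR learns a causal graph from attention to guide this search, whereas the exhaustive search and the `toy_recommender` below are hypothetical stand-ins, not the paper's method.

```python
from itertools import combinations

def toy_recommender(session):
    # Hypothetical stand-in for a pre-trained attention-based recommender:
    # recommends the item id following the largest item in the session.
    return max(session) + 1 if session else 0

def counterfactual_explanation(session, recommend):
    """Return the smallest subset of session items whose removal flips
    the recommendation, together with the alternative recommendation.
    A brute-force sketch of the 'why' answer a counterfactual gives."""
    original = recommend(session)
    for size in range(1, len(session) + 1):
        for subset in combinations(session, size):
            remaining = [x for x in session if x not in subset]
            if recommend(remaining) != original:
                return subset, recommend(remaining)
    return None, original  # no counterfactual found

session = [3, 7, 5]
removed, alternative = counterfactual_explanation(session, toy_recommender)
# Removing item 7 changes the toy recommendation from 8 to 6,
# so {7} is a minimal (shortest) counterfactual explanation.
```

Shorter explanations of this kind, and alternatives ranked high in the original top-k list, are exactly the two quality criteria the abstract reports for CLEAR.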

Related research

- CLEAR: Generative Counterfactual Explanations on Graphs (10/16/2022): Counterfactual explanations promote explainability in machine learning m...
- Veritas: Answering Causal Queries from Video Streaming Traces (08/26/2022): In this paper, we seek to answer what-if questions - i.e., given recorde...
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation (07/14/2022): Counterfactual explanations interpret the recommendation mechanism via e...
- Structured Attention Graphs for Understanding Deep Image Classifications (11/13/2020): Attention maps are a popular way of explaining the decisions of convolut...
- Causal Proxy Models for Concept-Based Model Explanations (09/28/2022): Explainability methods for NLP systems encounter a version of the fundam...
- Shopping in the Multiverse: A Counterfactual Approach to In-Session Attribution (07/20/2020): We tackle the challenge of in-session attribution for on-site search eng...
- Influence-Driven Explanations for Bayesian Network Classifiers (12/10/2020): One of the most pressing issues in AI in recent years has been the need ...
