
Evaluating computational models of explanation using human judgments

by Michael Pacer, et al.

We evaluate four computational models of explanation in Bayesian networks by comparing model predictions to human judgments. In two experiments, we present human participants with causal structures for which the models make divergent predictions and either elicit the best explanation for an observed event (Experiment 1) or have participants rate provided explanations for an observed event (Experiment 2). Across two versions of two causal structures and across both experiments, we find that the Causal Explanation Tree and Most Relevant Explanation models fit the human data better than either the Most Probable Explanation or Explanation Tree models. We identify strengths and shortcomings of these models and discuss what they reveal about human explanation. We conclude by suggesting the value of pursuing computational and psychological investigations of explanation in parallel.
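To make one of the compared models concrete: the Most Probable Explanation (MPE) criterion picks the joint assignment of all unobserved variables that maximizes the posterior given the evidence. The sketch below is not from the paper; it is a minimal brute-force illustration on a hypothetical two-cause network A → C ← B with made-up conditional probability tables, showing that MPE can favor the individually likelier cause over the stronger one.

```python
# Toy illustration of Most Probable Explanation (MPE) by brute-force
# enumeration in a tiny Bayesian network A -> C <- B.
# All probabilities below are invented for demonstration.
from itertools import product

p_a = 0.3                      # P(A=1): rare but strong cause
p_b = 0.6                      # P(B=1): common but weaker cause
p_c_given = {                  # P(C=1 | A, B)
    (0, 0): 0.1, (0, 1): 0.4,
    (1, 0): 0.7, (1, 1): 0.9,
}

def joint(a, b, c):
    """P(A=a, B=b, C=c) via the chain rule for this network."""
    pa = p_a if a else 1 - p_a
    pb = p_b if b else 1 - p_b
    pc = p_c_given[(a, b)] if c else 1 - p_c_given[(a, b)]
    return pa * pb * pc

def mpe(evidence_c):
    """Assignment (a, b) maximizing P(A=a, B=b | C=evidence_c).

    The normalizing constant P(C=evidence_c) does not affect the
    argmax, so we maximize the joint probability directly.
    """
    return max(product([0, 1], repeat=2),
               key=lambda ab: joint(ab[0], ab[1], evidence_c))

print(mpe(1))  # → (0, 1): B alone is the most probable explanation
```

With these numbers, observing C=1 yields joint probabilities 0.028, 0.168, 0.084, and 0.162 for the four assignments, so MPE selects A=0, B=1 even though A is the stronger cause. Disagreements of this kind between candidate explanations are exactly what lets the models make the divergent predictions the experiments exploit.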




Related research:

Explanation Trees for Causal Bayesian Networks

What is understandable in Bayesian network explanations?

Understanding the Unforeseen via the Intentional Stance

A Human-Grounded Evaluation of SHAP for Alert Processing

Towards Causal Explanation Detection with Pyramid Salient-Aware Network

Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter

Contrastive Explanation: A Structural-Model Approach