What is understandable in Bayesian network explanations?

10/04/2021
by Raphaela Butz, et al.

Explaining predictions from Bayesian networks, for example to physicians, is non-trivial. Various explanation methods for Bayesian network inference have appeared in the literature, each focusing on different aspects of the underlying reasoning. While there has been considerable technical research, little is known about how well humans actually understand these explanations. In this paper, we present ongoing research in which four different explanation approaches were compared through a survey, asking a group of human participants to interpret the explanations.
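
To make the object of study concrete: a "prediction" from a Bayesian network is a posterior probability computed by inference over the network's conditional probability tables, and it is the reasoning behind such posteriors that the surveyed explanation methods try to convey. Below is a minimal sketch, assuming the pgmpy library; the two-node Flu -> Fever network and all probabilities are hypothetical illustrations, not taken from the paper.

    # Minimal sketch of Bayesian network inference (hypothetical example, assuming pgmpy).
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # A two-node network, Flu -> Fever, with made-up probabilities.
    model = BayesianNetwork([("Flu", "Fever")])
    model.add_cpds(
        # P(Flu): 5% prior probability of flu.
        TabularCPD("Flu", 2, [[0.95], [0.05]]),
        # P(Fever | Flu): fever is likely given flu, rare otherwise.
        TabularCPD("Fever", 2, [[0.90, 0.20], [0.10, 0.80]],
                   evidence=["Flu"], evidence_card=[2]),
    )
    model.check_model()

    # The prediction to be explained: P(Flu | Fever = present).
    posterior = VariableElimination(model).query(["Flu"], evidence={"Fever": 1})
    print(posterior)

Even in this tiny network the posterior depends on both the prior and the likelihoods, which is why explaining such an inference to a non-expert, such as a physician, is non-trivial.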


Related research:

06/13/2012 - Explanation Trees for Causal Bayesian Networks
Bayesian networks can be used to extract explanations about the observed...

08/08/2023 - Adding Why to What? Analyses of an Everyday Explanation
In XAI it is important to consider that, in contrast to explanations for...

12/28/2010 - Looking for plausibility
In the interpretation of experimental data, one is actually looking for ...

09/26/2013 - Evaluating computational models of explanation using human judgments
We evaluate four computational models of explanation in Bayesian network...

03/20/2013 - On the Generation of Alternative Explanations with Implications for Belief Revision
In general, the best explanation for a given observation makes no promis...

09/04/2023 - Why Change My Design: Explaining Poorly Constructed Visualization Designs with Explorable Explanations
Although visualization tools are widely available and accessible, not ev...
