Strategies for Generating Micro Explanations for Bayesian Belief Networks

03/27/2013
by Peter Sember, et al.

Bayesian Belief Networks have been largely overlooked by Expert Systems practitioners on the grounds that they do not correspond to the human inference mechanism. In this paper, we introduce an explanation mechanism designed to generate intuitive yet probabilistically sound explanations of inferences drawn by a Bayesian Belief Network. In particular, our mechanism accounts for results obtained from changes in both the causal and the evidential support of a node.
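As an illustrative sketch only (not the paper's mechanism), the idea of a "micro explanation" can be pictured on a toy two-node network (a hypothetical Disease → Test model): compute a node's belief before and after evidence arrives, then emit a one-sentence account of why the belief moved. All names and numbers below are invented for illustration.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(H | E) for a binary hypothesis H given positive evidence E, by Bayes' rule."""
    p_e = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_e

def micro_explanation(node, prior, post):
    """One-sentence account of how evidential support changed a node's belief."""
    direction = "raised" if post > prior else "lowered"
    comparative = "more" if post > prior else "less"
    return (f"Evidence {direction} belief in '{node}' from {prior:.2f} to "
            f"{post:.2f}: the observation is {comparative} likely when "
            f"'{node}' holds than when it does not.")

# Causal (prior) support for the hypothetical 'Disease' node,
# updated by evidential support from a positive test.
prior = 0.01
post = posterior(prior, sensitivity=0.95, false_positive_rate=0.05)
print(micro_explanation("Disease", prior, post))
```

A full mechanism along the paper's lines would also have to trace *which* parents (causal support) and children (evidential support) of a node drove the change, rather than reporting only the net shift as this sketch does.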


