Most Relevant Explanation in Bayesian Networks

01/16/2014
by Changhe Yuan, et al.

A major inference task in Bayesian networks is explaining why some variables are observed in their particular states using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE), which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables in forming its explanation. In particular, the conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure of the degree of relevance of the variables in the new explanation in explaining the evidence given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF captures the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between candidate solutions and use these relations to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
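As a concrete sketch of the GBF-maximization idea, the toy example below brute-forces MRE on a hypothetical two-cause diagnostic network (all variable names and CPT numbers are invented for illustration; the paper's own search algorithms are not shown here). It enumerates every non-empty partial instantiation of the targets and scores each by GBF(x; e) = P(e | x) / P(e | not-x):

```python
from itertools import combinations, product

# Toy diagnostic network (illustrative assumptions, not from the paper):
# two binary causes A and B, one binary effect E with a leaky noisy-OR CPT.
P_A = {0: 0.9, 1: 0.1}          # prior P(A)
P_B = {0: 0.8, 1: 0.2}          # prior P(B)

def p_e1_given(a, b):
    """P(E=1 | A=a, B=b): leaky noisy-OR with leak 0.01."""
    return 1 - (1 - 0.01) * (1 - 0.8) ** a * (1 - 0.6) ** b

def joint(a, b, e):
    """Full joint P(A=a, B=b, E=e) via the chain rule."""
    pe1 = p_e1_given(a, b)
    return P_A[a] * P_B[b] * (pe1 if e == 1 else 1 - pe1)

def prob(pred):
    """Probability of the event defined by pred, by brute-force enumeration."""
    return sum(joint(a, b, e)
               for a, b, e in product((0, 1), repeat=3) if pred(a, b, e))

def gbf(x, e_val=1):
    """GBF(x; e) = P(e | x) / P(e | not-x) for a partial instantiation x."""
    def matches(a, b):
        return all({"A": a, "B": b}[v] == s for v, s in x.items())
    p_x  = prob(lambda a, b, e: matches(a, b))
    p_ex = prob(lambda a, b, e: e == e_val and matches(a, b))
    p_e  = prob(lambda a, b, e: e == e_val)
    return (p_ex / p_x) / ((p_e - p_ex) / (1 - p_x))

# MRE: search all non-empty partial instantiations of the targets {A, B}
# and pick the one with the highest GBF given the evidence E = 1.
targets = ("A", "B")
candidates = [dict(zip(vs, ss))
              for r in (1, 2)
              for vs in combinations(targets, r)
              for ss in product((0, 1), repeat=r)]
best = max(candidates, key=gbf)
print(best, round(gbf(best), 2))
```

In this illustrative network the single-variable explanation {B = 1} outscores both conjunctions {A = 1, B = 1} and {A = 1, B = 0}, showing how maximizing GBF prunes a less relevant cause from the explanation; comparing gbf({"A": 1}) before and after conditioning on B would likewise exhibit the explaining-away effect the abstract describes.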

Related research

- Most Relevant Explanation: Properties, Algorithms, and Evaluations (05/09/2012)
- Motivating explanations in Bayesian networks using MAP-independence (08/05/2022)
- Finding, Scoring and Explaining Arguments in Bayesian Networks (11/30/2021)
- Which is the least complex explanation? Abduction and complexity (02/14/2019)
- A Study of Scaling Issues in Bayesian Belief Networks for Ship Classification (03/06/2013)
- Relevant Explanations: Allowing Disjunctive Assignments (03/06/2013)
- Explaining away ambiguity: Learning verb selectional preference with Bayesian networks (08/22/2000)
