Influence-Driven Explanations for Bayesian Network Classifiers

by Antonio Rago, et al.

One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models. We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than only the input and output variables as is standard practice. The proposed influence-driven explanations (IDXs) for BCs are systematically generated from the causal relationships between variables within the BC, called influences, which are then categorised by logical requirements, called relation properties, according to their behaviour. These relation properties both provide guarantees beyond those of heuristic explanation methods and allow the information underpinning an explanation to be tailored to the requirements of a particular context and user; for example, IDXs may be dialectical or counterfactual. We demonstrate IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature. We evaluate IDXs with theoretical and empirical analyses, demonstrating their considerable advantages over existing explanation methods.
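To make the dialectical idea concrete, here is a minimal, hypothetical sketch for a naive BC: each observed variable is marked as a supporter ('+') or attacker ('-') of the prediction according to whether conditioning on it raises or lowers the posterior of the predicted class. All names, the toy probabilities, and this particular sign rule are illustrative assumptions, not the paper's formal IDX definitions or relation properties.

```python
def posterior(priors, likelihoods, evidence):
    """Posterior over classes given observed feature values (naive Bayes)."""
    joint = {}
    for c, prior in priors.items():
        score = prior
        for var, val in evidence.items():
            score *= likelihoods[c][var][val]
        joint[c] = score
    z = sum(joint.values())
    return {c: s / z for c, s in joint.items()}

def influence_signs(priors, likelihoods, evidence):
    """Label each observation as a supporter ('+') or attacker ('-') of the
    predicted class, by comparing the winning class's posterior with and
    without that single observation (an informal monotonicity-style test)."""
    post = posterior(priors, likelihoods, evidence)
    pred = max(post, key=post.get)
    signs = {}
    for var in evidence:
        reduced = {v: val for v, val in evidence.items() if v != var}
        base = posterior(priors, likelihoods, reduced)[pred]
        signs[var] = '+' if post[pred] > base else '-'
    return pred, signs

# Toy spam filter (hypothetical numbers, purely for illustration).
priors = {'spam': 0.4, 'ham': 0.6}
likelihoods = {
    'spam': {'link': {'yes': 0.8, 'no': 0.2}, 'greeting': {'yes': 0.1, 'no': 0.9}},
    'ham':  {'link': {'yes': 0.3, 'no': 0.7}, 'greeting': {'yes': 0.6, 'no': 0.4}},
}
pred, signs = influence_signs(priors, likelihoods,
                              {'link': 'yes', 'greeting': 'yes'})
# The greeting supports the 'ham' prediction; the link attacks it.
```

A full IDX would additionally surface intermediate variables of a non-naive BC and check the formal relation properties, which this sketch omits.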


