Influence-Driven Explanations for Bayesian Network Classifiers

12/10/2020
by Antonio Rago, et al.

One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models. We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than just the input and output variables as is standard practice. The proposed influence-driven explanations (IDXs) for BCs are systematically generated using the causal relationships between variables within the BC, called influences, which are then categorised by logical requirements, called relation properties, according to their behaviour. These relation properties both provide guarantees beyond heuristic explanation methods and allow the information underpinning an explanation to be tailored to the requirements of a particular context and user, e.g., IDXs may be dialectical or counterfactual. We demonstrate IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature. We evaluate IDXs with theoretical and empirical analyses, demonstrating their considerable advantages when compared with existing explanation methods.
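To make the idea of categorising influences concrete, here is a minimal sketch in plain Python (no libraries) on a toy three-variable chain X -> Z -> Y, where Z plays the role of an intermediate variable of the kind the abstract mentions. The "support"/"attack" labelling is an assumed monotonicity-style test (does conditioning on a parent value raise the child value's probability above its prior?), standing in for the paper's formal relation properties; all variable names and CPT values are invented for illustration and this is not the authors' implementation.

```python
from itertools import product

# Toy chain BC: X -> Z -> Y, all variables binary.
# CPTs are keyed by (child_value, parent_value); numbers are invented.
P_X = {0: 0.4, 1: 0.6}
P_Z_given_X = {(0, 0): 0.8, (1, 0): 0.2,
               (0, 1): 0.3, (1, 1): 0.7}
P_Y_given_Z = {(0, 0): 0.9, (1, 0): 0.1,
               (0, 1): 0.25, (1, 1): 0.75}

def joint(x, z, y):
    """Joint probability of a full assignment under the chain BC."""
    return P_X[x] * P_Z_given_X[(z, x)] * P_Y_given_Z[(y, z)]

def prob(var, value, evidence=None):
    """Exact inference by enumeration: P(var=value | evidence)."""
    evidence = evidence or {}
    num = den = 0.0
    for x, z, y in product([0, 1], repeat=3):
        assignment = {"X": x, "Z": z, "Y": y}
        if any(assignment[v] != w for v, w in evidence.items()):
            continue
        p = joint(x, z, y)
        den += p
        if assignment[var] == value:
            num += p
    return num / den

# Influences are the BC's directed edges. Classify each (parent value,
# child value) pair by the assumed relation property: a "support" if
# observing the parent value raises the child value's probability above
# its prior, an "attack" if it lowers it.
influences = [("X", "Z"), ("Z", "Y")]
for parent, child in influences:
    for a, b in product([0, 1], repeat=2):
        prior = prob(child, b)
        posterior = prob(child, b, evidence={parent: a})
        relation = "support" if posterior > prior else "attack"
        print(f"{parent}={a} -> {child}={b}: "
              f"P={posterior:.3f} vs prior {prior:.3f} ({relation})")
```

Running the sketch prints, for each edge and value pair, the conditional probability against the prior and the resulting label; labelled influences of this kind, spanning input, intermediate and output variables, could then be assembled into a dialectical explanation of a given classification.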
