Defining Explanation in Probabilistic Systems

02/06/2013
by   Urszula Chajewska, et al.

As probabilistic systems gain popularity and come into wider use, the need for a mechanism that explains a system's findings and recommendations becomes more critical. Such a system also needs a mechanism for ranking competing explanations. We examine two representative approaches to explanation in the literature, one due to Gärdenfors and one due to Pearl, and show that both suffer from significant problems. We propose an approach to defining a notion of "better explanation" that combines some of the features of both with more recent work by Pearl and others on causality.


