Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications

03/07/2021
by   Yu-Liang Chou, et al.

There has been growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user. Some researchers have recently argued that for a machine to achieve a certain degree of human-level explainability, it must provide causally understandable explanations, a property known as causability. A specific class of algorithms with the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence. We performed an LDA topic-modelling analysis under a PRISMA framework to identify the most relevant articles. This analysis yielded a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and their applications to real-world data. This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Our findings suggest that the explanations derived from the major algorithms in the literature capture spurious correlations rather than cause-and-effect relationships, leading to sub-optimal, erroneous, or even biased explanations. This paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches to explainable artificial intelligence.
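To give a concrete sense of what a model-agnostic counterfactual explanation is, the sketch below finds a counterfactual for a toy black-box classifier by random perturbation search. Everything here is illustrative: the loan-approval model, the feature names, and the search strategy are assumptions for the example, not the method of any specific algorithm surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: approves a loan when income - 0.5*debt > 10.
# A model-agnostic method only needs query access to a predict function like this.
def predict(x):
    return int(x[0] - 0.5 * x[1] > 10)

def counterfactual(x0, target, n_samples=5000, scale=5.0):
    """Illustrative counterfactual search: sample perturbations of x0, keep
    those the black box classifies as `target`, and return the one closest
    to x0 in L2 distance (the minimal change that flips the decision)."""
    x0 = np.asarray(x0, dtype=float)
    samples = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    flipped = samples[[predict(s) == target for s in samples]]
    if flipped.size == 0:
        return None  # no counterfactual found within the sampled neighbourhood
    dists = np.linalg.norm(flipped - x0, axis=1)
    return flipped[dists.argmin()]

x0 = [8.0, 2.0]               # income=8, debt=2 -> rejected
cf = counterfactual(x0, target=1)
print(cf)                     # a nearby point the model approves
```

Note that such a search only exploits the model's decision boundary; as the paper argues, the resulting "what would have changed the outcome" statement reflects correlations learned by the model, not an intervention in a causal model.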

