This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning

12/22/2020
by   Silvan Mertes, et al.

With the ongoing rise of machine learning, the need for methods to explain decisions made by artificial intelligence systems is becoming an increasingly important topic. Especially for image classification tasks, many state-of-the-art tools for explaining such classifiers rely on visually highlighting important areas of the input data. In contrast, counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image so that the classifier would have made a different prediction. In doing so, users of counterfactual explanation systems are equipped with a completely different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. In this work, we present a novel approach to generating such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in a use case inspired by a healthcare scenario. Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
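To illustrate the core idea of a counterfactual explanation, the sketch below uses a toy stand-in classifier (a simple intensity threshold) and a naive perturbation search that nudges an image until the prediction flips. This is a minimal illustration of the counterfactual criterion only; the paper's actual approach trains an adversarial image-to-image translation network to produce realistic counterfactual images, which this sketch does not implement. All names here (`classify`, `counterfactual`) are hypothetical.

```python
import numpy as np

# Toy stand-in classifier: predicts class 1 if mean pixel intensity > 0.5.
# (Illustrative only; in practice this would be a learned image classifier.)
def classify(image):
    return int(image.mean() > 0.5)

# Naive counterfactual search: shift the image toward the decision boundary
# in small steps until the classifier's prediction flips, keeping the
# modification minimal. An adversarial image-to-image translation model
# would instead learn to produce realistic, semantically meaningful edits.
def counterfactual(image, step=0.01, max_iters=1000):
    original = classify(image)
    cf = image.copy()
    direction = -1.0 if original == 1 else 1.0
    for _ in range(max_iters):
        if classify(cf) != original:
            return cf  # found an input the classifier labels differently
        cf = np.clip(cf + direction * step, 0.0, 1.0)
    raise RuntimeError("no counterfactual found within iteration budget")

rng = np.random.default_rng(0)
img = rng.uniform(0.6, 0.9, size=(8, 8))  # bright image, classified as 1
cf = counterfactual(img)
print(classify(img), classify(cf))  # prediction flips: 1 -> 0
```

The counterfactual answers "what would the input have to look like for the classifier to decide differently?", which is the kind of explanatory information the abstract contrasts with saliency-map highlighting.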


Related research

- Alterfactual Explanations – The Relevance of Irrelevance for Explaining AI Systems (07/19/2022): Explanation mechanisms from the field of Counterfactual Thinking are a w...
- Explainable Image Classification with Evidence Counterfactual (04/16/2020): The complexity of state-of-the-art modeling techniques for image classif...
- Fast Real-time Counterfactual Explanations (07/11/2020): Counterfactual explanations are considered, which is to answer why the p...
- Structured Attention Graphs for Understanding Deep Image Classifications (11/13/2020): Attention maps are a popular way of explaining the decisions of convolut...
- Counterfactual Explanation and Instance-Generation using Cycle-Consistent Generative Adversarial Networks (01/21/2023): The image-based diagnosis is now a vital aspect of modern automation ass...
- Getting a CLUE: A Method for Explaining Uncertainty Estimates (06/11/2020): Both uncertainty estimation and interpretability are important factors f...
- Explaining Deep Graph Networks with Molecular Counterfactuals (11/09/2020): We present a novel approach to tackle explainability of deep graph netwo...
