Towards Human-Understandable Visual Explanations: Imperceptible High-frequency Cues Can Better Be Removed

04/16/2021
by Kaili Wang et al.

Explainable AI (XAI) methods focus on explaining what a neural network has learned, that is, on identifying the features that are most influential for the prediction. In this paper, we call these "distinguishing features". However, whether a human can make sense of the generated explanation also depends on whether these features are perceptible to humans. To make sure an explanation is human-understandable, we argue that the capabilities of humans, constrained by the Human Visual System (HVS) and psychophysics, need to be taken into account. We propose the human perceptibility principle for XAI, stating that, to generate human-understandable explanations, neural networks should be steered towards focusing on human-understandable cues during training. We conduct a case study on the classification of real vs. fake face images, where many of the distinguishing features picked up by standard neural networks turn out not to be perceptible to humans. By applying the proposed principle, we train a neural network with human-understandable explanations which, in a user study, is shown to align better with human intuition. This is likely to make the AI more trustworthy and opens the door to humans learning from machines. In the case study, we specifically investigate and analyze the behaviour of human-imperceptible high spatial frequency features in neural networks and XAI methods.
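The operation at the heart of the abstract, removing imperceptible high spatial frequency cues so that a network trains on what humans can actually see, can be illustrated with a low-pass filter in the Fourier domain. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function name remove_high_frequencies and the cutoff parameter (the kept radius as a fraction of the Nyquist frequency) are assumptions made for this example.

```python
import numpy as np

def remove_high_frequencies(image: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Suppress high spatial frequencies with an ideal low-pass filter
    in the Fourier domain.

    `image` is a 2D grayscale array; apply per channel for RGB.
    `cutoff` (a hypothetical parameter, not from the paper) is the kept
    radius as a fraction of the Nyquist frequency.
    """
    h, w = image.shape
    # Centered 2D spectrum of the image.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Distance of each frequency bin from the spectrum center,
    # normalized so that 1.0 corresponds to the Nyquist frequency.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    # Zero out everything beyond the cutoff radius (the high frequencies).
    spectrum[dist > cutoff] = 0
    # Back to the spatial domain; discard the tiny imaginary residue.
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```

Training the real-vs-fake classifier on images filtered this way would steer it towards cues that survive the low-pass filter and are therefore visible to humans. A hard cutoff like this can introduce ringing artifacts, so a Gaussian roll-off is a common, smoother alternative.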


