Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks

08/07/2019
by Jörg Wagner, et al.

To verify and validate networks, it is essential to gain insight into their decisions, their limitations, and possible shortcomings of the training data. In this work, we propose a post-hoc, optimization-based visual explanation method that highlights the evidence in the input image for a specific prediction. Our approach is based on a novel technique to defend against adversarial evidence (i.e., faulty evidence due to artefacts) by filtering gradients during optimization. The defense does not depend on human-tuned parameters. It enables explanations that are both fine-grained and preserve image characteristics such as edges and colors. The explanations are interpretable, suited for visualizing detailed evidence, and can be tested directly since they are valid model inputs. We qualitatively and quantitatively evaluate our approach on a multitude of models and datasets.
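The core idea above, optimizing an explanation on the input while filtering gradients so the optimizer cannot exploit adversarial artefacts, can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's method: the toy logistic "classifier", the sparsity weight `lam`, and the median-based gradient filter are all hypothetical stand-ins for the paper's actual model, objective, and defense.

```python
import numpy as np

# Hypothetical toy "classifier": logistic score of a weighted pixel sum.
rng = np.random.default_rng(0)
W = rng.normal(size=(16,))   # stand-in pixel weights
x = rng.normal(size=(16,))   # stand-in input "image"

def score(z):
    return 1.0 / (1.0 + np.exp(-W @ z))

# Optimize a soft preservation mask m in [0, 1]: keep the prediction
# confident while suppressing as much of the input as possible.
m = np.ones_like(x)
lam, lr = 0.05, 0.5
for _ in range(200):
    p = score(m * x)
    # d/dm of loss = -log(p) + lam * ||m||_1
    grad = -(1.0 - p) * W * x + lam * np.sign(m)
    # Gradient filtering (illustrative stand-in for the paper's defense):
    # zero out unusually large gradient components instead of following
    # them, so the optimizer cannot chase adversarial evidence.
    thresh = 3.0 * np.median(np.abs(grad)) + 1e-12
    grad = np.where(np.abs(grad) > thresh, 0.0, grad)
    m = np.clip(m - lr * grad, 0.0, 1.0)

evidence = m * x  # the preserved image evidence (a valid model input)
```

Because `evidence` lies in the model's input space, it can be fed back into the classifier to check that it still supports the original prediction, which is what makes such explanations testable.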


Related research

03/16/2023 · Fine-Grained and High-Faithfulness Explanations for Convolutional Neural Networks
Recently, explaining CNNs has become a research hotspot. CAM (Class Acti...

11/01/2018 · Towards Explainable NLP: A Generative Explanation Framework for Text Classification
Building explainable systems is a critical problem in the field of Natur...

02/15/2018 · Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
Deep models that are both effective and explainable are desirable in man...

04/22/2020 · Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations
The interest in complex deep neural networks for computer vision applica...

07/25/2018 · Grounding Visual Explanations
Existing visual explanation generating agents learn to fluently justify ...

01/01/2023 · NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants
Deploying reliable deep learning techniques in interdisciplinary applica...

04/21/2019 · GAN-based Generation and Automatic Selection of Explanations for Neural Networks
One way to interpret trained deep neural networks (DNNs) is by inspectin...
