
Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks

by   Jörg Wagner, et al.
University of Bonn

To verify and validate networks, it is essential to gain insight into their decisions, their limitations, and possible shortcomings of the training data. In this work, we propose a post-hoc, optimization-based visual explanation method that highlights the evidence in the input image for a specific prediction. Our approach is based on a novel technique to defend against adversarial evidence (i.e., faulty evidence due to artefacts) by filtering gradients during optimization. The defense does not depend on human-tuned parameters. It enables explanations that are fine-grained and preserve image characteristics such as edges and colors. The explanations are interpretable, suited for visualizing detailed evidence, and testable, since they are valid model inputs. We qualitatively and quantitatively evaluate our approach on a multitude of models and datasets.
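The abstract does not spell out the optimization itself, but the general idea of optimization-based explanation with a gradient-filtering defense can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the authors' implementation: the "model" is a fixed linear scorer, the explanation is a per-pixel mask optimized to preserve the score while staying sparse, and the defense is simplified to plain gradient clipping.

```python
import numpy as np

# Toy stand-in for a CNN: a fixed linear scorer over a 4x4 "image".
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # "network weights"
x = rng.normal(size=(4, 4))   # input image

def score(img):
    """Prediction score of the toy model."""
    return float((W * img).sum())

def explain(x, steps=200, lr=0.1, sparsity=0.05, clip=1.0):
    """Optimize a mask m in [0, 1] so that score(m * x) stays high
    while the mask stays sparse. Gradients are clipped, a crude
    stand-in for the paper's adversarial-evidence filtering."""
    m = np.full_like(x, 0.5)
    for _ in range(steps):
        # For the linear model, d score(m * x) / dm = W * x;
        # the sparsity term pushes mask values toward zero.
        g = W * x - sparsity * np.sign(m)
        g = np.clip(g, -clip, clip)        # filter extreme gradients
        m = np.clip(m + lr * g, 0.0, 1.0)  # gradient ascent, box-constrained
    return m

m = explain(x)  # high mask values mark pixels supporting the score
```

With a real CNN the gradient would come from backpropagation rather than a closed form, and the paper's defense filters gradients during optimization rather than merely clipping them, but the structure (iteratively optimizing a valid model input to isolate evidence) is the same.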



Related papers:

Fine-Grained and High-Faithfulness Explanations for Convolutional Neural Networks
Towards Explainable NLP: A Generative Explanation Framework for Text Classification
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations
Robust Ante-hoc Graph Explainer using Bilevel Optimization
NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants
Grounding Visual Explanations