Efficient Image Evidence Analysis of CNN Classification Results

01/05/2018
by   Keyang Zhou, et al.

Convolutional neural networks (CNNs) define the current state-of-the-art for image recognition. With their growing popularity, especially in critical applications like medical image analysis or self-driving cars, confirmability is becoming an issue. The black-box nature of trained predictors makes it difficult to trace failure cases or to understand the internal reasoning processes leading to results. In this paper we introduce a novel, efficient method to visualise the evidence that leads to decisions in CNNs. In contrast to network fixation or saliency map methods, our method is able to illustrate the evidence for or against a classifier's decision in input pixel space approximately 10 times faster than previous methods. We also show that our approach is less prone to noise and can focus on the most relevant input regions, thus making it more accurate and interpretable. Moreover, through simplifications we link our method with other visualisation methods, providing a general explanation for gradient-based visualisation techniques. We believe that our work makes network introspection more feasible for debugging and understanding deep convolutional networks, and that this will increase trust between humans and deep learning models.
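To make the family of gradient-based visualisation techniques mentioned above concrete, the following is a minimal sketch of a plain gradient saliency map, not the paper's method: for a tiny two-layer ReLU network (all weights and sizes here are invented for illustration), the relevance of input pixel i for class c is taken as the absolute gradient of the class score with respect to that pixel, computed by manual backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 16 input "pixels" -> 8 hidden units (ReLU) -> 3 class scores.
# Both weight matrices are random placeholders, not trained parameters.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(3, 8))

def saliency(x, c):
    """Absolute gradient of the score for class c w.r.t. the input x."""
    h_pre = W1 @ x                    # hidden pre-activations
    # d(score_c)/d(h) = W2[c]; gate by the ReLU derivative,
    # then map the gradient back to input space through W1.
    grad_h = W2[c] * (h_pre > 0)
    grad_x = W1.T @ grad_h
    return np.abs(grad_x)

x = rng.normal(size=16)
s = saliency(x, c=0)
print(s.shape)  # one relevance value per input "pixel": (16,)
```

Methods in the paper's comparison class refine this basic quantity, e.g. by reducing noise or concentrating relevance on the most informative regions, but the absolute input gradient is the common starting point.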



