InterpNET: Neural Introspection for Interpretable Deep Learning

10/26/2017
by Shane Barratt et al.

Humans are able to explain their reasoning; deep neural networks, by contrast, are not. This paper attempts to bridge that gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system's inner workings. It proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of its classifications. The success of the module relies on the assumption that the network's computation and reasoning are represented in its internal layer activations. While in principle InterpNET could be applied to any existing classification architecture, it is evaluated on an image classification and explanation task. Experiments on a CUB bird classification and explanation dataset show, both qualitatively and quantitatively, that the model generates high-quality explanations. While the current state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a much higher METEOR score of 37.9.
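
Below is a minimal, illustrative sketch (in PyTorch) of the design paradigm described above: a classifier computes its usual class scores, and a separate explanation module is conditioned on the classifier's internal layer activations to decode a natural-language explanation. This is not the authors' implementation; the module names, layer sizes, vocabulary, and the choice of an LSTM decoder are assumptions made purely for illustration.

```python
# Hypothetical sketch of the InterpNET idea: classify, then explain from
# the classifier's internal activations. All names and sizes are assumed.
import torch
import torch.nn as nn


class InterpNetSketch(nn.Module):
    def __init__(self, in_dim=2048, hidden_dim=512, num_classes=200,
                 vocab_size=1000, embed_dim=256):
        super().__init__()
        # Stand-in classification backbone; any existing classifier could be
        # used here, e.g. CNN features followed by fully connected layers.
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

        # Explanation module: conditioned on the concatenated internal
        # activations (the "introspection" signal) plus the class scores.
        interp_dim = hidden_dim + hidden_dim + num_classes
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim + interp_dim, hidden_dim,
                               batch_first=True)
        self.word_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, features, explanation_tokens):
        # Classification pass; keep the intermediate activations.
        h1 = torch.relu(self.fc1(features))
        h2 = torch.relu(self.fc2(h1))
        logits = self.classifier(h2)

        # Concatenate internal activations and class scores into the
        # conditioning vector for the explanation decoder.
        interp = torch.cat([h1, h2, logits], dim=-1)

        # Teacher-forced decoding: append the conditioning vector to each
        # word embedding and predict the next word at every step.
        emb = self.embed(explanation_tokens)                    # (B, T, E)
        cond = interp.unsqueeze(1).expand(-1, emb.size(1), -1)  # (B, T, D)
        out, _ = self.decoder(torch.cat([emb, cond], dim=-1))
        word_logits = self.word_out(out)                        # (B, T, V)
        return logits, word_logits


# Example forward pass with random data (4 images, 12-word explanations).
model = InterpNetSketch()
feats = torch.randn(4, 2048)                 # e.g. precomputed image features
tokens = torch.randint(0, 1000, (4, 12))     # explanation word indices
class_logits, word_logits = model(feats, tokens)
print(class_logits.shape, word_logits.shape)
```

In this sketch the explanation decoder sees only the internal activations and class scores, not the raw input, which mirrors the paper's assumption that the network's reasoning is captured in those activations.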

Related research

09/15/2017
Embedding Deep Networks into Visual Explanations
In this paper, we propose a novel explanation module to explain the pred...

05/31/2018
DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation
We propose DeepMiner, a framework to discover interpretable representati...

04/06/2021
White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks
In recent years, deep learning has become prevalent to solve application...

01/27/2018
Towards an Understanding of Neural Networks in Natural-Image Spaces
Two major uncertainties, dataset bias and perturbation, prevail in state...

02/22/2023
Stress and Adaptation: Applying Anna Karenina Principle in Deep Learning for Image Classification
Image classification with deep neural networks has reached state-of-art ...

10/06/2020
IS-CAM: Integrated Score-CAM for axiomatic-based explanations
Convolutional Neural Networks have been known as black-box models as hum...

05/27/2020
Explaining Neural Networks by Decoding Layer Activations
To derive explanations for deep learning models, ie. classifiers, we pro...