Methods for Interpreting and Understanding Deep Neural Networks

06/24/2017
by Grégoire Montavon, et al.

This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. It introduces some recently proposed techniques of interpretation, along with theory, tricks and recommendations, to make the most efficient use of these techniques on real data. It also discusses a number of practical applications.
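One class of techniques discussed in the tutorial is sensitivity analysis, which scores each input feature by the squared partial derivative of the predicted class score with respect to that feature. The snippet below is a minimal sketch of this idea, assuming PyTorch and a hypothetical pretrained image classifier `model`; it is an illustration of the general approach, not the paper's reference implementation.

import torch

def sensitivity_map(model, x, target_class):
    """Per-pixel sensitivity heatmap for one input image x of shape (C, H, W)."""
    model.eval()
    x = x.detach().clone().unsqueeze(0).requires_grad_(True)  # add batch dim, track gradients
    score = model(x)[0, target_class]                         # scalar score of the class of interest
    score.backward()                                          # compute d(score)/d(input)
    # Squared gradient, summed over color channels -> (H, W) relevance heatmap
    return (x.grad[0] ** 2).sum(dim=0)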

Related research

06/22/2018 · Visualizing and Understanding Deep Neural Networks in CTR Prediction
Although deep learning techniques have been successfully applied to many...

01/08/2019 · Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks
In this paper, we propose to disentangle and interpret contextual effect...

11/23/2018 · Representer Point Selection for Explaining Deep Neural Networks
We propose to explain the predictions of a deep neural network, by point...

05/13/2020 · Towards Interpretable Deep Learning Models for Knowledge Tracing
As an important technique for modeling the knowledge states of learners,...

11/17/2017 · Using KL-divergence to focus Deep Visual Explanation
We present a method for explaining the image classification predictions ...

03/21/2019 · Interpreting Neural Networks Using Flip Points
Neural networks have been criticized for their lack of easy interpretati...

01/08/2021 · An Information-theoretic Progressive Framework for Interpretation
Both brain science and the deep learning communities have the problem of...
