
Explaining and Interpreting LSTMs

09/25/2019
by Leila Arras, et al.
Johannes Kepler University Linz · Technische Universität Berlin (Berlin Institute of Technology) · Fraunhofer

While neural networks have acted as a strong unifying force in the design of modern AI systems, the neural network architectures themselves remain highly heterogeneous due to the variety of tasks to be solved. In this chapter, we explore how to adapt the Layer-wise Relevance Propagation (LRP) technique used for explaining the predictions of feed-forward networks to the LSTM architecture used for sequential data modeling and forecasting. The special accumulators and gated interactions present in the LSTM require both a new propagation scheme and an extension of the underlying theoretical framework to deliver faithful explanations.
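To make the idea of such a propagation scheme concrete, below is a minimal NumPy sketch, assuming the epsilon-stabilized LRP rule for linear layers and the "signal-take-all" treatment of gated products that the authors proposed in their earlier work on recurrent networks. The function names, array shapes, and the eps value are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def lrp_linear(w, z_in, z_out, rel_out, eps=1e-2):
    """LRP epsilon-rule for a linear mapping z_out = w @ z_in (+ b).

    Redistributes the output relevance rel_out onto the inputs in
    proportion to their contributions z_ij = w_ij * x_j; eps stabilizes
    the denominator (hyperparameter, assumed here).
    """
    denom = z_out + eps * np.where(z_out >= 0, 1.0, -1.0)  # stabilized outputs
    contrib = w * z_in[np.newaxis, :]                       # z_ij = w_ij * x_j
    return (contrib / denom[:, np.newaxis] * rel_out[:, np.newaxis]).sum(axis=0)

def lrp_gated_product(gate, source, rel_out):
    """'Signal-take-all' rule for a gated interaction out = gate * source:
    the full relevance flows to the source (e.g. the candidate cell input),
    none to the sigmoid gate, which acts only as a connection weight."""
    rel_source = rel_out                   # source inherits all relevance
    rel_gate = np.zeros_like(rel_out)      # gate receives none
    return rel_gate, rel_source
```

Under this scheme, relevance arriving at the LSTM cell state would pass entirely through the candidate values (the signal), while the input and forget gates only scale contributions without absorbing relevance themselves; the additive cell accumulator is then handled by the linear rule above.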
