Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks

10/27/2019
by   Aya Abdelsalam Ismail, et al.

Recent efforts to improve the interpretability of deep neural networks use saliency to characterize the importance of input features to predictions made by models. Work on interpretability using saliency-based methods on Recurrent Neural Networks (RNNs) has mostly targeted language tasks, and their applicability to time series data is less understood. In this work we analyze saliency-based methods for RNNs, for both classical and gated cell architectures. We show that RNN saliency vanishes over time, biasing detection of salient features only to later time steps, and that these methods are therefore incapable of reliably detecting important features at arbitrary time intervals. To address this vanishing saliency problem, we propose a novel RNN cell structure (input-cell attention), which can extend any RNN cell architecture. At each time step, instead of only looking at the current input vector, input-cell attention uses a fixed-size matrix embedding, each row of the matrix attending to different inputs from current or previous time steps. Using synthetic data, we show that the saliency map produced by the input-cell attention RNN is able to faithfully detect important features regardless of their occurrence in time. We also apply the input-cell attention RNN to a neuroscience task analyzing functional Magnetic Resonance Imaging (fMRI) data for human subjects performing a variety of tasks. In this case, we use saliency to characterize brain regions (input features) whose activity is important for distinguishing between tasks. We show that standard RNN architectures are only capable of detecting important brain regions in the last few time steps of the fMRI data, while the input-cell attention model is able to detect important brain region activity across time, without bias toward later time steps.
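The key idea in the abstract — replacing the per-step input vector with a fixed-size matrix embedding whose rows attend over current and previous inputs — can be sketched with a small self-attention computation. The sketch below is a minimal NumPy illustration, not the paper's implementation: the function name `input_cell_attention`, the random weights, and the dimensions (`d` input features, `da` hidden attention units, `r` attention rows) are all illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def input_cell_attention(X_t, W1, W2):
    """Sketch of an input-cell attention step (names/shapes assumed).

    X_t : (t, d) inputs observed up to the current time step.
    Returns M, a fixed-size (r, d) matrix embedding whose r rows each
    attend over the t time steps, and the attention weights A.
    """
    H = np.tanh(W1 @ X_t.T)        # (da, t) nonlinear projection of inputs
    A = softmax(W2 @ H, axis=-1)   # (r, t): each row is a distribution over time steps
    M = A @ X_t                    # (r, d): fixed size regardless of t
    return M, A

# Illustrative dimensions and random weights (assumptions, not from the paper).
rng = np.random.default_rng(0)
d, da, r = 8, 16, 4
W1 = rng.standard_normal((da, d))
W2 = rng.standard_normal((r, da))

X = rng.standard_normal((10, d))   # 10 time steps of d-dimensional input
M, A = input_cell_attention(X, W1, W2)
assert M.shape == (r, d)                    # fixed-size embedding
assert np.allclose(A.sum(axis=1), 1.0)      # each row attends with weights summing to 1
```

Because `M` always has shape `(r, d)` no matter how many time steps have been seen, it can be flattened and fed to any RNN cell (e.g. an LSTM cell) in place of the raw input vector, which is what lets the cell "see" early time steps directly at every step.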

