NeuroView-RNN: It's About Time

02/23/2022
by CJ Barberan, et al.

Recurrent Neural Networks (RNNs) are important tools for processing sequential data such as time series or video. Interpretability is defined as the ability to be understood by a person and is distinct from explainability, which is the ability to be explained in a mathematical formulation. A key interpretability issue with RNNs is that it is not clear how each hidden state per time step contributes quantitatively to the decision-making process. We propose NeuroView-RNN, a family of new RNN architectures that explains how all the time steps are used in the decision-making process. Each member of the family is derived from a standard RNN architecture by concatenating the hidden states across all time steps and feeding them into a global linear classifier. Because the classifier takes all the hidden states as input, its weights map linearly to the hidden states. Hence, from the weights, NeuroView-RNN can quantify how important each time step is to a particular decision. As a bonus, NeuroView-RNN also offers higher accuracy in many cases compared to standard RNNs and their variants. We showcase the benefits of NeuroView-RNN by evaluating it on a multitude of diverse time-series datasets.
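To make the architecture concrete, here is a minimal PyTorch sketch of the idea described above: a recurrent backbone whose hidden states at every time step are concatenated and fed to a single global linear classifier, so each time step owns a disjoint slice of the classifier's weight matrix. This is an illustrative sketch, not the authors' implementation; the class name, the GRU backbone, the fixed sequence length, and the weight-norm importance score are all assumptions.

```python
import torch
import torch.nn as nn


class NeuroViewRNNSketch(nn.Module):
    """Sketch of the NeuroView-RNN idea: concatenate the hidden state
    from every time step and feed the result to one global linear
    classifier, so classifier weights map linearly to time steps."""

    def __init__(self, input_dim, hidden_dim, seq_len, num_classes):
        super().__init__()
        # Assumed backbone; the paper describes a family of RNN variants.
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # One linear classifier over ALL time steps' hidden states.
        self.classifier = nn.Linear(seq_len * hidden_dim, num_classes)
        self.hidden_dim = hidden_dim

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        hidden, _ = self.rnn(x)             # (batch, seq_len, hidden_dim)
        flat = hidden.flatten(start_dim=1)  # concatenate across time
        return self.classifier(flat)        # (batch, num_classes)

    def time_step_importance(self, class_idx):
        # Reshape the class's weight row to (seq_len, hidden_dim); the
        # norm per time step is one (assumed) way to score each step's
        # contribution to that class's decision.
        w = self.classifier.weight[class_idx].view(-1, self.hidden_dim)
        return w.norm(dim=1)


# Example usage (hypothetical dimensions):
model = NeuroViewRNNSketch(input_dim=8, hidden_dim=32, seq_len=100, num_classes=5)
x = torch.randn(4, 100, 8)
logits = model(x)                                  # (4, 5)
scores = model.time_step_importance(class_idx=0)   # (100,) one score per step
```

Because every time step maps to its own slice of the weight matrix, importance can be read directly off the trained weights, with no gradient-based attribution required.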


Related research:

09/15/2021 · Interpretable Additive Recurrent Neural Networks For Multivariate Clinical Time Series
Time series models with recurrent neural networks (RNNs) can have high a...

08/04/2023 · Universal Approximation of Linear Time-Invariant (LTI) Systems through RNNs: Power of Randomness in Reservoir Computing
Recurrent neural networks (RNNs) are known to be universal approximators...

10/15/2021 · NeuroView: Explainable Deep Network Decision Making
Deep neural networks (DNs) provide superhuman performance in numerous co...

11/06/2017 · Neural Speed Reading via Skim-RNN
Inspired by the principles of speed reading, we introduce Skim-RNN, a re...

08/28/2023 · Kernel Limit of Recurrent Neural Networks Trained on Ergodic Data Sequences
Mathematical methods are developed to characterize the asymptotics of re...

06/07/2020 · Fusion Recurrent Neural Network
Considering deep sequence learning for practical application, two repres...

12/13/2020 · MEME: Generating RNN Model Explanations via Model Extraction
Recurrent Neural Networks (RNNs) have achieved remarkable performance on...
