Sequential Interpretability: Methods, Applications, and Future Directions for Understanding Deep Learning Models in the Context of Sequential Data

04/27/2020
by Benjamin Shickel, et al.

Deep learning continues to revolutionize an ever-growing number of critical application areas, including healthcare, transportation, finance, and basic sciences. Despite their increased predictive power, model transparency and human explainability remain significant challenges due to the "black box" nature of modern deep learning models. In many cases, the desired balance between interpretability and performance is predominantly task-specific. Human-centric domains such as healthcare necessitate a renewed focus on understanding how and why these frameworks arrive at critical and potentially life-or-death decisions. Given the quantity of research and the empirical successes of deep learning in computer vision, most existing interpretability research has focused on image processing techniques. Comparatively little attention has been paid to interpreting deep learning frameworks that operate on sequential data. Given recent deep learning advances in highly sequential domains such as natural language processing and physiological signal processing, the need for deep sequential explanations is at an all-time high. In this paper, we review current techniques for interpreting deep learning models that involve sequential data, identify similarities to non-sequential methods, and discuss current limitations and future avenues of sequential interpretability research.
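As a concrete illustration of the family of methods such a survey covers, the sketch below computes gradient-based saliency for a sequence model: the gradient of the predicted-class score with respect to each input time step indicates which steps most influenced the prediction. This is a minimal sketch assuming PyTorch; the model and all names (SimpleLSTM, x, target) are hypothetical placeholders, not code from the paper.

import torch
import torch.nn as nn

# Hypothetical toy model, not from the paper: an LSTM classifier over a
# univariate time series, used only to demonstrate gradient-based saliency.
class SimpleLSTM(nn.Module):
    def __init__(self, input_size=1, hidden_size=16, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):               # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])   # classify from the last time step

model = SimpleLSTM()
model.eval()

x = torch.randn(1, 50, 1, requires_grad=True)   # one random 50-step series
logits = model(x)
target = logits.argmax(dim=1).item()

# Saliency: gradient of the predicted-class logit with respect to each
# input time step; large magnitudes mark the steps driving the prediction.
logits[0, target].backward()
saliency = x.grad.abs().squeeze()               # shape: (50,)
print(saliency)

The same recipe applies to other sequence architectures; attention-based models additionally expose their attention weights, another commonly inspected (if debated) interpretability signal.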


