Interpretable Factorization for Neural Network ECG Models

06/26/2020
by Christopher Snyder, et al.

The ability of deep learning (DL) to improve the practice of medicine and its clinical outcomes faces a looming obstacle: model interpretation. Without a description of how outputs are generated, a collaborating physician can neither resolve when the model's conclusions conflict with their own, nor learn to anticipate model behavior. Current research aims to interpret networks that diagnose ECG recordings, which has great potential impact as recordings become more personalized and widely deployed. A generalizable impact beyond ECGs lies in the ability to provide a rich test-bed for the development of interpretive techniques in medicine. Interpretive techniques for Deep Neural Networks (DNNs), however, tend to be heuristic and observational in nature, lacking the mathematical rigor one might expect from the analysis of mathematical equations. The motivation of this paper is to offer a third option: a scientific approach. We treat the model output itself as a phenomenon to be explained through component parts and equations governing their behavior. We argue that these component parts should themselves be "black boxes": additional targets to interpret heuristically, each with a clear functional connection to the original model. We show how to rigorously factor a DNN into a hierarchical equation consisting of black box variables. This is not a subdivision into physical parts, like an organism into its cells; it is one choice of decomposition of an equation into a collection of abstract functions. Yet, for DNNs trained to identify normal ECG waveforms on PhysioNet 2017 Challenge data, we demonstrate that this choice yields interpretable component models identified with visual composite sketches of ECG samples in the corresponding input regions. Moreover, the recursion distills this interpretation: further factorization of the component black boxes corresponds to ECG partitions that are increasingly morphologically pure.
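The paper's factorization is defined rigorously in the full text; the sketch below is only a rough illustration of the general idea of splitting a trained model's output into "black box" components indexed by a partition of the input space, with each region summarized by a composite sketch of its ECG samples. The k-means partition over hidden activations, the toy data, and the function names (partition_inputs, factored_output, composite_sketch) are assumptions made for this example, not the method from the paper.

```python
import numpy as np

def partition_inputs(hidden, n_regions=4, n_iter=20, seed=0):
    """Assign each sample to an input region via a simple k-means on
    hidden activations (an assumption for this toy example only)."""
    rng = np.random.default_rng(seed)
    centers = hidden[rng.choice(len(hidden), n_regions, replace=False)]
    for _ in range(n_iter):
        dists = ((hidden[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(n_regions):
            if np.any(labels == k):
                centers[k] = hidden[labels == k].mean(axis=0)
    return labels

def factored_output(f_out, labels, n_regions):
    """Write f(x) = sum_k 1[x in R_k] * f_k(x) for a scalar model output
    f_out of shape (N,): each component reproduces the model on its
    region and contributes zero elsewhere, so the sum is exact."""
    components = [np.where(labels == k, f_out, 0.0) for k in range(n_regions)]
    recon = np.sum(components, axis=0)
    assert np.allclose(recon, f_out)  # exact by construction
    return components

def composite_sketch(ecg_batch, labels, k):
    """A crude stand-in for a 'visual composite sketch': the mean
    waveform of the ECG samples assigned to region k."""
    return ecg_batch[labels == k].mean(axis=0)

if __name__ == "__main__":
    # Toy data standing in for ECG segments, activations, and model outputs.
    rng = np.random.default_rng(1)
    ecg = rng.normal(size=(200, 300))
    hidden = np.tanh(ecg @ rng.normal(size=(300, 16)))
    f_out = 1.0 / (1.0 + np.exp(-hidden.sum(axis=1)))
    labels = partition_inputs(hidden, n_regions=4)
    comps = factored_output(f_out, labels, n_regions=4)
    print(composite_sketch(ecg, labels, k=0).shape, len(comps))
```

In this toy decomposition each component is itself a black box, so it can be inspected heuristically (here, by averaging the ECGs in its region), which mirrors the role the abstract assigns to component models without reproducing the paper's hierarchical construction.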


