An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics

02/21/2020
by Catarina Moreira, et al.

This paper explores interpretability techniques for two of the most successful learning algorithms in the medical decision-making literature: deep neural networks and random forests. We applied these algorithms to a real-world medical dataset of patients with cancer, learning models that predict a patient's cancer type from their medical activity records. We explored architectures based on long short-term memory (LSTM) networks as well as random forests. Since decision-makers increasingly need to understand the logic behind the predictions of black-box models, we also explored techniques that provide interpretations for these classifiers. In one technique, we intercepted hidden layers of the neural networks and used autoencoders to learn how the input is represented in those layers. In another, we fit an interpretable model locally around the random forest's prediction. Results show that learning an interpretable model locally around the model's prediction leads to a better understanding of why the algorithm makes a given decision. The local linear model helps identify the features used in the prediction for a specific instance or data point. Certain distinct features used in the predictions provide useful insights about the cancer type, while others do not generalize well. In addition, the structured deep learning approach using autoencoders provided meaningful prediction insights, identifying nonlinear clusters corresponding to the patients' different cancer types.
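
As a rough illustration of the local-surrogate idea described above, the following is a minimal sketch, not the authors' implementation: the toy data, feature count, perturbation scheme, and kernel are all assumptions. It perturbs one instance, weights the perturbations by proximity, and fits a linear model to the forest's predicted probabilities.

```python
# Minimal sketch of a local linear surrogate around one random forest
# prediction. All data and feature names here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-in for the medical dataset: 500 patients, 10 activity features.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # hypothetical cancer-type label

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain one instance: sample perturbations around it, weight them by
# proximity, and fit a weighted linear model to the forest's probabilities.
x0 = X[0]
perturbed = x0 + rng.normal(scale=0.3, size=(1000, x0.size))
probs = forest.predict_proba(perturbed)[:, 1]
weights = np.exp(-np.sum((perturbed - x0) ** 2, axis=1) / 2.0)

surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)

# Coefficients indicate which features drive this specific prediction.
for i in np.argsort(-np.abs(surrogate.coef_))[:3]:
    print(f"feature {i}: local weight {surrogate.coef_[i]:+.3f}")
```

In practice, libraries such as LIME implement this perturb-weight-fit procedure with more careful sampling and feature selection; the sketch above only shows the core mechanism.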
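
The autoencoder technique can be sketched in a similar hedged way: train an LSTM classifier, intercept a hidden layer, and compress its activations to a low-dimensional code that can be inspected for class clusters. The architecture sizes, layer names, and toy sequences below are assumptions, not the paper's setup.

```python
# Minimal sketch: autoencoder over intercepted LSTM hidden activations.
# Shapes, layer names, and data are hypothetical placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20, 8)).astype("float32")  # toy activity sequences
y = rng.integers(0, 2, size=500)                     # hypothetical labels

# LSTM classifier over the activity sequences.
inputs = keras.Input(shape=(20, 8))
hidden = layers.LSTM(32, name="hidden")(inputs)
outputs = layers.Dense(1, activation="sigmoid")(hidden)
clf = keras.Model(inputs, outputs)
clf.compile(optimizer="adam", loss="binary_crossentropy")
clf.fit(X, y, epochs=2, verbose=0)

# Intercept the hidden layer and train an autoencoder on its activations.
acts = keras.Model(inputs, clf.get_layer("hidden").output).predict(X, verbose=0)

ae_in = keras.Input(shape=(32,))
code = layers.Dense(2, name="code")(ae_in)  # 2-D latent space for inspection
decoded = layers.Dense(32)(code)
ae = keras.Model(ae_in, decoded)
ae.compile(optimizer="adam", loss="mse")
ae.fit(acts, acts, epochs=5, verbose=0)

# The 2-D codes can be scattered and coloured by class to look for the
# kind of nonlinear clusters the abstract describes.
latent = keras.Model(ae_in, code).predict(acts, verbose=0)
print(latent.shape)  # (500, 2)
```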
