Topological Interpretability for Deep-Learning

05/15/2023
by Adam Spannaus, et al.

With the increasing adoption of AI-based systems across everyday life, the need to understand their decision-making mechanisms is correspondingly accelerating. The level at which we can trust the statistical inferences made from AI-based decision systems is an increasing concern, especially in high-risk domains such as criminal justice or medical diagnosis, where incorrect inferences may have tragic consequences. Despite their successes in providing solutions to problems involving real-world data, deep learning (DL) models cannot quantify the certainty of their predictions and are frequently overconfident even when their solutions are incorrect. This work presents a method to infer prominent features in two DL classification models trained on clinical and non-clinical text by employing techniques from topological and geometric data analysis. We create a graph of a model's prediction space and cluster the inputs into the graph's vertices by similarity of features and prediction statistics. We then extract subgraphs that demonstrate high predictive accuracy for a given label. These subgraphs contain a wealth of information about the features that the DL model has recognized as relevant to its decisions. We infer these features for a given label using a distance metric between probability measures, and demonstrate the stability of our method compared to the LIME interpretability method. This work demonstrates that we may gain insight into the decision mechanism of a DL model, allowing us to ascertain whether the model is basing its decisions on information germane to the problem or on extraneous patterns within the data.
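The abstract describes the pipeline only at a high level. The sketch below is one plausible reading, assuming a Mapper-style construction over the prediction space and the Jensen-Shannon divergence as the distance between probability measures; the function names, the parameters `n_intervals`, `overlap`, and `eps`, and the choice of DBSCAN for clustering are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): build a graph over a classifier's
# prediction space, then compare feature distributions with a metric
# between probability measures.
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN
from scipy.spatial.distance import jensenshannon


def build_prediction_graph(embeddings, probs, n_intervals=10, overlap=0.3, eps=0.5):
    """Cover the predicted-confidence axis with overlapping intervals,
    cluster the inputs in each interval by feature similarity, and
    connect clusters that share inputs (a Mapper-style graph)."""
    lens = probs.max(axis=1)                     # confidence of the predicted label
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    graph, members = nx.Graph(), {}
    for i in range(n_intervals):
        a = lo + i * width - overlap * width     # overlapping interval bounds
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps).fit_predict(embeddings[idx])
        for c in set(labels) - {-1}:             # ignore DBSCAN noise points
            vertex = (i, int(c))
            members[vertex] = set(idx[labels == c])
            graph.add_node(vertex, size=len(members[vertex]))
    for u in members:                            # edge when two clusters share inputs
        for v in members:
            if u < v and members[u] & members[v]:
                graph.add_edge(u, v)
    return graph, members


def label_feature_divergence(p_subgraph, p_corpus):
    """Distance between two feature (e.g., token-frequency) distributions;
    Jensen-Shannon is one choice of metric between probability measures."""
    return jensenshannon(p_subgraph, p_corpus)
```

In this reading, high-accuracy subgraphs would be selected by scoring each vertex's member inputs against the true labels, and the prominent features for a label would be those whose distribution inside the subgraph diverges most from the corpus-wide distribution under the chosen metric.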
