Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network

09/05/2020
by   Erico Tjoa, et al.

The practical application of deep neural networks is still limited by their lack of transparency. One effort to provide explanations for decisions made by artificial intelligence (AI) is the use of saliency or heat maps that highlight the regions contributing most significantly to a model's prediction. A layer-wise amplitude filtering method was previously introduced to improve the quality of heatmaps, performing error correction by suppressing noise spikes. In this study, we generalize layer-wise error correction by considering any identifiable error and assuming that ground-truth interpretable information exists. We study the forms of errors propagated through layer-wise relevance methods and propose a filtering technique for rectifying the interpretability signal, tailored to the trend of signal amplitude in the particular neural network used. Finally, we put forth arguments for the use of ground-truth interpretable information.
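The abstract sketches a layer-wise error-correction pipeline without giving details. As a rough, hypothetical illustration of the kind of procedure it describes, the snippet below propagates relevance through a toy dense ReLU network using the standard LRP-epsilon rule and applies a simple amplitude clamp at each layer as a stand-in for noise-spike suppression. The epsilon rule, the median-based threshold k, and all function names here are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """One LRP-epsilon step through a dense layer (hypothetical sketch).

    weights:       (n_in, n_out) weight matrix
    activations:   (n_in,) inputs to the layer
    relevance_out: (n_out,) relevance arriving from the layer above
    """
    z = activations @ weights                          # pre-activations
    s = relevance_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
    return activations * (weights @ s)                 # relevance for inputs

def suppress_spikes(relevance, k=3.0):
    """Clamp entries whose magnitude exceeds k times the layer's median
    absolute relevance; a crude stand-in for the amplitude-tailored
    filter the abstract alludes to."""
    scale = np.median(np.abs(relevance)) + 1e-12
    return np.clip(relevance, -k * scale, k * scale)

# Toy two-layer ReLU network with random weights.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=8)
a1 = np.maximum(x @ W1, 0.0)
out = a1 @ W2

R2 = out * (out == out.max())   # start relevance at the winning logit
R1 = suppress_spikes(lrp_epsilon(W2, a1, R2))
R0 = suppress_spikes(lrp_epsilon(W1, x, R1))
print(R0)                        # per-input heatmap values
```

In the paper's framing, the clamp threshold would instead be chosen from the amplitude trend of the particular network, and the corrected relevance would be judged against ground-truth interpretable information.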


Related research

11/19/2019 · Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks
When artificial intelligence is used in the medical sector, interpretabi...

08/22/2023 · Explicability and Inexplicability in the Interpretation of Quantum Neural Networks
Interpretability of artificial intelligence (AI) methods, particularly d...

02/24/2020 · Breaking Batch Normalization for better explainability of Deep Neural Networks through Layer-wise Relevance Propagation
The lack of transparency of neural networks stays a major break for thei...

07/02/2023 · Minimum Levels of Interpretability for Artificial Moral Agents
As artificial intelligence (AI) models continue to scale up, they are be...

02/25/2022 · Deep Neural Network for Automatic Assessment of Dysphonia
The purpose of this work is to contribute to the understanding and impro...

07/14/2020 · Usefulness of interpretability methods to explain deep learning based plant stress phenotyping
Deep learning techniques have been successfully deployed for automating ...

10/15/2021 · Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
We study the effects of constrained optimization formulations and Frank-...
