A generalizable saliency map-based interpretation of model outcome

06/16/2020
by Shailja Thakur, et al.

One of the significant challenges of deep neural networks is that their complexity prevents humans from comprehending how the network arrives at its outcome. Consequently, the applicability of complex machine learning models is limited in safety-critical domains, where errors incur risk to life and property. To fully exploit the capabilities of complex neural networks, we propose a non-intrusive interpretability technique that uses only the input and output of the model to generate a saliency map. The method works by empirically optimizing a randomly initialized input mask, localizing and weighing individual pixels according to their sensitivity to the target class. Our experiments show that the proposed interpretability approach localizes the relevant input pixels better than existing saliency map-based approaches. Furthermore, to obtain a global perspective on the target-specific explanation, we propose a saliency map reconstruction approach that generates acceptable variations of the salient inputs from the input data distribution for which the model outcome remains unaltered. Experiments show that our interpretability method can reconstruct the salient part of the input with a classification accuracy of 89%.
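As a rough illustration of the mask-optimization idea described above, the sketch below optimizes a randomly initialized per-pixel mask so that the masked input preserves the model's confidence in the target class while the mask stays sparse. This is a minimal PyTorch sketch of the general masking-based saliency technique; the function name, hyperparameters, and loss weighting are assumptions for illustration, not the paper's exact objective.

```python
# Minimal sketch of masking-based saliency (illustrative, not the
# authors' exact method): optimize a mask so the masked input keeps
# the model's target-class confidence while the mask stays sparse.

import torch
import torch.nn.functional as F

def optimize_saliency_mask(model, image, target_class,
                           steps=300, lr=0.1, sparsity_weight=0.05):
    """Return a [0, 1] mask highlighting pixels salient for target_class.

    image: tensor of shape (1, C, H, W); model: frozen classifier.
    """
    model.eval()
    # Randomly initialized mask, one value per spatial location.
    mask = torch.rand(1, 1, *image.shape[2:], requires_grad=True)
    optimizer = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        m = torch.sigmoid(mask)          # keep mask values in [0, 1]
        masked = image * m               # occlude with a zero baseline
        logits = model(masked)
        score = F.log_softmax(logits, dim=1)[0, target_class]
        # Maximize the target-class score; penalize large masks so only
        # pixels the prediction is sensitive to remain "on".
        loss = -score + sparsity_weight * m.abs().mean()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask).detach()
```

The sparsity penalty is what turns the optimized mask into a saliency map: without it, the trivial all-ones mask maximizes the target score, whereas with it the mask concentrates weight on the pixels that most influence the target class.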


Related research

Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability (10/19/2020)
Saliency maps that identify the most informative regions of an image for...

MDM: Visual Explanations for Neural Networks via Multiple Dynamic Mask (07/17/2022)
The active region lookup of a neural network tells us which regions the ...

When saliency goes off on a tangent: Interpreting Deep Neural Networks with nonlinear saliency maps (10/13/2021)
A fundamental bottleneck in utilising complex machine learning systems f...

Valid P-Value for Deep Learning-Driven Salient Region (01/06/2023)
Various saliency map methods have been proposed to interpret and explain...

iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations (12/31/2020)
The black-box nature of the deep networks makes the explanation for "why...

Visualizing Uncertainty and Saliency Maps of Deep Convolutional Neural Networks for Medical Imaging Applications (07/05/2019)
Deep learning models are now used in many different industries, while in...

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound (11/05/2022)
Saliency methods compute heat maps that highlight portions of an input t...
