iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations

12/31/2020
by   Saeed Khorram, et al.

The black-box nature of deep networks makes explaining "why" they make certain predictions extremely challenging. Saliency maps are among the most widely used local explanation tools for this problem. One primary approach to generating saliency maps is to optimize a mask over the input dimensions so that masking influences the network's output as much as possible. However, prior work studies this influence only by removing evidence from the input. In this paper, we present iGOS++, a framework that generates saliency maps optimized to alter the output of a black-box system by either removing or preserving only a small fraction of the input. Additionally, we add a bilateral total variation term to the optimization, which improves the continuity of the saliency map, especially at high resolution and around thin object parts. Evaluation results comparing iGOS++ against state-of-the-art saliency map methods show significant improvement in locating salient regions that are directly interpretable by humans. We applied iGOS++ to the task of classifying COVID-19 cases from x-ray images and discovered that the CNN sometimes overfits to characters printed on the x-ray images when classifying. Fixing this issue through data cleansing significantly improved the precision and recall of the classifier.
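The combined objective described in the abstract can be sketched as follows. This is a minimal, illustrative NumPy formulation, not the paper's exact implementation: the mask semantics, the edge-aware weighting `exp(-|∇I|/sigma)` used for the bilateral TV term, and the regularization weights `lam_tv`/`lam_l1` are all assumptions chosen for clarity.

```python
import numpy as np

def bilateral_tv(mask, image, sigma=0.1):
    """Edge-aware total variation on the mask (illustrative form).

    Penalizes mask discontinuities, but down-weights the penalty where
    the underlying image itself changes sharply, so the mask may still
    follow thin object boundaries. The exp(-|dI|/sigma) weighting is
    one plausible choice, not necessarily the paper's.
    """
    dmx = np.abs(np.diff(mask, axis=0))   # vertical mask differences
    dmy = np.abs(np.diff(mask, axis=1))   # horizontal mask differences
    wx = np.exp(-np.abs(np.diff(image, axis=0)) / sigma)
    wy = np.exp(-np.abs(np.diff(image, axis=1)) / sigma)
    return float((wx * dmx).sum() + (wy * dmy).sum())

def igos_objective(mask, image, baseline, score_fn, lam_tv=0.01, lam_l1=0.01):
    """Deletion + preservation objective with bilateral TV and sparsity.

    mask entries in [0, 1] mark the salient region; `baseline` is the
    uninformative reference (e.g. a blurred image). We want the class
    score to drop when the region is deleted and to stay high when only
    that region is preserved.
    """
    deleted   = image * (1 - mask) + baseline * mask   # salient part removed
    preserved = image * mask + baseline * (1 - mask)   # only salient part kept
    return (score_fn(deleted) - score_fn(preserved)
            + lam_tv * bilateral_tv(mask, image)
            + lam_l1 * np.abs(mask).sum())
```

In the actual method the objective would be minimized over the mask by gradient descent against the network's class score; here `score_fn` can be any scalar function of the image for experimentation.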


Related research

01/30/2020  Black-Box Saliency Map Generation Using Bayesian Optimisation
11/26/2021  Reinforcement Explanation Learning
01/05/2018  Efficient Image Evidence Analysis of CNN Classification Results
01/26/2021  Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison
05/22/2017  Real Time Image Saliency for Black Box Classifiers
05/23/2022  What You See is What You Classify: Black Box Attributions
06/16/2020  A generalizable saliency map-based interpretation of model outcome
