Sanity Checks for Saliency Methods Explaining Object Detectors

Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, by extending these tests to various state-of-the-art object detectors, we show that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate saliency explanations, both for the object classification and the bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, all trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise, which motivates performing the sanity checks for each class. We find that EfficientDet-D0 is the most interpretable detector regardless of the saliency method, and it passes the sanity checks with few problems.
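
To illustrate the kind of test involved, below is a minimal sketch of the model-parameter randomization check from Adebayo et al., applied to a plain input-gradient saliency map. The toy CNN (a stand-in for a detector's score head), the plain-gradient explanation, the Spearman rank correlation metric, and the re-initialization scheme are all simplifying assumptions for illustration; the paper itself runs the checks on Faster R-CNN, SSD, and EfficientDet-D0 with Guided Backpropagation, Integrated Gradients, and their SmoothGrad variants. A saliency method passes the check if its maps degrade (low correlation with the original explanation) as the learned weights are progressively destroyed.

# Minimal sketch of the model-randomization sanity check (Adebayo et al.).
# Assumptions: a toy CNN score head instead of a full detector, plain input
# gradients instead of Guided Backpropagation / Integrated Gradients, and
# Spearman rank correlation as the similarity metric.
import copy
import torch
import torch.nn as nn
from scipy.stats import spearmanr

def saliency(model, x):
    """Absolute input gradient of the maximum output score, summed over channels."""
    x = x.clone().requires_grad_(True)
    model(x).max().backward()
    return x.grad.abs().sum(dim=1).squeeze(0)  # H x W saliency map

def randomize_layer_(layer):
    """Re-initialize a layer's parameters in place, destroying learned weights."""
    for p in layer.parameters():
        nn.init.normal_(p, std=0.02)

# Toy stand-in for a detector backbone plus classification score head.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5),
)
x = torch.rand(1, 3, 64, 64)
baseline = saliency(model, x)

# Cascading randomization: re-initialize layers from the output toward the input
# and check how strongly the saliency map still matches the original explanation.
randomized = copy.deepcopy(model)
for idx in reversed(range(len(randomized))):
    if any(True for _ in randomized[idx].parameters()):
        randomize_layer_(randomized[idx])
        corr, _ = spearmanr(baseline.flatten().detach().numpy(),
                            saliency(randomized, x).flatten().detach().numpy())
        print(f"layer {idx} randomized: rank correlation = {corr:.3f}")

In the full checks, such correlations are tracked per class as well, since (as noted above) the sensitivity to model parameters and data labels varies class-wise.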


