DExT: Detector Explanation Toolkit

State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations. Even with unprecedented advancements in detector performance, the inability to explain how their outputs are generated limits their use in safety-critical applications. Previous work fails to produce explanations for both bounding box and classification decisions, and generally provides individual explanations tailored to specific detectors. In this paper, we propose the open-source Detector Explanation Toolkit (DExT), which implements the proposed approach to generate a holistic explanation for all detector decisions using certain gradient-based explanation methods. We suggest various multi-object visualization methods to merge the explanations of multiple objects detected in an image, along with the corresponding detections, into a single image. The quantitative evaluation shows that the Single Shot MultiBox Detector (SSD) is explained more faithfully than the other detectors, regardless of the explanation method. Both quantitative and human-centric evaluations identify SmoothGrad with Guided Backpropagation (GBP) as the most trustworthy of the selected explanation methods across all detectors. We expect that DExT will motivate practitioners to evaluate object detectors from the interpretability perspective by explaining both bounding box and classification decisions.
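To make the gradient-based idea concrete, the sketch below shows the core of saliency and SmoothGrad on a toy scalar "detector" score. This is purely illustrative and not part of DExT's actual API: `toy_detector`, `saliency`, and `smoothgrad` are hypothetical names, the "detector" is a stand-in weighted sum, and gradients are estimated with finite differences rather than backpropagation so the example runs with the standard library alone.

```python
import random

def toy_detector(pixels):
    # Stand-in for one detector decision (e.g. the class confidence of a
    # detected box): a weighted sum of "pixels" squashed to (0, 1).
    weights = [0.9, 0.05, 0.05, 0.0]  # pixel 0 dominates the decision
    s = sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + 2.718281828 ** (-s))

def saliency(score_fn, pixels, eps=1e-5):
    # |d score / d pixel_i| for each input, via central finite differences
    # (a backprop-free stand-in for a gradient-based explanation).
    grads = []
    for i in range(len(pixels)):
        hi = list(pixels); hi[i] += eps
        lo = list(pixels); lo[i] -= eps
        grads.append(abs(score_fn(hi) - score_fn(lo)) / (2 * eps))
    return grads

def smoothgrad(score_fn, pixels, n=8, sigma=0.1, seed=0):
    # SmoothGrad: average the saliency map over several noisy copies of
    # the input to reduce gradient noise.
    rng = random.Random(seed)
    acc = [0.0] * len(pixels)
    for _ in range(n):
        noisy = [p + rng.gauss(0.0, sigma) for p in pixels]
        acc = [a + g for a, g in zip(acc, saliency(score_fn, noisy))]
    return [a / n for a in acc]

image = [0.5, 0.2, 0.8, 0.1]
plain = saliency(toy_detector, image)
smooth = smoothgrad(toy_detector, image)
# The pixel with the largest detector weight should be most salient.
print(plain.index(max(plain)))   # 0
print(smooth.index(max(smooth))) # 0
```

In DExT the same per-decision attribution is computed once for the classification score and once for each bounding-box coordinate, which is what allows both kinds of decisions to be explained and then merged into a single visualization.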

Related research

- Sanity Checks for Saliency Methods Explaining Object Detectors (06/04/2023)
- XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection (10/20/2022)
- Black-box Explanation of Object Detectors via Saliency Maps (06/05/2020)
- Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors (10/07/2022)
- ODAM: Gradient-based instance-specific visual explanations for object detection (04/13/2023)
- Crown-CAM: Reliable Visual Explanations for Tree Crown Detection in Aerial Images (11/23/2022)
- On Predictive Explanation of Data Anomalies (10/18/2021)
