Explaining YOLO: Leveraging Grad-CAM to Explain Object Detections

11/22/2022
by Armin Kirchknopf, et al.

We investigate the problem of explainability for visual object detectors. Specifically, taking the YOLO object detector as an example, we demonstrate how to integrate Grad-CAM into the model architecture and analyze the results. We show how to compute attribution-based explanations for individual detections and find that the normalization of the results has a significant impact on their interpretation.
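The paper does not reproduce its implementation here, but the core idea can be sketched. Below is a minimal, generic Grad-CAM routine in PyTorch for a single detection: hook a convolutional layer, backpropagate the scalar score of one detected box, weight the layer's activations by the spatially averaged gradients, and normalize the resulting map. The names grad_cam_for_detection and detection_score_fn are hypothetical helpers for illustration, not the authors' actual YOLO integration; the min-max normalization at the end is exactly the step the abstract flags as strongly influencing interpretation.

    import torch
    import torch.nn.functional as F

    def grad_cam_for_detection(model, image, target_layer, detection_score_fn):
        """Generic Grad-CAM for one detection (illustrative sketch).

        model:              any torch.nn.Module detector
        image:              input tensor of shape (1, 3, H, W)
        target_layer:       conv layer whose activations we attribute over
        detection_score_fn: maps the raw model output to the scalar score
                            of a single detection (e.g. objectness times
                            class probability of one predicted box)
        """
        activations, gradients = [], []

        # Capture the layer's forward activations and backward gradients.
        fwd = target_layer.register_forward_hook(
            lambda mod, inp, out: activations.append(out))
        bwd = target_layer.register_full_backward_hook(
            lambda mod, gin, gout: gradients.append(gout[0]))

        try:
            output = model(image)
            score = detection_score_fn(output)  # scalar for ONE detection
            model.zero_grad()
            score.backward()
        finally:
            fwd.remove()
            bwd.remove()

        A = activations[0]    # (1, C, h, w) feature maps
        dA = gradients[0]     # (1, C, h, w) gradients of the score

        # Grad-CAM: global-average-pool the gradients to get channel
        # weights, form a weighted sum of the feature maps, keep the
        # positive evidence.
        weights = dA.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * A).sum(dim=1, keepdim=True))

        # Upsample the coarse map to the input resolution.
        cam = F.interpolate(cam, size=image.shape[-2:],
                            mode="bilinear", align_corners=False)

        # Per-detection min-max normalization. As the paper notes, the
        # choice of normalization (per detection vs. a scale shared
        # across detections) changes how the maps should be read.
        cam = cam - cam.min()
        cam = cam / (cam.max() + 1e-8)
        return cam

In a YOLO-style model, detection_score_fn would typically select one box from the prediction tensor and return its objectness score multiplied by one class probability, so that the attribution map answers "what image evidence supports this particular detection" rather than explaining the output as a whole.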

