GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

09/02/2021
by   Oren Barkan, et al.

We present Gradient Activation Maps (GAM) - a mechanism for explaining predictions made by visual similarity and classification models. By gleaning localized gradient and activation information from multiple network layers, GAM offers improved visual explanations compared to existing alternatives. The algorithmic advantages of GAM are explained in detail and validated empirically, showing that GAM outperforms its alternatives across various tasks and datasets.
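The abstract describes combining gradient and activation information from multiple layers into a single explanation map. Below is a minimal NumPy sketch of that general idea in the Grad-CAM family: per-layer gradient-weighted activation maps are rectified, normalized, upsampled to a common resolution, and averaged. The function names, the nearest-neighbor upsampling, and the averaging aggregation are illustrative assumptions, not the authors' exact GAM formulation.

```python
import numpy as np

def layer_map(activations, gradients):
    # One layer's gradient-weighted activation map (Grad-CAM style):
    # channel weights = spatially averaged gradients, then a ReLU
    # keeps only regions that contribute positively.
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    cam = np.tensordot(weights, activations, axes=([0], [0]))   # (H, W)
    return np.maximum(cam, 0.0)

def upsample_nearest(m, size):
    # Nearest-neighbor upsampling so maps from different layers
    # share a common spatial resolution.
    h, w = m.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return m[np.ix_(rows, cols)]

def multi_layer_map(layers, size=8):
    # Aggregate per-layer maps; averaging normalized maps is an
    # assumption here - the paper's aggregation rule may differ.
    maps = []
    for acts, grads in layers:
        cam = upsample_nearest(layer_map(acts, grads), size)
        if cam.max() > 0:
            cam = cam / cam.max()
        maps.append(cam)
    return np.mean(maps, axis=0)

# Toy example: two layers (fine and coarse) with random tensors
# standing in for real activations and backpropagated gradients.
rng = np.random.default_rng(0)
layers = [
    (rng.random((4, 8, 8)), rng.random((4, 8, 8))),  # fine layer
    (rng.random((8, 4, 4)), rng.random((8, 4, 4))),  # coarse layer
]
heatmap = multi_layer_map(layers, size=8)
print(heatmap.shape)  # (8, 8)
```

In a real network, `activations` and `gradients` would be captured with forward/backward hooks on the chosen layers; the resulting heatmap is overlaid on the input image to localize the evidence behind a prediction.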



Related research

- Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models (05/05/2023): We examined whether embedding human attention knowledge into saliency-ba...
- Towards Visually Explaining Similarity Models (08/13/2020): We consider the problem of visually explaining similarity models, i.e., ...
- ODAM: Gradient-based instance-specific visual explanations for object detection (04/13/2023): We propose the gradient-weighted Object Detector Activation Maps (ODAM),...
- Towards Visually Explaining Variational Autoencoders (11/18/2019): Recent advances in Convolutional Neural Network (CNN) model interpretabi...
- Score-CAM: Improved Visual Explanations Via Score-Weighted Class Activation Mapping (10/03/2019): Recently, more and more attention has been drawn into the internal mecha...
- U-CAM: Visual Explanation using Uncertainty based Class Activation Maps (08/17/2019): Understanding and explaining deep learning models is an imperative task....
- Interpreting BERT-based Text Similarity via Activation and Saliency Maps (08/13/2022): Recently, there has been growing interest in the ability of Transformer-...
