
GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

by Oren Barkan et al.

We present Gradient Activation Maps (GAM), a machinery for explaining predictions made by visual similarity and classification models. By gleaning localized gradient and activation information from multiple network layers, GAM offers improved visual explanations compared to existing alternatives. The algorithmic advantages of GAM are explained in detail and validated empirically, showing that GAM outperforms its alternatives across various tasks and datasets.
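The core idea described above, combining localized gradient and activation evidence from several layers into one saliency map, can be sketched in NumPy. This is a hedged illustration, not the paper's actual algorithm: the function name, the ReLU-on-gradient-times-activation weighting, the nearest-neighbour upsampling, and the layer averaging are all assumptions chosen for simplicity.

```python
import numpy as np

def gradient_activation_map(activations, gradients, out_size=8):
    """Hypothetical GAM-style sketch: fuse per-layer gradient and
    activation evidence into a single normalized saliency map.

    activations, gradients: lists of arrays of shape (C, H, W),
    one pair per network layer; H and W must divide out_size.
    """
    maps = []
    for A, G in zip(activations, gradients):
        # Keep only positively contributing gradient-weighted
        # activations, then collapse the channel dimension.
        m = np.maximum(A * G, 0.0).sum(axis=0)          # shape (H, W)
        # Nearest-neighbour upsample every layer's map to a
        # common spatial resolution before fusing.
        scale = out_size // m.shape[0]
        m = np.kron(m, np.ones((scale, scale)))         # (out_size, out_size)
        maps.append(m)
    # Fuse layers by simple averaging (one of several plausible choices).
    gam = np.mean(maps, axis=0)
    # Normalize to [0, 1] for visualization as a heatmap.
    gam = (gam - gam.min()) / (gam.max() - gam.min() + 1e-8)
    return gam

# Toy usage with random "layer" tensors at two resolutions.
rng = np.random.default_rng(0)
acts = [rng.standard_normal((3, 4, 4)), rng.standard_normal((3, 8, 8))]
grads = [rng.standard_normal((3, 4, 4)), rng.standard_normal((3, 8, 8))]
heatmap = gradient_activation_map(acts, grads, out_size=8)
```

In a real setting the activation and gradient pairs would come from forward and backward hooks on chosen convolutional layers, with the gradient taken with respect to the classification score or the similarity score being explained.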




Related research:

- Towards Visually Explaining Similarity Models
- Towards Visually Explaining Variational Autoencoders
- Score-CAM: Improved Visual Explanations via Score-Weighted Class Activation Mapping
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
- U-CAM: Visual Explanation using Uncertainty based Class Activation Maps
- Interpreting BERT-based Text Similarity via Activation and Saliency Maps