Towards Visually Explaining Similarity Models

08/13/2020
by Meng Zheng, et al.

We consider the problem of visually explaining similarity models, i.e., explaining why a model predicts two images to be similar in addition to producing a scalar score. While much recent work in visual model interpretability has focused on gradient-based attention, these methods rely on a classification module to generate visual explanations. Consequently, they cannot readily explain models that do not use or need classification-like loss functions (e.g., similarity models trained with a metric learning loss). In this work, we bridge this crucial gap, presenting the first method to generate gradient-based visual explanations for image similarity predictors. By relying solely on the learned feature embedding, our approach can be applied to any CNN-based similarity architecture, an important step towards generic visual explainability. These visual explanations also serve more than interpretability: they can be infused into the model learning process itself through new trainable constraints derived from the explanations. We show that the resulting similarity models both perform better and can be better visually explained than the corresponding baselines trained without our explanation constraints. We demonstrate our approach with extensive experiments on three kinds of tasks: generic image retrieval, person re-identification, and low-shot semantic segmentation.
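To make the core idea concrete, below is a minimal sketch of a Grad-CAM-style explanation driven by a pairwise similarity score rather than a class logit, in the spirit of the method the abstract describes. It assumes a PyTorch CNN trunk whose pooled, L2-normalized output serves as the embedding; the function name `similarity_gradcam` and the choice of cosine similarity are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def similarity_gradcam(trunk, img_a, img_b):
    """Grad-CAM-style heatmap over image A, driven by its cosine
    similarity to image B (no classifier involved). Hypothetical sketch."""
    feats_a = trunk(img_a)            # (1, C, H, W) conv feature map
    feats_b = trunk(img_b)
    feats_a.retain_grad()             # keep gradients on conv activations

    # Pool to embeddings, L2-normalize, take a scalar similarity score.
    emb_a = F.normalize(feats_a.mean(dim=(2, 3)), dim=1)
    emb_b = F.normalize(feats_b.mean(dim=(2, 3)), dim=1)
    score = (emb_a * emb_b).sum()     # cosine similarity

    # Backpropagate the similarity score instead of a class logit.
    score.backward()
    weights = feats_a.grad.mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * feats_a).sum(dim=1))           # (1, H, W)
    return cam / (cam.max() + 1e-8)   # normalize to [0, 1]

# Usage: a ResNet-50 trunk up to its last conv block as the embedding CNN.
trunk = torch.nn.Sequential(
    *list(models.resnet50(weights=None).children())[:-2]).eval()
img_a = torch.rand(1, 3, 224, 224)
img_b = torch.rand(1, 3, 224, 224)
heatmap = similarity_gradcam(trunk, img_a, img_b)          # shape (1, 7, 7)
```

The only change from classifier-based Grad-CAM is the scalar being backpropagated: the pairwise similarity score replaces a class logit, which is what lets the explanation depend only on the learned embedding rather than on a classification module.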


Related research

Learning Similarity Attention (11/18/2019)
We consider the problem of learning similarity functions. While there ha...

Towards Visually Explaining Variational Autoencoders (11/18/2019)
Recent advances in Convolutional Neural Network (CNN) model interpretabi...

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps (09/02/2021)
We present Gradient Activation Maps (GAM) - a machinery for explaining p...

Understanding Deep Architectures by Interpretable Visual Summaries (01/27/2018)
A consistent body of research investigates the recurrent visual patterns...

Contrastive Corpus Attribution for Explaining Representations (09/30/2022)
Despite the widespread use of unsupervised models, very few methods are ...

This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition (11/05/2020)
Image recognition with prototypes is considered an interpretable alterna...

RELAX: Representation Learning Explainability (12/19/2021)
Despite the significant improvements that representation learning via se...
