Why do These Match? Explaining the Behavior of Image Similarity Models

05/26/2019
by Bryan A. Plummer, et al.

Explaining a deep learning model can help users understand its behavior and allow researchers to discern its shortcomings. Recent work has primarily focused on explaining models for tasks like image classification or visual question answering. In this paper, we introduce an explanation approach for image similarity models, where a model's output is a semantic feature representation rather than a classification. In this task, an explanation depends on both of the input images, so standard methods do not apply. We propose an explanation method that pairs a saliency map identifying important image regions with an attribute that best explains the match. We find that our explanations are more human-interpretable than saliency maps alone, and can also improve performance on the classic task of attribute recognition. The ability of our approach to generalize is demonstrated on two datasets from very different domains, Polyvore Outfits and Animals with Attributes 2.
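
To make the idea concrete, the sketch below shows one plausible way a saliency map for an image-similarity model could be computed: occlude patches of one image and measure how much the embedding similarity to the other image drops. This is an illustrative, occlusion-based approximation only, not the authors' exact method; the names `embed_model`, `similarity_saliency`, and the patch/stride parameters are assumptions.

```python
# Hypothetical occlusion-based saliency sketch for an image-similarity model.
# `embed_model` is any network mapping an image tensor (1, 3, H, W) to an
# embedding (1, D); it is an assumed interface, not the paper's model.
import torch
import torch.nn.functional as F

def similarity_saliency(embed_model, img_a, img_b, patch=16, stride=16):
    """Saliency over img_a: how much occluding each patch reduces the
    cosine similarity between the embeddings of img_a and img_b."""
    embed_model.eval()
    with torch.no_grad():
        emb_a = embed_model(img_a)
        emb_b = embed_model(img_b)
        base_sim = F.cosine_similarity(emb_a, emb_b).item()

        _, _, h, w = img_a.shape
        saliency = torch.zeros(h, w)
        counts = torch.zeros(h, w)

        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                occluded = img_a.clone()
                occluded[:, :, y:y+patch, x:x+patch] = 0.0  # blank out patch
                sim = F.cosine_similarity(embed_model(occluded), emb_b).item()
                drop = base_sim - sim  # larger drop => region matters more
                saliency[y:y+patch, x:x+patch] += drop
                counts[y:y+patch, x:x+patch] += 1

        return saliency / counts.clamp(min=1)
```

The attribute half of the explanation could then be approximated by scoring attribute classifiers on the salient regions of both images and reporting the highest-scoring shared attribute; this, too, is only one plausible realization rather than the paper's specific procedure.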

Related research

09/20/2023
COSE: A Consistency-Sensitivity Metric for Saliency on Image Classification
We present a set of metrics that utilize vision priors to effectively as...

05/05/2023
Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models
We examined whether embedding human attention knowledge into saliency-ba...

06/15/2022
ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features
Deep learning models have achieved remarkable success in different areas...

04/27/2021
Explaining in Style: Training a GAN to explain a classifier in StyleSpace
Image classification models can depend on multiple different semantic at...

02/22/2019
Saliency Learning: Teaching the Model Where to Pay Attention
Deep learning has emerged as a compelling solution to many NLP tasks wit...

09/18/2019
Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
A main issue preventing the use of Convolutional Neural Networks (CNN) i...