Model-Agnostic Explainability for Visual Search

02/28/2021
by Mark Hamilton, et al.

What makes two images similar? We propose new approaches to generate model-agnostic explanations for image similarity, search, and retrieval. In particular, we extend Class Activation Maps (CAMs), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME) to the domain of image retrieval and search. These approaches enable black- and grey-box model introspection, and can help diagnose errors and reveal the rationale behind a model's similarity judgments. Furthermore, we extend these approaches to extract a full pairwise correspondence between the pixels of the query and retrieved images, an approach we call "joint interpretations". Formally, we show that joint search interpretations arise from projecting Harsanyi dividends, and that this approach generalizes Shapley values and Shapley-Taylor indices. We introduce a fast kernel-based method for estimating Shapley-Taylor indices and empirically show that these game-theoretic measures yield more consistent explanations for image similarity architectures.
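To make the model-agnostic idea concrete, here is a minimal LIME-style sketch for a similarity score rather than a class probability: regions of the retrieved image are randomly masked, the black-box similarity is re-evaluated, and a weighted linear surrogate attributes the score to regions. Everything here is illustrative, not the paper's implementation; in particular, `similarity` is a hypothetical stand-in (plain cosine similarity on raw pixels) for a real retrieval model, and the grid segmentation and exponential kernel are simple placeholder choices.

```python
import numpy as np

def similarity(query, candidate):
    # Hypothetical black-box similarity: cosine similarity on raw pixels.
    q, c = query.ravel(), candidate.ravel()
    return float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-8))

def lime_similarity_explanation(query, candidate, grid=4, n_samples=500, seed=0):
    """Attribute similarity(query, candidate) to a grid of candidate regions:
    randomly switch regions off, re-score, and fit a weighted linear
    surrogate whose coefficients are the per-region attributions."""
    rng = np.random.default_rng(seed)
    h, w = candidate.shape[:2]
    n_regions = grid * grid
    Z = rng.integers(0, 2, size=(n_samples, n_regions))  # on/off region masks
    scores = np.empty(n_samples)
    for i, z in enumerate(Z):
        masked = candidate.copy()
        for r in np.flatnonzero(z == 0):                 # zero out "off" regions
            r0, c0 = divmod(r, grid)
            masked[r0 * h // grid:(r0 + 1) * h // grid,
                   c0 * w // grid:(c0 + 1) * w // grid] = 0
        scores[i] = similarity(query, masked)
    # Exponential kernel: samples closer to the unmasked image weigh more.
    weights = np.exp(-(n_regions - Z.sum(axis=1)) / n_regions)
    X = np.hstack([Z, np.ones((n_samples, 1))])          # intercept column
    sw = np.sqrt(weights)[:, None]
    coef, *_ = np.linalg.lstsq(sw * X, sw.ravel() * scores, rcond=None)
    return coef[:-1].reshape(grid, grid)                 # per-region attributions
```

Because the explainer only queries `similarity` through its inputs and outputs, the same loop works unchanged for any retrieval model, which is what "model-agnostic" buys here.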
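The formal claim about Harsanyi dividends can be illustrated on a small cooperative game. A sketch under standard definitions (not the paper's code): dividends are the Möbius inversion of the game's value function, and splitting each coalition's dividend equally among its members recovers the Shapley values, which is the sense in which dividends generalize them.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as tuples, from the empty set up to s itself."""
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_dividends(v, players):
    """Mobius inversion of the game v: d(S) = sum over T subset of S of
    (-1)^(|S|-|T|) * v(T)."""
    return {frozenset(S): sum((-1) ** (len(S) - len(T)) * v(frozenset(T))
                              for T in subsets(S))
            for S in subsets(players)}

def shapley_from_dividends(dividends, players):
    """Shapley value of player i = sum over coalitions S containing i of
    d(S)/|S|, i.e. each coalition splits its dividend equally."""
    phi = {i: 0.0 for i in players}
    for S, div in dividends.items():
        for i in S:
            phi[i] += div / len(S)
    return phi
```

For the two-player game v({}) = 0, v({1}) = 1, v({2}) = 2, v({1, 2}) = 4, the only joint dividend is d({1, 2}) = 4 - 1 - 2 + 0 = 1, giving Shapley values 1.5 and 2.5, matching the direct marginal-contribution formula.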


Related research

12/23/2021: AcME – Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box
In the context of human-in-the-loop Machine Learning applications, like ...

07/03/2019: Interpretable Counterfactual Explanations Guided by Prototypes
We propose a fast, model agnostic method for finding interpretable count...

04/01/2020: Ontology-based Interpretable Machine Learning for Textual Data
In this paper, we introduce a novel interpreting framework that learns a...

10/14/2020: Human-interpretable model explainability on high-dimensional data
The importance of explainability in machine learning continues to grow, ...

05/04/2023: Interpretable Regional Descriptors: Hyperbox-Based Local Explanations
This work introduces interpretable regional descriptors, or IRDs, for lo...

07/15/2019: A study on the Interpretability of Neural Retrieval Models using DeepSHAP
A recent trend in IR has been the usage of neural networks to learn retr...

03/06/2019: Camera Obscurer: Generative Art for Design Inspiration
We investigate using generated decorative art as a source of inspiration...
