This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

11/05/2020
by Meike Nauta, et al.

Image recognition with prototypes is considered an interpretable alternative to black-box deep learning models. Classification depends on the extent to which a test image "looks like" a prototype. However, perceptual similarity for humans can differ from the similarity learnt by the model. A user is unaware of the underlying classification strategy and does not know which image characteristic (e.g., color or shape) dominates the decision. We address this ambiguity and argue that prototypes should be explained. Visualizing prototypes alone can be insufficient for understanding what a prototype exactly represents, and why a prototype and an image are considered similar. We improve interpretability by automatically enhancing prototypes with extra information about the visual characteristics the model considers important. Specifically, our method quantifies the influence of color hue, shape, texture, contrast and saturation in a prototype. We apply our method to the existing Prototypical Part Network (ProtoPNet) and show that our explanations clarify the meaning of a prototype that might otherwise have been interpreted incorrectly. We also reveal that visually similar prototypes can have the same explanations, indicating redundancy. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition.
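To make the quantification idea concrete, below is a minimal Python sketch under one assumption: a characteristic's influence can be estimated by perturbing that characteristic alone and measuring how much the prototype's similarity score drops. The `similarity` callable, the `characteristic_importance` helper, and the specific transforms are hypothetical stand-ins for illustration, not the authors' implementation; shape is omitted because it is hard to perturb in isolation with simple image operations.

from PIL import Image, ImageEnhance, ImageFilter


def shift_hue(img: Image.Image, offset: int = 32) -> Image.Image:
    # Rotate the hue channel; PIL's HSV hue lives on a 0-255 scale.
    h, s, v = img.convert("HSV").split()
    h = h.point(lambda x: (x + offset) % 256)
    return Image.merge("HSV", (h, s, v)).convert("RGB")


# Each modifier perturbs one visual characteristic while leaving the
# others as untouched as possible.
MODIFIERS = {
    "hue":        shift_hue,
    "saturation": lambda im: ImageEnhance.Color(im).enhance(0.2),
    "contrast":   lambda im: ImageEnhance.Contrast(im).enhance(0.2),
    "texture":    lambda im: im.filter(ImageFilter.GaussianBlur(radius=3)),
}


def characteristic_importance(model, image, prototype_idx, similarity):
    # `similarity` is a hypothetical callable returning the model's
    # similarity score between `image` and one learned prototype.
    base = similarity(model, image, prototype_idx)
    scores = {}
    for name, modify in MODIFIERS.items():
        # A large drop in similarity after perturbing a characteristic
        # suggests the prototype relies on that characteristic.
        scores[name] = base - similarity(model, modify(image), prototype_idx)
    return scores

For example, a large drop for "hue" combined with a near-zero drop for "texture" would indicate a predominantly color-based prototype, even if its visualization suggests otherwise.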


Related research

12/03/2020
Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Interpretable machine learning addresses the black-box nature of deep ne...

12/11/2020
Color-related Local Binary Pattern: A Learned Local Descriptor for Color Image Recognition
Local binary pattern (LBP) as a kind of local feature has shown its simp...

11/22/2022
Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
Explaining black-box Artificial Intelligence (AI) models is a cornerston...

06/25/2019
Interpretable Image Recognition with Hierarchical Prototypes
Vision models are interpretable when they classify objects on the basis ...

08/13/2020
Towards Visually Explaining Similarity Models
We consider the problem of visually explaining similarity models, i.e., ...

08/22/2022
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
Prototypical part network (ProtoPNet) has drawn wide attention and boost...

07/03/2020
Interpretable Sequence Classification Via Prototype Trajectory
We propose a novel interpretable recurrent neural network (RNN) model, c...
