
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition

by Meike Nauta, et al.

Image recognition with prototypes is considered an interpretable alternative to black-box deep learning models. Classification depends on the extent to which a test image "looks like" a prototype. However, perceptual similarity for humans can differ from the similarity learned by the model. A user is unaware of the underlying classification strategy and does not know which image characteristics (e.g., color or shape) are dominant for the decision. We address this ambiguity and argue that prototypes should be explained. Visualizing prototypes alone can be insufficient for understanding what a prototype exactly represents, and why a prototype and an image are considered similar. We improve interpretability by automatically enhancing prototypes with extra information about the visual characteristics the model considers important. Specifically, our method quantifies the influence of color hue, shape, texture, contrast and saturation in a prototype. We apply our method to the existing Prototypical Part Network (ProtoPNet) and show that our explanations clarify the meaning of a prototype which might otherwise have been interpreted incorrectly. We also reveal that visually similar prototypes can have the same explanations, indicating redundancy. Because of the generality of our approach, it can improve the interpretability of any similarity-based method for prototypical image recognition.
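The abstract's core idea, quantifying how much a visual characteristic contributes to a prototype, can be sketched as an ablation test: remove one characteristic (e.g., saturation or contrast) from the image and measure the relative drop in the prototype-similarity score. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: `toy_similarity` is a hypothetical stand-in for the similarity a trained ProtoPNet would compute, and the modification functions are simplified.

```python
import numpy as np

def desaturate(img):
    """Replace each pixel with its luminance, removing hue and saturation."""
    gray = img.mean(axis=-1, keepdims=True)
    return np.repeat(gray, 3, axis=-1)

def reduce_contrast(img, factor=0.5):
    """Blend the image toward its global mean to lower contrast."""
    return img.mean() + factor * (img - img.mean())

def toy_similarity(img, prototype):
    """Hypothetical stand-in for a learned prototype-similarity score:
    cosine similarity between per-channel mean features. A real model
    would compare deep feature maps instead."""
    f = img.reshape(-1, 3).mean(axis=0)
    p = prototype.reshape(-1, 3).mean(axis=0)
    return float(f @ p / (np.linalg.norm(f) * np.linalg.norm(p) + 1e-9))

def characteristic_importance(img, prototype, modify, similarity=toy_similarity):
    """Importance of a characteristic = relative drop in similarity
    when that characteristic is removed from the image."""
    s_orig = similarity(img, prototype)
    s_mod = similarity(modify(img), prototype)
    return (s_orig - s_mod) / (abs(s_orig) + 1e-9)

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))    # toy "test image"
proto = rng.random((4, 4, 3))  # toy "prototype patch"
print(characteristic_importance(img, proto, desaturate))
print(characteristic_importance(img, proto, reduce_contrast))
```

A larger importance score means the prototype's similarity relies more heavily on that characteristic; scores near zero suggest the characteristic is irrelevant to the match.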




Neural Prototype Trees for Interpretable Fine-grained Image Recognition

Interpretable machine learning addresses the black-box nature of deep ne...

Color-related Local Binary Pattern: A Learned Local Descriptor for Color Image Recognition

Local binary pattern (LBP) as a kind of local feature has shown its simp...

Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models

Explaining black-box Artificial Intelligence (AI) models is a cornerston...

Interpretable Image Recognition with Hierarchical Prototypes

Vision models are interpretable when they classify objects on the basis ...

Interpretable Sequence Classification Via Prototype Trajectory

We propose a novel interpretable recurrent neural network (RNN) model, c...

ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition

Prototypical part network (ProtoPNet) has drawn wide attention and boost...

Understanding Deep Architectures by Interpretable Visual Summaries

A consistent body of research investigates the recurrent visual patterns...