Interpretable Image Recognition with Hierarchical Prototypes

06/25/2019
by Peter Hase, et al.

Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it is considered a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g., classifying a handgun as a weapon when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing each classification to be interpreted.
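The core idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the taxonomy, the hand-placed 2-D prototype vectors, the negative-squared-distance similarity, and the confidence threshold are all hypothetical stand-ins for the learned components described in the abstract. The sketch shows how descending a taxonomy of prototypes yields one interpretable prediction per level, and how an image from an unseen subclass can stop at the coarser level it still matches.

```python
# Hypothetical sketch of hierarchical prototype classification.
# Class names, prototype vectors, and the threshold are illustrative.

# Toy taxonomy: each node lists its child classes.
TAXONOMY = {
    "root": ["animal", "vehicle"],
    "animal": ["primate", "bird"],
    "primate": ["chimpanzee", "gorilla"],
}

# One hand-placed prototype vector per class at every taxonomy level
# (in the real model these would be learned feature prototypes).
PROTOTYPES = {
    "animal": (0.0, 1.0), "vehicle": (0.0, -1.0),
    "primate": (1.0, 1.0), "bird": (-1.0, 1.0),
    "chimpanzee": (1.5, 1.5), "gorilla": (0.5, 1.5),
}

def similarity(z, p):
    """Negative squared distance: larger means closer to the prototype."""
    return -sum((zi - pi) ** 2 for zi, pi in zip(z, p))

def classify(z, node="root", threshold=-4.5):
    """Descend the taxonomy, predicting the most similar child at each
    level. If even the best child falls below the threshold (a previously
    unseen subclass), stop and keep the coarser labels found so far."""
    path = []
    while node in TAXONOMY:
        scores = {c: similarity(z, PROTOTYPES[c]) for c in TAXONOMY[node]}
        best = max(scores, key=scores.get)
        if scores[best] < threshold:
            break  # unfamiliar at this level: keep the coarse label
        path.append(best)
        node = best
    return path

print(classify((1.4, 1.4)))  # familiar: ['animal', 'primate', 'chimpanzee']
print(classify((0.0, 3.0)))  # unseen subclass: ['animal']
```

Each element of the returned path comes with the prototype that won at that level, which is what makes the per-level predictions individually explainable, e.g. an unfamiliar weapon can still be labelled "weapon" even if no leaf prototype matches.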

