Neural Prototype Trees for Interpretable Fine-grained Image Recognition

by Meike Nauta, et al.

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, instead of generating post-hoc explanations that approximate a trained model. However, a large number of prototypes can be overwhelming. To reduce explanation size and improve interpretability, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in an interpretable decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at
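The routing described in the abstract (each internal node holds a trainable prototypical part, and the prototype's presence or absence in the image decides the branch taken) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class names, the squared-distance similarity over CNN patch features, and the soft exp(-distance) routing probability are simplifying assumptions for exposition.

```python
import numpy as np

class Leaf:
    """Leaf with a learnable class distribution (softmax of logits)."""
    def __init__(self, class_logits):
        e = np.exp(class_logits - class_logits.max())
        self.dist = e / e.sum()

    def predict(self, patches):
        return self.dist

class Node:
    """Internal node: routes on the similarity between its prototype and
    the closest patch of the input feature map (hypothetical simplification)."""
    def __init__(self, prototype, left, right):
        self.prototype = prototype      # trainable prototypical part, shape (D,)
        self.left, self.right = left, right

    def predict(self, patches):
        # patches: (N, D) feature vectors extracted by a CNN backbone
        d = np.min(np.sum((patches - self.prototype) ** 2, axis=1))
        p_right = np.exp(-d)            # prototype present -> route right
        return (p_right * self.right.predict(patches)
                + (1.0 - p_right) * self.left.predict(patches))

# Tiny two-leaf tree: prediction is a convex combination of leaf distributions.
root = Node(prototype=np.zeros(4),
            left=Leaf(np.array([2.0, 0.0])),
            right=Leaf(np.array([0.0, 2.0])))
pred = root.predict(np.ones((3, 4)))   # valid distribution over 2 classes
```

A path from root to leaf then reads exactly like the human reasoning in the abstract: each node asks whether one prototypical part is present, and the leaf reached gives the class.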




