Neural Prototype Trees for Interpretable Fine-grained Image Recognition

12/03/2020, by Meike Nauta, et al.

Interpretable machine learning addresses the black-box nature of deep neural networks. Visual prototypes have been suggested for intrinsically interpretable image recognition, instead of generating post-hoc explanations that approximate a trained model. However, a large number of prototypes can be overwhelming. To reduce explanation size and improve interpretability, we propose the Neural Prototype Tree (ProtoTree), a deep learning method that includes prototypes in an interpretable decision tree to faithfully visualize the entire model. In addition to global interpretability, a path in the tree explains a single prediction. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird! We tune the accuracy-interpretability trade-off using ensembling and pruning. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at https://github.com/M-Nauta/ProtoTree
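To make the routing idea concrete, below is a minimal, self-contained PyTorch-style sketch of a soft prototype tree in the spirit of the abstract: internal nodes hold trainable prototypical parts, and an image is routed based on how strongly its latent feature map matches each prototype. All class names, the 1x1 prototype size, the exponential similarity, and the hyperparameters are illustrative assumptions and not the authors' implementation; see the linked repository for the actual code.

```python
# Illustrative sketch only: a tiny soft prototype tree. Names, similarity
# function and sizes are assumptions, not the authors' implementation
# (see https://github.com/M-Nauta/ProtoTree for the real code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeNode(nn.Module):
    """Internal node: holds one trainable prototypical part and outputs the
    probability of routing an image to its right child."""

    def __init__(self, channels: int):
        super().__init__()
        # Prototype lives in the latent space of a CNN backbone (1x1 patch here).
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, H, W) latent feature map.
        # Squared L2 distance between the prototype and every spatial patch.
        dists = ((features - self.prototype) ** 2).sum(dim=1)     # (batch, H, W)
        min_dist = dists.flatten(1).min(dim=1).values             # (batch,)
        # Similarity in (0, 1]: prototype "present" -> route right with high prob.
        return torch.exp(-min_dist)


class Leaf(nn.Module):
    """Leaf: a trainable class distribution."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_classes))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return F.softmax(self.logits, dim=0).expand(features.size(0), -1)


class ProtoTreeSketch(nn.Module):
    """Full binary tree of a given depth; the prediction is the mixture of
    leaf distributions weighted by the soft path probabilities."""

    def __init__(self, depth: int, channels: int, num_classes: int):
        super().__init__()
        if depth == 0:
            self.leaf = Leaf(num_classes)
            self.node = self.left = self.right = None
        else:
            self.leaf = None
            self.node = PrototypeNode(channels)
            self.left = ProtoTreeSketch(depth - 1, channels, num_classes)
            self.right = ProtoTreeSketch(depth - 1, channels, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        if self.leaf is not None:
            return self.leaf(features)
        p_right = self.node(features).unsqueeze(1)                # (batch, 1)
        return p_right * self.right(features) + (1.0 - p_right) * self.left(features)


# Example: latent features from some backbone (e.g. a ResNet) for 4 images.
features = torch.randn(4, 256, 7, 7)
tree = ProtoTreeSketch(depth=3, channels=256, num_classes=200)
class_probs = tree(features)   # (4, 200); each row sums to 1
```

The sketch only covers the forward pass with soft routing; in the paper the tree is trained end-to-end on top of a CNN backbone, and prototypes are visualized as image patches, which is what gives the tree its global, human-readable explanation.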


Related research

- 11/05/2020: This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
- 04/03/2023: An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner
- 04/19/2019: Visualizing the decision-making process in deep neural decision forest
- 09/26/2022: Knowledge Distillation to Ensemble Global and Interpretable Prototype-Based Mammogram Classification Models
- 08/22/2022: ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
- 04/07/2022: Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME
- 07/21/2022: Learning Physics from the Machine: An Interpretable Boosted Decision Tree Analysis for the Majorana Demonstrator
