Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling

12/07/2022
by Yifei Zhang et al.

Prototype-based interpretability methods provide intuitive explanations of model predictions by comparing samples to a reference set of memorized exemplars, or typical representatives, in terms of similarity. In sequential data modeling, similarity computations against prototypes are usually based on encoded representation vectors. However, because of the highly recursive encoding functions, there is usually a non-negligible disparity between the prototype-based explanations and the original input. In this work, we propose a Self-Explaining Selective Model (SESM) that uses a linear combination of prototypical concepts to explain its own predictions. The model employs the idea of case-based reasoning by selecting the sub-sequences of the input that most strongly activate different concepts as prototypical parts, which users can compare to sub-sequences selected from other example inputs to understand model decisions. For better interpretability, we design multiple constraints, including diversity, stability, and locality, as training objectives. Extensive experiments in different domains demonstrate that our method exhibits promising interpretability and competitive accuracy.
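The abstract describes the method only at a high level. Below is a minimal, illustrative sketch of how such a selective prototype model might be wired up, assuming fixed-length sub-sequence windows, a GRU sub-sequence encoder, cosine similarity to learned concept vectors, and a simple diversity regularizer. The class name, hyperparameters, and design details are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a SESM-style selective prototype model.
# NOT the authors' code: windowing, encoder choice, and the single
# diversity penalty shown here are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelectivePrototypeSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64,
                 num_concepts=8, num_classes=2, window=5):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Learnable concept vectors that sub-sequences are compared against.
        self.concepts = nn.Parameter(torch.randn(num_concepts, hidden_dim))
        # Linear combination of concept activations -> class logits.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def encode_windows(self, x):
        # Slide a fixed-length window over the token sequence and encode
        # every sub-sequence independently: (batch, num_windows, hidden_dim).
        windows = x.unfold(dimension=1, size=self.window, step=1)
        b, n, w = windows.shape
        emb = self.embed(windows.reshape(b * n, w))
        _, h = self.encoder(emb)
        return h.squeeze(0).reshape(b, n, -1)

    def forward(self, x):
        h = self.encode_windows(x)                              # (b, n, d)
        # Cosine similarity of every sub-sequence to every concept.
        sims = F.cosine_similarity(
            h.unsqueeze(2), self.concepts.unsqueeze(0).unsqueeze(0), dim=-1
        )                                                       # (b, n, k)
        # Select, per concept, the sub-sequence that activates it most;
        # the argmax indices are the prototypical parts shown to the user.
        activations, part_idx = sims.max(dim=1)                 # (b, k)
        logits = self.classifier(activations)
        return logits, activations, part_idx

    def diversity_penalty(self):
        # Encourage concepts to stay mutually dissimilar (one of several
        # constraints mentioned in the abstract; stability and locality
        # terms are omitted from this sketch).
        c = F.normalize(self.concepts, dim=-1)
        off_diag = c @ c.t() - torch.eye(c.size(0), device=c.device)
        return off_diag.abs().mean()
```

At inference time, `part_idx` identifies which window of the input was selected for each concept; a user would inspect those windows alongside the corresponding windows selected from reference examples, while the linear classifier weights indicate how each concept contributes to the prediction.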
