Training Deep Models to be Explained with Fewer Examples

12/07/2021
by Tomoharu Iwata et al.

Although deep models achieve high predictive performance, it is difficult for humans to understand the predictions they make. Explainability is important for real-world applications to justify their reliability. Many example-based explanation methods have been proposed, such as representer point selection, in which an explanation model defined by a set of training examples is used to explain a prediction model. To improve interpretability, it is important to reduce the number of examples in the explanation model. However, explanations with fewer examples can be unfaithful, since it is difficult for such example-based explanation models to approximate the prediction model well. An unfaithful explanation means that the predictions of the explanation model differ from those of the prediction model. We propose a method for training deep models such that their predictions are faithfully explained by explanation models with a small number of examples. We train the prediction and explanation models simultaneously with a sparse regularizer that reduces the number of examples. The proposed method can be incorporated into any neural network-based prediction model. Experiments on several datasets demonstrate that the proposed method improves faithfulness while maintaining predictive performance.
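The idea of jointly training a prediction model and an example-based explanation model under a sparsity penalty can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the prediction model is linear, the explanation model is a representer-style weighted sum of training examples under an RBF kernel, and the L1 penalty on the example weights is handled with a proximal (soft-thresholding) step. All variable names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)
n = len(X)

# Prediction model: linear, f(x) = x . w
w = np.zeros(3)
# Explanation model (representer-style): g(x) = sum_i alpha_i * k(x_i, x)
alpha = np.zeros(n)
# RBF kernel matrix over the training examples (symmetric)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

lam_faith, lam_sparse, lr = 1.0, 0.02, 0.05
for _ in range(5000):
    f = X @ w      # prediction model outputs on the training data
    g = K @ alpha  # explanation model outputs on the training data
    # Joint objective: prediction error + faithfulness term + L1 sparsity.
    # Gradient step for the prediction model (pulled toward both y and g):
    w -= lr * (X.T @ (f - y) + lam_faith * X.T @ (f - g)) / n
    # Gradient step for the smooth part of the explanation loss:
    a = alpha - lr * lam_faith * K @ (g - f) / n
    # Proximal soft-thresholding for the L1 penalty: drives many example
    # weights exactly to zero, shrinking the explanation's support set.
    alpha = np.sign(a) * np.maximum(np.abs(a) - lr * lam_sparse, 0.0)

n_used = int((np.abs(alpha) > 0).sum())
print(f"examples used in the explanation: {n_used} / {n}")
```

Because the faithfulness term is part of the training loss, the prediction model is encouraged to stay close to something a sparse set of examples can represent, rather than fitting the explanation only after the fact; with the sparsity weight set to zero, essentially every training example receives a nonzero weight.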

