Why is the prediction wrong? Towards underfitting case explanation via meta-classification

02/20/2023
by Sheng Zhou, et al.

In this paper we present a heuristic method that provides individual explanations for the elements of a dataset (data points) that are wrongly predicted by a given classifier. Since the general case is too difficult, in the present work we focus on faulty data from an underfitted model. First, we project the faulty data into a hand-crafted, and thus human-readable, intermediate representation (meta-representation, profile vectors), with the aim of separating the two main causes of misclassification: either the classifier is not strong enough, or the data point lies in a region of the input space where the classes are not separable. Second, in the space of these profile vectors, we present a method to fit a meta-classifier (a decision tree) and to express its output as a set of interpretable (human-readable) explanation rules, which lead to several target diagnosis labels: a data point is either correctly classified, faulty due to a too-weak model, or faulty due to mixed (overlapping) classes in the input space. Experimental results on several real datasets show more than 80% accuracy, and the proposed intermediate representation achieves a high degree of invariance with respect to both the classifier used in the input space and the dataset being classified, i.e. we can learn the meta-classifier on one dataset with a given classifier and successfully predict diagnosis labels for a different dataset or classifier (or both).
