PERFEX: Classifier Performance Explanations for Trustworthy AI Systems

12/12/2022
by Erwin Walraven, et al.

Explainability of a classification model is crucial when it is deployed in real-world decision support systems. Explanations make predictions actionable for the user and should inform them about the capabilities and limitations of the system. Existing explanation methods, however, typically provide explanations only for individual predictions. Information about the conditions under which the classifier is able to support the decision maker is not available, even though knowing, for instance, when the system cannot differentiate between classes can be very helpful. During development, such information can guide the search for new features or the combination of models, and during operation it supports decision makers in deciding, for example, not to use the system. This paper presents PERFormance EXplainer (PERFEX), a method to explain the qualities of a trained base classifier. Our method consists of a meta tree learning algorithm that predicts and explains under which conditions the base classifier has high or low error, or high or low values of any other classification performance metric. We evaluate PERFEX using several classifiers and datasets, including a case study with urban mobility data. PERFEX typically achieves high meta-prediction performance while producing compact explanations, even when the base classifier can hardly differentiate between classes.
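The abstract only outlines the meta-learning idea, so the snippet below is a rough sketch of how such a performance explainer could be built: train a base classifier, label held-out instances by whether the base classifier predicted them correctly, and fit a shallow decision tree on those labels so that each root-to-leaf path describes a data region where the base classifier tends to be right or wrong. The dataset, models, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a meta tree that explains where a base classifier errs.
# Illustrative only; PERFEX itself may differ in training and metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_meta, y_train, y_meta = train_test_split(
    data.data, data.target, random_state=0
)

# Train the base classifier whose behaviour we want to explain.
base = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Label each held-out instance by whether the base classifier got it right;
# any other per-instance performance signal could be used instead.
correct = (base.predict(X_meta) == y_meta).astype(int)

# Fit a small meta tree on the correctness labels; a shallow depth keeps
# the resulting explanations compact.
meta = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_meta, correct)

# Each root-to-leaf path now describes a region of the input space together
# with whether the base classifier tends to be correct or incorrect there.
print(export_text(meta, feature_names=list(data.feature_names)))
```

Reading off the printed tree paths yields conditional statements of the form "when these feature thresholds hold, the base classifier is usually wrong", which is the kind of performance explanation the abstract describes.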

Related research:

- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations (01/18/2023). AI explanations are often mentioned as a way to improve human-AI decisio...
- Why is the prediction wrong? Towards underfitting case explanation via meta-classification (02/20/2023). In this paper we present a heuristic method to provide individual explan...
- How to Explain Individual Classification Decisions (12/06/2009). After building a classifier with modern tools of machine learning we typ...
- Bounded logit attention: Learning to explain image classifiers (05/31/2021). Explainable artificial intelligence is the attempt to elucidate the work...
- Won't you see my neighbor?: User predictions, mental models, and similarity-based explanations of AI classifiers (01/31/2022). Humans should be able to work more effectively with artificial intelligence...
- t-SS3: a text classifier with dynamic n-grams for early risk detection over text streams (11/11/2019). A recently introduced classifier, called SS3, has been shown to be well suite...
- Finding Minimum-Cost Explanations for Predictions made by Tree Ensembles (03/16/2023). The ability to explain why a machine learning model arrives at a particu...
