"Even if ..." – Diverse Semifactual Explanations of Reject

07/05/2022
by André Artelt, et al.

Machine learning based decision making systems applied in safety critical areas require reliable, high-certainty predictions. For this purpose, the system can be extended by a reject option, which allows it to reject inputs for which only a prediction with unacceptably low certainty would be possible. While being able to reject uncertain samples is important, it is equally important to be able to explain why a particular sample was rejected. With the ongoing rise of eXplainable AI (XAI), many explanation methodologies for machine learning based systems have been developed; explaining reject options, however, is still a novel field with very little prior work. In this work, we propose to explain rejects by semifactual explanations, an instance of example-based explanation methods, which themselves have not yet been widely considered in the XAI community. We propose a conceptual modeling of semifactual explanations for arbitrary reject options and empirically evaluate a specific implementation on a conformal prediction based reject option.
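To make the two ingredients concrete, the following is a minimal sketch (not the paper's algorithm) of a certainty-threshold reject option together with a naive search for an "even if ..." semifactual of a reject: a point far from the original input that is still rejected. The toy certainty function, the threshold, and all names are assumptions for illustration only.

```python
# Illustrative sketch, assuming a 1D toy model. A semifactual of a
# reject answers: "even if the input changed this much, it would
# still be rejected."

def certainty(x):
    """Toy certainty score in [0, 1]: high far from the decision
    boundary at x = 0, low close to it."""
    return abs(x) / (1.0 + abs(x))

THRESHOLD = 0.4  # reject inputs whose certainty falls below this

def is_rejected(x):
    """Reject option: refuse to predict when certainty is too low."""
    return certainty(x) < THRESHOLD

def semifactual_of_reject(x, step=0.01, max_dist=10.0):
    """Naive grid search for the point farthest from x (within
    max_dist) that is STILL rejected. Returns None if x itself is
    not rejected."""
    if not is_rejected(x):
        return None
    best = x
    d = step
    while d <= max_dist:
        for cand in (x - d, x + d):
            if is_rejected(cand):
                best = cand  # farther point with the same outcome
        d += step
    return best

x = 0.1                         # close to the boundary -> rejected
sf = semifactual_of_reject(x)   # a distant input that is also rejected
```

A diverse set of such explanations, as the title suggests, would collect several well-separated points of this kind rather than a single one; a real implementation would also constrain the search to plausible inputs.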


