Model Agnostic Local Explanations of Reject

05/16/2022
by André Artelt et al.

The application of machine-learning-based decision-making systems in safety-critical areas requires reliable, high-certainty predictions. Reject options are a common way of ensuring a sufficiently high certainty of the predictions made by the system. While the ability to reject uncertain samples is important, it is equally important to be able to explain why a particular sample was rejected. However, explaining general reject options is still an open problem. We propose a model-agnostic method for locally explaining arbitrary reject options by means of interpretable models and counterfactual explanations.
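For intuition, the following is a minimal sketch (in Python with scikit-learn, which the paper does not prescribe) of the ingredients the abstract names: a confidence-threshold reject option, a LIME-style interpretable local model of the reject behaviour, and a crude greedy counterfactual search for a rejected sample. Everything here, including the classifier, the 0.8 threshold, and the helpers reject, local_surrogate, and counterfactual_of_reject, is an illustrative assumption rather than the authors' actual algorithm.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Any black-box classifier works; the explanations below never look inside it.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

THRESHOLD = 0.8  # assumed certainty threshold for the reject option


def reject(x):
    """Reject option: abstain whenever the top class probability is too low."""
    return clf.predict_proba(x.reshape(1, -1))[0].max() < THRESHOLD


def local_surrogate(x, n=500, scale=0.3, seed=0):
    """Interpretable local model: fit a shallow decision tree to the reject
    behaviour in a sampled neighbourhood of x (LIME-style)."""
    rng = np.random.RandomState(seed)
    Xs = x + rng.normal(scale=scale, size=(n, x.shape[0]))
    ys = np.array([reject(xi) for xi in Xs])
    return DecisionTreeClassifier(max_depth=2, random_state=seed).fit(Xs, ys)


def counterfactual_of_reject(x, step=0.05, max_iter=200):
    """Counterfactual explanation: greedily nudge single features until the
    sample is accepted (a crude stand-in for a proper counterfactual solver)."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if not reject(x_cf):
            return x_cf
        best, best_conf = x_cf, -np.inf
        for i in range(x_cf.shape[0]):          # try each feature ...
            for delta in (step, -step):         # ... in both directions
                cand = x_cf.copy()
                cand[i] += delta
                conf = clf.predict_proba(cand.reshape(1, -1))[0].max()
                if conf > best_conf:
                    best, best_conf = cand, conf
        x_cf = best
    return None  # no accepted point found within the search budget


# Explain the first rejected test sample, if there is one.
for x in X_te:
    if reject(x):
        print("rejected sample:", x)
        print(export_text(local_surrogate(x)))  # which regions get rejected
        x_cf = counterfactual_of_reject(x)
        if x_cf is not None:
            print("counterfactual feature changes:", np.round(x_cf - x, 2))
        break
```

The paper targets arbitrary reject options; the sketch merely instantiates the idea for the simplest confidence-threshold variant, explaining the reject decision once as a shallow tree and once as a minimal feature change that flips the decision.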


Related research

- 07/05/2022: "Even if ..." – Diverse Semifactual Explanations of Reject
- 02/15/2022: Explaining Reject Options of Learning Vector Quantization Classifiers
- 01/21/2020: Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach
- 02/10/2023: Two-step counterfactual generation for OOD examples
- 10/06/2021: Shapley variable importance clouds for interpretable machine learning
- 11/17/2016: Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
- 11/30/2020: Why model why? Assessing the strengths and limitations of LIME
