Calibrated Explanations for Regression

08/30/2023
by Tuwe Löfström, et al.

Artificial Intelligence (AI) is often an integral part of modern decision support systems (DSSs). The best-performing predictive models used in AI-based DSSs typically lack transparency. Explainable Artificial Intelligence (XAI) aims to create AI systems that can explain their rationale to human users. Local explanations in XAI can provide information about the causes of individual predictions in terms of feature importance. However, a critical drawback of existing local explanation methods is their inability to quantify the uncertainty associated with a feature's importance. This paper extends the feature importance explanation method Calibrated Explanations (CE), previously supporting only classification, to standard regression and to probabilistic regression, i.e., estimating the probability that the target exceeds an arbitrary threshold. The regression extension retains all the benefits of CE, such as calibration of the underlying model's prediction with confidence intervals and uncertainty quantification of feature importance, and supports both factual and counterfactual explanations. CE for standard regression provides fast, reliable, stable, and robust explanations. CE for probabilistic regression provides an entirely new way of creating probabilistic explanations from any ordinary regression model, with dynamic selection of thresholds. In terms of stability and speed, CE for probabilistic regression is comparable to LIME. The method is model-agnostic, with easily understood conditional rules. A Python implementation is freely available on GitHub and installable via pip, making the results in this paper easily replicable.
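To make the probabilistic-regression idea concrete, the following is a minimal sketch (not the Calibrated Explanations API, whose class and function names are not reproduced here) of how a held-out calibration set can turn any point regressor into an estimate of P(target > threshold), in the spirit of conformal predictive systems. The dataset, model choice, and `prob_above` helper are illustrative assumptions:

```python
# Sketch of estimating P(target > threshold) from an ordinary regression model
# using a calibration set; NOT the calibrated-explanations package API.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Residuals on the calibration set define an empirical predictive
# distribution around any new point prediction.
cal_residuals = np.sort(y_cal - model.predict(X_cal))

def prob_above(x, threshold):
    """Estimate P(y > threshold | x) from the calibration residuals (illustrative helper)."""
    pred = model.predict(x.reshape(1, -1))[0]
    # Fraction of calibration residuals that place the outcome above the threshold.
    return float(np.mean(pred + cal_residuals > threshold))

print(prob_above(X_test[0], threshold=0.0))
```

Because the threshold is only applied at explanation time, it can be chosen dynamically per query instance, which is the property the abstract highlights for CE's probabilistic regression mode.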


