Delivering Inflated Explanations

06/27/2023
by Yacine Izza, et al.

In the quest for Explainable Artificial Intelligence (XAI), a question that frequently arises about a decision made by an AI system is: "why was the decision made in this way?" Formal approaches to explainability build a formal model of the AI system and use it to reason about the system's properties. Given the feature values of an instance to be explained and the resulting decision, a formal abductive explanation is a set of features such that, whenever those features take the given values, the same decision is always reached. Such an explanation is useful: it shows that only some of the features were used in making the final decision. But it is also narrow: it only shows that the decision is unchanged if the selected features keep their given values, while in fact some features might change value and still lead to the same decision. In this paper we formally define inflated explanations, which consist of a set of features and, for each feature, a set of values (always including the value of the instance being explained) such that the decision remains unchanged. Inflated explanations are more informative than abductive explanations since, for example, they reveal whether the exact value of a feature is important or any nearby value would do. Overall, they allow us to better understand the role of each feature in the decision. We show that inflated explanations can be computed at not much greater cost than abductive explanations, and that duality results for abductive explanations extend to inflated explanations as well.
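The two notions in the abstract can be illustrated with a brute-force sketch on a toy classifier over small discrete feature domains. All names here (`classify`, `DOMAINS`, the greedy-deletion strategy) are illustrative assumptions, not the paper's actual algorithms: an abductive explanation is found by greedily freeing features whose values do not affect the decision, and it is then inflated by growing each remaining feature's admissible value set while the decision stays invariant.

```python
from itertools import product

# Hypothetical toy setup: three discrete features, each with domain {0, 1, 2}.
DOMAINS = [(0, 1, 2), (0, 1, 2), (0, 1, 2)]

def classify(x):
    # Toy decision: depends only on x[0] >= 1 and x[1] != 2; x[2] is irrelevant.
    return 1 if x[0] >= 1 and x[1] != 2 else 0

def invariant(value_sets, target):
    """True iff every assignment drawn from value_sets yields `target`."""
    return all(classify(x) == target for x in product(*value_sets))

def abductive_explanation(instance):
    """Greedy deletion: tentatively let one feature at a time range over its
    whole domain; keep it fixed only if freeing it can change the decision.
    Returns one value set per feature ({v} = fixed, full domain = free)."""
    target = classify(instance)
    sets = [{v} for v in instance]          # start with every feature fixed
    for i in range(len(instance)):
        trial = list(sets)
        trial[i] = set(DOMAINS[i])          # tentatively free feature i
        if invariant(trial, target):
            sets = trial                    # feature i is not needed
    return sets

def inflate(sets, instance):
    """Grow each still-fixed feature's value set one value at a time,
    keeping the decision invariant; the instance value is always kept."""
    target = classify(instance)
    for i in range(len(instance)):
        for v in DOMAINS[i]:
            if v in sets[i]:
                continue
            trial = list(sets)
            trial[i] = sets[i] | {v}
            if invariant(trial, target):
                sets = trial
    return sets

instance = (2, 0, 1)
axp = abductive_explanation(instance)   # feature 2 is freed; 0 and 1 stay fixed
ixp = inflate(axp, instance)            # feature 0 admits {1, 2}, feature 1 admits {0, 1}
```

On this toy model the abductive explanation fixes features 0 and 1 at their exact values, while the inflated explanation additionally reports that feature 0 could be any value in {1, 2} and feature 1 any value in {0, 1} without changing the decision, matching the abstract's point that inflation shows whether the exact value of a feature matters.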


