
The Challenge of Imputation in Explainable Artificial Intelligence Models

by   Muhammad Aurangzeb Ahmad, et al.

Explainable models in Artificial Intelligence are often employed to ensure the transparency and accountability of AI systems. The fidelity of the explanations depends both on the algorithms used and on the fidelity of the data. Many real-world datasets have missing values that can greatly influence explanation fidelity. The standard way to deal with such scenarios is imputation. This can, however, lead to situations where the imputed values correspond to settings that are counterfactual. Acting on explanations from AI models with imputed values may lead to unsafe outcomes. In this paper, we explore different settings where AI models with imputation can be problematic and describe ways to address such scenarios.
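The abstract's core concern can be illustrated with a minimal sketch (toy data and field names are hypothetical, not from the paper): mean imputation of a missing value can produce a record that corresponds to a counterfactual setting no real instance could occupy, and any explanation computed on that record inherits the problem.

```python
# Hypothetical toy dataset: a binary "pregnant" flag with one missing value.
rows = [
    {"age": 30, "pregnant": 1},
    {"age": 40, "pregnant": 0},
    {"age": 50, "pregnant": None},  # missing value to be imputed
]

# Standard mean imputation over the observed values of the column.
observed = [r["pregnant"] for r in rows if r["pregnant"] is not None]
mean_val = sum(observed) / len(observed)
for r in rows:
    if r["pregnant"] is None:
        r["pregnant"] = mean_val

# The imputed record now has pregnant = 0.5, a value that corresponds to
# no possible real-world patient: a counterfactual setting. An explanation
# attributing the model's output to this feature would be explaining a
# data point that cannot exist.
print(rows[2])  # → {'age': 50, 'pregnant': 0.5}
```

Mode- or model-based imputation avoids the impossible value here but still injects an assumption into the record that downstream explanations silently treat as observed fact, which is the paper's point.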
