Explainable AI without Interpretable Model

by Kary Främling, et al.

Explainability has been a challenge in AI for as long as AI has existed. With the recent increase in the use of AI in society, it has become more important than ever that AI systems can explain the reasoning behind their results to end-users, for instance when a candidate is eliminated from a recruitment process or a bank loan application is refused by an AI system. AI systems trained with Machine Learning in particular tend to contain too many parameters to be analysed and understood directly, which is why they are called 'black-box' systems. Most Explainable AI (XAI) methods are based on extracting an interpretable model that can be used for producing explanations. However, the interpretable model does not necessarily map accurately to the original black-box model, and the understandability of interpretable models for an end-user remains questionable. The notions of Contextual Importance and Utility (CIU) presented in this paper make it possible to produce human-like explanations of black-box outcomes directly, without creating an interpretable model; CIU explanations therefore map accurately to the black-box model itself. CIU is completely model-agnostic and can be used with any black-box system. In addition to feature importance, the utility concept, well-known in Decision Theory, adds a dimension to explanations that most existing XAI methods lack. Finally, CIU can produce explanations at any level of abstraction, using different vocabularies and other means of interaction, which makes it possible to adjust explanations and interaction to the context and the target users.
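The core idea can be illustrated with a minimal sketch. Contextual Importance (CI) measures how much the black-box output can vary when one feature is varied over its range while the other features stay fixed at the current instance's values, relative to the output's absolute range; Contextual Utility (CU) measures how favourable the feature's current value is within that contextual output range. The `ciu` function, the toy `model`, and all parameter names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ciu(predict, instance, feature, feature_range,
        out_range=(0.0, 1.0), n_samples=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for one input feature of a black-box `predict` function.

    CI = (Cmax - Cmin) / (absmax - absmin): share of the output's total
         range that this feature can span in the current context.
    CU = (y - Cmin) / (Cmax - Cmin): how close the current output is to
         the best output reachable by changing only this feature.
    """
    lo, hi = feature_range
    outputs = []
    for v in np.linspace(lo, hi, n_samples):  # vary only this feature
        x = instance.copy()
        x[feature] = v
        outputs.append(predict(x))
    cmin, cmax = min(outputs), max(outputs)
    y = predict(instance)
    absmin, absmax = out_range
    ci = (cmax - cmin) / (absmax - absmin)
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# Toy black box: a loan score dominated by income, barely affected by age.
# Both inputs are scaled to [0, 1]; the score lies in [0, 0.85].
def model(x):
    return 0.8 * x[0] + 0.05 * x[1]  # x = [income, age]

instance = np.array([0.9, 0.3])
ci_income, cu_income = ciu(model, instance, feature=0,
                           feature_range=(0.0, 1.0), out_range=(0.0, 0.85))
ci_age, cu_age = ciu(model, instance, feature=1,
                     feature_range=(0.0, 1.0), out_range=(0.0, 0.85))
```

Because no surrogate model is fitted, CI and CU are read off the black box itself; here income correctly comes out as far more important than age, and the high income value gets a high utility for this applicant.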



