Self-explaining AI as an alternative to interpretable AI

02/12/2020
by Dan Elton, et al.

The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is always possible to approximate the input-output relations of deep neural networks with human-understandable rules, the discovery of the double descent phenomenon suggests that no such approximation will ever map onto the actual functioning of deep neural networks. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if used outside their domain of applicability. To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and the explanation. Some difficulties with this approach, along with possible solutions, are sketched. Finally, we argue it is also important that AI systems warn their users when they are asked to perform outside their domain of applicability.
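To make the proposal concrete, here is a minimal sketch of what a self-explaining classifier's interface could look like: a decision with a confidence score, a feature-attribution explanation with its own (heuristic) confidence, and a warning when the input falls outside the training domain. This is not the method of the paper; the class name, the softmax-probability confidence, the top-feature explanation score, and the Mahalanobis-distance domain check are all illustrative assumptions. A toy linear model is used so the attribution is exact.

```python
# Minimal sketch of a self-explaining classifier (illustrative assumptions,
# not the paper's method). For a deep network, the linear attribution below
# would be replaced by saliency maps, SHAP, etc., each with its own error.
import numpy as np

class SelfExplainingClassifier:
    def __init__(self, n_features: int, n_classes: int, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(n_classes, n_features))
        self.b = np.zeros(n_classes)
        self.mu = None       # training-feature mean, for the domain check
        self.cov_inv = None  # inverse covariance, for Mahalanobis distance

    def fit_domain(self, X_train: np.ndarray) -> None:
        """Record training-data statistics used to flag out-of-domain inputs."""
        self.mu = X_train.mean(axis=0)
        cov = np.cov(X_train, rowvar=False) + 1e-6 * np.eye(X_train.shape[1])
        self.cov_inv = np.linalg.inv(cov)

    def predict(self, x: np.ndarray):
        logits = self.W @ x + self.b
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        decision = int(np.argmax(probs))
        decision_conf = float(probs[decision])  # softmax probability as confidence

        # Explanation: per-feature contribution to the winning logit.
        # Exact for a linear model; an approximation for a deep net.
        contrib = self.W[decision] * x
        top = np.argsort(np.abs(contrib))[::-1][:3]
        # Heuristic explanation confidence: fraction of total attribution
        # mass carried by the top features (an assumption, not calibrated).
        expl_conf = float(np.abs(contrib[top]).sum() /
                          (np.abs(contrib).sum() + 1e-12))
        return decision, decision_conf, top.tolist(), expl_conf

    def in_domain(self, x: np.ndarray, threshold: float = 3.0) -> bool:
        """Warn when Mahalanobis distance puts x outside the training domain."""
        d = x - self.mu
        dist = float(np.sqrt(d @ self.cov_inv @ d))
        return dist < threshold

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))
clf = SelfExplainingClassifier(n_features=4, n_classes=3)
clf.fit_domain(X_train)

x = rng.normal(size=4)
if not clf.in_domain(x):
    print("Warning: input is outside the training domain; do not trust output.")
label, p, top_features, expl_p = clf.predict(x)
print(f"decision={label} (confidence {p:.2f}); "
      f"top features {top_features} (explanation confidence {expl_p:.2f})")
```

The key design point, per the abstract, is that the explanation and the domain warning are part of the model's output contract rather than a post-hoc analysis; a deployed system would need calibrated uncertainties and a stronger out-of-distribution detector than the simple distance check sketched here.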


Related research

- Self-explainability as an alternative to interpretability for judging the trustworthiness of artificial intelligences (02/12/2020)
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey (06/16/2020)
- Alterfactual Explanations – The Relevance of Irrelevance for Explaining AI Systems (07/19/2022)
- Not All Features Are Equal: Feature Leveling Deep Neural Networks for Better Interpretation (05/24/2019)
- Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance (09/02/2020)
- Explainability Requires Interactivity (09/16/2021)
- SpArX: Sparse Argumentative Explanations for Neural Networks (01/23/2023)
