Self-explainability as an alternative to interpretability for judging the trustworthiness of artificial intelligences

02/12/2020
by Dan Elton, et al.

The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine and autonomous vehicles. While it is always possible to approximate the input-output relations of deep neural networks with human-understandable rules, the discovery of the double descent phenomenon suggests that no such approximation will ever map onto the actual functioning of deep neural networks. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if used outside their domain of applicability. To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and the explanation. We sketch some difficulties with this approach along with possible solutions. Finally, we argue it is also important that AI systems warn their users whenever they are asked to perform outside their domain of applicability.
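
To make the interface concrete, here is a minimal sketch of what a self-explaining classifier of the kind described in the abstract might return: a decision, a human-understandable explanation, separate confidence levels for the decision and for the explanation, and a warning flag for inputs outside the domain of applicability. All names (SelfExplainingOutput, SelfExplainingClassifier, ood_threshold) are hypothetical illustrations, not from the paper; the nearest-prototype rationale and the z-score domain check are stand-in mechanisms, not the authors' method.

```python
# Hypothetical sketch of a self-explaining AI interface (not from the paper).
from dataclasses import dataclass
import numpy as np

@dataclass
class SelfExplainingOutput:
    prediction: int                # class decision
    explanation: str               # human-understandable rationale
    decision_confidence: float     # confidence in the decision
    explanation_confidence: float  # confidence in the explanation itself
    in_domain: bool                # False -> warn the user: input is out of distribution

class SelfExplainingClassifier:
    def __init__(self, prototypes: np.ndarray, labels: np.ndarray,
                 ood_threshold: float = 3.0):
        # prototypes: (n, d) reference embeddings from the training set
        self.prototypes = prototypes
        self.labels = labels
        self.mean = prototypes.mean(axis=0)
        self.std = prototypes.std(axis=0) + 1e-8
        self.ood_threshold = ood_threshold

    def predict(self, x: np.ndarray) -> SelfExplainingOutput:
        # Decision: nearest prototype in embedding space, consistent with the
        # view that networks smoothly interpolate between training points.
        dists = np.linalg.norm(self.prototypes - x, axis=1)
        nearest = int(np.argmin(dists))
        pred = int(self.labels[nearest])

        # Confidence in the decision: softmax over negative distances.
        probs = np.exp(-dists) / np.exp(-dists).sum()
        decision_conf = float(probs[nearest])

        # Explanation: cite the training example the decision leans on.
        explanation = (f"Input is closest to training example #{nearest} "
                       f"(class {pred}, distance {dists[nearest]:.2f}).")

        # Confidence in the explanation: margin between the two nearest
        # prototypes (a small margin means the cited example is not a
        # uniquely compelling rationale).
        second = float(np.partition(dists, 1)[1])
        explanation_conf = float(1.0 - dists[nearest] / (second + 1e-8))

        # Domain-of-applicability check: per-feature z-score against the
        # training distribution; large deviations trigger a warning.
        z = float(np.abs((x - self.mean) / self.std).max())
        return SelfExplainingOutput(pred, explanation, decision_conf,
                                    explanation_conf, z < self.ood_threshold)
```

A caller would surface the explanation and both confidence values alongside the prediction, and treat in_domain=False as the system warning that it has been asked to perform outside its domain of applicability.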
