Unexplainability and Incomprehensibility of Artificial Intelligence

06/20/2019
by Roman V. Yampolskiy, et al.

Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions that affect them are made. Similarly, understanding how an intelligent system functions is important for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), showing that advanced AIs would be unable to accurately explain some of their decisions, and that, for the decisions they could explain, people would not understand some of those explanations.
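The tension the abstract describes can be seen in a standard XAI setting: an explanation a person can actually read is a simplification of the model, and a simplification cannot agree with the model everywhere. Below is a minimal illustrative sketch, not taken from the paper, using scikit-learn with arbitrary model and parameter choices: it fits a small, human-readable surrogate tree to a larger opaque model and measures how often the "explanation" disagrees with the decisions it purports to explain.

```python
# Illustrative sketch (not from the paper): a human-readable explanation of a
# complex model is a lossy simplification. We fit a shallow surrogate tree to
# mimic a larger model and measure its fidelity, i.e. how often it agrees with
# the model it is meant to explain. All names and parameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# "Advanced" model: accurate but opaque.
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# "Explanation": a depth-3 decision tree a person could read in full.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))

fidelity = accuracy_score(model.predict(X), surrogate.predict(X))
print(f"Surrogate agrees with the model on {fidelity:.1%} of inputs")
# Any shortfall from 100% is decisions the readable explanation gets wrong:
# a fully faithful explanation would need roughly the model's own complexity.
```

On typical runs the surrogate falls noticeably short of perfect agreement, which matches the intuition behind the paper's results: explanations that are simple enough to comprehend are not fully faithful, and fully faithful explanations are not simple enough to comprehend.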


Related research

09/13/2020 - Argumentation-based Agents that Explain their Decisions
Explainable Artificial Intelligence (XAI) systems, including intelligent...

08/25/2022 - Towards Benchmarking Explainable Artificial Intelligence Methods
The currently dominating artificial intelligence and machine learning te...

10/11/2022 - On Explainability in AI-Solutions: A Cross-Domain Survey
Artificial Intelligence (AI) increasingly shows its potential to outperf...

09/14/2020 - An Argumentation-based Approach for Explaining Goal Selection in Intelligent Agents
During the first step of practical reasoning, i.e. deliberation or goals...

11/12/2018 - TED: Teaching AI to Explain its Decisions
Artificial intelligence systems are being increasingly deployed due to t...

11/03/2019 - Artificial Intelligence Strategies for National Security and Safety Standards
Recent advances in artificial intelligence (AI) have led to an explosio...

01/05/2023 - Explain to Me: Towards Understanding Privacy Decisions
Privacy assistants help users manage their privacy online. Their tasks c...
