
Unexplainability and Incomprehensibility of Artificial Intelligence

by Roman V. Yampolskiy, et al.

Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that for the decisions they could explain, people would not understand some of those explanations.

