
Unexplainability and Incomprehensibility of Artificial Intelligence

06/20/2019
by Roman V. Yampolskiy, et al.

Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, understanding how an intelligent system functions is important for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that, for the decisions they could explain, people would not understand some of those explanations.
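The flavor of the Unexplainability argument can be illustrated with a toy sketch (this example is mine, not from the paper): some decision functions, such as the parity of many inputs, depend on every feature at once, so any "explanation" simple enough for a human to grasp, here modeled as a single-feature rule, necessarily misrepresents the decision on many inputs.

```python
import itertools

# Hypothetical "complex" decision: parity of 8 binary features.
# Every feature matters; no short summary reproduces it exactly.
def complex_decision(x):
    return sum(x) % 2

# A human-comprehensible surrogate "explanation": decide by feature 0 alone.
def simple_explanation(x):
    return x[0]

# Measure fidelity: how often the simple explanation matches the real decision
# over all 2^8 possible inputs.
inputs = list(itertools.product([0, 1], repeat=8))
agree = sum(complex_decision(x) == simple_explanation(x) for x in inputs)
fidelity = agree / len(inputs)
print(f"surrogate fidelity: {fidelity:.2f}")  # 0.50 -- no better than chance
```

For parity, conditioning on feature 0 leaves the remaining bits equally likely to flip the answer, so the single-feature explanation agrees exactly half the time. Any faithful explanation must be roughly as complex as the decision function itself, which is the intuition behind the trade-off the paper formalizes.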
