Brittle AI, Causal Confusion, and Bad Mental Models: Challenges and Successes in the XAI Program

06/10/2021
by Jeff Druce, et al.

The advances in artificial intelligence enabled by deep learning architectures are undeniable. In several cases, deep neural network-driven models have surpassed human-level performance in benchmark autonomy tasks. The underlying policies for these agents, however, are not easily interpretable: given their underlying deep models, it is impossible to directly understand the mapping from observations to actions for any reasonably complex agent. Producing the supporting technology to "open the black box" of these AI systems, without sacrificing performance, was the fundamental goal of the DARPA XAI program. Over the course of this program, we arrived at several "big picture" takeaways: 1) explanations need to be highly tailored to their scenario; 2) many seemingly high-performing RL agents are extremely brittle and are not amenable to explanation; 3) causal models allow for rich explanations, but how to present them is not always straightforward; and 4) human subjects conjure fantastically wrong mental models for AIs, and these models are often hard to break. This paper discusses the origins of these takeaways, provides amplifying information, and offers suggestions for future work.
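To make "opening the black box" concrete, below is a minimal sketch of gradient-based saliency, one common post-hoc probe of a deep policy's observation-to-action mapping. This is an illustrative example only, not the paper's or the XAI program's own tooling; the policy network, its dimensions, and the observation are all hypothetical.

    # Hypothetical sketch: gradient saliency for a toy policy (PyTorch).
    # This is a generic XAI probe, not the method developed in the paper.
    import torch
    import torch.nn as nn

    # Toy policy network: 8-dim observation -> logits over 4 discrete actions.
    policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))

    obs = torch.randn(1, 8, requires_grad=True)  # a single random observation
    logits = policy(obs)
    action = logits.argmax(dim=1).item()         # the agent's chosen action

    # Saliency: gradient of the chosen action's logit w.r.t. the observation.
    # Large-magnitude entries flag input features the decision is sensitive to.
    logits[0, action].backward()
    saliency = obs.grad.abs().squeeze()
    print(f"action={action}, saliency per feature={saliency}")

Even this simple probe hints at the takeaways above: the raw gradient only becomes an explanation once it is translated into terms tailored to the scenario and the user, and a brittle policy will produce saliency that shifts wildly under small perturbations of the observation.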

Related research

10/04/2020 · Explainability via Responsibility
Procedural Content Generation via Machine Learning (PCGML) refers to a g...

06/16/2020 · Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Nowadays, deep neural networks are widely used in mission critical syste...

03/07/2023 · Causal Dependence Plots for Interpretable Machine Learning
Explaining artificial intelligence or machine learning models is an incr...

03/02/2020 · A general framework for scientifically inspired explanations in AI
Explainability in AI is gaining attention in the computer science commun...

07/17/2020 · Sequential Explanations with Mental Model-Based Policies
The act of explaining across two parties is a feedback loop, where one p...

11/17/2022 · Explainability Via Causal Self-Talk
Explaining the behavior of AI systems is an important problem that, in p...

01/31/2022 · Won't you see my neighbor?: User predictions, mental models, and similarity-based explanations of AI classifiers
Humans should be able to work more effectively with artificial intelligence...
