Instructive artificial intelligence (AI) for human training, assistance, and explainability

11/02/2021
by Nicholas Kantack, et al.

We propose a novel approach to explainable AI (XAI) based on the concept of "instruction" from neural networks. In this case study, we demonstrate how a superhuman neural network might instruct human trainees as an alternative to traditional approaches to XAI. Specifically, an AI examines human actions and calculates variations on the human strategy that lead to better performance. Experiments with a JHU/APL-developed AI player for the cooperative card game Hanabi suggest this technique makes unique contributions to explainability while improving human performance. One area of focus for Instructive AI is the significant discrepancy that can arise between a human's actual strategy and the strategy they profess to use. This inaccurate self-assessment presents a barrier for XAI, since explanations of an AI's strategy may not be properly understood or implemented by human recipients. We have developed, and are testing, a novel Instructive AI approach that estimates human strategy by observing human actions. With neural networks, this allows a direct calculation of the changes in weights needed to improve the human strategy so that it better emulates a more successful AI. Subjected to constraints (e.g., sparsity), these weight changes can be interpreted as recommended changes to human strategy (e.g., "value A more, and value B less"). Instruction from an AI such as this serves not only to help humans perform better at tasks, but also to help them better understand, anticipate, and correct the actions of an AI. Results will be presented on AI instruction's ability to improve human decision-making and human-AI teaming in Hanabi.
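The pipeline described in the abstract can be sketched in miniature: fit a model to observed human actions (estimating the human's strategy), compute the weight changes that would move it toward a stronger AI strategy, and keep only the largest few changes so they read as sparse recommendations. The sketch below uses a linear softmax-style policy and hypothetical Hanabi-flavored feature names; the paper's actual models are neural networks, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical decision features (illustrative only).
FEATURES = ["playability", "hint_information", "discard_risk", "tempo"]

def fit_linear_policy(states, choices, lr=0.5, steps=2000):
    """Behavioral-cloning sketch: logistic-regression fit of weights w so that
    sigmoid(states @ w) predicts the human's observed binary choice
    (e.g. play vs. discard)."""
    w = np.zeros(states.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(states @ w)))
        w += lr * states.T @ (choices - p) / len(states)  # gradient ascent on log-likelihood
    return w

def instruct(w_human, w_ai, k=2):
    """Sparsity constraint: keep only the k largest components of the weight
    change (w_ai - w_human), phrased as 'value X more / less' advice."""
    delta = w_ai - w_human
    top = np.argsort(-np.abs(delta))[:k]
    return [(FEATURES[i], "more" if delta[i] > 0 else "less") for i in top]

# Synthetic demo: a human whose revealed strategy under-weights hint information.
rng = np.random.default_rng(0)
w_ai = np.array([1.0, 2.0, -1.5, 0.3])          # stronger AI strategy
w_true_human = np.array([1.0, 0.2, -1.5, 0.3])  # mostly ignores hints
states = rng.normal(size=(2000, 4))
choices = (rng.random(2000) <
           1.0 / (1.0 + np.exp(-(states @ w_true_human)))).astype(float)

w_est = fit_linear_policy(states, choices)       # estimated human strategy
for feat, direction in instruct(w_est, w_ai, k=1):
    print(f"Value {feat} {direction}.")
```

Note that the human's self-reported strategy never enters the calculation: advice is derived entirely from the strategy revealed by observed actions, which is what sidesteps the inaccurate-self-assessment barrier described above.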
