Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions

01/11/2019
by Upol Ehsan, et al.

Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates users' perceptions along the dimensions of confidence, human-likeness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure, and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the differences perceived by users. Moreover, context permitting, participants preferred detailed rationales for forming a stable mental model of the agent's behavior.
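The rationale generator described here frames explanation as a translation problem: the agent's internal state and chosen action are serialized into tokens and decoded into natural language learned from human explanations. Below is a minimal sketch of that setup as a generic encoder-decoder in PyTorch; the vocabulary sizes, dimensions, and GRU architecture are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: rationale generation as state/action -> language translation.
# Assumes the state-action trace has already been serialized into token ids.
import torch
import torch.nn as nn

class RationaleGenerator(nn.Module):
    def __init__(self, state_vocab=256, text_vocab=5000, emb=128, hid=256):
        super().__init__()
        self.enc_emb = nn.Embedding(state_vocab, emb)   # tokens describing state/action
        self.dec_emb = nn.Embedding(text_vocab, emb)    # natural-language tokens
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, text_vocab)

    def forward(self, state_tokens, rationale_tokens):
        # Encode the serialized game state and action into a context vector.
        _, h = self.encoder(self.enc_emb(state_tokens))
        # Decode the rationale conditioned on that context (teacher forcing).
        dec_out, _ = self.decoder(self.dec_emb(rationale_tokens), h)
        return self.out(dec_out)  # logits over the text vocabulary

# Training-step sketch: cross-entropy against human-written rationales.
model = RationaleGenerator()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

state = torch.randint(0, 256, (8, 20))       # batch of serialized state-action traces
rationale = torch.randint(0, 5000, (8, 15))  # batch of human rationale token ids
optimizer.zero_grad()
logits = model(state, rationale[:, :-1])     # predict the next rationale token at each step
loss = criterion(logits.reshape(-1, 5000), rationale[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```

At inference time, decoding would run autoregressively (e.g., greedy or beam search) from a start token, conditioned on the encoded state; varying the training corpus or decoding setup is one way to obtain the different rationale styles compared in the user studies.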
