Plan Explanations as Model Reconciliation -- An Empirical Study

02/03/2018
by Tathagata Chakraborti, et al.

Recent work on explanation generation for decision-making agents has examined how the unexplained behavior of an autonomous system can be understood in terms of differences between the system's model and the human's understanding of that model, and how the explanation process arising from this mismatch can then be seen as a process of reconciling the two models. Existing algorithms in such settings, while built on the contrastive, selective, and social properties of explanations studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we evaluate these explanation generation algorithms in a series of studies set in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. We demonstrate to what extent the properties of these algorithms hold up when evaluated by humans, and how the dynamics of trust between the human and the robot evolve over the course of these interactions.
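At its core, model reconciliation treats an explanation as a set of updates to the human's mental model that makes the robot's behavior explicable under the updated model. The sketch below illustrates only that bookkeeping under simplifying assumptions: the STRIPS-style feature labels, the example models, and the model_difference helper are hypothetical, and the actual algorithms evaluated in the paper search for a minimal such update set with respect to a planner rather than reporting the raw set difference.

```python
# Minimal sketch (not the authors' algorithm): model reconciliation framed as
# reporting the differences between the robot's planning model and the human's
# mental model of it. Feature labels are hypothetical condition/effect tags;
# real model reconciliation solvers operate over PDDL models and search for a
# minimal explanation that makes the robot's plan optimal for the human.

robot_model = {
    "precondition(clear_rubble, has_tool)",
    "effect(clear_rubble, passage_open)",
    "precondition(move, passage_open)",
}

human_model = {
    "effect(clear_rubble, passage_open)",
    "precondition(move, passage_open)",
    # The human does not know the robot needs a tool to clear rubble.
}

def model_difference(robot, human):
    """Return candidate explanation content: facts to add to and remove from
    the human's model so that it matches the robot's model."""
    return {
        "add": sorted(robot - human),     # facts the human is missing
        "remove": sorted(human - robot),  # facts the human wrongly assumes
    }

if __name__ == "__main__":
    explanation = model_difference(robot_model, human_model)
    print("Explain by adding:", explanation["add"])
    print("Explain by removing:", explanation["remove"])
```

Selectivity, in this framing, amounts to communicating only the subset of these differences that is actually needed to justify the plan in question, rather than the full difference computed above.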

