Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery

01/05/2021
by   Devleena Das, et al.

With the growing capabilities of intelligent systems, the integration of robots into our everyday lives is increasing. However, when interacting in such complex human environments, occasional failures of robotic systems are inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce a new type of explanation that conveys the cause of an unexpected failure during an agent's plan execution to non-experts. For error explanations to be meaningful, we investigate which types of information within a set of hand-scripted explanations are most helpful to non-experts for failure and solution identification. Additionally, we investigate how such explanations can be autonomously generated, by extending an existing encoder-decoder model, and how they generalize across environments. We study these questions in the context of a robot performing a pick-and-place manipulation task in a home environment. Our results show that explanations capturing the context of a failure and the history of past actions are the most effective for failure and solution identification among non-experts. Furthermore, through a second user evaluation, we verify that our model-generated explanations generalize to an unseen office environment and are as effective as the hand-scripted explanations.
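The abstract does not specify the surface form of the explanations, but it identifies "context of a failure plus history of past actions" as the most effective content type. As a minimal sketch of what such an explanation might look like when composed from a robot's execution trace (the names `FailureRecord` and `explain_failure` are hypothetical illustrations, not from the paper, which instead generates text with an encoder-decoder model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FailureRecord:
    """Hypothetical record of a failed action during plan execution."""
    failed_action: str                                 # e.g. "pick up the cup"
    cause: str                                         # e.g. "the cup is out of reach"
    history: List[str] = field(default_factory=list)   # actions completed so far

def explain_failure(rec: FailureRecord) -> str:
    """Compose a context-plus-history explanation for a non-expert user."""
    past = ", then ".join(rec.history) if rec.history else "no prior actions"
    return (f"I completed: {past}. "
            f"I could not {rec.failed_action} because {rec.cause}.")

# Example: a pick-and-place failure in a home environment
rec = FailureRecord(
    failed_action="pick up the cup",
    cause="the cup is out of reach",
    history=["moved to the kitchen counter", "located the cup"],
)
print(explain_failure(rec))
# → I completed: moved to the kitchen counter, then located the cup.
#   I could not pick up the cup because the cup is out of reach.
```

A template like this only illustrates the information content the user studies compared; the paper's contribution is generating such explanations automatically so they transfer to unseen environments.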

