
Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery

by Devleena Das et al.

With the growing capabilities of intelligent systems, the integration of robots into our everyday lives is increasing. However, when interacting in such complex human environments, the occasional failure of robotic systems is inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce a new type of explanation that conveys the cause of an unexpected failure during an agent's plan execution to non-experts. For error explanations to be meaningful, we investigate which types of information within a set of hand-scripted explanations are most helpful to non-experts for failure and solution identification. Additionally, we investigate how such explanations can be autonomously generated, by extending an existing encoder-decoder model, and generalized across environments. We study these questions in the context of a robot performing a pick-and-place manipulation task in a home environment. Our results show that explanations capturing the context of a failure and the history of past actions are the most effective for failure and solution identification among non-experts. Furthermore, through a second user evaluation, we verify that our model-generated explanations can generalize to an unseen office environment and are just as effective as the hand-scripted explanations.
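To make the encoder-decoder idea concrete, the following is a minimal sketch (not the authors' actual model) of how such an architecture maps a robot's failure-context sequence to an explanation token sequence. The vocabulary, dimensions, and class names are all illustrative assumptions, and the weights are untrained random values, so the decoded tokens here are arbitrary:

```python
import numpy as np

# Toy vocabulary and sizes -- purely illustrative assumptions.
VOCAB = ["<sos>", "<eos>", "the", "robot", "failed", "because", "cup", "occluded"]
EMB, HID = 8, 16

rng = np.random.default_rng(0)

class EncoderDecoder:
    """Minimal RNN encoder-decoder: context tokens in, explanation tokens out."""

    def __init__(self):
        V = len(VOCAB)
        self.embed = rng.normal(0, 0.1, (V, EMB))          # shared token embeddings
        self.W_enc = rng.normal(0, 0.1, (EMB + HID, HID))  # encoder RNN weights
        self.W_dec = rng.normal(0, 0.1, (EMB + HID, HID))  # decoder RNN weights
        self.W_out = rng.normal(0, 0.1, (HID, len(VOCAB)))  # hidden -> vocab logits

    def encode(self, token_ids):
        # Fold the failure-context sequence into a single context vector.
        h = np.zeros(HID)
        for t in token_ids:
            h = np.tanh(np.concatenate([self.embed[t], h]) @ self.W_enc)
        return h

    def decode(self, h, max_len=6):
        # Greedily unroll the decoder from the encoder's final state.
        out, tok = [], VOCAB.index("<sos>")
        for _ in range(max_len):
            h = np.tanh(np.concatenate([self.embed[tok], h]) @ self.W_dec)
            tok = int(np.argmax(h @ self.W_out))  # greedy token choice
            if VOCAB[tok] == "<eos>":
                break
            out.append(VOCAB[tok])
        return out

model = EncoderDecoder()
context = [VOCAB.index(w) for w in ["robot", "failed", "cup", "occluded"]]
explanation = model.decode(model.encode(context))
print(explanation)  # untrained weights, so the output tokens carry no meaning
```

In practice, a trained model of this shape would learn embeddings over the robot's state and action history, which is what allows explanations to capture the failure context and past actions that the user study found most helpful.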
