Explicability as Minimizing Distance from Expected Behavior

11/16/2016
by Anagha Kulkarni, et al.

For effective human-AI collaboration, it is not enough to address the question of autonomy alone; an equally important question is how the AI agent's behavior is perceived by its human counterparts. When an AI agent's task plans are generated without such considerations, its behavior may often appear inexplicable from the human's point of view. This problem arises from the human's partial or inaccurate understanding of the agent's planning process and/or model, and it can have serious implications for human-AI collaboration, from increased cognitive load and reduced trust in the agent to safety concerns in interactions with a physical agent. In this paper, we address this issue by modeling the notion of plan explicability as a function of the distance between the plan the agent makes and the plan the human expects it to make. To this end, we learn a distance function, based on different plan distance measures, that accurately models this notion of plan explicability, and we develop an anytime search algorithm that uses this distance as a heuristic to produce progressively more explicable plans. We evaluate our approach in a simulated autonomous car domain and a physical service robot domain, and our empirical results demonstrate its usefulness in making the planning process of an autonomous agent conform to human expectations.
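
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of scoring a plan's explicability as one minus a weighted combination of plan distance measures. The specific distance functions, weights, and example plans below are hypothetical stand-ins for the learned distance function described in the abstract.

# Illustrative sketch: a toy distance-based explicability score.
# The two distance measures and the weights are hypothetical stand-ins
# for the learned plan-distance function described in the paper.

def action_set_distance(plan_a, plan_b):
    """Jaccard-style distance over the sets of actions used by two plans."""
    set_a, set_b = set(plan_a), set(plan_b)
    if not set_a and not set_b:
        return 0.0
    return 1.0 - len(set_a & set_b) / len(set_a | set_b)

def sequence_distance(plan_a, plan_b):
    """Fraction of positions where the two action sequences disagree."""
    length = max(len(plan_a), len(plan_b))
    if length == 0:
        return 0.0
    mismatches = sum(1 for a, b in zip(plan_a, plan_b) if a != b)
    mismatches += abs(len(plan_a) - len(plan_b))
    return mismatches / length

def explicability_score(agent_plan, expected_plan, weights=(0.5, 0.5)):
    """Higher is more explicable; usable as a heuristic during plan search."""
    distance = (weights[0] * action_set_distance(agent_plan, expected_plan)
                + weights[1] * sequence_distance(agent_plan, expected_plan))
    return 1.0 - distance

if __name__ == "__main__":
    agent_plan = ["pickup", "move_hall", "move_kitchen", "putdown"]
    expected_plan = ["pickup", "move_kitchen", "putdown"]
    print(explicability_score(agent_plan, expected_plan))  # closer to 1.0 means more explicable

In an anytime setting like the one described above, a score of this kind could be recomputed for each candidate plan found so far, with the best-scoring plan returned whenever the search is interrupted.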

Related research

11/25/2015
Plan Explicability and Predictability for Robot Task Planning
Intelligent robots and machines are becoming pervasive in human populate...

09/01/2021
Balancing Performance and Human Autonomy with Implicit Guidance Agent
The human-agent team, which is a problem in which humans and autonomous ...

11/23/2018
Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior
There has been significant interest of late in generating behavior of ag...

03/01/2019
To Monitor Or Not: Observing Robot's Behavior based on a Game-Theoretic Model of Trust
In scenarios where a robot generates and executes a plan, there may be i...
12/22/2020
Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning
Providing explanations is considered an imperative ability for an AI age...

05/18/2023
Towards Collaborative Plan Acquisition through Theory of Mind Modeling in Situated Dialogue
Collaborative tasks often begin with partial task knowledge and incomple...
