To Monitor or to Trust: Observing Robot's Behavior based on a Game-Theoretic Model of Trust

03/01/2019
by Sailik Sengupta, et al.

In scenarios where a robot generates and executes a plan, there may be instances where the generated plan is less costly for the robot to execute but incomprehensible to the human. When the human acts as a supervisor and is held accountable for the robot's plan, the human may be at a higher risk if the incomprehensible behavior is deemed unsafe. In such cases, the robot, which may be unaware of the human's exact expectations, may choose to (1) execute the most constrained plan (i.e., one preferred by all possible supervisors), incurring the added cost of highly sub-optimal behavior, when the human is observing it, and (2) deviate to a more optimal plan when the human looks away. These problems are amplified in situations where the robot has to fulfill multiple goals and cater to the needs of different human supervisors. In such settings, the robot, being a rational agent, should take any chance it gets to deviate to a lower-cost plan. On the other hand, continuously monitoring the robot's behavior is often difficult for the human because it costs them valuable resources (e.g., time, effort, cognitive load). To reduce the cost of constant monitoring while ensuring that the robot follows safe behavior, we model this problem in the game-theoretic framework of trust, where the human is the agent that trusts the robot. We show that the human's trust, which is well-defined when the game has a pure strategy equilibrium, is inversely proportional to the probability the human assigns to observing the robot's behavior. We then show that, with high probability, our game lacks a pure strategy Nash equilibrium, forcing us to define a trust boundary over the human's mixed strategies in order to guarantee safe behavior by the robot.
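To make the monitoring trade-off concrete, the following is a minimal sketch of the kind of two-player game the abstract describes: a human supervisor who can Observe or Not Observe, and a robot that can execute the constrained Safe plan or deviate to a lower-cost Risky plan. All payoff values (and the names robot, human, p, q) are illustrative assumptions, not numbers from the paper; the mixed-strategy equilibrium is obtained from the standard indifference conditions for a 2x2 game.

    # Hypothetical 2x2 monitoring game; all payoffs are illustrative, not from the paper.
    # Human: Observe ("O") or Not Observe ("NO").
    # Robot: constrained Safe plan ("S") or lower-cost Risky plan ("R").

    robot = {  # robot's payoffs, indexed by (human action, robot action)
        ("O", "S"): -4,    # safe plan is costly to execute
        ("O", "R"): -10,   # deviation caught under observation is penalized
        ("NO", "S"): -4,   # safe plan costs the same whether or not it is observed
        ("NO", "R"): -1,   # unobserved deviation is cheap for the robot
    }

    human = {  # human's payoffs, indexed by (human action, robot action)
        ("O", "S"): -2,    # monitoring cost (time, effort, cognitive load)
        ("O", "R"): -3,    # monitoring cost plus intervening on the risky plan
        ("NO", "S"): 0,    # no monitoring cost, and the robot behaved safely
        ("NO", "R"): -8,   # unsafe behavior goes unnoticed; the supervisor is accountable
    }

    # No pure strategy Nash equilibrium exists here: against "O" the robot prefers "S",
    # but against "S" the human prefers "NO"; against "NO" the robot prefers "R",
    # and against "R" the human prefers "O" -- the best responses cycle.

    # Observation probability p that makes the robot indifferent between "S" and "R":
    #   p*u_R(O,S) + (1-p)*u_R(NO,S) = p*u_R(O,R) + (1-p)*u_R(NO,R)
    p = (robot[("NO", "R")] - robot[("NO", "S")]) / (
        (robot[("O", "S")] - robot[("O", "R")]) - (robot[("NO", "S")] - robot[("NO", "R")])
    )

    # Probability q of the robot playing "S" that makes the human indifferent:
    #   q*u_H(O,S) + (1-q)*u_H(O,R) = q*u_H(NO,S) + (1-q)*u_H(NO,R)
    q = (human[("NO", "R")] - human[("O", "R")]) / (
        (human[("O", "S")] - human[("NO", "S")]) - (human[("O", "R")] - human[("NO", "R")])
    )

    print(f"human observes with probability p = {p:.2f}")  # 0.33 here; lower p ~ higher trust
    print(f"robot plays the safe plan with probability q = {q:.2f}")  # 0.71 here

With these particular numbers the human only needs to monitor a third of the time to make deviation unattractive to the robot, illustrating the abstract's point that trust (infrequent observation) and the probability of monitoring move in opposite directions.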
