When Humans Aren't Optimal: Robots that Collaborate with Risk-Aware Humans

01/13/2020
by Minae Kwon, et al.

In order to collaborate safely and efficiently, robots need to anticipate how their human partners will behave. Some of today's robots model humans as if they were also robots, and assume users are always optimal. Other robots account for human limitations, and relax this assumption so that the human is noisily rational. Both of these models make sense when the human receives deterministic rewards: i.e., gaining either $100 or $130 with certainty. But in real-world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty–and in these settings, humans exhibit a cognitive bias towards suboptimal behavior. For example, when deciding between gaining $100 with certainty or $130 only 80% of the time, people tend to make the risk-averse choice–even though it leads to a lower expected gain! In this paper, we adopt a well-known Risk-Aware human model from behavioral economics called Cumulative Prospect Theory and enable robots to leverage this model during human-robot interaction (HRI). In our user studies, we offer supporting evidence that the Risk-Aware model more accurately predicts suboptimal human behavior. We find that this increased modeling accuracy results in safer and more efficient human-robot collaboration. Overall, we extend existing rational human models so that collaborative robots can anticipate and plan around suboptimal human behavior during HRI.
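To see why Cumulative Prospect Theory predicts the risk-averse choice in the example above, the sketch below scores the two gambles with a standard single-gain CPT formulation. The parameter values (the Tversky–Kahneman 1992 estimates) and the helper names are illustrative assumptions, not the exact model fit used in the paper.

```python
import math

# Illustrative sketch of Cumulative Prospect Theory for a single-gain gamble.
# Parameters are the commonly cited Tversky & Kahneman (1992) estimates,
# used here only to reproduce the $100-vs-$130 example from the abstract.
ALPHA = 0.88   # curvature of the value function for gains
GAMMA = 0.61   # probability-weighting parameter for gains

def value(x):
    """Subjective value of a gain x (losses are not needed in this example)."""
    return x ** ALPHA

def weight(p):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def cpt_value(outcome, prob):
    """CPT score of a gamble that pays `outcome` with probability `prob`."""
    return weight(prob) * value(outcome)

# Option A: $100 for sure.  Option B: $130 with probability 0.8.
ev_a, ev_b = 100.0, 0.8 * 130.0                       # expected values: 100 vs 104
cpt_a, cpt_b = cpt_value(100, 1.0), cpt_value(130, 0.8)

print(f"Expected value: A = {ev_a:.1f}, B = {ev_b:.1f}  (a rational model prefers B)")
print(f"CPT value:      A = {cpt_a:.1f}, B = {cpt_b:.1f}  (the Risk-Aware model prefers A)")
```

With these parameters the risky option B has the higher expected value (104 vs 100) but the lower CPT score (roughly 44 vs 58), so a Risk-Aware model anticipates the human taking the certain $100, matching the suboptimal behavior described above.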

research
09/12/2019

Robots that Take Advantage of Human Trust

Humans often assume that robots are rational. We believe robots take opt...
research
04/23/2023

Should Collaborative Robots be Transparent?

Today's robots often assume that their behavior should be transparent. T...
research
03/13/2021

Dynamically Switching Human Prediction Models for Efficient Planning

As environments involving both robots and humans become increasingly com...
research
04/12/2021

Risk-Averse Biased Human Policies in Assistive Multi-Armed Bandit Settings

Assistive multi-armed bandit problems can be used to model team situatio...
research
03/15/2023

Robot Navigation in Risky, Crowded Environments: Understanding Human Preferences

Risky and crowded environments (RCE) contain abstract sources of risk an...
research
11/12/2021

Human irrationality: both bad and good for reward inference

Assuming humans are (approximately) rational enables robots to infer rew...
research
03/01/2019

To Monitor Or Not: Observing Robot's Behavior based on a Game-Theoretic Model of Trust

In scenarios where a robot generates and executes a plan, there may be i...
