LESS is More: Rethinking Probabilistic Models of Human Behavior

01/13/2020
by Andreea Bobu, et al.

Robots need models of human behavior both for inferring human goals and preferences and for predicting what people will do. A common choice is the Boltzmann noisily-rational decision model, which assumes people approximately optimize a reward function and choose trajectories in proportion to their exponentiated reward. While this model has been successful in a variety of robotics domains, its roots lie in econometrics, in modeling decisions among discrete options, each with its own utility or reward. In contrast, human trajectories lie in a continuous space, with continuous-valued features that influence the reward function. We propose that it is time to rethink the Boltzmann model and design it from the ground up to operate over such trajectory spaces. We introduce a model that explicitly accounts for distances between trajectories, rather than only their rewards. Instead of each trajectory affecting the decision independently, similar trajectories now affect the decision together. We start by showing that our model better explains human behavior in a user study. We then analyze the implications this has for robot inference: first in toy environments, where we have ground truth and find more accurate inference, and finally for a 7-DoF robot arm learning from user demonstrations.
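
To make the contrast concrete, here is a minimal numerical sketch in Python (our illustration, not code released with the paper). The standard Boltzmann model normalizes exponentiated rewards over a candidate set, so three near-duplicate trajectories count three times against a distinct alternative; the similarity-discounted variant below divides each trajectory's weight by its summed kernel similarity to the other candidates, so near-duplicates share probability mass instead. The RBF kernel, its bandwidth, and the toy feature vectors are assumptions for illustration, not the paper's exact LESS formulation.

```python
import numpy as np

def boltzmann_choice_probs(rewards, beta=1.0):
    """Classic Boltzmann model over a discrete candidate set:
    P(xi_i) is proportional to exp(beta * R(xi_i)), with every
    option contributing to the normalizer independently."""
    logits = beta * np.asarray(rewards, dtype=float)
    logits -= logits.max()  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def similarity_discounted_probs(rewards, features, beta=1.0, bandwidth=1.0):
    """Illustrative similarity-aware variant (a sketch of the idea,
    not the paper's exact model): each Boltzmann weight is divided by
    the trajectory's summed RBF-kernel similarity to all candidates,
    so clusters of near-duplicates share probability mass."""
    rewards = np.asarray(rewards, dtype=float)
    features = np.asarray(features, dtype=float)
    logits = beta * rewards
    logits -= logits.max()
    weights = np.exp(logits)
    # Pairwise squared distances between trajectory feature vectors.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    similarity = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    discounted = weights / similarity.sum(axis=1)
    return discounted / discounted.sum()

# Three near-identical high-reward trajectories vs. one distinct alternative.
rewards = [1.0, 1.0, 1.0, 0.9]
features = [[0.0, 0.0], [0.05, 0.0], [0.0, 0.05], [3.0, 3.0]]
print(boltzmann_choice_probs(rewards))                 # ~[0.26, 0.26, 0.26, 0.23]
print(similarity_discounted_probs(rewards, features))  # ~[0.18, 0.18, 0.18, 0.47]
```

Under the independent Boltzmann model the duplicated cluster dominates simply by being enumerated three times; the discounted variant treats the cluster roughly as a single option, which is the qualitative behavior the abstract describes.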

Related research

08/08/2023 · Optimizing Algorithms From Pairwise User Preferences
Typical black-box optimization approaches in robotics focus on learning ...

05/16/2023 · Reward Learning with Intractable Normalizing Functions
Robots can learn to imitate humans by inferring what the human is optimi...

11/12/2021 · Human irrationality: both bad and good for reward inference
Assuming humans are (approximately) rational enables robots to infer rew...

12/15/2017 · Impossibility of deducing preferences and rationality from human policy
Inverse reinforcement learning (IRL) attempts to infer human rewards or ...

04/22/2022 · The Boltzmann Policy Distribution: Accounting for Systematic Suboptimality in Human Models
Models of human behavior for prediction and collaboration tend to fall i...

10/10/2019 · Asking Easy Questions: A User-Friendly Approach to Active Reward Learning
Robots can learn the right reward function by querying a human expert. E...

01/19/2021 · Choice Set Misspecification in Reward Inference
Specifying reward functions for robots that operate in environments with...
