The Boltzmann Policy Distribution: Accounting for Systematic Suboptimality in Human Models

04/22/2022
by Cassidy Laidlaw, et al.

Models of human behavior for prediction and collaboration tend to fall into two categories: those that learn from large amounts of data via imitation learning, and those that assume human behavior to be noisily optimal for some reward function. The former are very useful, but only when it is possible to gather a lot of human data in the target environment and distribution. The advantage of the latter type, which includes Boltzmann rationality, is the ability to make accurate predictions in new environments without extensive data when humans are actually close to optimal. However, these models fail when humans exhibit systematic suboptimality, i.e., when their deviations from optimal behavior are not independent but consistent over time. Our key insight is that systematic suboptimality can be modeled by predicting policies, which couple action choices over time, rather than trajectories. We introduce the Boltzmann policy distribution (BPD), which serves as a prior over human policies and adapts via Bayesian inference to capture systematic deviations by observing human actions during a single episode. The BPD is difficult to compute and represent because policies lie in a high-dimensional continuous space, but we leverage tools from generative and sequence modeling to enable efficient sampling and inference. We show that the BPD enables prediction of human behavior and human-AI collaboration as well as imitation-learning-based human models do, while using far less data.
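To make the contrast concrete, here is one way to write the two models side by side; the notation (β for the rationality coefficient, Q* for the optimal action-value function, J(π) for a policy's expected return) is standard but assumed here rather than copied from the paper. Boltzmann rationality noises each action choice independently, while the BPD places a Boltzmann-weighted prior over whole policies and updates it from observed actions:

```latex
% Boltzmann rationality: independent per-step noise around optimal values
P(a_t \mid s_t) \;\propto\; \exp\!\big(\beta\, Q^*(s_t, a_t)\big)

% Boltzmann policy distribution: a prior over entire policies \pi weighted
% by expected return J(\pi), with a Bayesian posterior after observing
% the human's actions a_{1:t} in states s_{1:t}
p(\pi) \;\propto\; \exp\!\big(\beta\, J(\pi)\big),
\qquad
p(\pi \mid s_{1:t}, a_{1:t}) \;\propto\; p(\pi) \prod_{i=1}^{t} \pi(a_i \mid s_i)
```

Because the posterior conditions on all observed actions jointly, consistent deviations keep reinforcing the same candidate policies rather than washing out as independent noise. The minimal sketch below illustrates this posterior update over a small discrete set of candidate policies; the real BPD operates over a continuous, high-dimensional policy space (hence the paper's use of generative and sequence models), and every name and number here is illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, n_states, n_actions = 4, 5, 3

# Candidate policies: policies[k, s] is a distribution over actions.
policies = rng.dirichlet(np.ones(n_actions), size=(n_policies, n_states))

# Log-prior over policies. Under the BPD this would weight each candidate
# by exp(beta * expected_return); a uniform prior keeps the sketch simple.
log_posterior = np.zeros(n_policies)

def observe(state: int, action: int) -> None:
    """Bayesian update after seeing the human take `action` in `state`."""
    global log_posterior
    log_posterior = log_posterior + np.log(policies[:, state, action])
    log_posterior -= log_posterior.max()  # for numerical stability

def predict(state: int) -> np.ndarray:
    """Posterior-predictive distribution over the human's next action."""
    weights = np.exp(log_posterior)
    weights /= weights.sum()
    return weights @ policies[:, state, :]

# A human who systematically favors action 2 shifts posterior mass toward
# the candidate policies that explain that bias, within a single episode.
for state, action in [(0, 2), (1, 2), (0, 2)]:
    observe(state, action)
print(predict(0))  # prediction now reflects the observed systematic bias
```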

Related research

Modeling Strong and Human-Like Gameplay with KL-Regularized Search (12/14/2021)
We consider the task of building strong but human-like policies in multi...

Optimal Behavior Prior: Data-Efficient Human Models for Improved Human-AI Collaboration (11/03/2022)
AI agents designed to collaborate with people benefit from models that e...

Bayesian Disturbance Injection: Robust Imitation Learning of Flexible Policies (03/25/2021)
Scenarios requiring humans to choose from multiple seemingly optimal act...

LESS is More: Rethinking Probabilistic Models of Human Behavior (01/13/2020)
Robots need models of human behavior for both inferring human goals and ...

Bayesian Disturbance Injection: Robust Imitation Learning of Flexible Policies for Robot Manipulation (11/07/2022)
Humans demonstrate a variety of interesting behavioral characteristics w...

Imitation Learning of Factored Multi-agent Reactive Models (03/12/2019)
We apply recent advances in deep generative modeling to the task of imit...

On the Sensitivity of Reward Inference to Misspecified Human Models (12/09/2022)
Inferring reward functions from human behavior is at the center of value...
