Active Preference-Based Gaussian Process Regression for Reward Learning

05/06/2020
by Erdem Bıyık, et al.

Designing reward functions is a challenging problem in AI and robotics. Humans usually have a difficult time directly specifying all the desirable behaviors that a robot needs to optimize. One common approach is to learn reward functions from collected expert demonstrations. However, learning reward functions from demonstrations introduces many challenges: some methods require highly structured models, e.g., reward functions that are linear in a predefined set of features, while less structured reward functions in turn require a tremendous amount of data. In addition, humans tend to have a difficult time providing demonstrations on robots with high degrees of freedom, or even quantifying reward values for given demonstrations. To address these challenges, we present a preference-based learning approach, where, as an alternative, human feedback consists only of comparisons between trajectories. Furthermore, we do not assume a highly constrained structure on the reward function. Instead, we model the reward function using a Gaussian Process (GP) and propose a mathematical formulation to actively learn a GP using only human preferences. Our approach enables us to tackle both the inflexibility and the data-inefficiency problems within a preference-based learning framework. Our results in simulations and a user study suggest that our approach can efficiently learn expressive reward functions for robotics tasks.
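The core idea can be illustrated with a minimal, non-active sketch: place a GP prior over the reward of each trajectory (summarized by a feature vector) and fit a MAP estimate under a Bradley-Terry-style preference likelihood, P(i preferred over j) = sigmoid(f_i − f_j). Everything below — the 2-D feature space, the kernel length scale, and the simulated human answers — is an illustrative assumption, and the paper's active query-selection step is omitted.

```python
import numpy as np

# Illustrative setup (not the paper's experiments): each trajectory is
# summarized by a 2-D feature vector, and a hidden reward is used only
# to simulate a human answering preference queries.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))           # 20 candidate trajectories
true_reward = X[:, 0] - 0.5 * X[:, 1]          # hidden, for simulation only

def rbf_kernel(A, B, length=0.5):
    """Squared-exponential kernel over trajectory features."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))   # GP prior covariance

# Simulated preference answers: pairs (i, j) where trajectory i clearly
# beats trajectory j under the hidden reward.
pairs = [(i, j) for i in range(len(X)) for j in range(len(X))
         if true_reward[i] > true_reward[j] + 0.3][:40]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# MAP estimate of the latent rewards f under the GP prior and a
# Bradley-Terry-style likelihood P(i preferred over j) = sigmoid(f_i - f_j).
f = np.zeros(len(X))
for _ in range(200):
    g = np.zeros(len(X))                       # grad of the log-likelihood
    for i, j in pairs:
        p = sigmoid(f[i] - f[j])
        g[i] += 1 - p
        g[j] -= 1 - p
    # K @ g - f is the log-posterior gradient preconditioned by K,
    # which keeps the ascent step stable without inverting K.
    f += 0.1 * (K @ g - f)

# The learned rewards should rank trajectories consistently with the answers.
agreement = np.mean([f[i] > f[j] for i, j in pairs])
print(f"preference agreement: {agreement:.2f}")
```

The preconditioned update converges to the fixed point f = K g(f), i.e., the MAP solution; the paper's contribution is then to choose which comparison query to ask next so that far fewer queries are needed.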


