Active Reward Learning from Multiple Teachers

03/02/2023
by Peter Barnett, et al.

Reward learning algorithms utilize human feedback to infer a reward function, which is then used to train an AI system. This human feedback is often a preference comparison, in which the human teacher compares several samples of AI behavior and chooses which they believe best accomplishes the objective. While reward learning typically assumes that all feedback comes from a single teacher, in practice these systems often query multiple teachers to gather sufficient training data. In this paper, we investigate this disparity, and find that algorithmic evaluation of these different sources of feedback facilitates more accurate and efficient reward learning. We formally analyze the value of information (VOI) when reward learning from teachers with varying levels of rationality, and define and evaluate an algorithm that utilizes this VOI to actively select teachers to query for feedback. Surprisingly, we find that it is often more informative to query comparatively irrational teachers. By formalizing this problem and deriving an analytical solution, we hope to facilitate improvement in reward learning approaches to aligning AI behavior with human values.
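The core quantities described above can be illustrated with a small sketch. The code below is an assumption-laden toy model, not the paper's actual derivation: it assumes a Boltzmann-rational preference model (the teacher prefers sample a over b with probability proportional to exp of rationality times reward difference), a discrete set of reward hypotheses, and measures the value of a query to a teacher as the expected reduction in posterior entropy. All function names (`pref_likelihood`, `expected_info_gain`) are hypothetical.

```python
import numpy as np

def pref_likelihood(beta, reward_diff):
    # Boltzmann-rational teacher: probability of preferring sample a over b,
    # where reward_diff = r(a) - r(b) and beta is the rationality level.
    return 1.0 / (1.0 + np.exp(-beta * reward_diff))

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(prior, reward_diffs, beta):
    # Expected entropy reduction over reward hypotheses from one preference
    # query answered by a teacher with rationality beta.
    p_a = pref_likelihood(beta, reward_diffs)       # P(prefers a | hypothesis)
    marg_a = np.sum(prior * p_a)                    # P(teacher answers "a")
    post_a = prior * p_a / marg_a                   # posterior if answer is "a"
    post_b = prior * (1.0 - p_a) / (1.0 - marg_a)   # posterior if answer is "b"
    expected_post = marg_a * entropy(post_a) + (1.0 - marg_a) * entropy(post_b)
    return entropy(prior) - expected_post

# Two hypotheses about r(a) - r(b): both agree a is better, but by how much?
prior = np.array([0.5, 0.5])
reward_diffs = np.array([0.5, 2.0])

for beta in (0.1, 1.0, 10.0):
    voi = expected_info_gain(prior, reward_diffs, beta)
    print(f"beta={beta:5.1f}  VOI={voi:.4f}")
```

In this toy setting a near-deterministic (high-beta) teacher almost always answers "a" under either hypothesis, so the answer carries little information about the magnitude of the reward difference; a noisier teacher's answer frequencies depend on that magnitude, which is one intuition for why querying a comparatively irrational teacher can be more informative.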


Related research:

04/18/2023 | Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning
An appropriate reward function is of paramount importance in specifying ...

11/12/2022 | The Expertise Problem: Learning from Specialized Feedback
Reinforcement learning from human feedback (RLHF) is a powerful techniqu...

08/23/2022 | The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types
When inferring reward functions from human behavior (be it demonstration...

01/09/2023 | On The Fragility of Learned Reward Functions
Reward functions are notoriously difficult to specify, especially for ta...

07/24/2023 | Provable Benefits of Policy Learning from Human Preferences in Contextual Bandit Problems
A crucial task in decision-making problems is reward engineering. It is ...

12/15/2022 | Constitutional AI: Harmlessness from AI Feedback
As AI systems become more capable, we would like to enlist their help to...

08/16/2021 | APReL: A Library for Active Preference-based Reward Learning Algorithms
Reward learning is a fundamental problem in robotics to have robots that...
