APReL: A Library for Active Preference-based Reward Learning Algorithms

08/16/2021
by Erdem Bıyık, et al.

Reward learning is a fundamental problem in robotics: robots must learn reward functions that align with what their human users want. Many preference-based learning algorithms and active querying techniques have been proposed as solutions to this problem. In this paper, we present APReL, a library for active preference-based reward learning algorithms, which enables researchers and practitioners to experiment with existing techniques and to easily develop their own algorithms for the various modules of the problem.
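As a rough illustration of the problem setting (not APReL's actual API), the sketch below shows a minimal active preference-based reward learning loop: a linear reward over trajectory features, a softmax (Bradley-Terry) human response model, a sampled belief over reward weights updated by Bayesian reweighting, and a simple uncertainty-based query selection rule. All names, the candidate-generation scheme, and the hyperparameters are hypothetical choices made only for this example.

```python
# Minimal sketch of active preference-based reward learning (hypothetical, not APReL's API).
# A linear reward r(xi) = w . phi(xi) is learned from pairwise preference queries.
# The human is modeled with a softmax (Bradley-Terry) choice rule, the belief over w
# is a set of weighted samples updated by Bayesian reweighting, and each query is the
# candidate pair whose predicted answer is most uncertain under the current belief.
import numpy as np

rng = np.random.default_rng(0)
d = 3                                   # dimension of trajectory features phi(xi)
n_samples = 500                         # samples representing the belief over w

def softmax_pref_prob(w, phi_a, phi_b):
    """P(human prefers trajectory A over B | reward weights w)."""
    return 1.0 / (1.0 + np.exp(-(w @ (phi_a - phi_b))))

# Belief over reward weights: unit-norm samples with uniform probabilities.
W = rng.normal(size=(n_samples, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
probs = np.ones(n_samples) / n_samples

# Hidden "true" reward, used only to simulate the human's answers in this sketch.
w_true = np.array([0.8, -0.5, 0.3]); w_true /= np.linalg.norm(w_true)

# Candidate queries: random pairs of trajectory feature vectors.
candidates = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(100)]

for it in range(10):
    # Active query selection: pick the pair whose expected answer probability
    # is closest to 0.5 under the current belief (most informative, roughly).
    scores = []
    for phi_a, phi_b in candidates:
        p = np.sum(probs * softmax_pref_prob(W, phi_a, phi_b))
        scores.append(abs(p - 0.5))
    phi_a, phi_b = candidates[int(np.argmin(scores))]

    # Simulated (noisy) human answer to the chosen query.
    answer_a = rng.random() < softmax_pref_prob(w_true, phi_a, phi_b)

    # Bayesian update: reweight belief samples by the likelihood of the answer.
    lik = softmax_pref_prob(W, phi_a, phi_b)
    lik = lik if answer_a else 1.0 - lik
    probs *= lik
    probs /= probs.sum()

# Point estimate of the reward weights: the belief mean.
w_est = probs @ W
w_est /= np.linalg.norm(w_est)
print("cosine(w_true, w_est) =", float(w_true @ w_est))
```

Modular libraries in this space typically let each of these pieces (the query type, the human response model, the belief representation, and the acquisition function for query selection) be swapped independently, which is the kind of experimentation the abstract describes.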


Related research

- Batch Active Preference-Based Learning of Reward Functions (10/10/2018): Data generation and labeling are usually an expensive part of learning f...
- Learning Reward Functions by Integrating Human Demonstrations and Preferences (06/21/2019): Our goal is to accurately and efficiently learn reward functions for aut...
- POLAR: Preference Optimization and Learning Algorithms for Robotics (08/08/2022): Parameter tuning for robotic systems is a time-consuming and challenging...
- Preference-based Learning of Reward Function Features (03/03/2021): Preference-based learning of reward functions, where the reward function...
- Prior Preference Learning from Experts: Designing a Reward with Active Inference (01/22/2021): Active inference may be defined as Bayesian modeling of a brain with a b...
- Active Reward Learning from Multiple Teachers (03/02/2023): Reward learning algorithms utilize human feedback to infer a reward func...
- Reward Learning with Trees: Methods and Evaluation (10/03/2022): Recent efforts to learn reward functions from human feedback have tended...
