Human Preference Scaling with Demonstrations For Deep Reinforcement Learning

07/25/2020
by   Zehong Cao, et al.

Reward learning from human preferences can resolve complex reinforcement learning (RL) tasks without access to a reward function by defining a single fixed preference between pairs of trajectory segments. However, such preference judgements between trajectories are not dynamic and still require more than 1,000 human inputs. In this study, we propose a human preference scaling model that naturally reflects the human perception of the degree of choice between trajectories, and we then develop a human-demonstration preference model via supervised learning to reduce the number of human inputs. The proposed human preference scaling model with demonstrations effectively solves complex RL tasks and achieves higher cumulative rewards in simulated robot locomotion (MuJoCo games) than single fixed human preferences. Furthermore, our human-demonstration preference model requires human feedback for less than 0.01% of the agent's interactions with the environment and reduces the cost of human inputs by up to 30% compared with existing approaches. To demonstrate the flexibility of our approach, we released a video (https://youtu.be/jQPe1OILT0M) comparing the behaviours of agents trained with different types of human input. We believe that our naturally inspired human preference scaling with demonstrations is beneficial for precise reward learning and can potentially be applied to state-of-the-art RL systems, such as autonomous driving.
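The scaled-preference idea can be sketched as a reward model trained against soft preference labels p in [0, 1] (degree of preference) rather than the binary labels of fixed pairwise comparisons, using the Bradley-Terry formulation that is standard in preference-based RL. The linear reward model, feature layout, and hyperparameters below are illustrative assumptions for the sketch, not the authors' actual implementation:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def segment_return(theta, segment):
    """Predicted return of a trajectory segment: sum of per-step
    linear rewards r(s) = theta . s (linear model is an assumption)."""
    return sum(dot(theta, feat) for feat in segment)

def preference_prob(theta, seg1, seg2):
    """Bradley-Terry probability that seg1 is preferred over seg2."""
    diff = segment_return(theta, seg1) - segment_return(theta, seg2)
    return 1.0 / (1.0 + math.exp(-diff))

def train_reward(pairs, labels, dim, lr=0.05, epochs=300):
    """Fit reward weights from scaled preference labels p in [0, 1]
    (p = 1: seg1 strictly preferred; p = 0.5: indifferent) by gradient
    descent on the cross-entropy between p and the model probability."""
    theta = [0.0] * dim
    for _ in range(epochs):
        for (seg1, seg2), p in zip(pairs, labels):
            q = preference_prob(theta, seg1, seg2)
            # feature-sum difference between the two segments
            delta = [sum(f[i] for f in seg1) - sum(f[i] for f in seg2)
                     for i in range(dim)]
            # cross-entropy gradient w.r.t. theta is (q - p) * delta
            theta = [t - lr * (q - p) * d for t, d in zip(theta, delta)]
    return theta
```

With binary labels this reduces to the single fixed preference setting; allowing intermediate p values lets a single comparison convey *how much* one trajectory is preferred, which is the scaling idea the abstract describes.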

