
Deep reinforcement learning from human preferences

by Paul Christiano, et al.

For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
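The core idea in the abstract is to learn a reward function from human preferences between pairs of trajectory segments rather than from a hand-specified reward. A minimal, hypothetical sketch of this is a Bradley-Terry model: the probability that a human prefers segment A over segment B is a logistic function of the difference in their predicted returns, and the reward model is trained by gradient steps on the cross-entropy of the human's labels. The linear reward model, feature vectors, learning rate, and toy data below are all illustrative assumptions, not the paper's actual architecture (which uses deep networks):

```python
import math

# Hypothetical sketch: learn a linear reward r(s) = w . s from pairwise
# human preferences over trajectory segments (Bradley-Terry model).
# All data and hyperparameters here are toy values for illustration.

def segment_return(w, segment):
    """Sum of predicted rewards over all states in a segment."""
    return sum(sum(wi * si for wi, si in zip(w, state)) for state in segment)

def preference_prob(w, seg_a, seg_b):
    """P(human prefers seg_a over seg_b) under the Bradley-Terry model."""
    ra, rb = segment_return(w, seg_a), segment_return(w, seg_b)
    return 1.0 / (1.0 + math.exp(rb - ra))

def update(w, seg_a, seg_b, label, lr=0.1):
    """One gradient ascent step on the log-likelihood of the label;
    label=1 means the human preferred seg_a, label=0 means seg_b."""
    p = preference_prob(w, seg_a, seg_b)
    grad_coeff = label - p  # d(log-likelihood) / d(return difference)
    fa = [sum(col) for col in zip(*seg_a)]  # per-feature sums over seg_a
    fb = [sum(col) for col in zip(*seg_b)]
    return [wi + lr * grad_coeff * (a - b) for wi, a, b in zip(w, fa, fb)]

# Toy example: 2-D state features; the "human" always prefers segments
# whose states have a larger first feature. After training, the learned
# reward weight on that feature should dominate.
w = [0.0, 0.0]
seg_good = [[1.0, 0.0], [1.0, 0.0]]
seg_bad = [[0.0, 1.0], [0.0, 1.0]]
for _ in range(50):
    w = update(w, seg_good, seg_bad, label=1)
print(w[0] > w[1])
```

Once fit, the learned reward model would stand in for the true reward function, and a standard RL algorithm could optimize against it; the paper's low-feedback result comes from only querying the human on a small, informatively chosen subset of segment pairs.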


Human Preference Scaling with Demonstrations For Deep Reinforcement Learning

The current reward learning from human preferences could be used for res...

Reward learning from human preferences and demonstrations in Atari

To solve complex real-world problems with reinforcement learning, we can...

Blind Bipedal Stair Traversal via Sim-to-Real Reinforcement Learning

Accurate and precise terrain estimation is a difficult problem for robot...

Leveraging human knowledge in tabular reinforcement learning: A study of human subjects

Reinforcement Learning (RL) can be extremely effective in solving comple...

Beyond Winning and Losing: Modeling Human Motivations and Behaviors Using Inverse Reinforcement Learning

In recent years, reinforcement learning (RL) methods have been applied t...

A User Study on Explainable Online Reinforcement Learning for Adaptive Systems

Online reinforcement learning (RL) is increasingly used for realizing ad...

LBGP: Learning Based Goal Planning for Autonomous Following in Front

This paper investigates a hybrid solution which combines deep reinforcem...

Code Repositories


Learning From Human Preferences - Tensorflow+Keras Implementation
