Pitfalls of learning a reward function online

04/28/2020
by   Stuart Armstrong, et al.
6

In some agent designs like inverse reinforcement learning an agent needs to learn its own reward function. Learning the reward function and optimising for it are typically two different processes, usually performed at different stages. We consider a continual (“one life”) learning approach where the agent both learns the reward function and optimises for it at the same time. We show that this comes with a number of pitfalls, such as deliberately manipulating the learning process in one direction, refusing to learn, “learning” facts already known to the agent, and making decisions that are strictly dominated (for all relevant reward functions). We formally introduce two desirable properties: the first is `unriggability', which prevents the agent from steering the learning process in the direction of a reward function that is easier to optimise. The second is `uninfluenceability', whereby the reward-function learning process operates by learning facts about the environment. We show that an uninfluenceable process is automatically unriggable, and if the set of possible environments is sufficiently rich, the converse is true too.

READ FULL TEXT

Authors

page 7

11/04/2020

Generative Inverse Deep Reinforcement Learning for Online Recommendation

Deep reinforcement learning enables an agent to capture user's interest ...
09/25/2020

Deep Reinforcement Learning with Stage Incentive Mechanism for Robotic Trajectory Planning

To improve the efficiency of deep reinforcement learning (DRL) based met...
11/19/2018

Scalable agent alignment via reward modeling: a research direction

One obstacle to applying reinforcement learning algorithms to real-world...
06/07/2018

Simplifying Reward Design through Divide-and-Conquer

Designing a good reward function is essential to robot planning and rein...
02/16/2021

Making the most of your day: online learning for optimal allocation of time

We study online learning for optimal allocation when the resource to be ...
11/01/2021

On the Expressivity of Markov Reward

Reward is the driving force for reinforcement-learning agents. This pape...
09/10/2021

Potential-based Reward Shaping in Sokoban

Learning to solve sparse-reward reinforcement learning problems is diffi...
This week in AI

Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.