Behavior Priors for Efficient Reinforcement Learning

by Dhruva Tirumala et al.

As we deploy reinforcement learning agents to solve increasingly challenging problems, methods that allow us to inject prior knowledge about the structure of the world and effective solution strategies become increasingly important. In this work we consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors that capture the common movement and interaction patterns shared across a set of related tasks or contexts. For example, the day-to-day behavior of humans comprises distinctive locomotion and manipulation patterns that recur across many different situations and goals. We discuss how such behavior patterns can be captured using probabilistic trajectory models and how these can be integrated effectively into reinforcement learning schemes, e.g., to facilitate multi-task and transfer learning. We then extend these ideas to latent variable models and consider a formulation to learn hierarchical priors that capture different aspects of the behavior in reusable modules. We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual-information and curiosity-based objectives, thereby offering an alternative perspective on existing ideas. We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
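One common way to integrate a learned behavior prior into a reinforcement learning scheme is KL regularization: the task reward is penalized by the divergence between the agent's policy and the prior at each state. The following is a minimal sketch of that idea, assuming diagonal-Gaussian action distributions for both policy and prior; the function names and the regularization weight `alpha` are illustrative choices, not the paper's implementation.

```python
import numpy as np

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    # KL( N(mu_p, diag(sigma_p^2)) || N(mu_q, diag(sigma_q^2)) )
    # for diagonal Gaussians, summed over action dimensions.
    return np.sum(
        np.log(sigma_q / sigma_p)
        + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
        - 0.5
    )

def kl_regularized_reward(task_reward, pi_mu, pi_sigma,
                          prior_mu, prior_sigma, alpha=0.1):
    """Augment the task reward with a penalty for deviating
    from the behavior prior at the current state.

    alpha trades off task reward against staying close to the prior;
    its value here is an arbitrary illustration.
    """
    kl = gaussian_kl(pi_mu, pi_sigma, prior_mu, prior_sigma)
    return task_reward - alpha * kl
```

When the policy matches the prior exactly, the KL term vanishes and the augmented reward equals the task reward; as the policy drifts from the prior, the penalty grows, which is what lets a prior trained on related tasks shape exploration on a new one.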




Exploiting Hierarchy for Learning and Transfer in KL-regularized RL

As reinforcement learning agents are tasked with solving more challengin...

Sample-Efficient Reinforcement Learning through Transfer and Architectural Priors

Recent work in deep reinforcement learning has allowed algorithms to lea...

TempoRL: Temporal Priors for Exploration in Off-Policy Reinforcement Learning

Efficient exploration is a crucial challenge in deep reinforcement learn...

Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings

In this work, we take a representation learning perspective on hierarchi...

Meta Reinforcement Learning with Latent Variable Gaussian Processes

Data efficiency, i.e., learning from small data sets, is critical in man...

Learning When and What to Ask: a Hierarchical Reinforcement Learning Framework

Reliable AI agents should be mindful of the limits of their knowledge an...

Residual Pathway Priors for Soft Equivariance Constraints

There is often a trade-off between building deep learning systems that a...