
Universal Value Density Estimation for Imitation Learning and Goal-Conditioned Reinforcement Learning

by Yannick Schroecker et al.

This work considers two distinct settings: imitation learning and goal-conditioned reinforcement learning. In either case, effective solutions require the agent to reliably reach a specified state (a goal) or a set of states (a demonstration). Drawing a connection between probabilistic long-term dynamics and the desired value function, this work introduces an approach that utilizes recent advances in density estimation to effectively learn to reach a given state. As our first contribution, we use this approach for goal-conditioned reinforcement learning and show that it is efficient and does not suffer from hindsight bias in stochastic domains. As our second contribution, we extend the approach to imitation learning and show that it achieves state-of-the-art demonstration sample-efficiency on standard benchmark tasks.
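The abstract's core idea, identifying the goal-conditioned value with a discounted density over future states, can be illustrated with a tabular toy sketch. This is only an illustration of the density view under assumed names (the chain environment, `rollout`, `discounted_occupancy`, and `value` are all hypothetical), not the paper's neural density estimator:

```python
import random
from collections import defaultdict

# Illustrative toy setup: a 1D random walk on a chain of N_STATES states.
N_STATES = 7
GAMMA = 0.9

def rollout(start, length):
    """Collect one random-walk trajectory on the chain."""
    traj = [start]
    s = start
    for _ in range(length - 1):
        s = max(0, min(N_STATES - 1, s + random.choice([-1, 1])))
        traj.append(s)
    return traj

def discounted_occupancy(trajectories, gamma=GAMMA):
    """Empirical discounted future-state density nu(g | s):
    for each visited state s, count every later state g in the same
    trajectory, weighted by gamma^k where k is the time offset."""
    counts = defaultdict(lambda: defaultdict(float))
    norm = defaultdict(float)
    for traj in trajectories:
        for t, s in enumerate(traj):
            w = 1.0
            for g in traj[t:]:
                counts[s][g] += w
                norm[s] += w
                w *= gamma
    # Normalize so that nu(. | s) is a probability distribution.
    return {s: {g: c / norm[s] for g, c in gs.items()}
            for s, gs in counts.items()}

random.seed(0)
trajs = [rollout(random.randrange(N_STATES), 50) for _ in range(500)]
nu = discounted_occupancy(trajs)

def value(s, goal):
    """Goal-conditioned value proxy: estimated density of reaching
    `goal` from `s`; states nearer the goal score higher."""
    return nu.get(s, {}).get(goal, 0.0)
```

In the paper's setting this tabular count would be replaced by a learned conditional density model, but the ordering it induces is the point: `value(5, 6)` exceeds `value(0, 6)` because state 5 is adjacent to the goal, so its discounted future-state density places more mass on it.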




Learning To Reach Goals Without Reinforcement Learning

Imitation learning algorithms provide a simple and straightforward appro...

Goal-conditioned Imitation Learning

Designing rewards for Reinforcement Learning (RL) is challenging because...

Reinforcement and Imitation Learning for Diverse Visuomotor Skills

We propose a model-free deep reinforcement learning method that leverage...

Understanding Hindsight Goal Relabeling Requires Rethinking Divergence Minimization

Hindsight goal relabeling has become a foundational technique for multi-...

Generalizing to New Tasks via One-Shot Compositional Subgoals

The ability to generalize to previously unseen tasks with little to no s...

Hybrid system identification using switching density networks

Behaviour cloning is a commonly used strategy for imitation learning and...

Metric Residual Networks for Sample Efficient Goal-Conditioned Reinforcement Learning

Goal-conditioned reinforcement learning (GCRL) has a wide range of poten...