
Action Priors for Large Action Spaces in Robotics

by Ondrej Biza, et al.

In robotics, it is often not possible to learn useful policies with pure model-free reinforcement learning without significant reward shaping or curriculum learning. As a consequence, many researchers rely on expert demonstrations to guide learning. However, acquiring expert demonstrations can be expensive. This paper proposes an alternative approach in which the solutions of previously solved tasks are used to produce an action prior that facilitates exploration in future tasks. The action prior is a probability distribution over actions that summarizes the set of policies found while solving previous tasks. Our results indicate that this approach can solve robotic manipulation problems that would otherwise be infeasible without expert demonstrations.
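The core idea of summarizing previously learned policies into a distribution over actions, and then sampling exploratory actions from that distribution rather than uniformly at random, can be sketched as follows. This is a minimal illustration for a discrete action space; the function names and the epsilon-greedy mixing scheme are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def action_prior(policies, state):
    """Hypothetical action prior: average the action distributions
    of policies learned on previously solved tasks."""
    probs = np.mean([pi(state) for pi in policies], axis=0)
    return probs / probs.sum()  # renormalize to guard against rounding

def explore(policies, q_values, state, epsilon=0.3, rng=None):
    """Epsilon-greedy action selection where the random branch samples
    from the action prior instead of a uniform distribution."""
    rng = rng or np.random.default_rng(0)
    if rng.random() < epsilon:
        prior = action_prior(policies, state)
        return int(rng.choice(len(prior), p=prior))
    return int(np.argmax(q_values[state]))
```

The point of the biased random branch is that actions which were useful across earlier tasks are tried more often in a new task, which can make otherwise intractable sparse-reward exploration feasible.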




Automatic Curricula via Expert Demonstrations

We propose Automatic Curricula via Expert Demonstrations (ACED), a reinf...

Reinforcement learning with Demonstrations from Mismatched Task under Sparse Reward

Reinforcement learning often suffers from the sparse reward issue in real...

Learning Singularity Avoidance

With the increase in complexity of robotic systems and the rise in non-e...

Bayesian multitask inverse reinforcement learning

We generalise the problem of inverse reinforcement learning to multiple ...

Learning from Interventions using Hierarchical Policies for Safe Learning

Learning from Demonstrations (LfD) via Behavior Cloning (BC) works well ...

SCAPE: Learning Stiffness Control from Augmented Position Control Experiences

We introduce a sample-efficient method for learning state-dependent stif...

CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations

Although reinforcement learning has found widespread use in dense reward...