Action Priors for Large Action Spaces in Robotics

01/11/2021
by   Ondrej Biza, et al.

In robotics, it is often not possible to learn useful policies with pure model-free reinforcement learning without significant reward shaping or curriculum learning. As a consequence, many researchers rely on expert demonstrations to guide learning, but acquiring expert demonstrations can be expensive. This paper proposes an alternative approach in which the solutions of previously solved tasks are used to produce an action prior that facilitates exploration in future tasks. The action prior is a probability distribution over actions that summarizes the set of policies found while solving the previous tasks. Our results indicate that this approach can be used to solve robotic manipulation problems that would otherwise be infeasible without expert demonstrations.
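
The abstract describes the action prior as a probability distribution over actions that summarizes the policies from previously solved tasks and is then used to guide exploration. Below is a minimal sketch of that idea for a discrete action space: it counts how often each action was chosen by the old policies, smooths and normalizes the counts, and draws exploratory actions from the resulting distribution instead of uniformly. The function names, the tabular policy representation, and the epsilon-greedy integration are illustrative assumptions, not the paper's actual method.

```python
import numpy as np


def build_action_prior(policies, num_actions, alpha=1.0):
    """Aggregate action choices from previously solved tasks into one distribution.

    policies: list of 1-D arrays, each holding the action index chosen in every
        visited state of a solved task (a hypothetical tabular representation).
    alpha: Dirichlet-style smoothing so actions unseen in old tasks keep
        nonzero probability mass.
    """
    counts = np.full(num_actions, alpha, dtype=np.float64)
    for policy in policies:
        actions, freq = np.unique(policy, return_counts=True)
        counts[actions] += freq
    return counts / counts.sum()


def sample_exploratory_action(q_values, prior, epsilon=0.1, rng=None):
    """Epsilon-greedy action selection where exploratory actions are drawn
    from the action prior rather than a uniform distribution."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.choice(len(prior), p=prior))
    return int(np.argmax(q_values))


# Hypothetical usage: two previously solved tasks over 6 discrete actions.
old_policies = [np.array([0, 2, 2, 5]), np.array([2, 2, 3, 5, 5])]
prior = build_action_prior(old_policies, num_actions=6)
action = sample_exploratory_action(np.zeros(6), prior, epsilon=1.0)
```

Biasing exploration this way concentrates random actions on the parts of a large action space that were useful in earlier tasks, which is the exploration benefit the abstract claims; the sketch above only illustrates the mechanism for the tabular, discrete case.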


Related research

Automatic Curricula via Expert Demonstrations (06/16/2021)
We propose Automatic Curricula via Expert Demonstrations (ACED), a reinf...

Reinforcement learning with Demonstrations from Mismatched Task under Sparse Reward (12/03/2022)
Reinforcement learning often suffers from the sparse reward issue in real...

Learning Singularity Avoidance (07/11/2018)
With the increase in complexity of robotic systems and the rise in non-e...

Exploiting Symmetry and Heuristic Demonstrations in Off-policy Reinforcement Learning for Robotic Manipulation (04/12/2023)
Reinforcement learning demonstrates significant potential in automatical...

Bayesian multitask inverse reinforcement learning (06/18/2011)
We generalise the problem of inverse reinforcement learning to multiple ...

An Open Tele-Impedance Framework to Generate Large Datasets for Contact-Rich Tasks in Robotic Manipulation (09/21/2022)
Using large datasets in machine learning has led to outstanding results,...

Learning from Interventions using Hierarchical Policies for Safe Learning (12/04/2019)
Learning from Demonstrations (LfD) via Behavior Cloning (BC) works well ...