Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization

11/09/2020
by M. Tuluhan Akbulut et al.

The aim of this paper is to study the reward-based policy exploration problem in a supervised learning framework and to enable robots to form complex movement trajectories in challenging reward settings and search spaces. To this end, the experience of the robot, which can be bootstrapped from demonstrated trajectories, is used to train a novel Neural Processes-based deep network that samples from its latent space and generates the required trajectories given desired rewards. Our framework can generate progressively improved trajectories by sampling them from high-reward landscapes, increasing the reward gradually. Variational inference is used to create a stochastic latent space from which varying trajectories can be sampled, allowing a population of trajectories to be generated for given target rewards. We build on Evolutionary Strategies and propose a novel crossover operation, applied in the self-organized latent space of the individual policies, which blends individuals that may address different factors of the reward function. Using a number of tasks that require sequential reaching to multiple points or passing through gaps between objects, we show that our method provides stable learning progress and significant sample efficiency compared to a number of state-of-the-art robotic reinforcement learning methods. Finally, we demonstrate the real-world suitability of our method through execution on a real robot in a task involving obstacle avoidance.
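The abstract describes the method only at a high level; the sketch below is a minimal, non-authoritative illustration of the core idea it names: a reward-conditioned variational trajectory generator trained with a reconstruction plus KL objective, and a crossover operation that blends two parents in the learned latent space. It is written in PyTorch with illustrative class names, dimensions (TRAJ_LEN, DOF, LATENT_DIM), blending rule, and toy data that are assumptions of this sketch, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): reward-conditioned variational
# trajectory generation with a latent-space crossover. All names and sizes
# below are illustrative assumptions.
import torch
import torch.nn as nn

TRAJ_LEN, DOF, LATENT_DIM = 50, 2, 8  # assumed trajectory length, DoF, latent size

class RewardConditionedGenerator(nn.Module):
    """Encodes (trajectory, reward) pairs into a stochastic latent space and
    decodes a latent sample plus a desired reward back into a trajectory."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(TRAJ_LEN * DOF + 1, 128), nn.ReLU(),
            nn.Linear(128, 2 * LATENT_DIM),          # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, TRAJ_LEN * DOF),
        )

    def encode(self, traj, reward):
        stats = self.encoder(torch.cat([traj.flatten(1), reward], dim=1))
        mu, logvar = stats.chunk(2, dim=1)
        return mu, logvar

    def decode(self, z, reward):
        return self.decoder(torch.cat([z, reward], dim=1)).view(-1, TRAJ_LEN, DOF)

    def forward(self, traj, reward):
        mu, logvar = self.encode(traj, reward)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.decode(z, reward)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return recon, kl


def latent_crossover(model, traj_a, traj_b, reward_a, reward_b, target_reward):
    """Blend two parent trajectories by interpolating their latent means and
    decoding the child conditioned on a (typically higher) target reward."""
    with torch.no_grad():
        mu_a, _ = model.encode(traj_a, reward_a)
        mu_b, _ = model.encode(traj_b, reward_b)
        alpha = torch.rand(1)                        # random blend coefficient
        child_z = alpha * mu_a + (1 - alpha) * mu_b
        return model.decode(child_z, target_reward)


if __name__ == "__main__":
    model = RewardConditionedGenerator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # toy data standing in for demonstrated / self-generated trajectories
    trajs = torch.randn(32, TRAJ_LEN, DOF)
    rewards = torch.rand(32, 1)
    for _ in range(10):                              # short toy training loop
        recon, kl = model(trajs, rewards)
        loss = (recon - trajs).pow(2).mean() + 1e-3 * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    # request a child trajectory conditioned on a higher reward than seen so far
    child = latent_crossover(model, trajs[:1], trajs[1:2],
                             rewards[:1], rewards[1:2],
                             torch.tensor([[1.0]]))
    print(child.shape)  # torch.Size([1, 50, 2])
```

In this sketch, repeatedly encoding the current population, blending latent means, and decoding under gradually increased target rewards plays the role of the paper's population-based, reward-conditioned exploration loop; the actual network architecture, training schedule, and selection mechanism are not specified by the abstract.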


Related research

11/17/2020 · Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization
The problem of inverse reinforcement learning (IRL) is relevant to a var...

10/26/2020 · Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills
Parameterized movement primitives have been extensively used for imitati...

06/07/2018 · Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings
In this work, we take a representation learning perspective on hierarchi...

04/06/2022 · Perceive, Represent, Generate: Translating Multimodal Information to Robotic Motion Trajectories
We present Perceive-Represent-Generate (PRG), a novel three-stage framew...

01/11/2020 · Reward Engineering for Object Pick and Place Training
Robotic grasping is a crucial area of research as it can result in the a...

02/19/2019 · Learning to Generalize from Sparse and Underspecified Rewards
We consider the problem of learning from sparse and underspecified rewar...

12/09/2019 · Adversarial recovery of agent rewards from latent spaces of the limit order book
Inverse reinforcement learning has proved its ability to explain state-a...