Implicit Kinematic Policies: Unifying Joint and Cartesian Action Spaces in End-to-End Robot Learning

03/03/2022
by Aditya Ganapathi, et al.

Action representation is an important yet often overlooked aspect of end-to-end robot learning with deep networks. Choosing one action space over another (e.g., target joint positions or Cartesian end-effector poses) can result in surprisingly stark performance differences across downstream tasks, and as a result, considerable research has been devoted to finding the right action space for a given application. In this work, we instead investigate how our models can discover and learn for themselves which action space to use. Leveraging recent work on implicit behavioral cloning, in which the policy takes both observations and actions as input, we show that it is possible to present the same action in multiple spaces to the same policy, allowing it to learn inductive patterns from each space. Specifically, we study the benefits of combining Cartesian and joint action spaces in the context of learning manipulation skills. To this end, we present Implicit Kinematic Policies (IKP), which incorporate the kinematic chain as a differentiable module within the deep network. Quantitative experiments across several simulated continuous control tasks, from scooping piles of small objects, to lifting boxes with elbows, to precise block insertion with miscalibrated robots, suggest that IKP not only learns complex prehensile and non-prehensile manipulation from pixels better than baseline alternatives, but can also compensate for small joint encoder offset errors. Finally, we run qualitative experiments on a real UR5e to demonstrate the feasibility of our algorithm on a physical robotic system with real data. See https://tinyurl.com/4wz3nf86 for code and supplementary material.
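The abstract names two ingredients: an implicit (energy-based) policy that scores observation-action pairs, and a differentiable kinematic chain inside the network so the same action can be presented in both joint and Cartesian space. The sketch below illustrates that combination in PyTorch; it is not the paper's actual implementation, and the two-link planar arm, link lengths, network sizes, and all names (ForwardKinematics, ImplicitPolicy, ibc_loss) are assumptions made for this example.

```python
# Illustrative sketch (assumed, not the paper's code): an implicit policy that
# scores (observation, action) pairs, where the action is fed to the network
# in BOTH joint space and Cartesian space via differentiable forward kinematics.
import torch
import torch.nn as nn


class ForwardKinematics(nn.Module):
    """Differentiable FK for a hypothetical 2-link planar arm (link lengths assumed)."""

    def __init__(self, l1=0.5, l2=0.5):
        super().__init__()
        self.l1, self.l2 = l1, l2

    def forward(self, q):  # q: (B, 2) joint angles
        x = self.l1 * torch.cos(q[:, 0]) + self.l2 * torch.cos(q[:, 0] + q[:, 1])
        y = self.l1 * torch.sin(q[:, 0]) + self.l2 * torch.sin(q[:, 0] + q[:, 1])
        return torch.stack([x, y], dim=-1)  # (B, 2) end-effector position


class ImplicitPolicy(nn.Module):
    """Energy model E(obs, action); lower energy means a better action."""

    def __init__(self, obs_dim, n_joints=2, cart_dim=2):
        super().__init__()
        self.fk = ForwardKinematics()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_joints + cart_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs, q):
        # Present the SAME action in two spaces: joint angles q and Cartesian fk(q).
        x = torch.cat([obs, q, self.fk(q)], dim=-1)
        return self.net(x).squeeze(-1)  # scalar energy per sample


def ibc_loss(policy, obs, q_expert, num_negatives=64):
    """InfoNCE-style objective in the spirit of implicit behavioral cloning:
    the expert action should receive lower energy than sampled negatives."""
    B = obs.shape[0]
    q_neg = torch.empty(B, num_negatives, 2).uniform_(-3.14, 3.14)
    q_all = torch.cat([q_expert.unsqueeze(1), q_neg], dim=1)  # (B, 1+N, 2)
    obs_all = obs.unsqueeze(1).expand(-1, 1 + num_negatives, -1)
    energies = policy(obs_all.reshape(-1, obs.shape[-1]),
                      q_all.reshape(-1, 2)).reshape(B, 1 + num_negatives)
    # Expert action sits at index 0 and should win the softmax over -energy.
    return nn.functional.cross_entropy(-energies, torch.zeros(B, dtype=torch.long))


# Example usage with random data:
# policy = ImplicitPolicy(obs_dim=32)
# loss = ibc_loss(policy, torch.randn(8, 32), torch.randn(8, 2))
```

At test time, an implicit policy of this form is typically evaluated by minimizing the energy over candidate actions (e.g., by sampling or derivative-free optimization). Because the Cartesian input is produced by differentiable forward kinematics from the joint action, the network receives inductive patterns from both spaces for the same underlying action, which is the core idea the abstract describes.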


Related research

Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks (06/20/2019)
Reinforcement Learning (RL) of contact-rich manipulation tasks has yield...

Multi-Pass Q-Networks for Deep Reinforcement Learning with Parameterised Action Spaces (05/10/2019)
Parameterised actions in reinforcement learning are composed of discrete...

Spatial Action Maps for Mobile Manipulation (04/20/2020)
This paper proposes a new action representation for learning to perform...

Learning Robot Geometry as Distance Fields: Applications to Whole-body Manipulation (07/02/2023)
In this work, we propose to learn robot geometry as distance fields (RDF...

Discovering Synergies for Robot Manipulation with Multi-Task Reinforcement Learning (10/04/2021)
Controlling robotic manipulators with high-dimensional action spaces for...

Policy learning in SE(3) action spaces (10/06/2020)
In the spatial action representation, the action space spans the space o...

An open-ended learning architecture to face the REAL 2020 simulated robot competition (11/27/2020)
Open-ended learning is a core research field of machine learning and rob...
