ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation

08/18/2020
by   Fei Xia, et al.

Many Reinforcement Learning (RL) approaches use joint control signals (positions, velocities, torques) as the action space for continuous control tasks. We propose to lift the action space to a higher level in the form of subgoals for a motion generator (a combination of motion planner and trajectory executor). We argue that, by lifting the action space and by leveraging sampling-based motion planners, we can efficiently use RL to solve complex, long-horizon tasks that could not be solved with existing RL methods in the original action space. We propose ReLMoGen, a framework that combines a learned policy to predict subgoals with a motion generator to plan and execute the motion needed to reach these subgoals. To validate our method, we apply ReLMoGen to two types of tasks: 1) Interactive Navigation tasks, navigation problems where interactions with the environment are required to reach the destination, and 2) Mobile Manipulation tasks, manipulation tasks that require moving the robot base. These problems are challenging because they are usually long-horizon, hard to explore during training, and comprise alternating phases of navigation and interaction. Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments. In all settings, ReLMoGen outperforms state-of-the-art Reinforcement Learning and Hierarchical Reinforcement Learning baselines. ReLMoGen also shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
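The lifted action space described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the paper's implementation: the class and function names (`MotionGenerator`, `policy`, `rollout`) are assumptions, and the "planner" is a trivial stand-in for a sampling-based motion planner plus trajectory executor.

```python
import random

class MotionGenerator:
    """Stand-in for a motion generator: a sampling-based planner
    plus a trajectory executor that drives the robot to a subgoal."""

    def plan_and_execute(self, state, subgoal):
        # A real implementation would plan a collision-free joint-space
        # trajectory (e.g. with an RRT variant) and execute it with a
        # controller; here we simply move the state toward the subgoal.
        return [s + 0.5 * (g - s) for s, g in zip(state, subgoal)]

def policy(observation):
    """Placeholder for the learned policy: maps an observation to a
    subgoal (e.g. a target base pose or end-effector position)."""
    # In ReLMoGen this would be a trained neural network; here it is
    # a random perturbation purely for illustration.
    return [o + random.uniform(-0.1, 0.1) for o in observation]

def rollout(steps=5):
    """One episode in the lifted action space: at each step the RL
    policy emits a subgoal, and the motion generator handles all
    low-level motion needed to reach it."""
    mg = MotionGenerator()
    state = [0.0, 0.0]
    for _ in range(steps):
        subgoal = policy(state)                       # high-level action
        state = mg.plan_and_execute(state, subgoal)   # low-level motion
    return state
```

The key point of the design is that each RL step corresponds to an entire planned motion, which shortens the effective horizon and lets the sampling-based planner absorb the hard exploration in joint space.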


Related research

- 10/22/2020: Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments. Deep reinforcement learning (RL) agents are able to learn contact-rich m...
- 10/24/2019: HRL4IN: Hierarchical Reinforcement Learning for Interactive Navigation with Mobile Manipulators. Most common navigation tasks in human environments require auxiliary arm...
- 06/17/2022: N^2M^2: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments. Despite its importance in both industrial and service robotics, mobile m...
- 01/10/2023: ORBIT: A Unified Simulation Framework for Interactive Robot Learning Environments. We present ORBIT, a unified and modular framework for robot learning pow...
- 02/25/2020: Whole-Body Control of a Mobile Manipulator using End-to-End Reinforcement Learning. Mobile manipulation is usually achieved by sequentially executing base a...
- 10/11/2022: VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement. We present Variable Experience Rollout (VER), a technique for efficientl...
- 06/29/2023: ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch. We present ArrayBot, a distributed manipulation system consisting of a 1...
