RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies

07/10/2019
by Hao-Tien Lewis Chiang, et al.

This paper addresses two challenges facing sampling-based kinodynamic motion planning: identifying good candidate states for local transitions, and the subsequent computationally intractable steering between these candidate states. By combining a sampling-based planner, the Rapidly-exploring Random Tree (RRT), with an efficient machine-learned kinodynamic local planner, we propose an efficient solution to long-range kinodynamic motion planning. First, we use deep reinforcement learning to learn an obstacle-avoiding policy that maps a robot's sensor observations to actions; this policy serves as a local planner during planning and as a controller during execution. Second, we train a reachability estimator in a supervised manner to predict the RL policy's time to reach a state in the presence of obstacles. Lastly, we introduce RL-RRT, which uses the RL policy as a local planner and the reachability estimator as the distance function to bias tree growth towards promising regions. We evaluate our method on three kinodynamic systems, including physical robot experiments. Results across all three robots indicate that RL-RRT is more efficient than state-of-the-art kinodynamic planners and also produces paths with shorter finish times than a steering-function-free method. The learned local planner policy and the accompanying reachability estimator transfer to previously unseen experimental environments, and RL-RRT remains fast because the expensive computations are replaced with simple neural network inference.
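
The planning loop described above can be summarized in a short sketch. The snippet below is a minimal illustration rather than the authors' implementation: `sample_free_state`, `reachability_estimator`, `rl_policy`, `simulate`, and `goal_reached` are hypothetical callables standing in for the sampler, the learned time-to-reach network, the learned obstacle-avoiding policy, the rollout simulator, and the termination test, and states are assumed to be hashable (e.g. tuples).

```python
import random


def rl_rrt(start, goal, sample_free_state, reachability_estimator,
           rl_policy, simulate, goal_reached,
           max_iters=5000, goal_bias=0.05, max_rollout_time=2.0):
    """Grow a kinodynamic tree using a learned policy as the local planner
    and a learned time-to-reach estimator as the distance function."""
    tree = {start: None}  # maps each node to its parent

    for _ in range(max_iters):
        # Sample a candidate state, occasionally biasing toward the goal.
        target = goal if random.random() < goal_bias else sample_free_state()

        # Pick the tree node the estimator predicts can reach the target
        # fastest; this replaces the usual Euclidean nearest-neighbor query.
        nearest = min(tree, key=lambda node: reachability_estimator(node, target))

        # Roll out the obstacle-avoiding RL policy from `nearest` toward
        # `target` for a bounded time budget; this replaces an analytic
        # steering function.
        new_state, collided = simulate(nearest, target, rl_policy,
                                       max_time=max_rollout_time)
        if collided:
            continue

        tree[new_state] = nearest
        if goal_reached(new_state, goal):
            return backtrack(tree, new_state)

    return None  # no path found within the iteration budget


def backtrack(tree, node):
    """Recover the start-to-goal path by following parent pointers."""
    path = []
    while node is not None:
        path.append(node)
        node = tree[node]
    return list(reversed(path))
```

Because both the distance query and the local planner are neural network evaluations, the per-iteration cost stays roughly constant regardless of how hard the underlying two-point boundary value problem would be to solve exactly.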

