PEARL: PrEference Appraisal Reinforcement Learning for Motion Planning

11/30/2018
by Aleksandra Faust, et al.

Robot motion planning often requires finding trajectories that balance different user intents, or preferences. One of these preferences is usually arrival at the goal, while another might be obstacle avoidance. Here, we formalize these, and similar, tasks as preference balancing tasks (PBTs) on acceleration-controlled robots, and propose a motion planning solution, PrEference Appraisal Reinforcement Learning (PEARL). PEARL uses reinforcement learning on a restricted training domain, combined with features engineered from user-given intents. PEARL's planner then generates trajectories in expanded domains for more complex problems. We present an adaptation for rejection of stochastic disturbances and offer in-depth analysis, including task completion conditions and behavior analysis when the conditions do not hold. PEARL is evaluated on five problems, two multi-agent obstacle avoidance tasks and three that stochastically disturb the system at run-time: 1) a multi-agent pursuit problem with 1000 pursuers, 2) robot navigation through 900 moving obstacles, trained in an environment with only 4 static obstacles, 3) aerial cargo delivery, 4) two-robot rendezvous, and 5) flying inverted pendulum. Lastly, we evaluate the method on a physical quadrotor UAV with a suspended load influenced by a stochastic disturbance. The video at https://youtu.be/ZkFt1uY6vlw contains the experiments and visualization of the simulations.
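The abstract describes combining engineered preference features (e.g. goal attraction, obstacle avoidance) with weights learned by reinforcement learning, then planning greedily over the resulting value estimate. The sketch below illustrates that idea for an acceleration-controlled point robot; the feature definitions, weights, and action set are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical preference features for an acceleration-controlled robot with
# state [x, y, vx, vy]. In PEARL the feature weights are learned via RL on a
# small training domain; here they are simply fixed for illustration.
def goal_feature(state, goal):
    """Attracting preference: negative squared distance to the goal."""
    return -np.sum((state[:2] - goal) ** 2)

def clearance_feature(state, obstacles):
    """Repelling preference: distance to the nearest obstacle."""
    return min(np.linalg.norm(state[:2] - o) for o in obstacles)

def value(state, goal, obstacles, weights):
    """Linear value approximation over the preference features."""
    feats = np.array([goal_feature(state, goal),
                      clearance_feature(state, obstacles)])
    return weights @ feats

def greedy_action(state, goal, obstacles, weights, actions, dt=0.1):
    """Pick the acceleration whose one-step successor maximizes the value."""
    def step(a):
        pos, vel = state[:2], state[2:]
        new_vel = vel + a * dt
        new_pos = pos + new_vel * dt
        return np.concatenate([new_pos, new_vel])
    return max(actions, key=lambda a: value(step(a), goal, obstacles, weights))

state = np.array([0.0, 0.0, 0.0, 0.0])   # x, y, vx, vy
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.0, 2.0])]
weights = np.array([1.0, 0.5])           # learned in PEARL; fixed here
actions = [np.array([ax, ay], dtype=float)
           for ax in (-1, 0, 1) for ay in (-1, 0, 1)]
a = greedy_action(state, goal, obstacles, weights, actions)
```

Because the planner only evaluates one-step successors of a cheap value estimate, it scales to the expanded domains mentioned above (e.g. many moving obstacles) without replanning a full trajectory.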

Related research

07/11/2019 · Learning Safe Unlabeled Multi-Robot Planning with Motion Constraints
In this paper, we present a learning approach to goal assignment and tra...

10/30/2020 · Towards Preference Learning for Autonomous Ground Robot Navigation Tasks
We are interested in the design of autonomous robot behaviors that learn...

04/27/2022 · Minimum Displacement Motion Planning for Movable Obstacles
This paper presents a minimum displacement motion planning problem where...

08/03/2021 · SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots
We present an end-to-end online motion planning framework that uses a da...

08/03/2023 · NeuroSwarm: Multi-Agent Neural 3D Scene Reconstruction and Segmentation with UAV for Optimal Navigation of Quadruped Robot
Quadruped robots have the distinct ability to adapt their body and step ...

09/18/2023 · Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for Preference Aligned Path Planning
Autonomous mobility tasks such as last-mile delivery require reasoning ab...

09/30/2019 · End-to-End Motion Planning of Quadrotors Using Deep Reinforcement Learning
In this work, a novel, end-to-end motion planning method is proposed for...
