Long-Range Indoor Navigation with PRM-RL

02/25/2019 · by Anthony Francis, et al.

Long-range indoor navigation requires guiding robots with noisy sensors and controls through cluttered environments along paths that span a variety of buildings. We achieve this with PRM-RL, a hierarchical robot navigation method in which reinforcement learning agents that map noisy sensors to robot controls learn to solve short-range obstacle avoidance tasks, and then sampling-based planners map where these agents can reliably navigate in simulation; these roadmaps and agents are then deployed on-robot, guiding the robot along the shortest path where the agents are likely to succeed. Here we use Probabilistic Roadmaps (PRMs) as the sampling-based planner and AutoRL as the reinforcement learning method in the indoor navigation context. We evaluate the method in simulation for kinematic differential drive and kinodynamic car-like robots in several environments, and on-robot for differential-drive robots at two physical sites. Our results show PRM-RL with AutoRL is more successful than several baselines, is robust to noise, and can guide robots over hundreds of meters in the face of noise and obstacles in both simulation and on-robot, including over 3.3 kilometers of physical robot navigation.


I Introduction

Long-range indoor robot navigation requires human-scale robots (Fig. 1a) to move safely over building-scale distances (Fig. 1b). To robustly navigate over long distances while adapting to new environments, we factor the problem into long-range path planning and end-to-end local control, while assuming mapping and localization are provided. Long-range path planning finds collision-free paths to distant goals not reachable by local control [38]. End-to-end local control produces feasible controls to follow ideal paths while avoiding obstacles (e.g., [35], [22]) and compensating for noisy sensors and localization [12]. We enable end-to-end local control to inform long-range path planning through sampling-based planning.

(a) Indoor navigation platform
(b) Navigation testbed
Fig. 1: The long-range indoor navigation task. a) Approximately cylindrical differential drive robot. b) Deployment environments are office buildings.

Sampling-based planners, such as Probabilistic Roadmaps (PRMs) [34] and Rapidly Exploring Random Trees (RRTs) [37, 39], plan efficiently by approximating the topology of the configuration space with a graph or tree constructed by sampling points in free space and connecting these points if there is a collision-free local path between them. Typically, these local paths are created by line-of-sight tests or an inexpensive local planner, which are then connected in a sequence to form the full collision-free path.

Fig. 2: Quad-Building Complex: A large roadmap derived from real building plans which PRM-RL successfully navigated of the time in simulation. The connected segment in the upper center corresponds to the upper floor of Building 1 used in our evaluations and contains the space over which our SLAM map in Figure 6 was collected. Blue dots are sampled points and yellow lines are roadmap connections navigable in simulation with the AutoRL policy. This roadmap has 15.9 thousand samples and had 1.4 million candidate edges prior to connection attempts, of which 689 thousand edges were confirmed. It took 4 days to build using 300 workers in a cluster, requiring 1.1 billion collision checks. The upper right inset is the training environment from Fig. 3a, to scale; the quad-building complex is approximately two hundred times larger in map area.
(a) Training environment - by
(b) Building 2 - by
(c) Building 3 - by
(d) Building 1 - by
Fig. 3: The environments used for indoor navigation are derived from real building plans. a) The smallest environment is used to train the RL agent for faster training and iteration. b)-d) PRMs for deployment environments are built using agents trained in the training environment. Red regions are deemed too close to obstacles and cause episode termination when the robot enters them; light grey is free space from which the starts and goals are selected. Blue lines connect PRM waypoints, and the RL agent’s executed trajectory is black.

Regardless of how a path is generated, executing it requires handling sensor noise, unmodeled dynamics, and environment changes. Recently, reinforcement learning (RL) agents [36] have solved complex robot control problems [61], generated trajectories under task constraints [20], demonstrated robustness to noise [19], and learned complex skills [51, 49], making them good choices for dealing with task constraints. Many simple navigation tasks require only low-dimensional sensors and controls, like lidar and differential drive, and can be solved with easily trainable networks [63, 25, 7]. However, as we increase the complexity of the problem by requiring longer episodes or providing only sparse rewards [18], RL agents become more difficult to train, and RL does not always transfer well to new environments [30, 29].

Long-range navigation presents all of these challenges. Rewards are sparse and episodes are long, making long-range agents hard to train. On complex maps, short-range agents are vulnerable to local minima such as wide barriers and narrow passages. Even within deployment categories, environments vary enormously: the SunCG dataset has 45,000 distinct house-like environments [58], and the US alone has over 5.6 million office buildings [9].

We present PRM-RL, an approach to long-range navigation which combines PRMs and RL to overcome each other’s shortfalls. In PRM-RL, an RL agent learns a local point-to-point task, incorporating system dynamics and sensor noise independent of long-range environment structure. The agent’s learned behavior then influences roadmap construction; PRM-RL builds a roadmap by connecting two configuration points only if the agent consistently navigates the point-to-point path between them collision free, thereby learning the long-range environment structure. PRM-RL roadmaps perform better than roadmaps based on pure connectivity because they respect robot dynamics. RL agents perform better with roadmap guidance, avoiding local minima even in untrained-for environments. PRM-RL thus combines PRM efficiency with RL robustness, creating a long-range navigation planner that not only avoids local-minima traps, but is resilient to environmental changes and transfers well to new environments, as shown in Figure 2, where a policy trained on a small training environment scales to a four-building complex almost two hundred times larger in map area.

This paper contributes PRM-RL as a hierarchical kinodynamic planner for navigation in large environments for robots with noisy sensors. This paper is an extension of [21], which contributes the original PRM-RL method. Here, we investigate PRM-RL in the navigation context and make the following contributions beyond the original paper: 1) Algorithm 2 for PRM-RL roadmap building; 2) Algorithm 3 for robot deployment; 3) PRM-RL application to kinodynamic planning on a car model with inertia; 4) an in-depth analysis of PRM-RL, including: 4.1) the correlation between the quality of the local planner and the overall hierarchical planner; 4.2) the impact of improving planners and changing parameters on PRM-RL computational time complexity; and 4.3) the impact of a robust local planner on the effective topology of the configuration space. All the evaluations and experiments are new and original to this paper. We evaluate PRM-RL with a more effective local planner [12], compare it in simulation against several baselines in seven different buildings, and deploy it to two physical robot testbeds.

Overall, we show improved performance, better roadmap generation, and easier on-robot transfer, including a relative success rate increase of over [21], and over [11], while maintaining good performance despite increasing noise. We also show that only adding edges when agents can always navigate them makes roadmaps cheaper to build and improves navigation success; denser roadmaps also have higher simulated success rates but at substantial roadmap construction cost. Floorplans are not always available or up to date, but we show roadmaps built from SLAM maps close the simulation to reality gap by producing planners that perform in simulation virtually as well as they do on the robot. SLAM-based PRM-RL enables real robot deployments with hundred-meter collision-free trajectories at two different sites on two different robots with success rates as high as . We also show that PRM-RL functions well on robots with dynamic constraints, with an success rate in simulation.

While this paper focuses on navigation, we believe that the analysis and empirical findings will be of interest to the wider motion planning research community for two reasons. First, PRM-RL presented here is an example of a hierarchical motion planner that can incorporate models of sensor, localization, and control uncertainties into the roadmap construction phase, resulting in a planner that performs as well in simulation as it does on robots. Second, we present a comprehensive analysis of the trade-offs in performance and computational complexity and the interplay between local and global planner that is not specific to navigation.

II Related Work

II-1 Probabilistic roadmaps

PRMs [34] have been used in a wide variety of planning problems, from robotics [26, 45] to molecular folding [3, 53, 60]. They have also been integrated with reinforcement learning for state space dimensionality reduction [43, 53] by using PRM nodes as the state space for the reinforcement learning agent. In contrast, our work applies reinforcement learning to the full state space, as a local planner for PRMs. In prior work on an aerial cargo delivery task, we trained RL agents to track paths generated from PRMs constructed with a straight-line local planner [20]. PRMs have also been modified to work with moving obstacles [27, 55], noisy sensors [44], and localization errors [2, 4]. Safety PRM [44] uses probabilistic collision checking with a straight-line planner, associating with all nodes and edges a measure of potential collision. All of those methods address one source of error at a time. In contrast, PRM-RL uses an RL-based local planner capable of avoiding obstacles and handling noisy sensors and dynamics, and at the node connection phase the RL local planner performs Monte Carlo path rollouts with deterministic collision checking. We only add edges if the path can be consistently navigated within a tunable success threshold.

PRMs are easy to parallelize, either through parallel edge connections [5], sampling [8], or building sub-regional roadmaps [15] in parallel. To speed up the building of our large-scale roadmaps, we use an approach similar to [5], distributed across different computers in a cluster. Individual Monte Carlo rollouts of the edge connection can be parallelized across multiple processors, or run sequentially to allow for early termination.

II-2 Reinforcement learning in motion planning

Reinforcement learning has recently gained popularity in solving motion planning problems for systems with unknown dynamics [36], and has enabled robots to learn tasks that have previously been difficult or impossible [1, 10, 40]. For example, Deep Deterministic Policy Gradient (DDPG) [41] is a state-of-the-art algorithm that works with very high-dimensional state and action spaces and is able to learn to control robots based on unprocessed sensor observations [40].

Deep RL has been successfully applied to the navigation problem for robots, including visual navigation with simplified navigation controllers [7, 14, 25, 50, 56, 64], more realistic controllers in game-like environments [6, 13, 48], and extracting navigation features from realistic environments [10, 23]. In a local-planner setting similar to ours, a differential drive robot with 1-D lidar sensing, several approaches have emerged recently, using asynchronous DDPG [59], expert demonstrations [54], DDPG [42], curriculum learning [62], and AutoRL [12]. While any of these obstacle-avoidance agents could be used as a local planner in PRM-RL, we choose AutoRL for its simplicity of training, because AutoRL automates the search for the reward and the neural network architecture.

II-3 Hierarchical planners with reinforcement learning

Several recent works feature hierarchical planners combined with reinforcement learning, over a grid [17] or manually selected waypoints [32]. In these works, the roadgraph points are connected with respect to a straight-line planner, and the reinforcement learning agent is used as an execution policy at run-time. We use the obstacle-avoidance RL policy both as the execution policy and as the local planner that connects the edges in the roadgraph. This approach results in roadmap connectivity that is tuned to the abilities of the particular robot.

III Problem Statement

PRM-RL addresses the general problem of generating a valid executable path for a robotic system given task constraints. Given the collision-free portion of the configuration space and optionally a task predicate that must be satisfied , a valid point is a point in that satisfies the task predicate . Define a path as a sequence of waypoints ; we assume time is discrete for the purposes of exposition, but this is not strictly required. A valid path consists of only valid waypoints: .

A robot’s observation space is generally not identical to , but is produced by a sensor process that can be modeled as a combination of inherent sensor dynamics and a source of noise: . Similarly, actions in the robot’s action space have a state-dependent effect which also can be modeled as a combination of inherent robot dynamics and a source of noise: .
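The following sketch illustrates this observation and action model in code, assuming additive Gaussian noise and hypothetical sensor_model and dynamics_model helpers; the function names and the additive-Gaussian form are not from the paper and are used only for illustration.

```python
import numpy as np

# Illustrative sketch only: observations as sensor dynamics plus noise, and
# actions whose state-dependent effect is robot dynamics plus noise. The
# additive-Gaussian form and the helper names are assumptions for this example.

def observe(state, sensor_model, sigma_obs, rng):
    clean = sensor_model(state)                      # inherent sensor dynamics
    return clean + rng.normal(0.0, sigma_obs, size=np.shape(clean))

def apply_action(state, action, dynamics_model, sigma_act, rng):
    noisy_action = np.asarray(action) + rng.normal(0.0, sigma_act, size=np.shape(action))
    return dynamics_model(state, noisy_action)       # inherent robot dynamics

# Example with a trivial 2-D point robot: the sensor reports position directly
# and the dynamics integrate a velocity command for one time step.
rng = np.random.default_rng(0)
state = np.zeros(2)
obs = observe(state, lambda x: x, sigma_obs=0.05, rng=rng)
state = apply_action(state, [0.2, 0.0], lambda x, a: x + 0.1 * a, sigma_act=0.02, rng=rng)
```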

Given a valid start configuration and a policy, an executed trajectory is a sequence of configuration states that results from drawing actions from the policy and its noise processes. A trajectory is considered a failure if it produces an invalid point by exiting the free space or failing to satisfy the task predicate, or if it exceeds a task-specific step limit. Given a valid observable goal configuration, a non-failed trajectory is considered a success if it reaches a point sufficiently close to the goal with respect to a task-dependent threshold, at which point the task is considered to have completed.

With respect to a given path-interpreting policy, a valid executable path is one which the policy can reliably execute to achieve task completion: guiding the agent from the start state to within the goal threshold of the goal state within the time-step limit, along an executed trajectory that remains in the valid space and satisfies the task predicate. Because noise makes execution stochastic, a path is defined to be reliable if the policy's probability of task completion along it exceeds a task-dependent success threshold.

In other words, PRM-RL is a tool for generating paths which an RL agent can reliably satisfy without violating the constraints of its task. In [21] we demonstrated PRM-RL’s success on tasks with constraints, but in this work, we focus solely on the navigation task, which collapses the task predicate to remaining within , and collapses the full configuration space available to the robot to a task space limited to the robot’s position and orientation.

IV Methods

The PRM-RL method has three stages: training an environment-independent local planner policy with RL, creating a roadmap specific to that local planner and an environment, and deploying that roadmap and policy to the environment for querying, trajectory generation and execution. Fig. 4 illustrates the method.

First, in the training stage (Fig. 4a), to enable a robot to perform a specific task, we train a local planner policy with RL. For indoor navigation, that task is short-range point-to-point navigation, end-to-end from sensors to actions. This task is independent of the environment that the robot will eventually operate in. The RL agent learns to perform the task in an environment comparable to the deployment environment, but with a small state space to make training tractable. This is a Monte Carlo simulation process: we train multiple policies and select the fittest one for the next stage of PRM-RL, regardless of the learning algorithm used.

Next, in the creation phase (Fig. 4b, upper), to prepare a robot to work in a specific environment, we use this best policy as a local planner to build a PRM with respect to the target site. Obstacle maps, such as floor plans or SLAM maps, can be used for any robot we wish to deploy in a building, as long as the policy transfers well to the real world. This is a one-time setup per robot and environment. Points are sampled as in a standard PRM, but PRM-RL adds an edge to the roadmap only when the agent can navigate it in simulation with a probability greater than the success threshold over repeated trials. Rather than being determined by the geometry of free space, the resulting roadmap is tuned to the capabilities of the particular robot, so different robots over the same floor plan may generate different roadmaps with different connectivity.

Finally, in the deployment phase (Fig. 4b, lower), to perform the task in the environment, the constructed roadmap is queried to generate trajectories, which are executed by the same RL agent used to generate the roadmap. Querying the roadmap can be done as in a standard PRM; alternately, during construction we can record additional data about executed trajectories in the roadmap to enable other trajectory metrics, such as minimal energy consumption, shortest time, and so on, which are not available with geometry-only approaches. At execution time, the RL agent navigates to the first waypoint. Once the agent is close enough to the waypoint, the next waypoint becomes its new goal; the process repeats until the agent has traversed the whole trajectory.

(a) RL Training
(b) PRM-RL Deployment
Fig. 4: The PRM-RL approach. a) RL learns a model of task and system dynamics. This enables the construction of a local planner and the generation of a PRM-RL roadmap. This roadmap and policy can then be executed on-robot using the same local planner. b) The same policy can generate roadmaps for different floorplans, enabling deployment to many sites.

IV-A RL Agent Training

To maintain independence from environments, we train the RL agent to perform its task without knowledge of the workspace topology: the transfer function that leads the system to task completion is conditioned only on what the robot can observe and what it has been commanded to do. Formally, we learn policies to control an agent that we model as a partially observable Markov decision process (POMDP), represented as a tuple of observations, actions, dynamics, noise, reward and discount. Observations, actions, dynamics and noise are determined by the characteristics of the robot and are continuous and multi-dimensional. Reward and discount are determined by the requirements of the task: the discount is a scalar factor, whereas the reward has a more complicated structure, including a true scalar objective representing the task and a weighted dense scalar reward based on observable features. We assume the presence of a simplified black-box simulator, without knowing the full non-linear system dynamics.

PRMs require a local planner to connect points; in its basic form, PRM-RL’s local planner must navigate a robot from a start position to a goal position without collision; we call this the point-to-point (P2P) task. The task space for navigation is , that is, a robot at a position and orientation. The navigable portion of the task space is typically many times the robot’s radius – for the spaces we consider, meter-scale robots in hundred-meter buildings.

IV-A1 P2P for differential drive robots

Our primary focus in this paper is on approximately cylindrical differential drive robots with lidar, represented by a unicycle or Type (2,0) model [57] whose configuration space is the robot's planar position and orientation. The observations are pairs of 1-D lidar vectors and goal observations collected over the last several steps. The actions are a 2-dimensional continuous vector that encodes the robot's linear and angular velocity. The dynamics and noise are implicit in the real world but are encoded separately in simulation in the system dynamics and an added noise model.

The true objective of the P2P agent is to maximize the probability of reaching the goal without collisions,

(1)

where is an indicator function, is the goal pose, and the goal size, and the zero-collision property is enforced by terminating episodes on collisions. The goal observation is the relative goal position in polar coordinates, which is readily available from localization. The reward for P2P for differential drive robots is:

(2)

where the reward terms are: 1 when the agent reaches the goal and 0 otherwise; the negative Euclidean distance to the goal; 1 when the agent collides with obstacles and 0 otherwise; the distance to the closest obstacle; a constant per-step penalty with value 1; and the negative angular speed. We train this model with AutoRL [12] over the DDPG [41] algorithm. AutoRL automates hyperparameter search in reinforcement learning using an evolutionary approach. It takes as inputs a true objective used to evaluate the agent, a parameterized dense reward that the agent uses to train itself, and optionally neural network architecture and algorithm hyperparameters. To train an agent, AutoRL typically optimizes these hyperparameters in phases. First, given an arbitrary or hand-tuned architecture, it trains several populations of RL agents with different reward parameters and optimizes over the true objective. Optionally, a second phase repeats the process with the dense reward fixed while searching over neural network architectures instead.
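As a concrete illustration of the dense reward described above, the sketch below computes it as a weighted sum of the listed components. The term names, the function signature, and the weighted-sum form are assumptions made for this example; the weights used in the example call are the differential-drive values reported in Table I.

```python
import numpy as np

def p2p_reward_diff_drive(pos, goal, goal_radius, collided, clearance,
                          angular_speed, theta):
    """Hedged sketch: weighted sum of the P2P reward components listed above."""
    dist_to_goal = np.linalg.norm(np.asarray(pos) - np.asarray(goal))
    r_goal = 1.0 if dist_to_goal < goal_radius else 0.0   # 1 when goal reached
    r_goal_dist = -dist_to_goal                           # negative distance to goal
    r_collision = 1.0 if collided else 0.0                # 1 on collision
    r_clearance = clearance                               # distance to closest obstacle
    r_step = 1.0                                          # constant per-step penalty term
    r_turning = -abs(angular_speed)                       # negative angular speed
    components = np.array([r_goal, r_goal_dist, r_collision,
                           r_clearance, r_step, r_turning])
    return float(np.dot(np.asarray(theta), components))

# Example call using the diff-drive weights from Table I.
reward = p2p_reward_diff_drive(pos=[1.0, 2.0], goal=[1.2, 2.1], goal_radius=0.3,
                               collided=False, clearance=0.8, angular_speed=0.1,
                               theta=[14.3, 0.17, -31.75, 0.45, -0.34, 0.41])
```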

IV-A2 P2P for car-like robots

To confirm that PRM-RL generalizes to other classes of agents, we also model car-like robots, represented with a kinematic single-track model [52] whose configuration space is restricted by the turning range of the front wheels. The true objective of P2P does not change for the car model, but because the turn radius of the car is limited and the car must perform more complex maneuvers, we choose a slightly different reward model:

(3)

where all terms are the same as for differential drive, except that the goal-distance term rewards the change in Euclidean distance to the goal, a backward-motion term is the negative of the backward speed and 0 for forward motion, and the clearance and turning terms are dropped. We train this model with hyperparameter tuning using Vizier [24] over the DDPG [41] algorithm, in a different training regime in which the car model is allowed to collide up to 10 times during training but is still evaluated on the true objective of zero collisions.

0:  : Start and goal.
0:   Success threshold.
0:  : Number of attempts.
0:  : Sufficient distance to the goal.
0:  : Maximum steps for trajectory.
0:  : Task predicate.
0:  : RL agent’s policy.
0:   Generative model of system dynamics.
0:   ,
1:  ,
2:  
3:  for   do
4:      // Run in parallel, or sequential for early termination.
5:      // Sample from the
6:      // state space
7:     , ,
8:     
9:     while  C-free  do
10:         ,
11:         
12:         
13:         
14:     end while
15:     if  then
16:         
17:     end if
18:     if  then
19:         return  False, 0, 0 // Not enough success, we can terminate.
20:     end if
21:     
22:     
23:  end for
24:  ,
25:  return  
Algorithm 1 PRM-RL AddEdge

IV-B PRM Construction

The basic PRM method works by sampling robot configurations in the robot's configuration space, retaining only collision-free samples as nodes in the roadmap. PRMs then attempt to connect the samples to their nearest neighbors using a local planner. If an obstacle-free path between nodes exists, an edge is added to the roadmap.

We modify the basic PRM by changing the way nodes are connected. Formally, we represent PRMs with graphs modeled as a tuple of nodes and edges. Nodes are always in free space, , and edges always connect valid nodes , but we do not require that the line of sight between those nodes is in , allowing edges that “go around” corners. Since we are primarily interested in robustness to noise and adherence to the task, we only connect configurations if the RL agent can consistently perform the point-to-point task between two points.

The robot's state space is a superset of the configuration space, because it represents not just the configuration of the robot but also its possible observations and other task-dependent state. Therefore, we sample multiple variations around the start and goal configuration points, and add an edge only if the success rate exceeds a threshold. Note that this means PRM-RL trajectories are not guaranteed to be collision-free paths; this is not a limitation of the method, but rather the unavoidable result of sensor noise. A standard PRM would not produce a collision-free trajectory in noisy or changing environments either. Nevertheless, when discussing the results, we estimate a lower bound on the probability of collision.

Algorithm 1 describes how PRM-RL adds edges to the PRMs. We sample multiple points from the state space, which correspond to the start and goal in the configuration space, and attempt to connect the two points over a number of trials. An attempt is successful only if the agent gets sufficiently close to the goal point. To compute the total length of a trajectory, we sum the distances for all steps plus the remaining distance to the goal. The length we associate with the edge is the average length over the successful attempts. The algorithm recommends adding the edge to the roadmap if the success probability is above the threshold.

The worst-case number of collision checks in Algorithm 1 is proportional to the number of attempts per edge, because multiple attempts are required for each edge to determine the success rate. While each trial of checking the trajectory could be parallelized across processors, an alternative approach is to serialize trajectory checking within Algorithm 1, terminating early if the tests fail too many times. Mathematically, for a given success threshold and desired number of attempts, a minimum number of trials must succeed; therefore, we can terminate as soon as enough trials have succeeded or the failures exceed the allowed complement. This can provide substantial savings if the success threshold is high, as shown in Section V-D.
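A minimal sketch of this edge test follows, assuming a hypothetical rollout function that runs one simulated episode of the policy and returns whether it reached the goal and the trajectory length; the early-termination bookkeeping mirrors the serialized variant described above.

```python
import math

def add_edge(start, goal, rollout, num_attempts=20, success_threshold=1.0):
    """Hedged sketch of the Monte Carlo edge test in Algorithm 1."""
    needed = math.ceil(success_threshold * num_attempts)   # successes required
    allowed_failures = num_attempts - needed
    successes, failures, total_length = 0, 0, 0.0
    for _ in range(num_attempts):
        reached, length = rollout(start, goal)              # one simulated episode
        if reached:
            successes += 1
            total_length += length
        else:
            failures += 1
            if failures > allowed_failures:   # early termination: threshold can
                return False, 0.0             # no longer be met
    mean_length = total_length / successes if successes else 0.0
    return successes >= needed, mean_length
```

With a success threshold of 1.0, a single failed rollout terminates the test immediately, which is the source of the collision-check savings reported in Section V-D.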

We use a custom kinematic 3D simulator which provides drive models for agents and supports visual sensors such as cameras and lidars. The simulator also provides parameterized noise models for actions, observations, and robot localization, which we find improves model training and robustness. Because the simulation is kinematic, stepping is fast compared to full-physics simulations, speeding up both RL training and roadmap building.

0:  Obstacle maps
0:   Generative model of system dynamics.
0:   Sampling density.
0:   Policy range.
0:   Number of processors.
0:   Success threshold.
0:  : Number of attempts.
0:  RL policy, Roadmaps,
1:  Train RL agent with [12] given as described in Section IV-A.
2:  for  // For each environment. do
3:     Sample map with density and store candidate edges w.r.t. .
4:     Partition candidate edges in subsets, .
5:     for  // In parallel. do
6:         for  // In parallel. do
7:            if AddEdge: Run Alg 1 with  then
8:               Add nodes and edge to the roadmap
9:            end if
10:         end for
11:     end for
12:     Store
13:  end for
14:  return  RL policy, Roadmaps,
Algorithm 2 PRM-RL Build roadmaps

For larger roadmaps, an effective strategy is to parallelize calls to Algorithm 1. Algorithm 2 describes the roadmap building procedure, where an RL agent is trained once and used in several environments. While building a roadmap for each environment, we first sample the obstacle map to the given density and store all candidate edges. Two nodes that are within the RL policy range are considered a candidate edge. Next, we partition all the candidate edges into subsets for distributed processing. Each candidate edge is considered in turn and is added, along with its nodes, to the roadmap if the AddEdge Monte Carlo rollouts return a positive outcome.
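The sketch below mirrors the structure of Algorithm 2 for a single environment, under the assumption of hypothetical sample_free_space and check_edge helpers (the latter standing in for the Monte Carlo AddEdge test); the partitioning across workers is handled here by a local multiprocessing pool rather than a cluster.

```python
from itertools import combinations
from multiprocessing import Pool

import numpy as np

def build_roadmap(sample_free_space, check_edge, num_samples, policy_range,
                  num_workers=8):
    """Hedged sketch of the roadmap-building loop for one environment."""
    nodes = [sample_free_space() for _ in range(num_samples)]
    # Candidate edges: pairs of nodes within the RL policy's effective range.
    candidates = [(i, j) for i, j in combinations(range(len(nodes)), 2)
                  if np.linalg.norm(np.subtract(nodes[i], nodes[j])) <= policy_range]
    # Partition the candidate edges across workers; check_edge must be a
    # module-level (picklable) function for multiprocessing to dispatch it.
    with Pool(num_workers) as pool:
        results = pool.starmap(check_edge,
                               [(nodes[i], nodes[j]) for i, j in candidates])
    # Keep only the edges whose Monte Carlo rollouts succeeded.
    edges = [(i, j, length) for (i, j), (ok, length) in zip(candidates, results) if ok]
    return nodes, edges
```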

0:  PRM
0:  : RL agent’s policy.
0:  : Start and goal.
0:  : Sufficient distance to the goal.
0:  : Maximum steps for trajectory.
1:  Add start and goal to if needed.
2:  Query PRM and receive list of waypoints .
3:  for  do
4:     Set as a goal for the RL agent .
5:     Set current state, as start state
6:     
7:     while  is not within from  do
8:         Apply action and observe the resulting state as new current state
9:         
10:         if  is in collision then
11:            return  Error: Collision.
12:         end if
13:         if  then
14:            return  Error: Timeout.
15:         end if
16:     end while
17:  end for
18:  return  Success.
Algorithm 3 PRM-RL Navigate

IV-C Navigation

Finally, Algorithm 3 describes the navigation procedure, which takes a start and a goal position. As in traditional PRMs, these are added to the roadmap if not already there. The roadmap is then queried to produce a list of waypoints. If no waypoints are returned, the algorithm returns the start and goal as the path, giving the RL agent the opportunity to attempt to navigate on its own. During execution, the RL agent is given one waypoint at a time as a sub-goal, clearing waypoints sequentially as it gets within the goal distance, until the final destination is reached or the step limit is exceeded.
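A compact sketch of this execution loop is shown below; query_roadmap, policy, and env are hypothetical stand-ins for the roadmap query, the trained RL agent, and the robot or simulator interface, and the goal distance and step limit are placeholder values.

```python
import numpy as np

def navigate(query_roadmap, policy, env, start, goal,
             goal_distance=0.3, max_steps=2000):
    """Hedged sketch of the waypoint-following loop in Algorithm 3."""
    # If the roadmap returns no waypoints, fall back to a direct attempt.
    waypoints = query_roadmap(start, goal) or [start, goal]
    state, steps = env.reset(start), 0
    for waypoint in waypoints:
        # The current waypoint becomes the RL agent's sub-goal.
        while np.linalg.norm(env.position(state) - np.asarray(waypoint)) > goal_distance:
            action = policy(env.observation(state), waypoint)
            state, collided = env.step(action)
            steps += 1
            if collided:
                return "collision"
            if steps > max_steps:
                return "timeout"
    return "success"
```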

V Results

We evaluate PRM-RL's performance on both floormaps and SLAM maps, with respect to noise resilience, and with on-robot experiments. Section V-A describes the robot and training setup, evaluation environments, and baselines. Section V-B evaluates the performance on the floormaps against the baselines. With the performance established against the baselines, the following sections examine PRM-RL's characteristics in more depth. Section V-C assesses navigation robustness to noise, Section V-D studies the role of the success threshold and sampling density, and Section V-H covers the applicability of PRM-RL to robots with dynamic constraints. Since one of our goals is to assess PRM-RL for real-world navigation, the remainder of the results focuses on on-robot evaluation and sim2real transfer. Section V-E evaluates PRM-RL on noisy SLAM maps, and Section V-G analyzes the sim2real transfer and on-robot experiments.

V-A Methodology

V-A1 Robot setup

We use two different robot kinematic models, differential drive [38] and simple car model [38]. Both are controlled with linear and angular velocities commanded at Hz, receive goal observation from off-the-shelf localization, and are represented as circles with radius. The obstacle observations are from 1D lidar data (Fig. 5), with a FOV resampled to rays. Following [49], we use an observation trace of the last 3 frames to provide a simple notion of time to the agent. We use differential drive robots like the Fetch [47] for physical experiments.

Fig. 5: Lidar observation that the differential drive robot uses for navigation. The background image shows a hallway with a clear path ahead of the robot and walls to the left and right. The inset shows the lidar image: white regions indicate no obstacles within sensing range, and progressively darker regions indicate closer and closer obstacles.
Reward component                        Min       Max      Diff Drive   Car Model
1 when goal reached and 0 otherwise      0.0     100.0        14.30        0.82
Negative Euclidean distance to goal      0.0       1.0         0.17        N/A
Delta of Euclidean distance to goal      0.0       5.0         N/A         2.03
1 on collision and 0 otherwise        -100.0       0.0       -31.75       -1.80
Distance to closest obstacle             0.0       1.0         0.45        N/A
Constant per-step penalty               -1.0       0.0        -0.34       -0.10
Negative angular speed                   0.0       1.0         0.41        N/A
Negative backward speed                 -1.0       0.0         N/A        -0.64
TABLE I: P2P Reward Components and Their AutoRL-Tuned Values

V-A2 Obstacle-avoidance local planner training

We use the P2P agent from [12] for the differential drive robot, and train an agent with AutoRL over DDPG [12] for the car robot. In both cases, the true objective is reaching the goal within the goal radius, allowing the RL agent to cope with noisy sensors and dynamics. Table I shows the learned reward hyperparameters. The DDPG actor and critic are feed-forward fully-connected networks. Actor networks are three layers deep, while the critic consists of a two-layer observation network joined with the action network by two fully connected layers, resulting in actor layer widths of 50, 20, and 10, and critic widths of 10 and 10. Appendix VII contains the training hyperparameter details.
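A minimal sketch of network shapes consistent with this description, using standard tf.keras layers, is shown below. The input sizes, activations, and the widths of the critic's observation branch are assumptions; in the paper these choices are ultimately made by AutoRL's architecture search, so this is an illustration rather than the exact architecture.

```python
import tensorflow as tf

def build_actor(obs_dim, action_dim):
    # Three fully connected layers of widths 50, 20, 10, then the action head.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(50, activation="relu", input_shape=(obs_dim,)),
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(action_dim, activation="tanh"),  # linear/angular velocity
    ])

def build_critic(obs_dim, action_dim):
    obs_in = tf.keras.Input(shape=(obs_dim,))
    act_in = tf.keras.Input(shape=(action_dim,))
    # Two-layer observation branch (widths assumed), joined with the action
    # input by two fully connected layers of width 10, then a scalar Q-value.
    obs_branch = tf.keras.layers.Dense(10, activation="relu")(obs_in)
    obs_branch = tf.keras.layers.Dense(10, activation="relu")(obs_branch)
    joint = tf.keras.layers.Concatenate()([obs_branch, act_in])
    joint = tf.keras.layers.Dense(10, activation="relu")(joint)
    joint = tf.keras.layers.Dense(10, activation="relu")(joint)
    return tf.keras.Model(inputs=[obs_in, act_in],
                          outputs=tf.keras.layers.Dense(1)(joint))
```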

The training environment is the smallest map (Fig. 3a). To simulate imperfect real-world sensing, the simulator adds Gaussian noise to its observations.

Environment           Type        Dimensions   Visual
Training              Floor map   by           Fig. 3a
Building 1            Floor map   by           Fig. 3d
Building 2            Floor map   by           Fig. 3c
Building 3            Floor map   by           Fig. 3b
Building Complex      Floor map   by           Fig. 2
Physical Testbed 1    SLAM        by           Fig. 6a
Physical Testbed 2    SLAM        by           N/A (private)
TABLE II: Environments

V-A3 Evaluation environments

Table II lists our simulation environments, all derived from real-world buildings. Training and Buildings 1-3, depicted in Fig. 3, and the Building Complex (Fig. 2) are metric maps derived from real building floor plans. They are between 12 and 200 times larger than the training environment by area. Physical Testbed 1 (Fig. 6) and Physical Testbed 2 are derived from SLAM maps of the robot deployment environments.

(a) SLAM Map - by
(b) Roadmap - 0.4 samp. / at succ.
(c) Robot Trajectories - 10 runs with success
Fig. 6: PRMs for robot deployments are built over SLAM maps of the target environment. a) SLAM map collected at the site of a robot deployment. b) PRM-RL learns the effective connectivity of the map for the given policy. Note the blow-up of the T-junction in the lower right, which the PRM does not connect because a support pillar in the center of the corridor forms a hazard for the robot’s current policy. c) This point appears on our QA test plan, leading to a real-world failure at the same location; paths in connected areas often succeed.

V-A4 Roadmap building

For simplicity we use uniform random sampling to build roadmaps. We connect PRMs with an effective success threshold over 20 attempts, and a max connection distance based on the effective navigation radius of our P2P agents per [12], except where stated otherwise.

Label      Local Planner (LP)   Execution Policy   Obstacle Avoid LP   Obstacle Avoid Exec.   Monte Carlo roll-outs   Description
AutoRL     N/A                  AutoRL             N/A                 Yes                    No                      The AutoRL policy [12] without guidance by a PRM.
PRM-SL     Straight Line        Straight Line      No                  No                     No                      Straight-line PRMs [34] with a straight-line execution policy.
PRM-GAPF   Straight Line        Guided APF         No                  Yes                    No                      Straight-line PRMs [34] executed by guided APF similar to [11].
ICRA18     DDPG                 DDPG               Yes                 Yes                    Yes                     PRM-RL with DDPG for roadmap and execution ([21]).
PRM-RL     AutoRL               AutoRL             Yes                 Yes                    Yes                     PRM-RL with AutoRL for roadmap and execution (ours).
TABLE III: Baselines

V-A5 Baselines

We select four baselines, shown in Table III. The baselines differ in the local planner, which is used for building the roadmap, and the execution policy, which guides the robot. The baseline selection criteria were the obstacle-avoidance ability of a policy and its stochasticity. Recall that PRM-RL uses a stochastic policy capable of obstacle avoidance for connecting nodes in the roadmap, and relies on Monte Carlo roll-outs of the policy.

The selected baselines are AutoRL [12], PRM-SL [34], PRM-GAPF (a modification of [11]), and our original PRM-RL with a hand-tuned RL policy [21]. The AutoRL [12] baseline policy is not guided by a PRM. PRM-SL [34] uses roadmaps built with a straight-line planner and a straight-line execution policy. PRM-GAPF uses PRMs built with a straight-line local planner and an execution policy guided by an artificial potential field over the PRM-SL path, similarly to [11]. ICRA18 [21] is PRM-RL with hand-tuned DDPG as the planner. Where not otherwise specified, PRM-RL refers to our current approach. We do not compare PRM-RL with RRTs because they are one-time planners and are relatively expensive to build on the fly for long-range trajectories.

For comparisons to baselines, unless otherwise stated, each roadmap is evaluated on 100 queries with start and goal positions sampled at separations within a fixed range.

Fig. 7: PRM-RL’s success is nearly , outperforming AutoRL, PRMs using straight-line planning, PRM-GAPF and ICRA18. PRM-RL outperforms both our prior work and non-learned alternatives.

V-B PRM-RL Performance on Floorplan Maps

Figure 7 shows the success rate and path length comparison over 10,000 queries. PRM-RL's average success rate over all three maps is the highest, outperforming pure AutoRL, PRM-SL, PRM-GAPF, and ICRA18. Numerical results are in Table IV in Appendix VII. Outperforming the non-guided AutoRL policy is not surprising as it does not have knowledge of the obstacle map; note that AutoRL's performance degrades rapidly on maps other than the training environment, while PRM-guided policies show less degradation and occasional improvement.

We can draw two observations from these results. First, they provide evidence that PRM-RL successfully enables a local planner trained on one environment to perform well in environments that it was not trained on, and furthermore can enable it to perform better on the original training environment than it can on its own. This is not surprising, as our local planners were not designed to travel long distances, but guiding the local policy via the PRM can more than double its success rate. Second, the PRM success rate is tightly correlated with the success of the local planner. The P2P policy used in PRM-RL outperforms the one used in ICRA2018 by 31% and APF by 48%, as reported in [12]. Extended to PRMs, the performance lifts are 19.9% and 33%. This is strong evidence that investing in a high-fidelity local planner increases PRM-RL's performance on the overall navigation task.

(a) Success Rate vs Lidar Noise
(b) Success Rate vs Goal Noise
(c) Success Rate vs Action Noise
Fig. 8: PRM-RL shows more robustness to noise than PRM-GAPF. PRM-RL performance is in blue while PRM-GAPF is in green; individual maps are marked with T, 1, 2, 3, and S for Training, Buildings 1-3, and the SLAM map. The thick lines mark the means and their light borders the standard deviations. a) PRM-RL degrades smoothly as lidar noise rises, with typically no more than a small drop, while PRM-GAPF degrades quickly with increasing lidar noise. b) PRM-RL also shows smooth degradation with position noise (modeled as goal uncertainty); PRM-GAPF degrades more steeply with position noise (modeled as localization uncertainty). c) Action noise causes the sharpest degradation on PRM-RL; action noise has less effect on PRM-GAPF, but it is still worse than PRM-RL except on the SLAM map at high noise levels.

V-C PRM-RL Robustness to Noise

Real robot sensors and actuators are not perfect, so it is important for navigation methods to be resilient to noise. Fig. 8 shows the evaluation of PRM-RL and PRM-GAPF over simulated Gaussian noise sources with zero mean and standard deviations in the following ranges:

  • Lidar sensor noise from -, over three times the radius of the robot.

  • Goal position noise from -, over twice the radius of the goal target.

  • Action noise of velocity - and angular velocity .

As lidar and goal position noise increases, PRM-RL shows only a slight degradation, typically no more than an drop in success even at . Action noise shows a steeper degradation, from to depending on the environment. In contrast, PRM-GAPF degrades steeply with respect to lidar noise, achieving success rates of and dropping from the peak score. Localization noise for PRM-GAPF, while not directly comparable to the goal noise collected for PRM-RL, also dropped off to success scores of . While PRM-GAPF seems to be somewhat more resistant to increased action noise, dropping off only from its highest score, it nonetheless never achieves more than success rate and was almost always worse than PRM-RL.

PRM-RL is resilient to lidar, localization and actuator noise on the order of tens of centimeters, which is larger than the typical lidar, localization and actuator errors we observe on our robot platforms, indicating that PRM-RL is a feasible approach to dealing with the kinds of errors our robots actually encounter. This is similar to the trend in noise sensitivity reported in [12], and suggests that overall navigation resilience to noise is correlated with that of the local planner.

(a) Success rate over sampling density
(b) Collision Checks over sampling density
Fig. 9: Increasing the density of sampling improves performance, but at the cost of time for roadmap construction. a) As density increases, RL agents guided by PRM-RL succeed more often with a sweet spot of node per square meter. b) Cost rises prohibitively as sampling rises; over node per square meter, collision checks for the training map surpass our largest floorplan roadmaps collected at per square meter (dotted line).

V-D The Impact of Sampling Density and Success Thresholds

To deploy PRM-RL on real robots, we need to choose sampling density and success threshold parameters that provide the best performance.

Figure 9a shows that the PRM-RL success rate increases steadily up to a sampling density corresponding to roughly twice the size of our robot, and then levels off. The trend appears to be robot-specific rather than environment-specific. At the same time, collision checks increase rapidly with sampling density (Fig. 9b). While traditional PRM theory predicts that performance would continue to improve with increased sampling [33, 28], these results raise the possibility that beyond this density PRM-RL performance is robot-noise-bound, and that sampling beyond this density provides little benefit at rapidly increasing cost. For that reason, we suggest finding the optimal sampling density in the small training environment, at a fraction of the cost, and using it when building larger roadmaps.

(a) Connection Threshold % vs PRM-RL eval
(b) Threshold % vs Collision Checks
Fig. 10: Increasing required edge connection success improves performance and reduces collision checks. a) As the threshold for connecting nodes in the PRM rises, RL agents guided by PRM-RL succeed more often with a sweet spot of and higher. b) Furthermore, early termination enables PRM-RL to skip unneeded connectivity checks for savings exceeding , an effect important on large roadmaps.

To determine the appropriate connection success threshold, we survey navigation success over our training environment and a SLAM map (Fig. 10). These results show that the agent's ability to navigate the map increases as the threshold increases (Fig. 10a). Because our connection algorithm terminates earlier for higher thresholds when it detects failures, collision checks drop as the success threshold rises (Fig. 10b). In the end, for larger roadmaps, a success threshold of 100% not only produces the most reliable roadmaps, but also requires fewer collision checks to build them.

For the evaluations in simulation reported earlier in this paper, we choose parameters which enabled comparison with [21]: a sampling density of samples per meter and an effective success connection threshold of . For the two roadmaps in Fig 10, this results in an average collision check savings over .

However, if comparison to a baseline is not important, these results suggest choosing map densities as high as samples per meter with as high a success connection threshold as possible. For the physical deployments we discuss later, we choose a success connection threshold of and generate maps at and samples per meter and empirically measure performance; as predicted by the simulation results, we observe better performance on denser maps.

V-E PRM-RL Performance on SLAM Maps

Floorplans are not always available or up-to-date, but many robots can readily generate SLAM maps and update them as buildings change. To assess the feasibility of using SLAM maps, we evaluate PRM-RL on SLAM maps generated from some of the same buildings as our floorplan roadmaps. Figure 6 illustrates a sample roadmap, corresponding to Physical Testbed 1, part of floor two of Building 1 (Fig. 3d) and the upper center section of the large-scale roadmap in Figure 2. This SLAM-derived roadmap has 195 nodes and 1488 edges with 2.1 million collision checks.

We compare PRM-RL's performance with PRM-GAPF on this map. PRM-GAPF achieves a success rate of , whereas PRM-RL's success rate is , a relative increase. Close inspection of the failure cases indicated that some of these connections are genuinely hard for our current policies due to features of this map. For example, the inset in Figure 6b shows an area which appears navigable at first glance but which PRM-RL refuses to connect: closer inspection reveals pixels in the SLAM map which correspond to a pillar in reality, which humans can easily walk around but which is challenging for our robot to navigate around due to the width of its base.

These results show that while the performance of PRM-RL with an AutoRL policy on SLAM roadmaps is not as good as its average on the floorplan roadmaps in our tests, it is better than the baselines on the floorplan maps, and is roughly equivalent to PRM-RL with the ICRA2018 policy on these floorplan maps. These results indicate PRM-RL performs well enough to merit tests on roadmaps intended for real robots at physical sites, which we discuss in the following two sections.

V-F Scaling PRM-RL to Large-Scale Maps

Our robot deployment sites are substantially larger than our simulated test maps, raising the question of how PRM-RL would scale up. For example, the SLAM map discussed in the previous section is only part of one building within a 4-building complex; where the SLAM map is by , a map of the 4-building complex is by . To assess PRM-RL's performance on large-scale maps, we built and tested roadmaps covering the complete footprint of both deployment sites.

Figure 2 depicts a large floorplan roadmap from the 4-building complex. This roadmap has 15.9 thousand samples and 1.4 million candidate edges prior to connection attempts, of which 689 thousand edges were confirmed at a success threshold. This roadmap took 4 days to build using 300 workers in a cluster and required 1.1 billion collision checks. PRM-RL successfully navigates this roadmap of the time evaluated over 1,000 random navigation attempts with a maximum path distance of (note our other experiments use , which generally will not cross the skybridge in this larger map; for reference, using our default evaluation settings, PRM-RL navigates this roadmap of the time).

For our other candidate robot deployment site, we use a large SLAM map, by . We constructed a roadmap with 2069 nodes and 53,800 edges, collected with 42 million collision checks at the higher success threshold of . PRM-RL successfully navigated this of the time, evaluated over 1,000 random navigation attempts; as on our smaller SLAM map, examination of the failure cases indicates that the more complex geometry recorded by SLAM proves problematic for our current policies.

These results indicate that PRM-RL’s simulated performance on roadmaps continues to surpass the average success threshold we observed previously in [21], making it worthwhile to test on real robots at the physical sites.

(a) Sparse Map
(b) Dense Map
Fig. 11: Trajectories collected from PRM-RL execution on real robots. a) Several queries executed on a differential drive robot and tracked with onboard localization in a real office environment in Physical Testbed 2. Trajectories are in color, circles represent successful navigation, X's represent emergency stops, and the scale is in meters; the floorplan and PRM connectivity are not displayed for privacy. b) With a denser roadmap, the success rate is higher, but we observe more failures in the presence of environment changes at PRM waypoints. The longest successful circuit conducted on this map was with a longest individual PRM-RL trajectory of .

V-G Transfer of PRM-RL to Physical Robots

We empirically evaluate PRM-RL in two physical environments on two differential-drive robots. First, we evaluate PRM-RL with the AutoRL policy for a differential-drive robot in Physical Testbed 1 (Fig. 6a). We collected 43 trajectories accumulating over traveled. PRM-RL exhibited an overall success rate of over 39 runs; the longest successful trajectory was meters. Fig. 6c shows a sample of 10 of these runs. The video is available at https://youtu.be/xN-OWX5gKvQ

Second, we evaluate PRM-RL in Physical Testbed 2. For a variant of the roadmap generated at success rate and density of samples per meter, we collected 31 trajectories over of travel, shown in Fig. 11a; the longest successful trajectory was . We cannot directly compare this evaluation to our simulated runs because the e-stop policies designed to protect the robot do not match our simulation termination conditions; nevertheless, we recorded 14 successful runs out of 31, a success rate. We then tested a variant of the roadmap generated at success rate and a density of samples per meter over 49 runs over of travel, for an improved success rate, shown in Fig. 11b; the longest successful trajectory was . These runs include a complete circuit where PRM-RL successfully navigated between a loop of waypoints over . Interestingly, this much more successful map showed qualitatively worse behavior on some paths by attempting to navigate through PRM waypoints blocked by temporary obstructions.

Fig. 12: PRM-RL's performance in simulation and on real robots. SLAM-based PRM-RL closes the sim2real gap.

Figure 12 summarizes these results. Despite the more aggressive episode termination policies on-robot (near-collisions are treated as failures on-robot) and the more limited available search space, we nonetheless observe similar results: over several different roadmaps constructed at different densities and success criteria, PRM-RL built with noisy SLAM maps achieves success in simulation and success on-robot, closing the sim2real gap. This makes SLAM-based PRM-RL simulation results a good predictor of on-robot performance.

(a) Roadmap built using car model
(b) Example car model trajectory
Fig. 13: PRMs can also be built for agents with dynamic constraints. a) Roadmap built for our training environment using a nonholonomic car model. b) Example trajectory for the car model; the upper right shows a three point turn to change the robot orientation. Yellow lines are the PRM path, the blue line is the agent’s trajectory, and the white lines indicate progress towards its true goal.

V-H PRM-RL with Dynamic Constraints

To demonstrate that PRM-RL can guide more complicated robots, we develop a drive model for a simulated F1/10 car [16] with dynamics following [52]. Average success over the four maps in Figure 3 is with a standard deviation of ; average success in simulation on Physical Testbed 1 is . Figure 13a illustrates a roadmap built with this model over our training map with samples per meter connected with a success rate; this roadmap has 32 nodes and 313 edges connected after 403 thousand edge checks. On this roadmap, PRM-RL exhibits an success rate, including cases where the car needs to make a complex maneuver to turn around (Fig. 13b). These results are as good as the on-robot results for the dense map of our larger robot deployment, indicating that PRM-RL is a viable candidate for further testing on more complicated robot models.

(a) Effective vs Actual Radius
(b) Connections Possible with PRM-RL
(c) Increased Density of PRM-RL
Fig. 14: PRM-RL enables capturing valid samples not visible by line of sight. a) The ability of RL to go around corners makes obstacles effectively smaller in configuration space. b) This means more connections can be made for a given connectivity neighborhood. Solid black arrows represent valid connections for either PRM-SL or PRM-RL, dotted red arrows represent invalid connections for either method, and blue arrows indicate valid trajectories recovered by PRM-RL. c) Compared to PRM-SL, PRM-RL recovers many more potential connections as obstacles grow denser and RL gets better.
(a) Effect of Noise in Corridors
(b) Reliability with 1 vs 2 Walls
(c) Overconfidence of PRM-SL vs PRM-RL
Fig. 15: PRM-RL enables capturing hazards in the environment difficult to learn with RL. a) RL agents may learn to avoid obstacles, but not every location in the environment has identical clearance; on the left the robot can hug one wall, but on the right it must pass between two walls, so uncertainty in controls leads to two possible failure modes. b) Therefore, as paths lengthen, the chance of navigating reliably drops faster in corridors than in wall hugging (blue region). c) This leads to RL overconfidence in a regime between where both PRM-RL and RL are reliable and where they both are not; PRM-RL can encode this in the roadmap by deleting the right edge in (a).

VI Analysis

In the previous section, we empirically established correlations between the local planner's competence and PRM-RL's resilience to noise; we also explored the contributions of sampling densities, success thresholds and obstacle map sources to the success of overall navigation. We concluded that 1) using an execution policy that is resilient to noise and avoids obstacles as a local planner improves the overall success of the hierarchical planner; 2) a success threshold of 100% improves overall navigation success; 3) the upper bound on navigation success depends not on sampling density but on policy performance and robot type; and 4) using realistic obstacle maps, such as SLAM maps, as a basis for building roadmaps provides simulation results closer to reality.

This section provides a deeper analysis of those empirical findings. Section VI-A analyzes the impact of the local planner's obstacle avoidance and noise resilience on roadmap construction. Section VI-B examines the computational complexity of PRM-RL, and Section VI-C discusses causes of failure for trajectories over multiple waypoints.

VI-A PRM-RL Roadmap Connectivity

Unlike straight-line planners, RL agents can often go around corners and smaller obstacles; Figure 14 shows how this effectively transforms the configuration space to make obstacles smaller. While the agent never traverses these corner points, as they are not in , they nevertheless do not serve to block the agent’s path, unlike central portions of a larger body which might block or trap the control agent in a local minimum. If we model this phenomenon as an effective reduction in radius of a circular obstacle with respect to a policy , and further model the connection region as a disc filled with randomly sampled obstacles from to in total area density , we can estimate an upper bound on connection success as the idealized case in which obstacles do not occlude and the chance of connection is just the complementary probability of encountering an obstacle over a region of space, ; this corresponds to the looser bound in the RL case. Combining these, a conservative estimate of the proportion of samples connected by PRM-RL to those connected by PRM-SL is:

(4)

This simplified model indicates that as obstacle density increases, it becomes harder to connect points, but as RL's ability to cut corners increases, PRM-RL has an increasing advantage over PRM-SL in connecting these difficult roadmaps. Hence, in environments with dense obstacles, it makes sense to invest in execution policies that avoid obstacles well, and to use them as local planners. Alternately, it suggests that policies that can learn strategies for dealing with frequently occurring obstacle topologies, like box canyons and corners, are a fruitful area for future work.

Conversely, PRM-RL does not connect nodes in a roadmap where a policy cannot navigate reliably. This is the key difference from standard PRMs, and is the cause of the upper limit on performance improvements as roadmaps increase in density: the roadmaps are policy-bound rather than sampling-bound. One question to ask is why local control policies cannot simply learn to "drive safe" under the guidance of some on-the-fly path planner like A*. However, an analysis of how noise impacts the behavior of policies indicates that policies which do not memorize the environment may overestimate their ability to navigate, because the hazards that they can observe locally may not represent an accurate picture of the hazards of the global environment.

To see why, suppose a policy has learned to navigate safely in the presence of noise by maintaining a distance $d$ from walls. Modeling time as discrete and assuming the robot travels at a constant speed, so that on each time step it moves a constant distance $\Delta$ forward, let us further model sources of variability as transient Gaussian noise orthogonal to the robot's travel with zero mean and standard deviation $\sigma$; this results in a probability of collision per step of

$$p_c = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{d}{\sigma\sqrt{2}}\right)$$

(the tail of the Gaussian noise model beyond $d$, i.e., its complementary cumulative distribution function evaluated at $d$, expressed in terms of the complementary Gauss error function $\mathrm{erfc}$). Figure (a) shows that when the robot is traveling in a narrow corridor, it is twice as likely to collide as when hugging a single wall, even though it maintains the distance $d$ from any given wall at all times. Over a path of length $L$, a conservative lower bound on the chance of collision rises exponentially with the number of steps it takes to traverse the path,

$$P_{\mathrm{collision}}(L) \;\geq\; 1 - (1 - p_c)^{\lceil L/\Delta \rceil}, \qquad (5)$$

causing the narrow corridor case to become unsafe faster than the wall-hugging case, as shown in the blue region of Figure (b). This means that an RL policy that judges its safety based on locally observable features can overestimate the safety of a path in the region between where both PRM-RL and RL would succeed and where both would fail (Figure (c)); the same would be true of RL guided by PRM-SL based on clearance, such as in grid-based sampling [17] and Safety PRM [44]. In this case, RL or RL guided by PRM-SL can make erroneous navigation choices, whereas PRM-RL simply does not add that link to the roadmap. While in theory an agent could be designed to cope with this specific issue, other scenarios present similar problems: no local prediction of collision probability can give a true guarantee of trajectory safety. While this is an idealized analysis, our experience is that real-world environments can be more pathological - for example, at one site curtains behind a glass wall induced a persistent 0.5 meter error in the robot's perceived location, causing it to drive dangerously close to the wall for long distances despite the presence of a safety layer. PRM-RL provides an automated method for avoiding such unreliable connections in the roadmap, resulting in roadmaps that are more sparsely connected but transfer more reliably on-robot.
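The following minimal sketch evaluates this model numerically, using the reconstructed per-step probability and the bound of Eq. (5); the clearance, noise, step, and path-length values are illustrative assumptions, not measurements from the paper.

```python
import math

def p_collision_per_step(d, sigma, walls=1):
    """Per-step collision probability when holding distance d from `walls`
    nearby walls, with zero-mean Gaussian lateral noise of std dev sigma:
    walls * 0.5 * erfc(d / (sigma * sqrt(2)))."""
    return walls * 0.5 * math.erfc(d / (sigma * math.sqrt(2)))

def p_collision_over_path(length, step, d, sigma, walls=1):
    """Lower bound of Eq. (5): 1 - (1 - p_c)**ceil(length / step)."""
    p_c = p_collision_per_step(d, sigma, walls)
    return 1.0 - (1.0 - p_c) ** math.ceil(length / step)

# Same clearance d from any one wall, but a corridor exposes the robot to two walls.
for walls, label in ((1, "wall-hugging"), (2, "narrow corridor")):
    p = p_collision_over_path(length=100.0, step=0.2, d=0.5, sigma=0.15, walls=walls)
    print(f"{label}: P(collision over 100 m) ~= {p:.2f}")
```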

VI-B PRM-RL Computational Complexity

The computational complexity of connecting a node to a roadmap in PRM-RL is larger by a constant factor than in PRM-SL, because point connection tests for PRM-RL require a sequence of simulated Monte Carlo rollouts, whereas PRM-SL requires a single straight-line collision test. In addition, the empirical evaluations show that there is a sampling density beyond which the navigation success rate of PRM-RL does not improve. Here we assess the computational complexity of building a PRM-RL roadmap with known RL policy range $r_\pi$, optimal sampling density $\mu$, and workspace volume $V$.

Approaches exist to construct PRMs with $n$ nodes in $O(n \log n)$ edge tests, but they generally achieve this by imposing various limits, e.g., connecting only the $k$ nearest neighbors or making the radius of connection an inverse function of the number of nodes [31]. PRM-RL instead attempts to connect all nodes within a given radius, both to give the RL agent the highest chance of success and to satisfy user expectations on navigation behavior, making it more similar to Simplified PRM [33], which requires $O(n^2)$ edge calculations. However, for practical purposes most RL policies cannot connect points farther apart than some effective range $r_\pi$, which can be determined empirically (see [12]). This can be used to restrict connection attempts to a local neighborhood, reducing the edge calculations to $O(n \log n)$ if a spatial data structure is used for the nearest-neighbor lookup.
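A minimal sketch of such a neighborhood restriction, assuming a kd-tree (here SciPy's `cKDTree`) as the spatial data structure; this is our illustration of the idea, not the paper's implementation, and the workspace size, sample count, and range values are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_edges(samples, r_pi):
    """All pairs of sampled nodes within the policy's effective range r_pi,
    found with a kd-tree range query instead of testing all O(n^2) pairs."""
    tree = cKDTree(samples)
    return tree.query_pairs(r=r_pi)  # set of (i, j) index pairs with i < j

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 100.0, size=(2000, 2))  # toy 100 m x 100 m workspace
edges = candidate_edges(samples, r_pi=10.0)
print(f"{len(edges)} candidate edges instead of {2000 * 1999 // 2} pairwise tests")
```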

To ensure coverage, PRM-RL samples the configuration space to a given density rather than choosing a fixed number of points, so the number of points sampled is $n = \mu V$, where $V$ is the workspace volume and $\mu$ is the optimal sampling density. Next, the complexity of connecting a single node to the roadmap given $n_a$ connection attempts per candidate edge is $O(n_a\,\mu\,r_\pi^{d_w})$, because the search volume for a policy with effective range $r_\pi$ is $O(r_\pi^{d_w})$, where $d_w$ is the workspace dimensionality, and therefore the expected number of samples in that volume is $O(\mu\,r_\pi^{d_w})$. Therefore, the computational complexity of building a roadmap is

$$O\!\left(n_a\,\mu^2\,V\,r_\pi^{d_w}\right), \qquad (6)$$

dropping constant factors. Eq. (6) exposes the following relationships (a numeric sketch follows the list):

  • Complexity is linear in the workspace volume and in the number of rollouts required to certify the success threshold.

  • The roadmap building time is quadratic, $O(\mu^2)$, in the sampling density, matching the empirical observations in Fig. (b); it is therefore worthwhile to assess the optimal sampling density before investing in building the roadmaps.

  • The complexity is $O(r_\pi^{d_w})$ in the policy range, so sophisticated local planners that can reliably cover long distances increase the computational cost of the roadmap (unless a lower arbitrary connection limit is chosen).
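To make these relationships concrete, the sketch below plugs illustrative numbers into the reconstructed Eq. (6); the workspace volume, density, range, and rollout count here are assumptions for illustration only, and `roadmap_build_cost` is our own naming.

```python
def roadmap_build_cost(volume, density, r_pi, d_w, n_attempts):
    """Rough rollout count implied by Eq. (6): (density * volume) nodes, each
    attempting n_attempts rollouts to every expected neighbor within range,
    of which there are about density * r_pi**d_w."""
    n_nodes = density * volume
    neighbors_per_node = density * r_pi ** d_w
    return n_nodes * neighbors_per_node * n_attempts

base = roadmap_build_cost(volume=1000.0, density=0.5, r_pi=10.0, d_w=2, n_attempts=20)
denser = roadmap_build_cost(volume=1000.0, density=1.0, r_pi=10.0, d_w=2, n_attempts=20)
longer = roadmap_build_cost(volume=1000.0, density=0.5, r_pi=20.0, d_w=2, n_attempts=20)
# Doubling density quadruples the cost; doubling the policy range scales it by 2**d_w.
print(f"{base:.0f} rollouts; x{denser / base:.1f} at double density; "
      f"x{longer / base:.1f} at double range")
```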

Note that each node connection is an independent operation, so with a sufficient number of processors the roadmap building can be parallelized across nodes, up to the expected number of samples $\mu V$. Thus, given $n_p$ processors, the effective time complexity is $O\!\left(n_a\,\mu^2\,V\,r_\pi^{d_w} / n_p\right)$, possibly alleviating some of the cost of increased sampling.

Last, note that when using early termination, increasing the success threshold $p_s$ often, but not always, reduces the required number of connection attempts $n_a$. When the required threshold is low, early termination can remove only a fraction of the rollouts, but as $p_s$ increases, the number of failures needed to exclude an edge, $\lfloor (1 - p_s)\,n_a \rfloor + 1$, drops toward one. Conversely, if navigation is successful, then the full $n_a$ samples must be collected for an edge; the distribution of successes and failures thus has a large effect on the cost. One way to control this cost is to reduce the maximum attempted connection distance $d_{conn}$ to less than the effective policy navigation distance $r_\pi$; in this case the agent is more often expected to succeed, and $n_a$ can potentially be reduced. We have observed that these tradeoffs can significantly affect the cost of a run, but they must be studied empirically for the environments of interest.
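A minimal sketch of edge evaluation with early termination as described above; `rollout`, `n_a = 20`, and the 0.9 success rate of the stand-in rollout are illustrative assumptions, not the paper's values.

```python
import random

def evaluate_edge(rollout, n_a, p_s):
    """Monte Carlo edge check with early termination: reject as soon as the
    observed failures make an overall success proportion of p_s unreachable."""
    allowed_failures = int((1.0 - p_s) * n_a)
    successes = failures = 0
    for _ in range(n_a):
        if rollout():
            successes += 1
        else:
            failures += 1
            if failures > allowed_failures:
                return False, successes + failures  # rejected early
    return successes >= p_s * n_a, n_a  # accepted edges pay for all n_a rollouts

random.seed(0)
noisy_rollout = lambda: random.random() < 0.9  # stand-in for a simulated rollout
accepted, rollouts_used = evaluate_edge(noisy_rollout, n_a=20, p_s=1.0)
print(accepted, rollouts_used)  # with p_s = 1.0, a single failed rollout rejects the edge
```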

VI-C PRM-RL Trajectory Execution

Because PRM-RL construction estimates the probability of success before adding an edge, we can estimate the expected probability of success of a long-range path that passes through several waypoints. Recall that to add an edge to the roadmap we collect $n_a$ Monte Carlo rollouts and require an observed proportion of successes of at least the threshold $p_s$. Given that the expected probability of success of a Bernoulli trial after observing $s$ successes out of $n_a$ samples is

$$\hat{p} = \frac{s + 1}{n_a + 2} \qquad (7)$$

[46], the expected probability of successful navigation over an edge for which all $n_a$ samples succeeded is $(n_a + 1)/(n_a + 2)$, and correspondingly lower for lower thresholds. Extrapolating over the sequence of edges in a PRM-RL path, the expected success rate is $\hat{p}^{\,n_w}$, where $n_w$ is the number of waypoints. In [21] we observed the average number of waypoints of PRM-RL paths over our three deployment maps; this estimate yields a noticeably higher probability of success for the 100% threshold than for the lower threshold. Therefore, for the lengths of paths we observe in our typical deployment environments, the 100% threshold improves PRM-RL's theoretical performance to the point that it is more likely to succeed than not, which is what we observe empirically.
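A small numeric sketch of this estimate; the rollout count, success counts, and waypoint count below are illustrative assumptions, not the figures reported in [21].

```python
def edge_success_estimate(s, n_a):
    """Expected edge success probability after s successes in n_a Bernoulli
    rollouts, (s + 1) / (n_a + 2), as in Eq. (7)."""
    return (s + 1) / (n_a + 2)

def path_success_estimate(s, n_a, n_w):
    """Expected success over a path of n_w edges, assuming each edge was
    added with the same observed success proportion."""
    return edge_success_estimate(s, n_a) ** n_w

# Illustrative numbers only: a fully successful edge check vs. a 90%-successful
# one, extrapolated over a 10-waypoint path.
n_a, n_w = 20, 10
for s in (20, 18):
    print(f"s={s}: edge ~{edge_success_estimate(s, n_a):.3f}, "
          f"path ~{path_success_estimate(s, n_a, n_w):.3f}")
```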

VII Conclusion

We presented PRM-RL, a hierarchical planning method for long-range navigation that combines sampling-based path planning with an RL agent as a local planner in very large environments. The core of the method is that roadmap nodes are connected only when the RL agent can connect them consistently in the face of noise and obstacles. This extension of [21] contributed roadmap construction and robot deployment algorithms, along with analyses of roadmap connectivity, computational complexity, and navigation performance. We evaluated the method on a differential-drive robot and a car model with inertia, using floormaps from five buildings, two noisy obstacle maps, and two physical testbeds. We showed that 1) the navigation quality and resilience to noise of the execution policy transfer directly to the hierarchical planner; 2) a 100% success threshold in roadmap construction yields both the highest-quality and most computationally efficient roadmaps; and 3) building roadmaps from the noisy SLAM maps that the robot uses at execution time virtually closes the sim2real gap, yielding simulation success rates very similar to those observed on robots. PRM-RL with SLAM maps embeds into the roadmap the information that the robot uses at execution time, providing a better estimate of on-robot performance. Failure modes include pathologies of the local policy, poorly positioned samples, and sparsely connected roadmaps. In future work, we will examine improved policies with more resistance to noise, better sampling techniques to position the samples strategically, and techniques for improving map coverage with better localization and obstacle maps.

[Training Hyperparameters] Both the actor and critic use the Adam optimizer, with separate learning rates for the actor and the critic. The actor uses DQDA gradient clipping, and the critic uses the Huber loss for temporal-difference errors. After an initial number of stabilization steps, we apply a soft target network update on every step. Training uses a fixed batch size and a replay buffer with a capacity in the millions of transitions; we let training run for millions of steps, save policies at regular intervals, and select the best policy over the course of the run.
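For readers who want the setup at a glance, the sketch below mirrors only the structure of this training configuration; every `None` stands for a concrete value from the original appendix that is not reproduced here, and the dictionary keys are our own naming, not the paper's code.

```python
# Placeholder values only: the appendix's concrete numbers are intentionally
# not reproduced here. This dict mirrors the structure described above
# (Adam for actor and critic, DQDA clipping, Huber TD loss, soft target
# updates, a large replay buffer, periodic policy checkpoints).
training_config = {
    "actor_optimizer": {"name": "Adam", "learning_rate": None},   # value omitted
    "critic_optimizer": {"name": "Adam", "learning_rate": None},  # value omitted
    "actor_gradient_clipping": "DQDA",
    "critic_loss": "Huber (temporal-difference errors)",
    "initial_stabilization_steps": None,  # value omitted
    "soft_target_update_rate": None,      # applied on every step after stabilization
    "batch_size": None,                   # value omitted
    "replay_buffer_capacity": None,       # on the order of millions of transitions
    "total_training_steps": None,         # on the order of millions of steps
    "checkpoint_interval_steps": None,    # best checkpoint selected over the run
}
```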

[Tabular results for PRM-RL success rate]

Env.       Navigation Success (%)
           AutoRL   PRM-SL   PRM-GAPF   ICRA18   PRM-RL
Bldg. 1       9.0       15         35       43       69
Bldg. 2       9.3       18         32       38       63
Bldg. 3       4.0        8         39       66       74
Avg.          7.4     13.7       35.3     49.0     68.7
Impr. %       825      402       94.3     40.1      N/A
TABLE IV: PRM-RL Success Rates vs. Baselines

[Table of Symbols]

Symbol            Units or Domain   Meaning
$\mathcal{C}$                       Configuration space
$\mathcal{C}_{free}$                Free portion of configuration space
$\mathcal{S}$                       State space of robot plus task state
$\mathcal{T}$                       Task space for navigation
$\mathcal{T}_{free}$                Free portion of the task space
$\mathcal{W}$                       Physical workspace of dimension $d_w$
$\mathcal{O}$                       Observation space
$\mathcal{A}$                       Action space
$D$                                 Task dynamics
$N$                                 Noise model
$R$                                 Reward model
$G$                                 True objective
$r$                                 Dense reward
$r_i$                               Named reward component
$\gamma$          $[0, 1)$          Discount
$\mathbb{1}$      $\{0, 1\}$        Indicator function
$P$                                 Task predicate
$o$                                 Sensor w/ dynamics & noise
$a$                                 Action w/ dynamics & noise
$w_i$                               Waypoint on path
$x_t$                               Point along trajectory
$x_s$                               Start state
$x_g$                               Goal state
$d_g$             meters            Goal success distance
$T_{max}$         steps             Max trajectory execution steps
$(v, \omega)$                       Diff drive action: linear, angular velocity
$\overline{x_i x_j}$                Line from $x_i$ to $x_j$ in graph
$n_a$                               Num. edge connection attempts
$s$                                 Num. observed connection successes
$p_s$                               Edge connection success threshold
$d_{conn}$        meters            Max attempted edge connection dist.
$r_\pi$           meters            Policy effective nav. distance
$\delta_\pi$                        Policy effective obst. shrinkage
$\mu$                               Sampling density per meter
$n$               points            Number of points to sample
$P_{success}$                       Probability of successful navigation
$n_w$                               Number of waypoints on a path
$V$                                 Volume of the workspace
$d_w$                               Workspace dimensionality
$n_p$                               Number of processors
$\sigma$                            Std. dev. of Gaussian noise

Acknowledgments

The authors thank Jeffrey Bingham, James Davidson, Brian Ichter, Ken Oslund, Peter Pastor, Oscar Ramirez, Lydia Tapia, Alex Toshev, and Vincent Vanhoucke for their helpful discussions and contributions to the PRM-RL project.

References