Long-range indoor robot navigation requires human-scale robots (Fig. 1a) to move safely over building-scale distances (Fig. 1b). To robustly navigate over long distances while adapting to new environments, we factor the problem into long-range path planning and end-to-end local control, while assuming mapping and localization are provided. Long-range path planning finds collision-free paths to distant goals not reachable by local control. End-to-end local control produces feasible controls to follow ideal paths while avoiding obstacles and compensating for noisy sensors and localization. We enable end-to-end local control to inform long-range path planning through sampling-based planning.
Sampling-based planners, such as Probabilistic Roadmaps (PRMs)  and Rapidly Exploring Random Trees (RRTs) [37, 39], plan efficiently by approximating the topology of the configuration space with a graph or tree constructed by sampling points in free space and connecting these points if there is a collision-free local path between them. Typically, these local paths are created by line-of-sight tests or an inexpensive local planner, which are then connected in a sequence to form the full collision-free path.
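As an illustration, a minimal PRM construction with a straight-line local planner can be sketched as follows; the `occupied` collision test and `sample_free` sampler are hypothetical stand-ins for a real obstacle map, not the paper's implementation:

```python
import math
import random

def line_of_sight(p, q, occupied, step=0.1):
    """Check a straight segment for collisions by dense sampling.
    `occupied` maps an (x, y) point to True if it is in collision."""
    dist = math.dist(p, q)
    n = max(1, int(dist / step))
    for i in range(n + 1):
        t = i / n
        pt = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        if occupied(pt):
            return False
    return True

def build_prm(sample_free, occupied, n_nodes=100, radius=2.0, seed=0):
    """Sample collision-free configurations and connect nearby pairs with a
    straight-line local planner, returning nodes and an adjacency map."""
    random.seed(seed)
    nodes = [sample_free() for _ in range(n_nodes)]
    edges = {i: [] for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if math.dist(nodes[i], nodes[j]) <= radius and \
               line_of_sight(nodes[i], nodes[j], occupied):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges
```

A full planner would then answer queries with a graph search over `edges`; PRM-RL replaces the `line_of_sight` test with rollouts of a learned policy.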
Regardless of how a path is generated, executing it requires handling sensor noise, unmodeled dynamics, and environment changes. Recently, reinforcement learning (RL) agents have solved complex robot control problems, generated trajectories under task constraints, demonstrated robustness to noise, and learned complex skills, making them good choices for dealing with task constraints. Many simple navigation tasks require only low-dimensional sensors and controls, like lidar and differential drive, and can be solved with easily trainable networks [63, 25, 7]. However, as we increase the complexity of the problem by requiring longer episodes or providing only sparse rewards, RL agents become more difficult to train, and RL does not always transfer well to new environments.
Long-range navigation presents all of these challenges. Rewards are sparse and episodes are long, making long-range agents hard to train. On complex maps, short-range agents are vulnerable to local minima such as wide barriers and narrow passages. Even within deployment categories, environments vary widely: the SunCG dataset has 45,000 distinct house-like environments, and the US alone has over 5.6 million office buildings.
We present PRM-RL, an approach to long-range navigation that combines PRMs and RL so that each compensates for the other's shortfalls. In PRM-RL, an RL agent learns a local point-to-point task, incorporating system dynamics and sensor noise independent of long-range environment structure. The agent's learned behavior then influences roadmap construction: PRM-RL builds a roadmap by connecting two configuration points only if the agent consistently navigates between them collision-free, thereby capturing the long-range environment structure. PRM-RL roadmaps perform better than roadmaps based on pure connectivity because they respect robot dynamics. RL agents perform better with roadmap guidance, avoiding local minima even in environments they were not trained on. PRM-RL thus combines PRM efficiency with RL robustness, creating a long-range navigation planner that not only avoids local-minima traps, but is also resilient to environmental changes and transfers well to new environments, as shown in Figure 2, where a policy trained on a small training environment scales to a four-building complex almost two hundred times larger in map area.
This paper contributes PRM-RL as a hierarchical kinodynamic planner for navigation in large environments for robots with noisy sensors. This paper is an extension of the work that contributed the original PRM-RL method. Here, we investigate PRM-RL in the navigation context and make the following contributions beyond the original paper: 1) Algorithm 2 for PRM-RL roadmap building; 2) Algorithm 3 for robot deployment; 3) the application of PRM-RL to kinodynamic planning on a car model with inertia; and 4) an in-depth analysis of PRM-RL, including: 4.1) the correlation between the quality of the local planner and of the overall hierarchical planner; 4.2) the impact of improving planners and changing parameters on PRM-RL's computational time complexity; and 4.3) the impact of a robust local planner on the effective topology of the configuration space. All the evaluations and experiments are new and original to this paper. We evaluate PRM-RL with a more effective local planner, compare it in simulation against several baselines in seven different buildings, and deploy it to two physical robot testbeds.
Overall, we show improved performance, better roadmap generation, and easier on-robot transfer, including a relative success rate increase of over , and over , while maintaining good performance despite increasing noise. We also show that adding edges only when agents can always navigate them makes roadmaps cheaper to build and improves navigation success; denser roadmaps also have higher simulated success rates, but at substantial roadmap construction cost. Floorplans are not always available or up to date, but we show that roadmaps built from SLAM maps close the simulation-to-reality gap by producing planners that perform in simulation virtually as well as they do on the robot. SLAM-based PRM-RL enables real robot deployments with hundred-meter collision-free trajectories at two different sites on two different robots, with success rates as high as . We also show that PRM-RL functions well on robots with dynamic constraints, with a success rate of in simulation.
While this paper focuses on navigation, we believe that the analysis and empirical findings will be of interest to the wider motion planning research community for two reasons. First, PRM-RL as presented here is an example of a hierarchical motion planner that can incorporate models of sensor, localization, and control uncertainties into the roadmap construction phase, resulting in a planner that performs as well in simulation as it does on robots. Second, we present a comprehensive analysis of the trade-offs in performance and computational complexity, and of the interplay between the local and global planners, that is not specific to navigation.
II Related Work
II-1 Probabilistic roadmaps
PRMs have been used in a wide variety of planning problems, from robotics [26, 45] to molecular folding [3, 53, 60]. They have also been integrated with reinforcement learning for state space dimensionality reduction [43, 53] by using PRM nodes as the state space for the reinforcement learning agent. In contrast, our work applies reinforcement learning to the full state space as a local planner for PRMs. In prior work on an aerial cargo delivery task, we trained RL agents to track paths generated from PRMs constructed using a straight-line local planner. PRMs have also been modified to work with moving obstacles [27, 55], noisy sensors, and localization errors [2, 4]. Safety PRM uses probabilistic collision checking with a straight-line planner, associating with all nodes and edges a measure of potential collision. All of those methods address one source of error at a time. In contrast, PRM-RL uses an RL-based local planner capable of avoiding obstacles and handling noisy sensors and dynamics, and at the node-connection phase the RL local planner performs Monte Carlo path rollouts with deterministic collision checking. We only add edges if the path can be consistently navigated within a tunable success threshold.
PRMs are easy to parallelize, whether through parallel edge connections, sampling, or building sub-regional roadmaps in parallel. To speed up the building of our large-scale roadmaps, we use a similar approach across different computers in a cluster. Individual Monte Carlo rollouts for edge connection can be parallelized across multiple processors or run sequentially to allow for early termination.
II-2 Reinforcement learning in motion planning
Reinforcement learning has recently gained popularity in solving motion planning problems for systems with unknown dynamics, and has enabled robots to learn tasks that were previously difficult or impossible [1, 10, 40]. For example, Deep Deterministic Policy Gradient (DDPG) is a current state-of-the-art algorithm that works with very high-dimensional state and action spaces and is able to learn to control robots from unprocessed sensor observations.
Deep RL has been successfully applied to the robot navigation problem, including visual navigation with simplified navigation controllers [7, 14, 25, 50, 56, 64], more realistic controllers in game-like environments [6, 13, 48], and extracting navigation features from realistic environments [10, 23]. In a local planner setting similar to ours, a differential drive robot with 1-D lidar sensing, several approaches have emerged recently using asynchronous DDPG, expert demonstrations, DDPG, curriculum learning, and AutoRL. While any of these obstacle-avoidance agents can be used as a local planner in PRM-RL, we choose AutoRL for its simplicity of training, because AutoRL automates the search for the reward and the neural network architecture.
II-3 Hierarchical planners with reinforcement learning
Several recent works feature hierarchical planners combined with reinforcement learning, over a grid or over manually selected waypoints. In these works, the roadgraph points are connected with respect to a straight-line planner, and the reinforcement learning agent is used as an execution policy at run-time. We use the obstacle-avoidance RL policy both as the execution policy and as the local planner for connecting edges in the roadgraph. This approach results in roadmap connectivity that is tuned to the abilities of the particular robot.
III Problem Statement
PRM-RL addresses the general problem of generating a valid executable path for a robotic system given task constraints. Given the collision-free portion of the configuration space and, optionally, a task predicate that must be satisfied, a valid point is a point that satisfies the task predicate. Define a path as a sequence of waypoints; we assume time is discrete for the purposes of exposition, but this is not strictly required. A valid path consists of only valid waypoints.
A robot’s observation space is generally not identical to , but is produced by a sensor process that can be modeled as a combination of inherent sensor dynamics and a source of noise: . Similarly, actions in the robot’s action space have a state-dependent effect which also can be modeled as a combination of inherent robot dynamics and a source of noise: .
Given a valid start configuration and a policy, an executed trajectory is the sequence of configuration states that results from drawing actions from the policy and its noise processes. A trajectory is considered a failure if it produces an invalid point by exiting or failing to satisfy , or if it exceeds a task-specific limit . Given a valid observable goal configuration, a non-failed trajectory is considered a success if it reaches a point sufficiently close to the goal with respect to a task-dependent threshold , at which point the task is considered complete.
With respect to a given path-interpreting policy, a valid executable path is one which the policy can reliably execute to achieve task completion, guiding the agent from the start state to within of the goal state within time steps, all along an executed trajectory which remains within and satisfies the task predicate. Because noise makes execution stochastic, a path is defined to be reliable if the policy's probability of task completion using it exceeds a task-dependent success threshold .
In other words, PRM-RL is a tool for generating paths which an RL agent can reliably execute without violating the constraints of its task. In earlier work, we demonstrated PRM-RL's success on tasks with constraints; in this work, we focus solely on the navigation task, which collapses the task predicate to remaining within , and collapses the full configuration space available to the robot to a task space limited to the robot's position and orientation.
The PRM-RL method has three stages: training an environment-independent local planner policy with RL, creating a roadmap specific to that local planner and an environment, and deploying that roadmap and policy to the environment for querying, trajectory generation and execution. Fig. 4 illustrates the method.
First, in the training stage (Fig. 4a), to enable a robot to perform a specific task, we train a local planner policy with RL. For indoor navigation, that task is short-range point-to-point navigation, end-to-end from sensors to actions. This task is independent of the environment in which the robot will eventually operate. The RL agent learns to perform the task in an environment comparable to the deployment environment, but with a small state space to make training tractable. This is a Monte Carlo simulation process: we train multiple policies and select the fittest one for the next stage of PRM-RL, regardless of the learning algorithm used.
Next, in the creation phase (Fig. 4b, upper), to prepare a robot to work in a specific environment, we use this best policy as a local planner to build a PRM with respect to the target site. Obstacle maps, such as floor plans or SLAM maps, can be used for any robot we wish to deploy in a building, as long as the policy transfers well to the real world. This is a one-time setup per robot and environment. Points are sampled from free space as in a standard PRM, but PRM-RL adds an edge to the roadmap only when the agent can navigate it in simulation with greater probability than over trials. Rather than being determined by the geometry of free space, the resulting roadmap is tuned to the capabilities of the particular robot, so different robots over the same floor plan may generate different roadmaps with different connectivity.
Finally, in the deployment phase (Fig. 4b, lower), to perform the task in the environment, the constructed roadmap is queried to generate trajectories, which are executed by the same RL agent used to generate the roadmap. Querying a roadmap can proceed as in a standard PRM; alternatively, during construction we can record additional data about executed trajectories in the roadmap to enable other trajectory metrics, such as minimal energy consumption or shortest time, which are not available with geometry-only approaches. At execution time, the RL agent navigates to the first waypoint. Once the agent is close enough to the waypoint, the next waypoint becomes its new goal; the process repeats until the agent has traversed the whole trajectory.
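This waypoint-following loop can be sketched as follows; `policy` and `get_pose` are hypothetical hooks into the RL agent and the localization system, not the paper's interfaces:

```python
import math

def execute_trajectory(policy, get_pose, waypoints, goal_radius=0.5,
                       max_steps=1000):
    """Follow roadmap waypoints with a local policy: each waypoint is the
    agent's sub-goal; once the robot is close enough, the next one takes over."""
    idx, steps = 0, 0
    while idx < len(waypoints) and steps < max_steps:
        pose = get_pose()
        if math.dist(pose, waypoints[idx]) <= goal_radius:
            idx += 1                      # waypoint cleared; next sub-goal
            continue
        policy(pose, waypoints[idx])      # one control step toward sub-goal
        steps += 1
    return idx == len(waypoints)          # True if the destination was reached
```

The `max_steps` cap mirrors the task-specific step limit used to declare trajectory failure.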
IV-A RL Agent Training
To maintain independence from environments, we train the RL agent to perform its task without knowledge of the workspace topology: the transfer function that leads the system to task completion is conditioned only on what the robot can observe and what it has been commanded to do. Formally, we learn policies to control an agent that we model as a partially observable Markov decision process (POMDP) represented as a tuple of observations, actions, dynamics, noise, reward, and discount. Observations, actions, dynamics, and noise are determined by the characteristics of the robot and are continuous and multi-dimensional. Reward and discount are determined by the requirements of the task: the discount is a scalar factor, whereas the reward has a more complicated structure, including a true scalar objective representing the task and a weighted dense scalar reward based on observable features. We assume the presence of a simplified black-box simulator without knowing the full non-linear system dynamics.
PRMs require a local planner to connect points; in its basic form, PRM-RL's local planner must navigate a robot from a start position to a goal position without collision, which we call the point-to-point (P2P) task. The task space for navigation is the robot's position and orientation. The navigable portion of the task space is typically many times the robot's radius; for the spaces we consider, meter-scale robots operate in hundred-meter buildings.
IV-A1 P2P for differential drive robots
Our primary focus in this paper is on approximately cylindrical differential drive robots with lidar, represented by a unicycle or Type (2,0) model with configuration space . The observations are pairs of 1-D lidar vectors and goal sets observed over steps: . The actions are a 2-dimensional continuous action vector that encodes the robot's linear and angular velocity, . The dynamics and noise are implicit in the real world but are encoded separately in simulation as system dynamics plus an added noise model.
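As a sketch, one kinematic integration step of such a unicycle model (a simple Euler update with our own illustrative parameterization, not the paper's simulator) could look like:

```python
import math

def unicycle_step(x, y, theta, v, w, dt=0.1):
    """One Euler step of a Type (2,0) unicycle model: the action is the
    pair (linear velocity v, angular velocity w)."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

A kinematic step like this is cheap to evaluate, which is what makes kinematic simulation fast relative to full-physics simulation.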
The true objective of the P2P agent is to maximize the probability of reaching the goal without collisions,
where is an indicator function, is the goal pose, and is the goal size; the zero-collision property is enforced by terminating episodes on collisions. The goal observation is the relative goal position in polar coordinates, which is readily available from localization. The reward for P2P for differential drive robots is:
where is 1 when the agent reaches the goal and 0 otherwise, is the negative Euclidean distance to the goal, is 1 when the agent collides with obstacles and 0 otherwise, is the distance to the closest obstacle, is a constant penalty step with value 1, and is the negative angular speed. We train this model with AutoRL over the DDPG algorithm. AutoRL automates hyperparameter search in reinforcement learning using an evolutionary approach. It takes as inputs a true objective used to evaluate the agent, a parameterized dense reward that the agent uses to train itself, and optionally neural network architecture and algorithm hyperparameters. To train an agent, AutoRL typically optimizes these hyperparameters in phases. First, given an arbitrary or hand-tuned architecture, it trains several populations of RL agents with different reward parameters and optimizes over the true objective. Optionally, a second phase repeats the process with the dense reward fixed while searching over neural network architectures instead.
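A hypothetical sketch of such a weighted dense reward follows; the weight vector `theta` stands in for the reward hyperparameters AutoRL would tune, and the function signature and term names are our own illustration, not the paper's code:

```python
import math

def p2p_reward(theta, pose, goal, collided, clearance, angular_speed, reached):
    """Weighted dense P2P reward: a linear combination of the terms named in
    the text, with `theta` playing the role of the learned reward weights."""
    terms = (
        1.0 if reached else 0.0,        # 1 when the goal is reached, else 0
        -math.dist(pose, goal),         # negative Euclidean distance to goal
        1.0 if collided else 0.0,       # 1 on collision, else 0
        clearance,                      # distance to the closest obstacle
        1.0,                            # constant per-step penalty term
        -abs(angular_speed),            # negative angular speed
    )
    return sum(w * t for w, t in zip(theta, terms))
```

AutoRL's first phase can then be read as a population-based search over `theta` scored by the true objective rather than by this dense proxy.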
IV-A2 P2P for car-like robots
To confirm that PRM-RL generalizes to other classes of agents, we also model car-like robots, represented by a kinematic single-track model with a configuration space that is a subset of restricted to the turning range of the front wheels. The true objective of P2P does not change for the car, but because the car's turn radius is limited and the car must perform more complex maneuvers, we choose a slightly different reward model:
where all values are the same as for differential drive, except that rewards the delta change of Euclidean distance to the goal, is the negative of backwards speed (and 0 for forward speed), and and are dropped. We train this model with hyperparameter tuning using Vizier over the DDPG algorithm, in a different training regime in which the car model is allowed to collide up to 10 times during training but is still evaluated on the true objective of zero collisions.
IV-B PRM Construction
The basic PRM method works by sampling robot configurations in the robot's configuration space, retaining only collision-free samples as nodes in the roadmap. PRMs then attempt to connect the samples to their nearest neighbors using a local planner. If an obstacle-free path between nodes exists, an edge is added to the roadmap.
We modify the basic PRM by changing the way nodes are connected. Formally, we represent PRMs with graphs modeled as a tuple of nodes and edges. Nodes are always in free space, , and edges always connect valid nodes , but we do not require that the line of sight between those nodes is in , allowing edges that “go around” corners. Since we are primarily interested in robustness to noise and adherence to the task, we only connect configurations if the RL agent can consistently perform the point-to-point task between two points.
The robot's state space is a superset of the configuration space because it represents not just the configuration of the robot but also its possible observations and other task-dependent state. Therefore, we sample multiple variations around the start and goal configuration points, and add an edge only if the success rate exceeds a threshold. Note that this means PRM-RL trajectories are not guaranteed to be collision-free paths; this is not a limitation of the method, but rather the unavoidable result of sensor noise. A standard PRM would not produce a collision-free trajectory for noisy or changing environments either. Nevertheless, when discussing the results, we estimate a lower bound on the probability of collision.
Algorithm 1 describes how PRM-RL adds edges to the PRMs. We sample multiple points from the state space corresponding to the start and goal in the configuration space, and attempt to connect the two points over trials. An attempt is successful only if the agent reaches sufficiently close to the goal point. To compute the total length of a trajectory, we sum the distances over all steps plus the remaining distance to the goal. The length we associate with the edge is the average distance of the successful attempts. The algorithm recommends adding the edge to the roadmap if the success probability is above the threshold .
The worst-case number of collision checks in Algorithm 1 is , because multiple attempts are required for each edge to determine . While each trial of checking the trajectory could be parallelized with processors, an alternative approach is to serialize trajectory checking within Algorithm 1, terminating early if the tests fail too many times. Mathematically, for a given success threshold and desired number of attempts, at least trials must succeed; therefore we can terminate as soon as the successes reach that count, or as soon as the failures exceed the complementary budget. This can provide substantial savings if the threshold is high, as shown in Section V-D.
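This early-termination rule can be sketched as follows, with `run_trial` a hypothetical Monte Carlo rollout of the local planner between the two sampled points:

```python
import math

def add_edge(run_trial, n_attempts=20, threshold=0.9):
    """Monte Carlo edge check with early termination: stop as soon as the
    required number of successes is reached, or as soon as the allowed
    number of failures (the complementary budget) is exhausted."""
    needed = math.ceil(threshold * n_attempts)   # successes required
    allowed_failures = n_attempts - needed
    successes = failures = 0
    while successes < needed and failures <= allowed_failures:
        if run_trial():
            successes += 1
        else:
            failures += 1
    return successes >= needed
```

With a threshold of 100%, a single failed rollout rejects the edge immediately, which is why high thresholds make roadmaps cheaper to build.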
We use a custom kinematic 3D simulator which provides drive models for agents and supports visual sensors such as cameras and lidars. The simulator also provides parameterized noise models for actions, observations, and robot localization, which we find improves model training and robustness. Because the simulation is kinematic, stepping is fast compared to full-physics simulations, speeding up both RL training and roadmap building.
For larger roadmaps, an effective strategy is to parallelize calls to Algorithm 1. Algorithm 2 describes the roadmap building procedure, in which an RL agent is trained once and used in several environments. While building a roadmap for each environment, we first sample the obstacle map to the given density and store all candidate edges. Two nodes that are within the RL policy's range are considered a candidate. Next, we partition all the candidate edges into subsets for distributed processing. Each candidate edge is considered, and it is added, along with its nodes, to the roadmap if the AddEdge Monte Carlo rollouts return a positive outcome.
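The candidate-edge generation and partitioning steps might look like the following sketch; the function names and the round-robin split are illustrative choices, not the paper's implementation:

```python
import math

def candidate_edges(nodes, policy_range):
    """Pairs of sampled nodes within the RL policy's range become
    candidate edges for the Monte Carlo connection checks."""
    cands = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= policy_range:
                cands.append((i, j))
    return cands

def partition(cands, n_workers):
    """Round-robin split so each worker checks its own subset of edges."""
    buckets = [[] for _ in range(n_workers)]
    for k, e in enumerate(cands):
        buckets[k % n_workers].append(e)
    return buckets
```

Each bucket can then be processed independently on a separate machine, with the accepted edges merged into one roadmap afterwards.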
Finally, Algorithm 3 describes the navigation procedure, which takes a start and a goal position. As in traditional PRMs, these are added to the roadmap if not already present. Then the roadmap is queried to produce a list of waypoints. If no waypoints are returned, the algorithm returns the start and goal as the path, giving the RL agent the opportunity to attempt the navigation on its own. During execution, the RL agent is given one waypoint at a time as a sub-goal, clearing them sequentially as it gets within the goal distance, until the final destination is reached or the step limit is exceeded.
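A sketch of such a query, including the fallback to the direct start-goal path, might be (a plain Dijkstra search over the roadmap; the `connect_radius` parameter is a hypothetical stand-in for the policy's connection range):

```python
import heapq
import math

def query_roadmap(nodes, edges, start, goal, connect_radius=2.0):
    """Query the roadmap for waypoints from start to goal. Start and goal
    are linked to nearby nodes; if no path exists, fall back to
    [start, goal] so the RL agent can attempt the navigation on its own."""
    pts = nodes + [start, goal]
    s, g = len(nodes), len(nodes) + 1
    adj = {i: list(n) for i, n in edges.items()}
    adj[s], adj[g] = [], []
    for i in range(len(nodes)):          # connect start/goal to close nodes
        for extra in (s, g):
            if math.dist(pts[i], pts[extra]) <= connect_radius:
                adj[extra].append(i)
                adj[i].append(extra)
    dist, prev = {s: 0.0}, {}            # Dijkstra over edge lengths
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == g:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in adj[u]:
            nd = d + math.dist(pts[u], pts[v])
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if g not in prev:
        return [start, goal]             # fallback: let the agent try directly
    path, u = [g], g
    while u != s:
        u = prev[u]
        path.append(u)
    return [pts[i] for i in reversed(path)]
```

Recording rollout statistics on the edges would let this search minimize other metrics, such as expected time, instead of geometric length.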
We evaluate PRM-RL's performance on both floormaps and SLAM maps, with respect to noise resilience, and with on-robot experiments. Section V-A describes the robot and training setup, evaluation environments, and baselines. Section V-B evaluates performance on the floormaps against the baselines. With performance established against the baselines, the following sections examine PRM-RL's characteristics in more depth. Section V-C assesses navigation robustness to noise, Section V-D studies the role of the success threshold and sampling density, and Section V-H examines the applicability of PRM-RL to robots with dynamic constraints. Since one of our goals is to assess PRM-RL for real-world navigation, the remainder of the results focuses on on-robot evaluation and sim2real transfer. Section V-E evaluates PRM-RL on noisy SLAM maps, and Section V-G analyzes sim2real transfer and on-robot experiments.
V-A1 Robot setup
We use two different robot kinematic models: differential drive and a simple car model. Both are controlled with linear and angular velocities commanded at Hz, receive goal observations from off-the-shelf localization, and are represented as circles with radius . The obstacle observations come from 1D lidar data (Fig. 5), with a FOV resampled to rays. Following prior work, we use an observation trace of the last 3 frames to provide a simple notion of time to the agent. We use differential drive robots like the Fetch for physical experiments.
Table I: Learned reward hyperparameters.

| Reward term | Min | Max | Diff-drive | Car |
| 1 when goal reached and 0 otherwise | 0.0 | 100.0 | 14.30 | 0.82 |
| Distance to goal | | | | |
| Delta of Euclidean distance to goal | 0.0 | 5.0 | N/A | 2.03 |
| 1 on collision and 0 otherwise | -100.0 | 0.0 | -31.75 | -1.80 |
V-A2 Obstacle-avoidance local planner training
We use the P2P agent from prior work for the differential drive robot, and train an agent with AutoRL over DDPG for the car robot. In both cases, the true objective is reaching the goal within , allowing the RL agent to cope with noisy sensors and dynamics. Table I lists the learned reward hyperparameters. The DDPG actor and critic are feed-forward fully-connected networks. Actor networks are three layers deep, while the critic consists of a two-layer observation network joined with the action network by two fully-connected layers, resulting in actor layer widths of 50, 20, and 10, and critic widths of 10 and 10. Appendix VII contains the training hyperparameter details.
The training environment is by (Fig. 3a). To simulate imperfect real-world sensing, the simulator adds Gaussian noise to its observations.
Table II: Evaluation environments.

| Environment | Map type | Size | Figure |
| Training | Floor map | by | Fig. 3a |
| Building 1 | Floor map | by | Fig. 3d |
| Building 2 | Floor map | by | Fig. 3c |
| Building 3 | Floor map | by | Fig. 3b |
| Building Complex | Floor map | by | Fig. 2 |
| Physical Testbed 1 | SLAM | by | Fig. 6 |
| Physical Testbed 2 | SLAM | by | N/A (private) |
V-A3 Evaluation environments
Table II lists our simulation environments, all derived from real-world buildings. Training and Buildings 1-3, depicted in Fig. 3, and the Building Complex (Fig. 2) are metric maps derived from real building floor plans. They are between 12 and 200 times larger than the training environment by area. Physical Testbed 1 (Fig. 6) and Physical Testbed 2 are derived from SLAM maps of robot deployment environments.
V-A4 Roadmap building
For simplicity we use uniform random sampling to build roadmaps. We connect PRMs with an effective threshold of over 20 attempts, with a maximum connection distance of based on the effective navigation radius of our P2P agents, except where stated otherwise.
Table III: Baseline methods.

| Label | Local planner | Execution policy | Obstacle avoid | Stochastic | Monte Carlo | Description |
| AutoRL | N/A | AutoRL | N/A | Yes | No | The AutoRL policy without guidance by a PRM. |
| PRM-SL | Straight Line | Straight Line | No | No | No | Straight-line PRMs with a straight-line execution policy. |
| PRM-GAPF | Straight Line | Guided APF | No | Yes | No | Straight-line PRMs executed by guided APF. |
| ICRA18 | DDPG | DDPG | Yes | Yes | Yes | PRM-RL with DDPG for roadmap and execution. |
| PRM-RL | AutoRL | AutoRL | Yes | Yes | Yes | PRM-RL with AutoRL for roadmap and execution (ours). |
We select four baselines, shown in Table III. The baselines differ in the local planner used for building the roadmap and in the execution policy that guides the robot. The baseline selection criteria were the obstacle-avoidance ability of a policy and its stochasticity. Recall that PRM-RL uses a stochastic policy capable of obstacle avoidance for connecting nodes in the roadmap, and relies on Monte Carlo roll-outs of that policy.
The selected baselines are AutoRL, PRM-SL, PRM-GAPF, and our original PRM-RL with a hand-tuned RL policy. The AutoRL baseline policy is not guided by a PRM. PRM-SL uses roadmaps built with a straight-line planner and a straight-line execution policy. PRM-GAPF uses PRMs built with a straight-line local planner and an execution policy guided by an artificial potential field over the PRM-SL path. ICRA18 is PRM-RL with hand-tuned DDPG as the planner. Where not otherwise specified, PRM-RL refers to our current approach. We do not compare PRM-RL with RRTs because they are one-time planners and are relatively expensive to build on the fly for long-range trajectories.
For comparisons to baselines, unless otherwise stated, each roadmap is evaluated on 100 queries with start and goal positions separated by distances from to .
V-B PRM-RL Performance on Floorplan Maps
Figure 7 shows the success rate and path length comparison over 10,000 queries. PRM-RL's average success rate is over all three maps, outperforming the baselines by factors of for pure AutoRL, for PRM-SL, for PRM-GAPF, and for ICRA18. Numerical results are in Table IV in Appendix VII. Outperforming the non-guided AutoRL policy is not surprising, as it has no knowledge of the obstacle map; note that AutoRL's performance degrades rapidly on maps other than the training environment, while PRM-guided policies show less degradation and occasional improvement.
We can draw two observations from these results. First, they provide evidence that PRM-RL successfully enables a local planner trained on one environment to perform well in environments it was not trained on, and furthermore can enable it to perform better on the original training environment than it does on its own. This is not surprising, as our local planners were not designed to travel long distances, but guiding the local policy via the PRM can more than double its success rate. Second, the PRM success rate is tightly correlated with the success of the local planner. The P2P policy used in PRM-RL outperforms the one used in ICRA18 by 31% and APF by 48%, as reported in prior work. Extended to PRMs, the performance lifts are 19.9% and 33%. This is strong evidence that investing in a high-fidelity local planner increases PRM-RL's performance on the overall navigation task.
Fig. 8: PRM-RL shows more robustness to noise than PRM-GAPF. PRM-RL performance is in blue and PRM-GAPF in green; individual maps are marked with T, 1, 2, 3, and S for Training, Buildings 1-3, and the SLAM map. The thick lines mark the means and their light borders the standard deviations. (a) PRM-RL degrades smoothly as lidar noise rises, with typically no more than a drop, while PRM-GAPF degrades quickly as lidar noise rises, with up to a drop. (b) PRM-RL also shows smooth degradation with position noise (modeled as goal uncertainty); PRM-GAPF degrades up to with position noise (modeled as localization uncertainty). (c) Action noise causes the sharpest degradation, up to , on PRM-RL; action noise has less effect on PRM-GAPF, up to of peak, but PRM-GAPF is still worse than PRM-RL except on the SLAM map at high noise levels.
V-C PRM-RL Robustness to Noise
Real robot sensors and actuators are not perfect, so it is important for navigation methods to be resilient to noise. Fig. 8 shows the evaluation of PRM-RL and PRM-GAPF over simulated Gaussian noise sources with mean and in the following ranges:
Lidar sensor noise from -, over three times the radius of the robot.
Goal position noise from -, over twice the radius of the goal target.
Action noise of velocity - and angular velocity .
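A sketch of how such noise might be injected in simulation follows; the parameter names are illustrative, and the default magnitudes are placeholders rather than the paper's swept values:

```python
import random

def add_noise(lidar, goal, action, sigma_lidar=0.1, sigma_goal=0.1,
              sigma_v=0.05, sigma_w=0.05, rng=random):
    """Corrupt a lidar scan, goal observation, and (v, w) action with
    zero-mean Gaussian noise, mimicking the simulated noise sweeps above."""
    noisy_lidar = [r + rng.gauss(0.0, sigma_lidar) for r in lidar]
    noisy_goal = tuple(c + rng.gauss(0.0, sigma_goal) for c in goal)
    v, w = action
    noisy_action = (v + rng.gauss(0.0, sigma_v), w + rng.gauss(0.0, sigma_w))
    return noisy_lidar, noisy_goal, noisy_action
```

Sweeping each sigma independently while holding the others at zero reproduces the per-source robustness curves described in this section.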
As lidar and goal position noise increase, PRM-RL shows only a slight degradation, typically no more than an drop in success even at . Action noise shows a steeper degradation, from to depending on the environment. In contrast, PRM-GAPF degrades steeply with respect to lidar noise, achieving success rates of and dropping from the peak score. Localization noise for PRM-GAPF, while not directly comparable to the goal noise collected for PRM-RL, also drops success scores to . While PRM-GAPF seems somewhat more resistant to increased action noise, dropping only from its highest score, it nonetheless never achieves more than success rate and is almost always worse than PRM-RL.
PRM-RL is resilient to lidar, localization, and actuator noise on the order of tens of centimeters, which is larger than the typical lidar, localization, and actuator errors we observe on our robot platforms, indicating that PRM-RL is a feasible approach for the kinds of errors our robots actually encounter. This matches the noise-sensitivity trend reported in , and suggests that the overall navigation resilience to noise is correlated with that of the local planner.
V-D The Impact of Sampling Density and Success Thresholds
To deploy PRM-RL on real robots, we need to choose sampling density and success threshold parameters that provide the best performance.
Figure (a)a shows that the PRM-RL success rate increases steadily up to a sampling density of samples per meter, roughly twice the size of our robot, and then levels off. The trend appears to be robot-specific rather than environment-specific. At the same time, collision checks increase rapidly with sampling density (Fig. (b)b). While traditional PRM theory predicts that performance should continue to improve with increased sampling [33, 28], these results raise the possibility that beyond this density PRM-RL performance is robot-noise-bound, and that further sampling provides little benefit at rapidly increasing cost. For that reason, we suggest finding the optimal sampling density in the small training environment, at a fraction of the cost, and using it when building larger roadmaps.
To determine the appropriate connection success threshold, we survey navigation success over our training environment and a SLAM map (Fig. 10). These results show that the agent's ability to navigate the map increases as the threshold increases (Fig. (a)a). Because our connection algorithm terminates earlier for higher thresholds when it detects failures, collision checks drop as the success threshold rises (Fig. (b)b). Consequently, for larger roadmaps, a success threshold of 100% not only produces the most reliable roadmaps but also requires fewer collision checks to build them.
For the evaluations in simulation reported earlier in this paper, we chose parameters that enable comparison with : a sampling density of samples per meter and an effective success connection threshold of . For the two roadmaps in Fig. 10, this results in average collision-check savings over .
However, if comparison to a baseline is not important, these results suggest choosing map densities as high as samples per meter with as high a success connection threshold as possible. For the physical deployments we discuss later, we chose a success connection threshold of , generated maps at and samples per meter, and empirically measured performance; as predicted by the simulation results, we observe better performance on denser maps.
V-E PRM-RL Performance on SLAM Maps
Floorplans are not always available or up-to-date, but many robots can readily generate SLAM maps and update them as buildings change. To assess the feasibility of using SLAM maps, we evaluate PRM-RL on SLAM maps generated from some of the same buildings as our floorplan roadmaps. Figure 6 illustrates a sample roadmap, corresponding to Physical Testbed 1, part of floor two of Building 1 in (d)d and the upper center section of the large-scale roadmap in Figure 2. This SLAM-derived roadmap has 195 nodes and 1488 edges with 2.1 million collision checks.
We compare PRM-RL’s performance with PRM-GAPF on this map. PRM-GAPF achieves a success rate of , whereas PRM-RL’s success rate is , a relative increase. Close inspection of the failure cases indicates that some of these connections are genuinely hard for our current policies due to features of this map. For example, the inset in Figure (b)b shows an area which appears navigable at first glance but which PRM-RL refuses to connect: closer inspection reveals pixels in the SLAM map corresponding to a pillar in reality, which humans can easily walk around but which is challenging for our robot due to the width of its base.
These results show that while the performance of PRM-RL with an AutoRL policy on SLAM roadmaps is not as good as its average on the floorplan roadmaps in our tests, it is better than the baselines on the floorplan maps, and is roughly equivalent to PRM-RL with the ICRA2018 policy on these floorplan maps. These results indicate PRM-RL performs well enough to merit tests on roadmaps intended for real robots at physical sites, which we discuss in the following two sections.
V-F Scaling PRM-RL to Large-Scale Maps
Our robot deployment sites are substantially larger than our simulated test maps, raising the question of how PRM-RL scales. For example, the SLAM map discussed in the previous section covers only part of one building within a 4-building complex; where the SLAM map is by , a map of the 4-building complex is by . To assess PRM-RL’s performance on large-scale maps, we built and tested roadmaps covering the complete footprint of both deployment sites.
Figure 2 depicts a large floorplan roadmap from the 4-building complex. This roadmap has 15.9 thousand samples and 1.4 million candidate edges prior to connection attempts, of which 689 thousand edges were confirmed at a success threshold. This roadmap took 4 days to build using 300 workers in a cluster and required 1.1 billion collision checks. PRM-RL successfully navigates this roadmap of the time evaluated over 1,000 random navigation attempts with a maximum path distance of (note our other experiments use , which generally will not cross the skybridge in this larger map; for reference, using our default evaluation settings, PRM-RL navigates this roadmap of the time).
For our other candidate robot deployment site, we use a large SLAM map, by . We constructed a roadmap with 2069 nodes and 53,800 edges, collected with 42 million collision checks at the higher success threshold of . PRM-RL successfully navigated this roadmap of the time, evaluated over 1,000 random navigation attempts; as on our smaller SLAM map, examination of the failure cases indicates that the more complex geometry recorded by SLAM proves problematic for our current policies.
These results indicate that PRM-RL’s simulated performance on roadmaps continues to surpass the average success threshold we observed previously in , making it worthwhile to test on real robots at the physical sites.
V-G Transfer of PRM-RL to Physical Robots
We empirically evaluate PRM-RL in two physical environments on two differential-drive robots. First, we evaluate PRM-RL with the AutoRL policy for a differential-drive robot in Physical Testbed 1 (Fig. (a)a). We collected 43 trajectories accumulating over of travel. PRM-RL exhibited an overall success rate of over 39 runs; the longest successful trajectory was meters. Fig. (c)c shows a sample of 10 of these runs. The video is available at https://youtu.be/xN-OWX5gKvQ.
Second, we evaluate PRM-RL in Physical Testbed 2. For a variant of the roadmap generated at success rate and density of samples per meter, we collected 31 trajectories over of travel, shown in Fig. (a)a; the longest successful trajectory was . We cannot directly compare this evaluation to our simulated runs because the e-stop policies designed to protect the robot do not match our simulation termination conditions; nevertheless, we recorded 14 successful runs out of 31, a success rate. We then tested a variant of the roadmap generated at success rate and a density of samples per meter over 49 runs and of travel, for an improved success rate, shown in Fig. (b)b; the longest successful trajectory was . These runs include a complete circuit in which PRM-RL successfully navigated a loop of waypoints over . Interestingly, this much more successful map showed qualitatively worse behavior on some paths, attempting to navigate through PRM waypoints blocked by temporary obstructions.
Figure 12 summarizes these results. Despite the more aggressive episode termination policies on-robot (near-collisions are treated as failures on-robot) and the more limited available search space, we nonetheless observe similar results: over several different roadmaps constructed at different densities and success criteria, PRM-RL built with noisy SLAM maps achieves success in simulation and success on-robot, closing the sim2real gap. This makes SLAM-based PRM-RL simulation results a good predictor of on-robot performance.
V-H PRM-RL with Dynamic Constraints
To demonstrate that PRM-RL could guide more complicated robots, we develop a drive model for a simulated F1/10 car  with dynamics following . Average success over the four maps in Figure 3 is with a standard deviation of ; average success in simulation on Physical Testbed 1 is . Figure (a)a illustrates a roadmap built with this model over our training map with samples per meter connected with a success rate; this roadmap has 32 nodes and 313 edges connected after 403 thousand edge checks. On this roadmap, PRM-RL exhibits an success rate, including cases where the car needs to make a complex maneuver to turn around (Fig. (b)b). These results are as good as the on-robot results for the dense map of our larger robot deployment, indicating that PRM-RL is a viable candidate for further testing on more complicated robot models.
In the previous sections, we empirically established correlations between the local planner’s competence and PRM-RL’s resilience to noise; we also explored the contributions of sampling density, success thresholds, and obstacle-map sources to overall navigation success. We concluded that 1) using an execution policy that is resilient to noise and avoids obstacles as a local planner improves the overall success of the hierarchical planner; 2) a success threshold of 100% improves overall navigation success; 3) the upper bound on navigation success depends not on sampling density but on policy performance and robot type; and 4) using realistic obstacle maps, such as SLAM maps, as a basis for building roadmaps yields simulation results closer to reality.
This section provides a deeper analysis of those empirical findings. Section VI-A analyzes the impact of the local planner’s obstacle avoidance and noise resilience on roadmap construction, Section VI-B examines the computational complexity of PRM-RL, and Section VI-C discusses causes of failure for trajectories over multiple waypoints.
VI-A PRM-RL Roadmap Connectivity
Unlike straight-line planners, RL agents can often go around corners and smaller obstacles; Figure 14 shows how this effectively transforms the configuration space to make obstacles smaller. While the agent never traverses these corner points, as they are not in , they nevertheless do not block the agent’s path, unlike the central portions of a larger obstacle, which might block or trap the control agent in a local minimum. If we model this phenomenon as an effective reduction in the radius of a circular obstacle with respect to a policy , and further model the connection region as a disc filled with randomly sampled obstacles of total area density , we can estimate an upper bound on connection success as the idealized case in which obstacles do not occlude one another and the chance of connection is just the complementary probability of encountering an obstacle over a region of space, ; this corresponds to the looser bound in the RL case. Combining these, a conservative estimate of the ratio of samples connected by PRM-RL to those connected by PRM-SL is:
This simplified model indicates that as obstacle density increases, it becomes harder to connect points, but as RL’s ability to cut corners increases, PRM-RL has an increasing advantage over PRM-SL in connecting these difficult roadmaps. Hence, in environments with dense obstacles, it makes sense to invest in execution policies that avoid obstacles well, and to use them as local planners. Alternately, it suggests that policies that can learn strategies for dealing with frequently occurring obstacle topologies, like box canyons and corners, are a fruitful area for future work.
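The qualitative effect of this corner-cutting model can be checked with a small Monte Carlo experiment. The obstacle layout, counts, and radii below are illustrative assumptions, not the analytical bound itself; `r_eff` stands in for the policy's effective radius reduction:

```python
import math
import random

def seg_point_dist(ax, ay, bx, by, px, py):
    """Distance from point (px, py) to the segment from (ax, ay) to (bx, by)."""
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    t = 0.0 if length2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def connect_prob(r_obs, r_eff, n_obs=8, trials=2000, seed=0):
    """Fraction of random obstacle layouts in which a straight path from
    (0, 0) to (1, 0) misses every obstacle, after shrinking each obstacle's
    radius by the policy's effective reduction r_eff."""
    rng = random.Random(seed)
    r = max(r_obs - r_eff, 0.0)
    successes = 0
    for _ in range(trials):
        centers = [(rng.uniform(0.0, 1.0), rng.uniform(-0.5, 0.5)) for _ in range(n_obs)]
        if all(seg_point_dist(0.0, 0.0, 1.0, 0.0, px, py) > r for px, py in centers):
            successes += 1
    return successes / trials
```

With the same seed, shrinking the obstacle radii can only increase the fraction of connectable trials, mirroring PRM-RL's growing advantage over PRM-SL as obstacle density rises.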
Conversely, PRM-RL does not connect nodes in a roadmap where the policy cannot navigate reliably. This is the key difference from standard PRMs, and is the cause of the upper limit on performance improvements as roadmaps increase in density: the roadmaps are policy-bound rather than sampling-bound. One might ask why local control policies cannot simply learn to “drive safe” under the guidance of an on-the-fly path planner such as A*. However, an analysis of how noise impacts policy behavior indicates that policies which do not memorize the environment may overestimate their ability to navigate, because the hazards they can observe locally may not accurately reflect the hazards of the global environment.
To see why, suppose a policy has learned to navigate safely in the presence of noise by maintaining a distance from walls. Modeling time as discrete and assuming the robot travels at a constant speed, so that on each time slice it moves a constant distance forward, let us further model sources of variability as transient Gaussian noise orthogonal to the robot’s travel with zero mean and standard deviation ; this results in a probability of collision per step of
(the cumulative distribution function of the Gaussian noise model evaluated at , expressed in terms of the complementary Gauss error function ). Figure (a)a shows that when the robot is traveling in a narrow corridor, it is twice as likely to collide as when hugging a single wall, even though it maintains the same distance from any given wall at all times. Over a path of length , a conservative lower bound on the chance of collision rises exponentially with the number of steps it takes to traverse the path,
causing the narrow-corridor case to become unsafe faster than the wall-hugging case, as shown in the blue region of Figure (b)b. This means that an RL policy that judges its safety based on locally observable features can overestimate the safety of a path in the region between where both PRM-RL and RL would succeed and where both would fail (Figure (c)c); the same is true of RL guided by PRM-SL based on clearance, such as in grid-based sampling  and Safety PRM . In this case, RL, or RL guided by PRM-SL, can make erroneous navigation choices, whereas PRM-RL simply does not add that link to the roadmap. While in theory an agent could be designed to cope with this specific issue, other scenarios present similar problems: no local prediction of collision probability can give a true guarantee of trajectory safety. And while this is an idealized analysis, our experience is that real-world environments can be even more pathological: at one site, curtains behind a glass wall induced a persistent 0.5 meter error in the robot’s perceived location, causing it to drive dangerously close to the wall for long distances despite the presence of a safety layer. PRM-RL provides an automated method to avoid adding such unreliable connections to the roadmap, resulting in roadmaps that are more sparsely connected but transfer more reliably on-robot.
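The per-step and per-path collision probabilities of this idealized model can be computed directly; the clearance, noise level, and step counts used below are illustrative values, not the paper's experimental settings:

```python
import math

def p_collide_step(d, sigma):
    """Per-step probability that zero-mean Gaussian lateral noise with
    standard deviation sigma exceeds the clearance d:
    0.5 * erfc(d / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc(d / (sigma * math.sqrt(2.0)))

def p_collide_path(d, sigma, n_steps, n_walls=1):
    """Conservative collision probability over n_steps; a narrow corridor
    (n_walls=2) doubles the per-step risk relative to hugging one wall."""
    p_step = min(1.0, n_walls * p_collide_step(d, sigma))
    return 1.0 - (1.0 - p_step) ** n_steps
```

Plotting `p_collide_path` against `n_steps` for `n_walls=1` versus `n_walls=2` reproduces the divergence between the wall-hugging and corridor cases.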
VI-B PRM-RL Computational Complexity
The computational complexity of connecting a node to a roadmap in PRM-RL is a constant factor higher than in PRM-SL, because point-connection tests in PRM-RL require a sequence of simulated Monte Carlo rollouts, whereas PRM-SL requires a single straight-line collision test. In addition, the empirical evaluations show that there is a sampling density beyond which the navigation success rate of PRM-RL does not improve. Here we assess the computational complexity of building a PRM-RL roadmap with known RL policy range , optimal sampling density , and workspace volume .
Approaches exist to construct PRMs with nodes in edge tests, but they generally achieve this by imposing various limits, e.g., connecting the k nearest neighbors or making the connection radius an inverse function of the number of nodes . PRM-RL instead attempts to connect all nodes within a given radius, both to give the RL agent the highest chance of success and to satisfy user expectations of navigation behavior, making it more similar to Simplified PRM , which requires edge calculations. However, for practical purposes most RL policies cannot connect points farther apart than some effective range, which can be determined empirically (see ). This range can be used to restrict connection attempts to a local neighborhood, reducing the edge calculations to if a spatial data structure is used for the nearest-neighbor lookup.
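The radius-restricted neighbor lookup can be sketched with a uniform grid hash; this is an illustrative 2D implementation under assumed data layouts, not our production roadmap builder:

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Hash 2D points into square cells of side `cell`."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def neighbors_within(points, grid, cell, idx, radius):
    """Indices of points within `radius` of points[idx], visiting only
    nearby cells instead of all n points."""
    x, y = points[idx]
    cx, cy = int(x // cell), int(y // cell)
    reach = int(math.ceil(radius / cell))
    found = []
    for gx in range(cx - reach, cx + reach + 1):
        for gy in range(cy - reach, cy + reach + 1):
            for j in grid.get((gx, gy), ()):
                if j != idx and math.hypot(points[j][0] - x, points[j][1] - y) <= radius:
                    found.append(j)
    return found
```

Setting `radius` to the policy's effective range limits candidate edges to a local neighborhood, which is what brings the edge calculations down from the all-pairs count.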
To ensure coverage, PRM-RL samples the configuration space to a given density rather than choosing a fixed number of points , so the number of points sampled is the workspace volume multiplied by the density, where is the workspace volume and is the optimal sampling density. Next, the complexity of connecting a single node to the roadmap given connection attempts is , because the search volume for a policy with effective range is , where is the workspace dimensionality, and therefore the number of expected samples in that volume is . Therefore, the computational complexity of building a roadmap is
dropping the term because it is a constant. Eq. (6) highlights the following sources of computational cost:
Complexity is linear in the workspace volume and the success threshold.
Roadmap building time is in sampling density, matching the empirical observations in Fig. (b)b, so it is worthwhile to assess the optimal sampling density before investing in building large roadmaps.
The complexity is in the policy range, so sophisticated local planners that can reliably cover long distances increase the computational cost of the roadmap (unless a lower arbitrary connection limit is chosen).
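These scaling relations can be packaged into a rough pre-build cost estimate. The constant factors (`attempts_per_edge`, `checks_per_attempt`) and the treatment of density as samples per unit volume are illustrative assumptions:

```python
import math

def roadmap_cost(volume, density, policy_range, dim=2,
                 attempts_per_edge=20, checks_per_attempt=100):
    """Rough estimate of total collision checks for a roadmap build:
    (samples) x (neighbors per sample) x (rollouts per edge) x (checks per rollout).
    Grows linearly in workspace volume and quadratically in sampling density."""
    n_samples = density * volume
    # Volume of a d-dimensional ball with radius equal to the policy range.
    ball = (math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)) * policy_range ** dim
    neighbors_per_sample = density * ball
    return n_samples * neighbors_per_sample * attempts_per_edge * checks_per_attempt
```

Doubling the density quadruples the estimate while doubling the volume only doubles it, matching the scaling noted above.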
Note that each node connection is an independent operation, so with a sufficient number of processors, roadmap building can be parallelized up to the number of expected samples. Thus, given processors, the effective time complexity is , possibly alleviating some of the cost of increased sampling.
Last, note that when using early termination, increasing the success threshold often, but not always, reduces the required number of connection attempts . In the worst-case scenario where we require , early termination can at best cut to , but as increases, the number of failures needed to exclude an edge, , drops toward 1. Conversely, if navigation is successful then the full samples must be collected for an edge; the distribution of successes and failures thus has a large effect on the cost. One way to control this cost is to reduce the maximum connection distance to less than the effective policy navigation distance ; in this case the agent is more often expected to succeed, and can potentially be reduced. We have observed that these tradeoffs can significantly affect the cost of a run, but they must be studied empirically for the environments of interest.
VI-C PRM-RL Trajectory Execution
Because PRM-RL construction estimates the probability of success before adding an edge, we can estimate the expected probability of success of a long-range path that passes through several waypoints. Recall that to add an edge to the roadmap we collect Monte Carlo rollouts and require an observed proportion of successes, typically of or . Given that the expected probability of success of a Bernoulli trial after observing successes out of samples is
, the actual probability of successful navigation over an edge with successful samples is , and similarly for thresholds of . Extrapolating over the sequence of edges in a PRM-RL path, the expected success rate is , where is the number of waypoints. In  we observe PRM-RL paths with waypoints on average over our three deployment maps, yielding an estimated probability of success of for the threshold and for the threshold. Therefore, for the path lengths we observe in our typical deployment environments, the threshold improves PRM-RL’s theoretical performance to the point that it is more likely to succeed than not, which is what we observe empirically.
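This estimate can be sketched as below, assuming the Laplace rule of succession (s+1)/(n+2) as the Bernoulli estimator; the exact estimator used in the paper (from the small-sample analysis it cites) may differ, and the sample counts here are illustrative:

```python
def edge_success_estimate(successes, samples):
    """Expected success probability of a Bernoulli edge after observing
    `successes` out of `samples` rollouts (Laplace rule of succession)."""
    return (successes + 1) / (samples + 2)

def path_success_estimate(successes, samples, n_waypoints):
    """Expected success over a path of n_waypoints independent edges,
    each built at the same observed success proportion."""
    return edge_success_estimate(successes, samples) ** n_waypoints
```

Because the per-edge estimate is raised to the number of waypoints, even a small increase in the edge threshold compounds into a large improvement over long paths.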
We presented PRM-RL, a hierarchical planning method for long-range navigation that combines sampling-based path planning with an RL agent as a local planner in very large environments. The core of the method is that roadmap nodes are connected only when the RL agent can connect them consistently in the face of noise and obstacles. This extension of  contributed roadmap construction and robot deployment algorithms, along with analyses of roadmap connectivity, computational complexity, and navigation performance. We evaluated the method on a differential-drive robot and a car model with inertia, on floorplans from five buildings, two noisy obstacle maps, and two physical testbeds. We showed that 1) the navigation quality and noise resilience of the execution policy transfer directly to the hierarchical planner; 2) a 100% success threshold in roadmap construction yields both the highest-quality and most computationally efficient roadmaps; and 3) building roadmaps from the noisy SLAM maps that the robot uses at execution time virtually closes the sim2real gap, yielding simulation success rates very similar to those observed on robots. PRM-RL with SLAM maps embeds into the roadmap the information that the robot uses at execution time, providing a better estimate of on-robot performance. Failure modes include pathologies of the local policy, poorly positioned samples, and sparsely connected roadmaps. In future work, we will examine improved policies with more resistance to noise, better sampling techniques that position samples strategically, and techniques for improving map coverage with better localization and obstacle maps.
[Training Hyperparameters] Both the actor and critic use the AdamOptimizer with , , ; the actor’s learning rate is and the critic’s is . The actor uses DQDA gradient clipping and the critic uses the Huber loss for temporal difference errors. Our number of initial stabilization steps is , after which we use a soft target network update of on every step. Our training batch size is and our replay buffer has a capacity of million. We let training run for million steps, but save policies every steps and select the best policy over the course of the run.
[Tabular results for PRM-RL success rate]
|Env.||Navigation Success (%)|
[Table of Symbols]
|Symbol||Units or Domain||Meaning|
|Configuration space of dimension|
|Free portion of configuration space|
|State space of robot plus task state|
|Task space for navigation|
|Free portion of the task space|
|Physical workspace of dimension|
|Observation space of dimension|
|Action space of dimension|
|Noise model or|
|Named reward component|
|Sensor w/ dyn. & noise|
|Action w/ dyn. & noise|
|Waypoint on path|
|Point along trajectory|
|meters||Goal success distance|
|Max trajectory execution steps|
|Diff drive action lin, ang vel.|
|Line from to in graph|
|Num. edge connection attempts|
|Num. observed connection successes|
|Edge connection success threshold|
|meters||Max attempted edge connection dist.|
|meters||Policy effective nav. distance|
|Policy effective obst. shrinkage|
|Sampling density per meter|
|points||Number of points to sample|
|Probability of successful navigation|
|Number of waypoints on a path|
|Volume of the workspace|
|Number of processors|
The authors thank Jeffrey Bingham, James Davidson, Brian Ichter, Ken Oslund, Peter Pastor, Oscar Ramirez, Lydia Tapia, Alex Toshev, and Vincent Vanhoucke for their helpful discussions and contributions to the PRM-RL project.
-  D. Abel, A. Agarwal, F. Diaz, A. Krishnamurthy, and R. E. Schapire. Exploratory gradient boosting for reinforcement learning in complex domains. arXiv preprint arXiv:1603.04119, 2016.
-  A. Agha-mohammadi, S. Chakravorty, and N. Amato. FIRM: Sampling-based feedback motion planning under motion uncertainty and imperfect measurements. Int. J. Robot. Res., 33(2):268–304, 2014.
-  I. Al-Bluwi, T. Siméon, and J. Cortés. Motion planning algorithms for molecular simulations: A survey. Computer Science Review, 6(4):125 – 143, 2012.
-  R. Alterovitz, T. Siméon, and K. Goldberg. The stochastic motion roadmap: A sampling framework for planning with markov motion uncertainty. In Robotics: Science and Systems, pages 246–253, Atlanta, GA, USA, June 2007.
-  N. M. Amato and L. K. Dale. Probabilistic roadmap methods are embarrassingly parallel. In Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), volume 1, pages 688–694 vol.1, May 1999.
-  S. Bhatti, A. Desmaison, O. Miksik, N. Nardelli, N. Siddharth, and P. H. Torr. Playing doom with slam-augmented deep reinforcement learning. arXiv preprint arXiv:1612.00380, 2016.
-  S. Brahmbhatt and J. Hays. Deepnav: Learning to navigate large cities. arXiv preprint arXiv:1701.09135, 2017.
-  J. Ichnowski and R. Alterovitz. Parallel sampling-based motion planning with superlinear speedup. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1206–1212, Oct 2012.
-  A Look at the U.S. Commercial Building Stock: Results from EIA’s 2012 Commercial Buildings Energy Consumption Survey (CBECS). https://www.eia.gov/consumption/commercial/reports/2012/buildstock/, 2012.
-  C. Chen, A. Seff, A. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pages 2722–2730, 2015.
-  H.-T. Chiang, N. Malone, K. Lesser, M. Oishi, and L. Tapia. Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pages 2347–2354, 2015.
-  L. Chiang, A. Faust, M. Fiser, and A. Francis. Learning navigation behaviors end-to-end with autorl. IEEE Robotics and Automation Letters (RA-L), 2019.
-  A. Dosovitskiy and V. Koltun. Learning to act by predicting the future. arXiv preprint arXiv:1611.01779, 2016.
-  Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. RL: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
-  C. Ekenna, S. A. Jacobs, S. Thomas, and N. M. Amato. Adaptive neighbor connection for prms: A natural fit for heterogeneous environments and parallelism. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1249–1256, Nov 2013.
-  F1/10 autonomous racing competition. http://f1tenth.org/about, 2018. Accessed: 2018-12-03.
-  T. Fan, X. Cheng, J. Pan, P. Long, W. Liu, R. Yang, and D. Manocha. Getting robots unfrozen and unlost in dense pedestrian crowds. IEEE Robotics and Automation Letters (RA-L), 2019.
-  A. Faust, H.-T. Chiang, and L. Tapia. PEARL: PrEference appraisal reinforcement learning for motion planning. arXiv:1811.12651.
-  A. Faust, N. Malone, and L. Tapia. Preference-balancing motion planning under stochastic disturbances. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pages 3555–3562, 2015.
-  A. Faust, I. Palunko, P. Cruz, R. Fierro, and L. Tapia. Automated aerial suspended cargo delivery through reinforcement learning. Artificial Intelligence, 247:381 – 398, 2017. Special Issue on AI and Robotics.
-  A. Faust, O. Ramirez, M. Fiser, K. Oslund, A. Francis, J. Davidson, and L. Tapia. PRM-RL: long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning. CoRR, abs/1710.03937, 2017.
-  D. Fox, W. Burgard, and S. Thrun. The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, 4(1):23–33, 1997.
-  A. Giusti, J. Guzzi, D. C. Cireşan, F.-L. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters (RA-L), 1(2):661–667, 2016.
-  D. Golovin, B. Solnik, S. Moitra, G. Kochanski, J. Karro, and D. Sculley. Google vizier: A service for black-box optimization. In Proc. of ACM International Conference on Knowledge Discovery and Data Mining, pages 1487–1495. ACM, 2017.
-  S. Gupta, J. Davidson, S. Levine, R. Sukthankar, and J. Malik. Cognitive mapping and planning for visual navigation. CoRR, abs/1702.03920, 2017.
-  K. Hauser, T. Bretl, J.-C. Latombe, and B. Wilcox. Motion planning for a six-legged lunar robot. In The Seventh International Workshop on the Algorithmic Foundations of Robotics, pages 16–18, 2006.
-  D. Hsu, R. Kindel, J.-C. Latombe, and S. M. Rock. Randomized kinodynamic motion planning with moving obstacles. Int. J. Robot. Res., 21(3):233–256, 2002.
-  D. Hsu, J.-C. Latombe, and H. Kurniawati. On the probabilistic foundations of probabilistic roadmap planning. In Robotics Research, pages 83–97. Springer, 2007.
-  Deep Reinforcement Learning Doesn’t Work Yet. https://www.alexirpan.com/2018/02/14/rl-hard.html, 2018.
-  L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of artificial intelligence research, 4:237–285, 1996.
-  S. Karaman and E. Frazzoli. Sampling-based algorithms for optimal motion planning. The international journal of robotics research, 30(7):846–894, 2011.
-  Y. Kato, K. Kamiyama, and K. Morioka. Autonomous robot navigation system with learning based on deep q-network and topological maps. In 2017 IEEE/SICE International Symposium on System Integration (SII), pages 1040–1046, Dec 2017.
-  L. E. Kavraki and J.-C. Latombe. Probabilistic roadmaps for robot path planning, 1998.
-  L. E. Kavraki, P. Švestka, J. C. Latombe, and M. H. Overmars. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans. Robot. Automat., 12(4):566–580, August 1996.
-  O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The international journal of robotics research, 5(1):90–98, 1986.
-  J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.
-  J. Kuffner and S. LaValle. RRT-connect: An efficient approach to single-query path planning. In IEEE International Conference on Robotics and Automation, volume 2, pages 995 –1001 vol.2, 2000.
-  S. M. LaValle. Planning Algorithms. Cambridge University Press, Cambridge, U.K., 2006. Available at http://planning.cs.uiuc.edu/.
-  S. M. Lavalle and J. J. Kuffner. Rapidly-exploring random trees: Progress and prospects. In Algorithmic and Computational Robotics: New Directions, pages 293–308, 2000.
-  S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
-  T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
-  P. Long, T. Fan, X. Liao, W. Liu, H. Zhang, and J. Pan. Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018, pages 6252–6259, 2018.
-  N. Malone, A. Faust, B. Rohrer, R. Lumia, J. Wood, and L. Tapia. Efficient motion-based task learning for a serial link manipulator. Transactions on Control and Mechanical Systems, 3(1), 2014.
-  N. Malone, K. Manavi, J. Wood, and L. Tapia. Construction and use of roadmaps that incorporate workspace modeling errors. In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), pages 1264–1271, Nov 2013.
-  N. Malone, B. Rohrer, L. Tapia, R. Lumia, and J. Wood. Implementation of an embodied general reinforcement learner on a serial link manipulator. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pages 862–869, May 2012.
-  N. D. Megill and M. Pavicic. Estimating bernoulli trial probability from a small sample. CoRR, abs/1105.1486, 2011.
-  M. Wise, M. Ferguson, D. King, E. Diehr, and D. Dymesich. Fetch & Freight: Standard platforms for service robot applications. In Workshop on Autonomous Mobile Service Robots held at the 2016 International Joint Conference on Artificial Intelligence, 2016.
-  P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
-  A. Mousavian, A. Toshev, M. Fiser, J. Kosecka, and J. Davidson. Visual representations for semantic target driven navigation. CoRR, abs/1805.06066, 2018.
-  K. Mülling, J. Kober, and J. Peters. A biomimetic approach to robot table tennis. Adaptive Behavior, 19(5):359–376, 2011.
-  B. Paden, M. Cáp, S. Z. Yong, D. S. Yershov, and E. Frazzoli. A survey of motion planning and control techniques for self-driving urban vehicles. CoRR, abs/1604.07446, 2016.
-  J.-J. Park, J.-H. Kim, and J.-B. Song. Path planning for a robot manipulator based on probabilistic roadmap and reinforcement learning. International Journal of Control, Automation, and Systems, pages 674–680, 2008.
-  M. Pfeiffer, M. Schaeuble, J. I. Nieto, R. Siegwart, and C. Cadena. From perception to decision: A data-driven approach to end-to-end motion planning for autonomous ground robots. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pages 1527–1533, 2017.
-  S. Rodríguez, J.-M. Lien, and N. M. Amato. A framework for planning motion in environments with moving obstacles. In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), pages 3309–3314, 2007.
-  A. Seff and J. Xiao. Learning from maps: Visual common sense for autonomous driving. arXiv preprint arXiv:1611.08583, 2016.
-  B. Siciliano and O. Khatib. Springer Handbook of Robotics. Springer, 2016.
-  S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser. Semantic scene completion from a single depth image. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  L. Tai, G. Paolo, and M. Liu. Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 31–36, Sept 2017.
-  L. Tapia, S. Thomas, and N. M. Amato. A motion planning approach to studying molecular motions. Communications in Information and Systems, 10(1):53–68, 2010.
-  A. Yahya, A. Li, M. Kalakrishnan, Y. Chebotar, and S. Levine. Collective robot reinforcement learning with distributed asynchronous guided policy search. CoRR, abs/1610.00673, 2016.
-  J. Zhang, J. T. Springenberg, J. Boedecker, and W. Burgard. Deep reinforcement learning with successor features for navigation across similar environments. In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), pages 2371–2378. IEEE, 2017.
-  Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In 2017 IEEE international conference on robotics and automation (ICRA), pages 3357–3364. IEEE, 2017.