Learned Critical Probabilistic Roadmaps for Robotic Motion Planning

10/08/2019, by Brian Ichter, et al.

Sampling-based motion planning techniques have emerged as an efficient algorithmic paradigm for solving complex motion planning problems. These approaches use a set of probing samples to construct an implicit graph representation of the robot's state space, allowing arbitrarily accurate representations as the number of samples increases to infinity. In practice, however, solution trajectories only rely on a few critical states, often defined by structure in the state space (e.g., doorways). In this work we propose a general method to identify these critical states via graph-theoretic techniques (betweenness centrality) and learn to predict criticality from only local environment features. These states are then leveraged more heavily via global connections within a hierarchical graph, termed Critical Probabilistic Roadmaps. Critical PRMs are demonstrated to achieve up to three orders of magnitude improvement over uniform sampling, while preserving the guarantees and complexity of sampling-based motion planning. A video is available at https://youtu.be/AYoD-pGd9ms.




I Introduction

Robot motion planning computes a collision-free, dynamically feasible, and low-cost trajectory from an initial state to a goal region [26]. Sampling-based motion planning (SBMP) approaches, such as probabilistic roadmaps (PRMs) [21] and rapidly-exploring random trees [25], efficiently solve complex planning problems by using a set of probing samples to construct an implicit graph representation of the robot's state space. To connect the initial state and goal region, PRMs search this roadmap graph and identify a sequence of states and local connections which the robot may traverse. Though these algorithms form arbitrarily accurate representations as the number of samples increases to infinity, in practice only a few critical states are necessary to parameterize solution trajectories. Often these critical states enjoy significant structure, e.g., entries to narrow passages, yet are only identified through exhaustive sampling [16]. Furthermore, when these states are identified [15, 16], they are traditionally treated with the same importance as less critical samples, e.g., samples in open regions.

We present a method that learns to recognize critical states and uses them to construct a hierarchical PRM. These critical states are quantified through betweenness centrality [10], a graph-theoretic measure of centrality based on a sample's importance to shortest paths through a graph, followed by a smoothing step that retains only the critical samples necessary for planning. Given this set of critical states, a neural network is trained to predict criticality from local environmental features; this local focus enables scaling to complex environments. Online, given a new, previously unseen planning problem, we construct a Critical PRM, which samples a small number of critical states and a large number of non-critical states. The non-critical samples are connected locally, preserving the asymptotic optimality and complexity of SBMP. The connection strategy for the critical states, however, is modified by connecting critical states to all samples, providing critical edges through the graph. The results in this work demonstrate the algorithm's generality and show a significant reduction in the computation time required to achieve the same success rate and cost as baselines.

Related Work. Since the early days of sampling-based motion planning [21], researchers have sought to develop improved sampling techniques that bias samples towards important regions of the state space [14, 5] or cover the state space more efficiently [6]. Several works have considered deterministic sequences for covering spaces evenly [6, 24, 18]. To bias samples, previous works have used heuristically driven approaches to sample more frequently near obstacles [5] or near narrow passages [14], or to adaptively combine several such approaches [15]. Others have used workspace decompositions to identify regions of interest [23, 32]. Another promising approach [11] adaptively samples only in regions that may improve the current solution. These methods are generally limited in their ability to sample the full state space or to account for local planner policies.

One recent approach has been to learn sample distributions to draw from directly in sampling-based motion planning [36, 16, 35, 7, 27]. [16] and [35] use offline solution trajectories to learn a distribution of samples and bias sampling towards regions where an optimal solution might lie. [7] learns a local library sampler to bias sampling towards regions likely to contain the solution. [27] learns to identify critical regions for sampling based on images of successful solutions and leverages these samples by growing trees from them. [22] proposes several heuristic approaches for identifying diverse bottleneck states to plan with sparse roadmaps. In this work, we identify critical regions via graph-theoretic techniques in the state space and incorporate these samples into a hierarchical Critical Roadmap, which allows the critical samples to connect throughout the state space. Furthermore, most previous work has focused on a single-query setting, where the initial and goal state can be heavily leveraged in sample selection and where complex, large environments can be difficult to learn from. Instead, Critical PRMs are targeted towards multi-query settings and only require local information to choose samples.

Finally, we note the connections between Critical PRM and other areas of machine learning. Within Reinforcement Learning (RL), the exploration problem is not unlike the motion planning problem we consider herein. In this context, critical states may be used to identify and subsequently reward useful subgoals, allowing more efficient exploration [34, 13]. These critical states can further be used to identify discrete skills to enable learned hierarchies [2, 30, 31]. Within planning and control, recent work has sought to learn low-dimensional representations of important regions of the state space [33, 3, 17].

Statement of Contributions. The contributions of this paper are threefold. First, we present a methodology for identifying, and subsequently learning to identify, critical states for the optimal motion planning problem. The criticality of a node is based on the graph-theoretic betweenness centrality, which allows states to be identified in problems with complex environments, state spaces, and local planners. The learned regressor can further identify criticality from only local features, allowing scaling to more complex environments while requiring less training data. Second, we present an algorithm for treating these critical states as more important via a hierarchical PRM, termed the Critical PRM. In particular, critical states are globally connected through the state space, alongside a set of locally connected uniform states. This allows critical states and their connected edges to serve as highways through the state space. Third, we demonstrate the Critical PRM algorithm on a number of motion planning problems and compare it to state-of-the-art baselines. Our findings show that Critical PRM can outperform other methods by up to three orders of magnitude in the computation time required to reach a given success rate, and by one order of magnitude in cost. Furthermore, we show that Critical PRM can scale to the complexity of real-world planning problems.

Organization. In Section II, we introduce the optimal motion planning problem approached herein. In Section III, we overview the process of learning to identify critical samples. In Section IV, we describe the Critical PRM algorithm. In Section V, we demonstrate the generality and speed improvements of Critical PRMs as well as show the performance on a real robot. Finally, in Section VI, we overview conclusions and outline future directions.

II Problem Statement

The goal of this work is to efficiently solve optimal motion planning problems by learning to identify, and effectively leverage, states critical to their solutions. Informally, solving the optimal motion planning problem entails finding the shortest free path from an initial state to a goal region, if one exists. For complex problems, there exist formulations that include kinematic, differential, or other more complex constraints [29, 26]. The geometric motion planning problem, a simple version of the problem, is defined as follows. Let X ⊆ ℝ^d be the state space, with d ∈ ℕ, d ≥ 2. Let X_obs denote the obstacle space, X_free = X \ X_obs the free state space, x_init ∈ X_free the initial condition, and X_goal ⊂ X_free the goal region. A path is defined by a continuous function σ : [0, 1] → X. We refer to a path as collision-free if σ(τ) ∈ X_free for all τ ∈ [0, 1], and feasible if it is collision-free, σ(0) = x_init, and σ(1) ∈ X_goal. We thus wish to solve:

Problem 1 (Optimal motion planning)

Given a motion planning problem (X_free, x_init, X_goal) and a cost measure c, find a feasible path σ* minimizing c(σ) over all feasible paths σ. If no such path exists, report failure.

Even simple forms of the motion planning problem are known to be PSPACE-complete [26], and thus one often turns to approximate methods to solve the problem efficiently. In particular, sampling-based motion planning techniques have emerged as one such state-of-the-art approach. These algorithms avoid explicitly constructing the problem's state space and instead build an approximate, implicit representation of it. This representation is constructed from a set of probing samples, each a potential state the robot may be in. A graph (or tree) is then built by connecting these samples to their local neighbors via a local planner under the supervision of a black-box collision checker [26]. Finally, given an initial and goal state, this representation can be searched to find a trajectory connecting the two. In this work we consider a multi-query setting, focusing on identifying critical states and connecting them globally.
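The final search step above can be sketched with a standard Dijkstra search over an adjacency-list roadmap; the toy graph and node names below are illustrative, not from the paper.

```python
# Dijkstra search over a roadmap: adjacency dict node -> [(neighbor, cost)].
import heapq

def shortest_path(roadmap, start, goal):
    """Return (cost, path) of the cheapest roadmap path, or (inf, [])."""
    dist = {start: 0.0}
    parent = {start: None}
    frontier = [(0.0, start)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in roadmap.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(frontier, (nd, v))
    return float("inf"), []

# Hypothetical toy roadmap: the direct init->b edge is costlier than init->a->b.
roadmap = {
    "init": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 1.0)],
    "b": [("goal", 1.0)],
}
print(shortest_path(roadmap, "init", "goal"))  # (3.0, ['init', 'a', 'b', 'goal'])
```

In a multi-query setting the roadmap is built once and this search is repeated for each new initial/goal pair.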

III Critical Sample Identification and Learning

III-A Identifying Critical States

The first question we seek to answer is what defines a critical sample for the optimal motion planning problem. If we consider a human navigating an indoor environment, critical states may be doorways, while non-critical states may be hallways or other open regions. In the context of geometric motion planning, discovering these narrow passages is often the bottleneck [14]. However, as problems increase in difficulty, either due to more complex environments (cluttered and unstructured) or more complex robotic systems (differential constraints, rotational DoFs, complex local policies, stochasticity), it is not clear how such concepts can be applied. We thus wish to devise a principled, general method for extracting, and learning to identify, critical states.

Several approaches exist to compute a sample's "criticality" in the state space; the most promising we considered were label propagation [28], minimum k-cuts [12], and betweenness centrality [10]. Label propagation algorithms seek to break graphs into communities, where transitions between communities may be considered bottleneck states. Label propagation finds reasonable results for very narrow passage problems, but the results are unstable and poorly defined for problems with less constrained bottlenecks (Fig. 1(b)). Minimum k-cut algorithms seek the minimum-weight cuts that partition the graph into k components. Minimum k-cuts can identify many critical samples, but require a fixed number of cuts to be provided, which can result in too few cuts, ignoring critical regions, or too many, identifying non-critical regions, e.g., corners (Fig. 1(c)). Furthermore, minimum k-cuts is significantly slower than the other approaches. Ultimately, we selected betweenness centrality, a graph-theoretic measure of the importance of each node to shortest paths through a graph.

As outlined in Fig. 2, betweenness centrality is computed by counting the number of all-pairs shortest paths that pass through a specific node. We make two alterations to betweenness centrality to adapt it to the motion planning problem and the complexity of PRMs. First, we compute only an approximate value by solving one-to-all shortest path problems from a subset of randomly chosen initial nodes (sampled without replacement); if every node serves as an initial node, this is exact. Each time a node is used in a shortest path, its centrality score is incremented. Second, we add a smoothing step to discount samples that can be skipped along the shortest path. Essentially, for a collision-free path that traverses nodes x_{i-1}, x_i, x_{i+1}, if the connection between x_{i-1} and x_{i+1} is collision-free, then node x_i is not critical to the path, and thus its score should not be incremented. This step is necessary to eliminate samples that simply lie on the free-space trajectory between critical samples and are used only due to the limited connection radius. Crucially, it also allows critical states to be identified in part by local environment features, enabling more compact environment representations and increased data efficiency, and thus better scaling to complex environments.
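The scoring procedure above can be sketched as follows. This is a toy illustration: `collision_free` is a hypothetical stand-in for the black-box local planner, and the five-node two-room graph is constructed so that the `door` node is the only bottleneck.

```python
# Approximate betweenness centrality with the smoothing step: a path node is
# only credited when its path neighbors cannot connect directly.
import heapq
import random

def shortest_path_tree(graph, src):
    """Dijkstra: return parent pointers of the shortest-path tree from src."""
    dist, parent = {src: 0.0}, {src: None}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return parent

def criticality_scores(graph, collision_free, num_sources,
                       rng=random.Random(0)):
    score = {u: 0 for u in graph}
    # One-to-all problems from randomly chosen sources, without replacement.
    for src in rng.sample(sorted(graph), num_sources):
        parent = shortest_path_tree(graph, src)
        for goal in parent:
            path, u = [], goal
            while u is not None:
                path.append(u)
                u = parent[u]
            path.reverse()
            for i in range(1, len(path) - 1):
                # Smoothing: skippable nodes are not critical to this path.
                if not collision_free(path[i - 1], path[i + 1]):
                    score[path[i]] += 1
    return score

# Toy two-room world: a1, a2 and b1, b2 connect only through 'door'.
edge_set = {("a1", "a2"), ("a1", "door"), ("a2", "door"),
            ("b1", "door"), ("b2", "door"), ("b1", "b2")}
graph = {u: [] for u in ["a1", "a2", "b1", "b2", "door"]}
for u, v in edge_set:
    graph[u].append((v, 1.0))
    graph[v].append((u, 1.0))
cf = lambda x, y: x == y or (x, y) in edge_set or (y, x) in edge_set
scores = criticality_scores(graph, cf, num_sources=5)
print(scores["door"])  # 8: every cross-room shortest path needs the doorway
```

Only the doorway accumulates score; samples inside either room are always skippable by the smoothing check, mirroring how free-space samples are discounted in the paper.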

(a) Label Propagation
(b) Label Propagation
(c) Minimum k-cuts
(d) Betweenness
Fig. 1: Approaches for identification of critical samples: (1(a)-1(b)) Label propagation approaches fail with moderately narrow passages. (1(c)) Minimum k-cuts identifies several non-critical samples. (1(d)) Betweenness centrality was found to be principled and perform well. Size and color are proportional to criticality.

III-B Learning to Recognize States

The first phase of the algorithm learns to identify critical samples from a set of PRMs generated for a family of training environments. This phase generates the critical sample dataset and then trains a deep neural network, conditioned on the planning environment, to output the criticality of a sample. This methodology allows the neural network to generalize to new environments at test time. Furthermore, due to the smoothing step, which discounts the criticality of states that can be skipped in the local region, the network only needs a local representation of the environment. This allows much more scalable training in complex problems, as demonstrated in Section V. Note that here we learn to predict sample criticality and sample accordingly, rather than learning a distribution of states. Though the distributional approach may allow more precise critical regions to be learned, the regressor approach requires only local features and may handle the multi-modality of critical states more effectively.

Dataset Creation. For each state space in a training set, we construct a standard PRM with n samples, adding an edge between samples x_i and x_j if and only if the trajectory from x_i to x_j is collision-free and the samples are within a connection radius r_n, as defined in [20]. With these roadmaps in hand, we compute the criticality of each sample via betweenness centrality, as described above.
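The r_n-radius roadmap construction above can be sketched as follows for a 2D workspace. The single box obstacle, sample count, and radius are illustrative assumptions, and segment collision checking is done by naive discretization rather than an exact method.

```python
# r-disc PRM construction in the unit square with one axis-aligned box obstacle.
import math
import random

def collision_free(p, q, obstacle, steps=20):
    """Check a straight segment against a box (xlo, ylo, xhi, yhi) by sampling."""
    xlo, ylo, xhi, yhi = obstacle
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if xlo <= x <= xhi and ylo <= y <= yhi:
            return False
    return True

def build_prm(n, r, obstacle, rng=random.Random(0)):
    # Draw n uniform samples and discard those inside the obstacle.
    samples = [(rng.random(), rng.random()) for _ in range(n)]
    samples = [p for p in samples if collision_free(p, p, obstacle)]
    # Connect pairs within radius r whose connecting segment is free.
    edges = []
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            if (math.dist(samples[i], samples[j]) <= r
                    and collision_free(samples[i], samples[j], obstacle)):
                edges.append((i, j))
    return samples, edges

samples, edges = build_prm(50, 0.3, (0.4, 0.0, 0.6, 0.8))
```

In practice r_n shrinks with n per [20] and collision checking is a black box; both are fixed constants here for brevity.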

Training. The computed centrality values become labels for training a neural network f_θ(x, y) parameterized by θ, where x is a state sample and y is a representation of the environment. Herein, y is a representation of the local environment around x, e.g., an occupancy grid. We minimize the loss between predicted and computed criticality to learn θ.
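As a concrete, heavily simplified illustration of this training step, the sketch below fits a one-feature linear regressor to log-criticality targets with a log-mean-squared-error loss (the loss named in Section V). The feature, the +1 offset (criticality can be zero), and the learning rate are assumptions; in practice a convolutional network replaces the linear model.

```python
# Regressing criticality labels under a squared error in log space.
import math

def log_mse(preds, labels, eps=1.0):
    """Mean squared error between log-shifted predictions and labels."""
    return sum((math.log(p + eps) - math.log(c + eps)) ** 2
               for p, c in zip(preds, labels)) / len(labels)

def fit_linear(xs, ys, lr=0.05, steps=2000):
    """Gradient descent on w*x + b against targets log(1 + criticality)."""
    w, b = 0.0, 0.0
    ts = [math.log(1.0 + y) for y in ys]
    for _ in range(steps):
        gw = gb = 0.0
        for x, t in zip(xs, ts):
            err = (w * x + b) - t
            gw += 2 * err * x / len(xs)
            gb += 2 * err / len(xs)
        w, b = w - lr * gw, b - lr * gb
    return w, b

# Synthetic labels whose log-criticality is exactly linear in the feature.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.e ** x - 1.0 for x in xs]
w, b = fit_linear(xs, ys)  # converges to w ~ 1, b ~ 0
```

The log transform compresses the heavy-tailed centrality counts so that rarely-but-highly critical samples do not dominate the loss.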

(a) PRM
(b) Solve Problems
(c) Criticality Data
(d) Learned Criticality
Fig. 2: The critical sample identification and learning process: (2(a)) Build PRMs on a family of training environments. (2(b)) Solve several one-to-all planning problems. (2(c)) Identify critical points via betweenness centrality. The size (larger is more critical) and color (red more critical, blue less) of each sample is proportional to criticality; line colors indicate how often an edge is used and are for visualization only. (2(d)) In a new environment, the criticality prediction network predicts the criticality of each sample (green indicates not critical; blue to red indicates increasing criticality). Ultimately, Critical PRM samples critical states proportional to their criticality.
0:  Input: planning problem (X_free, x_init, X_goal); sample budget n; number of critical samples n_crit
1:  Construct conditioning variables y (e.g., occupancy grid).
2:  Sample candidate states and compute their criticality with f_θ.
3:  Select n_crit critical samples with probability proportional to criticality.
4:  Select n − n_crit uniform samples.
5:  Connect non-critical samples within an r_n radius [20].
6:  Connect critical samples to all samples (see footnote 1).
7:  Connect x_init and X_goal globally into the Critical PRM.
8:  Search the Critical PRM for the shortest path from x_init to X_goal.
Fig. 3: Online Critical PRM Construction

IV Critical Probabilistic Roadmaps

Given the ability to identify critical samples in a previously unseen environment, the next key question is how best to leverage these important samples. Previous works have generally sampled them at a higher rate, but considered them of equal importance beyond that [15, 16]. Herein, we propose a hierarchical graph, called a Critical Probabilistic Roadmap (Critical PRM), that globally connects critical states atop a bed of locally connected uniform states. This allows the critical states to act as primary hubs within the space, while preserving the theoretical guarantees of sampling-based motion planning via the uniform samples.

Given a new planning environment, the online portion of the Critical PRM algorithm proceeds as follows (and is outlined in Fig. 3). Given a planning problem in a new environment with free space X_free, corresponding environmental input y, a sample budget n, and a constant controlling the number of critical samples n_crit, we first sample a larger set of candidate states and predict their criticality with f_θ (Lines 1-2); this oversampling allows the critical samples to be chosen from a denser covering of the state space. Next, the n_crit critical samples are drawn stochastically with probability proportional to their criticality (Line 3). We refer to the remaining n − n_crit samples as non-critical samples (Line 4). To connect the Critical PRM, we connect the non-critical samples locally, only with neighbors within an r_n connection radius, as detailed in [20] (Line 5). In contrast, the critical samples are connected globally, to all other samples regardless of distance (Line 6).¹ Finally, given a planning problem with an initial state and goal region, we connect both into the roadmap globally and search the roadmap for the shortest connecting path (Lines 7-8). Fig. 4(b) shows a Critical PRM with 20 samples versus a uniformly sampled PRM with 1000 samples in Fig. 4(a); the Critical PRM achieves better connectivity than the standard PRM with 50x fewer samples.

¹ For very large spaces or costly local planners, this connection radius can instead be a large constant value.
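The online construction can be sketched as follows, assuming a 2D state space. All names here are hypothetical stand-ins: `sample_fn` draws a free state, `predict_criticality` plays the role of the learned network, `connectable` plays the role of the local planner and collision checker, and the oversampling factor and radius are arbitrary; critical samples are drawn with replacement for simplicity.

```python
# Minimal Critical PRM sketch: critical nodes connect globally, uniform nodes
# connect only within a local radius.
import math
import random

def build_critical_prm(sample_fn, predict_criticality, connectable,
                       n, n_crit, oversample=4, r=0.3, rng=random.Random(0)):
    # Score a denser candidate set (Lines 1-2 of Fig. 3).
    candidates = [sample_fn(rng) for _ in range(oversample * n)]
    scores = [predict_criticality(x) for x in candidates]
    # Draw critical samples proportional to criticality (Line 3).
    critical = rng.choices(candidates, weights=[s + 1e-9 for s in scores],
                           k=n_crit)
    uniform = [sample_fn(rng) for _ in range(n - n_crit)]  # Line 4
    nodes = critical + uniform
    edges = set()
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            # Critical nodes (indices < n_crit) connect globally (Line 6);
            # uniform nodes connect only within radius r (Line 5).
            critical_pair = i < n_crit or j < n_crit
            if ((critical_pair or math.dist(nodes[i], nodes[j]) <= r)
                    and connectable(nodes[i], nodes[j])):
                edges.add((i, j))
    return nodes, edges

nodes, edges = build_critical_prm(
    lambda rng: (rng.random(), rng.random()),  # uniform sampler
    lambda x: 1.0,                             # dummy criticality predictor
    lambda u, v: True,                         # obstacle-free toy world
    n=10, n_crit=2)
# In this obstacle-free toy, every critical node connects to every other node.
```

The resulting hierarchical graph can then be searched with any shortest-path routine; only the connection strategy differs from a standard PRM.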

Complexity: The complexity of Critical PRM matches that of standard PRM, as the uniform samples are locally connected and maintain the standard complexity of PRM [20]. The critical samples are connected to all samples, requiring no nearest-neighbor lookup and constant-time connections and collision checks.

Probabilistic Completeness and Asymptotic Optimality: The theoretical guarantees of probabilistic completeness and asymptotic optimality from [19, 18, 20] hold for this method by adjusting any references to the total number of samples to instead refer to the number of uniform samples in our methodology. This result is detailed in Appendix D of [19] and Section 5.3 of [18], which show that adding samples can only improve the solution.

V Experiments

In this section we evaluate the Critical PRM algorithm on several motion planning problems and on a physical robot. The results in this work were implemented in a mix of Python and Julia [4], along with TensorFlow [1]. The network architectures are fully connected for 1D environment inputs, convolutional for 2D inputs, and 3D convolutional for 3D inputs. The loss for each problem is a log mean squared error on a sample's criticality. Each training dataset was composed of 50% critical states (states with criticality greater than 0) and 50% non-critical states. Note that we make comparisons on several problems well-suited to Hybrid sampling [15]; however, we do not compare to previous learning-based approaches targeted towards single-query settings, as these rely heavily on initial and goal states and have difficulty scaling to the large, complex environments that the local nature of Critical PRM can cope with.

V-A Narrow Passage Environment

(a) Uniform PRM
(b) Critical PRM
(c) 2D: Time (s) vs. Success
(d) 2D: Time (s) vs. Cost
(e) 3D Problem
(f) Learned Criticality
(g) 3D: Time (s) vs. Success
(h) 3D: Time (s) vs. Cost
Fig. 4: With 50x fewer samples, the Critical PRM (4(b)) fully connects a space that uniform PRM (4(a)) cannot. (4(c)) Success rate and (4(d)) cost for Critical PRM (blue) and uniform PRM (red) show orders of magnitude improvement (over 50 problems). Furthermore, the use of global connections for critical states is responsible for an order of magnitude improvement compared to critical states with a standard radius (orange). (4(e)-4(h)) The 3D problem demonstrates similar results and that Critical PRM with local features (a local occupancy grid) performs well in more complex environments.

As a proof of concept, we consider randomly generated 2D and 3D narrow passage environments, shown in Figs. 4(a)-4(d) and 4(e)-4(h). These environments provide both clear learned samples and allow comparison to previous heuristic methods tuned for narrow passages [15]. Two criticality prediction networks were trained for each environment, with exact and local workspace representations. The first network is given an exact representation of the workspace as input: the coordinates of each gap. The second network is given a local representation of the workspace as input: for 2D, the workspace is divided into an occupancy grid, of which the local occupancy grid around each sample is fed into the network; for 3D, the workspace is divided analogously into a 3D occupancy grid. The training data consisted of 1k problems and 100k example states. For comparisons, we also consider a standard, uniform PRM [21] and Hybrid sampling PRM [15], which specifically biases samples towards obstacles and narrow passages. For the network with better performance on each problem, we additionally consider the effect of globally connecting critical samples by showing results for sampling critical states but making only local connections (with the standard r_n connection radius instead).

For each problem, Critical PRM outperforms both uniform and hybrid sampling in cost and success rate versus computation time (s). For 2D, the exact representation achieves better initial performance than the occupancy grid input, though the occupancy grid input performs similarly well at higher computation times. For 3D, however, due to the complexity of the problem, the local representation vastly outperformed the exact one, even with a significantly higher input dimensionality. This is because each sample's local environment is unique, yielding 100k distinct environment training inputs instead of only 1k unique inputs for the exact representation. Furthermore, while Critical PRM with a standard connection radius outperforms uniform sampling, the global connections improve both cost and success rate substantially.

V-B Rigid Body Planning

SE(2) L Problem. To further demonstrate the ability of the Critical PRM learning methodology to identify narrow passages in the state space from obstacle representations given in the workspace, we also consider the problem of maneuvering a rigid L-shaped robot through a constrained environment, as depicted in Fig. 5(a). The state space includes an orientation dimension in addition to two position dimensions. To highlight the complications that arise from considering orientation, the family of environments for this subsection is constructed by computing Voronoi diagrams for randomly drawn points in the unit square and cutting passages in the borders between regions (which have variable orientation by construction) significantly narrower than the lengths of the robot's body segments.

From Fig. 5(c), which displays inferred sample criticality superimposed on a PRM not within the neural net's training set, we can see that the learning process has gained the insight that states where the arms of the L straddle a passage are most valuable. The learned model produces smoother criticality predictions than the ground truth values depicted in Fig. 5(b). This ground truth is in a sense overfit to the specific sample set and associated graph (see, e.g., narrow passages with nearby states having near-zero criticality because the ground truth graph contains no path through the passage); in contrast, the regression estimate gives a sense of how a sample might be useful in a generic PRM. For this problem family we find that putting learned criticality to work in Critical PRM improves the required computation time for a given success rate by approximately half an order of magnitude (Fig. 5(d)).


(a) SE(2) Planning
(b) Ground Truth Criticality
(c) Learned Criticality

(d) Time (s) vs. Success
Fig. 5: Learning criticality for SE(2) rigid body motion planning (5(a)). The ground truth criticality (5(b)) and network outputs (5(c)) are visualized as robot miniatures with color/size proportional to log-criticality (redder and larger is more critical). Critical PRM displays a half order of magnitude improvement in computation time for a fixed success rate (5(d)).

SE(3) I Problem. We also consider an I-shaped rigid body, shown in Fig. 6(a). We use the same environment inputs and network architecture from the earlier 3D planning problem. Due to the complexity of gathering data for this problem, we train the criticality network from only 200 environments and 100k total samples (also showcasing the method's minimal data requirements). Fig. 6(b) shows several critical states for the SE(3) problem. The non-critical samples generally occur in free space (blue), while the critical samples (red) tend to be near the gaps and oriented perpendicular to the obstacle plane, though they are reasonably varied, allowing some diversity. The results are shown in Figs. 6(c)-6(d). For this environment, the local representation again outperforms by orders of magnitude (even in such a low-data regime and with a high-dimensional input); the difference is particularly stark in path cost compared to Hybrid PRM and Uniform PRM. The global connections again have a large effect on both success rate and cost.

(a) SE(3)
(b) Learned Crit.
(c) Time (s) vs. Success
(d) Time (s) vs. Cost
Fig. 6: This SE(3) I rigid body planning problem demonstrates that Critical PRM can consider the full state space when selecting samples, and again demonstrates the benefit of both critical samples and the hierarchical roadmap. In the learned criticality visualization (6(b)), redder is more critical.

V-C Reinforcement Learned Local Policy

In this section, we demonstrate the generality and efficacy of Critical PRMs on a challenging planning problem that requires consideration of: (1) the effect of a complex local planner, (2) the effect of uncertainty in choosing robust samples that are navigable in the presence of sensor and dynamics noise, and (3) high-dimensional, complex, real-world environment representations in the form of images of office building floor plans (Fig. 7). This problem follows the formulation presented in [8, 9], in which a differential drive robot navigates an office environment to a local goal via a reinforcement-learned policy. The policy takes as input a vector from the current state to the goal and a 64-wide, 220° field-of-view observation of the local environment, along with a realistic noise model. This is incorporated into the PRM framework by using the policy to connect local samples, thus allowing more intelligent local planning. Edges are added only if the local planner performs the connection 100% of the time over 20 trials, thus requiring samples to be chosen robustly with respect to the local policy and noise. The parameter controlling the number of critical samples was set to 15 (note the increase due to the complexity of the problem and the number of narrow passages). The critical training data is visualized in Fig. 7 along with the input to the network (a 100×100 pixel image over a 10m×10m area). The network for this data was trained over three different office floor plans, with a total of 50k samples. We note that, due to the scarcity of environments and the complexity of the input for full environments, previous learned methods [16, 22] that require full environment input cannot generalize. Furthermore, approaches that seek to find samples near obstacles or narrow passages [15] are likely both to select difficult-to-reach states given the problem's stochasticity and to miss the key features of the problem given the clutter.

Fig. 8 shows the results of Critical PRM on a new building. Though the building has not been seen before, the neural network is able to extract the importance of features like hallways and to ignore areas around cubicles. Note that the critical points are not always in doorways, as one might expect for a straight-line policy, because the intelligent local policy is often able to robustly enter or exit such narrow passages. In terms of success rate and cost, Critical PRM is approximately 5 times faster to compute. We also compare against critical points learned while considering only straight-line (SL) connections (though at runtime, the robot still executes the learned policy); this represents a method that cannot take the local planner into account when learning to sample. The straight-line variant still outperforms uniform PRM, but is approximately 3 times slower than Critical PRM.

Fig. 7: Floor plan environment and critical sample data for the RL-trained policy. The input to the criticality prediction network is a 100×100 pixel (10m×10m) image of the local environment around a sample. Note the most critical samples avoid open regions and are not necessarily at doorways, due to the intelligent local policy.
(b) Time vs. Success
(c) Time vs. Cost
Fig. 8: (8(a)) Predicted sample criticality for a new floor plan extracts the importance of the center hallway and avoids outside cubicles. (8(b)-8(c)) Success rate and cost versus time (s) over 50 problems.

V-D Physical Experiments

Fig. 9: On-robot experiments, in which Critical PRM computes lower-cost, more-robust trajectories in less time than PRM. (9(a)) Fetch and lidar observation. (9(b)) PRM and (9(c)) Critical PRM trajectories (planned waypoints shown as stars). The circled cyan trajectories were run 5 times each, resulting in 40% success for uniform PRM and 100% for Critical PRM.

Lastly, we implement the approach on a physical robot, a Fetch operating in an indoor environment using SLAM (see Fig. 9). The robot observes and localizes against the environment with a 220° field-of-view, single-plane lidar and navigates locally using the reinforcement-learned policy described in Section V-C. The input to the criticality network is an 80×80 pixel image of the environment. The network was trained on 5k samples from a different lidar-mapped environment.

The experiments are shown in Fig. 9, for which 6 long-range trajectories are shown traversing the space. The Critical PRM trajectories in Fig. 9(c) were generally more direct and robust than their uniform PRM counterparts in Fig. 9(b). To this end, the cyan (circled) path in Figs. 9(b)-9(c) was repeated five times, of which the Critical PRM trajectory was successful 100% of the time versus 40% for the uniform PRM. This improved performance was primarily a result of the Critical PRM using waypoints only when necessary, allowing more spread-out waypoints (e.g., when the environment was less cluttered) and thus allowing the RL policy to better adjust to stochasticity in operation.

VI Conclusions and Future Work

Conclusions. In this paper, we have presented a method for learning critical samples for sampling-based robot motion planning and leveraging them more heavily. Specifically, we identify critical samples via betweenness centrality, a graph-theoretic measure of a node's importance to shortest paths, and learn to predict them via a neural network that takes local workspace information as input. For a new planning problem, these critical samples are then sampled more frequently and connected globally (along with a bed of locally connected uniform samples). The result is a hierarchical roadmap we term the Critical PRM. The algorithm is demonstrated to achieve up to a three order of magnitude improvement in the computation time required to achieve a given success rate and cost, due both to the selection of better samples and to their use within a hierarchical roadmap. The method is further demonstrated to be general enough to handle real-world data, state space sampling, and complex local policies. Lastly, Critical PRM is shown on a physical robot to compute lower-cost and more-robust trajectories.

Future Work. In the future we plan to extend this work in several ways. First, we plan to show its ability to identify samples for differentially constrained systems and robot arms. Second, we plan to investigate how critical samples may be clustered to identify critical regions and reduce overlap between samples. Finally, we plan to use these results within a hierarchical RL framework by biasing the high level policy towards more critical subgoals.


  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation, pp. 265–283. Cited by: §V.
  • [2] P. Bacon (2013) On the bottleneck concept for options discovery. Master's thesis, McGill University. Cited by: §I.
  • [3] E. Banijamali, R. Shu, M. Ghavamzadeh, H. Bui, and A. Ghodsi (2018) Robust locally-linear controllable embedding. In AISTATS, Cited by: §I.
  • [4] J. Bezanson, S. Karpinski, V. B. Shah, and A. Edelman (2012) Julia: a fast dynamic language for technical computing. arXiv preprint arXiv:1209.5145. Cited by: §V.
  • [5] V. Boor, M. H. Overmars, and A. F. Van Der Stappen (1999) The gaussian sampling strategy for probabilistic roadmap planners. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §I.
  • [6] M. S. Branicky, S. M. LaValle, K. Olson, and L. Yang (2001) Quasi-randomized path planning. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §I.
  • [7] C. Chamzas, A. Shrivastava, and L. E. Kavraki (2019) Using local experiences for global motion planning. arXiv preprint arXiv:1903.08693. Cited by: §I.
  • [8] A. Faust, K. Oslund, O. Ramirez, A. Francis, L. Tapia, M. Fiser, and J. Davidson (2018) PRM-rl: long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 5113–5120. Cited by: §V-C.
  • [9] A. Francis, A. Faust, H. L. Chiang, J. Hsu, J. C. Kew, M. Fiser, and T. E. Lee (2019) Long-range indoor navigation with prm-rl. arXiv preprint arXiv:1902.09458. Cited by: §V-C.
  • [10] L. C. Freeman (1977) A set of measures of centrality based on betweenness. Sociometry. Cited by: §I, §III-A.
  • [11] J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot (2015) Batch informed trees (bit*): sampling-based optimal planning via the heuristically guided search of implicit random geometric graphs. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §I.
  • [12] O. Goldschmidt and D. S. Hochbaum (1994) A polynomial algorithm for the k-cut problem for fixed k. Mathematics of operations research. Cited by: §III-A.
  • [13] A. Goyal, R. Islam, D. Strouse, Z. Ahmed, M. Botvinick, H. Larochelle, S. Levine, and Y. Bengio (2019) Infobot: transfer and exploration via the information bottleneck. arXiv preprint arXiv:1901.10902. Cited by: §I.
  • [14] D. Hsu, L. E. Kavraki, J. Latombe, R. Motwani, and S. Sorkin (1998) On finding narrow passages with probabilistic roadmap planners. In Proc. Int. Workshop on Algorithmic Foundations of Robotics (WAFR), Cited by: §I, §III-A.
  • [15] D. Hsu, G. Sánchez-Ante, and Z. Sun (2005) Hybrid prm sampling with a cost-sensitive adaptive strategy. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §I, §I, §IV, §V-A, §V-C, §V.
  • [16] B. Ichter, J. Harrison, and M. Pavone (2018) Learning sampling distributions for robot motion planning. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §I, §I, §IV, §V-C.
  • [17] B. Ichter and M. Pavone (2019) Robot motion planning in learned latent spaces. IEEE Robotics and Automation Letters. Cited by: §I.
  • [18] L. Janson, B. Ichter, and M. Pavone (2018) Deterministic sampling-based motion planning: optimality, complexity, and performance. IJRR. Cited by: §I, §IV.
  • [19] L. Janson, E. Schmerling, A. Clark, and M. Pavone (2015) Fast marching tree: a fast marching sampling-based method for optimal motion planning in many dimensions. IJRR. Cited by: §IV.
  • [20] S. Karaman and E. Frazzoli (2011) Sampling-based algorithms for optimal motion planning. IJRR. Cited by: §III-B, §IV, §IV, §IV, 5.
  • [21] L. Kavraki, P. Svestka, and M. H. Overmars (1994) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE TRO. Cited by: §I, §I, §V-A.
  • [22] R. Kumar, A. Mandalika, S. Choudhury, and S. S. Srinivasa (2019) LEGO: leveraging experience in roadmap generation for sampling-based planning. arXiv preprint arXiv:1907.09574. Cited by: §I, §V-C.
  • [23] H. Kurniawati and D. Hsu (2004) Workspace importance sampling for probabilistic roadmap planning. In IROS, Cited by: §I.
  • [24] S. M. LaValle, M. S. Branicky, and S. R. Lindemann (2004) On the relationship between classical grid search and probabilistic roadmaps. IJRR. Cited by: §I.
  • [25] S. M. LaValle and J. J. Kuffner Jr (2000) Rapidly-exploring random trees: progress and prospects. Cited by: §I.
  • [26] S. M. LaValle (2006) Planning algorithms. Cambridge university press. Cited by: §I, §II, §II.
  • [27] D. Molina, K. Kumar, and S. Srivastava (2019) Identifying critical regions for motion planning using auto-generated saliency labels with convolutional neural networks. arXiv preprint arXiv:1903.03258. Cited by: §I.
  • [28] U. N. Raghavan, R. Albert, and S. Kumara (2007) Near linear time algorithm to detect community structures in large-scale networks. Physical review E. Cited by: §III-A.
  • [29] E. Schmerling, L. Janson, and M. Pavone (2015) Optimal sampling-based motion planning under differential constraints: the driftless case. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §II.
  • [30] Ö. Şimşek and A. G. Barto (2009) Skill characterization based on betweenness. In Advances in neural information processing systems, Cited by: §I.
  • [31] A. Solway, C. Diuk, N. Córdova, D. Yee, A. G. Barto, Y. Niv, and M. M. Botvinick (2014) Optimal behavioral hierarchy. PLoS computational biology. Cited by: §I.
  • [32] J. P. Van den Berg and M. H. Overmars (2005) Using workspace information as a guide to non-uniform sampling in probabilistic roadmap planners. IJRR. Cited by: §I.
  • [33] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller (2015) Embed to control: a locally linear latent dynamics model for control from raw images. In NIPS, Cited by: §I.
  • [34] Y. Wu, G. Tucker, and O. Nachum (2018) The laplacian in RL: learning representations with efficient approximations. arXiv preprint arXiv:1810.04586. Cited by: §I.
  • [35] C. Zhang, J. Huh, and D. D. Lee (2018) Learning implicit sampling distributions for motion planning. In IROS, Cited by: §I.
  • [36] M. Zucker, J. Kuffner, and J. A. Bagnell (2008) Adaptive workspace biasing for sampling based planners. In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Cited by: §I.


The authors thank Vincent Vanhoucke, Chase Kew, James Harrison, and Marco Pavone for insightful discussions.


The following describes the data used, algorithm hyperparameters, and network architectures. A dropout layer with rate 0.1 follows each layer. Each layer other than the output and max-pooling layers uses ReLU activation functions.
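The layer recipe above (ReLU on every hidden layer, none on the scalar output, dropout after each layer during training) can be sketched as follows. This is a minimal pure-Python forward pass for illustration only; the function names, the tiny 2-2-1 demo weights, and the inverted-dropout scaling are assumptions, not the paper's TensorFlow implementation.

```python
import random

def linear(x, W, b):
    # Dense layer y = W x + b, with W stored as a list of rows.
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def mlp_forward(x, layers, drop_rate=0.1, training=False):
    """Forward pass matching the appendix recipe: ReLU on every layer
    except the output, dropout (rate 0.1) after each layer during
    training only."""
    for i, (W, b) in enumerate(layers):
        x = linear(x, W, b)
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]   # ReLU on hidden layers
        if training:
            keep = 1.0 - drop_rate         # inverted dropout
            x = [v / keep if random.random() < keep else 0.0 for v in x]
    return x

# Tiny demo in evaluation mode (dropout off): a 2-2-1 network.
W1, b1 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]
W2, b2 = [[2.0, 3.0]], [-1.0]
y = mlp_forward([2.0, 1.0], [(W1, b1), (W2, b2)])
```

The real networks below simply stack larger dense (or convolutional) layers in the same pattern, with the final layer producing a single criticality score.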

2D narrow passage, Section V-A:

  • Input Exact: coordinates of each gap

  • Input Local: local occupancy grid

  • Architecture: 2048 - 1024 - 512 - 1

  • Data: 1k environments, 100k samples

  • Alg. Params.: ,

3D narrow passage, Section V-A:

  • Input Exact: coordinates of each gap

  • Input Local: local occupancy grid

  • Architecture Exact: 512 - 512 - 512 - 512 - 1

  • Architecture Local: conv - conv - 128 - 128 - 1

  • Data: 1k environments, 100k samples

  • Alg. Params.: ,

SE(2), Section V-B:

  • Input: local occupancy grid, local position within grid cell, and orientation of each sample

  • Architecture: conv - max-pooling - conv - max-pooling - conv - max-pooling - 1

  • Data: 1k environments, 6m samples

  • Alg. Params.: ,

SE(3), Section V-B:

  • Input Exact: coordinates of each gap and orientation of each sample

  • Input Local: local occupancy grid and orientation of each sample

  • Architecture Exact: 512 - 512 - 512 - 512 - 1

  • Architecture Local:

    • Environment Network [conv - conv - conv]

    • Orientation Network [256 - 256]

    • Stack output of networks into [256 - 256 - 1]

  • Data: 200 environments, 100k samples

  • Alg. Params.: ,

PRM-RL, Section V-C:

  • Input: local pixel image of the environment

  • Architecture: conv - max-pooling - conv - conv - 128 - 128 - 1

  • Data: 3 environments, 50k samples

  • Alg. Params.: ,

Physical Experiments, Section V-D:

  • Input: local pixel image of the environment

  • Architecture: conv - max-pooling - conv - conv - 128 - 128 - 1

  • Data: 2 environments, 5k samples

  • Alg. Params.: ,