A Configuration-Space Decomposition Scheme for Learning-based Collision Checking

11/17/2019 · Yiheng Han et al. · The University of Hong Kong

Motion planning for robots of high degrees-of-freedom (DOFs) is an important problem in robotics, with sampling-based methods in the configuration space C as one popular solution. Recently, machine learning methods have been introduced into sampling-based motion planning, which train a classifier to distinguish the collision-free subspace from the in-collision subspace in C. In this paper, we propose a novel configuration-space decomposition method and show two nice properties resulting from this decomposition. Using these two properties, we build a composite classifier that works compatibly with previous machine learning methods by using them as elementary classifiers. Experimental results are presented, showing that our composite classifier outperforms state-of-the-art single-classifier methods by a large margin. A real application of motion planning in a multi-robot system for plant phenotyping using three UR5 robotic arms is also presented.


I Introduction

Motion planning plays an important role in robotics: it finds a collision-free path that moves a robot from a source to a target position. The configuration space $\mathcal{C}$ [14] is widely used in robot motion planning; its spatial dimensions characterize the degrees-of-freedom (DOFs) of the robot, and each point in $\mathcal{C}$ represents a configuration of the robot. By decomposing $\mathcal{C}$ into a free subspace $\mathcal{C}_{free}$ (i.e., the set of robot configurations without self-collision or collision with obstacles) and an in-collision subspace $\mathcal{C}_{obs}$, motion planning is equivalent to finding a path completely within $\mathcal{C}_{free}$.

For robots of high DOFs (e.g., 6-DOF robotic arms or 17-DOF humanoid robots) in complex or dynamic environments, the boundary of $\mathcal{C}_{free}$ is very complicated and usually cannot be represented analytically [14, 21, 18]. Sampling-based motion planning (SBMP) methods (e.g., [9, 10]) were then proposed, which use sample points to characterize $\mathcal{C}_{free}$ and avoid explicitly establishing its boundary. Representative works include the classic probabilistic roadmaps (PRM) [11], rapidly-exploring random trees (RRT) [15, 13], variants of PRM and RRT (e.g., [10]), and recent state-of-the-art methods [12, 1].

To implicitly define $\mathcal{C}_{free}$, SBMP methods usually require a large number of samples, and each sample is guaranteed to be collision-free by passing an exact collision detector such as GJK [4] or FCL [16], which is very time-consuming. To speed up collision checking, machine learning methods have recently been introduced into this area; see Section II for a summary. These methods use a small subset of samples to train a classifier $f$. Given an arbitrary sample in an unknown region of $\mathcal{C}$, the classifier outputs a prediction that serves as a filter to quickly identify obviously in-collision or collision-free samples, so that only a small set of ambiguous samples needs to be finally checked by the exact collision detector. We call these predictions approximate collision checking. To ensure the success of these machine learning methods, the trained classifier must have a high accuracy.
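To make the filtering idea concrete, here is a minimal sketch of how such approximate collision checking is typically wired around an exact detector. The classifier interface (`predict_proba`), the ambiguity thresholds, and the `exact_checker` callable are our illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def check_batch(samples, clf, exact_checker, low=0.2, high=0.8):
    """Filter samples with a learned classifier before exact checking.

    clf.predict_proba returns P(collision-free); only samples whose
    prediction falls in the ambiguous band (low, high) are passed to
    the (slow) exact collision detector. Thresholds are illustrative.
    """
    p_free = clf.predict_proba(samples)[:, 1]   # probability of "free"
    labels = np.empty(len(samples), dtype=bool)
    labels[p_free >= high] = True               # confidently free
    labels[p_free <= low] = False               # confidently in collision
    ambiguous = (p_free > low) & (p_free < high)
    labels[ambiguous] = [exact_checker(q) for q in samples[ambiguous]]
    return labels
```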

SBMP methods can be used for both single queries and multiple queries. For a single query, the RRT methods (e.g., [15, 13]) start at a source point in $\mathcal{C}_{free}$ and iteratively grow a search tree. At each iteration, a randomly selected point is used to drive the system for a small time step, leading to a new vertex that is added to the tree by an edge linking it to the nearest vertex in the existing tree. The iteration terminates when the target point is reached. For multiple queries, the PRM method [11] and its variants (e.g., [10]) spread sample points uniformly over the whole space $\mathcal{C}$. Then, given arbitrary source and target points, a collision-free path is obtained by making use of these uniform samples.

The trained classifiers in learning-based methods can improve the efficiency of both single and multiple queries in two ways. The first is to use the classifier's prediction to quickly identify new collision-free samples when adding vertices to the existing search tree. Pan and Manocha [19] propose a fast probabilistic collision checking method and prove that the collision query predicted by their classifier converges to exact collision detection as the number of sampled points increases. Online learning is therefore important, since the classifier needs to be updated online to improve its accuracy as more and more collision-free points are added to the data set. The second way is that, instead of using a large number of samples to cover $\mathcal{C}_{free}$, a small set of samples can be used to train the classifier, and a large number of samples predicted as collision-free can then be quickly generated by the classifier. After a path is planned using these samples, a final exact collision check is applied to every sample on the path; for those samples that are in collision, a local repairing operation [2] is performed to update the path. In both ways, the accuracy and query time of the classifiers are critical to the performance of motion planning.

(a) Rendered system design
(b) A snapshot of real system
Fig. 1: An updated multi-robot plant phenotyping system [23]. By applying our proposed composite classifier, about 30% of the motion planning time is saved.

In most previous machine learning methods, only one binary classifier is trained to predict whether a query sample point is in $\mathcal{C}_{free}$ or $\mathcal{C}_{obs}$; that is, the subspace $\mathcal{C}_{free}$ or $\mathcal{C}_{obs}$ is learned as a single entity. In this paper, we propose a novel decomposition of $\mathcal{C}_{free}$ and $\mathcal{C}_{obs}$ based on the DOFs of the robot. By decomposing $\mathcal{C}_{free}$ and $\mathcal{C}_{obs}$ into a set of smaller subspaces related to the DOFs of the robot, we construct a composite classifier that consists of a set of simple classifiers, each corresponding to a decomposed subspace. The advantage of this composite classifier is that the set of simple classifiers acts as a hierarchy of filters that quickly screens out in-collision samples from easy to hard levels.

Our composite classifier with the configuration-space decomposition scheme is general and works compatibly with any previous machine learning method that uses a single classifier (e.g., [17, 19, 8, 2]). We show that our composite classifier has a much higher accuracy and is on average about two times faster than previous single-classifier methods. We also apply our composite classifier to the motion planning of an updated multi-robot plant phenotyping system [23]. This updated system (Figure 1) consists of three UR5 robotic arms, each equipped with an Intel RealSense SR-300 depth camera. To achieve fast, precise and noninvasive measurements for high-throughput plant phenotyping, all three arms move simultaneously in each round of phenotyping data acquisition. Our results show that, using the proposed composite classifier, about 30% of the motion planning time is saved.

II Related Work

Collision detection is frequently invoked in sampling-based motion planning. In this section, we briefly review the recent machine learning methods that have been introduced to speed up the time-consuming collision detection process.

An early work using machine learning is the neural network approach [3], which can only handle collision detection for box-shaped objects. Pan and Manocha [18] design efficient GPU-based parallel k-nearest neighbors (KNN) and parallel collision detection algorithms, and propose an approximate representation of the configuration space based on machine learning. Pan and Manocha [19] further make use of KNN for online learning of the configuration space. To achieve fast probabilistic collision checking, they use locality-sensitive hashing techniques that have only sub-linear time complexity. Their probabilistic collision checking effectively improves the performance (up to 2x speedup) of various motion planners such as RRT, RRT*, PRM and lazyPRM. Das and Yip [2] propose another proxy collision detector that achieves efficient active learning by utilizing lazy Gram matrix evaluation and a new, cheaper kernel to reduce the training and query time. Heo et al. [6] develop a deep learning method that uses monitoring signals (i.e., external torque at every robotic joint) for collision detection, which is applicable to industrial collaborative robots working with humans.

Most recent research on speeding up motion planning in SBMP methods trains a binary classifier using a small set of samples in $\mathcal{C}$ with correct labels (i.e., in-collision or collision-free); approximate collision checking can then be quickly obtained from the prediction of the trained classifier. Various classifiers have been applied, including support vector machines (SVM) [17], k-nearest neighbors (KNN) [19], Gaussian mixture models (GMM) [8, 7] and Gaussian kernel functions [2]. In this paper, we propose a configuration-space decomposition method that leads to a novel composite classifier. This composite classifier consists of a set of binary classifiers, each of which can be any of the above-mentioned classifiers, i.e., our model works compatibly with these previous machine learning methods [17, 19, 8, 2]. In Section V, we show that our composite classifier efficiently reduces the query time and improves the accuracy of single-classifier methods.

III Decomposition of Configuration Space

Motion planning in a configuration space of high dimension is a great challenge in robotics. In this paper, we propose a novel decomposition scheme of $\mathcal{C}$ and establish a composite classifier based on the decomposed subspaces, which has significantly better performance than a single classifier working directly on $\mathcal{C}$.

Nowadays, robots of high DOFs are ubiquitous, such as robotic arms and humanoid robots. Our work is based on the following important observation. Every robot $\mathcal{R}$ of high DOFs (in our study, we only consider the moving part of the robot; e.g., the base of the UR5 in Figure 2 does not move and is not included) can be separated into disjoint components $\mathcal{R}_1, \mathcal{R}_2, \ldots, \mathcal{R}_m$, satisfying

$\mathcal{R} = \bigcup_{i=1}^{m} \mathcal{R}_i, \qquad \mathcal{R}_i \cap \mathcal{R}_j = \emptyset \;\; (i \neq j),$   (1)

where $m$ is the number of components. Let $D = \{d_1, d_2, \ldots, d_n\}$ be all the DOFs of $\mathcal{R}$, where $n$ is the number of DOFs. For each component $\mathcal{R}_i$, one or more DOFs can be assigned to it, denoted as $D_i$, such that

$D = \bigcup_{i=1}^{m} D_i, \qquad D_i \cap D_j = \emptyset \;\; (i \neq j).$   (2)

All the components can be ordered in such a way that the position and orientation of each component $\mathcal{R}_i$ are uniquely determined by the DOFs in $D_1 \cup D_2 \cup \cdots \cup D_i$. An example, the UR5 collaborative robot arm by Universal Robots, is shown in Figure 2.

Fig. 2: The decomposition of a 6-DOF UR5 robot $\mathcal{R}$. We denote the 6 DOFs as $d_1, d_2, \ldots, d_6$. The base of the UR5 does not move and is not included in $\mathcal{R}$. $\mathcal{R}$ can be decomposed into six components $\mathcal{R}_1, \ldots, \mathcal{R}_6$. Each $\mathcal{R}_i$ is associated with one DOF $d_i$, such that the position and orientation of each component $\mathcal{R}_i$ are uniquely determined by the DOFs $d_1, \ldots, d_i$.
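To make this decomposition concrete in code, the following snippet (a sketch; the field names are ours, not the paper's) encodes the UR5 decomposition of Figure 2, where component $\mathcal{R}_i$ carries DOF $d_i$ and its pose depends on the prefix $d_1, \ldots, d_i$:

```python
# A sketch encoding the UR5 decomposition of Figure 2 (names are ours):
# component R_i carries its own DOF d_i, and its position/orientation
# depends on the prefix of DOFs d_1..d_i (0-based indices here).
UR5_COMPONENTS = [
    {"component": f"R{i + 1}", "own_dof": i, "pose_dofs": list(range(i + 1))}
    for i in range(6)
]
# e.g. UR5_COMPONENTS[2] -> {'component': 'R3', 'own_dof': 2, 'pose_dofs': [0, 1, 2]}
```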

The dimension of the configuration space $\mathcal{C}$ of the robot $\mathcal{R}$ is exactly $n$, i.e., the number of DOFs in $D$. Most previous works partition $\mathcal{C}$ into a free subspace $\mathcal{C}_{free}$ and an in-collision subspace $\mathcal{C}_{obs}$, such that the robot configuration specified by any point in $\mathcal{C}_{free}$ has no self-collision or collision with obstacles. Based on the characteristics summarized in Eqs. (1)-(2), we further decompose $\mathcal{C}_{free}$ and $\mathcal{C}_{obs}$ according to the DOF groups of $\mathcal{R}$ as follows.

Throughout this paper, we consider a static environment with arbitrarily complex obstacles. First, we consider the DOFs in $D_1$, where $n_1$ is the number of DOFs in $D_1$. Let $\mathcal{C}_1$ be the subspace of $\mathcal{C}$ spanned by the DOFs in $D_1$. We define the free subspace $\mathcal{C}_1^{free}$ of $\mathcal{C}_1$ by requiring that the configuration of the component $\mathcal{R}_1$ specified by any point in $\mathcal{C}_1^{free}$ has no self-collision or collision with obstacles. Then we partition $\mathcal{C}_1$ into the free subspace $\mathcal{C}_1^{free}$ and the in-collision subspace $\mathcal{C}_1^{obs}$. We further define an expansion operation $E$ that expands the dimension of $\mathcal{C}_1$ from $n_1$ to $n$:

$\widetilde{\mathcal{C}}_1^{free} = E(\mathcal{C}_1^{free}) = \{\, q \in \mathcal{C} \mid \pi_1(q) \in \mathcal{C}_1^{free} \,\},$   (3)

where $\pi_1 : \mathcal{C} \to \mathcal{C}_1$ is the projection that keeps the DOFs in $D_1$.

Similarly, we define

$\widetilde{\mathcal{C}}_1^{obs} = E(\mathcal{C}_1^{obs}) = \{\, q \in \mathcal{C} \mid \pi_1(q) \in \mathcal{C}_1^{obs} \,\}.$   (4)

Obviously, $\widetilde{\mathcal{C}}_1^{free} \cup \widetilde{\mathcal{C}}_1^{obs} = \mathcal{C}$ and $\widetilde{\mathcal{C}}_1^{free} \cap \widetilde{\mathcal{C}}_1^{obs} = \emptyset$.

Now we consider $\mathcal{C}_i$, $i = 2, 3, \ldots, m$, which is the subspace of $\mathcal{C}$ spanned by the DOFs in $D_1 \cup D_2 \cup \cdots \cup D_i$. We denote the number of DOFs in $\mathcal{C}_i$ as $n_i$. Any point in $\mathcal{C}_i$ specifies the configuration of the component $\mathcal{R}_i$. We define the free subspace $\mathcal{C}_i^{free}$ of $\mathcal{C}_i$ by requiring that the configuration of the component $\mathcal{R}_i$ specified by any point in $\mathcal{C}_i^{free}$ has no self-collision or collision with obstacles. Then we partition $\mathcal{C}_i$ into the free subspace $\mathcal{C}_i^{free}$ and the in-collision subspace $\mathcal{C}_i^{obs}$. Let

$\widetilde{\mathcal{C}}_i^{free} = \{\, q \in \mathcal{C} \mid \pi_i(q) \in \mathcal{C}_i^{free} \,\},$   (5)
$\widetilde{\mathcal{C}}_i^{obs} = \{\, q \in \mathcal{C} \mid \pi_i(q) \in \mathcal{C}_i^{obs} \,\},$   (6)

where $\pi_i : \mathcal{C} \to \mathcal{C}_i$ is the projection that keeps the DOFs in $D_1 \cup \cdots \cup D_i$, and $n_i$ is the number of these DOFs.

The decomposed subspaces $\{\widetilde{\mathcal{C}}_i^{free}\}$ and $\{\widetilde{\mathcal{C}}_i^{obs}\}$ have the following two important properties:

$\mathcal{C}_{obs} = \bigcup_{i=1}^{m} \widetilde{\mathcal{C}}_i^{obs},$   (7)
$\mathcal{C}_{free} = \bigcap_{i=1}^{m} \widetilde{\mathcal{C}}_i^{free}.$   (8)

See Figure 3 for an example.

Fig. 3: (a) For easy illustration, we consider a degenerate 3-DOF UR5 robot $\mathcal{R}$ with one cubic obstacle (shown in red). Each of the three DOFs $d_1, d_2, d_3$ ranges over its full joint interval. Each $\mathcal{R}_i$, $i = 1, 2, 3$, is associated with a DOF $d_i$, such that the position and orientation of $\mathcal{R}_i$ are uniquely determined by the DOFs $d_1, \ldots, d_i$. (b) The in-collision subspace $\mathcal{C}_1^{obs}$ (shown as the green bold line segment) in $\mathcal{C}_1$ spanned by the DOF $d_1$. (c) The expanded in-collision subspace $\widetilde{\mathcal{C}}_1^{obs}$ (shown as the green box) in $\mathcal{C}$ as defined in Eq. (4). (d) The in-collision subspace $\mathcal{C}_2^{obs}$ (shown as the yellow area) in $\mathcal{C}_2$ spanned by the DOFs $\{d_1, d_2\}$. (e) The expanded in-collision subspace $\widetilde{\mathcal{C}}_2^{obs}$ (shown as the yellow volume) in $\mathcal{C}$ as defined in Eq. (6). (f) The in-collision subspace $\mathcal{C}_3^{obs}$ (shown as the blue volume) in $\mathcal{C}_3$ spanned by the DOFs $\{d_1, d_2, d_3\}$. (g) The final in-collision space $\mathcal{C}_{obs}$ (shown as the colored volume) in $\mathcal{C}$ as defined in Eq. (7).
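The per-subspace labels that this decomposition calls for can be computed component by component with prefix forward kinematics. The sketch below illustrates this under our reading of the definitions of $\mathcal{C}_i^{free}$ and $\mathcal{C}_i^{obs}$; `fk_prefix` and `in_collision` are hypothetical placeholders for a kinematics routine and an exact checker such as FCL.

```python
def subspace_labels(q, fk_prefix, in_collision, m=6):
    """Label one configuration q in every subspace C_i (cf. Eqs. 5-6).

    fk_prefix(q, i)  -- hypothetical kinematics routine that poses the
                        components R_1..R_i using only the DOFs d_1..d_i
                        (the ordering property of this section).
    in_collision(b)  -- placeholder exact check (e.g., via FCL) of
                        component R_i against obstacles and earlier links.
    Returns a list whose entry i is True iff q projects into C_i^obs.
    """
    labels = []
    for i in range(1, m + 1):
        bodies = fk_prefix(q, i)                 # geometry of R_1..R_i
        labels.append(in_collision(bodies[-1]))  # collision status of R_i
    return labels
```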

IV Composite Classifier

In this section, we show that the properties in Eqs. (7)-(8) lead to an efficient composite classifier. Our proposed composite classifier works compatibly with previous machine learning methods that train a single binary classifier to distinguish $\mathcal{C}_{free}$ from $\mathcal{C}_{obs}$ (e.g., [17, 19, 8, 2]). Let $f$ be an elementary classifier, which can be any one used in these previous works.

Given a set of samples with ground-truth labels (in-collision or collision-free) in $\mathcal{C}$, we train an elementary classifier $f_i$ for each robotic component $\mathcal{R}_i$ in the subspace $\mathcal{C}_i$, $i = 1, 2, \ldots, m$, i.e.,

$f_i : \mathcal{C}_i \to \{\text{collision-free}, \text{in-collision}\}, \qquad i = 1, 2, \ldots, m.$   (9)

Then our composite classifier $F$ is defined by

$F(q) = \bigwedge_{i=1}^{m} f_i\big(\pi_i(q)\big), \qquad q \in \mathcal{C},$   (10)

where $\wedge$ is the logical AND operation, meaning that $F(q)$ is collision-free if and only if all of its operands are collision-free, i.e., $f_i(\pi_i(q)) = \text{collision-free}$ for $i = 1, 2, \ldots, m$.

We have the following equivalent form of $F$, which can also be derived from the properties in Eqs. (7)-(8):

$\neg F(q) = \bigvee_{i=1}^{m} \neg f_i\big(\pi_i(q)\big),$   (11)

where $\neg$ and $\vee$ are the logical NOT and OR operations.

The advantages of the composite classifier are three-fold:

  • Each elementary classifier $f_i$ in $F$ works in a subspace $\mathcal{C}_i$ of $\mathcal{C}$, in which the boundary between $\mathcal{C}_i^{free}$ and $\mathcal{C}_i^{obs}$ is much simpler than the boundary between $\mathcal{C}_{free}$ and $\mathcal{C}_{obs}$ in $\mathcal{C}$ (see Figures 3b, 3d, 3f, 3g for an example); thus the classification accuracy of each elementary classifier (and of the composite classifier) is much higher than that of a single classifier working directly on $\mathcal{C}$;

  • Except for $f_m$, all elementary classifiers work in low-dimensional subspaces with simple class boundaries, and thus their classification speed is fast;

  • The elementary classifiers act as a hierarchical set of filters that screen out in-collision samples from the lowest to the highest dimensions, i.e., not every in-collision sample needs to be checked by all the filters (see the sketch below). Therefore, the overall classification speed is still faster than that of a single classifier working directly on $\mathcal{C}$.
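A minimal sketch of Eqs. (10)-(11) with this early-exit behavior, assuming scikit-learn-style elementary classifiers and per-subspace projection functions (both our assumptions, not the paper's implementation):

```python
def composite_predict(q, elementary, projections):
    """Composite classifier F of Eq. (10), evaluated lazily (Eq. 11).

    elementary[i]  -- a trained binary classifier f_i on subspace C_i,
                      returning a truthy value for "collision-free".
    projections[i] -- function extracting from q the DOFs spanning C_i.
    The AND over all f_i short-circuits: the first "in collision" vote
    decides F, which makes the low-dimensional classifiers act as filters.
    """
    for f_i, proj in zip(elementary, projections):
        if not f_i.predict([proj(q)])[0]:   # f_i votes "in collision"
            return False                    # q is in C_obs; stop early
    return True                             # all f_i vote "free" => C_free
```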

Our experimental results in Section V show that, on average, the classification accuracy of our composite classifier is 10%-20% higher, and its query is about two times faster, than a single classifier working directly on $\mathcal{C}$.

To take full advantage of the proposed composite classifier $F$, the following implementation details are worth noting.

Effective DOFs in $D$. In different application scenarios, not every DOF of the robot has the same importance. For example, for the UR5 robot shown in Figure 2, if a dexterous hand is attached to the end of component $\mathcal{R}_6$, the rotation of $\mathcal{R}_6$ caused by the 6th DOF may make the dexterous hand collide with obstacles, and thus the 6th DOF must be seriously considered. However, if a cylindrical platform (like the one used for 3D printing in [22]) is attached to the end of $\mathcal{R}_6$, the rotation caused by the 6th DOF only changes the orientation of the cylindrical platform and cannot change its collision status; the 6th DOF is then not effective. In most cases, the effectiveness of each DOF in $D$ is easy to determine from a simple user input. For non-effective DOFs, we neither build the corresponding subspaces nor train the corresponding classifiers.

Online learning. As mentioned in Section I, to apply the proposed composite classifier in single-query planning with the RRT methods [15, 13], it must support online learning, i.e., efficiently updating $F$ as more and more samples are obtained during the RRT sampling process. Online updating $F$ amounts to online updating its elementary classifiers $f_i$. For commonly used elementary classifiers, online learning schemes have been proposed, e.g., for SVM [5], KNN [19], GMM [8, 7] and Gaussian kernel functions [2].
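As a concrete illustration, a KNN elementary classifier can be updated online simply by appending newly exact-checked samples and refitting the neighbor index. The sketch below is our simplification for intuition only, not the locality-sensitive-hashing scheme of [19] or the kernel updates of [5, 2].

```python
from sklearn.neighbors import KNeighborsClassifier

class OnlineKNN:
    """Minimal online-updatable KNN elementary classifier (a sketch;
    [19] uses locality-sensitive hashing for sub-linear queries)."""

    def __init__(self, k=5):
        self.k, self.X, self.y = k, [], []
        self._knn = KNeighborsClassifier(n_neighbors=k)

    def update(self, x, label):
        # Append the newly exact-checked sample and refit; with a KD-tree
        # backend the rebuild stays cheap for moderate sample counts.
        self.X.append(x)
        self.y.append(label)
        if len(self.X) >= self.k:
            self._knn.fit(self.X, self.y)

    def predict(self, X):
        return self._knn.predict(X)
```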

Elementary  Number of  TPR           TNR           Accuracy                    Query time (µs)
classifier  samples    SC     Ours   SC     Ours   SC     Ours   Improvement   SC       Ours    Speed up
SVM         1K         0.734  0.924  0.718  0.845  0.727  0.884  21.6%         36.55    19.85   1.84x
            10K        0.812  0.956  0.800  0.940  0.807  0.949  17.6%         170.07   53.90   3.16x
            100K       0.891  0.991  0.882  0.972  0.887  0.982  10.7%         1090.09  242.18  4.50x
KNN         1K         0.695  0.892  0.655  0.789  0.677  0.838  23.8%         2.63     2.34    1.12x
            10K        0.794  0.952  0.732  0.881  0.765  0.918  20.0%         6.72     5.12    1.31x
            100K       0.850  0.971  0.828  0.952  0.840  0.962  14.5%         21.08    14.15   1.49x
TABLE I: Comparison of single-classifier (SC) methods (SVM [17] and KNN [19]) and our composite-classifier method, using SVM and KNN as the elementary classifier respectively, trained with 1K, 10K and 100K random samples in the configuration space; the working environment consists of a 6-DOF UR5 robot and four cubic obstacles. The averaged true positive rate (TPR), true negative rate (TNR), classification accuracy, relative accuracy improvement ((Ours − SC)/SC), query time (in microseconds, µs) and speed up (SC/Ours) are reported.

V Experiments

Using the 6-DOF UR5 robot as the experimental platform, we implement the proposed configuration-space decomposition method and the composite classifier in MATLAB. To train $F$, we randomly sample the parametric domain of the configuration space with a uniform distribution in a C++ ROS environment, and the label for each sample is provided by an exact collision detector, the Flexible Collision Library (FCL) [16]. All running times reported in this section were recorded on a PC with an Intel i7-8700 CPU (3.20GHz) and 16GB RAM.
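For intuition, the training stage might look like the following sketch, which fits one SVM per subspace $\mathcal{C}_i$ on the leading DOFs; the scikit-learn SVC stand-in, the `dims` convention and the label layout are our assumptions (the paper's implementation is in MATLAB with FCL-generated labels).

```python
import numpy as np
from sklearn.svm import SVC

def train_composite(samples, free_in_subspace, dims):
    """Fit one SVM per subspace C_i on the leading DOFs (a sketch).

    samples:          (N, 6) array of uniformly sampled UR5 configurations.
    free_in_subspace: (N, m) boolean array; column i is True when the
                      sample is collision-free in C_i (labels from an
                      exact detector such as FCL).
    dims:             dims[i] = number of leading DOFs spanning C_i,
                      e.g. [1, 2, 3, 4, 5, 6] for the UR5 of Figure 2.
    """
    classifiers = []
    for i, d in enumerate(dims):
        clf = SVC(kernel="rbf")          # stand-in elementary classifier
        clf.fit(samples[:, :d], free_in_subspace[:, i])
        classifiers.append(clf)
    return classifiers
```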

V-A Classification Accuracy and Time

We choose two representative single-classifier methods, SVM [17] and KNN [19], as baselines to compare with our composite classifier. We denote the composite classifier that uses SVM or KNN as the elementary classifier as $F_{SVM}$ or $F_{KNN}$, respectively. First, we generate a test environment by randomly placing four cubic obstacles around the 6-DOF UR5 robot. Then we randomly sample 1K, 10K and 100K points in the configuration space as training sets, respectively. To test the classification accuracy, another 10K points are randomly sampled in $\mathcal{C}$. The averaged classification accuracy and query time are summarized in Table I, in which the true positive rate (TPR) and the true negative rate (TNR) are also reported. The following four characteristics are observed from these results.

Obstacle   In-collision, first detected in subspace (%)                 Collision-
number     $\mathcal{C}_1$  $\mathcal{C}_2$  $\mathcal{C}_3$  $\mathcal{C}_4$  $\mathcal{C}_5$  $\mathcal{C}_6$   free (%)
1          16.3   14.1   9.1    1.3    7.2    0.1    51.9
2          16.0   17.9   9.3    1.6    6.4    0.2    48.7
4          16.7   19.4   9.5    2.1    5.8    0.2    46.3
8          16.3   27.4   15.0   2.2    5.1    0.1    33.9
TABLE II: In working environments consisting of a 6-DOF UR5 robot and 1, 2, 4, 8 cubic obstacles, the percentage (%) of 10K random samples that lie in the in-collision and collision-free subspaces is reported. For in-collision samples, the percentage (%) whose collision is first detected in each subspace $\mathcal{C}_i$ is also reported.

First, the accuracy of the composite classifier is much higher (by 10%-20%) than that of the single classifier. This is because our composite classifier decomposes the configuration space $\mathcal{C}$ into a set of subspaces $\mathcal{C}_i$, each of which has a simple boundary between $\mathcal{C}_i^{free}$ and $\mathcal{C}_i^{obs}$ and can be classified much more accurately by an elementary classifier $f_i$, compared to using a single classifier to directly separate $\mathcal{C}_{free}$ and $\mathcal{C}_{obs}$ in $\mathcal{C}$.

Second, the query time of the composite classifier is shorter than that of the single classifier. Although the composite classifier may need to evaluate multiple elementary classifiers, each elementary query is faster, and only a few samples need to pass through all the elementary classifiers. To further illustrate the latter property, we report in Table II the percentage of random samples that lie in the in-collision subspace (further decomposed by the subspace $\mathcal{C}_i$ in which the collision is detected) and the collision-free subspace, showing that 40%-60% of the samples are filtered out by the first three elementary classifiers in the subspaces $\mathcal{C}_1$, $\mathcal{C}_2$, $\mathcal{C}_3$. The results in Table II also show that the 6th DOF is not an effective DOF: removing the elementary classifier $f_6$ by merging the 6th DOF into $\mathcal{C}_5$ can further reduce the query time.

Elementary  Num of     Accuracy                          Query time (µs)
classifier  obstacles  SC     Ours   Improvement        SC       Ours    Speed up
SVM         2          0.828  0.927  12.0%              168.75   57.25   2.95x
            4          0.807  0.949  17.6%              170.07   53.90   3.16x
            8          0.768  0.908  18.2%              184.50   86.51   2.13x
KNN         2          0.772  0.907  17.5%              5.12     3.46    1.48x
            4          0.765  0.918  20.0%              6.72     5.12    1.31x
            8          0.733  0.896  22.2%              7.24     4.30    1.68x
TABLE III: Comparison of single-classifier (SC) methods (SVM [17] and KNN [19]) and our composite-classifier method, using SVM and KNN as the elementary classifier respectively, in test environments consisting of a 6-DOF UR5 robot and 2, 4, 8 cubic obstacles. Classification accuracy, relative accuracy improvement ((Ours − SC)/SC), query time (in microseconds, µs) and speed up (SC/Ours) are reported.

Third, the more sampling points, the higher the accuracy of both the composite and single classifiers. However, even with 100K samples, the accuracy of the composite classifier is still about 10% higher than that of the single classifier.

Fourth, the query time increases when more samples are used. This is because the evaluation of the SVM discriminative function involves a weighted combination of kernel functions over the samples, and KNN needs to find the nearest neighbors among all samples. Our composite classifier is faster than the single classifier for both SVM and KNN, but the speedup of $F_{SVM}$ over SVM is much larger than that of $F_{KNN}$ over KNN. This is because we use a KD-tree to speed up the KNN queries, while there is no comparable data structure for SVM.
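The KD-tree point is easy to see in code: once the labeled training configurations are indexed, each KNN query touches roughly O(log N) nodes rather than scanning all N samples. The sketch below uses SciPy's cKDTree with stand-in labels purely for illustration; it is not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Build the index once over the labeled training configurations.
rng = np.random.default_rng(0)
train_q = rng.uniform(-np.pi, np.pi, size=(100_000, 6))
train_free = rng.random(100_000) > 0.5     # stand-in labels, illustration only
tree = cKDTree(train_q)

def knn_free(q, k=5):
    """Majority vote over the k nearest labeled configurations."""
    _, idx = tree.query(q, k=k)            # ~O(log N) per query
    return train_free[idx].mean() > 0.5
```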

Obstacle  Basic RRT      Variants of RRT with learning-based collision checking
number    with FCL       Fastron [2]     SVM [17]        $F_{SVM}$       KNN [19]        $F_{KNN}$
2         (56%, 850.5)   (49%, 524.4)    (38%, 1329.6)   (73%, 1058.1)   (37%, 1330.0)   (51%, 1130.7)
4         (51%, 880.9)   (36%, 562.7)    (35%, 1592.9)   (69%, 1101.9)   (39%, 1866.3)   (46%, 1216.8)
8         (5%, 1000.1)   (0%, n/a)       (2%, 2806.7)    (27%, 2398.8)   (6%, 2090.1)    (14%, 1529.0)
TABLE IV: The success rate (first number in brackets) and average generating time (second number in brackets, in milliseconds) for motion planning over 100 randomly generated pairs, each consisting of two collision-free points (one source and one target) in the configuration space, in working environments consisting of a 6-DOF UR5 robot and 2, 4, 8 cubic obstacles. We compare the basic RRT method with FCL against variants of RRT with learning-based collision checking, including Fastron [2], SVM [17], KNN [19] and our composite classifiers $F_{SVM}$ and $F_{KNN}$.

To further test our proposed composite classifier in diverse environments, we randomly place 2, 4 and 8 cubic obstacles around the 6-DOF UR5 robot. We randomly sample 10K points in the configuration space as the training set and another 10K points for testing. The accuracy and query time are summarized in Table III, from which the same conclusions can be drawn as from Table I.

V-B Motion Planning Efficiency

As mentioned in Section I, learning-based collision checking can improve the efficiency of both single-query and multiple-query sampling-based motion planning. In this section, we use the single-query RRT method as the baseline for comparison.

We use the Open Motion Planning Library (OMPL, http://ompl.kavrakilab.org/) [20], which provides an optimized implementation of the RRT method. The RRT algorithm uses a biased search to quickly explore the large unsearched space; given enough time, RRT eventually builds a random space-filling tree and thus finds a path between the source and target points. For practical usage, we set the parameter MaxTime (i.e., the maximum planning time used by RRT) to 3 seconds. Therefore, the faster the collision detection, the higher the success rate of path planning.
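Through OMPL's Python bindings, the setup described above might look like the following sketch, with the learned composite classifier plugged in as the state validity checker; the joint bounds and the `composite_predict` hook (from the sketch in Section IV) are illustrative assumptions, not the paper's C++ setup.

```python
from ompl import base as ob, geometric as og

space = ob.RealVectorStateSpace(6)          # 6-DOF UR5 configuration space
bounds = ob.RealVectorBounds(6)
bounds.setLow(-3.14)                        # joint limits are illustrative
bounds.setHigh(3.14)
space.setBounds(bounds)

ss = og.SimpleSetup(space)

def is_valid(state):
    q = [state[i] for i in range(6)]
    # Approximate collision check via the composite classifier sketch above.
    return composite_predict(q, elementary, projections)

ss.setStateValidityChecker(ob.StateValidityCheckerFn(is_valid))
ss.setPlanner(og.RRT(ss.getSpaceInformation()))
solved = ss.solve(3.0)                      # MaxTime = 3 seconds
if solved:
    path = ss.getSolutionPath()             # still needs a final exact pass
```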

Fig. 4: Snapshots of a motion sequence of an updated multi-robot plant phenotyping system [23] using the learning-based method with our composite classifier. See accompanying demo video for more details.

We set up the basic RRT method implemented in OMPL for single-query motion planning, using FCL [16] for exact collision detection, which is very time-consuming. For comparison, we replace FCL with learning-based collision detectors: Fastron [2], SVM [17], KNN [19], and our composite classifiers $F_{SVM}$ (using SVM as the elementary classifier) and $F_{KNN}$ (using KNN as the elementary classifier). Note that the prediction of a trained classifier only provides approximate collision checking. Once a path is planned by RRT with approximate collision checking, a final exact collision check by FCL is needed for every sample on the path. For those samples that are in collision, the local repairing operation proposed in the implementation of [2] is invoked. Therefore, the more accurate the classifier, the fewer samples need to be repaired and the more efficient the motion planning with the learning-based method.
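The final validate-and-repair pass might be organized as below; the re-sampling repair here is a simple stand-in for the local repairing operation of [2], and `exact_free` / `resample_near` are hypothetical helpers.

```python
def validate_and_repair(path, exact_free, resample_near, max_tries=20):
    """Exact-check every waypoint of an approximately planned path and
    locally repair failures (a stand-in for the repair step of [2]).

    exact_free(q)     -- exact collision check, e.g. via FCL.
    resample_near(q)  -- proposes a nearby configuration to replace q.
    """
    repaired = []
    for q in path:
        if exact_free(q):
            repaired.append(q)
            continue
        for _ in range(max_tries):          # local repair by re-sampling
            q2 = resample_near(q)
            if exact_free(q2):
                repaired.append(q2)
                break
        else:
            return None                     # repair failed; replan instead
    return repaired
```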

As shown in Section V-A, our proposed composite classifier has much higher accuracy and shorter query time than each single classifier, and thus leads to a more efficient motion planning process. This conclusion is demonstrated by the motion planning results summarized in Table IV. These results are generated in working environments consisting of a 6-DOF UR5 robot and 2, 4, 8 cubic obstacles, as in Section V-A. We randomly generate 100 pairs of collision-free source and target points in the configuration space, and motion planning with the different classifiers is applied to plan a path between each pair. Within the limited maximum planning time (3 seconds), not every pair admits a successful plan; we define the success rate as the ratio $s/100$, where $s$ is the number of pairs with a successful motion plan and 100 is the total number of pairs. We also define the average generating time as the path generation time averaged over the successful plans. The results in Table IV show that (1) the larger the number of obstacles, the lower the success rate; (2) our composite classifier significantly improves the success rate of the single classifiers; (3) RRT with the composite classifier significantly improves the success rate of the basic RRT with FCL; and (4) the average generating time of RRT with the composite classifier is much shorter than that of RRT with each individual learning-based classifier.

V-C Application

We apply the proposed composite classifier in an updated multi-robot plant phenotyping system [23] to improve the efficiency of motion planning. This system was designed to provide fast, precise and noninvasive measurements for robot-assisted high-throughput plant phenotyping. To achieve this goal, the system is equipped with three UR5 robotic arms that move simultaneously in each round of phenotyping data acquisition, using the depth camera (Intel RealSense SR-300) mounted on each arm. The configuration space of this system has a very complicated boundary between $\mathcal{C}_{free}$ and $\mathcal{C}_{obs}$ due to the high possibility of collision (with plant leaves) and self-collision (among robotic arms); see Figure 4 for a motion planning example. To satisfy the requirement of high-throughput plant phenotyping, the motion planning time must be short. Using the state-of-the-art RRT implementation in OMPL, the average motion planning time for each round is about one second, while using the learning-based method with our composite classifier, it is shortened to 0.692 seconds.

VI Conclusions

In this paper, we propose a simple yet effective configuration-space decomposition method based on characteristics inherent in the DOFs of a robot, which leads to an efficient composite classifier with much better classification accuracy than previous single classifiers. Our method works compatibly with previous methods by using them as elementary classifiers. Experimental results on both artificial environments of increasing complexity and the real environment of an updated multi-robot plant phenotyping system [23] demonstrate that our method effectively shortens motion planning time by a large margin.

References

  • [1] S. Dai, S. Schaffert, A. Jasour, A. G. Hofmann, and B. C. Williams (2019) Chance constrained motion planning for high-dimensional robots. In 2019 IEEE International Conference on Robotics and Automation, ICRA 2019, Montreal, Canada, May 20-24, pp. 8805–8811. Cited by: §I.
  • [2] N. Das and M. Yip (2019) Learning-based proxy collision detection for robot motion planning applications. CoRR abs/1902.08164. Cited by: §I, §I, §II, §II, §IV, §IV, §V-B, TABLE IV.
  • [3] I. Garcia, J. D. Martin-Guerrero, E. Soria-Olivas, R. J. Martinez, S. Rueda, and R. Magdalena (2013) A neural network approach for real-time collision detection. In IEEE International Conference on Systems, Man and Cybernetics, Cited by: §II.
  • [4] E. G. Gilbert, D. W. Johnson, and S. S. Keerthi (1988) A fast procedure for computing the distance between complex objects in three-dimensional space. IEEE Journal on Robotics and Automation 4 (2), pp. 193–203. Cited by: §I.
  • [5] T. Glasmachers and C. Igel (2008) Second-order SMO improves SVM online and active learning. Neural Computation 20 (2), pp. 374–382. Cited by: §IV.
  • [6] Y. J. Heo, D. Kim, W. Lee, H. Kim, J. Park, and W. K. Chung (2019) Collision detection for industrial collaborative robots: a deep learning approach. IEEE Robotics and Automation Letters (presented at ICRA 2019) 4 (2), pp. 740–746. Cited by: §II.
  • [7] J. Huh, B. Lee, and D. D. Lee (2017) Adaptive motion planning with high-dimensional mixture models. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, pp. 3740–3747. Cited by: §II, §IV.
  • [8] J. Huh and D. D. Lee (2016) Learning high-dimensional mixture models for fast collision detection in rapidly-exploring random trees. In 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, Stockholm, Sweden, May 16-21, pp. 63–69. Cited by: §I, §II, §IV, §IV.
  • [9] L. Jaillet, J. Cortés, and T. Siméon (2010) Sampling-based path planning on configuration-space costmaps. IEEE Trans. Robotics 26 (4), pp. 635–646. Cited by: §I.
  • [10] S. Karaman and E. Frazzoli (2011) Sampling-based algorithms for optimal motion planning. International Journal of Robotics Research 30 (7), pp. 846–894. Cited by: §I, §I.
  • [11] L. E. Kavraki, P. Svestka, J. Latombe, and M. H. Overmars (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans. Robotics and Automation 12 (4), pp. 566–580. Cited by: §I, §I.
  • [12] D. Kim, Y. Kwon, and S. Yoon (2018) Dancing PRM*: simultaneous planning of sampling and optimization with configuration free space approximation. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, pp. 7071–7078. Cited by: §I.
  • [13] J. J. Kuffner and S. M. LaValle (2000) RRT-connect: an efficient approach to single-query path planning. In 2000 IEEE International Conference on Robotics and Automation, ICRA 2000, San Francisco, CA, USA, April 24-28, pp. 995–1001. Cited by: §I, §I, §IV.
  • [14] J. Latombe (1991) Robot motion planning. Kluwer Academic Publishers, Norwell, MA, USA. Cited by: §I, §I.
  • [15] S. M. LaValle and J. J. Kuffner, Jr. (2000) Rapidly-exploring random trees: progress and prospects. In Algorithmic and Computational Robotics: New Directions, pp. 293–308. Cited by: §I, §I, §IV.
  • [16] J. Pan, S. Chitta, and D. Manocha (2012) FCL: A general purpose library for collision and proximity queries. In IEEE International Conference on Robotics and Automation, ICRA 2012, 14-18 May, 2012, St. Paul, Minnesota, USA, pp. 3859–3866. Cited by: §I, §V-B, §V.
  • [17] J. Pan and D. Manocha (2012) Fast and robust motion planning with noisy data using machine learning. In Workshop on Robot Learning, in Proc. Int. Conf. on Machine Learning, Cited by: §I, §II, TABLE I, §IV, §V-A, §V-B, TABLE III, TABLE IV.
  • [18] J. Pan and D. Manocha (2015) Efficient configuration space construction and optimization for motion planning. Engineering 1 (1), pp. 46–57. Cited by: §I, §II.
  • [19] J. Pan and D. Manocha (2016) Fast probabilistic collision checking for sampling-based motion planning using locality-sensitive hashing. International Journal of Robotics Research 35 (12), pp. 1477–1496. Cited by: §I, §I, §II, §II, TABLE I, §IV, §IV, §V-A, §V-B, TABLE III, TABLE IV.
  • [20] I. A. Sucan, M. Moll, and L. E. Kavraki (2012) The open motion planning library. IEEE Robotics & Automation Magazine 19 (4), pp. 72–82. Cited by: §V-B.
  • [21] K. Tang and Y. Liu (2004) A geometric method for determining intersection relations between a movable convex object and a set of planar polygons. IEEE Trans. Robotics 20 (4), pp. 636–650. Cited by: §I.
  • [22] C. Wu, C. Dai, G. Fang, Y. Liu, and C. C. L. Wang (2017) RoboFDM: A robotic system for support-free fabrication using FDM. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29 - June 3, pp. 1175–1180. Cited by: §IV.
  • [23] C. Wu, R. Zeng, J. Pan, C. C. L. Wang, and Y. Liu (2019) Plant phenotyping by deep-learning based planner for multi-robots. IEEE Robotics and Automation Letters (to be presented at IROS 2019) 4 (4), pp. 3113–3120. Cited by: Fig. 1, §I, Fig. 4, §V-C, §VI.