Evaluating whether a given configuration of a robot manipulator is feasible, such as for motion planning applications, typically involves a collision detector that produces a binary output: in-collision or collision-free. However, it is often far more useful to know the distance to collision in the workspace, as closer proximity suggests the robot is at greater risk of colliding with an obstacle. This distance to collision is useful in defining robot control rules for safe robot motion in potentially unpredictable, partially observed, or nonstationary environments, such as in cases where robots work in close proximity with humans. In addition to collision avoidance for robots, applications requiring an accurate representation of the distance between two objects include contact resolution in simulation (where objects must respond to collisions realistically) and force feedback in haptic rendering (where forces are generated for a human manipulating a virtual or teleoperated tool).
A robot’s distance to collision is typically defined as the minimum distance between all of the robot’s links and all of the workspace obstacles [8, 10]. Assuming that the i-th link of a robot in configuration q is represented as A_i(q) and the j-th workspace obstacle is O_j, we define d(q) as

d(q) = min_{i,j} dist(A_i(q), O_j)    (1)

where dist(·,·) provides a positive scalar distance between two non-intersecting bodies. When the two bodies are in collision, dist(·,·) provides the penetration depth (the minimum amount intersecting bodies must move to be out of collision) as a negative value. Fig. 1(a) shows a 2 degrees-of-freedom (DOF) robot with an obstacle, and Fig. 1(b) shows its corresponding configuration space (or C-space) representation. For each point in the C-space, the workspace distance to collision is shown.
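As an illustrative sketch of the definition above (not the implementation used in this work, which relies on GJK/EPA over polygonal geometry), the minimum over link/obstacle pairs can be computed in closed form for a planar arm whose links are approximated as line segments and whose obstacles are circles; all function names here are hypothetical.

```python
import numpy as np

def fk_points(q, link_lengths):
    """Joint and endpoint positions of a planar serial arm with its base at the origin."""
    pts = [np.zeros(2)]
    angle = 0.0
    for qi, li in zip(q, link_lengths):
        angle += qi
        pts.append(pts[-1] + li * np.array([np.cos(angle), np.sin(angle)]))
    return np.array(pts)

def seg_point_dist(a, b, p):
    """Euclidean distance from point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(a + t * ab - p)

def dist_to_collision(q, link_lengths, obstacles):
    """min over links i and circular obstacles j of dist(A_i(q), O_j);
    negative values indicate penetration into an obstacle."""
    pts = fk_points(q, link_lengths)
    d = np.inf
    for a, b in zip(pts[:-1], pts[1:]):          # treat each link as a segment
        for center, radius in obstacles:
            d = min(d, seg_point_dist(a, b, np.asarray(center)) - radius)
    return d
```

For a straight 2-link arm with unit links and a circular obstacle of radius 0.5 centered at (3, 0), the nearest link endpoint is at (2, 0), so the returned distance is 0.5; moving the obstacle onto the arm yields a negative (penetration) value.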
As with collision detection, distance estimation is a time-consuming operation, with time complexity as high as O(mn) for generalized distance calculations, where m and n are the numbers of vertices in the two bodies. Furthermore, reliable collision avoidance often requires the extensive use of external sensors. The sensors that provide obstacle information, such as cameras or depth sensors, introduce uncertainty into the distance measurements, and thus allow us to acquire only noisy evaluations of distance, which we represent as d̂(q). Accounting for this uncertainty is a necessity in distance estimation, yet few strategies exist that consider these imperfections in distance sensing. As there are many applications that utilize distances, in this work we seek to create a model that efficiently and accurately estimates d(q) while accounting for potential uncertainty in the distance measurements.
In this work, we describe a modeling approach to estimate a robot manipulator’s distance to collision given noisy measurements. More specifically, we employ Gaussian process (GP) regression with the forward kinematics (FK) kernel, a similarity function designed for robot manipulators, to estimate d(q).
We demonstrate that the GP model with the FK kernel has significantly better modeling accuracy than other comparable regression techniques and show that querying the proposed GP is orders of magnitude faster than the standard geometric approach for distance evaluation. We utilize our proposed technique in trajectory optimization applications and realize a 9-times improvement in optimization time compared to a noise-free geometric approach while achieving similar final results. Finally, we show how a hybrid model, which considers the confidence of the model, can switch from GP-based predictions to sensor-based distance estimation in areas of low confidence.
I-B Related Work
The standard approach to calculating distances involves first using sensors to obtain the locations of obstacles, and then approximating the robot geometry and workspace obstacles to compute the distance, which is a considerably expensive process if there are many pairwise checks to perform [10, 22]. Some techniques attempt to reduce this computational effort through approximate methods, such as using values from a depth sensor to approximate a distance or creating a proxy machine learning model from exemplar data. This section describes some geometric, depth perception-based, and machine learning techniques.
I-B1 Geometric Approaches
A standard approach to calculating distance to collision is to approximate the objects and use a geometric distance method between them. For example, Han et al. estimate d(q) by first representing each link of a robot arm as a cylinder, converting point clouds acquired from a depth camera into multiple small cubes, and then using the GJK algorithm to compute the distances. While their technique can accurately represent distances for a robot and obstacles observed with a depth camera, breaking obstacles into numerous small polyhedra may increase the number of pairwise checks that need to be performed, and there is no measure of uncertainty with this approach.
I-B2 Perception-Based Distance Fields
Flacco et al. create a method to estimate distance to collision for robot manipulators based on depth sensors. For any point in the workspace, distance equations are defined using a depth map, an image with a depth value evaluated for each pixel.
The distances are used to create artificial repulsive forces to drive the robot away from obstacles. The authors claim that the distance values computed with their technique are satisfactory for the purpose of collision avoidance but are not to be considered as actual Cartesian distances. This limitation precludes the usage of this technique beyond the purpose of defining repulsive forces.
Furthermore, their distance estimation technique has no measure of sensor or model noise. However, as the primary purpose of their technique is to define repulsive forces away from obstacles, the authors take an average over all obstacles’ repulsive fields to reduce any potential noise originating from the depth sensor.
I-B3 Support Vector Machines
Pan et al. use a support vector machine (SVM) to model the collision region in C-space between two bodies. Binary collision checks are performed on random configurations of one object relative to the other to form a decision boundary. By determining the distance between a query configuration and the decision boundary, their SVM model is used to estimate distance. Since there is no closed-form solution for the distance to the boundary of a nonlinear SVM, sequential quadratic programming, a constrained nonlinear optimization technique, is used to find this distance. As distances in the C-space do not generally equate to distances in the workspace, a different distance metric (the displacement distance metric) is used, which involves determining application-specific weighting terms. Additionally, this approach requires a unique SVM model between each pair of objects, which may make applying this technique to robot manipulators or environments with multiple obstacles more computationally expensive. Finally, this technique assumes the state of each object is perfectly known, so there is no measure of uncertainty in this model.
I-B4 Neural Networks
RelaxedIK estimates self-collision costs by using line segments to represent the links. Distances between the line segments are used to approximate the distances between links, and these distance computations are further approximated using a neural network. Their method successfully estimates these self-collision costs, which enables usage in optimization problems. However, as with many neural network approaches, this technique requires a large amount of labeled data and a significant amount of time to train, precluding this approach from being applied to instances beyond distance to self-collision.
ClearanceNet is a promising neural network-based approach to estimate d(q). The input to this network is the robot configuration and a parameterized representation of the environment. Their approach achieves high accuracy when the network’s output is used to predict collision status. Neural networks, however, are of fixed size and complexity and often require a large amount of data to train; furthermore, since the locations of the workspace obstacles are parameterized, generalizing the implementation to new types of environments may require restructuring this specific neural network implementation.
II-A Kernel Functions
A kernel function k(·,·) is a function that produces a similarity score between two objects, such as n-dimensional robot configurations. Kernel functions are fundamental tools for learning-based algorithms including binary classification, probability density estimation, and dimensionality reduction. They are often employed in nonparametric models, which make no strong assumptions about the structure of the model and whose complexity scales with the data; this property is desirable for modeling distance fields given that many of the environments in which a manipulator is used are nonstationary.
A kernel matrix K, with entries K_ij = k(x_i, x_j), can be defined for a set of points {x_1, ..., x_N}. A positive definite kernel is one that yields a positive definite K for any such set. Positive definiteness is an important property because it guarantees nonsingularity of the kernel matrix, represents an implicit mapping to some higher-dimensional feature space according to Mercer’s theorem, and is a requirement in some machine learning algorithms including Gaussian processes.
There are various positive definite kernel functions, and the one employed in a model should be selected to capture meaningful similarities in the data. Two kernels that we use in this paper are the Gaussian kernel and the forward kinematics kernel, described below.
II-A1 Gaussian Kernel
A common positive definite kernel function in kernel-based models is the Gaussian kernel, k(x, x') = exp(−γ ||x − x'||²), where γ > 0 is a parameter dictating the width of the kernel. Fig. 2(a) shows a topological visualization of the Gaussian kernel for the 2 DOF robot in Fig. 2(c). The Gaussian kernel is often considered the default kernel to use when there is limited prior knowledge of the underlying structure of the data.
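The Gaussian kernel and the positive definiteness of its kernel matrix can be sketched in a few lines; this toy check (names hypothetical) builds a Gram matrix over random 2 DOF configurations and confirms its eigenvalues are positive.

```python
import numpy as np

def gaussian_kernel(x, xp, gamma=1.0):
    """k(x, x') = exp(-gamma * ||x - x'||^2)."""
    diff = np.asarray(x) - np.asarray(xp)
    return np.exp(-gamma * np.dot(diff, diff))

def gram_matrix(points, kernel):
    """Kernel matrix K with K[i, j] = k(x_i, x_j)."""
    n = len(points)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(points[i], points[j])
    return K

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(20, 2))   # random 2-DOF configurations
K = gram_matrix(X, gaussian_kernel)
eigmin = np.linalg.eigvalsh(K).min()           # > 0 for a positive definite kernel
```

Note that k(x, x) = 1 for the Gaussian kernel, so the diagonal of K is all ones, and distinct sample points yield a strictly positive smallest eigenvalue.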
II-A2 Forward Kinematics Kernel
The forward kinematics (FK) kernel was designed to compare robot manipulators. Rather than comparing the joint angles representing the robot configuration directly, the locations of control points along the arm are compared. This avoids measuring distances directly in joint space, where proximal joint displacement has far greater effect than distal joint displacement on the Cartesian shape of the robot. The FK kernel is defined as k_FK(q, q') = exp(−γ Σ_k ||FK_k(q) − FK_k(q')||²), where FK_k(q) is the workspace location of the k-th control point for configuration q and γ is a parameter dictating kernel width. Fig. 2(b) shows a topological visualization of the FK kernel for a 2 DOF robot with control points placed at the end of each link. Significant improvements to classification performance were realized with the FK kernel over other kernels [6, 4], and thus the FK kernel is a worthwhile avenue to explore when creating a model for distance estimation.
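A minimal sketch of the FK kernel for a planar arm, assuming control points at the distal end of each link (all names hypothetical), illustrates the point made above: perturbing a proximal joint moves every control point and therefore lowers the similarity more than perturbing a distal joint by the same amount.

```python
import numpy as np

def control_points(q, link_lengths):
    """Workspace positions of control points at the distal end of each link."""
    pts, angle, pos = [], 0.0, np.zeros(2)
    for qi, li in zip(q, link_lengths):
        angle += qi
        pos = pos + li * np.array([np.cos(angle), np.sin(angle)])
        pts.append(pos.copy())
    return np.array(pts)

def fk_kernel(q1, q2, link_lengths, gamma=1.0):
    """k_FK(q, q') = exp(-gamma * sum_k ||FK_k(q) - FK_k(q')||^2)."""
    p1 = control_points(q1, link_lengths)
    p2 = control_points(q2, link_lengths)
    return np.exp(-gamma * np.sum((p1 - p2) ** 2))

L = [1.0, 1.0]
k_self = fk_kernel([0.0, 0.0], [0.0, 0.0], L)       # identical configurations
k_proximal = fk_kernel([0.5, 0.0], [0.0, 0.0], L)   # perturb joint 1 (proximal)
k_distal = fk_kernel([0.0, 0.5], [0.0, 0.0], L)     # perturb joint 2 (distal)
```

Here k_proximal < k_distal even though both perturbations are 0.5 rad, reflecting the larger Cartesian change caused by the proximal joint.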
Regression models produce a continuous scalar output given a possibly multi-dimensional input. A common perspective of regression analysis is to view the true output as a sum of its input-dependent expected value and some error term, and to use the expected value as a predictor of the output for new queries. In the context of distance evaluation, we can assume that the measured, sensor-based distance is a sum of the true distance and a noise term: d̂(q) = d(q) + ε, where ε ~ N(0, σ_n²) and σ_n² is the measurement noise variance.
Choosing nonparametric models such as kernel regression and Gaussian processes allows distance maps to scale with the complexity of the environment and to account for environments that change over time. This is ideal because many environments a robot must work in are nonstationary and may include other agents.
II-B1 Kernel Regression
Kernel regression (KR) is one example of a nonparametric regression technique, in which a probability distribution is estimated and used to compute an expected value. A training set of N input samples x_i and their corresponding outputs y_i can be used to estimate a probability distribution of the outputs conditioned on the inputs. This distribution may then be used to calculate the expected value for query inputs by taking a weighted sum over all training datapoints. The prediction for a single query point x* is

f(x*) = Σ_{i=1}^{N} k(x*, x_i) y_i / Σ_{j=1}^{N} k(x*, x_j)    (2)

where k(·,·) is some kernel function. KR does not need to learn any weights to estimate the expected value, so a training procedure is not required. KR typically uses a kernel that has equal influence in each direction, such as the Gaussian kernel. The FK kernel is not a function of the distance between its inputs and changes shape depending on where it is evaluated, so it is not well suited for the KR formulation in Eq. 2.
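The Nadaraya-Watson estimate in Eq. 2 can be sketched as follows on a hypothetical 1-D regression problem with a Gaussian kernel; note that no training step is needed, only the weighted average at query time.

```python
import numpy as np

def kr_predict(x_query, X_train, y_train, gamma=1.0):
    """Eq. 2: weighted average of training outputs, with Gaussian-kernel weights."""
    d2 = np.sum((X_train - np.asarray(x_query)) ** 2, axis=1)
    w = np.exp(-gamma * d2)
    return np.dot(w, y_train) / np.sum(w)

# hypothetical 1-D demo: noisy samples of a smooth function
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)
pred = kr_predict([np.pi / 2], X, y, gamma=20.0)   # should be near sin(pi/2) = 1
```

The kernel width (via gamma) controls the bias/variance tradeoff: a wider kernel averages over more points but smooths away local structure.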
II-B2 Gaussian Process Regression
Gaussian processes (GPs) can be used for regression problems. GPs are nonparametric kernel-based models that place a Gaussian distribution over the functions that could potentially explain a set of training data.
A GP is completely defined by a mean and a covariance (or kernel) function. The prior distribution considers the space of all possible functions that can be defined using a prior mean (which is often, but not always, 0) and a positive definite kernel function k(·,·). A posterior distribution is then obtained by conditioning the prior distribution on training data. Assuming that X = {q_1, ..., q_N} is a training dataset with kernel matrix K, y is the vector of corresponding training outputs, and k_* = [k(q_*, q_1), ..., k(q_*, q_N)]ᵀ, the posterior distribution of the regression output for a query q_* is Gaussian with mean and variance

μ(q_*) = k_*ᵀ (K + σ_n² I)⁻¹ y,  σ²(q_*) = k(q_*, q_*) − k_*ᵀ (K + σ_n² I)⁻¹ k_*    (3)

where σ_n² is the measurement noise variance in the measurements of d(q). This noise term can either be selected prior to model fitting if it is known or estimated in advance, or it can be treated as a hyperparameter estimated from the training dataset.
For a single query q_*, the posterior mean in Eq. 3, μ(q_*), can serve as the regression output. A benefit of GPs is that a measure of confidence may be obtained from the variance of the posterior distribution in Eq. 3, σ²(q_*). A confidence interval (CI) is a set of bounds containing the true output with some probability. The bounds of a two-sided CI are μ(q_*) ± z σ(q_*), where z corresponds to a number of standard deviations from the mean. z can be selected to contain a certain proportion of the area under a standard Gaussian distribution and may be determined from Gaussian distribution tables or statistics software. For example, if z = 1.96, the two-sided CI contains 95% of the area, which intuitively means the CI has a 95% probability of containing the true regression output. Selecting larger z values to increase the probability of containing the true output has the tradeoff of looser bounds on the CI.
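The z value for a desired coverage can be obtained programmatically from the inverse Gaussian CDF rather than from tables; this sketch uses Python's standard library (function names are ours).

```python
from statistics import NormalDist

def z_two_sided(coverage):
    """z such that mu +/- z*sigma contains the given probability mass."""
    return NormalDist().inv_cdf(0.5 + coverage / 2)

def z_one_sided(coverage):
    """z such that the true value exceeds mu - z*sigma with the given probability."""
    return NormalDist().inv_cdf(coverage)
```

For example, z_two_sided(0.95) ≈ 1.96 and z_one_sided(0.95) ≈ 1.645, matching the values used throughout this paper.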
The matrix inversion in Eq. 3 may be costly for larger training datasets. This issue has been addressed by reduced-rank approximations or by using subsets of the training dataset, which are handled by many GP software libraries, and robot-specific GP modeling strategies have also been proposed.
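The posterior in Eq. 3 can be written directly in a few lines; this sketch (hypothetical names, a generic Gaussian kernel, and a toy 1-D dataset rather than robot configurations) uses a linear solve in place of an explicit inverse for numerical stability.

```python
import numpy as np

def gauss_kernel(A, B, gamma=1.0):
    """Pairwise Gaussian kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_posterior(Xq, X, y, kernel, noise_var=0.01):
    """Posterior mean and variance of Eq. 3 (zero prior mean)."""
    K = kernel(X, X) + noise_var * np.eye(len(X))   # K + sigma_n^2 I
    Ks = kernel(Xq, X)                              # cross-covariances k_*
    Kss = kernel(Xq, Xq)
    alpha = np.linalg.solve(K, y)                   # (K + sigma_n^2 I)^{-1} y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=100)  # noisy observations
mu, var = gp_posterior(np.array([[0.5]]), X, y, gauss_kernel, noise_var=0.01)
```

The posterior mean mu tracks the underlying function near the training data, and var shrinks where data is dense, which is exactly the confidence signal exploited later by the hybrid model.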
In this section, we define a model that accurately estimates d(q) for a given robot configuration and accounts for potential uncertainty in the estimation.
III-A Gaussian Process Regression Model Fitting
We propose to fit a GP model to a training set of configurations because of a GP’s flexibility in modeling arbitrary datasets and its capability in accounting for uncertainty. We further propose the use of the FK kernel in the GP due to the boost in performance this kernel has proven to provide in binary proxy collision checking applications .
Uniformly randomly sampled robot configurations are used as the training set X. Each configuration in the set is labeled with a noisy distance measurement (calculated using the GJK/EPA [9, 26] geometric methods in our implementation), and the labels are stored in a vector y. To simulate noisy, sensor-based measurements, each label in y is created by taking a sum of the true distance and Gaussian noise: y_i = d(q_i) + ε, where ε ~ N(0, σ_n²).
All methods were written in MATLAB. In the implementation of the FK kernel, a control point is placed at the distal end of each link in the manipulator, following the original FK kernel formulation.
III-B Confidence-Based Hybrid Model
As described in Section II-B2, two-sided CIs may be defined based on the posterior distribution standard deviation. Similarly, a one-sided CI may be defined; e.g., [μ(q_*) − z σ(q_*), ∞) provides a lower bound on the true regression output for configuration q_* with some degree of confidence. In this case, z = 1.645 means that there is a 95% probability that the true regression output is greater than μ(q_*) − 1.645 σ(q_*). A one-sided lower-bounded CI may be used when a bound tighter than in the two-sided case is desired and an upper bound on the true value is not as important as a lower bound.
In Fig. 3, we visualize the levels of confidence that d(q) > 0 for a 2 DOF robot when using either GP (Gaussian) or GP (FK). The dashed black curve is where d(q) = 0. We can see that the confidence regions for GP (FK) closely align with the C-space obstacle contour, while GP (Gaussian) does not match as closely. In fact, GP (Gaussian) has pockets of inappropriate confidence, such as decreased confidence in the corners of the shown C-space or high confidence at the bottom of the C-space obstacle. On the other hand, GP (FK) does not have these issues and has increasing confidence that d(q) is truly greater than 0 for configurations farther away from the C-space obstacle. GP (FK)’s more appropriate assignment of confidence suggests that we can create a hybrid model that switches distance estimation methods depending on the GP’s level of confidence.
Using the lower-bounded CI, we define a hybrid distance estimation technique that uses the GP mean as the predicted distance in areas with a high confidence of being collision-free and a more expensive method elsewhere:

d_hybrid(q) = μ(q) if μ(q) − z σ(q) > 0, and (1/S) Σ_{s=1}^{S} d̂_s(q) otherwise    (4)

where z denotes the desired level of confidence expressed as a number of standard deviations (e.g., z = 1.645 for 95% one-sided confidence), and S is the number of samples to collect from d̂(q) to potentially reduce the effects of sensor noise. The lower bound in Eq. 4 may of course be nonzero if a more conservative model is desired. This hybrid model may be useful when cheaper distance evaluations from the GP suffice far from obstacles but more meticulous methods are required in close proximity to obstacles.
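The switching rule of Eq. 4 can be sketched independently of any particular GP; here the GP mean, GP standard deviation, and noisy sensor are passed in as functions, and the 1-D setup is a hypothetical stand-in for a real distance field.

```python
import numpy as np

def hybrid_distance(q, gp_mean, gp_std, sensor, z=1.645, S=5):
    """Eq. 4: trust the GP mean when the one-sided lower confidence bound
    mu(q) - z*sigma(q) exceeds 0; otherwise average S noisy sensor readings."""
    mu, sigma = gp_mean(q), gp_std(q)
    if mu - z * sigma > 0:
        return mu                                      # confident: cheap GP prediction
    return np.mean([sensor(q) for _ in range(S)])      # fall back to averaged sensing

# hypothetical 1-D field: true distance shrinks as q grows
rng = np.random.default_rng(3)
true_d = lambda q: 2.0 - q
gp_mean = true_d                                  # pretend the GP recovered the field
gp_std = lambda q: 0.1                            # constant posterior std for the demo
noisy_sensor = lambda q: true_d(q) + rng.normal(0, 0.1)

far = hybrid_distance(0.0, gp_mean, gp_std, noisy_sensor)    # GP branch (d = 2)
near = hybrid_distance(2.0, gp_mean, gp_std, noisy_sensor)   # sensor branch (d = 0)
```

Averaging S sensor readings reduces the noise standard deviation by a factor of sqrt(S), which is the motivation for collecting multiple samples in the low-confidence branch.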
We evaluate the performance of our proposed GP-based distance estimation technique against three other approaches: the geometric approach GJK/EPA (ground truth), a GP with a typical Gaussian kernel, and KR.
We begin by comparing the accuracy and timing of each technique before applying these techniques to path optimization. In each experiment, a 7 DOF planar robot manipulator is modeled with unit-length links and a rectangular link profile, and a polygonal workspace obstacle is randomly placed in the manipulator’s reachable workspace. We set a fixed measurement noise variance σ_n² when creating each dataset. To get an idea of the level of this noise, for an obstacle one link length away from the robot (i.e., d(q) = 1), 99% of the measured values lie within roughly ±2.6σ_n of the true value. We assume the scale of the robot and environments is arbitrary, so units for any measurements of length are omitted.
IV-A Accuracy and Query Timing
In this section, we evaluate the GP model’s performance in terms of accuracy and computation time. Metrics that we utilize include mean squared error (MSE) and model query time. MSE is defined as

MSE = (1/|T|) Σ_{q ∈ T} (d(q) − d̃(q))²    (5)

where the test set T has |T| elements, d(q) is the true label for q ∈ T, and d̃(q) is the predicted distance from one of the approximation techniques. Note that while each model is trained on noisy distance evaluations (i.e., d̂(q)), we use noise-free distance evaluations (i.e., d(q)) when computing the MSE.

Because the volumes of the in-collision and collision-free regions in the C-space may not be equal, the overall MSE may misrepresent the MSE within each region. Thus, we also evaluate MSE on the subsets of the test data for which d(q) is either positive (collision-free) or negative (in-collision), which we refer to as true positive MSE (TPMSE) and true negative MSE (TNMSE), respectively:

TPMSE = (1/|T₊|) Σ_{q ∈ T₊} (d(q) − d̃(q))²,  TNMSE = (1/|T₋|) Σ_{q ∈ T₋} (d(q) − d̃(q))²    (6), (7)

where |T₊| and |T₋| denote the sizes of the collision-free and in-collision subsets, respectively.
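The three metrics can be computed together in one pass; this sketch (our own helper, not from the paper) splits the squared errors by the sign of the true distance.

```python
import numpy as np

def split_mse(d_true, d_pred):
    """Overall MSE plus MSE restricted to collision-free (d > 0, TPMSE)
    and in-collision (d < 0, TNMSE) subsets of the test data."""
    d_true, d_pred = np.asarray(d_true), np.asarray(d_pred)
    err2 = (d_true - d_pred) ** 2
    pos, neg = d_true > 0, d_true < 0
    return err2.mean(), err2[pos].mean(), err2[neg].mean()

# tiny example: two collision-free and two in-collision test points
mse, tpmse, tnmse = split_mse([1.0, 2.0, -1.0, -2.0], [1.5, 2.0, -0.5, -2.0])
```

Splitting by sign matters precisely when the two regions have unequal volume: a model that is accurate only in the larger region can still report a low overall MSE.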
Table I shows the performance of the models in terms of MSE and query time for each of the methods. Note that the average distance to collision is 1.7. The MSEs when using the GP method with the FK kernel are significantly lower (i.e., closer to the GJK/EPA ground truth values) than those of the other two approximation methods. The FK kernel gives a large boost in modeling accuracy compared to the Gaussian kernel, as was expected, due to the FK kernel’s strong relation to locations in the workspace. The KR method performs worse than GP (Gaussian) in terms of MSE, likely because all points in the training set have equal importance in the KR method while GPs assign weights to each point. Fig. 4 shows histograms of absolute errors in estimating d(q) for GP (Gaussian), GP (FK), and KR. We can see that GP (FK) has a larger proportion of low errors compared to the other methods.
[Table I: MSE, TPMSE, TNMSE, and query time (s) for each method.]
GP (FK) is about 3.7 times slower to query than GP (Gaussian) because the FK kernel involves an extra step of evaluating the forward kinematics of the robot and thus takes longer to evaluate than the Gaussian kernel. Also, while the KR method does not require any training time, its query times are longer because the entire training set is used during querying, while only a subset of the training set is needed for the GP methods. The GP and KR methods used the same training dataset, which took approximately 229 ms to create. The training time for the GP (Gaussian) method is about 2.2 seconds, approximately 1 second faster than when using the FK kernel.
While GP (FK) is not as fast to train and query as GP (Gaussian), its MSE is significantly lower. GP (FK) has better modeling performance most likely because the FK kernel has a shape that more appropriately fits the structure of the data, as can be seen by comparing the kernel visualizations with the C-space distance field. Due to the greater accuracy of the GP (FK) model over GP (Gaussian) and KR, and the fact that its query time is still 70 times faster than the ground truth method, GP (FK) is a strong candidate for accurately and efficiently modeling d(q).
IV-B Path Optimization with Clearance Constraint
In this section, we utilize the distance estimation techniques described above in a path optimization application. One purpose of path optimization is to minimize a cost, such as path length, while satisfying constraints, such as keeping a certain distance from obstacles, given a motion plan generated by some other method. In our experiment, we structure our optimization problem similarly to the trajectory optimization formulation in TrajOpt. More specifically, the cost is defined as a sum of distances between waypoints under an inequality constraint on the distance to collision.
In our implementation, we minimize the manipulator’s end effector path length rather than the C-space path length in order to avoid trivial solutions such as rotating the manipulator the other way around an obstacle. In this section, we represent the Cartesian position of the end effector in configuration q as x(q). Our optimization problem to find a sequence of configurations Q = (q_1, ..., q_T) is thus

minimize_Q  len(Q) = Σ_{t=1}^{T−1} ||x(q_{t+1}) − x(q_t)||
subject to  q_1 = q_start,  q_T = q_goal,  ||q_{t+1} − q_t|| ≤ δ,  d(q_t) ≥ d_safe ∀t    (8)

where the minimizer Q* is the optimized motion plan, len(Q) is the distance traveled by the end effector when executing motion plan Q, q_start and q_goal are desired start and goal configurations, the bound δ prevents large joint changes between waypoints, and d_safe is a desired amount of clearance to maintain from a workspace obstacle. d(q) in the constraint is replaced by each model’s estimate.
[Table II: optimization time (s), optimized path length, and worst-case clearance constraint satisfaction for each method.]
The constrained optimization problem solver in MATLAB is used to solve Eq. 8. q_start and q_goal are randomized on either side of the obstacle to require obstacle avoidance in the motion plan. The initial motion plan is acquired using a Rapidly-Exploring Random Tree motion planner, which is then used as the initial seed for the optimization. d_safe is set to 0.2 in these experiments.
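A toy analogue of Eq. 8 (not the paper's MATLAB implementation) can be sketched with SciPy's constrained solver: 2-D waypoints between fixed endpoints are optimized for path length subject to a clearance constraint around a single circular obstacle. All names and geometry here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

obstacle = np.array([0.5, 0.0])               # circular obstacle center
radius, d_safe = 0.2, 0.2                     # obstacle radius, desired clearance
start, goal, T = np.array([-1.0, 0.0]), np.array([2.0, 0.0]), 6

def path_length(w):
    """Objective: total length of the polyline start -> waypoints -> goal."""
    pts = np.vstack([start, w.reshape(-1, 2), goal])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def clearance(w):
    """Inequality constraint d(q_t) - d_safe >= 0 at every free waypoint."""
    pts = w.reshape(-1, 2)
    return np.linalg.norm(pts - obstacle, axis=1) - radius - d_safe

rng = np.random.default_rng(4)
w0 = np.linspace(start, goal, T)[1:-1].ravel()     # straight-line seed (infeasible)
w0 = w0 + 0.01 * rng.standard_normal(w0.shape)     # perturb to break symmetry
res = minimize(path_length, w0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
```

In the paper the decision variables are joint configurations and d(q) comes from each distance model; the structure of the solve (length objective, clearance inequality, seeded by an initial plan) is the same.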
Table II shows optimization times, optimized path lengths, and the worst-case satisfaction of the clearance constraint, d(q) − d_safe. Note that this value is positive if the constraint is satisfied. While optimization with GP (Gaussian) and KR is faster than with GP (FK), the optimized result is often either in collision or has a long end effector path length. All learning-based methods are significantly faster than GJK/EPA, but using GP (FK) provides the safest results.
Fig. 5 shows examples of the optimized paths using the various methods to approximate d(q) in Eq. 8, including a case where the noisy distance measurements are used directly. The original unoptimized end effector path is shown as the dotted curve, while the optimized paths are the thicker, colored curves. The configuration along each optimized path with the lowest distance to collision (according to GJK) is displayed, and the amount of desired clearance is represented as the dotted box around the workspace obstacle. The optimized trajectories when using GP (FK) are fairly similar to those when using GJK/EPA, while the other methods yield less optimal paths, such as ones with collisions or longer path lengths.
IV-C Path Optimization with Clearance Maximization
In this section, we once again apply the distance estimation techniques to path optimization. However, rather than utilizing the distance to collision as a constraint, we include distance in the objective function to increase a trajectory’s clearance. We weight each segment of the path length objective used in Eq. 8 with an exponentiated distance, which changes the optimization problem to

minimize_Q  Σ_{t=1}^{T−1} w(q_t) ||x(q_{t+1}) − x(q_t)||  subject to  q_1 = q_start,  q_T = q_goal,  ||q_{t+1} − q_t|| ≤ δ    (9)

where w(q) = exp(−d(q)). d(q) in w(q) is replaced by each distance method’s estimate. The exponentiation makes w(q) large for trajectories causing egregious penetration into an obstacle and small for trajectories with larger clearances.
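The effect of the exponentiated weighting can be seen in a small sketch (hypothetical geometry, not the paper's setup): a path that penetrates an obstacle accumulates a large weighted cost, while a longer path that clears the obstacle is cheaper under the weighted objective.

```python
import numpy as np

obstacle, radius = np.array([0.5, 0.0]), 0.5
dist_fn = lambda p: np.linalg.norm(p - obstacle) - radius   # signed distance to circle

def weighted_length(pts):
    """Sum over segments of exp(-d(q_t)) * segment length (cf. Eq. 9)."""
    cost = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        cost += np.exp(-dist_fn(a)) * np.linalg.norm(b - a)
    return cost

through = np.array([[-1.0, 0.0], [0.5, 0.0], [2.0, 0.0]])   # penetrates the obstacle
around = np.array([[-1.0, 0.0], [0.5, 1.5], [2.0, 0.0]])    # detours above it

cost_through = weighted_length(through)   # large: exp(-d) > 1 where d < 0
cost_around = weighted_length(around)     # smaller despite the longer path
```

Because exp(−d) exceeds 1 exactly where d < 0 and decays toward 0 for large clearance, the optimizer is pushed away from obstacles without a hard constraint.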
[Table III: optimization time (s), optimized path length, and minimum distance to collision for each method.]
Table III shows average optimization times, path lengths, and minimum distances to collision for each method. GP (FK) achieves a higher amount of clearance than the GP (Gaussian), KR, and noisy GJK/EPA methods, and in fact achieves results closer to the noise-free geometric method in terms of both clearance and path length despite being trained on noisy measurements. The noisy version of GJK/EPA yielded suboptimal solutions with long end effector path lengths and smaller clearances than GP (FK). GP (Gaussian) and KR have shorter optimization times because the optimization solver often quickly converges to an invalid solution (as can be seen from the lower minimum distances to collision). The paths in Fig. 6 exemplify these results.
IV-D Confidence-Based Hybrid Model
We apply our proposed hybrid model to a scenario involving a narrow passage, where the manipulator must reach between two nearby obstacles. Inaccuracy in the distance estimation model may inadvertently lead to collision between these obstacles, so the hybrid model’s predictions can potentially handle this scenario by switching to the geometric method if the GP model is not confident.
Fig. 7(a) shows an example optimized trajectory achieved with the hybrid model. The plotted robot configuration denotes the transition point where the hybrid model switches from GP-based predictions to noisy, sensor-based distance evaluations d̂(q). We used multiple noisy evaluations of d̂(q) per configuration when the hybrid model did not use a GP-based prediction. The colors of the path denote the confidence level of the GP that the true distance to collision is greater than 0, which we determine using the Gaussian cumulative distribution function in MATLAB. We can observe that the GP model is fairly confident when far from the obstacles, but the confidence level drops rapidly when the manipulator needs to move into the narrow passage.
Fig. 7(b) shows the GP-based distance estimates μ(q), the hybrid estimates d_hybrid(q), and the true distance to collision d(q). The shaded region represents the lower-bounded CI for the GP model. We can see that μ(q) deviates from d(q) as the distance decreases, but the hybrid model manages to produce accurate distance estimates by switching to the noisy distance measurements when the GP loses confidence.
In this paper, we propose a method to create a model for evaluating a robot manipulator’s distance to collision. Our approach utilizes GP regression and the FK kernel, a similarity function designed to compare configurations of a robot manipulator. Noisy distance measurements are used to train the models.
Querying the GP model with the FK kernel produced distance estimates up to 70 times faster than the geometric method. The accuracy of this GP model significantly surpassed that of other regression techniques, with up to 13 times lower MSE, even when trained on noisy distance values. The GP model with the FK kernel achieved optimized paths 9 times faster than, but of similar quality to, those achieved by a noise-free geometric method.
We also introduce a confidence-based hybrid model that uses the GP mean as the predicted distance in areas of high confidence and an average over sensor-based distances in areas of lower confidence. We demonstrate the capability of this technique in a narrow passage environment, where the hybrid model switches to the sensor-based method as the manipulator gets closer to the obstacles.
Using GP regression with the FK kernel is thus a promising approach for creating a model for distance evaluation, even when working with noisy measurements. More efficient sampling, online training and updating, and parallelization are left for future work.
- (1988) Calculating confidence intervals for regression. British Medical Journal 296, pp. 1238–1242.
- (1992) An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician 46 (3), pp. 175–185.
- (2013) Developments in Kernel Design. In European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning.
- (2017) Fastron: An Online Learning-Based Model and Active Learning Strategy for Proxy Collision Detection. In Proceedings of the 1st Annual Conference on Robot Learning 78, pp. 496–504.
- (2020) Forward Kinematics Kernel for Improved Proxy Collision Checking. IEEE Robotics and Automation Letters 5 (2), pp. 2349–2356.
- (2020) Learning-Based Proxy Collision Detection for Robot Motion Planning Applications. IEEE Transactions on Robotics, pp. 1–19.
- (2013) Regression Models, Methods and Applications. Springer.
- (2012) A Depth Space Approach to Human-Robot Collision Avoidance. In IEEE International Conference on Robotics and Automation.
- (1988) A Fast Procedure for Computing the Distance Between Complex Objects in Three-Dimensional Space. IEEE Journal on Robotics and Automation 4 (2), pp. 193–203.
- (2019) A Configuration-Space Decomposition Scheme for Learning-based Collision Checking. arXiv preprint arXiv:1911.08581.
- (2008) Kernel methods in machine learning. Annals of Statistics 36 (3), pp. 1171–1220.
- (2019) Neural Collision Clearance Estimator for Fast Robot Motion Planning. arXiv preprint arXiv:1910.05917.
- (1990) Minimum Distance Estimation and Prediction Under Uncertainty for On-line Robotic Motion Planning. IFAC Proceedings Volumes 23 (8), pp. 93–98.
- (1998) Rapidly-Exploring Random Trees: A New Tool for Path Planning. Technical Report TR 98-11, Computer Science Department, Iowa State University.
- (2003) Collision and Proximity Queries. In Handbook of Discrete and Computational Geometry, pp. 1029–1056.
- (2006) Mercer’s Theorem, Feature Maps, and Smoothing. In International Conference on Computational Learning Theory, pp. 154–168.
- (1964) On Estimating Regression. Theory of Probability and Its Applications 9 (1), pp. 157–159.
- (2012) FCL: A general purpose library for collision and proximity queries. In IEEE International Conference on Robotics and Automation, pp. 3859–3866.
- (2013) Efficient Penetration Depth Approximation using Active Learning. ACM Transactions on Graphics 32 (6).
- (2018) RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion. In Robotics: Science and Systems.
- (2006) Gaussian Processes for Machine Learning. The MIT Press.
- (2017) Minimum Distance Calculation for Safe Human Robot Interaction. Procedia Manufacturing 11, pp. 99–106.
- (2009) Integration of Active and Passive Compliance Control for Safe Human-Robot Coexistence. In IEEE International Conference on Robotics and Automation.
- (2013) Finding Locally Optimal, Collision-Free Trajectories with Sequential Convex Optimization. In Robotics: Science and Systems.
- (1998) The connection between regularization operators and support vector kernels. Neural Networks 11 (4), pp. 637–649.
- (2001) Proximity queries and penetration depth computation on 3D game objects. In Game Developers Conference.
- (1964) Smooth Regression Analysis. Sankhyā: The Indian Journal of Statistics 26 (4), pp. 359–372.
- (2020) SOLAR-GP: Sparse Online Locally Adaptive Regression Using Gaussian Processes for Bayesian Robot Model Learning and Control. IEEE Robotics and Automation Letters 5 (2), pp. 2832–2839.