I. Introduction
Detecting known, rigid objects in RGBD images relies on being able to align 3D object models with an observed scene. If alignments are inconsistent or inaccurate, detection rates will suffer. In noisy and cluttered scenes (such as the one shown in figure 1), good alignments must rely on multiple cues, such as 3D point positions, surface normals, curvature directions, edges, and image features. Yet there is no existing alignment method (other than brute-force optimization) that can fuse all of this information together in a meaningful way.
The Bingham distribution (see the appendix for a brief overview) has recently been shown to be useful for fusing orientation information for 3D object detection [6]. In this paper, we derive a surprising result connecting the Bingham distribution to the classical least-squares alignment problem, which allows us to fuse both position and orientation information in a principled, Bayesian alignment system which we call Bingham Procrustean Alignment (BPA).
I-A. Background
Rigid alignment of two 3D point sets X and Y is a well-studied problem—one seeks an optimal (quaternion) rotation q and translation vector t to minimize an alignment cost function, such as the sum of squared errors between corresponding points on X and Y.
Given known correspondences, q and t can be found in closed form with Horn’s method [8]. If correspondences are unknown, the alignment cost function can be specified in terms of sum-of-squared distances between nearest-neighbor points on X and Y, and iterative algorithms like ICP (Iterative Closest Point) are guaranteed to reach a local minimum of the cost function [4]. During each iteration of ICP, Horn’s method is used to solve for an optimal q and t given a current set of correspondences, and then the correspondences are updated using nearest neighbors given the new pose.

ICP can be slow, because it needs to find dense correspondences between the two point sets at each iteration. Subsampling the point sets can improve speed, but only at the cost of accuracy when the data is noisy. Another drawback is its sensitivity to outliers—for example, when it is applied to a cluttered scene with segmentation error.
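As a concrete sketch, Horn's closed-form solution for known correspondences can be written in a few lines. This is the standard construction (the optimal quaternion is the top eigenvector of a 4x4 matrix built from the cross-covariance of the centered point sets), not code from the paper:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion q = (w, x, y, z), scalar-first."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def horn_align(X, Y):
    """Closed-form least-squares alignment (Horn 1987): find q, t minimizing
    sum_i ||y_i - R(q) x_i - t||^2 over corresponding rows of X and Y."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    S = (X - cx).T @ (Y - cy)          # 3x3 cross-covariance of centered sets
    (Sxx, Sxy, Sxz), (Syx, Syy, Syz), (Szx, Szy, Szz) = S
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    vals, vecs = np.linalg.eigh(N)
    q = vecs[:, -1]                    # eigenvector of the largest eigenvalue
    R = quat_to_rot(q)
    t = cy - R @ cx
    return q, R, t
```

The optimal translation falls out of the centroids, which is exactly the decoupling of t from q discussed below.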
Particularly because of the clutter problem, many modern approaches to alignment use sparse point sets, where one only uses points computed at especially unique keypoints in the scene. These keypoint features can be computed from either 2D (image) or 3D (geometry) information, and often include not only positions, but also orientations derived from image gradients, surface normals, principal curvatures, etc. However, these orientations are typically only used in the feature matching and pose clustering stages, and are ignored during the alignment step.
Another limitation is that the resulting alignments are often based on just a few features, with noisy position measurements, and yet there is very little work on estimating confidence intervals on the resulting alignments. This is especially difficult when the features have different noise models—for example, a feature found on a flat surface will have a good estimate of its surface normal, but a high variance principal curvature direction, while a feature on an object edge may have a noisy normal, but precise principal curvature. Ideally, we would like to have a posterior distribution over the space of possible alignments, given the data, and we would like that distribution to include information from feature positions and orientation measurements, given varying noise models.
As we will see in the next section, a full joint distribution on q and t is difficult to obtain. However, in the original least-squares formulation, it is possible to solve for the optimal translation independently of q, simply by taking t to be the translation which aligns the centroids of X and Y. Given a fixed t, solving for the optimal rotation then becomes tractable. In a Bayesian analysis of the least-squares alignment problem, we seek a full distribution on q given t, p(q | t, X, Y), not just the optimal value. That way we can assess the confidence of our orientation estimates, and fuse them with other sources of orientation information, such as surface normals.

Remarkably, given the common assumption of independent, isotropic Gaussian noise on position measurements (which is implicit in the classical least-squares formulation), we can show that p(q | t, X, Y) is a Bingham distribution. This result makes it easy to combine the least-squares distribution on q with other Bingham distributions from feature orientations (or prior distributions), since the Bingham is a common distribution for encoding uncertainty on 3D rotations represented as unit quaternions [5, 6, 2].
The mode of the least-squares Bingham distribution on q will be exactly the same as the optimal orientation from Horn’s method. When other sources of orientation information are available, they may bias the distribution away from this mode. Thus, it is important that the concentration (inverse variance) parameters of the Bingham distributions are accurately estimated for each source of orientation information, so that this bias is proportional to confidence in the measurements. (See the appendix for an example.)
We use our new alignment method, BPA, to build an object detection system for known, rigid objects in cluttered RGBD images. Our system combines information from surface and edge feature correspondences to improve object alignments in cluttered scenes (as shown in figure 1), and achieves state-of-the-art recognition performance on both an existing Kinect data set [1] and a new data set containing far more clutter and pose variability than any existing data set. (Most existing data sets for 3D cluttered object detection have very limited object pose variability—most of the objects are upright—and objects are often easily separable and supported by the same flat surface.)
II. Bingham Procrustean Alignment
Given two 3D point sets X and Y in one-to-one correspondence, we seek a distribution over the set of rigid transformations of X, parameterized by a (quaternion) rotation q and a translation vector t. Assuming independent Gaussian noise models on deviations between corresponding points on Y and (transformed) X, the conditional distribution over alignments is

(1)  p(q, t | X, Y) ∝ p(Y | X, q, t) p(q, t)

(2)  p(Y | X, q, t) = ∏_i N( y_i ; R(q) x_i + t, Σ_i )

given that R(q) is q’s rotation matrix, and the Σ_i are the noise covariances.
Given isotropic noise models on point deviations (this is the implicit assumption in the least-squares formulation), so that each Σ_i = σ_i² I is a scalar times the identity matrix, each factor of equation 2 reduces to a 1D Gaussian PDF on the distance between y_i and R(q) x_i + t, yielding

N( y_i ; R(q) x_i + t, σ_i² I ) ∝ exp( −d_i² / (2σ_i²) ),

where d_i = ||y_i − R(q) x_i − t|| depends on q and t.
Now consider the triangle formed by the origin (the center of rotation), y − t, and R(q) x, as shown on the left of figure 3. By the law of cosines, the squared distance between y − t and R(q) x is d² = a² + b² − 2ab cos θ, where a = ||y − t|| and b = ||x||, so d only depends on q via the angle θ between the vectors y − t and R(q) x. (We drop the subscripts on x, y, σ, and d for brevity.) We can thus replace each Gaussian factor with

(3)  p(θ) ∝ exp( (ab/σ²) cos θ ),

which has the form of a von Mises distribution on θ.
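The reduction above is just the law of cosines; a quick numeric check (with an arbitrary orthogonal matrix standing in for R(q)) confirms that the squared distance depends on the rotation only through cos θ:

```python
import numpy as np

rng = np.random.default_rng(0)
# random orthogonal matrix as a stand-in for R(q); it preserves norms
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
x, y, t = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

a = np.linalg.norm(y - t)            # side from origin to y - t
b = np.linalg.norm(x)                # ||R x|| = ||x|| since R is orthogonal
v = R @ x
cos_theta = (y - t) @ v / (a * b)    # angle between (y - t) and R x

d2_direct = np.linalg.norm((y - t) - v) ** 2
d2_cosine = a**2 + b**2 - 2 * a * b * cos_theta
```

The two quantities agree to machine precision, so the per-point likelihood is a function of θ alone once t is fixed.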
Next, we need to demonstrate how θ depends on q. Without loss of generality, assume that y − t points along the z-axis, ẑ. When this is not the case, the Bingham distribution over q which we derive below can be post-rotated by any quaternion q_y which takes ẑ to (y − t)/||y − t||.
Clearly, there will be a family of q’s which rotate x to form an angle of θ with y − t, since we can compose q with any rotation about y − t and the resulting angle will still be θ. To demonstrate what this family is, we first let q_x be a unit quaternion which rotates x onto the x-axis, x̂, and let s = q ∘ q_x⁻¹, where R_s is s’s rotation matrix. Because q_x⁻¹ takes x̂ back to the direction of x, the first column of R_s, r₁ = R_s x̂, will point in the direction of R(q) x, and form an angle of θ with y − t, as shown on the right side of figure 3. Thus, cos θ = ẑ^T r₁.
The rotation matrix of the quaternion s = (s₁, s₂, s₃, s₄) is

R_s = [ s₁²+s₂²−s₃²−s₄²   2(s₂s₃−s₁s₄)      2(s₂s₄+s₁s₃)
        2(s₂s₃+s₁s₄)      s₁²−s₂²+s₃²−s₄²   2(s₃s₄−s₁s₂)
        2(s₂s₄−s₁s₃)      2(s₃s₄+s₁s₂)      s₁²−s₂²−s₃²+s₄² ].

Therefore, cos θ = ẑ^T r₁ = 2(s₂s₄ − s₁s₃).
We can now make the following claim about p(q | t, x, y):
Claim 1.
Given that y − t lies along the ẑ axis, the probability density p(q | t, x, y) is proportional to a Bingham density on q with parameters

Λ = ( −2ab/σ², −2ab/σ², 0 )
V = ( v₁, v₂, v₃ ) ∘ q_x,

where v₁ = (1, 0, 1, 0)^T/√2, v₂ = (0, 1, 0, −1)^T/√2, v₃ = (0, 1, 0, 1)^T/√2, and “∘” indicates columnwise quaternion multiplication.
Proof.
The Bingham density in claim 1 is given by

(4)  f(q ; Λ, V) ∝ exp( λ₁ ((v₁ ∘ q_x)^T q)² + λ₂ ((v₂ ∘ q_x)^T q)² )

(5)  = exp( −(2ab/σ²) ( (v₁^T s)² + (v₂^T s)² ) )

(6)  = exp( (ab/σ²) (cos θ − 1) ),

since (v_i ∘ q_x)^T q = v_i^T s, and (v₁^T s)² + (v₂^T s)² = (1 − cos θ)/2. Since (6) is proportional to (3), we conclude that p(q | t, x, y) ∝ f(q ; Λ, V), as claimed.
∎
Claim 2.
Let q_y be a quaternion that rotates ẑ onto the axis of y − t (for arbitrary y and t). Then the probability density p(q | t, x, y) is proportional to a Bingham density on q with parameters

Λ = ( −2ab/σ², −2ab/σ², 0 )
V = q_y ∘ ( v₁, v₂, v₃ ) ∘ q_x,

where “∘” indicates columnwise quaternion multiplication.
As explained above, the distribution on q from claim 1 must simply be post-rotated by q_y when y − t is not aligned with the ẑ axis. The proof is left to the reader. Putting it all together, we find that
(7)  p(q | t, X, Y) ∝ p(q | t) ∏_i f(q ; Λ_i, V_i)

(8)  = p(q | t) · f(q ; Λ*, V*),
where the Λ_i and V_i are taken from claim 2 (one per corresponding point pair), and where Λ* and V* are computed from the formula for multiplication of Bingham PDFs, which is given in the appendix.
Equation 8 tells us that, in order to update a prior on q given t after data points X and Y are observed, one must simply multiply the prior by an appropriate Bingham term. Therefore, assuming a Bingham prior on q given t (a family which includes the uniform distribution), the conditional posterior p(q | t, X, Y) is the PDF of a Bingham distribution.

To derive this fact, we relied only upon the assumption of independent, isotropic Gaussian noise on position measurements, which is exactly the same assumption made implicitly in the least-squares formulation of the optimal alignment problem. This illustrates a deep and hitherto unknown connection between least-squares alignment and the Bingham distribution, and paves the way for the fusion of orientation and position measurements in a wide variety of applications.
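The algebra behind claims 1 and 2 can be spot-checked numerically. The sketch below assumes scalar-first quaternions, x already along x̂ (so q_x is the identity), and y − t along ẑ; under those assumptions it verifies the key identity that κ cos θ equals κ plus the Bingham exponent with λ₁ = λ₂ = −2κ and the direction vectors v₁, v₂ of claim 1:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(4); s /= np.linalg.norm(s)   # random unit quaternion, scalar-first
w, x, y, z = s

# rotation matrix of s, and cos(theta) = z-component of R(s) applied to x-hat
R = np.array([
    [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
    [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
    [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
cos_theta = (R @ np.array([1.0, 0.0, 0.0]))[2]

# Bingham directions from claim 1 (with q_x = identity)
v1 = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
v2 = np.array([0.0, 1.0, 0.0, -1.0]) / np.sqrt(2)

kappa = 3.7   # arbitrary stand-in for a*b/sigma^2
lhs = kappa * cos_theta
rhs = kappa - 2 * kappa * ((v1 @ s) ** 2 + (v2 @ s) ** 2)
```

Since exp(lhs) and exp(rhs) differ only by the constant factor exp(κ), the von Mises density in θ and the Bingham density in q are proportional, as the proof requires.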
II-A. Incorporating Orientation Measurements
Now that we have shown how the orientation information from the least-squares alignment of two point sets X and Y is encoded as a Bingham distribution, it becomes trivial to incorporate independent orientation measurements at some or all of the points, provided that the orientation noise model is Bingham. Given orientation measurements Q,

p(q | t, X, Y, Q) ∝ p(Q | q) · p(q | t, X, Y).
As in equation 8, p(Q | q) is the product of Bingham distributions from corresponding orientation measurements in Q, and so the entire posterior is again Bingham (provided, as before, that the prior is Bingham).
II-B. The Alignment Algorithm
To incorporate our Bayesian model into an iterative, ICP-like alignment algorithm, one could solve for the maximum a posteriori (MAP) position and orientation by maximizing the posterior with respect to q and t. However, for probabilistic completeness, it is often more desirable to draw samples from this posterior distribution.
The joint posterior distribution p(q, t | D)—where D contains all the measurements (X, Y, Q)—can be broken up into p(t | D) p(q | t, D). Unfortunately, writing down a closed-form distribution for p(t | D) is difficult. But sampling from the joint distribution is easy with an importance sampler: first sample t from a proposal distribution—for example, a Gaussian centered on the optimal least-squares translation (that aligns the centroids of X and Y)—then sample q from p(q | t, D), and then weight the samples by the ratio of the true posterior (from equation 2) and the proposal distribution (e.g., Gaussian times Bingham).
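The importance-sampling scheme can be sketched as follows. This is a simplified stand-in rather than the paper's implementation: q is proposed uniformly on S³ instead of from the Bingham conditional, and the noise is isotropic with a single assumed σ:

```python
import numpy as np

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def bpa_importance_samples(X, Y, sigma, t_star, n=2000, rng=None):
    """Simplified importance sampler for p(q, t | X, Y) under isotropic Gaussian
    noise.  Proposal: t ~ N(t_star, sigma^2 I), q uniform on S^3.  Weights are
    (likelihood x flat prior) / proposal; the uniform q-proposal is a constant."""
    rng = rng if rng is not None else np.random.default_rng(0)
    ts = t_star + sigma * rng.standard_normal((n, 3))
    qs = rng.standard_normal((n, 4))
    qs /= np.linalg.norm(qs, axis=1, keepdims=True)
    logw = np.empty(n)
    for i in range(n):
        R = quat_to_rot(qs[i])
        resid = Y - X @ R.T - ts[i]
        loglik = -0.5 * np.sum(resid ** 2) / sigma ** 2
        log_prop_t = -0.5 * np.sum((ts[i] - t_star) ** 2) / sigma ** 2
        logw[i] = loglik - log_prop_t
    w = np.exp(logw - logw.max())
    w /= w.sum()                       # self-normalized importance weights
    return qs, ts, w

# toy example: X shifted by a known translation, identity rotation
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Y = X + np.array([0.2, -0.1, 0.3])
qs, ts, w = bpa_importance_samples(X, Y, sigma=0.05, t_star=Y.mean(0) - X.mean(0))
```

In the full algorithm the q-proposal would be the Bingham conditional itself, which concentrates the samples and keeps the weights well behaved.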
We call this sampling algorithm Bingham Procrustean Alignment (BPA). It takes as input a set of (possibly oriented) features in one-to-one correspondence, and returns samples from the distribution over possible alignments. In section V, we will show how BPA can be incorporated into an iterative alignment algorithm that recomputes feature correspondences at each step and uses BPA to propose a new alignment given the correspondences.
III. Building Noise-Aware 3D Object Models
Our first step in building a system to detect known, rigid objects—such as the ones in figure 4—is to build complete 3D models of each object. However, the end goal of model building is not just to estimate an object’s geometry correctly. Rather, we seek to predict what an RGBD sensor would see from every possible viewing angle of the object. To generate such a predictive model, we estimate both the most likely observations from each viewing angle, and the degree of noise predicted in those measurements. That way, our detection system will realize that depth measurements near object boundaries, on reflective surfaces, or on surfaces at a highly oblique angle with respect to the camera are less reliable than front-on measurements of non-reflective, interior surface points.
In our model-building system, we place each object on a servo-controlled turntable in 2-3 resting positions and collect RGBD images from a stationary Kinect sensor at fixed turntable increments, for a total of 60-90 views (30 per resting position). We then find the turntable plane in the depth images (using RANSAC), and separate object point clouds (on top of the turntable) from the background. Next, we align each set of 30 scans (taken of the object in a single resting position) by optimizing for the 2D position of the turntable’s center of rotation, with respect to an alignment cost function that measures the sum-of-squared nearest-neighbor distances from each object scan to every other scan. We then use another optimization to solve for the 6-DOF translations and rotations that align the 2-3 sets of scans together into one global frame of reference.
After the object scans are aligned, we compute their surface normals, principal curvatures, and FPFH features [10], and we use the ratio of principal curvatures to estimate the (Bingham) uncertainty on the quaternion orientation defined by the normal and principal curvature direction at each point. (The idea is to capture the orientation uncertainty on the principal curvature direction by measuring the “flatness” of the observed surface patch; see the appendix for details.) We then use ray-tracing to build a 3D occupancy grid model, where in addition to the typical probability of occupancy, we also store each 3D grid cell’s mean position and normal, and the variance of the normals in that cell. (In fact, we store two “view buckets” per cell, each containing an occupancy probability, a position, a normal, and a normal variance, since on thin objects like cups and bowls there may be points on two different surfaces which fall in the same grid cell.) We then threshold the occupancy grid at a fixed occupancy probability, and remove interior cells (which cannot be seen from any viewing angle) to obtain a full model point cloud, with associated normals and normal variance estimates. We also compute a distance transform of this model point cloud, by computing the distance from the center of each cell in the occupancy grid to the nearest model point (or zero if the cell contains a model point).
Next, for a fixed set of 66 viewing angles across the viewsphere, we estimate range edges—points on the model where there is a depth discontinuity in the predicted range image seen from that view angle. We also store the minimum distance from each model point to a range edge for each of the 66 viewing angles. Using these viewdependent edge distances, along with the angles between surface normals and viewpoints, we fit sigmoid models across the whole data set to estimate the expected noise on range measurements and normal estimates as functions of (1) edge distance, and (2) surface angle, as shown in figure 5.
IV. Learning Discriminative Feature Models for Detection
Similarly to other recent object detection systems, our system computes a set of feature model placement score functions, in order to evaluate how well a given model placement hypothesis fits the scene according to different features, such as depth measurements, surface normals, edge locations, etc. In our early experiments with object detection using the generative object models in the previous section, the system was prone to make misclassification errors, because some objects scored consistently higher on certain feature scores (presumably due to training set bias). Because of this problem, we trained discriminative, logistic regression models on each of the score components using the turntable scans with true model placements as positive training examples and a combination of correct object / wrong pose and wrong object / aligned pose as negative examples. Alignments of wrong objects were found by running the full object detection system (from the next section) with the wrong object on the turntable scans. By adding an (independent) discriminative layer to each of the feature score types, we were able to boost the classification accuracy of our system considerably.
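A minimal version of this discriminative layer—one logistic regression per score type, fit by gradient descent on the log-loss—might look like the sketch below. The training-set construction is as described above; the scores and labels here are synthetic stand-ins:

```python
import numpy as np

def fit_logistic_1d(scores, labels, lr=0.1, iters=2000):
    """Calibrate one raw feature score into P(correct placement | score) via
    1D logistic regression: p = sigmoid(a * score + b), fit by gradient descent."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        g = p - labels                    # gradient of the log-loss wrt the logit
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

# synthetic example: positives score high, negatives score low
scores = np.array([2.0, 2.5, 3.0, -2.0, -2.5, -3.0])
labels = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
a, b = fit_logistic_1d(scores, labels)
```

Treating each score type independently keeps the calibration simple; the calibrated probabilities are then combined in the placement score.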
V. Detecting Single Objects in Clutter
The first stages of our object detection pipeline are very similar to many other state-of-the-art systems for 3D object detection, with the exception that we rely more heavily on edge information. We are given as input an RGBD image, such as from a Kinect. If environmental information is available, the image may be preprocessed by another routine to crop it to an area of interest and to label background pixels (e.g., belonging to a supporting surface).
As illustrated in figure 6, our algorithm starts by estimating a dense set of surface normals on the 3D point cloud derived from the RGBD image. From these surface normals, it estimates principal curvatures and FPFH features. In addition, it finds and labels three types of edges: range edges, image edges, and curvature edges—points in the RGBD image where there is a depth discontinuity, an image intensity discontinuity (we use the Canny edge detector to find image edges), or high negative curvature. This edge information is converted into an edge image, which is formed from a spatially-blurred, weighted average of the three edge pixel masks. Intuitively, this edge image is intended to capture the relative likelihood that each point in the image is part of an object boundary. Then, the algorithm uses k-means to over-segment the point cloud based on positions, normals, and spatially-blurred colors (in CIELAB space) into a set of 3D superpixels.
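A minimal sketch of the edge-image construction—a weighted average of the three masks followed by a spatial blur—is below; the weights and the box-blur radius are illustrative assumptions, not the paper's values:

```python
import numpy as np

def edge_image(range_edges, image_edges, curv_edges, weights=(0.4, 0.3, 0.3), blur=1):
    """Combine three edge masks into a spatially blurred, weighted edge image."""
    combined = (weights[0] * range_edges
                + weights[1] * image_edges
                + weights[2] * curv_edges)
    # box blur with a (2*blur+1)^2 kernel via padding and shifted sums
    padded = np.pad(combined, blur, mode='edge')
    out = np.zeros_like(combined, dtype=float)
    k = 2 * blur + 1
    h, w = combined.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / k ** 2
```

A Gaussian blur would serve equally well; the point is only that edge evidence is spread spatially so nearby model edge projections still score well.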
Next, the algorithm samples possible oriented feature correspondences from the scene to the model. (We currently use only FPFH correspondences in the first stage of detection, as we did not find the addition of other feature types, such as SIFT [9] or SHOT [12], to make any difference in our detection rates.) Then, for each correspondence, a candidate object pose is sampled using BPA. Given a set of sampled model poses from single correspondences, we then reject samples for which more than a fixed fraction of a subset of randomly-selected model points project into free space—places where the difference between the observed range image depth and the predicted model depth exceeds a threshold. Next, we run a pose clustering stage, where we group correspondences together whose sampled object poses are within fixed translation and rotation thresholds of one another. After pose clustering, we reject any sample with fewer than two correspondences, then resample object poses with BPA.
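The pose clustering stage can be sketched as a greedy grouping under translation and rotation thresholds; the threshold values here are placeholders, not the paper's settings:

```python
import numpy as np

def cluster_poses(quats, trans, x_thresh=0.025, q_thresh=np.pi / 8):
    """Greedily group pose samples whose translations are within x_thresh and
    whose rotations are within q_thresh radians (treating q and -q as equal)."""
    clusters = []   # each entry: (representative quat, representative trans, member indices)
    for i, (q, t) in enumerate(zip(quats, trans)):
        for rep_q, rep_t, members in clusters:
            # rotation angle between unit quaternions, antipodally symmetric
            dq = 2 * np.arccos(min(1.0, abs(np.dot(q, rep_q))))
            if np.linalg.norm(t - rep_t) < x_thresh and dq < q_thresh:
                members.append(i)
                break
        else:
            clusters.append((q, t, [i]))
    return clusters
```

Each cluster's member correspondences are then pooled and re-aligned jointly with BPA, which is what makes two-correspondence clusters useful.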
At this stage, we have a set of possible model placement hypotheses, with at least two feature correspondences each. Because BPA uses additional orientation information, two correspondences are often all it takes to lock down a very precise estimate of an object’s pose when the correspondences are correct (Figure 7).
We proceed with a second round of model placement validation and rejection, this time using a scoring function that includes (1) range and normal differences, which are computed by projecting a new subset of randomly-selected model points into the observed range image, (2) visibility—the ratio of model points in the subset that are unoccluded, (3) edge likelihoods, computed by projecting the model’s edge points from the closest stored viewpoint into the observed edge image, and (4) edge visibility—the ratio of edge points that are unoccluded. Each of the feature score components is computed as a truncated (so as not to over-penalize outliers) average log-likelihood of observed features given model feature distributions. For score components (1) and (3), we weight the average log-likelihood by visibility probabilities, which are equal to 1 if the predicted depth is less than the observed depth, and a fixed constant (the same in all of our experiments) otherwise.
After rejecting low-scoring samples in round 2, we then refine alignments by repeating the following three steps:

1. Assign observed superpixel segments to the model.

2. Align the model to the segments with BPA.

3. Accept the new alignment if the round 2 model placement score has improved.
In step (1), we sample a set of assigned segments according to the probability that each segment belongs to the model, which we compute as the ratio of segment points (sampled uniformly from the segment) that are within fixed position and normal-orientation thresholds of the closest model point. In step (2), we randomly extract a subset of 10 segment points from the set of assigned segments, find nearest-neighbor correspondences from the keypoints to the model using the model distance transform, and then use BPA to align the model to the 10 segment points. Segment points are of two types—surface points and edge points. We only assign segment edge points to model edge points (as predicted from the given viewpoint), and surface points to surface points. Figures 1 and 8 show examples of object alignments found after segment alignment, where red points (with red normal vectors and orange principal curvature vectors sticking out of them) indicate surface point correspondences, and magenta points (with no orientations) are the edge point correspondences. (In future work, we plan to incorporate edge orientations as well.)
After round 2 alignments, the system removes redundant samples (with the same or similar poses), and then rejects low-scoring samples using the scores found at the end of the segment alignment process. Then, it performs a final, gradient-based alignment, which optimizes the model poses with a local hill-climbing search to directly maximize model placement scores. Since this alignment step is by far the slowest, it is critical that the system first performs as much alignment as possible with BPA and rejects as many low-scoring samples as possible, to reduce the computational burden.
Finally, the system performs a third round of model placement evaluation, then sorts the pose samples by score and returns them. This third round of scoring includes several additional feature score components:

- Random walk score—starting from an observed point corresponding to the model, take a random walk in the edge image (to stay within predicted object boundaries), then measure the distance from the new observed point to the closest model point.

- Occlusion edge score—evaluates how well model occlusion edges (where the model surface changes from visible to occluded) fit the observed edge image.

- FPFH score—computes how well observed and model FPFH features match.

- Segment score—computes distances from segment points to the nearest model points.

- Segment affinity score—measures how consistent the set of assigned segments is with respect to predicted object boundaries (as measured by the observed edge image, and by differences in segment positions and normals).
VI. Detecting Multiple Objects in Clutter
To detect multiple objects in a scene, we run the individual object detector from the previous section to obtain the 50 best model placements for each model, along with their individual scores. Then, following Aldoma et al. [1], we use simulated annealing to optimize the subset of model placements (out of the 50 per model) according to a multi-object-placement score, which we compute as a weighted sum of the following score components: (1) the average of single object scores, weighted by the number of observed points each object explains, (2) the ratio of explained to total observed points, and (3) a small penalty for the total number of detected objects. We also keep track of the top multi-object-placement samples found during optimization, so we can return a set of possible scene interpretations to the user (in the spirit of interpretation tree methods [7]). This is particularly useful for robot vision systems because they can use tracking, prior knowledge, or other sensory input (like touch) to provide additional validation of model placements, and we don’t want detections from a single RGBD image to filter out possible model placements prematurely.
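The simulated-annealing search over placement subsets can be sketched as follows. The objective below (scores minus pairwise-overlap penalties minus a per-object penalty) is a simplified stand-in for the paper's multi-object-placement score:

```python
import numpy as np

def anneal_placements(scores, overlaps, iters=5000, T0=1.0, rng=None):
    """Toy simulated annealing over subsets of candidate model placements.
    Maximize: sum of selected scores - pairwise overlap penalties - 0.1 per object."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(scores)
    state = np.zeros(n, dtype=bool)

    def objective(s):
        pen = sum(overlaps[i][j]
                  for i in range(n) for j in range(i + 1, n) if s[i] and s[j])
        return scores[s].sum() - pen - 0.1 * s.sum()

    cur_val = objective(state)
    best, best_val = state.copy(), cur_val
    for it in range(iters):
        T = T0 * (1 - it / iters) + 1e-6          # linear cooling schedule
        j = rng.integers(n)
        state[j] = ~state[j]                      # propose toggling one placement
        val = objective(state)
        if val >= cur_val or rng.random() < np.exp((val - cur_val) / T):
            cur_val = val
            if val > best_val:
                best, best_val = state.copy(), val
        else:
            state[j] = ~state[j]                  # reject: undo the toggle
    return best, best_val

# example: placements 0 and 1 explain the same points (heavy overlap), 2 is weak
scores = np.array([1.0, 1.0, 0.05])
overlaps = np.zeros((3, 3)); overlaps[0, 1] = 2.0
best, best_val = anneal_placements(scores, overlaps)
```

The real objective also weights single-object scores by explained points and tracks the top samples rather than just the best one.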
VII. Experimental Results
We tested our object detection system on two Kinect-based data sets—the Kinect data set from Aldoma et al. [1], containing 35 models and 50 scenes, and a new, more difficult data set with many more occlusions and object pose variations that we collected for this paper, which we will refer to as Clutter, and which contains 18 models and 30 scenes (Figure 9). We used the same parameters (score component weights, number of samples, etc.) on both data sets. In table I, we compare the precision and recall of the top scene interpretations (multi-object-placement samples) of our method against Aldoma et al. on both data sets. (Since neither ours nor the baseline method uses colors in their object models, we considered a model placement “correct” for the Clutter data set if it was within a fixed position and rotation threshold of the correct pose, with respect to the model’s symmetry group. For example, we don’t penalize flipping boxes front-to-back or top-to-bottom, since the resulting difference in object appearance is purely non-geometric. For the Kinect data set, we used the same correctness measure as the baseline method—RMSE between the model in the true pose and in the estimated pose—with a fixed threshold.)

Table I:

            this paper (BPA)    this paper (ICP)    Aldoma et al. [1]
            precision  recall   precision  recall   precision  recall
Kinect      89.4       86.4     71.8       71.0     90.9       79.5
Clutter     83.8       73.3     73.8       63.3     82.9       64.2

Table II:

# samples   1      2      3      5      10     20
recall      73.3   77.5   80.0   80.8   83.3   84.2
Our algorithm (with BPA) achieves state-of-the-art recall performance on both data sets. When multiple scene interpretations are considered, we achieve even higher recall rates (Table II). Our precisions are similar to the baseline method (slightly higher on Clutter, slightly lower on Kinect). We were unable to train discriminative feature models on the Kinect data set, because the original training scans were not provided. Training on scenes that are more similar to the cluttered test scenes is also likely to improve precision on the Clutter data set, since each of our training scans contained only one, fully-visible object.
VII-A. BPA vs. ICP
We evaluated the benefits of our new alignment method, BPA, in two ways. First, we compared it to ICP by replacing the BPA alignment step in round 2 with an ICP alignment step. (Both ICP and BPA used the same point correspondences; the only difference was that BPA incorporated point feature orientations, while ICP used only their positions.) This resulted in a drop in both precision and recall on the Clutter data.
For a second test of BPA, we initialized 50 model placements by adding random Gaussian noise to the ground truth poses for each object in each scene of the Clutter data set. Then, we ran BPA and ICP for 20 iterations on each of the model placements. (In other words, we repeated the alignment step of round 2 twenty times, regardless of whether the total score improved.) We then computed the average of the minimum pose errors in each alignment trial, where the minimum at step t in a given trial is computed as the minimum pose error from step 1 to step t. (The justification for this measure is that it approximates what the “accept if score improves” step of round 2 is doing.) As shown in figure 10, the pose errors decrease much faster with BPA.
VIII. Related Work
Since the release of the Kinect in 2010, much progress has been made on 3D object detection in cluttered RGBD scenes. The two most successful systems to date are Aldoma et al. [1] and Tang et al. [11]. Aldoma’s system is purely geometric, and uses SHOT features [12] for model-scene correspondences. It relies heavily on pose clustering of feature correspondences to suggest model placements. (This is essentially a sparse version of the Hough transform [3], which is limited by the number of visible features on an object, and is why their recall rates tend to be lower than our system’s for objects that are heavily occluded.) The main contribution of Aldoma’s system is that they jointly optimize multiple model placements for consistency, which inspired our own multiple object detection system.
Tang’s detection system uses both geometry and image features, and placed first in the ICRA 2011 Solutions in Perception instance recognition challenge. Their system relies heavily on being able to segment objects in the scene from one another, and most of the effort is spent on combining geometry and image features for classification of scene segments. It is unclear how well the system would perform if such segmentations are not easy to obtain, as is the case in our new Clutter data set.
The Bingham distribution was first used for 3D cluttered object detection in Glover et al. [6]. However, that system was incomplete in that it lacked an alignment step, and it differs greatly from this work because it did not use feature correspondences.
IX. Conclusion and Future Work
We have presented a system for 3D cluttered object detection which uses a new alignment method called Bingham Procrustean Alignment (BPA) to improve detections in highly cluttered scenes, along with a new RGBD data set which contains much more clutter and pose variability than existing data sets. Our system relies heavily on geometry, and would clearly benefit from image and color models, such as those in Tang et al. [11]. Our Clutter data set, while challenging, contains zero ambiguity, in that a human could easily detect all of the objects in their correct poses, given enough time to study the models. An important direction of future work is to handle ambiguous scenes, where the parts of objects that are visible are insufficient to perform unique alignments, and where one instead ought to return distributions over possible model poses. In early experiments we have performed on this problem, the Bingham distribution has been a useful tool for representing orientation ambiguity.
Appendix
The Bingham Distribution. The Bingham distribution is commonly used to represent uncertainty on 3D rotations (in unit quaternion form) [2, 5, 6]. For quaternions, its probability density function (PDF) is given by

(9)  f(q ; Λ, V) = (1/F) exp( Σ_{i=1}^{3} λ_i (v_i^T q)² ),

where F is a normalizing constant so that the distribution integrates to one over the surface of the unit hypersphere S³, the λ_i (collected in Λ) are non-positive (λ_i ≤ 0) concentration parameters, and the v_i (the columns of V) are orthogonal direction vectors.
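Equation 9 is straightforward to evaluate up to the normalization constant F. The sketch below checks two defining properties—antipodal symmetry, f(q) = f(−q), and a maximum at the mode, the direction orthogonal to all the v_i:

```python
import numpy as np

def bingham_logpdf_unnorm(q, Lambda, V):
    """Unnormalized Bingham log-density from equation 9: sum_i lambda_i (v_i^T q)^2,
    for a unit quaternion q, concentrations Lambda (all <= 0), directions V (4x3)."""
    return float(np.sum(Lambda * (V.T @ q) ** 2))

# example: directions from a random orthonormal basis; the 4th vector is the mode
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, mode = Q[:, :3], Q[:, 3]
Lambda = np.array([-10.0, -5.0, -1.0])
```

Since every exponent term is non-positive, the log-density is at most 0, attained exactly at ±mode.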
Product of Bingham PDFs. The product of two Bingham PDFs is given by adding their exponents:

(10)  f₁(q) · f₂(q) ∝ exp( q^T ( V₁ Λ₁ V₁^T + V₂ Λ₂ V₂^T ) q ),

where Λ₁ and Λ₂ are diagonal matrices of concentration parameters.
After computing the sum C = V₁Λ₁V₁^T + V₂Λ₂V₂^T in the exponent of equation 10, we compute the eigenvectors and eigenvalues of C, and then subtract the lowest-magnitude eigenvalue from each spectral component, so that only the three eigenvectors corresponding to the largest-magnitude eigenvalues are kept, and λ_i ≤ 0 (as in equation 9). We use the open-source Bingham Statistics Library (http://code.google.com/p/bingham) to look up the normalization constant.

Estimating the Uncertainty on Feature Orientations. When we extract surface features from depth images, we estimate their 3D orientations from their normals and principal curvature directions by computing the rotation matrix R = (n, p, n × p), whose columns are the normal vector n, the principal curvature vector p, and their cross product n × p. We take the quaternion associated with this rotation matrix to be the feature’s estimated orientation.
These orientation estimates may be incredibly noisy, not only due to typical sensing noise, but because on a flat surface patch the principal curvature direction is undefined and will be chosen completely at random. Therefore it is extremely useful to have an estimate of the uncertainty on each feature orientation that allows for the uncertainty on the normal direction to differ from the uncertainty on the principal curvature direction. Luckily, the Bingham distribution is well suited for this task.
To form such a Bingham distribution, we take the quaternion associated with R to be the mode of the distribution; the mode is orthogonal to all of the distribution’s direction vectors v_i. Then, we set v₁ to be the quaternion associated with (n, −p, −(n × p)), which has the same normal as the mode, but a reversed principal curvature direction. In quaternion form, reversing the principal curvature is equivalent to the mapping q ↦ q ∘ (0, 1, 0, 0), i.e., (q₁, q₂, q₃, q₄) ↦ (−q₂, q₁, q₄, −q₃).
We then take v₂ and v₃ to be unit vectors orthogonal to the mode and to v₁ (and to each other). Given these v’s, the concentration parameters λ₂ and λ₃ penalize deviations in the normal vector, while λ₁ penalizes deviations in the principal curvature direction. Therefore, we set λ₂ = λ₃ to a fixed value (the same in all our experiments in this paper), and we set λ₁ with a heuristic based on r, the ratio of the principal curvature eigenvalues. (The principal curvature direction is computed with an eigendecomposition of the covariance of normal vectors in a neighborhood about the feature.) When the surface is completely flat, the heuristic gives λ₁ = 0, so the resulting Bingham distribution will be completely uniform in the principal curvature direction. When the surface is highly curved, the heuristic gives λ₁ = λ₂, and deviations in the principal curvature are penalized just as much as deviations in the normal.

References
[1] Aitor Aldoma, Federico Tombari, Luigi Di Stefano, and Markus Vincze. A global hypotheses verification method for 3D object recognition. In ECCV 2012, pages 511–524. Springer, 2012.
[2] Matthew E. Antone. Robust camera pose recovery using stochastic geometry. PhD thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
[3] Dana H. Ballard. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13(2):111–122, 1981.
[4] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14:239–256, 1992.
[5] Christopher Bingham. An antipodally symmetric distribution on the sphere. The Annals of Statistics, 2(6):1201–1225, 1974.
[6] Jared Glover, Radu Rusu, and Gary Bradski. Monte Carlo pose estimation with quaternion kernels and the Bingham distribution. In Proceedings of Robotics: Science and Systems, Los Angeles, CA, USA, June 2011.
[7] W. Eric L. Grimson and Tomas Lozano-Perez. Localizing overlapping parts by searching the interpretation tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, (4):469–482, 1987.
[8] Berthold K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. JOSA A, 4(4):629–642, 1987.
[9] David G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
[10] Radu Bogdan Rusu, Nico Blodow, and Michael Beetz. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 12–17, 2009.
[11] Jie Tang, Stephen Miller, Arjun Singh, and Pieter Abbeel. A textured object recognition pipeline for color and depth image data. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 3467–3474. IEEE, 2012.
[12] Federico Tombari, Samuele Salti, and Luigi Di Stefano. Unique signatures of histograms for local surface description. In Computer Vision–ECCV 2010, pages 356–369, 2010.