Histograms of Gaussian normal distribution for feature matching in cluttered scenes

06/19/2017 ∙ by Wei Zhou, et al. ∙ Fraunhofer

3D feature descriptors provide correspondence information between models and scenes. 3D object recognition in cluttered scenes, however, remains a largely unsolved problem. Practical applications impose several challenges which are not fully addressed by existing methods. Especially in cluttered scenes there are many feature mismatches between scenes and models. We therefore propose Histograms of Gaussian Normal Distribution (HGND) for extracting salient features on a local reference frame (LRF) that enables us to solve this problem. We construct an LRF on each local surface patch using the eigenvectors of its scatter matrix. Then the HGND information of each salient point is calculated on the LRF, for which we use both the mesh and point data of the depth image. Experiments on 45 cluttered scenes of the Bologna Dataset and 50 cluttered scenes of the UWA Dataset are conducted to evaluate the robustness and descriptiveness of our HGND. Our experiments demonstrate that HGND obtains a more reliable matching rate than state-of-the-art approaches in cluttered situations.


1 Introduction

Among 3D data processing tasks, 3D object recognition has become one of the most actively researched problems of the last two decades zabulis2016correspondence ; martinek2015Interachtive ; Ahmed2015DTW ; Guo2013Rotational ; Tobari2010Unique ; mian2010repeatability . The main goals of object recognition are to correctly recognize objects in scenes and to accurately estimate their poses mian2006three . However, the depth data collected by scanners is typically affected by noise, varying point density, occlusion, and clutter, so recognizing an object and recovering its pose from recorded scenes remains a challenge in this research area.

Most recognition methods can be divided into several phases: feature point extraction, feature calculation, feature matching, transform pose generation, and hypothesis verification taati2011local . The key problems in 3D object recognition are how to describe a free-form object effectively, how to match the resulting feature descriptors correctly, and how to recognize objects and obtain their poses in the scenes. The feature descriptor is therefore the key to recognizing objects, and its definition directly influences the subsequent phases of the recognition method taati2011local .


Figure 1: (a) Original Chef model and with (b) 0.1mr, (c) 0.2mr, and (d) 0.3mr Gaussian noise. (UWA Dataset.)

The descriptiveness, robustness, and efficiency of a feature descriptor are the three most important issues for feature matching bariya2010scale . Because the feature descriptor influences downstream applications such as feature matching, transform generation, and object recognition, its descriptiveness needs to be sufficiently high to ensure the accuracy of feature matching taati2011local . Furthermore, the feature descriptor should be robust to a series of disturbances, such as noise (see Fig. 1), varying point density, clutter, and occlusion (see Fig. 2) Guo2013Rotational ; boyer2011shrec . In the remainder of the paper we use the wording “cluttered scenes” to describe scenes with such disturbances. In addition, the computational efficiency of the feature descriptor should be high enough to keep the calculation time of the algorithm low.


Figure 2: Cluttered scenes with different point density. (a) Cluttered scenes with normal point density. (b) Cluttered scenes with 1/8 of normal point density. (Bologna Dataset.)

The problem of feature matching in cluttered scenes is much harder than in normal, interference-free scenes. A significant limitation of state-of-the-art methods is that their performance depends on the model being complete, which fails when occlusion and clutter are present in the scene. Another difficulty is the point density of point clouds, as their feature matching requires models and scenes at the same point density. In addition, the existing literature focuses on evaluating descriptors on noiseless data.

The motivation of our proposed technique is to convert the point data and mesh data into a more descriptive and robust local feature representation that decreases the feature mismatches between models and cluttered scenes. This in turn improves the performance of many follow-up applications such as 3D object recognition, 3D reconstruction, and 3D registration.

We propose a novel technique to build Local Reference Frames (LRFs) on 3D keypoints in Section 3 and present our Histograms of Gaussian Normal Distribution (HGND) descriptor on the local surface patches in Section 4. A local surface patch is obtained by only considering the neighboring sphere surface around a 3D keypoint in the range image; it thus consists of point and mesh data. In Section 5 we show the effectiveness of the combination of LRF and HGND.

2 Related work

Depending on the neighborhood support radius, the existing feature descriptors can be divided into two main categories, global feature descriptors and local feature descriptors bariya2010scale ; guo20143d ; zhang2015pose . The first category defines a series of features to describe the entire 3D object, whereas the latter uses local parts of the object.

Since global feature descriptors ignore shape details and require the object to be segmented from the scene, they are not suitable for feature matching in cluttered scenes. Local feature description methods, on the other hand, construct features that describe the local surface patches around feature points. Local features are therefore more robust to occlusion and clutter than global feature descriptors and are suitable for feature matching in cluttered and occluded scenes petrelli2011repeatability .

Several local-feature-based methods have been proposed in the last decades, e.g. mian2006three . These methods can be divided into two categories by whether or not they construct a local reference frame (LRF) before defining the feature descriptor Guo2013Rotational . Feature descriptors without an LRF mostly adopt geometric information of the local surface to make up the feature.

Transforming the geometrical information of the local surface into a histogram without a local reference frame discards most of the spatial information, which has direct negative consequences for the robustness and uniqueness of these methods. Therefore, local feature descriptors with an LRF were proposed; they encode the geometric information of the feature points with respect to the local reference frame.

Tombari et al. Tobari2010Unique introduced the signature of histograms of orientations (SHOT) feature descriptor by computing local histograms incorporating geometric information of points. They proposed an LRF based on the eigenvectors of the covariance matrix of the local neighboring surface of the feature point. By analyzing the importance of the LRF, they also proposed a weighted linear combination for calculating the covariance matrix and a sign disambiguation. This method is invariant to rotation and translation, and robust to noise and clutter aldoma2012point , but sensitive to varying point density Guo2013Rotational .

Guo et al. Guo2013Rotational introduced the rotational projection statistics (ROPS) descriptor by rotationally projecting the neighboring points of the feature point onto three tangent planes and computing statistics of the projected points. They also used the scatter matrix to form the LRF.

Most of the proposed LRFs do not uniquely generate an invariant descriptor Tobari2010Unique and thus cannot satisfy the requirements of descriptiveness, uniqueness, robustness, and distinctiveness. This may render the descriptor sensitive to noise, varying point density, occlusion, and clutter in the scene. Inspired by these approaches, especially ROPS Guo2013Rotational and SHOT Tobari2010Unique , we combine their best parts: we construct an unambiguous LRF (Section 3) and combine it with our well-performing statistical counting method, Histograms of Gaussian Normal Distribution (Section 4), in order to obtain higher recognition results in feature matching applications (Section 5). Comparison is done with the two aforementioned LRF methods.

3 Local reference frame

Before constructing the feature descriptor, we need to generate the local reference frame (LRF). In order to show the overall process intuitively, the scheme of our LRF extraction is presented in Figure 3. Our LRF generation can roughly be divided into two parts: i) the calculation of the scatter matrix M and its two most descriptive eigenvectors (details in Section 3.1); ii) the sign disambiguation of the x and y axes (Section 3.2). First, around a feature point p on the depth image model or scene, a local surface patch is cropped. A scatter matrix M_i is calculated for each triangle, and the scatter matrix M of the local surface patch is obtained by their distance- and area-weighted (w_d, w_a) summation. The x and y axes are extracted from the scatter matrix M, which yields 4 different candidate LRFs. Then sign disambiguation is applied to both the x axis and the y axis directions. Finally, the z axis is obtained by the cross product of the x and y axes.

The distance weight w_d is also used as a size weight to calculate the HGND (Section 4).


Figure 3: LRF generation process. (a) Asia Dragon of the Bologna Dataset. (b) The local surface patch is cropped from the model. (c) The scatter matrix M of the local surface patch is calculated from the scatter matrix M_i of each triangular mesh. (d) The two most descriptive eigenvectors {e1, e2} are extracted from the scatter matrix M. (e) The LRF is determined by sign disambiguation of the x and y axes. (f) A triangular mesh of the local surface patch.

3.1 The calculation of scatter matrix M

1:Input: A local surface triangle mesh T, neighbor support radius r.
2:Output: Eigenvectors {e1, e2, e3} of the scatter matrix M.
3:procedure scatter matrix of LRF(M)
4:     for all triangles T_i ∈ T do
5:         Compute the triangle centroid c_i and area A_i.
6:         Compute the distance weight w_d^i and the area weight w_a^i, using Eq. (2) and Eq. (3).
7:         Compute M_i of each triangular mesh by the integral transform Eq. (5).
8:     end for
9:     Compute the weighted summation M by Eq. (1).
10:     Decompose M to get the eigenvectors {e1, e2, e3}.
11:end procedure
Algorithm 1 Calculation of the scatter matrix M

An outline of the calculation of the scatter matrix M is given in Algorithm 1 and Fig. 3(c). Given a feature point p and neighbor support radius r, the local surface triangle mesh T of the local surface patch is obtained by cutting out the sphere surface with support radius r and center p from the range image. As shown in Algorithm 1 and Fig. 3, our whole algorithm operates on the local surface triangle mesh T to obtain the final local feature descriptor.

A random point of the triangle T_i with vertices v1, v2, v3 can be represented by p_i(s, t) = v1 + s (v2 − v1) + t (v3 − v1) (see also Fig. 3(f)), where 0 ≤ s, t ≤ 1 and s + t ≤ 1. So p_i(s, t) can also be expressed as p_i(s, t) = (1 − s − t) v1 + s v2 + t v3.

For each triangle T_i with vertices v1, v2, v3 we have the centroid c_i (see also Fig. 3(f)) as: c_i = (v1 + v2 + v3) / 3.

The so-called scatter matrix M is a statistical measure that is used to estimate the covariance matrix feller2008introduction , represented by M = (1/N) ∑_{i=1}^{N} (p_i − p̄)(p_i − p̄)ᵀ, where N is the number of points in the local surface patch and p̄ is the mean value of all these points.

As an adaptation of this definition, our scatter matrix M of the local surface patch around the feature point p is computed as follows:

M = ∑_i w_d^i w_a^i M_i ,  (1)

where M_i is the scatter matrix of triangle T_i, and w_d^i and w_a^i are its distance weight and area weight, respectively. Different from the SHOT Tobari2010Unique and ROPS Guo2013Rotational methods, our distance weight w_d^i of the triangle T_i (see also Fig. 3(f) and Fig. 5(a)) is given by a Gaussian function:

w_d^i = exp(−‖c_i − p‖² / (2σ²)) ,  (2)

where σ is the parameter of the Gaussian function; in this paper we set σ equal to 5mr. The area A_i of the triangle T_i is given by A_i = ½ ‖(v2 − v1) × (v3 − v1)‖, and the normalised area weight of each triangle then reads

w_a^i = A_i / ∑_j A_j ,  (3)

where × denotes the cross product.

We now use definite integrals to calculate the scatter matrix M_i of each triangle. In this way all points of the triangle enter the scatter matrix calculation without slowing down the computation. As we transformed the coordinate axes from (x, y, z) coordinates to the (s, t) coordinates, the triple integral reduces to a double integral:

M_i = ∫₀¹ ∫₀^{1−s} (p_i(s, t) − p̄)(p_i(s, t) − p̄)ᵀ dt ds .  (4)

In the computation of the triangle’s scatter matrix M_i, we replace the mean point p̄ with the local surface patch’s feature point p, and express the integral in terms of the triangle’s vertices to increase the calculation efficiency:

M_i = (1/12) [ ∑_{j=1}^{3} ∑_{k=1}^{3} (v_j − p)(v_k − p)ᵀ + ∑_{j=1}^{3} (v_j − p)(v_j − p)ᵀ ] .  (5)

By applying Eq. (1) on all M_i we get M. We apply an eigendecomposition on M to get the eigenvalues {λ1, λ2, λ3} and their corresponding eigenvectors {e1, e2, e3}.

We select the two largest eigenvalues and their corresponding eigenvectors {e1, e2} to obtain the x axis and the y axis. As shown in Fig. 3(d), this yields 4 different LRFs. To obtain a unique feature descriptor, the next section presents the sign disambiguation of the x and y axes.
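To make the construction concrete, the following is a minimal C++/Eigen sketch of Algorithm 1. Eigen is our assumption (the paper only names C++ and PCL), the per-triangle term follows the closed form we reconstructed above for Eq. (5), and all names such as weightedScatter are hypothetical:

#include <cmath>
#include <vector>
#include <Eigen/Dense>

struct Triangle { Eigen::Vector3d v1, v2, v3; };

// Distance- and area-weighted scatter matrix of a local surface patch
// (Algorithm 1 sketch). p is the feature point, sigma the Gaussian
// parameter of Eq. (2) (the paper uses 5 mr).
Eigen::Matrix3d weightedScatter(const std::vector<Triangle>& patch,
                                const Eigen::Vector3d& p, double sigma) {
  std::vector<double> area(patch.size());
  double totalArea = 0.0;
  for (std::size_t i = 0; i < patch.size(); ++i) {
    const Triangle& t = patch[i];
    area[i] = 0.5 * ((t.v2 - t.v1).cross(t.v3 - t.v1)).norm();   // A_i
    totalArea += area[i];
  }
  Eigen::Matrix3d M = Eigen::Matrix3d::Zero();
  for (std::size_t i = 0; i < patch.size(); ++i) {
    const Triangle& t = patch[i];
    Eigen::Vector3d c = (t.v1 + t.v2 + t.v3) / 3.0;              // centroid c_i
    double wd = std::exp(-(c - p).squaredNorm()
                         / (2.0 * sigma * sigma));               // Eq. (2)
    double wa = area[i] / totalArea;                             // Eq. (3)
    Eigen::Matrix3d Mi = Eigen::Matrix3d::Zero();                // Eq. (5)
    const Eigen::Vector3d d[3] = { t.v1 - p, t.v2 - p, t.v3 - p };
    for (int j = 0; j < 3; ++j)
      for (int k = 0; k < 3; ++k)
        Mi += d[j] * d[k].transpose();
    for (int j = 0; j < 3; ++j)
      Mi += d[j] * d[j].transpose();
    M += wd * wa * (Mi / 12.0);                                  // Eq. (1)
  }
  return M;
}

// Eigen sorts eigenvalues in increasing order, so the last two columns
// hold the eigenvectors of the two largest eigenvalues (candidate x, y):
// Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(M);
// Eigen::Vector3d e1 = es.eigenvectors().col(2);
// Eigen::Vector3d e2 = es.eigenvectors().col(1);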

3.2 Sign disambiguation of LRF

For the sign disambiguation of the LRF, we only need to disambiguate the directions of the x and y axes, and then use the cross product of the x and y axes to get the z axis. Details are given in Algorithm 2.

1:Input: Eigenvectors {e1, e2}, weights w_d and w_a.
2:Output: LRF coordinate vectors {x, y, z}.
3:procedure sign disambiguation of LRF(e1, e2)
4:     for all triangles T_i ∈ T do
5:         Compute the weighted product w_d^i w_a^i.
6:         Compute the orientation functions f(e1) and f(e2) of the x and y axes by Eq. (6).
7:     end for
8:     if f(e1) ≥ 0 then
9:         x ← e1
10:     else
11:         x ← −e1
12:     end if
13:     if f(e2) ≥ 0 then
14:         y ← e2
15:     else
16:         y ← −e2
17:     end if
18:     Normalize x and y (see Eq. (7)).
19:     Calculate z = x × y.
20:end procedure
Algorithm 2 Sign disambiguation of the LRF

We use the orientation function f(e1) to decide on the orientation of the LRF’s x axis:

f(e1) = ∑_i w_d^i w_a^i ((c_i − p) · e1)  (6)

and

x = sgn(f(e1)) e1 / ‖e1‖ .  (7)

Similarly, the orientation function of the y axis is defined by taking e2 instead of e1 in Eq. (6). Finally, the z axis is defined by the cross product of the x and y axes, z = x × y (see also Fig. 3(e)).
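A sketch of the disambiguation under the same assumptions as the previous snippet (the Triangle struct and weights from above; the orientation sum implements our reconstruction of Eq. (6)):

// Orientation function of Eq. (6): weighted projections of the triangle
// centroids (relative to p) onto a candidate axis e.
double orientation(const std::vector<Triangle>& patch,
                   const std::vector<double>& wd,
                   const std::vector<double>& wa,
                   const Eigen::Vector3d& p, const Eigen::Vector3d& e) {
  double f = 0.0;
  for (std::size_t i = 0; i < patch.size(); ++i) {
    Eigen::Vector3d c = (patch[i].v1 + patch[i].v2 + patch[i].v3) / 3.0;
    f += wd[i] * wa[i] * (c - p).dot(e);
  }
  return f;
}

// Algorithm 2 sketch: flip e1/e2 where f is negative, normalize (Eq. (7)),
// and close the right-handed frame with the cross product.
Eigen::Matrix3d buildLRF(const std::vector<Triangle>& patch,
                         const std::vector<double>& wd,
                         const std::vector<double>& wa,
                         const Eigen::Vector3d& p,
                         const Eigen::Vector3d& e1,
                         const Eigen::Vector3d& e2) {
  Eigen::Vector3d x = e1;
  if (orientation(patch, wd, wa, p, e1) < 0.0) x = -x;
  Eigen::Vector3d y = e2;
  if (orientation(patch, wd, wa, p, e2) < 0.0) y = -y;
  Eigen::Matrix3d R;
  R.row(0) = x.normalized();
  R.row(1) = y.normalized();
  R.row(2) = x.cross(y).normalized();  // z = x × y
  return R;  // rows are the LRF axes; R maps world offsets into the LRF
}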

Our approach has several advantages. For each triangle, the closer it is to the feature point p and the larger its area, the greater its impact on p. We use all triangle mesh points to calculate the LRF, and due to the integral transformation and the per-triangle integral computation, the computational efficiency does not decrease. Moreover, most existing methods do not determine a unique direction for the LRF’s axes, which leads to four LRFs and makes the subsequent feature descriptor calculation ambiguous. Our sign disambiguation defines the axes uniquely and results in the uniqueness and descriptiveness of the feature descriptor.

4 Local Feature Descriptor


Figure 4: Feature descriptor generation process. (a) According to the LRF, the point data is transformed; the centroid and normal of each triangular mesh are obtained from the transformed data. (b) The transformed data is projected onto several planes to get the point projection data. (c) The area and normal of each triangular mesh are obtained from the LRF calculation process. (d) Both the geometrical information and the spatial distribution information are transferred into 2D histograms by linear interpolation. (e) The 2D histograms are compressed and transferred into group information by moment invariants and Shannon entropy.

In Section 3 we constructed the local reference frame for the local surface patch around the feature point p. In this section we present the subsequent stage: generating the local feature descriptor in the local reference frame, leading to our Histograms of Gaussian Normal Distribution method.

In Section 2 we classified the descriptors into two categories: spatial information based and geometrical information based descriptors. We aim at a local feature descriptor which is descriptive, unique, and robust to the various kinds of occurring problems, viz. noise, clutter, occlusion, and varying point density. We thus design our feature descriptor under these conditions and combine aspects of the two categories. Clutter and occlusion mean that we consider scenes with multiple models that block each other. Our descriptor is inspired by spatial descriptors, but such descriptors often perform weakly on sparse data. In the computation process of the LRFs we have already computed the Gaussian distance weight w_d of the local surface patch. We use this weight and a Gaussian angle weight as the “length” and “direction” of the transformed normal distribution counting, respectively (see also Figure 5), compensating in our descriptor for the defects of spatial information based descriptors.

Fig. 4 shows the overall generation process of our feature descriptor, coined “Histograms of Gaussian Normal Distribution”. From this figure it is clear that the generation process consists of two parts: a data transform in the 3D LRF (Section 4.1) and data counting on 2D projection surfaces (Section 4.2).

4.1 Data transform in 3D

Given a certain 3D object or scene, a local surface patch is cut out around each feature point. The local surface patch includes the triangle mesh data and the point data. We then calculate the LRF based on the local surface patch. According to the LRF, the point data is transformed, assuring rotation and translation invariance. Finally the feature descriptor of each feature point is calculated on its LRF.

The transformed coordinate v' of a point v is calculated via the LRF matrix R as v' = R (v − p). In the same way we obtain the transformed coordinates of all other points, and the transformed coordinate of each triangle centroid as c_i' = R (c_i − p).

The normal n_i' of each transformed triangular mesh can be calculated by:

n_i' = ((v2' − v1') × (v3' − v1')) / ‖(v2' − v1') × (v3' − v1')‖ .  (8)

Based on Eq. (8) we get the transformed normal data of the local surface patch.

Then we project the transformed centroids c_i' and normals n_i' onto the three coordinate planes xy, yz, and xz of the LRF (see also Fig. 4(b)), and obtain the projection data of the centroids as well as of the normals in each plane.
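As a sketch of this transform-and-project step (same Eigen assumptions as before; R is the LRF matrix from buildLRF above, and the type names are hypothetical):

// Transform a patch into the LRF and project centroids and normals onto
// the three coordinate planes. Dropping one coordinate of a vector that
// is expressed in the LRF is exactly the projection onto a plane.
struct ProjectedPatch {
  std::vector<Eigen::Vector2d> pts[3];  // projected centroids: xy, yz, xz
  std::vector<Eigen::Vector2d> nrm[3];  // projected normals:   xy, yz, xz
};

ProjectedPatch transformAndProject(const std::vector<Triangle>& patch,
                                   const Eigen::Matrix3d& R,
                                   const Eigen::Vector3d& p) {
  ProjectedPatch out;
  for (const Triangle& t : patch) {
    // v' = R (v - p) for each vertex.
    Eigen::Vector3d v1 = R * (t.v1 - p);
    Eigen::Vector3d v2 = R * (t.v2 - p);
    Eigen::Vector3d v3 = R * (t.v3 - p);
    Eigen::Vector3d c = (v1 + v2 + v3) / 3.0;                     // centroid c_i'
    Eigen::Vector3d n = ((v2 - v1).cross(v3 - v1)).normalized();  // Eq. (8)
    out.pts[0].emplace_back(c.x(), c.y());  out.nrm[0].emplace_back(n.x(), n.y());
    out.pts[1].emplace_back(c.y(), c.z());  out.nrm[1].emplace_back(n.y(), n.z());
    out.pts[2].emplace_back(c.x(), c.z());  out.nrm[2].emplace_back(n.x(), n.z());
  }
  return out;
}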

4.2 Data counting in 2D projection surface

We introduce a Gaussian weight group {w_l, w_o} to count the normal histograms:

{w_l^{xy}, w_o^{xy}, w_l^{yz}, w_o^{yz}, w_l^{xz}, w_o^{xz}} .  (9)

These correspond to the “length” and “direction” weights of the three projection planes, respectively.

The calculation of the “length” Gaussian weight w_l is similar to that of w_d (see also Eq. (2) and Fig. 5(a)):

w_l = exp(−‖c_i''‖² / (2 σ_l²)) ,  (10)

where c_i'' is the projected centroid (the feature point lies at the origin of the LRF).

Figure 5: Gaussian weight function of “length” and “direction”.

From Fig. 5(a) we can observe that the closer a center point is to the feature point, the greater the weight it obtains; at the edge of the local surface patch it obtains the lowest weight. The projection planes’ “length” Gaussian weights (the 2D “length” Gaussian weights) behave similarly.

For the calculation of the “direction” Gaussian weight w_o, we use

w_o = exp(−θ² / (2 σ_o²)) ,  (11)

where θ is the angle between the projected normal and the center line of its sector. For calculation convenience we replace the numerator of Eq. (11) with (see also Fig. 5(b)):

sin²θ = ‖n_i'' × l‖² / (‖n_i''‖² ‖l‖²) ,  (12)

where l is the direction of the sector’s center line.

From Fig. 5(b) and Fig. 4(e) we can observe that the angle between the 2D normal and the center line of its sector ranges from −π/8 to π/8; the smaller the absolute value of the angle, the greater the weight the normal obtains. The projection planes’ “direction” Gaussian weights (the 2D “direction” Gaussian weights) behave similarly.

For each of the three projection planes, we calculate 2-level histograms. As shown in Fig. 4(b), the point data is first divided into 4 parts (4 quadrants) by its projected 2D coordinates, and the corresponding “length” Gaussian weights are calculated at the same time. Each quadrant is then divided into 8 parts (8 directions) by the angles between the projected normal vectors and the horizontal axis of the 2D plane, and the corresponding “direction” Gaussian weights are computed at the same time (see also Fig. 4(c), detail in Fig. 4(e)). In particular, due to the uncertainty of the normal direction, we also count each normal once in its opposite direction. For example, in Fig. 4(e), a normal in direction No. 1 of the 8 parts is counted once in direction No. 1 and once in the opposite direction No. 5. The calculation for one of the three projection planes is presented in Algorithm 3; the other planes’ normal histograms are calculated in the same way.

1:Input: Transformed centroid data {c_i'} and normal data {n_i'} of the local surface patch.
2:Output: 4×8 dimensional histograms.
3:procedure calculation of HGND(4×8 dimensional histograms)
4:     for each coordinate plane (xy, yz, xz) do
5:         for all triangles T_i do
6:              Project {c_i', n_i'} into the coordinate plane to get {c_i'', n_i''}.
7:              Compute (w_l, w_o) by Eq. (10) and Eq. (12).
8:              Decide the quadrant of c_i'' in the plane.
9:              Decide the direction of n_i'' in the corresponding quadrant.
10:              Add w_l · w_o to the corresponding histogram bin.
11:              Add w_l · w_o to the opposite-direction bin.
12:         end for
13:     end for
14:     Store the values of the 4×8 bins of the projection data.
15:end procedure
Algorithm 3 Calculation of the Histograms of Gaussian Normal Distribution
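Under the same assumptions as the previous snippets, the per-plane counting of Algorithm 3 might look as follows (the bin layout is inferred from the text: 4 quadrants times 8 direction sectors of π/4, with each normal also counted in the opposite sector):

#include <array>

// Per-plane counting of Algorithm 3 (sketch): 4 quadrants × 8 sectors.
std::array<double, 32> countPlane(const std::vector<Eigen::Vector2d>& pts,
                                  const std::vector<Eigen::Vector2d>& nrm,
                                  double sigmaL, double sigmaO) {
  std::array<double, 32> hist{};                 // zero-initialized bins
  const double kPi = 3.14159265358979323846;
  const double sector = kPi / 4.0;               // 8 sectors over 2π
  for (std::size_t i = 0; i < pts.size(); ++i) {
    const Eigen::Vector2d& c = pts[i];
    const Eigen::Vector2d& n = nrm[i];
    // Quadrant of the projected centroid (feature point at the origin).
    int quad = (c.x() >= 0.0) ? (c.y() >= 0.0 ? 0 : 3)
                              : (c.y() >= 0.0 ? 1 : 2);
    double wl = std::exp(-c.squaredNorm() / (2.0 * sigmaL * sigmaL));  // Eq. (10)
    double ang = std::atan2(n.y(), n.x());       // [-π, π]
    if (ang < 0.0) ang += 2.0 * kPi;             // [0, 2π)
    int dir = static_cast<int>(ang / sector) % 8;
    double theta = ang - (dir + 0.5) * sector;   // offset from center, |θ| ≤ π/8
    double s = std::sin(theta);
    double wo = std::exp(-(s * s) / (2.0 * sigmaO * sigmaO));          // Eq. (11)/(12)
    hist[quad * 8 + dir]           += wl * wo;   // count the normal
    hist[quad * 8 + (dir + 4) % 8] += wl * wo;   // and its opposite direction
  }
  return hist;
}

Concatenating the 4×8 histograms of the three projection planes gives the 96 dimensions listed for HGND in Table 1.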

Our approach has several advantages: 1) efficiency: we only use the mesh center point and the mesh normal. Most normal-based methods calculate every point’s normal from its neighboring points, which results in a large amount of point calculations, as in SHOT Tobari2010Unique (see also Fig. 12). For a triangular mesh, ROPS Guo2013Rotational uses all three points of every mesh, which results in large calculations and also makes the feature descriptor sensitive to low point density (see also Fig. 2). We just use the one center point of each triangular mesh; 2) robustness: the two Gaussian weights limit the influence of clutter, and the “double counting” eliminates the uncertainty of the normal direction.

5 Experiments

We use the 1-Precision (FP/(FP + TP)) and the Recall (TP/(FN + TP)), where FP (TP) is the number of False (True) Positives and FN is the number of False Negatives.

For a fair comparison, we compute these values as follows: Given a model and a scene, we extract keypoints from the original model data and proportionally more keypoints from the original scene data by uniform sampling, according to the number of models in the scene. By extracting the corresponding points of the model keypoints from the scene keypoints, according to the given ground truth transformation (rotation and translation matrices), we retrieve these matches as TP+FN, that is, all relevant matches. We calculate our feature descriptors for these keypoints and match the scene feature descriptors against all model feature descriptors. We find the nearest and second nearest model feature descriptors with a K-D tree bentley1975multidimensional . If the ratio between the nearest and the second nearest distance is less than a threshold, the correspondence between the scene feature descriptor and the model feature descriptor is marked as TP+FP, i.e. a selected element. We compare the selected elements, TP+FP, with the indices of the corresponding matches TP+FN to get the true positives (TP); the remaining elements are counted as false positives (FP) and false negatives (FN). The Recall vs 1-Precision curve is obtained by varying the corresponding match threshold.
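For illustration, a brute-force sketch of the ratio test (the paper uses a K-D tree bentley1975multidimensional for the nearest-neighbor search; the linear scan below is a simplification, and tau stands for the threshold):

#include <array>
#include <limits>
#include <vector>

using Descriptor = std::array<double, 96>;  // HGND length from Table 1

double dist2(const Descriptor& a, const Descriptor& b) {
  double d = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
  return d;
}

// Ratio test: accept a scene descriptor's match if the distance to its
// nearest model descriptor is less than tau times the distance to the
// second nearest. Returns the matched model index, or -1 if rejected.
int matchDescriptor(const Descriptor& scene,
                    const std::vector<Descriptor>& model, double tau) {
  double best = std::numeric_limits<double>::max();
  double second = best;
  int bestIdx = -1;
  for (std::size_t i = 0; i < model.size(); ++i) {
    double d = dist2(scene, model[i]);
    if (d < best) { second = best; best = d; bestIdx = static_cast<int>(i); }
    else if (d < second) { second = d; }
  }
  // Squared distances: best < tau² · second is equivalent to the ratio test.
  return (second > 0.0 && best < tau * tau * second) ? bestIdx : -1;
}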

Ideally the curves are located top-left, denoting high recall at low 1-precision. The curves can look complicated, though.

We use the Stanford 3D Scanning Repository Dataset levoy2005standford , the Bologna Dataset Tobari2010Unique ; tombari2011combined ; bolognadataset2010 (see Fig. 2), and the UWA Dataset mian2010repeatability ; mian2006three ; uwadataset2009 (see Fig. 1), and compare our method against two state-of-the-art methods. All methods are implemented in C++ and use the Point Cloud Library (PCL) rusu20113d , a 3D point cloud processing library that includes state-of-the-art methods and tools for handling 3D data and range images.

5.1 Local Feature Descriptor Parameters

There are three parameters in our feature descriptor calculation process: the support radius r, the length Gaussian weight σ_l, and the direction Gaussian weight σ_o. The support radius and the projection planes affect the generation of the spatial feature information, while the two Gaussian weights affect the geometrical feature information as well as the spatial feature information calculation.

The support radius r determines the amount of feature information captured when describing the local surface: a larger support radius implies more feature information for the descriptor, but this only holds for models and scenes without clutter. For scenes with clutter, the support radius has a critical value: below it, a larger support radius yields more information; above it, a larger support radius also includes more noise and information from other models.

We choose 6 support radii (0.85, 4.25, 8.5, 17, 21.25, and 25.5mr), where “mr” is the mesh resolution, a common unit in 3D mesh data processing that denotes the mean length of the triangle edges of the 3D mesh petrelli2011repeatability . We compare their effects by the Recall vs 1-Precision curves, keeping the other parameters constant in this experiment.
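Mesh resolution itself is straightforward to compute; a small sketch using the Triangle struct from above (interior edges are visited twice here, which barely changes the mean; an exact version would de-duplicate shared edges):

// Mesh resolution (mr): mean edge length of the triangles of a mesh.
double meshResolution(const std::vector<Triangle>& mesh) {
  double sum = 0.0;
  std::size_t count = 0;
  for (const Triangle& t : mesh) {
    sum += (t.v2 - t.v1).norm() + (t.v3 - t.v2).norm() + (t.v1 - t.v3).norm();
    count += 3;
  }
  return count > 0 ? sum / count : 0.0;
}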

Figure 6: Feature with different support radii on the Bologna and UWA Datasets.

We run the experiment on the Bologna Dataset and the UWA Dataset separately, yielding the Recall vs 1-Precision curves in Fig. 6. In addition, the average calculation time for models and scenes is shown in Fig. 7. These figures clearly show two good support radii, 4.25mr and 8.5mr, as a tradeoff among efficiency, descriptiveness, and robustness, viz. time, details, and noise. Since 8.5mr obtains the higher Recall rate, we finally choose 8.5mr as our feature’s support radius.


Figure 7: Feature calculation time with different support radii. (Unit: )

The length Gaussian weight σ_l is related to the robustness of the local feature descriptor, as it determines the main point distribution information. In this experiment we evaluate the influence of the length Gaussian weight on HGND. We select the following seven different length Gaussian weights σ_l: 0.35, 0.5, 1, 5, 15, 45, and 500mr, and compare their effects by the Recall vs 1-Precision curves, keeping the other parameters constant.

The experimental results are obtained with the ground truth perturbed by adding Gaussian noise to the surface points (see Fig. 1(c)). The Recall vs 1-Precision curves for the Bologna Dataset and the UWA Dataset are shown in Fig. 8. From these figures it is clear that the larger values of σ_l yield the best results, with the largest three reaching the same highest Recall value. Since on the UWA Dataset one of these additionally obtains a higher Recall value at low 1-Precision, we choose it as the length Gaussian weight σ_l.

Figure 8: Feature with different length Gaussian weights on the Bologna Dataset (left) and the UWA Dataset (right).

The direction Gaussian weight σ_o is another important parameter for HGND’s robustness to clutter, since it influences the distribution of the normals. We choose σ_o values ranging from low, via 0.05 and 5, to high, to compare their results by Recall vs 1-Precision curves, keeping the other parameters constant. Other values generated worse results than those shown here. Again we used the same Gaussian noise level, yielding the Recall vs 1-Precision curves in Fig. 9. These figures clearly show that a larger σ_o implies a feature more robust to clutter; above a certain value the Recall vs 1-Precision curves are identical. So we choose this value as σ_o.

Figure 9: Feature with different direction Gaussian weights on the Bologna and UWA Datasets.

So the optimal parameters are found: the support radius r of 8.5mr together with the selected length Gaussian weight σ_l and direction Gaussian weight σ_o. In Section 5.2 we compare our HGND descriptor in 3D scenes with the ROPS and SHOT descriptors.

Figure 10: Feature under 0.1mr (left) and 0.3mr (right) Gaussian noise.

5.2 Feature Descriptor Comparison

In this section we will show that the combination of our LRF and our new feature descriptor yields better overall matching results than the state of the art. We compare our method against SHOT Tobari2010Unique and ROPS Guo2013Rotational – see Section 2 for details on these methods.

To prevent the selection of keypoints from influencing the feature descriptor comparison, we randomly select a set of keypoints from the scenes and the models by uniform sampling. We use default parameters for the descriptors, as presented in Table 1. For fairness, ROPS uses the support radius mentioned in its article, and SHOT uses the same support radius as HGND; the radius in brackets is used for the point normal calculation in SHOT, since the SHOT descriptor needs enough points to compute every point’s normal. Most commonly the radius for point normal calculation needs to be about twice as large as the descriptor support radius. (We also tried the support radius mentioned in SHOT’s article, but could not get better results than with the radius we set.)

Feature    length    neighbor radius
HGND       96        8.5mr
ROPS       135       as in Guo2013Rotational
SHOT       320       8.5mr (point normal: twice the support radius)
Table 1: Parameter settings for the three feature descriptors.

The feature descriptors under noise: Fig. 10 shows the results under 0.1mr and 0.3mr Gaussian noise. For low noise ROPS performs slightly better than HGND, but at a higher noise level our HGND performs better than ROPS and SHOT, since we use Gaussian weights to limit the influence of the Gaussian noise.

Reduced mesh resolution: Fig. 11 (top) shows the good results of our approach when taking a mesh resolution of 1/8 of the original mesh resolution (see Fig. 2). One sees that the low point and mesh density cause a large TP rate loss for SHOT and ROPS, especially for ROPS. The reason is that ROPS needs every point in the local surface patch to calculate its feature, so low point density degrades the feature descriptor. Our feature, in contrast, counts the mesh center point distribution and mesh normal distribution and obtains good results at low point density. Also, the “length” and “direction” Gaussian weights make our descriptor invariant to varying point and mesh density.

Figure 11: Feature under 1/8 sampling density (top) and under 1/2 sampling density and 0.1mr Gaussian noise (bottom).

Both effects: Fig. 11 (bottom) shows the results for combining the two aspects of the previous experiments: Gaussian noise (0.1mr) and low point density (1/2 sampling). Here all methods suffer a big loss on scenes with both noise and low point density, yet one can observe that our descriptor outperforms the other ones, gaining a TP rate of up to 58%. Comparing with the previous experiments, we find that ROPS attains a very low Recall value at low point density; this is because it relies on a high point density surface, which makes it very sensitive to low density. SHOT is very sensitive to a high Gaussian noise level since it calculates a normal for every point from each point’s K nearest neighbors.

In these three experiments, one can see that our descriptor gains a high recall rate close to 90% when the noise stays at a low level. At a high noise level and normal point density our HGND obtains an average rate of about 75%, whereas in the combined low point density scenes with a high noise level we only get a recall rate close to 60%.

Computation times: The total average calculation times for the Bologna and UWA Datasets, both for feature descriptor generation and matching, are visualized in Fig. 12. The experiments were carried out on a computer with a Windows 10 64-bit operating system and an Intel(R) Core(TM) i5-6300HQ CPU at 2.30GHz with 12.0GB RAM. Multi-threading with OpenMP (four threads) is adopted in all methods. Our HGND is clearly the fastest method. It furthermore yields the best performance in noisy, cluttered scenes.


Figure 12: Total calculation times (in ).

6 Conclusion

We considered 3D model matching where models are present in scenes but may be altered by rotation, translation, noise, clutter, occlusion, and varying point density. To reduce the feature mismatches that occur in this setting, we presented a novel feature descriptor: Histograms of Gaussian Normal Distribution (HGND).

Our HGND combines geometrical information and spatial distribution information based on two Gaussian weights. We use the transformed mesh center points and transformed mesh normals, calculated via the LRF matrix. Transforming points and normals into the LRF makes the feature descriptor easily computable and invariant to rotation and translation. With the descriptive point distribution, normal distribution, and Gaussian weights we obtain 96-dimensional histograms, facilitating better robustness to disturbances.

We performed a set of experiments on the Bologna and UWA Datasets to compare our descriptor against state-of-the-art methods under different situations with noise, clutter, occlusion and varying point density.

The results of these experiments show that HGND performs best with respect to descriptiveness and robustness to disturbances when compared against state-of-the-art descriptors (ROPS, SHOT). Especially under a low noise level our HGND obtained a 90% Recall rate. In general, our approach finds more true feature matches in scenes with different disturbances than the other approaches.

Our current focus is on further research building on the 3D feature descriptor (e.g., 3D object recognition and pose estimation). We furthermore work with data collected from a 3D scanner.

References

  • (1) Zabulis X, Lourakis M I A, Koutlemanis P.: Correspondence free pose estimation for 3D objects from noisy depth data. The Visual Computer, 1-19 (2016)
  • (2) Martinek M, Grosso R, Greiner G.: Interactive partial 3D shape matching with geometric distance optimization. The Visual Computer, 31(2), 223-233 (2015)
  • (3) Ahmed F, Paul P P, Gavrilova M L.: DTW-based kernel and rank-level fusion for 3D gait recognition using Kinect. The Visual Computer, 31(6-8), 915-924 (2015)
  • (4) Guo Y, Sohel F, Bennamoun M, et al.: Rotational projection statistics for 3D local surface description and object recognition. IJCV, 105(1): 63-86 (2013)
  • (5) Tombari F, Salti S, Di Stefano L.: Unique signatures of histograms for local surface description. ECCV. Springer Berlin Heidelberg, 356-369 (2010)
  • (6) Mian A, Bennamoun M, Owens R.: On the repeatability and quality of keypoints for local feature-based 3d object retrieval from cluttered scenes. IJCV, 89(2-3), 348-361 (2010)
  • (7) Mian A S, Bennamoun M, Owens R.: Three dimensional model-based object recognition and segmentation in cluttered scenes. IEEE TPAMI, 28(10), 1584-1601 (2006)
  • (8) Taati B, Greenspan M.: Local shape descriptor selection for object recognition in range data. CVIU, 115(5), 681-694 (2011)
  • (9) Bariya P, Nishino K.: Scale-hierarchical 3d object recognition in cluttered scenes. CVPR 2010, 1657-1664 (2010)
  • (10) Boyer E, Bronstein A M, Bronstein M M, et al.: SHREC 2011: robust feature detection and description benchmark. arXiv:1102.4258 (2011)
  • (11) Guo Y, Bennamoun M, Sohel F, et al.: 3D object recognition in cluttered scenes with local surface features: A survey. IEEE TPAMI, 36(11), 2270-2287 (2014)
  • (12) Zhang Z, Wang L, Zhu Q, et al.: Pose-invariant face recognition using facial landmarks and Weber local descriptor. Knowledge-Based Systems, 84, 78-88 (2015)

  • (13) Petrelli A, Di Stefano L.: On the repeatability of the local reference frame for partial shape matching. ICCV 2011, 2244-2251. (2011)
  • (14) Aldoma A, Marton Z C, Tombari F, et al.: Point cloud library: Three-dimensional object recognition and 6 DoF pose estimation. IEEE Robotics And Automation Magazine, 19(3), 80-91 (2012)
  • (15) Feller W.: An introduction to probability theory and its applications. John Wiley and Sons (2008)

  • (16) Bentley J L.: Multidimensional binary search trees used for associative searching. Comm. of the ACM, 18(9), 509-517. (1975)
  • (17) Levoy M, Gerth J, Curless B, et al.: The Stanford 3D scanning repository. http://www-graphics.stanford.edu/data/3dscanrep. (2005)
  • (18) Tombari F, Salti S, Di Stefano L.: A combined texture-shape descriptor for enhanced 3D feature matching. ICIP 2011, 809-812. (2011)
  • (19) Tombari F, Salti S, Di Stefano L.: Bologna dataset. http://www.vision.deis.unibo.it/research/80-shot. (2010)
  • (20) Mian A S.: Uwa dataset: 3D modeling and 3d object recognition data. http://www.csse.uwa.edu.au/~ajmal/. (2009)
  • (21) Rusu R B, Cousins S.: 3D is here: Point cloud library (pcl). ICRA 2011, 1-4. (2011)