Partial Procedural Geometric Model Fitting for Point Clouds

10/17/2016 · Zongliang Zhang, et al.

Geometric model fitting is a fundamental task in computer graphics and computer vision. However, most geometric model fitting methods are unable to fit an arbitrary geometric model (e.g. a surface with holes) to incomplete data, because the similarity metrics they use cannot measure the rigid partial similarity between arbitrary models. This paper hence proposes a novel rigid geometric similarity metric, which is able to measure both the full similarity and the partial similarity between arbitrary geometric models. The proposed metric enables us to perform partial procedural geometric model fitting (PPGMF). The task of PPGMF is to search a procedural geometric model space for the model rigidly similar to a query of non-complete point set. Models in the procedural model space are generated according to a set of parametric modeling rules. A typical query is a point cloud. PPGMF is very useful as it can be used to fit arbitrary geometric models to non-complete (incomplete, over-complete or hybrid-complete) point cloud data. For example, most laser scanning data is non-complete due to occlusion. Our PPGMF method uses the Markov chain Monte Carlo technique to optimize the proposed similarity metric over the model space. To accelerate the optimization process, the method also employs a novel coarse-to-fine model dividing strategy to reject dissimilar models in advance. Our method has been demonstrated on a variety of geometric models and non-complete data. Experimental results show that the PPGMF method based on the proposed metric is able to fit non-complete data, while the methods based on other metrics fail. It is also shown that our method can be accelerated by several times via early rejection.



1 Introduction

A geometric model is a continuous point set (e.g. a surface). A geometric model space is a set of geometric models. A procedural model space is defined by a set of parametric modeling rules [Dang et al. 2015]. Retrieving a desired model from a procedural model space is called inverse procedural modeling (IPM). As an important but challenging problem in computer graphics and computer vision, IPM has been actively studied in recent years [Musialski et al. 2013]. PPGMF is a special case of IPM, as it searches a procedural geometric model space for the model which is rigidly similar to a query of non-complete point set. PPGMF can be used in a number of applications including pattern recognition, shape matching, geometric modeling and point cloud reconstruction.

The task of basic geometric model fitting (BGMF) is to rigidly fit basic geometric models to the input geometric data (i.e. a point set). The most famous BGMF method is RANSAC [Fischler and Bolles 1981], which can be used to fit several basic geometric models such as planes and spheres. Following RANSAC, many BGMF methods have been proposed [Isack and Boykov 2011]. However, BGMF methods cannot be used to fit arbitrary models. To fit arbitrary models, the rigid geometric similarity between arbitrary models has to be calculated. A rigid geometric similarity metric should ensure that a model is more similar to itself than to any other model. To the best of our knowledge, the symmetric Hausdorff distance (SHD) is the only existing rigid geometric similarity metric. However, SHD is time-consuming to calculate, making it difficult to perform arbitrary model fitting. This paper hence proposes a novel, efficient rigid geometric similarity metric to perform arbitrary model fitting.

Given the similarity metric, the remaining task of arbitrary model fitting is to optimize over a given arbitrary model space. An arbitrary model space is usually defined by a procedural modeling approach, i.e., a set of procedural modeling rules is used to generate models [Smelik et al. 2014]. Many IPM methods have been proposed using different retrieval criteria, such as indicator satisfying [Vanegas et al. 2012b] and image resembling [Teboul et al. 2013] [Lake et al. 2015]. In this paper, we focus on geometric criterion based IPM (GIPM) methods, which aim at fitting procedural geometric models to the input geometric data. As one geometric criterion, voxel difference (VD) has been investigated in GIPM methods [Talton et al. 2011] [Ritchie et al. 2015]. However, VD is only an approximate geometric similarity criterion: part of the information is inevitably lost by voxelization, as the voxelization resolution cannot be arbitrarily small. In contrast, our metric theoretically does not rely on a resolution. Consequently, VD-based GIPM methods cannot be used for rigid model fitting, which requires the calculation of rigid geometric similarity. To the best of our knowledge, our PPGMF method is the first GIPM method which can be used for arbitrary rigid model fitting.

PPGMF aims at rigidly fitting procedural geometric models to a query of a non-complete (incomplete, over-complete or hybrid-complete) geometric object, where "hybrid-complete" means both incomplete and over-complete. Typical queries are point clouds. There are two key processes in a PPGMF method. The first is to calculate the rigid geometric similarity between a model and the query. We have found that SHD- or VD-based methods are unable to fit non-complete point clouds; we hence propose a novel partial rigid geometric similarity metric to fit non-complete data. The second key process is to optimize over the procedural model space, which is challenging due to the hierarchical and recursive nature of modeling rules. The Markov chain Monte Carlo (MCMC) technique is used to perform the optimization. Although the similarity calculation based on our metric is faster than SHD, it is still too slow for practical applications. We hence propose a novel coarse-to-fine model dividing strategy that rejects dissimilar models in advance to accelerate the optimization.

PPGMF is of practical significance, as it can be used to fit arbitrary geometric models to non-complete data. For example, most laser scanning data is non-complete (cluttered) due to occlusion [Guo et al. 2014]. We have tested our metric and PPGMF method on a variety of geometric models and non-complete data. Experimental results show that the PPGMF method based on the proposed metric is able to fit non-complete data, while the SHD- or VD-based methods fail. Experimental results also show that our method can be accelerated by several times using early rejection.

In summary, our contributions are: (1) A novel rigid geometric similarity metric is proposed to measure the similarity between two geometric models. (2) An effective method is proposed to rigidly fit arbitrary geometric models to non-complete data. (3) A coarse-to-fine geometric model dividing strategy is proposed to reject dissimilar models in advance for the acceleration of PPGMF.

The rest of this paper is organized as follows. Sections 2 and 3 present related work and the overview of our method, respectively. Section 4 introduces our rigid geometric similarity metric. Section 5 presents the MCMC-based optimization approach and our coarse-to-fine model dividing strategy. Sections 6 and 7 present experimental results and conclusion, respectively.

2 Related Work

Most GIPM methods take either a particular type of geometric model or geometric data as input. BGMF methods such as [Fischler and Bolles 1981] [Isack and Boykov 2011] work on basic geometric models. [Debevec et al. 1996] [Mathias et al. 2011] rely on image information to achieve IPM, while our work does not rely on images. [Ullrich et al. 2008] assumes the number of model parameters is fixed. [Bokeloh et al. 2010] takes symmetry as an assumption. [Wan and Sharf 2012] is limited to facade point clouds and split grammars. [Vanegas et al. 2012a] takes the Manhattan-World assumption. [Boulch et al. 2013] is limited to constrained attribute grammars. [Toshev et al. 2010] [Lafarge et al. 2010] [Huang et al. 2013] work well on airborne laser scanning data; however, they are hard to extend to other types of data. [Stava et al. 2014] works on tree models. [Demir et al. 2015] relies on semi-automatic segmentation operations. Our method is fully automatic and makes no assumption about the type of input geometric model and geometric data. Consequently, similar to [Talton et al. 2011] [Ritchie et al. 2015], our method can be used for general-purpose GIPM.

It is worth noting that there are many other geometric data reconstruction methods that are not GIPM methods, such as [Pu and Vosselman 2009] [Zheng et al. 2010] [Nan et al. 2010] [Li et al. 2011] [Lafarge and Mallet 2012] [Poullis 2013] [Lin et al. 2013] [Lin et al. 2015] [Monszpart et al. 2015] [Wang and kai Xu 2016]. A GIPM method produces procedural models, while the other types of methods usually produce mesh models. Procedural models are more powerful than mesh models [Weissenberg et al. 2013], as they capture the abstract causal structure of the input data [Lake et al. 2015].

3 Method Overview

Figure 1 shows our PPGMF pipeline. The task of PPGMF is to search a procedural geometric model space for the model which is rigidly geometrically similar to a query of non-complete point set. Hence the input of a PPGMF method consists of a non-complete query and a set of parametric geometric procedural modeling rules, which defines the target model space. A query can be a continuous or discrete point set (i.e. a point cloud). In this paper, we focus on point cloud queries.

Figure 1: Our PPGMF pipeline.

An example set of procedural modeling rules is shown in Table 1, together with the parameter priors. There are 3 rules in this example. The axiom rule manages a non-recursive parameter and a recursive parameter. It is straightforward to optimize for non-recursive parameters; however, it is challenging to deal with recursive parameters. When the rules are executed, a recursive parameter spawns a family of non-recursive parameters under the name of that recursive parameter. We have to individually identify every non-recursive parameter spawned from the same recursive parameter. To this end, a calling trace can be used; for example, a parameter at calling level 3 can be identified by the sequence of rule calls that leads to it.

Table 1: An example set of rules. Each of the three rule bodies samples its parameters from their priors (Sample statements) and then invokes other rules (Call statements).
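To make the calling-trace idea concrete, the following Python sketch executes a toy rule set and records every non-recursive parameter spawned by a recursive rule under a unique trace-based name. The rule names, sampled values and the recursion depth limit are our own illustrative assumptions, not the paper's implementation:

```python
def execute(rule, rules, params, trace="axiom", level=0, max_level=3):
    """Run `rule`, recording each sampled parameter under its calling trace."""
    if level > max_level:                 # recursion must terminate
        return
    for kind, name in rules[rule]:
        if kind == "sample":
            # a recursive rule spawns one concrete parameter per call,
            # disambiguated by the calling trace (e.g. "axiom/0/1/height")
            params[f"{trace}/{name}"] = 1.0   # placeholder sampled value
        elif kind == "call":
            execute(name, rules, params, f"{trace}/{level}", level + 1, max_level)

rules = {
    "axiom": [("sample", "size"), ("call", "floor")],
    "floor": [("sample", "height"), ("call", "floor")],   # recursive rule
}
params = {}
execute("axiom", rules, params)
```

With the depth limit of 3, the recursive `floor` rule spawns three distinct `height` parameters, each identifiable by its own trace.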

As shown in Fig. 1, given the query and the rules with their parameters, a similarity calculation procedure computes the rigid geometric similarity between the query and the model generated by the procedural modeling procedure according to the rules. Based on the calculated similarity, the parameters are iteratively updated by the optimization procedure. It is worth noting that the number of parameters may vary during the optimization. Based on Bayesian inference theory, the optimization problem can be formulated as follows:

$p(\theta \mid Q) \propto p(Q \mid \theta)\, p(\theta)$   (1)

where $Q$ is the query, $\theta$ denotes the modeling parameters, $p(\theta \mid Q)$ is the posterior of the parameters given the query, $p(Q \mid \theta)$ is the likelihood of the query given the parameters, and $p(\theta)$ is the parameter prior.

The prior $p(\theta)$ is directly drawn from the input modeling rules. Let $M_\theta$ represent the model corresponding to $\theta$; the likelihood then depends on the rigid similarity between $M_\theta$ and $Q$ (Section 4), over which the optimization process is performed (Section 5).

4 Rigid Geometric Similarity

A rigid geometric similarity metric should ensure that a geometric model is more rigidly similar to itself than to any other model. Let $\mathbb{U}$ be the universal geometric model space; this self rigidly similar property is formally stated as:

$\forall M, M' \in \mathbb{U},\ M' \neq M:\quad s(M, M) > s(M', M)$   (2)

where $s(\cdot,\cdot)$ denotes the similarity metric.

4.1 Full Similarity

SHD is the only existing metric satisfying the self rigidly similar property. The SHD between two point sets $A$ and $B$ is defined as:

$d_{\mathrm{SHD}}(A, B) = \max\left(d_h(A, B),\ d_h(B, A)\right)$   (3)

where $d_h$ is the one-sided Hausdorff distance (OHD):

$d_h(A, B) = \sup_{a \in A} \inf_{b \in B} \|a - b\|$   (4)

where $\|\cdot\|$ is the Euclidean norm. Note that, in general, $d_h(A, B) \neq d_h(B, A)$.

Usually, we can use SHD to exactly measure the rigid similarity between two point sets $A$ and $B$: if $d_{\mathrm{SHD}}(A, B)$ is $0$, then $A$ and $B$ are identical. However, to calculate SHD, we have to compute the OHD twice, once from $A$ to $B$ and once from $B$ to $A$; a single OHD is insufficient for similarity assessment [Aspert et al. 2002]. If only one OHD calculation were required, the computational cost of the similarity calculation could be reduced. Fortunately, one of the point sets involved in our similarity calculation is a geometric model. The measure of the model allows us to compute the OHD only once to assess similarity. It is worth noting that different types of models have different types of measures: for example, the measure of a curve is its length, and the measure of a surface is its area.

Our insight is that, in the real world, two models $M$ and $M'$ are identical if and only if every point of $M'$ is in $M$ (i.e. $d_h(M', M) = 0$) and the measure of $M'$ is equal to the measure of $M$. That means the measure can be used for similarity assessment. We hence propose the Mean Measure (MM) to represent the rigid similarity between a model $M$ and a query point set $Q$. Formally, MM is defined as the ratio of the measure of $M$ to the OHD from $M$ to $Q$:

$\mathrm{MM}(M, Q) = \dfrac{|M|}{d_h(M, Q) + \epsilon}$   (5)

where $|M|$ denotes the measure of $M$, and $\epsilon$ is a small positive number used to derive different similarities for models with different measures but the same OHD to $Q$. For example, as shown in Fig. 2, two of the curves have the same (zero) OHD to their common part; if $\epsilon$ were $0$, both MM values would be infinite, although one of the curves is more similar to the query than the other. Theoretically, when $\epsilon$ is sufficiently small, the similarity metric defined by MM ensures that a model is more similar to itself than to any other model.
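A minimal NumPy sketch of Eqs. (4) and (5), assuming both the model and the query are represented by sampled points; the brute-force nearest-neighbour search here stands in for the FLANN search of Section 4.3, and the toy data is our own illustration:

```python
import numpy as np

def ohd(model_pts, query_pts):
    """One-sided Hausdorff distance d_h (Eq. 4) from sampled model points to
    the query: for each model point, take the distance to its nearest query
    point, then take the maximum over the model points."""
    d = np.linalg.norm(model_pts[:, None, :] - query_pts[None, :, :], axis=-1)
    return d.min(axis=1).max()

def mean_measure(model_pts, measure, query_pts, eps=1e-6):
    """MM (Eq. 5): ratio of the model's measure to the OHD from model to query."""
    return measure / (ohd(model_pts, query_pts) + eps)

# toy example: a unit segment (measure 1.0) sampled at resolution 0.01,
# compared against a query sampled from the same segment at resolution 0.02
model = np.stack([np.arange(0.0, 1.0, 0.01), np.zeros(100)], axis=1)
query = np.stack([np.arange(0.0, 1.0, 0.02), np.zeros(50)], axis=1)
sim = mean_measure(model, 1.0, query)
```

Here the OHD equals half the query resolution, so MM is roughly the measure divided by 0.01.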

Figure 2: An illustration of rigid similarity. (a), (b) and (c) show three curves, and (d) shows their overlap. The overlapping part (black) shows that one curve is a part of the others.

Theorem. MM is a rigid geometric similarity metric for real-world geometric models. Note that a real-world model $M$ has a positive finite measure, i.e. $0 < |M| < \infty$. Let $\mathbb{U}_r$ be the real-world geometric model space; this theorem is proved as follows.

Proof. $\forall M' \in \mathbb{U}_r,\ M' \neq M$: (1) If $d_h(M', M) = 0$, then $M' \subset M$ and $|M'| < |M|$, so $\mathrm{MM}(M', M) = |M'| / \epsilon < |M| / \epsilon = \mathrm{MM}(M, M)$. (2) If $d_h(M', M) > 0$, then, because $|M'|$ is finite and $\epsilon$ is assumed to be sufficiently small, $\epsilon\,|M'| < |M|\left(d_h(M', M) + \epsilon\right)$, and hence $\mathrm{MM}(M', M) < \mathrm{MM}(M, M)$. So MM is a rigid similarity metric for real-world models according to Eq. (2).

Note that the values of MM are comparable for the same query, but incomparable across different queries; that is, it makes no sense to compare MM values computed against different queries, even if one value is larger than the other (Fig. 2). It is also worth noting that the query need not have a geometric measure; that is, the query can be a discrete point set, i.e., a point cloud. If the query is discrete, then $\epsilon$ is trivial, because $d_h(M, Q)$ is always larger than $0$. In practice, we use the squared mean measure (SMM), a variant of MM defined as:

$\mathrm{SMM}(M, Q) = \dfrac{|M|}{\left(d_h(M, Q)\right)^2 + \epsilon}$   (6)

4.2 Partial Similarity

SHD and MM are full similarity metrics, as they assume that the query is complete. However, if the query is non-complete, we have to calculate partial similarity, which is challenging: partial similarity is not straightforward and is fundamentally different from full similarity. If two point sets have a common part, then these two point sets are partially similar. As shown in Fig. 2, each pair of the three curves is partially similar, while none of the pairs is fully similar. We expect the partial similarities of the pairs sharing the same common part to be equal, because the common part is the same.

Consequently, we propose the Weighted Mean Measure (WMM) to represent the partial similarity between a geometric model $M$ and a query point set $Q$. We divide $M$ into $n$ non-overlapping sub-models $M_1, \dots, M_n$, and define WMM as:

$\mathrm{WMM}(M, Q) = \dfrac{\sum_{i=1}^{n} w_i\, |M_i|}{e_w(M, Q) + \epsilon}$   (7)

where $w_i$ is the weight of sub-model $M_i$: $w_i = e^{-h\, d_h(M_i, Q)}$, where $h$ is the weighting factor, a non-negative number. When $h$ is $0$, WMM becomes a full similarity metric. $e_w$ is the weighted mean error [Aspert et al. 2002]:

$e_w(M, Q) = \dfrac{\sum_{i=1}^{n} w_i\, d_h(M_i, Q)}{\sum_{i=1}^{n} w_i}$   (8)

By weighting, the sub-models of $M$ far away from $Q$ contribute less to the computation of WMM than the close sub-models. In other words, the common part of $M$ and $Q$ makes the major contribution to WMM, which makes WMM a plausible measure of partial similarity. One merit of WMM is that it has only one argument, $h$, as $\epsilon$ is trivial. Similar to MM, it can be proved that WMM is a rigid geometric similarity metric.
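The weighting behaviour can be sketched as follows. The exponential weight form `w_i = exp(-h * d_h(M_i, Q))` is our own assumption, chosen only because it matches the stated properties (distant sub-models are down-weighted, and `h = 0` gives uniform weights, i.e. a full similarity metric); the paper's exact weight function may differ:

```python
import numpy as np

def wmm(sub_measures, sub_ohds, h=2.5, eps=1e-6):
    """Sketch of WMM (Eqs. 7-8) under an assumed exponential weight.
    sub_measures[i] is |M_i|; sub_ohds[i] is the OHD from sub-model M_i to Q."""
    m = np.asarray(sub_measures, dtype=float)
    d = np.asarray(sub_ohds, dtype=float)
    w = np.exp(-h * d)                     # down-weight distant sub-models
    e_w = (w * d).sum() / w.sum()          # weighted mean error (Eq. 8)
    return (w * m).sum() / (e_w + eps)     # weighted mean measure (Eq. 7)

# a model whose second half is missing from the query: the distant sub-model
# is down-weighted, so the overlapping part dominates the similarity
overlap = wmm([0.5, 0.5], [0.0, 0.0])      # both sub-models on the query
partial = wmm([0.5, 0.5], [0.0, 2.0])      # one sub-model far from the query
```

The overlapping configuration scores much higher than the partial one, while the partial score is still dominated by the matching half rather than the missing one.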

There are 3 cases of partial similarity for PPGMF. (1) The target model is a part of the query; this case corresponds to partially fitting to over-complete data. (2) The query is a part of the target model; this case corresponds to partially fitting to incomplete data. (3) The target model and the query have a common part; this case corresponds to partially fitting to hybrid-complete data. The three curves of Fig. 2 and their model spaces provide an example of each case.

4.3 Similarity Calculation

To compute MM (SMM or WMM) between a model and a point set, we have to compute the OHD from the model to the point set, which consists of two steps. First, the model is uniformly divided into sub-models, and the center points of the sub-models are sampled (Section 5.2). Second, for each sampled point, the nearest point is searched in the point set. This is time-consuming if the point set contains a large number of points (e.g. a laser scanning point cloud consisting of millions of points); we employ the FLANN [Muja and Lowe 2014] algorithm to perform the nearest neighbour search. The computational complexity of computing MM thus depends on the number of points sampled from the model and the size of the point set.

5 Optimization

Given the rigid geometric similarity $s$ defined by MM, we empirically define the likelihood in the optimization problem (see Eq. (1)) as:

$p(Q \mid \theta) \propto \exp\left(s(M_\theta, Q)\right)$   (9)

Eq. (1) defines a derivative-free optimization problem, for which traditional mathematical optimization methods are not applicable. We use the Metropolis-Hastings (MH) algorithm [Metropolis et al. 1953] [Hastings 1970] to solve Eq. (1). The MH algorithm is a general and popular MCMC optimization algorithm [Talton et al. 2011].

5.1 Metropolis-Hastings Algorithm

Let $\theta_t$ be the value of the variable $\theta$ in iteration $t$. The MH algorithm works as follows. First, $\theta$ is randomly initialized as $\theta_0$. To determine $\theta_{t+1}$ in each iteration, a candidate $\theta^*$ is sampled from a proposal density function $q(\theta^* \mid \theta_t)$. The probability of accepting $\theta^*$ as $\theta_{t+1}$ is defined as:

$\alpha = \min\left(1,\ \dfrac{p(\theta^* \mid Q)\, q(\theta_t \mid \theta^*)}{p(\theta_t \mid Q)\, q(\theta^* \mid \theta_t)}\right)$   (10)

That is, with probability $\alpha$ we set $\theta_{t+1} = \theta^*$, and with probability $1 - \alpha$ we set $\theta_{t+1} = \theta_t$.

We now define the proposal function for the modeling parameters $\theta$. Similar to [Vanegas et al. 2012b], each parameter is required to lie within a range $[a, b]$. For a continuous parameter, in each iteration we randomly select one of two proposal functions: a local move and a global move. The local move is a Gaussian perturbation, $\theta^* \sim \mathcal{N}(\theta_t, \sigma^2)$ with $\sigma = \gamma\,(b - a)$, where $\gamma$ is the standard deviation ratio. The global move samples uniformly, $\theta^* \sim \mathcal{U}(a, b)$. We use $p_l$ to denote the probability of selecting the local move, and $1 - p_l$ for the global move. For a discrete parameter, we always perform a global move. Since both the local and global move functions are symmetric, the probability of accepting $\theta^*$ simplifies to:

$\alpha = \min\left(1,\ \dfrac{p(\theta^* \mid Q)}{p(\theta_t \mid Q)}\right)$   (11)
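The MH loop of Section 5.1 can be sketched for a single continuous parameter as follows. The function names, default values of `p_local` and `gamma`, and the toy log-posterior are illustrative assumptions, not the paper's settings:

```python
import math
import random

def metropolis_hastings(log_post, lo, hi, iters=2000, p_local=0.8, gamma=0.05):
    """MH sampling of one continuous parameter in [lo, hi].
    Local move: Gaussian perturbation with std gamma * (hi - lo);
    global move: uniform resampling. Both proposals are symmetric, so the
    acceptance probability reduces to Eq. (11)."""
    theta = random.uniform(lo, hi)
    best = theta
    for _ in range(iters):
        if random.random() < p_local:               # local move (Gaussian)
            cand = random.gauss(theta, gamma * (hi - lo))
            if not lo <= cand <= hi:                # keep parameter in range
                continue
        else:                                       # global move (uniform)
            cand = random.uniform(lo, hi)
        # accept with probability min(1, p(cand) / p(theta))  -- Eq. (11)
        if math.log(random.random()) < log_post(cand) - log_post(theta):
            theta = cand
            if log_post(theta) > log_post(best):
                best = theta
    return best

# usage: maximize a toy log-posterior peaked at 0.3 over [0, 1]
random.seed(0)
est = metropolis_hastings(lambda t: -100.0 * (t - 0.3) ** 2, 0.0, 1.0)
```

Because the chain keeps the best-scoring sample, `est` ends up near the posterior peak.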

5.2 Early Rejection

The acceptance probability indicates that a proposed model with a larger similarity is more likely to be accepted than a model with a smaller similarity. Obtaining a more accurate similarity consumes more time, since more points have to be sampled from the model. However, we observe that it can be sufficient to determine dissimilarity by sampling only one point from the model. As shown in Fig. 3, one curve consists of a single horizontal line segment and the other consists of two vertical line segments; the two curves are dissimilar. The similarities computed by sampling one point (Fig. 3(c)) and four points (Fig. 3(a)) are the same and equal to the true similarity. However, if two points are sampled (Fig. 3(b)), the computed similarity is incorrect, as it suggests that the two curves are similar. It can be inferred that a small similarity between two objects means that the two objects are dissimilar, whereas a large similarity between two objects does not necessarily mean that they are really similar. In other words, a proposed model should be accepted carefully but can be rejected boldly.

Figure 3: Overlap between two curves (green and blue). Black dots represent the points sampled from the model. (a), (b) and (c) show that 4, 2 and 1 point(s) are sampled, respectively.

Consequently, to reduce the computational time, we propose a coarse-to-fine model dividing strategy for similarity calculation, which rejects dissimilar models in advance. We take a square surface as an example (as shown in Fig. 4); the conclusions can be easily adapted to other types of geometric models. Assuming that the side length of the square surface is $l$, given a predefined minimal dividing resolution $r$, the top dividing level is:

$T = \left\lceil \log_2 \left( l / r \right) \right\rceil$   (12)

At each level $t \in \{0, 1, \dots, T\}$, we uniformly divide the surface into $4^t$ sub-surfaces, and sample only one point (the center point) from each sub-surface to calculate the OHD. The similarity is then calculated to decide whether to accept or reject the proposed surface. If it is accepted, the surface is divided into more sub-surfaces and more points are sampled at the next higher level to obtain a more accurate similarity. Otherwise, a new surface is proposed.
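The dividing schedule above can be sketched as follows; Eq. (12) is used in the reconstructed form $T = \lceil \log_2(l/r) \rceil$, which follows from halving the sub-square side at each level until it reaches the minimal resolution (the example values of $l$ and $r$ are our own):

```python
import math

def top_level(side, resolution):
    """Top dividing level T (Eq. 12): halve the square's side until it
    reaches the minimal dividing resolution."""
    return max(0, math.ceil(math.log2(side / resolution)))

def level_samples(level, side=1.0):
    """Center points of the 4**level sub-squares at a dividing level (Fig. 4)."""
    n = 2 ** level                          # n x n grid of sub-squares
    step = side / n
    return [((i + 0.5) * step, (j + 0.5) * step)
            for i in range(n) for j in range(n)]

# coarse-to-fine: level 0 samples a single center point; each accepted level
# quadruples the number of sampled points, refining the similarity estimate
levels = top_level(4.0, 0.5)                # a 4 x 4 square with r = 0.5
points = [level_samples(k, 4.0) for k in range(levels + 1)]
```

A dissimilar model is typically rejected at level 0 after evaluating a single point, which is what makes the early rejection cheap.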

Figure 4: A square surface illustrating the coarse-to-fine model dividing. From left to right, the dividing level is 0, 1 and 2, respectively. The black dots represent the points sampled at the current level, and the white dots represent the points sampled at previous levels.

5.3 Pseudo Code

The pseudo code of our MH-PPGMF method is presented in Algorithm 1, where $p_k$ denotes the posterior computed at dividing level $k$. The minimal model dividing resolution $r$ should be set as small as possible to obtain an accurate similarity. To achieve better performance, parallel tempering with the same configuration as in [Talton et al. 2011] is also used; that is, several Markov chains are run at different temperatures and are randomly swapped.

input: a set of modeling rules with parameters $\theta$, query $Q$, posterior function $p$, computational budget $N$, standard deviation ratio $\gamma$, local move probability $p_l$, and minimal model dividing resolution $r$.
output: a maximum a posteriori estimate $\theta^*$ of $\theta$.

Randomly initialize $\theta_0$; $\theta^* \leftarrow \theta_0$
for $t = 1$ to $N$ do
     Randomly select a parameter of $\theta_{t-1}$
     Sample $u \sim \mathcal{U}(0, 1)$
     if $u < p_l$ and the parameter is continuous then
         Sample a candidate $\theta'$ by a local move
     else Sample a candidate $\theta'$ by a global move
     Compute the top dividing level $T$ of the model $M_{\theta'}$ according to $r$
     for $k = 0$ to $T$ do
         Sample $v \sim \mathcal{U}(0, 1)$
         if $v < \min\left(1,\ p_k(\theta' \mid Q) / p_k(\theta_{t-1} \mid Q)\right)$ then $\theta_t \leftarrow \theta'$
         else $\theta_t \leftarrow \theta_{t-1}$, break
     if $p(\theta_t \mid Q) > p(\theta^* \mid Q)$ then $\theta^* \leftarrow \theta_t$
Algorithm 1: MH-PPGMF with early rejection

6 Results

We implemented our method in C++ and conducted our experiments on a machine running Ubuntu 14.04 with an Intel Core i5-3470 3.20GHz CPU and 12GB RAM. In all experiments, we used a fixed setting of the optimization parameters, and the minimal dividing resolution $r$ was set at least 2 times smaller than the query resolution.

6.1 Metric Comparison

We compare several metrics with our WMM metric by fitting 4 models (Fig. 5) to 4 queries (Fig. 6). The first model is a ring-like surface between an outer square and an inner square which share the same center; the side length of the outer square is 4. The other three models are 0.75, 0.5 and 0.25 parts of the first model, respectively. As shown in Fig. 6, each query is sampled from a corresponding ground-truth model. In this paper, we refer to the target model of a query as the model which is partially similar to the ground-truth model. Therefore, for each query in Fig. 6, there is a target model in each model space (as shown in Fig. 5).

Figure 5: Model spaces. From left to right: the four model spaces. Each of these 4 spaces has only one parameter.
Figure 6: Queries. From left to right: the four queries. Each query is a point cloud uniformly sampled from the corresponding model with a resolution of 0.02. The four queries consist of 12288, 9216, 6144 and 3072 points, respectively.

The metrics used for comparison include the negative SHD (-SHD), the negative VD (-VD), the negative OHD from query to model (-OHDQM), and the inlier ratio (IR). VD has been used in [Talton et al. 2011] [Ritchie et al. 2015], while OHDQM has been used in [Ullrich et al. 2008]. As the foundation of many BGMF methods such as [Fischler and Bolles 1981] and [Isack and Boykov 2011], IR is defined as:

$\mathrm{IR}(M, Q) = \dfrac{\left|\{\, q \in Q : d(q, M) < \tau \,\}\right|}{|Q|}$   (13)

where $|\cdot|$ denotes the size of a discrete point set and $\tau$ is the inlier distance threshold. The comparison results of fitting the models (Fig. 5) to the queries (Fig. 6) are shown in Fig. 7. To compute SHD, OHDQM and WMM, we uniformly sample points from the models with a resolution of 0.01, which is half of the query resolution. In these 16 experiments, the weighting factor for the WMM calculation is 2.5, and the resolution for the VD calculation is 0.04.
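The inlier-ratio criterion of Eq. (13) can be sketched as follows; the threshold value used in the toy example is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def inlier_ratio(model_pts, query_pts, tau):
    """IR (Eq. 13): the fraction of query points lying within the inlier
    distance threshold tau of the sampled model."""
    d = np.linalg.norm(query_pts[:, None, :] - model_pts[None, :, :], axis=-1)
    return float((d.min(axis=1) < tau).mean())

# toy example: half of the query points are outliers far from the model
model = np.array([[0.0, 0.0], [1.0, 0.0]])
query = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 5.0], [1.0, 5.0]])
ratio = inlier_ratio(model, query, tau=0.1)   # → 0.5
```

Note that IR only counts covered query points, which is why it struggles to distinguish models whose inlier sets coincide, as observed in the experiments below.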

As the target models of the queries are the models at the ground-truth parameter value, it is expected that these models obtain the largest similarities. As shown in Fig. 7, our WMM is the only metric that achieves this goal in all experiments. SHD is successful for full fitting (Figs. 7(a), 7(f), 7(k) and 7(p)), but fails for partial fitting except in Fig. 7(g). IR fails to distinguish the models in all experiments except Fig. 7(k). The total computational time of these 16 experiments is shown in Fig. 8(a); it can be seen that WMM is faster than SHD.

Figure 7: Metric comparison results. From left to right: the results of fitting the four models to the queries. From top to bottom: the results of fitting the models to the four queries. The vertical axis denotes the normalized similarity; we uniformly normalize the similarities into a common range. The legend for these figures is presented in (a). The figures on the diagonal, above the diagonal and below the diagonal represent the results of full fitting on complete data, partial fitting on over-complete data, and partial fitting on incomplete data, respectively.
Figure 8: (a) Total time for similarity calculation in the experiments shown in Fig. 7. (b) -VD similarities with voxel resolutions 0.2, 0.08, 0.02, 0.01 and 0.005. (c) WMM similarities with weighting factor h = 0.5, 1, 2, 4 and 8.

It is worth noting that SHD, OHDQM and WMM always benefit from sampling the model at a smaller resolution, which yields a more accurate similarity. However, VD produces worse results at smaller resolutions for a discrete query: fine voxelization of a discrete query produces more empty voxels, so an empty model is preferred; one example is shown in Fig. 8(b). This indicates that voxelization is not suitable for the fine fitting of point clouds.

It is interesting to find that Fig. 7(c) resembles Fig. 7(d); actually, before normalization, the original similarities are different. As shown in Table 2, the WMM similarities are comparable across different model spaces for the same query. This table, together with Fig. 7, demonstrates that a model is more similar to itself than to any other model under WMM. Finally, we take one of the 16 fitting experiments as an example to evaluate the effect of the weighting factor $h$. As shown in Fig. 8(c), WMM is very stable with respect to different values of $h$.

          Model 1    Model 2    Model 3    Model 4
Query 1   1056.4     792.288    528.195    264.094
Query 2   239.651    792.288    528.195    264.094
Query 3   96.4585    113.127    528.195    264.094
Query 4   32.6646    34.4811    44.3055    264.094
Table 2: WMM similarities between the target models and the queries. The diagonal elements represent the similarities between the ground-truth models and the queries. It can be seen that, for each query, the similarity between the ground-truth model and the query is the largest.

6.2 Fitting Noisy Data

We evaluate our method against uniform and Gaussian noise by fitting a sphere model to 4 queries (Fig. 9). We use the method in [Marsaglia 1972] to sample points from the sphere surface, and replace the side length in Eq. (12) by the radius of the sphere. The sphere model has 4 parameters: 3 location parameters and 1 radius parameter. Some sample models are shown in the top row of Fig. 10. It is worth noting that all the model parameters involved in this paper are uniformly distributed; as a result, the optimization objective reduces from the posterior to the likelihood and hence to the rigid similarity. Besides, we stop model fitting casually in this paper, i.e., without a formal convergence criterion.
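The Marsaglia (1972) sampling step can be sketched as follows; the cloud size matches the 549-point noise-free query of Fig. 9, while the radius and seed are our own illustrative choices:

```python
import math
import random

def marsaglia_sphere_point(radius=1.0):
    """Sample a point uniformly on a sphere surface with the method of
    [Marsaglia 1972]: draw (x1, x2) uniformly in the unit disc via rejection,
    then map the pair onto the sphere."""
    while True:
        x1 = random.uniform(-1.0, 1.0)
        x2 = random.uniform(-1.0, 1.0)
        s = x1 * x1 + x2 * x2
        if s < 1.0:                        # accept only points inside the disc
            break
    f = math.sqrt(1.0 - s)
    return (2.0 * radius * x1 * f,
            2.0 * radius * x2 * f,
            radius * (1.0 - 2.0 * s))

random.seed(0)
cloud = [marsaglia_sphere_point(2.0) for _ in range(549)]
```

Every generated point lies exactly on the sphere, since $(2x_1 f)^2 + (2x_2 f)^2 + (1 - 2s)^2 = 1$ for $f = \sqrt{1 - s}$.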

Figure 9: Queries. From left to right: the four queries. The first query is a noise-free point cloud consisting of 549 points sampled from a unit sphere surface. The second and third queries are generated by adding low-level and high-level uniform noise to the first, and consist of 1098 and 2985 points, respectively. The uniform noise is distributed within a cube with a side length of 2; the cube and the unit sphere have the same center. The fourth query is generated by adding Gaussian noise with a standard deviation of 0.2 to the first.
Figure 10: Fitted sphere models (blue) along with the queries (green, Fig. 9). Top row: randomly initialized models. Bottom row: final fitted models (Fig. 11). From left to right: fitting the sphere to the four queries, respectively.

The resolution of the noise-free query (Fig. 9(a)) is 0.2. The results of these sphere fitting experiments are presented in Figs. 10 and 11. As shown in Figs. 11(a) and 11(b), the target similarities (log likelihood) remain the largest similarities after sufficient evolution time, which indicates that our method is robust to uniform noise. Similarly, Fig. 11(c) shows that our method is also robust to Gaussian noise, by which it is only slightly affected.

Figure 11: Sphere fitting results. (a), (b) and (c) are the results of fitting the sphere model to the three noisy queries (Fig. 9), respectively. LL, IPS and ER denote log likelihood, iterations per second, and early rejection, respectively. LL_ER, LL_NER, IPS_ER and IPS_NER denote the evolutions of LL with ER, LL without ER, IPS with ER, and IPS without ER, respectively. LL_Target denotes the target log likelihood.

However, uniform noise influences the efficiency of our method. In particular, tens of minutes were consumed to generate the results shown in Figs. 11(a) and 11(c), whereas several hours were needed to obtain a desirable model in Fig. 11(b). This is because high-level uniform noise introduces many local maxima into the objective function, making it difficult to find the global maximum. The IPS indicator in Fig. 11 shows the influence of our early rejection strategy: the optimization process is accelerated by about 3 times using early rejection. These experiments additionally demonstrate that our method is able to fit non-planar models.

6.3 Fitting Models with Length-Varying Parameters

Fitting a model with a varying number of parameters is more difficult than fitting a model with a fixed number of parameters. In this paper, we investigate two models (Figs. 12 and 13) with varying numbers of parameters, which are based on the CGA shape grammar [Müller et al. 2006]. Both are models of buildings. The models in the first model space consist of 4 facades, while the models in the second consist of 1 facade. The first model has 1 parameter for rotation, 3 parameters for location, 3 parameters for mass size (height, length and width), and a number of window size parameters that depends on the number of floors, i.e., on the height of the building. Different from the first model, the second does not have the width parameter. Windows on the same floor have the same size, but windows may have different sizes on different floors. Some sample models of both spaces are shown in Fig. 13.

(a)
(b)
(c)
(d)
Figure 12: Queries and target models. (a) and (b) are the two queries; (c) and (d) are models in the two model spaces, respectively. Both queries are part of the point cloud that is uniformly sampled from (c) with resolution 0.2, and (d) is a part of (c). Consequently, (c) and (d) are the target models of the queries in the respective model spaces. The query in (a) consists of 4204 points, while the query in (b) consists of 2452 points.
(a)
(b)
(c)
(d)
Figure 13: Some sample models of the two model spaces along with the corresponding queries.

The results of the three fitting experiments, i.e., fitting the first model space to the two queries (Figs. 12(a) and 12(b)) and fitting the second model space to its query, are shown in Figs. 14 and 15. All 3 experiments use the same parameter settings. As shown in Fig. 14, after 5000 iterations the mass parameters are correctly estimated, while the window parameters are not yet. Finally, all the parameters are correctly estimated. These 3 experiments show that our method is able to fit incomplete data with holes. The third experiment additionally demonstrates that our method is able to fit hybrid-complete data.
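Searching a space whose dimension varies with the number of floors requires proposals that can change the parameter count, not just perturb fixed parameters. A minimal sketch of such a trans-dimensional move is shown below; the probabilities, the default window size, and the `windows` field are all hypothetical, not taken from the paper.

```python
import random

def propose_floors(theta, rng):
    """Hypothetical trans-dimensional proposal: occasionally add or
    remove a floor (growing or shrinking the window-size list),
    otherwise perturb one existing window-size parameter."""
    theta = dict(theta, windows=list(theta["windows"]))  # copy, don't mutate
    r = rng.random()
    if r < 0.1 and len(theta["windows"]) > 1:
        theta["windows"].pop()               # remove the top floor
    elif r < 0.2:
        theta["windows"].append(1.0)         # add a floor with a default size
    else:
        i = rng.randrange(len(theta["windows"]))
        theta["windows"][i] += rng.gauss(0, 0.1)
    return theta
```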

(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
(i)
Figure 14: Fitted models. From left to right: results of the three fitting experiments. From top to bottom: the randomly initialized models, the fitted models after 5000 iterations, and the final fitted models.
(a)
(b)
(c)
Figure 15: Fitting results. (a), (b) and (c) are the results of the three fitting experiments, respectively.

6.4 Fitting Laser Scanning Data

We also conducted experiments on fitting real-world laser scanning point clouds, which were collected by mobile laser scanners [Guan et al. 2014] [Yu et al. 2016]. The results of fitting a facade model to a facade point cloud (Fig. 16(b)) are shown in Fig. 16. The facade model has 15 parameters. Instead of WMM, we use SMM (Eq. (6)) to perform this experiment; in contrast to WMM, no argument is needed to calculate SMM. We tested WMM and found that it is not very effective for this query, because the holes in the point cloud are corrupted; that is, there are undesired points within the holes. These corrupted holes are incorrectly recognized as missing data by WMM. In other words, although WMM is able to distinguish uncorrupted holes from missing data (as shown in Section 6.3), it is unable to distinguish corrupted holes from missing data. Fortunately, as shown in the results, SMM is able to deal with corrupted holes.
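The property that makes a metric robust to corrupted holes is directionality: if only model-to-query distances enter the score, undesired points sitting inside a hole cannot penalize a model. The sketch below shows a one-sided, kernel-weighted similarity in that spirit; it is our illustration of the idea, not Eq. (6) from the paper, and `sigma` is an assumed bandwidth.

```python
import math

def one_sided_similarity(model_pts, query_pts, sigma=0.2):
    """One-directional similarity: average a Gaussian kernel of each
    model point's distance to its nearest query point.  Because only
    model-to-query distances enter the score, spurious query points
    (e.g. inside corrupted holes) do not penalize the model."""
    def nearest_dist(p):
        return min(math.dist(p, q) for q in query_pts)
    scores = [math.exp(-0.5 * (nearest_dist(p) / sigma) ** 2)
              for p in model_pts]
    return sum(scores) / len(scores)
```

A symmetric metric would also sum query-to-model distances, and the spurious points inside holes would then drag the score down, which is the failure mode described above for WMM on corrupted holes.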

(a)
(b)
(c)
(d)
Figure 16: Fitted model. (a) An original point cloud consisting of 385793 points. (b) A query consisting of 23266 points, generated by downsampling the original point cloud (a) with a resolution of 0.2. (c) Final fitted model (after 84540 iterations and 36146.0820 seconds). (d) Overlap between the query and the final fitted model shown in (c).
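The query in Fig. 16 is produced by downsampling at a fixed resolution. A standard way to do this is voxel-grid downsampling, sketched below; this is a generic sketch, not necessarily the authors' exact procedure.

```python
def voxel_downsample(points, resolution=0.2):
    """Voxel-grid downsampling: partition space into cubic cells of the
    given edge length and keep one point (the centroid) per cell."""
    cells = {}
    for p in points:
        # Index of the cell containing p along each axis.
        key = tuple(int(c // resolution) for c in p)
        cells.setdefault(key, []).append(p)
    # One representative (centroid) per occupied cell.
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in cells.values()]
```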

7 Conclusion

We have proposed a novel rigid geometric similarity metric to measure the similarity between geometric models. Based on the proposed metric, we presented the first method to rigidly fit arbitrary geometric models to non-complete data. We formulate the fitting problem as a Bayesian inference problem and employ the MCMC technique to perform the inference. We also proposed a novel technique to accelerate the inference process. Our method has been demonstrated on various geometric models and non-complete data. Experimental results show that our metric is effective for fitting non-complete data, while other metrics are not. It is also shown that our method is robust to noise. In summary, our method is able to fit non-complete data without holes (Section 6.2), non-complete data with uncorrupted holes (Section 6.3), and over-complete data with corrupted holes (Section 6.4).

We believe our work bridges the gap between inverse procedural modeling and geometric model fitting. However, several issues remain open. For example, due to the curse of dimensionality, the fitting problem becomes intractable when the geometric model has a large number of parameters. New techniques such as deep learning [Nishida et al. 2016] [Ritchie et al. 2016] are expected to address this problem. Besides, it is also challenging for our method to fit incomplete data with corrupted holes, because a corrupted hole may be incorrectly recognized as missing data. This problem may be addressed by some preprocessing, e.g., filtering out the undesired points within each hole.

8 Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 41471379, 61602499 and 61471371, and by Fujian Collaborative Innovation Center for Big Data Applications in Governments. The authors would like to thank the reviewers for their comments.

References

  • [Aspert et al. 2002] Aspert, N., Santa Cruz, D., and Ebrahimi, T. 2002. MESH: Measuring errors between surfaces using the Hausdorff distance. IEEE International Conference on Multimedia and Expo, 705–708.
  • [Bokeloh et al. 2010] Bokeloh, M., Wand, M., and Seidel, H. 2010. A connection between partial symmetry and inverse procedural modeling. ACM Transactions on Graphics 29, 4, 104.
  • [Boulch et al. 2013] Boulch, A., Houllier, S., Marlet, R., and Tournaire, O. 2013. Semantizing complex 3D scenes using constrained attribute grammars. Computer Graphics Forum 32, 5, 33–42.
  • [Dang et al. 2015] Dang, M., Lienhard, S., Ceylan, D., Neubert, B., Wonka, P., and Pauly, M. 2015. Interactive design of probability density functions for shape grammars. ACM Transactions on Graphics 34, 6, 206.
  • [Debevec et al. 1996] Debevec, P., Taylor, C. J., and Malik, J. 1996. Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach. International Conference on Computer Graphics and Interactive Techniques.
  • [Demir et al. 2015] Demir, I., Aliaga, D. G., and Benes, B. 2015. Procedural editing of 3D building point clouds. ICCV.
  • [Fischler and Bolles 1981] Fischler, M. A., and Bolles, R. C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24, 6, 381–395.
  • [Guan et al. 2014] Guan, H., Li, J., Yu, Y., Wang, C., Chapman, M. A., and Yang, B. 2014. Using mobile laser scanning data for automated extraction of road markings. ISPRS Journal of Photogrammetry and Remote Sensing 87, 93–107.
  • [Guo et al. 2014] Guo, Y., Bennamoun, M., Sohel, F., Lu, M., and Wan, J. 2014. 3D object recognition in cluttered scenes with local surface features: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 11, 2270–2287.
  • [Hastings 1970] Hastings, W. K. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 1, 97–109.
  • [Huang et al. 2013] Huang, H., Brenner, C., and Sester, M. 2013. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing 79, 29–43.
  • [Isack and Boykov 2011] Isack, H., and Boykov, Y. 2011. Energy-based geometric multi-model fitting. International Journal of Computer Vision 97, 2, 123–147.
  • [Lafarge and Mallet 2012] Lafarge, F., and Mallet, C. 2012. Creating large-scale city models from 3D-point clouds: A robust approach with hybrid representation. International Journal of Computer Vision 99, 1, 69–85.
  • [Lafarge et al. 2010] Lafarge, F., Descombes, X., Zerubia, J., and Pierrotdeseilligny, M. 2010. Structural approach for building reconstruction from a single DSM. IEEE Transactions on Pattern Analysis and Machine Intelligence 32, 1, 135–147.
  • [Lake et al. 2015] Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. 2015. Human-level concept learning through probabilistic program induction. Science 350, 6266, 1332–1338.
  • [Li et al. 2011] Li, Y., Wu, X., Chrysathou, Y., Sharf, A., Cohenor, D., and Mitra, N. J. 2011. GlobFit: consistently fitting primitives by discovering global relations. ACM Transactions on Graphics 30, 4, 52.
  • [Lin et al. 2013] Lin, H., Gao, J., Zhou, Y., Lu, G., Ye, M., Zhang, C., Liu, L., and Yang, R. 2013. Semantic decomposition and reconstruction of residential scenes from LiDAR data. ACM Transactions on Graphics 32, 4, 66.
  • [Lin et al. 2015] Lin, Y., Wang, C., Cheng, J., Chen, B., Jia, F., Chen, Z., and Li, J. 2015. Line segment extraction for large scale unorganized point clouds. ISPRS Journal of Photogrammetry and Remote Sensing 102, 172–183.
  • [Marsaglia 1972] Marsaglia, G. 1972. Choosing a point from the surface of a sphere. Annals of Mathematical Statistics 43, 2, 645–646.
  • [Mathias et al. 2011] Mathias, M., Martinović, A., Weissenberg, J., and Van Gool, L. 2011. Procedural 3D building reconstruction using shape grammars and detectors. International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 304–311.
  • [Metropolis et al. 1953] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. 1953. Equation of state calculations by fast computing machines. The Journal of Chemical Physics 21, 6, 1087–1092.
  • [Monszpart et al. 2015] Monszpart, A., Mellado, N., Brostow, G. J., and Mitra, N. J. 2015. RAPter: rebuilding man-made scenes with regular arrangements of planes. ACM Transactions on Graphics 34, 4, 103.
  • [Muja and Lowe 2014] Muja, M., and Lowe, D. G. 2014. Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 11, 2227–2240.
  • [Müller et al. 2006] Müller, P., Wonka, P., Haegler, S., Ulmer, A., and Van Gool, L. 2006. Procedural modeling of buildings. ACM Transactions on Graphics 25, 3, 614–623.
  • [Musialski et al. 2013] Musialski, P., Wonka, P., Aliaga, D. G., Wimmer, M., Van Gool, L., and Purgathofer, W. 2013. A survey of urban reconstruction. Computer Graphics Forum 32, 6, 146–177.
  • [Nan et al. 2010] Nan, L., Sharf, A., Zhang, H., Cohenor, D., and Chen, B. 2010. SmartBoxes for interactive urban reconstruction. ACM Transactions on Graphics 29, 4, 93.
  • [Nishida et al. 2016] Nishida, G., Garcia-Dorado, I., Aliaga, D. G., Benes, B., and Bousseau, A. 2016. Interactive sketching of urban procedural models. ACM Transactions on Graphics.
  • [Poullis 2013] Poullis, C. 2013. A framework for automatic modeling from point cloud data. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 11, 2563–2575.
  • [Pu and Vosselman 2009] Pu, S., and Vosselman, G. 2009. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing 64, 6, 575–584.
  • [Ritchie et al. 2015] Ritchie, D., Mildenhall, B., Goodman, N. D., and Hanrahan, P. 2015. Controlling procedural modeling programs with stochastically-ordered sequential Monte Carlo. ACM Transactions on Graphics 34, 4, 105.
  • [Ritchie et al. 2016] Ritchie, D., Thomas, A., Hanrahan, P., and Goodman, N. D. 2016. Neurally-guided procedural models: Learning to guide procedural models with deep neural networks. arXiv preprint arXiv:1603.06143.
  • [Smelik et al. 2014] Smelik, R. M., Tutenel, T., Bidarra, R., and Benes, B. 2014. A survey on procedural modelling for virtual worlds. Computer Graphics Forum 33, 6, 31–50.
  • [Stava et al. 2014] Stava, O., Pirk, S., Kratt, J., Chen, B., Měch, R., Deussen, O., and Benes, B. 2014. Inverse procedural modelling of trees. Computer Graphics Forum 33, 6, 118–131.
  • [Talton et al. 2011] Talton, J. O., Lou, Y., Lesser, S., Duke, J., Měch, R., and Koltun, V. 2011. Metropolis procedural modeling. ACM Transactions on Graphics 30, 2, 11.
  • [Teboul et al. 2013] Teboul, O., Kokkinos, I., Simon, L., Koutsourakis, P., and Paragios, N. 2013. Parsing facades with shape grammars and reinforcement learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 7, 1744–1756.
  • [Toshev et al. 2010] Toshev, A., Mordohai, P., and Taskar, B. 2010. Detecting and parsing architecture at city scale from range data. CVPR, 398–405.
  • [Ullrich et al. 2008] Ullrich, T., Settgast, V., and Fellner, D. W. 2008. Semantic fitting and reconstruction. Journal on Computing and Cultural Heritage 1, 2.
  • [Vanegas et al. 2012a] Vanegas, C. A., Aliaga, D. G., and Benes, B. 2012. Automatic extraction of Manhattan-world building masses from 3D laser range scans. IEEE Transactions on Visualization and Computer Graphics 18, 10, 1627–1637.
  • [Vanegas et al. 2012b] Vanegas, C. A., Garcia-Dorado, I., Aliaga, D. G., Benes, B., and Waddell, P. 2012. Inverse design of urban procedural models. ACM Transactions on Graphics 31, 6, 168.
  • [Wan and Sharf 2012] Wan, G., and Sharf, A. 2012. Grammar-based 3D facade segmentation and reconstruction. Computers & Graphics 36, 4, 216–223.
  • [Wang and Xu 2016] Wang, J., and Xu, K. 2016. Shape detection from raw LiDAR data with subspace modeling. IEEE Transactions on Visualization and Computer Graphics.
  • [Weissenberg et al. 2013] Weissenberg, J., Riemenschneider, H., Prasad, M., and Van Gool, L. 2013. Is there a procedural logic to architecture? CVPR, 185–192.
  • [Yu et al. 2016] Yu, Y., Li, J., Wen, C., Guan, H., Luo, H., and Wang, C. 2016. Bag-of-visual-phrases and hierarchical deep models for traffic sign detection and recognition in mobile laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing 113, 106–123.
  • [Zheng et al. 2010] Zheng, Q., Sharf, A., Wan, G., Li, Y., Mitra, N. J., Cohenor, D., and Chen, B. 2010. Non-local scan consolidation for 3D urban scenes. ACM Transactions on Graphics 29, 4, 94.