Surface Reconstruction with Data-driven Exemplar Priors

01/12/2017 · Oussama Remil, et al. · Microsoft

In this paper, we propose a framework to reconstruct 3D models from raw scanned points by learning the prior knowledge of a specific class of objects. Unlike previous work that heuristically specifies particular regularities and defines parametric models, our shape priors are learned directly from existing 3D models under a framework based on affinity propagation. Given a database of 3D models within the same class of objects, we build a comprehensive library of 3D local shape priors. We then formulate the problem of selecting as few priors as possible from the library, referred to as exemplar priors, that are sufficient to represent the 3D shapes of the whole class of objects from which they are generated. By manipulating these priors, we are able to reconstruct geometrically faithful models of the same class of objects from raw point clouds. Our framework can be easily generalized to reconstruct various categories of 3D objects that have more geometrically or topologically complex structures. Comprehensive experiments exhibit the power of our exemplar priors for gracefully solving several problems in 3D shape reconstruction, such as preserving sharp features and recovering fine details.


1 Introduction

Surface reconstruction from raw point clouds of real-world objects is of great practical importance in computer-aided design and computer graphics. Given a set of unorganized points sampled from the surface of an object, the goal is to recover the original surface. This inverse problem is known to be inherently ill-posed if solved without any prior assumption. Despite recent advances in 3D scanning technology, the point clouds acquired with scanning devices inevitably contain imperfections, such as noise and density anisotropy. Consequently, it is even more challenging to ensure that the reconstructed surface is geometrically and topologically faithful to the original one when sharp features (such as edges and corners) co-exist with data artifacts.

Surface reconstruction has been studied extensively over the past few decades Berger2013 . Existing methods rely on certain prior assumptions, either about the sampling process or about the nature of the surface being processed. General prior assumptions, such as surface smoothness and a Gaussian noise distribution, often lead to inferior results on raw point clouds acquired from real-world objects Gal2007 . Several methods take advantage of explicitly defined priors inspired by a set of example shapes Pauly2005 ; Kraevoy2005 . Recently, a few works have leveraged repetition and symmetry assumptions to extract structural priors for surface completion and reconstruction Zheng2010 ; Sung2015 . A major drawback common to these methods is that the priors are defined by heuristics and hence suit only particular types of objects, e.g., planar or man-made ones. This limitation prevents these methods from reconstructing models with more general geometry that is not captured by the pre-defined priors.

A geometric surface can be reconstructed from a combination of local patches, which we refer to as local priors. The number of such priors can be huge, yet many of them share the same properties and are likely to have the same shape, especially when they are extracted from models within the same class of objects. In fact, models belonging to the same class possess a significant number of redundant local priors sharing the same geometry; these objects can therefore be represented by only a small number of distinct local patches. Based on this observation, we propose to learn a set of local priors from a database of 3D models with a clustering method, rather than pre-defining heuristic reconstruction priors. These priors represent the whole class of objects from which they are extracted with far fewer parameters, and they can be used to faithfully reconstruct the raw point cloud of an object belonging to that same class, see Figure 1.

In our framework, we assume no prior knowledge of the shape, and the potential priors that represent the objects are unknown in advance. Our task is more general in that it extracts the priors automatically through a learning process. Given a database of 3D models within the same class of objects, we sample point-set neighborhoods to build a comprehensive library of local shape priors from all those 3D shapes. Our purpose is to select a representative set of priors from the library that can reconstruct any model of the same class of objects. We formulate the problem as minimizing the reconstruction error of all database models using only the representative priors. To solve this problem, we resort to the Affinity Propagation (AP) algorithm Frey2007 : given the similarities between all priors in the library, it finds the subset of priors, referred to as exemplar priors, that maximizes the overall sum of similarities between the priors and their exemplars. These exemplar priors can then be used to reconstruct 3D shapes of the same class from raw point clouds.

Since our priors are generated directly from the database models, they carry additional geometric information, such as normals and point feature labels (i.e., regular point, edge point and corner point), and also contain abundant small features. This is advantageous in two ways: 1) it assists in the detection of sharp features in the input scan, and 2) it recovers the fine details within the input point data even when the raw point cloud is fairly noisy and sparse. As a result, the quality of the raw point data can be significantly enhanced, which benefits the subsequent surface reconstruction.

It is worth mentioning that Gal et al. Gal2007 also proposed an example-based surface reconstruction method for scanned point sets, which utilizes a library of local shape priors built from specifically chosen models. Their prior library is created by adding all local patches generated from the database models. As a consequence, the number of priors in the library can be huge, which makes the matching process quite time-consuming. As mentioned before, priors generated from a database within the same class may contain duplicates that share the same geometric properties, so the redundancy among the priors can be considerable. In contrast, our method learns only a set of representative priors from the library, which can be used in the reconstruction process with much less difficulty and time.

We demonstrate through experiments on synthetic and real scanned point clouds that our exemplar priors are an efficient tool for solving a variety of surface reconstruction problems. To sum up, the main contributions of this paper are:

  • We devise a framework for surface reconstruction from existing 3D models, which requires no interaction and can easily be generalized to reconstruct various categories of 3D objects that have more complex structures from raw scanned data.

  • We utilize a well-known clustering method (Affinity Propagation) to automatically learn the exemplar priors from a database of 3D shapes, which can represent the 3D shape of the whole class of objects.

2 Related Work

Surface reconstruction.

Surface reconstruction from range scanned data is a long-standing research topic that has received extensive attention from both the computer vision and computer graphics communities. We refer the reader to the comprehensive survey on recent surface reconstruction techniques Berger2013 . Shape reconstruction algorithms aim to convert an input point cloud, produced by aligned range scans that may contain noisy and incomplete data, into a high-quality surface model. An increasing number of surface reconstruction methods have emerged to handle this difficult and ill-posed problem. One class of methods is Delaunay-based surface reconstruction. These methods interpolate the surface by taking a subset of points from an unorganized input cloud as mesh vertices, such as the Cocone Dey2001 ; Amenta2002 and Power Crust Amenta2001 algorithms. They frequently produce jaggy surfaces in the presence of noise, and they require a pre-processing phase to improve smoothness.

Another class of algorithms uses surface approximation to generate a mesh from an iso-surface of a best-fit implicit function of the input point cloud Hoppe1992 ; Boissonnat2002 ; Kashdan2006 . Recently, significant effort has been placed on reconstructing surfaces with sharp features Guennebaud2007 ; Daniels2007 ; Lipman2007 ; Oztireli2009 . These methods can handle data imperfections to some extent, provided high-quality normals are available. However, normal estimation and orientation are challenging problems, especially when the point data are corrupted. In contrast, our algorithm directly transfers to the input raw scans the normals of the existing models, which can usually be estimated accurately from the triangular mesh models. Another difficulty for these approaches is preserving sharp features of the input point data: they can preserve sharp features in the face of a low level of noise, but have difficulty obtaining satisfactory results in the presence of a reasonably high level of noise.

Point cloud consolidation. Point cloud consolidation aims to clean up raw input data, removing a variety of data artifacts, and to provide essential geometric attributes. A variety of algorithms have been proposed for this purpose to benefit processing tasks such as sampling and surface reconstruction. For example, Schall et al. Schall2007 presented a noise removal method for static and time-varying range data. Rosman et al. Rosman2013 introduced a framework for point cloud denoising by patch-collaborative spectral analysis. Moving Least Squares (MLS) based methods can handle non-uniform data and noise, and are designed to produce smooth surfaces Levin2003 ; Alexa2003 . Variants of MLS have been developed to handle sharp features by fitting a local parametric surface at each point of the input cloud Guennebaud2007 ; Oztireli2009 .

Recently, a locally optimal projection (LOP) operator was introduced by Lipman et al. Lipman2007 . Given an input noisy point cloud, it outputs a new point set that adheres more faithfully to the underlying shape. On this basis, Huang et al. Huang2009 modified and extended LOP (weighted locally optimal projection) to deal with non-uniform distributions. These methods often fail to recover small features and details when the input data are fairly sparse and severely noisy. Comparatively, on the basis of the exemplar priors, our method is capable of augmenting and consolidating the raw point data such that the data quality is significantly enhanced. As a result, fine details and small features can be recovered successfully.

Non-local filtering. Non-local filtering methods have gained interest over the past decade. Previous works in the image processing field, such as Buades2005 , tackle this problem by introducing the notion of non-local means (NLM). The underlying idea is to denoise a pixel by exploiting pixels of the whole image that may carry the same information. This idea was later extended to surfaces, defined as meshes or point clouds Jones2003 ; Yoshizawa2006 ; Oztireli2009 ; Adams2009 ; Digne2012 , applying the non-local means concept to mesh and point cloud denoising. Schnabel et al. Schnabel2009 presented a hole-filling method where the reconstruction is guided by a set of primitive shapes, such as planes and cylinders, detected in the input point cloud. Zheng et al. Zheng2010 proposed a scan-consolidation approach to enhance and consolidate imperfect scans of urban models using local self-similarity. Guillemot et al. Guillemot2012 introduced a non-local point set surface model that is able to exploit surface self-similarity.

More related to our work, Hubo2008 ; Digne2014 proposed compression techniques that exploit self-similarity within a point-sampled surface. In Hubo2008 , the patches are clustered by similarity and replaced by the representative patch of each cluster, while in Digne2014 each patch is represented as a sparse linear combination over a dictionary. Our work investigates the same non-local idea as Gal2007 , using self-similar patches to enhance the input cloud data and reconstruct a faithful surface. Instead of using the whole library of patches, our method learns only a set of exemplar priors, requiring much less effort and time.


Figure 2: An overview of our algorithm. We extract priors from a 3D shape database within the same category (e.g., mechanical parts) to construct a prior library. The affinity propagation clustering method is then performed on the prior library to obtain the set of representative priors, called the exemplar priors. Given an input point cloud, we construct its local neighborhoods and perform priors matching to find the exemplar prior most similar to each local neighborhood. Subsequently, we utilize the matched exemplar priors to consolidate the input point scan through an augmentation procedure, with which we can generate a faithful surface where sharp features and fine details are well recovered.

3 Overview

Our proposed method aims to learn a set of exemplar priors from a given 3D shape repository, and to use them to reconstruct surfaces from raw scanned point data. The shape repository usually consists of a collection of 3D models of the same shape category. In our work, we explore the 3D shapes from available repositories such as the PSB benchmark Chen2009 , the Co-Seg dataset Wang2012 and the SCAPE database Scape2005 . Our algorithm comprises three major phases, as illustrated in Figure 2.

In the first phase, we build an initial library of 3D shape priors from a set of database models. Specifically, the 3D shapes are sampled, the seed centers of the priors are determined, and the 3D shape priors are created within a bounding sphere of a predefined radius. Geometric information, such as shape descriptors, normals, and labels for points belonging to sharp features, is extracted and stored for all priors (Section 4.1). The priors are then inserted into a search structure based on their shape descriptors.

The initial library consists of a large number of shape priors, which may contain numerous redundant elements. To address this issue, we aim to extract the most representative priors from the library. We formulate the problem as minimizing the reconstruction error of all database models using only the representative priors. For that purpose, we perform affinity propagation Frey2007 , which, given the similarities between all priors, selects the exemplar priors that maximize the overall sum of similarities between the priors and their exemplars (see Section 4.2). As a result, a small number of exemplar priors is obtained.

Once the exemplar priors are selected from the library, they can be utilized to augment an input raw point scan from the same class of objects the priors were created from in the first place. Given an input cloud, we first select the seed centers of the local neighborhoods such that the whole input cloud is covered, and then construct the local neighborhoods within the same bounding sphere used to create the priors. Priors matching and alignment procedures are then performed to match each local neighborhood to its nearest exemplar prior, transform the exemplar prior to the local frame, and finally register the corresponding prior onto the input scan using the ICP algorithm (see Figure 2). Consequently, the input raw point cloud is augmented using the exemplar priors and is well consolidated. By leveraging the moving least squares technique, we are able to generate a surface faithful to the input data, where sharp features and fine details are successfully recovered (see Section 4.3).

4 Algorithm

In this section, we discuss each phase of the proposed algorithm in more detail. We first construct a library of 3D shape priors from a given set of database models, compute prior descriptors, and perform Affinity Propagation to identify the subset of exemplar priors. Based on these priors only, we consolidate raw point clouds of the same class as the input 3D shapes and reconstruct faithful surface models.

4.1 Prior Library Creation

Prior library. A prior $\mathrm{Pr}_i$ of a 3D model is defined as the set of points lying within a sphere of radius $r$ centered at a seed point $c_i$. Given a 3D shape repository, with the assumptions that the models are represented as triangle meshes and belong to the same class, we construct a library $L$ of such local priors and use it as input for the learning process. The first step is to sample a set of points $S$ uniformly from each model; we then select a subset of seed centers $C$ from $S$ satisfying the following property: the seed centers create local priors within a bounding sphere of radius $r$ such that the point cloud is totally covered.

The seed centers are selected in a dart-throwing fashion, as in Digne2014 . Given the sampled point set $S$ of each model, we pick a point as the first seed center, label all its neighbors within radius $r$ as covered, and traverse $S$ until a non-covered point is found, which is added to $C$ as a new seed center; the process repeats until all points are covered. Note that the radius $r$ is chosen relative to the diagonal of the bounding box of each model. The priors are then denoted by:

$$\mathrm{Pr}_i = \{\, p \in S \mid d(c_i, p) \le r \,\} \qquad (1)$$

where $d(c_i, p)$ is the geometric distance between the center $c_i$ and the point $p$.
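
For concreteness, the dart-throwing seed selection can be sketched as follows. This is a minimal illustrative Python sketch, assuming the sampled points are stored in a numpy array; the function and variable names are ours, not the authors' implementation.

```python
# Illustrative sketch of the dart-throwing seed selection (Section 4.1).
import numpy as np
from scipy.spatial import cKDTree

def select_seed_centers(points, r):
    """Greedily pick seeds so that every point lies within radius r of some seed."""
    tree = cKDTree(points)
    covered = np.zeros(len(points), dtype=bool)
    seeds = []
    for i in range(len(points)):
        if covered[i]:
            continue                      # already within r of an earlier seed
        seeds.append(i)
        covered[tree.query_ball_point(points[i], r)] = True
    return points[seeds]
```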

Once the priors are constructed, we store the information of each prior, such as normals and points belonging to sharp features, for later use. We then perform weighted Principal Component Analysis (wPCA) on each prior, such that its principal axes coincide with the coordinate system axes and its center of mass is translated to the origin; in addition, we scale the largest principal component to 1 for all priors. Given a database of models, we thereby construct our initial prior library within the same standard coordinate system.
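
As a rough illustration of this canonical placement, the following sketch performs a weighted PCA alignment; the exact weighting scheme is our assumption, since it is not spelled out above.

```python
# A minimal weighted-PCA canonicalization sketch (assumed weighting scheme).
import numpy as np

def canonicalize(P, w):
    """Center P, rotate its principal axes onto x/y/z, scale the largest axis to 1."""
    c = np.average(P, axis=0, weights=w)           # weighted center of mass
    Q = P - c
    cov = (w[:, None] * Q).T @ Q / w.sum()         # weighted covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)           # eigenvalues in ascending order
    R = eigvec[:, ::-1]                            # descending: largest axis first
    Q = Q @ R                                      # express points in the principal frame
    s = np.abs(Q[:, 0]).max()                      # extent along the largest component
    return Q / s, (c, R, s)                        # keep the transform for later reuse
```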

Shape descriptor. Before clustering similar priors into groups, we must measure how similar the priors are to each other; in other words, we need a shape descriptor to represent each prior. Similar to Gal2007 , we adopt geometric moments as our prior descriptor, thanks to their efficient and easy computation. Since our library priors are formed by discrete points, we use an approximation to the integral form of the moments.

Given a set of points $\{p_i = (x_i, y_i, z_i)\}$, their moments are approximated by a summation over all the points. In order to make the moment-based representation invariant to the number of vertices of each prior, we introduce a weight $w_i$ for each point that is proportional to the area of the polygonal face Bimbo2006 from which that point was sampled. More formally:

$$M_{pqr} = \sum_i w_i \, x_i^{p} \, y_i^{q} \, z_i^{r} \qquad (2)$$

We then choose the subset of moments up to an order $d$ as our prior descriptor, and obtain the moment vector $D$:

$$D = \big( M_{pqr} \big)_{\,0 < p+q+r \le d} \qquad (3)$$

In our implementation, we choose a sequence of moments up to a fixed order $d$, resulting in a descriptor vector of fixed dimension. As a result, we are able to compute the corresponding descriptors for all priors in the library, see Algorithm 1.
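
A hedged sketch of the descriptor computation follows (Equations (2) and (3)); the maximal order d is left as a parameter, since its exact value is implementation-specific and not reproduced here.

```python
# Area-weighted geometric moments up to order d (Equations (2)-(3)).
import numpy as np
from itertools import product

def moment_descriptor(P, w, d):
    """Return the vector of moments M_pqr with 0 < p+q+r <= d."""
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    return np.array([np.sum(w * x**p * y**q * z**r)
                     for p, q, r in product(range(d + 1), repeat=3)
                     if 0 < p + q + r <= d])
```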

Input: Database models $M_1, \dots, M_n$, radius $r$.
Output: Prior library $L$.

Algorithm 1 Prior library creation
function PriorLibraryCreation($M_1, \dots, M_n$, $r$)
     for $i = 1$ to $n$ do
          $S_i \leftarrow$ sample($M_i$); sample points and store normals and feature points.
          $C_i \leftarrow$ subset($S_i$); find seed centers for priors.
          for $j = 1$ to size($C_i$) do
               $\mathrm{Pr}_j \leftarrow$ Equation (1); generate the prior.
               $\mathrm{Pr}_j \leftarrow$ Weighted PCA($\mathrm{Pr}_j$);
               $\mathrm{Pr}_j \leftarrow$ Canonical scaling($\mathrm{Pr}_j$);
               $D_j \leftarrow$ Equation (3); compute the descriptor.
               $L \leftarrow L \cup \{(\mathrm{Pr}_j, D_j)\}$; add the prior and its descriptor to the library.
          end for
     end for
     return $L$;
end function

4.2 Exemplar Priors Learning

Since the prior library is created from 3D models belonging to the same category of shapes, it may contain a significant number of redundant priors. In this section, we aim to extract a relatively small number of representative priors or exemplars from the library using the affinity propagation (AP) method.

Problem formulation. Given a library of priors $L$, our purpose is to reconstruct all database models by using only a subset of exemplar priors from $L$, denoted $E$, which minimizes the cost function defined by the reconstruction error of all these models. Specifically, we take advantage of the similarity measure between each pair of priors to identify the exemplar priors that minimize the standard squared-error measure of similarity. Fortunately, this problem can be solved by the affinity propagation method, which recursively transmits real-valued pairwise similarities between data points until a good set of exemplars is obtained. It searches for clusters so as to minimize an objective function, i.e., the reconstruction error.

Let $\mathcal{M} = \{M_1, \dots, M_n\}$ be the set of database models and $E$ the set of exemplar priors, i.e., $E \subseteq L$. To reconstruct all the input models, we search for each prior in $L$ the most similar prior from $E$; we then align the corresponding priors from $E$ onto their similar priors in each model to form an entire reconstructed model $\widetilde{M}_i$. Hence, we define the reconstruction accuracy as the geometric distance between each original model and the model reconstructed using the exemplar priors from $E$:

$$E_{\mathrm{rec}}(E) = \sum_{i=1}^{n} d\big(M_i, \widetilde{M}_i\big) \qquad (4)$$

Since the geometric moments inherently represent the shape, we replace the geometric distances between models $M_i$ and $\widetilde{M}_i$ with the sum of the Euclidean distances between the geometric moment vectors of their priors.

Affinity propagation. In the last decade, many clustering methods have been proposed. One particularly elegant and powerful method is affinity propagation (AP), proposed by Frey2007 , which can be adapted to solve our selection problem. It is an unsupervised learning algorithm for exemplar-based clustering that finds a set of data points best describing the input data based on the similarities between them. It associates each data point with an exemplar within the input data so as to maximize the overall sum of similarities between data points and their exemplars.

The algorithm takes as input a collection of real-valued similarities $s(i,k)$ between the priors, where the similarity $s(i,k)$ indicates how well prior $k$ is suited to be the exemplar for prior $i$. We set the similarity measure to the negative Euclidean distance between the geometric moment vectors of priors $i$ and $k$:

$$s(i,k) = -\,\lVert D_i - D_k \rVert^2, \quad i \ne k \qquad (5)$$

Suppose there are $N$ priors in the library, each with geometric moments expressed by a fixed-dimension vector. We construct the $N \times N$ similarity matrix by applying Equation (5) to each pair of priors in the library.

Affinity propagation does not require the number of clusters to be specified. It takes as input real values $s(k,k)$ for all $k$, called preferences, and these values influence the number of exemplar priors (a prior with a high preference value is more likely to be chosen as an exemplar). If all priors are equally suitable as exemplars, their preferences should be set to a shared value, such as the median of the input similarities, which results in a moderate number of exemplars (see Dueck2009 for more detail).

To decide which priors are exemplars, two kinds of messages are passed between the priors in the library: (1) the responsibility $r(i,k)$, sent from prior $i$ to candidate exemplar $k$, representing how well-suited prior $k$ is to serve as the exemplar of prior $i$; and (2) the availability $a(i,k)$, sent from candidate exemplar $k$ to prior $i$, indicating how appropriate it would be for prior $i$ to choose prior $k$ as its exemplar. Both messages are updated during each iteration until convergence, using the following equations:

$$r(i,k) \leftarrow s(i,k) - \max_{k' \ne k} \big\{ a(i,k') + s(i,k') \big\} \qquad (6)$$

for $i \ne k$:

$$a(i,k) \leftarrow \min\Big\{ 0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max\big\{0, r(i',k)\big\} \Big\} \qquad (7)$$

for $i = k$:

$$a(k,k) \leftarrow \sum_{i' \ne k} \max\big\{0, r(i',k)\big\} \qquad (8)$$

We assign to each prior $i$ a variable $c_i$; a value $c_i = k$ with $k \ne i$ means that prior $i$ is assigned to the cluster whose exemplar is prior $k$, while $c_i = i$ indicates that prior $i$ itself serves as an exemplar. At any point of the algorithm, responsibilities and availabilities can be combined to make the exemplar decisions and identify the cluster assignments, defined as:

$$c_i = \operatorname*{arg\,max}_{k} \big\{ a(i,k) + r(i,k) \big\} \qquad (9)$$

Convergence. The algorithm can be monitored by imposing certain termination conditions: a maximal number of iterations, changes in the messages falling below a threshold, or the local decisions staying constant for some number of iterations. We initialize the availabilities to zero, and in each iteration we (1) update the responsibilities given the availabilities; (2) update the availabilities given the responsibilities; (3) combine both to monitor the exemplar decisions; and (4) terminate the algorithm when the decisions do not change for 10 iterations, see Algorithm 2.
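
To make the message-passing loop concrete, here is a compact numpy sketch of the updates (6)-(9), following Frey and Dueck Frey2007 . The damping factor is a standard stabilization device from the original AP paper, not a parameter reported here.

```python
# Affinity propagation on a precomputed similarity matrix S (Equation (5)
# off the diagonal, preferences on the diagonal). Vectorized numpy sketch.
import numpy as np

def affinity_propagation(S, damping=0.5, max_iter=500, conv_iter=10):
    N = S.shape[0]
    R = np.zeros((N, N))                            # responsibilities r(i,k)
    A = np.zeros((N, N))                            # availabilities a(i,k)
    rows = np.arange(N)
    last, stable = None, 0
    for _ in range(max_iter):
        # Equation (6): r(i,k) = s(i,k) - max_{k' != k} {a(i,k') + s(i,k')}
        AS = A + S
        top = AS.argmax(axis=1)
        best = AS[rows, top].copy()
        AS[rows, top] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - best[:, None]
        Rnew[rows, top] = S[rows, top] - second
        R = damping * R + (1 - damping) * Rnew
        # Equations (7)-(8): availabilities from positive responsibilities
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())          # keep r(k,k) unclipped
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)                  # a(i,k) <= 0 for i != k
        np.fill_diagonal(Anew, diag)                # a(k,k) per Equation (8)
        A = damping * A + (1 - damping) * Anew
        # Equation (9): current exemplar decisions
        labels = (A + R).argmax(axis=1)
        stable = stable + 1 if last is not None and np.array_equal(labels, last) else 0
        last = labels
        if stable >= conv_iter:                     # decisions unchanged long enough
            break
    exemplars = np.flatnonzero(last == rows)        # priors that chose themselves
    return exemplars, last
```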

Input: Prior library $L$ with $N$ priors.
Output: Indices of the exemplar priors $E$.

Algorithm 2 Exemplar Priors Learning
function LearningPriors($L$)
     for $i = 1$ to $N$ do
          for $k = 1$ to $N$, $k \ne i$ do
               $s(i,k) \leftarrow$ Equation (5); construct the similarity matrix.
          end for
     end for
     $p \leftarrow \mathrm{median}(s)$; the median of $s$ as a shared preference value.
     for $k = 1$ to $N$ do
          $s(k,k) \leftarrow p$;
     end for
     $a(i,k) \leftarrow 0$; initialize availabilities.
     unconverged $\leftarrow$ true;
     while unconverged do
          $r(i,k) \leftarrow$ Equation (6); update responsibilities.
          $a(i,k) \leftarrow$ Equations (7) and (8); update availabilities.
          $c_i \leftarrow$ Equation (9); compute cluster assignments.
          unconverged $\leftarrow$ check convergence;
     end while
     $E \leftarrow \{\, k \mid c_k = k \,\}$; get indices of exemplar priors.
     return $E$;
end function

4.3 Consolidation and Reconstruction

Once the exemplar priors are obtained from the affinity propagation algorithm, they can be used to consolidate and reconstruct raw point clouds, in the same manner as Gal2007 , see Algorithm 3.

Given an input raw point scan $P$, we construct a set of local neighborhoods Nb from $P$, just as we process the initial priors in Section 4.1. We select a set of local centers by picking a first point as a seed, searching for its neighboring point set within a bounding sphere of radius $r$, labeling the neighboring points as covered, and continuing in the same manner until all points are covered. As a result, the set of local neighborhoods Nb is obtained. We transform the neighborhoods to a canonical placement using the weighted PCA positioning explained before, and store the transformations for later use. Subsequently, the shape descriptor is calculated for each transformed local neighborhood using Equation (3).

Once the shape descriptors of all local neighborhoods within $P$ are obtained, we search for each local neighborhood the most similar exemplar prior from $E$ by computing the Euclidean distances between their corresponding geometric moment vectors. To find more reliable matches, we do not rely solely on the distance between shape descriptors, but also use a more flexible matching refinement procedure based on a confidence score defined by the mean squared error of the point distances Gal2007 . Thus, similar priors can be reliably found for all local neighborhoods within the input scan.

After finding the corresponding exemplar prior for each local neighborhood, we apply the inverse of the stored transformations to each matched exemplar prior, and then align the priors onto the input data $P$. The Iterative Closest Point (ICP) registration technique Rusinkiewicz2001 is then performed to refine the alignment of the matched priors to $P$. Consequently, the raw point scan is consolidated and the data quality is significantly improved.
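
The matching-and-alignment step can be sketched as below; the descriptor search uses a k-d tree, and the registration is a plain point-to-point ICP with a Kabsch (SVD) rotation update, a simplified stand-in for the ICP variants of Rusinkiewicz2001 . All names are illustrative.

```python
# Match a canonicalized neighborhood to its nearest exemplar prior and
# register the prior onto the scan (simplified point-to-point ICP).
import numpy as np
from scipy.spatial import cKDTree

def nearest_exemplar(desc, exemplar_descs):
    """Index of the exemplar whose moment vector is closest to desc."""
    return cKDTree(exemplar_descs).query(desc)[1]

def icp_align(src, dst, iters=20):
    """Rigidly align point set src onto point set dst."""
    tree = cKDTree(dst)
    for _ in range(iters):
        nn = dst[tree.query(src)[1]]                # closest scan point per prior point
        mu_s, mu_d = src.mean(axis=0), nn.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_d))
        if np.linalg.det(U @ Vt) < 0:               # avoid reflections
            Vt[-1] *= -1
        Rm = (U @ Vt).T                             # Kabsch rotation
        src = (src - mu_s) @ Rm.T + mu_d
    return src
```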

Having the consolidated point cloud, we apply a reconstruction method similar to that of Gal2007 , based on the moving least squares technique Levin2003 . As a result, we are able to produce geometrically faithful surfaces, where sharp features and fine details are well preserved.

Input: Raw point scan $P$, exemplar priors $E$.
Output: Reconstructed surface $S$.

Algorithm 3 Surface Reconstruction
function SurfaceReconstruction($P$, $E$)
     $C \leftarrow$ subset($P$); centers of the local neighborhoods.
     for $i = 1$ to size($C$) do
          Nb$_i \leftarrow$ Equation (1); generate the local neighborhood.
          $T_i \leftarrow$ Weighted PCA(Nb$_i$); store the transformation.
          $s_i \leftarrow$ Canonical scaling(Nb$_i$); store the scale.
          $D_i \leftarrow$ Equation (3); geometric moments of the local neighborhood.
          $E_i \leftarrow$ ANN($D_i$, $E$); nearest exemplar prior.
          $E_i \leftarrow$ inverse transformation ($T_i$, $s_i$) of $E_i$;
          $E_i \leftarrow$ ICP($E_i$, Nb$_i$);
          $P \leftarrow P \cup E_i$; augmented point cloud.
     end for
     $S \leftarrow$ MLS($P$); moving least squares.
     return $S$;
end function

Figure 3: A set of exemplar priors for the chair and mechanical part datasets. The chair database contains chairs with different design styles; using our algorithm, a set of exemplar priors is obtained. Theoretically, those representative priors can be used to reconstruct any chair of a similar style. For the mechanical parts, we likewise choose a set of models and generate exemplar priors. For ease of exposition, we illustrate only a subset of the exemplar priors for both datasets.


Figure 4: Comparison to Gal et al. Gal2007 . (a) Training datasets with a set of learned exemplar priors. (b) Input point data and the reconstruction result from Poisson. (c) Augmentation of the input point data using the exemplar priors and the reconstructed model from our method. (d) Augmentation of the input cloud using all priors in the library and the reconstructed result from Gal et al. (e) The reconstruction errors and the running time in terms of different numbers of exemplar priors.

5 Results

In this section, we run our reconstruction method on a variety of raw point clouds and validate its effectiveness in terms of feature preservation and detail recovery. To demonstrate the power of our exemplar priors (Figure 3 illustrates the exemplar priors obtained for the chair and mechanical part datasets), we comprehensively compare our method to other related methods on both benchmark data and raw point scans. Additionally, we perform quantitative evaluations to measure the reconstruction quality of our algorithm.

5.1 Parameter Effects

Number of chosen priors. Figure 4 shows the effect of the number of chosen priors on the reconstruction quality and the running time. Given two collections of 3D models, we generate their respective prior libraries. The input raw scans in Figure 4 (a) are sparse and contain a certain level of noise. Using our algorithm, we can plot the reconstruction errors and the running time with respect to the number of exemplar priors. Note that our method learns only a small set of exemplar priors (around 5% of the prior library size), while the reconstruction in Gal et al. Gal2007 utilizes all priors from the library. Based on all original priors, the reconstruction quality is, unsurprisingly, the best and the error the lowest. However, the priors generated from a collection of 3D models of the same class can contain many duplicates that share similar geometric properties, so the number of priors in the library can be huge, which makes the matching process quite time-consuming. As can be seen in Figure 4 (e), as the number of chosen priors decreases, the reconstruction time decreases and the error grows. In contrast to Gal2007 , our method makes a good trade-off between the number of exemplar priors and the reconstruction quality, and consolidates input clouds based on a limited number of priors, which makes the priors matching more efficient, resulting in reconstructions up to three orders of magnitude faster than Gal2007 . With the set of exemplar priors learned by our algorithm, there is a good balance between reconstruction quality and running time. This suggests that we are able to reconstruct geometrically faithful models from raw point data with much less difficulty and time, even when the scanned model is geometrically and topologically complex.



Figure 5: Preserving sharp features on the Fandisk model. (a) Training dataset with a set of learned exemplar priors. (b) Input point data with Gaussian noise and the reconstruction result using RIMLS. (c) Augmentation of the input point data using the exemplar priors and the reconstructed model from our method. Comparatively, the sharp features are better preserved by our method.

Prior size. Our reconstruction method has one major parameter: the prior radius $r$, which affects the reconstruction quality and the computation time. Note that the value of the radius should be chosen depending on the size of the models in the dataset. If $r$ is too big, the priors possess more meaningful semantics with few redundant elements, resulting in a large number of exemplar priors, so the learning phase becomes somewhat time-consuming. In addition, the local neighborhoods may not find similar exemplar priors in the matching process, which can definitely harm the reconstruction quality. On the contrary, if the radius is too small, the number of priors in the library can be huge while most of them look like flat disks, leading to a small number of exemplar priors. As a result, both the learning and reconstruction phases are quite time-consuming. We choose $r$ between 0.05 and 0.1 times the diagonal of the bounding box of each 3D model. In practice, we have conducted a number of tests on different input scans and found that values of $r$ in the interval [0.05, 0.1] times the bounding-box diagonal always produce pleasing results. For datasets that are very similar to the shape being reconstructed (e.g., humans and armadillos), we set $r = 0.1$. For datasets that undergo some geometric variations, we fix $r$ at 0.05.
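
The radius choice itself reduces to a one-line computation; in the sketch below, the factor alpha in [0.05, 0.1] follows the values reported above, while the helper name is ours.

```python
# Prior radius relative to the model's bounding-box diagonal.
import numpy as np

def prior_radius(points, alpha=0.05):
    """r = alpha * bounding-box diagonal, with alpha in [0.05, 0.1]."""
    diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return alpha * diag
```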



Figure 6: Preserving sharp features on the model of a Mechanical part. (a) The training mechanical part dataset and a set of exemplar priors learned by our algorithm. (b) The raw scanned point data with Gaussian noise and the reconstruction result from RIMLS. (c) The augmented point data and the reconstruction result from our method. The close-up views clearly show that our method is capable of recovering sharp features even in the presence of noise.


Figure 7: Detail recovery on the Armadillo model. The first and second rows represent the front and the back views respectively. (a) The armadillo database used to learn the exemplar priors. (b) The input noisy raw data with a set of learned priors matched with their local neighborhoods. (c) The alignment of the exemplar priors into the input point cloud, where initial alignments are represented on the left and the final augmented point cloud is shown on the right. (d) Poisson reconstruction result of the input point cloud. (e) Our reconstruction result. Poisson reconstruction lacks significant details while our method successfully recovers fine details.

5.2 Reconstruction

Sharp feature preservation. Figure 5 compares the reconstruction results on the fandisk scan from RIMLS Oztireli2009 and our method. As can be seen in Figure 5(a), our algorithm is able to find the exemplar priors that represent the mechanical part dataset. Using these priors to match and augment the local neighborhoods of the input scan inherently improves the quality of the input cloud and preserves the sharp features in Figure 5(c), while the sharp features are heavily damaged by the RIMLS method in Figure 5(b). From the close-up views, the normals of the points around the sharp edges are well recovered by our approach, since they originate from the triangular meshes. Figure 6 shows the reconstruction result of a mechanical part using our method. We run RIMLS on the input scan data to get the reconstruction result in Figure 6(b), in which sharp features are severely blurred. By learning from a set of mechanical models, our algorithm chooses a set of exemplar priors in Figure 6(a), based on which the low-quality scan is pleasingly consolidated. The reconstruction result of the enhanced input cloud is shown in Figure 6(c), where all sharp edges are well preserved.

Detail recovery. To demonstrate the effectiveness of our method in terms of detail recovery, we generate surface models from the input scans using our algorithm and the Poisson reconstruction method on the armadillo and chair models. Figure 7 demonstrates how our method can effectively recover small details from noisy raw point data. Given a dataset of armadillo models from Chen2009 ; Jacobson2012 , we run our algorithm to learn a set of exemplar priors (Figure 7(a) illustrates only 15 of the 324 learned priors); the priors matching and point cloud augmentation procedures are shown in Figures 7(b) and 7(c), respectively. The surface reconstructed from the input scan using the Poisson method in Figure 7(d) lacks significant details, as shown in the zoom-in views. Comparatively, our method succeeds in recovering fine features based on the exemplar priors.


Figure 8: Detail recovery on the Chair model. (a) The chair database used to learn the exemplar priors. (b) The Poisson reconstruction result on the input raw points. (c) The reconstruction result using our method. Compared to the Poisson reconstruction, our method can recover more details, benefiting from the fine details carried by the exemplar priors, as demonstrated in the zoom-in views.


Figure 9: Dealing with distinct poses. (a) The training human database with various poses and a set of learned exemplar priors. (b) The input point scans. (c) The augmentation procedure on the input point data. (d) Our reconstruction results. Although the corresponding poses of the input scans are not included in the database, our exemplar priors still can effectively consolidate them and reconstruct geometrically faithful models.

Figure 8 illustrates the reconstruction results on a chair model. We run the Poisson reconstruction method on the input scan, where the surface is bumpy and important details are missing, as shown in Figure 8(b). By learning from a dataset of chair models Alhashim2015 ; Huang2015 , our algorithm chooses a total of 116 exemplar priors (8 of these priors are demonstrated in Figure 8(a)). Using those exemplar priors, our method is capable of recovering fine details, even from a sparse and noisy raw scan, as demonstrated in Figure 8(c).

Pose invariance. Another important characteristic of our learned exemplar priors is that they can deal with models from the same class of objects in different poses; in other words, they can be used to reconstruct a new model in a pose different from those used in the learning process. Figure 9 shows the reconstruction results on human models with distinct poses from the SCAPE database Scape2005 . We train our algorithm on 30 human models of the same person in a variety of poses; as a result, 296 exemplar priors are obtained that are able to represent the human model, as can be seen in Figure 9(a). Given sparse and noisy input scans of human models in Figure 9(b), we use the learned priors to consolidate and augment the input clouds. Figure 9(c) illustrates stages of the augmentation procedure, where the exemplar priors are matched and aligned to their corresponding local neighborhoods. The generated dense point clouds contain faithful normals, since these originate from the input mesh models, and therefore we are able to reconstruct geometrically faithful surface models where the important details are well recovered, see Figure 9(d).

Figure 10: Reconstruction comparison on the Anchor model from APSS, RIMLS, Poisson (depth=12) and our method. APSS and RIMLS can preserve sharp features to some extent, while a number of artifacts appear in their results. Poisson produces a favorably smooth model, where sharp features are blurred. In contrast, our result is relatively better.
Figure 11: Reconstruction comparison on the Daratech model from APSS, RIMLS, Poisson (depth=12) and ours. Our method outperforms the others in terms of sharp feature preservation in the face of noise.

5.3 Comparisons to state-of-the-art reconstruction techniques

To demonstrate the effectiveness of our method, we compare it to state-of-the-art reconstruction techniques, including RIMLS Oztireli2009 , APSS Guennebaud2007 and Poisson Kashdan2006 . We fine-tune the parameters of all those methods to the best of our ability in order to achieve the best experimental results. The experimental data include surface reconstruction benchmark data Berger2013 (the Anchor and Daratech point clouds), as well as the raw scanned data of two mechanical parts taken from GlobFit2011 .

Benchmark data. Figure 10 compares the reconstruction results on the Anchor model with sharp features. The input point data contain a certain level of noise. RIMLS and APSS are able to retain sharp features to some extent, but produce some bumpy artifacts. Moreover, a number of small holes appear in their results due to noise. Poisson yields a smooth, closed surface thanks to its reconstruction characteristics; however, sharp features are severely blurred. In contrast, our method consolidates the input point data based on the exemplar priors learned from a set of mechanical part models, and hence produces a quality surface where sharp features are well preserved. Figure 11 presents the comparison results on the Daratech model with small details. APSS, RIMLS and our method obtain relatively better results than Poisson in terms of retaining sharp features, though notable defects (e.g., holes) are still generated by APSS and RIMLS. As highlighted, RIMLS and our method preserve more details than APSS and Poisson do; however, RIMLS damages the small features. Comparatively, only ours achieves the more favorable result, in which fine details and sharp features are properly recovered.

Figure 12: Reconstruction comparison on the raw scan of the Clutch-part model from APSS, RIMLS, Poisson (depth=12) and ours.

Raw scans. Figure 12 shows the comparison results on the Clutch part with raw point data. The raw point cloud contains some noise, and the points are distributed non-uniformly. Moreover, the data in some regions are fairly sparse. Due to noise, APSS and RIMLS generate a number of artifacts (e.g., burrs), while Poisson obtains a visually pleasing surface. As shown in the close-up views, the edges of the cylindrical hole are collapsed by APSS and RIMLS. Poisson smooths out those sharp edges, which, however, are well preserved by our method. Figure 13 compares the reconstruction results on the Wheel model with raw point data. Our method outperforms the other three methods in terms of recovering sharp features, even for a fairly noisy scan.

Figure 13: Reconstruction comparison on the raw scan of the Wheel model from APSS, RIMLS, Poisson (depth=12) and ours.

5.4 Quantitative Evaluations

The results shown above visually demonstrate the superiority of our method regarding sharp feature preservation and detail recovery on raw scanned points. In this section, we evaluate our reconstruction approach quantitatively. We first show the Hausdorff distance histograms of the reconstruction results from Figures 10-13. We also investigate the robustness of our method to different levels of noise and sparsity in a quantitative manner. Raw scan data always suffer from varying sampling resolution, noise, and sparsity; while it is difficult to exactly simulate real scan data, we use synthetic experiments to validate the robustness.

Hausdorff distance. To measure the geometric fidelity of the reconstructed surface relative to the original scan, we compute the Hausdorff distance between the original scan and the surface mesh. Figure 14 shows detailed comparisons of the Hausdorff distances for Figures 10-13. The horizontal axis is the absolute distance between the reconstructed mesh and the original scan, and the vertical axis is the corresponding histogram (in percentage) for each distance value. From the histograms, we can see that our method yields shorter Hausdorff distances, indicating that it produces surfaces geometrically closer to the original models.
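
For reference, the symmetric Hausdorff distance between two point sets can be computed as in the sketch below; comparing against a reconstructed mesh additionally requires sampling points from the mesh surface, which we assume has been done beforehand.

```python
# Symmetric Hausdorff distance between two point sets A and B.
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(A, B):
    d_ab = cKDTree(B).query(A)[0].max()             # farthest A-point from B
    d_ba = cKDTree(A).query(B)[0].max()             # farthest B-point from A
    return max(d_ab, d_ba)
```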

Robustness to noise and sampling rates. To further evaluate the robustness of our reconstruction algorithm to noise and sampling rates, we choose two mesh models (a sphere and a cube) as ground truth, perform virtual scanning to generate the point data, reconstruct these point data using the adequate priors, and measure the precision against the ground truth, as demonstrated in Figure 15. For each scan, we reconstruct the surface and compute the reconstruction error between the surface and the ground truth using Equation (4). In this way, we are able to plot the reconstruction precision for the two models with respect to the level of noise (left) and the sampling rate (right). As observed, with a reasonably high level of noise and outliers, our prior-based reconstruction technique still achieves high precision. As the artifacts intensify and the corruption becomes extremely severe, the performance of our method drops considerably. When subsampling the original points, our approach still obtains good reconstruction precision; however, when the level of subsampling becomes fairly high, our matching-to-alignment procedure may fail to find the appropriate priors for the local neighborhoods, which can cause wrong matches, as shown in the third and fourth columns.

Figure 14: The histograms show the Hausdorff distances between the reconstructed models from the compared methods and the original models. The horizontal axis is the reconstruction error between the reconstructed model and the original one, and the vertical axis stands for the corresponding percentage to each error value.
Figure 15: Reconstruction robustness in terms of noise and density. Here we choose the appropriate priors to reconstruct the sphere point clouds with increasing level of noise (left) and the cube scans with different sampling rates (right). From those tendencies, we note that our reconstruction approach is quite robust to reasonably high levels of noise and sparsity.

5.5 Performance

We have implemented the technique presented in the previous sections in C++, and all experiments are performed on a PC with a 2.4 GHz CPU and 6.0 GB of RAM. The total running time of our algorithm splits into two parts: the exemplar prior learning phase and the reconstruction phase. The learning phase includes the construction of the prior library and the affinity propagation clustering; it is quite fast, and its cost depends on the number of input shapes and the number of priors in the library (the learning time increases as the prior library grows). The second phase of our algorithm is relatively slow, and its cost depends on the input point scan (note that the reconstruction phase includes constructing the local neighborhoods, the matching and alignment procedures for each neighborhood, and the MLS surface reconstruction). Overall, the time performance of our method is comparatively efficient. Table 1 gives the running times of our method and some of the crucial parameters used in our experiments.

5.6 Limitations

Our method is expected to behave well on different shape categories; meanwhile, a few limitations have to be discussed. Our algorithm fails when dealing with more challenging repositories that contain few redundant elements, such as complex organic shapes. In addition, if there are large holes or big missing parts within the input scans, our method may fail to complete them based on the “matching-to-alignment” strategy.

6 Conclusion and Future Work

In this paper, we present a pipeline for surface reconstruction based on learning local shape priors from a database of 3D shapes. Our algorithm exploits the self-similar patches extracted from a database of 3D shapes within the same class, uses them to enhance imperfect input scans of that class, and reconstructs a faithful surface. Our reconstruction framework can be generalized to handle various categories of 3D objects with different geometrical and topological structures. Instead of tuning magic threshold parameters of surface reconstruction techniques, our approach gives the user an intuitive way to extract priors from existing models for surface reconstruction, without depending on any pre-defined parametric model or heuristic regularities. A variety of experimental results on synthetic and raw point scans demonstrate that our method is capable of producing quality surfaces, where sharp features and fine details are recovered well in the presence of reasonably high levels of noise, outliers and sparsity. The comprehensive experiments also demonstrate that the set of exemplar priors learned from a shape database is a powerful tool for producing consolidated scans from poor and noisy point clouds, which can serve several applications.

In the future, we plan to develop the algorithm further to handle a variety of problems, such as filling holes and dealing with more challenging input point scans. Another interesting research direction is to mine global, high-level information from the training database models, such as local prior graphs, and then integrate it with the local information of the priors to handle various geometry processing applications.

Figure      Pts     M    E    r     L Time   R Time
Fig 5       3999    36   346  0.05  4m01s    6m47s
Fig 6       2999    36   346  0.05  4m01s    6m11s
Fig 7       13999   10   324  0.1   4m54s    13m02s
Fig 8       3499    22   116  0.05  2m17s    5m23s
Fig 9 (1)   5999    30   296  0.1   3m10s    7m53s
Fig 9 (2)   5999    30   296  0.1   3m10s    7m39s
Fig 10      107818  36   346  0.05  4m01s    12m25s
Fig 11      84682   36   346  0.05  4m01s    12m08s
Fig 12      84519   36   346  0.05  4m01s    11m41s
Fig 13      35565   36   346  0.05  4m01s    9m06s
Table 1: Running time statistics and the parameters used in our experiments. Pts: number of points of the input scan, M: size of the shape repository, E: number of exemplar priors, r: relative radius value, L Time: prior learning time, R Time: reconstruction time.

Acknowledgements

We thank the anonymous reviewers for their valuable comments and suggestions. The work was supported in part by National Natural Science Foundation of China (61402224), the Fundamental Research Funds for the Central Universities (NE2014402, NE2016004), the Natural Science Foundation of Jiangsu Province (BK2014833), the NUAA Fundamental Research Funds (NS2015053), and Jiangsu Specially-Appointed Professorship.

References

  • (1) M. Berger, J. A. Levine, L. G. Nonato, G. Taubin, C. T. Silva, A benchmark for surface reconstruction, ACM Trans. Graph. 32 (2) (2013) 20:1–20:17.
  • (2) R. Gal, A. Shamir, T. Hassner, M. Pauly, D. Cohen-Or, Surface reconstruction using local shape priors., in: Symposium on Geometry Processing, 2007, pp. 253–262.
  • (3) M. Pauly, N. J. Mitra, J. Giesen, M. H. Gross, L. J. Guibas, Example-based 3d scan completion., in: Symposium on Geometry Processing, Vol. 255 of ACM International Conference Proceeding Series, Eurographics Association, 2005, pp. 23–32.
  • (4) V. Kraevoy, A. Sheffer, Template-based mesh completion, in: Proceedings of the Third Eurographics Symposium on Geometry Processing, SGP ’05, Eurographics Association, Aire-la-Ville, Switzerland, Switzerland, 2005.
  • (5) Q. Zheng, A. Sharf, G. Wan, Y. Li, N. J. Mitra, D. Cohen-Or, B. Chen, Non-local scan consolidation for 3d urban scenes, ACM Trans. Graph. 29 (4) (2010) 94:1–94:9.
  • (6) M. Sung, V. G. Kim, R. Angst, L. Guibas, Data-driven structural priors for shape completion, ACM Trans. Graph. 34 (6) (2015) 175:1–175:11.
  • (7) B. J. Frey, D. Dueck, Clustering by passing messages between data points, Science 315 (2007) 2007.
  • (8) T. K. Dey, J. Giesen, Detecting undersampling in surface reconstruction., in: D. L. Souvaine (Ed.), Symposium on Computational Geometry, ACM, 2001, pp. 257–263.
  • (9) N. Amenta, S. Choi, T. K. Dey, N. Leekha, A simple algorithm for homeomorphic surface reconstruction, International Journal of Computational Geometry and Applications 12 (1-2) (2002) 125–141.
  • (10) N. Amenta, S. Choi, R. K. Kolluri, The power crust, unions of balls, and the medial axis transform., Comput. Geom. 19 (2-3) (2001) 127–153.
  • (11) H. Hoppe, T. DeRose, T. Duchamp, J. A. McDonald, W. Stuetzle, Surface reconstruction from unorganized points., in: J. J. Thomas (Ed.), SIGGRAPH, ACM, 1992, pp. 71–78.
  • (12) J.-D. Boissonnat, F. Cazals, Smooth surface reconstruction via natural neighbour interpolation of distance functions., Comput. Geom. 22 (1-3) (2002) 185–203.
  • (13) M. M. Kazhdan, M. Bolitho, H. Hoppe, Poisson surface reconstruction., in: Symposium on Geometry Processing, Vol. 256 of ACM International Conference Proceeding Series, Eurographics Association, 2006, pp. 61–70.
  • (14) G. Guennebaud, M. H. Gross, Algebraic point set surfaces., ACM Trans. Graph. 26 (3) (2007) 23.
  • (15) J. Daniels, L. K. Ha, T. Ochotta, C. T. Silva, Robust smooth feature extraction from point clouds, in: Shape Modeling International 2007, Proceedings, 2007.
  • (16) Y. Lipman, D. Cohen-Or, D. Levin, H. Tal-Ezer, Parameterization-free projection for geometry reconstruction., ACM Trans. Graph. 26 (3) (2007) 22.
  • (17) A. C. Öztireli, G. Guennebaud, M. H. Gross, Feature preserving point set surfaces based on non-linear kernel regression., Comput. Graph. Forum 28 (2) (2009) 493–501.
  • (18) O. Schall, A. Belyaev, H.-P. Seidel, Feature-preserving non-local denoising of static and time-varying range data, in: Proceedings of the 2007 ACM Symposium on Solid and Physical Modeling, SPM ’07, ACM, New York, NY, USA, 2007, pp. 217–222.
  • (19) G. Rosman, A. Dubrovina, R. Kimmel, Patch-Collaborative Spectral Point-Cloud Denoising, Computer Graphics Forum.
  • (20) D. Levin, Mesh-independent surface interpolation, in: Geometric Modeling for Scientific Visualization, Mathematics and Visualization, Springer Berlin Heidelberg, 2004, pp. 37–49.
  • (21) M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, C. T. Silva, Computing and rendering point set surfaces., IEEE Trans. Vis. Comput. Graph. 9 (1) (2003) 3–15.
  • (22) H. Huang, D. Li, H. Zhang, U. Ascher, D. Cohen-Or, Consolidation of unorganized point clouds for surface reconstruction, in: ACM SIGGRAPH Asia 2009 Papers, SIGGRAPH Asia ’09, ACM, New York, NY, USA, 2009, pp. 176:1–176:7.
  • (23) A. Buades, B. Coll, J.-M. Morel, A non-local algorithm for image denoising, in: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2, IEEE Computer Society, Washington, DC, USA, 2005, pp. 60–65.
  • (24) T. R. Jones, F. Durand, M. Desbrun, Non-iterative, feature-preserving mesh smoothing, in: ACM SIGGRAPH 2003 Papers, SIGGRAPH ’03, ACM, New York, NY, USA, 2003, pp. 943–949.
  • (25) S. Yoshizawa, A. Belyaev, H.-P. Seidel, Smoothing by example: Mesh denoising by averaging with similarity-based weights, in: IEEE International Conference on Shape Modeling and Applications 2006 (SMI 2006), IEEE VGTC, IEEE, Matsushima, JAPAN, 2006, pp. 38–44.
  • (26) A. Adams, N. Gelfand, J. Dolson, M. Levoy, Gaussian kd-trees for fast high-dimensional filtering, ACM Trans. Graph. 28 (3) (2009) 21:1–21:12.
  • (27) J. Digne, Similarity based filtering of point clouds, in: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, June 16-21, 2012, 2012, pp. 73–79.
  • (28) R. Schnabel, P. Degener, R. Klein, Completion and reconstruction with primitive shapes., Comput. Graph. Forum 28 (2) (2009) 503–512.
  • (29) T. Guillemot, A. Almansa, T. Boubekeur, Non local point set surfaces, in: Proceedings of the International Conference on 3D Imaging, Moleding, Processing, Visualization and Transmission (3DIMPVT), 2012.
  • (30) E. Hubo, T. Mertens, T. Haber, P. Bekaert, Special section: Point-based graphics: Self-similarity based compression of point set surfaces with application to ray tracing, Comput. Graph. 32 (2) (2008) 221–234.
  • (31) J. Digne, R. Chaine, S. Valette, Self-similarity for accurate compression of point sampled surfaces, Comput. Graph. Forum 33 (2) (2014) 155–164.
  • (32) X. Chen, A. Golovinskiy, T. Funkhouser, A benchmark for 3d mesh segmentation, ACM Trans. Graph. 28 (3) (2009) 73:1–73:12.
  • (33) Y. Wang, S. Asafi, O. van Kaick, H. Zhang, D. Cohen-Or, B. Chen, Active co-analysis of a set of shapes, ACM Trans. Graph. 31 (6) (2012) 165:1–165:10.
  • (34) D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, J. Davis, Scape: shape completion and animation of people, ACM Trans. Graph 24 (2005) 408–416.
  • (35) A. D. Bimbo, P. Pala, Content-based retrieval of 3d models, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 2 (1) (2006) 20–43.
  • (36) D. Dueck, Affinity propagation: Clustering data by passing messages (2009).
  • (37) S. Rusinkiewicz, M. Levoy, Efficient variants of the ICP algorithm, in: Third International Conference on 3D Digital Imaging and Modeling (3DIM), 2001.
  • (38) A. Jacobson, I. Baran, L. Kavan, J. Popović, O. Sorkine, Fast automatic skinning transformations, ACM Trans. Graph. 31 (4) (2012) 77:1–77:10.
  • (39) I. Alhashim, K. Xu, Y. Zhuang, J. Cao, P. Simari, H. Zhang, Deformation-driven topology-varying 3d shape correspondence, ACM Trans. Graph. 34 (6) (2015) 236:1–236:13.
  • (40) H. Huang, E. Kalogerakis, B. Marlin, Analysis and synthesis of 3d shape families via deep-learned generative models of surfaces, Computer Graphics Forum 34 (5) (2015).
  • (41) Y. Li, X. Wu, Y. Chrysathou, A. Sharf, D. Cohen-Or, N. J. Mitra, Globfit: Consistently fitting primitives by discovering global relations, in: ACM SIGGRAPH 2011 Papers, SIGGRAPH ’11, ACM, New York, NY, USA, 2011, pp. 52:1–52:12.