I. Introduction
The three-dimensional (3D) point cloud has become an important and popular signal representation of volumetric objects in 3D space [1]. A 3D point cloud can be acquired directly using low-cost depth sensors like the Microsoft Kinect or high-resolution 3D scanners like LiDAR. Moreover, multi-view stereo-matching techniques, whose typical output format is the point cloud, have been studied extensively in recent years to recover a 3D model from images or videos [2]. However, in either case, the output point cloud is inherently noisy, which has motivated numerous approaches to point cloud denoising [3, 4, 5, 6].
Moving least squares (MLS)-based methods [7, 8] and locally optimal projection (LOP)-based methods [9, 10] are two major categories of point cloud denoising approaches, but both are often criticized for over-smoothing [5, 6] due to their use of local operators. Sparsity-based methods [5, 4] assume a sparse representation of the point normals and achieve good performance [11]. However, at high noise levels, normal estimation can be so poor that it leads to over-smoothing or over-sharpening [5]. Non-local methods generalize the ideas of the non-local means [12] and BM3D [13] image denoising algorithms to point cloud denoising, and are shown to achieve state-of-the-art performance and robustness to high levels of noise [3, 4]. These methods exploit the inherent self-similarity between surface patches in the point cloud to preserve structural details, but their computational complexity is too high to be practical [3].
Exploiting an assumed self-similarity characteristic in images has long been a popular strategy in image processing [12, 13]. Extending these earlier works, a more recent work [14] proposed a new low-dimensional manifold model (LDMM), in which similar image patches are samples of a low-dimensional manifold in a high-dimensional space, and achieved state-of-the-art results in various inverse imaging applications, e.g., denoising, inpainting, and super-resolution.
Inspired by [14], we exploit the self-similarity of surface patches by assuming that the surface patches in the point cloud lie on a manifold of low dimension. In the optimization, the manifold dimension serves as a regularization prior, which we approximate by a discrete graph Laplacian regularizer since only discrete observations of the surface patches are available. The main contributions of our work are as follows:

By adopting the LDMM, we exploit the self-similarity characteristic of the surface and simultaneously denoise similar patches to better preserve sharp features;

The computation of the manifold dimension is approximated by a discrete graph Laplacian regularizer with low computational complexity; and

An efficient similarity measure for discrete patch pairs is designed for graph construction that is robust to noise.
In addition, we show that our graph Laplacian regularizer has a graph spectral interpretation and desirable numerical stability properties, demonstrated via eigen-analysis in the graph spectral domain [15]. Extensive simulation results show that our proposed method can outperform state-of-the-art methods in objective metrics and can better preserve visually salient features like edges.
The rest of the paper is organized as follows. Section II overviews existing works. Section III defines the patch manifold associated with a 3D point cloud. Section IV formulates the denoising problem by describing how the manifold dimension is computed and approximated with the graph Laplacian regularizer. The algorithm implementation is discussed in Section V, together with a graph spectral interpretation of the algorithm and a numerical stability analysis. Finally, Section VI presents experimental results and Section VII concludes the paper.
II. Related Work
Previous point cloud denoising works can be classified into four categories: moving least squares (MLS)-based methods, locally optimal projection (LOP)-based methods, sparsity-based methods, and non-local similarity-based methods.
MLS-based methods. MLS-based methods approximate a smooth surface from the input samples and project the points onto the resulting surface. To construct the surface, the method in [16] first finds, for each point, the local reference domain that best fits its neighboring points in the MLS sense, then defines a function above the reference domain by fitting a polynomial to the neighboring data. Several extensions have been proposed that either address the unstable reconstruction problem in regions of high curvature, e.g., algebraic point set surfaces (APSS) [7] and its variant in [17], or preserve shape features, e.g., robust MLS (RMLS) [18] and robust implicit MLS (RIMLS) [8]. These methods can robustly generate a smooth surface from extremely noisy input, but are often criticized for over-smoothing [5, 6].
LOP-based methods. Unlike MLS-based methods, LOP-based methods do not compute explicit parameters for the surface. For example, the LOP method in [9] outputs a set of points that represents the underlying surface while enforcing a uniform distribution over the point cloud via a repulsion term in the optimization. Its modifications include weighted LOP (WLOP) [19], which produces a more uniformly distributed output by adapting the repulsion term to the local density, and anisotropic WLOP (AWLOP) [10], which preserves sharp features by equipping WLOP with an anisotropic weighting function. LOP-based methods also suffer from over-smoothing due to their use of local operators, or generate extra features caused by noise [5, 6].

Sparsity-based methods. Sparsity-based methods assume a sparse representation of the point normals. A sparse reconstruction of the normals is obtained by solving a global minimization problem with a sparsity regularization term. The point positions are then updated by solving another global minimization problem based on a locally planar assumption. Examples include [20], as well as [5] and [4], which achieve state-of-the-art performance [11]. However, when the noise level is high, normal estimation can be so poor that it leads to over-smoothing or over-sharpening [5].
Non-local methods. Non-local methods generalize the ideas of the non-local means [12] and BM3D [13] image denoising algorithms to point cloud denoising. These methods exploit the self-similarities between surface patches in the point cloud. Some early attempts are [21] and [22], which adapt the non-local means algorithm, while more recent methods [3, 4] inspired by the BM3D algorithm are shown to outperform the non-local means approaches. However, their computational complexity is typically too high to be practical.
Our method belongs to the fourth category, the non-local methods. Similar to [3, 4], we also exploit the self-similarity among patches, via the low-dimensional manifold prior [14]. However, unlike [14], which requires time-consuming point integration, our proposed approach can be implemented efficiently thanks to the graph Laplacian regularizer, and preserves sharp features better than existing schemes.
III. Patch Manifold
We first define the notion of a patch manifold for a given point cloud $C = \{\mathbf{p}_1, \ldots, \mathbf{p}_N\}$, which is a (roughly uniform) discrete sampling of the 2D surface of a 3D object. Let $\mathbf{P} \in \mathbb{R}^{N \times 3}$ be the position matrix for the point cloud, whose $i$-th row is the 3D coordinate of point $\mathbf{p}_i$. The noise-corrupted observation $\mathbf{Q}$ can simply be modeled as:

$\mathbf{Q} = \mathbf{P} + \mathbf{E},$ (1)

where $\mathbf{P}$ contains the true 3D positions, $\mathbf{E}$ is a zero-mean signal-independent noise term (we assume Gaussian noise in our experiments), and $\mathbf{Q} \in \mathbb{R}^{N \times 3}$. To recover the true positions $\mathbf{P}$, we consider the low-dimensional manifold model (LDMM) [14] as a regularization prior for this ill-posed problem.
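As a concrete illustration of the observation model in (1), the following minimal sketch (function and parameter names are our own, not from the paper) corrupts an N x 3 position matrix with i.i.d. zero-mean Gaussian noise:

```python
import numpy as np

def add_gaussian_noise(P, sigma, seed=None):
    """Return Q = P + E, where E is i.i.d. zero-mean Gaussian noise with
    standard deviation sigma, applied to an N x 3 position matrix P."""
    rng = np.random.default_rng(seed)
    return P + rng.normal(0.0, sigma, size=P.shape)
```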
III-A. Surface Patch
We first define a surface patch in a point cloud. We select a subset of $M$ points from $C$ as the patch centers, i.e., $\{\mathbf{c}_1, \ldots, \mathbf{c}_M\} \subseteq C$. Then, the patch $\mathcal{P}_i$ centered at a given center $\mathbf{c}_i$ is defined as the set of the $k$ nearest neighbors of $\mathbf{c}_i$ in $C$, in terms of Euclidean distance.
The union of the patches should cover the whole point cloud, i.e., $\bigcup_{i=1}^{M} \mathcal{P}_i = C$. There can be different choices of patch centers, and this degree of freedom can be used to trade off computation cost against denoising performance. Let $\mathbf{p}^{(i)} \in \mathbb{R}^{3k}$ be the patch coordinates, composed of the $k$ points in $\mathcal{P}_i$.

III-B. Patch Manifold
Here we adopt the basic assumption in [14] that the patches sample a low-dimensional manifold $\mathcal{M}$ embedded in $\mathbb{R}^{3k}$, called the patch manifold associated with the point cloud $C$. In order to evaluate similarity among patches, we first need to align them; i.e., the coordinates of $\mathcal{P}_i$ are translated with respect to $\mathbf{c}_i$, so that $\mathbf{c}_i$ lies at the origin $\mathbf{0}$. We denote by $\bar{\mathbf{p}}^{(i)}$ the translated coordinates.
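The patch extraction and alignment just described can be sketched as follows (a brute-force nearest-neighbor search for clarity; a spatial data structure such as a k-d tree would be used in practice, and all names are our own):

```python
import numpy as np

def build_patches(P, center_idx, k):
    """For each chosen patch center, gather its k nearest neighbors (in
    Euclidean distance) and translate the patch so the center sits at the
    origin, as required for comparing patches on the patch manifold."""
    patches = []
    for c in center_idx:
        d = np.linalg.norm(P - P[c], axis=1)
        nn = np.argsort(d)[:k]            # k nearest points (center included)
        patches.append(P[nn] - P[c])      # align: center -> origin
    return patches
```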
III-C. Low-Dimensional Patch Manifold Prior
The LDMM prior assumes that the solution contains patches that minimize the patch manifold dimension. We can thus formulate a maximum a posteriori (MAP) problem with prior and fidelity terms as follows:

$\min_{\mathbf{P}} \; \|\mathbf{P} - \mathbf{Q}\|_F^2 + \gamma \, \dim(\mathcal{M}(\mathbf{P})),$ (2)

where $\gamma$ is a parameter that trades off the prior against the fidelity term, and $\|\cdot\|_F$ is the Frobenius norm. Note that, given a certain patch selection strategy, the patches are determined by the point cloud $\mathbf{P}$, and the patches in turn define the underlying manifold $\mathcal{M}$. Hence we view $\mathcal{M}$ as a function of $\mathbf{P}$.
The patches can be very different and sampled from different manifolds of different dimensions. For example, a flat planar patch belongs to a manifold of lower dimension than a patch with corners. The dimension of the patch manifold thus becomes a function of the patch, and the integral of this dimension over $\mathcal{M}$ is used as the regularization term,

$\dim(\mathcal{M}(\mathbf{P})) = \int_{\mathcal{M}} \dim(\mathcal{M}(\mathbf{x})) \, d\mathbf{x},$ (3)

where $\dim(\mathcal{M}(\mathbf{x}))$ is the dimension of $\mathcal{M}$ at $\mathbf{x}$. Here $\mathbf{x}$ is a point on $\mathcal{M}$.
IV. Problem Formulation
In this section, we discuss the calculation of the manifold dimension, approximate this computation with the graph Laplacian regularizer, and design a patch distance measure for the graph construction on the manifold.
IV-A. Manifold Dimension Computation
To solve (3), we first need to compute the manifold dimension $\dim(\mathcal{M}(\mathbf{x}))$. Here we overview how the manifold dimension is computed in [14].
First, let $\alpha_j$, where $j = 1, \ldots, 3k$, be the coordinate functions on the manifold $\mathcal{M}$ embedded in $\mathbb{R}^{3k}$, i.e.,

$\alpha_j(\mathbf{x}) = x_j, \quad \forall \, \mathbf{x} = [x_1, \ldots, x_{3k}]^\top \in \mathcal{M}.$ (4)

According to [14], the dimension of $\mathcal{M}$ at $\mathbf{x}$ is given by:

$\dim(\mathcal{M}(\mathbf{x})) = \sum_{j=1}^{3k} \|\nabla_{\mathcal{M}} \, \alpha_j(\mathbf{x})\|^2,$ (5)
where $\nabla_{\mathcal{M}} \, \alpha_j(\mathbf{x})$ denotes the gradient of the function $\alpha_j$ on $\mathcal{M}$ at $\mathbf{x}$. Then the integral of the dimension over $\mathcal{M}$ is given as,

$\int_{\mathcal{M}} \dim(\mathcal{M}(\mathbf{x})) \, d\mathbf{x} = \sum_{j=1}^{3k} \int_{\mathcal{M}} \|\nabla_{\mathcal{M}} \, \alpha_j(\mathbf{x})\|^2 \, d\mathbf{x} = \sum_{j=1}^{3k} \|\nabla_{\mathcal{M}} \, \alpha_j\|^2_{L^2(\mathcal{M})}.$ (6)
IV-B. Discretization of the Manifold Dimension
The formula in (6) is a sum of integrals over the continuous manifold along different dimensions, but our observations of the manifold are discrete and finite. Hence, we approximate (6) using a discrete graph Laplacian regularizer. In the following, we first introduce the graph construction on a manifold, which induces the graph Laplacian regularizer. Then we discuss how the graph Laplacian regularizer approximates the manifold dimension.
IV-B1. Constructing a Graph on a Manifold
We construct a discrete graph $G$ whose vertex set is the set of observed surface patches lying on $\mathcal{M}$, i.e., $\{\bar{\mathbf{p}}^{(1)}, \ldots, \bar{\mathbf{p}}^{(M)}\}$. Let $\mathcal{E}$ denote the edge set, where the edge between the $i$-th and $j$-th patches is weighted as,

$w_{ij} = \frac{1}{\bar{\rho}_i \, \bar{\rho}_j} \, \kappa\!\left( d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) \right).$ (7)

The kernel $\kappa(\cdot)$ is a thresholded Gaussian function,

$\kappa(d) = \begin{cases} \exp(-d^2 / \sigma^2), & d^2 \le \epsilon, \\ 0, & \text{otherwise}, \end{cases}$

and $d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)})$ is the Euclidean distance between the two patches $\mathcal{P}_i$ and $\mathcal{P}_j$,

$d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \|\bar{\mathbf{p}}^{(i)} - \bar{\mathbf{p}}^{(j)}\|_2.$ (8)

The term $\frac{1}{\bar{\rho}_i \bar{\rho}_j}$ is the normalization term, where $\bar{\rho}_i = \sum_{j} \kappa(d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}))$ is the degree of $\bar{\mathbf{p}}^{(i)}$ before normalization. The graph constructed under these settings is an $\epsilon$-neighborhood graph, i.e., no edge connects patches whose squared distance exceeds $\epsilon$. Here $\sigma$ is a constant controlling the decay of the kernel.
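Assuming the thresholded Gaussian kernel takes the common form exp(-d²/σ²) truncated at ε (our reading of the construction above), the edge weights, with the degree normalization omitted for brevity, can be sketched as:

```python
import numpy as np

def edge_weights(D2, eps, sigma):
    """Thresholded Gaussian kernel on squared patch distances D2: weight
    exp(-d^2 / sigma^2) when d^2 <= eps, and 0 otherwise, yielding an
    epsilon-neighborhood graph (no edge spans a squared distance above eps)."""
    W = np.exp(-D2 / sigma**2)
    W[D2 > eps] = 0.0
    np.fill_diagonal(W, 0.0)   # no self-loops
    return W
```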
IV-B2. Graph Laplacian Regularizer
With the edge weights defined above, we define the symmetric adjacency matrix $\mathbf{W} \in \mathbb{R}^{M \times M}$, with $(i,j)$-th entry given by $w_{ij}$. $\mathbf{D}$ denotes the diagonal degree matrix, with diagonal entries $d_{ii} = \sum_j w_{ij}$. The combinatorial graph Laplacian matrix is $\mathbf{L} = \mathbf{D} - \mathbf{W}$ [15].

For the coordinate function $\alpha_j$ on $\mathcal{M}$ defined in (4), sampling at the positions of the $M$ patches leads to its discretized version, $\boldsymbol{\alpha}_j \in \mathbb{R}^M$ with entries $(\boldsymbol{\alpha}_j)_i = \alpha_j(\bar{\mathbf{p}}^{(i)})$. The graph Laplacian induces the regularizer $\boldsymbol{\alpha}_j^\top \mathbf{L} \, \boldsymbol{\alpha}_j$. It can be shown that

$\boldsymbol{\alpha}_j^\top \mathbf{L} \, \boldsymbol{\alpha}_j = \sum_{(i,l) \in \mathcal{E}} w_{il} \left( \alpha_j(\bar{\mathbf{p}}^{(i)}) - \alpha_j(\bar{\mathbf{p}}^{(l)}) \right)^2.$ (9)
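A small sketch of the induced regularizer, illustrating that x^T L x equals the weighted sum of squared signal differences across edges (helper names are our own):

```python
import numpy as np

def combinatorial_laplacian(W):
    """L = D - W for a symmetric adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def glr(x, W):
    """Graph Laplacian regularizer x^T L x; equals the sum over edges of
    w_ij * (x_i - x_j)^2, i.e. small for signals smooth w.r.t. the graph."""
    L = combinatorial_laplacian(W)
    return float(x @ L @ x)
```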
IV-B3. Approximation with the Graph Laplacian Regularizer
We now show the convergence of the discrete graph Laplacian regularizer to the dimension of the underlying continuous manifold.
Assume that $\mathcal{M}$ is a Riemannian manifold with boundary, equipped with a probability density function (PDF) $\mu$ describing the distribution of the vertices on $\mathcal{M}$, and that $\alpha_j$ belongs to the class of Hölder functions [23] on $\mathcal{M}$. Then, according to the proof in [24], for a sufficiently large number of patches $M$ and a sufficiently small neighborhood parameter $\epsilon$, there exists a normalization constant $C_{M,\epsilon}$ depending only on $M$ and $\epsilon$ such that¹

$C_{M,\epsilon} \; \boldsymbol{\alpha}_j^\top \mathbf{L} \, \boldsymbol{\alpha}_j \; \longrightarrow \; \mathcal{S}(\alpha_j),$ (10)

¹We refer readers to [23] for the uniform convergence result in a more general setting and its corresponding assumptions on $\mathcal{M}$, $\mu$, and the graph weight kernel $\kappa$.

where $\mathcal{S}$ is the functional induced by the weighted Laplace-Beltrami operator on $\mathcal{M}$, which is given as

$\mathcal{S}(\alpha_j) = \int_{\mathcal{M}} \|\nabla_{\mathcal{M}} \, \alpha_j(\mathbf{x})\|^2 \, \mu^2(\mathbf{x}) \, d\mathbf{x}.$ (11)

Assuming that the vertices are uniformly distributed on $\mathcal{M}$, then $\mu(\mathbf{x}) = 1 / \mathrm{Vol}(\mathcal{M})$, and

$\mathcal{S}(\alpha_j) = \frac{1}{\mathrm{Vol}(\mathcal{M})^2} \int_{\mathcal{M}} \|\nabla_{\mathcal{M}} \, \alpha_j(\mathbf{x})\|^2 \, d\mathbf{x},$ (12)

where $\mathrm{Vol}(\mathcal{M})$ is the volume of the manifold $\mathcal{M}$. For implementation, similar to the setting in [24], the functional then becomes, up to the constant $1/\mathrm{Vol}(\mathcal{M})^2$,

$\mathcal{S}(\alpha_j) \propto \|\nabla_{\mathcal{M}} \, \alpha_j\|^2_{L^2(\mathcal{M})}.$ (13)

From (10) and (13), and weakening the uniform convergence of (10) to pointwise convergence, we obtain

$\sum_{j=1}^{3k} \boldsymbol{\alpha}_j^\top \mathbf{L} \, \boldsymbol{\alpha}_j \; \simeq \; \int_{\mathcal{M}} \dim(\mathcal{M}(\mathbf{x})) \, d\mathbf{x},$ (14)

where $\simeq$ means the two sides are equal up to a constant depending on $M$, $\epsilon$, and $\mathrm{Vol}(\mathcal{M})$. The rate of convergence depends on $M$ and $\epsilon$: as the number of samples increases and the neighborhood size shrinks, the graph Laplacian regularizer approaches its continuous limit. Moreover, if the manifold dimension is low, we can ensure a good approximation of the continuous regularization functional even if the manifold is embedded in a high-dimensional space. Consequently, given a point cloud, one can approximate the dimension of $\mathcal{M}$ with the $\boldsymbol{\alpha}_j$'s and the constructed graph Laplacian $\mathbf{L}$.
So far, we have approximated (6) with (14), but the above graph construction requires the patch coordinates to be ordered, so that the $\boldsymbol{\alpha}_j$'s can be defined and the patch distance in (8) determines the patch similarity. For example, if the patches are image patches of the same size, then the patch coordinates are naturally ordered according to pixel location, i.e., the $i$-th entry in the patch vector is the pixel value at the $i$-th location in the image patch. However, surface patches in a 3D point cloud are unstructured, and there is no natural way to impose a global coordinate ordering across all patches.

In the following, we first argue that the computation of the regularization term can be accomplished based on local pairwise correspondences between connected patches, relieving the need for global ordering. Then, we propose a patch distance measure for unstructured point clouds that is efficiently computed and robust to noise.
IV-C. Patch Distance Measure
IV-C1. From Global Coordinate Ordering to Local Correspondence
We rewrite the manifold dimension formula in (14):

$\sum_{j=1}^{3k} \boldsymbol{\alpha}_j^\top \mathbf{L} \, \boldsymbol{\alpha}_j = \sum_{j=1}^{3k} \sum_{(i,l) \in \mathcal{E}} w_{il} \left( \alpha_j(\bar{\mathbf{p}}^{(i)}) - \alpha_j(\bar{\mathbf{p}}^{(l)}) \right)^2$ (15)

$= \sum_{(i,l) \in \mathcal{E}} w_{il} \, \|\bar{\mathbf{p}}^{(i)} - \bar{\mathbf{p}}^{(l)}\|_2^2$ (16)

$= \sum_{(i,l) \in \mathcal{E}} w_{il} \; d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(l)})^2,$ (17)

where (17) follows from (16) according to the definition of $d(\cdot, \cdot)$ in (8). From (17) we see that $\boldsymbol{\alpha}_j$ is not needed to compute the graph Laplacian regularizer, and hence global coordinate ordering is not required. Moreover, since $w_{il}$ is itself a function of the patch distance via (7), we only need to find local pairwise correspondences between neighboring patches to compute the patch distances in (17).
IV-C2. Distance Measure in the Continuous Domain
To measure the distance between the $i$-th and $j$-th patches, ideally the two patches would be interpolated to two continuous surfaces, and the distance calculated as the integral of the surface distance over a local domain around the patch center.

To define the underlying surface, we first define a reference plane. In Fig. 1(a), we examine a 2D case for illustration. The reference plane is tangent to the center of patch $\mathcal{P}_i$ (the origin $\mathbf{0}$) and perpendicular to the surface normal $\mathbf{n}_i$ at $\mathbf{0}$. Then the surface distance for patch $\mathcal{P}_i$ with respect to normal $\mathbf{n}_i$ is defined as a function $f_i^{\mathbf{n}_i}(\mathbf{u})$, where $\mathbf{u}$ is a point on the reference plane; the superscript $\mathbf{n}_i$ indicates that the reference plane is perpendicular to $\mathbf{n}_i$, while the subscript $i$ indicates that the function describes patch $\mathcal{P}_i$. $f_i^{\mathbf{n}_i}(\mathbf{u})$ is then the perpendicular distance from $\mathbf{u}$ to the surface of $\mathcal{P}_i$ with respect to normal $\mathbf{n}_i$. The surface of $\mathcal{P}_j$ is similarly described by $f_j^{\mathbf{n}_i}(\mathbf{u})$. Note that $f_i^{\mathbf{n}_i}(\mathbf{0}) = f_j^{\mathbf{n}_j}(\mathbf{0}) = 0$ because the patches are centered at the origin, but their surface normals $\mathbf{n}_i$ and $\mathbf{n}_j$ are typically different.
The patch distance with respect to $\mathbf{n}_i$ is then computed as

$d^{\mathbf{n}_i}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \frac{1}{|\Omega_i|} \int_{\Omega_i} \left| f_i^{\mathbf{n}_i}(\mathbf{u}) - f_j^{\mathbf{n}_i}(\mathbf{u}) \right| \, d\mathbf{u},$ (18)

where $\Omega_i$ is the local neighborhood around $\mathbf{0}$ on the reference plane, and $|\Omega_i|$ is the area of $\Omega_i$. $d^{\mathbf{n}_i}$ denotes the distance measured with the reference plane perpendicular to $\mathbf{n}_i$.

Note that different reference planes lead to different distance values, so we alternately use $\mathbf{n}_i$ and $\mathbf{n}_j$ to define the reference plane. Fig. 1(b) illustrates the computation of $d^{\mathbf{n}_j}$ with the reference plane perpendicular to $\mathbf{n}_j$:

$d^{\mathbf{n}_j}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \frac{1}{|\Omega_j|} \int_{\Omega_j} \left| f_i^{\mathbf{n}_j}(\mathbf{u}) - f_j^{\mathbf{n}_j}(\mathbf{u}) \right| \, d\mathbf{u},$ (19)

where the functions $f_i^{\mathbf{n}_j}$ and $f_j^{\mathbf{n}_j}$ describe the surfaces of $\mathcal{P}_i$ and $\mathcal{P}_j$, respectively, and $\Omega_j$ is the local neighborhood around $\mathbf{0}$ on this reference plane. The patch distance $d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)})$ is then given as,

$d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \frac{1}{2} \left( d^{\mathbf{n}_i}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) + d^{\mathbf{n}_j}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) \right).$ (20)
IV-C3. Distance Measure with Discrete Point Observations
Since we only have discrete observations of the points on the patches, we instead measure the sum of distances between points with the same projection on the reference plane.
First, we compute $d^{\mathbf{n}_i}$, where the reference plane is perpendicular to the surface normal $\mathbf{n}_i$ of patch $\mathcal{P}_i$ at $\mathbf{0}$. Specifically, patch $\mathcal{P}_i$ is composed of points $\{\mathbf{q}_1^{i}, \ldots, \mathbf{q}_k^{i}\}$, while patch $\mathcal{P}_j$ is composed of $\{\mathbf{q}_1^{j}, \ldots, \mathbf{q}_k^{j}\}$. The surface normal $\mathbf{n}_i$ is given by,

$\mathbf{n}_i = \arg\min_{\|\mathbf{n}\|_2 = 1} \sum_{t=1}^{k} \left( \mathbf{n}^\top \mathbf{q}_t^{i} \right)^2.$ (21)

It can be shown via principal component analysis (PCA) [25] that the solution is the unit eigenvector corresponding to the smallest eigenvalue of the covariance matrix $\mathbf{C}_i$, given by,

$\mathbf{C}_i = \frac{1}{k} \sum_{t=1}^{k} \mathbf{q}_t^{i} \, \mathbf{q}_t^{i\top}.$ (22)

The same normal estimation method is used in the PCL library [1]. We then project both $\{\mathbf{q}_t^{i}\}$ and $\{\mathbf{q}_t^{j}\}$ onto the reference plane, obtaining the projections $\{\mathbf{u}_{t,i}^{i}\}$ and $\{\mathbf{u}_{t,i}^{j}\}$ respectively, where the second index in the subscript indicates that the reference plane is perpendicular to $\mathbf{n}_i$. The distances between $\mathbf{q}_t^{i}$ and $\mathbf{u}_{t,i}^{i}$ give the surface distances $f_i^{\mathbf{n}_i}(\mathbf{u}_{t,i}^{i})$, and the distances between $\mathbf{q}_t^{j}$ and $\mathbf{u}_{t,i}^{j}$ give $f_j^{\mathbf{n}_i}(\mathbf{u}_{t,i}^{j})$.
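The PCA normal estimation described above, taking the eigenvector of the patch covariance matrix associated with the smallest eigenvalue, can be sketched as follows (the mean subtraction is a no-op for patches already centered at the origin, but is kept for generality):

```python
import numpy as np

def estimate_normal(patch):
    """PCA normal estimation: the surface normal is the unit eigenvector of
    the patch covariance matrix associated with the smallest eigenvalue."""
    X = patch - patch.mean(axis=0)
    C = X.T @ X / len(X)                  # 3x3 covariance matrix
    evals, evecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    return evecs[:, 0]                    # eigenvector of smallest eigenvalue
```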
Ideally, for any $\mathbf{q}_t^{i}$ in patch $\mathcal{P}_i$, there exists a point in patch $\mathcal{P}_j$ with exactly the same projection, as shown in Fig. 2(a). In real datasets, however, $\mathbf{q}_t^{i}$ usually has no match in patch $\mathcal{P}_j$ with exactly the same projection, as illustrated in Fig. 2(b). In this case, we replace the displacement value of the desired match (the green point in Fig. 2(b)), which would share the projection of $\mathbf{q}_t^{i}$, with that of its nearest neighbor in patch $\mathcal{P}_j$ in terms of the distance between their projections. Then $d^{\mathbf{n}_i}$ is computed as,

$d^{\mathbf{n}_i}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \frac{1}{k} \sum_{t=1}^{k} \left| f_i^{\mathbf{n}_i}(\mathbf{u}_{t,i}^{i}) - f_j^{\mathbf{n}_i}(\mathbf{u}_{t^*,i}^{j}) \right|, \quad t^* = \arg\min_{s} \|\mathbf{u}_{t,i}^{i} - \mathbf{u}_{s,i}^{j}\|_2.$ (23)

Similarly, to compute $d^{\mathbf{n}_j}$, we define the reference plane with $\mathbf{n}_j$, then compute the projections $\{\mathbf{u}_{t,j}^{i}\}$, $\{\mathbf{u}_{t,j}^{j}\}$ and the corresponding displacements. For each $\mathbf{q}_t^{j}$ in patch $\mathcal{P}_j$, we match it to the closest point in patch $\mathcal{P}_i$ in terms of projection distance. Then $d^{\mathbf{n}_j}$ is computed as,

$d^{\mathbf{n}_j}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \frac{1}{k} \sum_{t=1}^{k} \left| f_j^{\mathbf{n}_j}(\mathbf{u}_{t,j}^{j}) - f_i^{\mathbf{n}_j}(\mathbf{u}_{t^*,j}^{i}) \right|, \quad t^* = \arg\min_{s} \|\mathbf{u}_{t,j}^{j} - \mathbf{u}_{s,j}^{i}\|_2.$ (24)

The final distance is given by

$d(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) = \frac{1}{2} \left( d^{\mathbf{n}_i}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) + d^{\mathbf{n}_j}(\bar{\mathbf{p}}^{(i)}, \bar{\mathbf{p}}^{(j)}) \right).$ (25)
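The discrete patch distance can be sketched as follows, a simplified nearest-neighbor version under our own naming, with per-point averaging and symmetrization over the two reference planes; the paper's exact normalization may differ:

```python
import numpy as np

def _proj_disp(points, n):
    """Decompose points into displacement along unit normal n and the
    projection onto the reference plane through the origin."""
    disp = points @ n
    proj = points - np.outer(disp, n)
    return proj, disp

def patch_distance(Pi, Pj, ni, nj):
    """Symmetrized discrete patch distance: for each point of one patch, match
    the point of the other patch with the closest projection on the reference
    plane, accumulate the displacement differences, then average the two
    directional measures (nearest-neighbor replacement, no interpolation)."""
    def one_way(A, B, n):
        pa, da = _proj_disp(A, n)
        pb, db = _proj_disp(B, n)
        total = 0.0
        for t in range(len(A)):
            m = np.argmin(np.linalg.norm(pb - pa[t], axis=1))
            total += abs(da[t] - db[m])
        return total / len(A)
    return 0.5 * (one_way(Pi, Pj, ni) + one_way(Pj, Pi, nj))
```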
IV-C4. Planar Interpolation
The pairwise correspondence above is based on nearest-neighbor replacement, though more accurate interpolation can be adopted. However, due to the large size of the point cloud, interpolating for all points can be expensive. Thus we use nearest-neighbor replacement when the distance between a point pair is under a threshold $\tau$; when the distance exceeds $\tau$, we apply the interpolation method described as follows.

As shown in Fig. 3(a), for a point $\mathbf{q}$, to find its corresponding interpolation on the other patch, we find its three nearest points $\mathbf{q}_1$, $\mathbf{q}_2$, $\mathbf{q}_3$ (also in terms of projection distance) to form a plane, and the interpolation is given by the projection of $\mathbf{q}$ along the normal vector $\mathbf{n}$ onto this plane. It can easily be derived that the distance between $\mathbf{q}$ and its interpolation is $\frac{|(\mathbf{q} - \mathbf{q}_1)^\top \mathbf{n}_t|}{|\mathbf{n}^\top \mathbf{n}_t|}$, where $\mathbf{n}_t$ is the normal vector of the plane formed by $\mathbf{q}_1$, $\mathbf{q}_2$, $\mathbf{q}_3$, and $\mathbf{n}$ is a unit vector.

Note that we choose to find the correspondence using projections on the reference plane (e.g., in Fig. 2(a)) instead of raw point positions, because projection is more robust to noise. For example, in Fig. 3(b), the underlying surfaces of the two patches are both planar, where the circle points belong to one patch and the star points belong to the other. The correct connections are between points along the vertical lines (in blue), which is what projection onto the reference plane accomplishes. On the other hand, if the connection is decided by point position, the resulting connections are erroneous (in orange) and thus lead to inefficient denoising. Point connection based on projection is therefore closer to the ground truth and more robust to noise.
IV-D. Graph Construction
Based on the above patch distance measure, the connection between the $i$-th and $j$-th patches is implemented as follows. If no interpolation is involved, the points in the $i$-th patch are connected with the nearest points in the $j$-th patch in terms of their projections on the reference plane decided by the surface normal of patch $\mathcal{P}_i$, and the points in the $j$-th patch are connected with the nearest points in the $i$-th patch in terms of their projections on the reference plane decided by the surface normal of patch $\mathcal{P}_j$. The edges are undirected and assigned the same weight $w_{ij}$ decided by (7).

If interpolation is involved, for example in Fig. 3(a), the weight between $\mathbf{q}$ and $\mathbf{q}_s$ is given by,

$w(\mathbf{q}, \mathbf{q}_s) = \frac{1/d_s}{\sum_{r=1}^{3} 1/d_r} \, w_{ij}, \quad s = 1, 2, 3,$ (26)

where $w_{ij}$ is the weight between patches $\mathcal{P}_i$ and $\mathcal{P}_j$, point $\mathbf{q}$ lies on patch $\mathcal{P}_i$, points $\mathbf{q}_1$, $\mathbf{q}_2$, $\mathbf{q}_3$ lie on patch $\mathcal{P}_j$, and $d_s$ is the distance between $\mathbf{q}$ and $\mathbf{q}_s$. To simplify the implementation, we set a search window consisting of the patches centered at the nearest patch centers, and evaluate patch distances between these nearest patches only, instead of between all patches in the point cloud.

In this way, the point-domain graph is constructed, giving the graph Laplacian $\mathbf{L} \in \mathbb{R}^{n \times n}$, where $n$ is the total number of points in all patches.
V. Algorithm Development
V-A. Denoising Algorithm
Given $\mathbf{L}$, the optimization (2) is reformulated as:

$\min_{\mathbf{P}} \; \|\mathbf{P} - \mathbf{Q}\|_F^2 + \gamma \sum_{c \in \{x,y,z\}} \mathbf{p}_c^\top \mathbf{L} \, \mathbf{p}_c,$ (27)

where $\mathbf{p}_x, \mathbf{p}_y, \mathbf{p}_z \in \mathbb{R}^n$ are the vectorized $x$-, $y$-, and $z$-coordinates of the points in the patches. Let $\mathbf{P}_\mathcal{P} = [\mathbf{p}_x \; \mathbf{p}_y \; \mathbf{p}_z]$; the regularizer can then be combined as $\mathrm{tr}(\mathbf{P}_\mathcal{P}^\top \mathbf{L} \, \mathbf{P}_\mathcal{P})$.

$\mathbf{P}_\mathcal{P}$ is related to the denoised 3D samples $\mathbf{P}$ as follows:

$\mathbf{P}_\mathcal{P} = \mathbf{S} \mathbf{P} - \mathbf{C},$ (28)

where $\mathbf{S} \in \{0,1\}^{n \times N}$ is a sampling matrix that selects points from the point cloud to form $M$ patches of $k$ 3D points each, and $\mathbf{C} \in \mathbb{R}^{n \times 3}$ performs the patch centering. Hence, the objective function can be rewritten as:

$\min_{\mathbf{P}} \; \|\mathbf{P} - \mathbf{Q}\|_F^2 + \gamma \, \mathrm{tr}\!\left( (\mathbf{S}\mathbf{P} - \mathbf{C})^\top \mathbf{L} \, (\mathbf{S}\mathbf{P} - \mathbf{C}) \right).$ (29)

The optimization is non-convex because of $\mathbf{L}$'s dependency on the patches in $\mathbf{P}$. To solve (29) approximately, we take an alternating approach: in each iteration, we fix $\mathbf{L}$ and solve for $\mathbf{P}$, then update $\mathbf{L}$ given $\mathbf{P}$, and repeat until convergence.
In each iteration, the graph Laplacian $\mathbf{L}$ is easy to update using the previously discussed graph construction strategy. To optimize $\mathbf{P}$ for fixed $\mathbf{L}$, each of the coordinates is given by,

$\mathbf{p}_c^* = \left( \mathbf{I} + \gamma \, \mathbf{S}^\top \mathbf{L} \, \mathbf{S} \right)^{-1} \left( \mathbf{q}_c + \gamma \, \mathbf{S}^\top \mathbf{L} \, \mathbf{c}_c \right), \quad c \in \{x, y, z\},$ (30)

where $c$ is the coordinate index, $\mathbf{q}_c$ and $\mathbf{c}_c$ are the $c$-th columns of $\mathbf{Q}$ and $\mathbf{C}$, and $\mathbf{I}$ is the identity matrix of the same size as $\mathbf{S}^\top \mathbf{L} \, \mathbf{S}$. We iteratively solve the optimization until the result converges. The proposed algorithm is referred to as Graph Laplacian Regularized point cloud denoising (GLR).

V-B. Graph Spectral Analysis
To impart intuition and demonstrate the stability of our computation, note that in each iteration we compute the optimal $x$-, $y$- and $z$-coordinates in (29) separately, resulting in the following systems of linear equations:

$\left( \mathbf{I} + \gamma \, \mathbf{S}^\top \mathbf{L} \, \mathbf{S} \right) \mathbf{p}_c = \mathbf{q}_c + \gamma \, \mathbf{S}^\top \mathbf{L} \, \mathbf{c}_c, \quad c \in \{x, y, z\},$ (31)

where $\mathbf{p}_c$, $\mathbf{q}_c$, and $\mathbf{c}_c$ represent the $c$-th coordinates (columns) of $\mathbf{P}$, $\mathbf{Q}$, and $\mathbf{C}$, respectively. In Section III, we assume that the union of all patches covers all points in the point cloud $C$, hence we can safely assume that every point is selected by $\mathbf{S}$ at least once.

Because $\mathbf{S}$ is a sampling matrix, we can define $\mathbf{L}_\mathbf{S} = \mathbf{S}^\top \mathbf{L} \, \mathbf{S}$ as a principal submatrix² of $\mathbf{L}$. Denote by $\lambda_1 \le \ldots \le \lambda_N$ the eigenvalues of matrix $\mathbf{L}_\mathbf{S}$. The solution to (31) can thus be written as:

²A principal submatrix $\mathbf{B}$ of an original larger matrix $\mathbf{A}$ is one where the $i$-th row and column of $\mathbf{A}$ are removed iteratively for different $i$.

$\mathbf{p}_c^* = \mathbf{U} \left( \mathbf{I} + \gamma \boldsymbol{\Lambda} \right)^{-1} \mathbf{U}^\top \left( \mathbf{q}_c + \gamma \, \mathbf{S}^\top \mathbf{L} \, \mathbf{c}_c \right),$ (32)

where $\mathbf{L}_\mathbf{S} = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^\top$ is an eigen-decomposition³ of matrix $\mathbf{L}_\mathbf{S}$; i.e., $\mathbf{U}$ contains as columns the eigenvectors $\mathbf{u}_1, \ldots, \mathbf{u}_N$, and $\boldsymbol{\Lambda}$ is a diagonal matrix containing the eigenvalues $\lambda_1, \ldots, \lambda_N$ on its diagonal. In graph signal processing (GSP) [15], the eigenvalues and eigenvectors of a variational operator ($\mathbf{L}_\mathbf{S}$ in our case) are commonly interpreted as graph frequencies and frequency components. $\mathbf{U}^\top$ is thus an operator (called the graph Fourier transform (GFT)) that maps a graph signal to its GFT coefficients.

³The eigen-decomposition is possible because the target matrix is real and symmetric.

Observing that $(\mathbf{I} + \gamma \boldsymbol{\Lambda})^{-1}$ in (32) is a diagonal matrix:

$\left( \mathbf{I} + \gamma \boldsymbol{\Lambda} \right)^{-1} = \mathrm{diag}\!\left( \frac{1}{1 + \gamma \lambda_1}, \ldots, \frac{1}{1 + \gamma \lambda_N} \right),$ (33)

we can thus interpret the solution in (32) as follows. The noisy observation (offset by the centering term) is transformed to the GFT domain via $\mathbf{U}^\top$ and low-pass filtered per coefficient according to (33); low-pass because the weights $\frac{1}{1 + \gamma \lambda_l}$ for low frequencies $\lambda_l$ are larger than the weights $\frac{1}{1 + \gamma \lambda_h}$ for high frequencies $\lambda_h > \lambda_l$. The fact that we are performing 3D point cloud denoising via graph spectral low-pass filtering should not be surprising.
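The low-pass interpretation can be checked numerically with a small sketch (assuming filter gains of the form 1/(1 + γλ), consistent with the regularized solve; helper names are our own):

```python
import numpy as np

def gft_filter_response(W, gamma):
    """Eigen-decompose L = U Lambda U^T and return the graph frequencies
    lambda_k together with the per-frequency gains 1 / (1 + gamma * lambda_k)
    of the regularized solution: gains decay as frequency grows, i.e. a
    low-pass graph spectral filter."""
    L = np.diag(W.sum(axis=1)) - W
    lams, U = np.linalg.eigh(L)           # ascending graph frequencies
    return lams, 1.0 / (1.0 + gamma * lams)
```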
V-C. Numerical Stability via Eigen-Analysis
We can also assess the stability of the systems of linear equations in (31) via the following eigen-analysis. During graph construction, each edge weight is computed using (7) and is upper-bounded by some $w_{\max}$. Denote by $d_{\max}$ the maximum degree of a node in the graph, which in general satisfies $d_{\max} \ll n$. According to the Gershgorin circle theorem [25], given a matrix $\mathbf{A}$, the $i$-th Gershgorin disc has radius $r_i = \sum_{j \neq i} |a_{ij}|$ and center at $a_{ii}$. For a combinatorial graph Laplacian $\mathbf{L}$, the maximum Gershgorin disc radius is upper-bounded by the maximum node degree multiplied by the maximum edge weight, i.e., $d_{\max} w_{\max}$. Further, the diagonal entry $l_{ii} = \sum_{j \neq i} w_{ij} = r_i$ for positive edge weights. Thus all Gershgorin discs for a combinatorial graph Laplacian matrix have their left ends located at $0$. By the Gershgorin circle theorem, all eigenvalues must lie inside the union of the Gershgorin discs, so the maximum eigenvalue of $\mathbf{L}$ is upper-bounded by twice the radius of the largest possible disc, i.e., $\lambda_{\max}(\mathbf{L}) \le 2 \, d_{\max} w_{\max}$.
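The Gershgorin-based spectral bound can be verified numerically on a random small graph (a sanity-check sketch, not part of the algorithm):

```python
import numpy as np

def gershgorin_check(W):
    """For a combinatorial Laplacian L = D - W, return its eigenvalues and the
    Gershgorin bound 2 * max weighted degree: each disc is centered at the
    degree with radius equal to the degree, so the spectrum lies in
    [0, 2 * max degree]."""
    deg = W.sum(axis=1)
    L = np.diag(deg) - W
    return np.linalg.eigvalsh(L), 2.0 * deg.max()
```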
Now consider the principal submatrix $\mathbf{L}_\mathbf{S}$ of the original matrix $\mathbf{L}$. By the eigenvalue interlacing theorem, the largest eigenvalue of $\mathbf{L}_\mathbf{S}$ is upper-bounded by $\lambda_{\max}(\mathbf{L})$. For the matrix $\mathbf{I} + \gamma \mathbf{L}_\mathbf{S}$, the smallest eigenvalue satisfies $\lambda_{\min}(\mathbf{I} + \gamma \mathbf{L}_\mathbf{S}) \ge 1$, because: i) adding $\mathbf{I}$ shifts all eigenvalues of $\gamma \mathbf{L}_\mathbf{S}$ to the right by $1$, and ii) $\mathbf{L}_\mathbf{S}$ is PSD due to the eigenvalue interlacing theorem and the fact that $\mathbf{L}$ is PSD. We can thus conclude that the condition number⁴ of the matrix on the left-hand side of (31) is upper-bounded as follows:

⁴Assuming the $\ell_2$ norm is used and the matrix is normal, the condition number is defined as the ratio $\lambda_{\max} / \lambda_{\min}$.

$\mathrm{cond}\!\left( \mathbf{I} + \gamma \mathbf{L}_\mathbf{S} \right) \le 1 + 2 \gamma \, d_{\max} w_{\max}.$ (34)

Hence for sufficiently small $\gamma$, the linear systems of equations in (31) have stable solutions, efficiently computed using indirect methods like conjugate gradient (CG).
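A minimal sketch of one per-coordinate solve with a hand-rolled conjugate gradient; the sampling and centering matrices are omitted for brevity, so the system reduces to (I + γL) p = q (all names are our own):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def solve_coordinate(q_c, W, gamma):
    """One simplified per-coordinate denoising step: solve (I + gamma*L) p = q_c.
    The system is SPD and well-conditioned for small gamma, so CG converges
    quickly."""
    L = np.diag(W.sum(axis=1)) - W
    return conjugate_gradient(np.eye(len(W)) + gamma * L, q_c)
```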
V-D. Complexity Analysis
The complexity of the algorithm rests on two main procedures: the patch-based graph construction, and the solution of the systems of linear equations. For graph construction, the nearest patches to be connected among the $M$ patches can be found efficiently with a spatial data structure such as a k-d tree, and each pairwise patch distance takes time roughly linear in the patch size $k$, so the total cost scales with the number of connected patch pairs. The systems of linear equations can be solved efficiently with CG-based methods, whose per-iteration cost is dominated by sparse matrix-vector products with $\mathbf{L}$. Details of the parameter settings are given in Section VI.
VI. Experimental Results
The proposed scheme, GLR, is compared with existing works: APSS [7], RIMLS [8], AWLOP [10], and the state-of-the-art moving robust principal components analysis (MRPCA) algorithm [4]. APSS and RIMLS are implemented in the MeshLab software [26], AWLOP is implemented in the EAR software [10], and the MRPCA source code is provided by its author. The models we use are Fandisk, Gargoyle, DC, Daratech, Anchor, and Lordquas, provided in [3] and [4].
VI-A. Implementation Details
To speed up the implementation, we take 50% of the points as the patch centers, using farthest point sampling [27] to ensure a spatially uniform selection. The search window size is set to 16, and the patch size is set to 30. Parameters that require tuning include the trade-off parameter $\gamma$ in the objective and the kernel parameter used in computing the graph edge weights; these are discussed in detail for particular models in Sections VI-B and VI-C.
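The farthest point sampling used for selecting patch centers admits a straightforward greedy implementation (our own sketch, seeded arbitrarily at point 0):

```python
import numpy as np

def farthest_point_sampling(P, m):
    """Greedy farthest point sampling: starting from point 0, repeatedly add
    the point farthest from the current selection, producing spatially
    uniform patch centers."""
    chosen = [0]
    d = np.linalg.norm(P - P[0], axis=1)   # distance to nearest chosen point
    for _ in range(m - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(P - P[nxt], axis=1))
    return np.array(chosen)
```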
VI-B. Demonstration of Iterations
Here we show results using the Fandisk model in Fig. 4(a). Gaussian noise with a standard deviation of 0.005 is added to the 3D positions of the point cloud, shown in Fig. 4(b). The surface reconstructions of the denoising results in the first two iterations are shown in Fig. 4(c) and (d). The result converges after iteration 2, so we only show the first two iterations. The parameter $\gamma$ increases along the iterations.

A comparison with other schemes is shown in Fig. 5. The results of APSS in Fig. 5(b) and RIMLS in Fig. 5(c) are over-smoothed at the folding regions and corners, while the AWLOP result in Fig. 5(d) retains non-negligible noise. The state-of-the-art MRPCA in Fig. 5(e) achieves good denoising performance except for the artifact at the thin area, where the disk shrinks to a sheet. The result of the proposed scheme in Fig. 5(f) is visually better: the folding and corner structures are well preserved, without shrinkage at the thin area.
VI-C. Numerical Evaluation
Apart from visual comparison, we measure the mean squared error (MSE) between the ground truth and the denoising results. Specifically, we first measure the average of the squared Euclidean distances between ground-truth points and their closest denoised points, and also between the denoised points and their closest ground-truth points, then average the two measures to compute the MSE. The MSE results are shown in Table I, where the proposed scheme has the lowest MSE.
Noisy  APSS  RIMLS  AWLOP  MRPCA  GLR 
0.01114  0.00946  0.00954  0.01110  0.00993  0.00914 
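The symmetric MSE metric described above can be sketched as a brute-force O(N²) computation (adequate for illustration; real evaluations would use a k-d tree):

```python
import numpy as np

def point_cloud_mse(gt, den):
    """Symmetric point-set MSE: mean squared distance from each ground-truth
    point to its closest denoised point, and vice versa, averaged."""
    def one_way(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean()
    return 0.5 * (one_way(gt, den) + one_way(den, gt))
```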
More models are included in the evaluation. Gaussian noise with standard deviations of 0.01, 0.02, 0.03, and 0.04 is added to the 3D positions of the point clouds. Numerical results are shown in Tables II, III, IV, and V, where the proposed scheme is shown to have the lowest MSE. The models contain around 50000 points each. The parameter $\gamma$ is set to increase along the iterations, with its schedule chosen according to the noise level; the kernel parameter for the edge weights is likewise adapted over the first few iterations.
Model  Noisy  APSS  RIMLS  AWLOP  MRPCA  GLR 
Gargoyle  0.150  0.129  0.129  0.144  0.136  0.120 
DC  0.141  0.117  0.120  0.136  0.122  0.108 
Daratech  0.141  0.125  0.125  0.139  0.136  0.115 
Anchor  0.148  0.126  0.126  0.132  0.130  0.112 
Lordquas  0.137  0.113  0.115  0.126  0.115  0.102 
Model  Noisy  APSS  RIMLS  AWLOP  MRPCA  GLR 
Gargoyle  0.257  0.209  0.211  0.234  0.216  0.201 
DC  0.237  0.186  0.193  0.208  0.189  0.178 
Daratech  0.244  0.205  0.205  0.239  0.225  0.199 
Anchor  0.259  0.210  0.211  0.229  0.205  0.190 
Lordquas  0.224  0.172  0.178  0.196  0.178  0.162 
Model  Noisy  APSS  RIMLS  AWLOP  MRPCA  GLR 
Gargoyle  0.318  0.239  0.252  0.294  0.241  0.233 
DC  0.293  0.210  0.226  0.257  0.211  0.203 
Daratech  0.303  0.242  0.258  0.298  0.259  0.236 
Anchor  0.322  0.239  0.244  0.259  0.230  0.218 
Lordquas  0.274  0.189  0.203  0.224  0.187  0.178 
Model  Noisy  APSS  RIMLS  AWLOP  MRPCA  GLR 
Gargoyle  0.367  0.261  0.275  0.309  0.258  0.251 
DC  0.338  0.227  0.249  0.272  0.224  0.216 
Daratech  0.349  0.284  0.314  0.336  0.287  0.270 
Anchor  0.372  0.253  0.260  0.271  0.241  0.227 
Lordquas  0.318  0.201  0.220  0.233  0.199  0.186 
Illustrations of the Daratech model are shown in Figs. 6 and 7 from different perspectives. The AWLOP denoising results are not competitive, as also seen in Table IV, and are thus not included in the figures. As shown in Fig. 6(c-e) and Fig. 7(c-e), the existing schemes are under-smoothed in planar areas, while in thin areas the surface shrinks and details are lost. In contrast, the proposed method preserves the details well without under-smoothing.
In addition, illustrations of the Lordquas model are shown in Fig. 8. For the existing schemes, the cigarette is broken and the edges (for example, the ears) are over-smoothed, while the result of the proposed method is visually better, with structural details preserved without over-smoothing.
VII. Conclusion
In this paper, we propose a graph Laplacian regularization based 3D point cloud denoising algorithm. To exploit the self-similarity among surface patches, we adopt the low-dimensional manifold prior and collaboratively denoise the patches by minimizing the manifold dimension. For efficient implementation, we approximate the manifold dimension with a graph Laplacian regularizer, and construct the patch graph with a new distance measure for discrete patches. The proposed scheme is shown to admit a graph spectral low-pass filtering interpretation and to be numerically stable in solving the linear systems. Experimental results suggest that our proposal outperforms existing schemes with better preservation of structural details.
References
 [1] R. B. Rusu and S. Cousins, “3D is here: Point cloud library (PCL),” in Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011, pp. 1–4.

[2] M. Ji, J. Gall, H. Zheng, Y. Liu, and L. Fang, "SurfaceNet: An end-to-end 3D neural network for multiview stereopsis," 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2326–2334, 2017.

[3] G. Rosman, A. Dubrovina, and R. Kimmel, "Patch-collaborative spectral point-cloud denoising," Computer Graphics Forum, vol. 32, no. 8, pp. 1–12, 2013.
 [4] E. Mattei and A. Castrodad, “Point cloud denoising via moving RPCA,” Computer Graphics Forum, pp. 1–15, 2016.
 [5] Y. Sun, S. Schaefer, and W. Wang, “Denoising point sets via L0 minimization,” Computer Aided Geometric Design, vol. 35, pp. 2–15, 2015.
 [6] Y. Zheng, G. Li, S. Wu, Y. Liu, and Y. Gao, “Guided point cloud denoising via sharp feature skeletons,” The Visual Computer, pp. 1–11, 2017.
 [7] G. Guennebaud and M. Gross, “Algebraic point set surfaces,” ACM Transactions on Graphics (TOG), vol. 26, no. 3, p. 23, 2007.
 [8] A. C. Öztireli, G. Guennebaud, and M. Gross, “Feature preserving point set surfaces based on nonlinear kernel regression,” Computer Graphics Forum, vol. 28, no. 2, pp. 493–501, 2009.
 [9] Y. Lipman, D. Cohen-Or, D. Levin, and H. Tal-Ezer, “Parameterization-free projection for geometry reconstruction,” ACM Transactions on Graphics (TOG), vol. 26, no. 3, p. 22, 2007.
 [10] H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and H. R. Zhang, “Edge-aware point set resampling,” ACM Transactions on Graphics (TOG), vol. 32, no. 1, p. 9, 2013.
 [11] X.F. Han, J. S. Jin, M.J. Wang, W. Jiang, L. Gao, and L. Xiao, “A review of algorithms for filtering the 3D point cloud,” Signal Processing: Image Communication, 2017.

 [12] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in Computer Vision and Pattern Recognition (CVPR), 2005 IEEE Computer Society Conference on, vol. 2. IEEE, 2005, pp. 60–65.
 [13] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
 [14] S. Osher, Z. Shi, and W. Zhu, “Low dimensional manifold model for image processing,” SIAM Journal on Imaging Sciences, vol. 10, no. 4, pp. 1669–1690, 2017.

 [15] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83–98, May 2013.
 [16] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva, “Computing and rendering point set surfaces,” IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 1, pp. 3–15, 2003.
 [17] G. Guennebaud, M. Germann, and M. Gross, “Dynamic sampling and rendering of algebraic point set surfaces,” Computer Graphics Forum, vol. 27, no. 2, pp. 653–662, 2008.
 [18] R. B. Rusu, N. Blodow, Z. Marton, A. Soos, and M. Beetz, “Towards 3D object maps for autonomous household robots,” in Intelligent Robots and Systems (IROS), 2007 IEEE/RSJ International Conference on. IEEE, 2007, pp. 3191–3198.
 [19] H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-Or, “Consolidation of unorganized point clouds for surface reconstruction,” ACM Transactions on Graphics (TOG), vol. 28, no. 5, p. 176, 2009.
 [20] H. Avron, A. Sharf, C. Greif, and D. Cohen-Or, “ℓ1-sparse reconstruction of sharp point set surfaces,” ACM Transactions on Graphics (TOG), vol. 29, no. 5, p. 135, 2010.
 [21] T. Guillemot, A. Almansa, and T. Boubekeur, “Non local point set surfaces,” in 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), 2012 Second International Conference on. IEEE, 2012, pp. 324–331.
 [22] J. Digne, “Similarity based filtering of point clouds,” in Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on. IEEE, 2012, pp. 73–79.
 [23] M. Hein, “Uniform convergence of adaptive graphbased regularization,” Lecture Notes in Computer Science, vol. 4005, p. 50, 2006.
 [24] J. Pang and G. Cheung, “Graph Laplacian regularization for image denoising: Analysis in the continuous domain,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1770–1785, 2017.
 [25] R. Horn and C. Johnson, Matrix Analysis. Cambridge University Press, 2012.
 [26] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, “MeshLab: An open-source mesh processing tool,” in Eurographics Italian Chapter Conference, vol. 2008, 2008, pp. 129–136.
 [27] Y. Eldar, M. Lindenbaum, M. Porat, and Y. Y. Zeevi, “The farthest point strategy for progressive image sampling,” IEEE Transactions on Image Processing, vol. 6, no. 9, pp. 1305–1315, 1997.