Point cloud denoising based on tensor Tucker decomposition

02/20/2019 · by Jianze Li, et al.

In this paper, we propose an algorithm for point cloud denoising based on the tensor Tucker decomposition. We first represent the local surface patches of a noisy point cloud as matrices by sorting their points according to their distances to a reference point, and stack similar patch matrices into a 3rd order tensor. Then we use the Tucker decomposition to compress this patch tensor into a core tensor of smaller size. We consider this core tensor as the frequency domain and remove noise by hard thresholding. Finally, all the fibers of the denoised patch tensor are placed back, and the average is taken where more than one estimate overlaps. The experimental evaluation shows that the proposed algorithm outperforms the state-of-the-art graph Laplacian regularized (GLR) algorithm when the Gaussian noise is high (σ = 0.1), while the GLR algorithm is better in lower noise cases (σ = 0.04, 0.05, 0.08).

1 Introduction

In recent years, low-cost, high-resolution point cloud scanners have become available, promoting wide application of point cloud processing in various areas, e.g., remote sensing [1], cultural heritage [2] and geographic information systems [3]. However, due to physical constraints of the acquisition, raw point cloud data are always corrupted with noise, which makes denoising an important step before further point cloud processing.

Up to now, many different types of algorithms have been developed for point cloud denoising [4], which can be classified into four categories [5]: moving least squares (MLS)-based, locally optimal projection (LOP)-based, sparsity-based and non-local algorithms. We now briefly introduce the algebraic point set surfaces (APSS) [6] and robust implicit MLS (RIMLS) [7] algorithms from the MLS-based category, and the graph Laplacian regularized (GLR) [5] algorithm from the non-local category. The APSS and RIMLS algorithms approximate a smooth surface based on the local reference domains of the noisy points, and then determine the true positions from the resulting surface. The GLR algorithm is based on the assumption that the surface patches in a point cloud lie on a low-dimensional manifold, which was studied earlier in the low-dimensional manifold model (LDMM) [8] algorithm for image processing.

Nowadays, as more and more real-world data can be represented in tensor form, e.g., video [9] and hyperspectral images [10], tensor decomposition has become a popular tool for solving many signal processing problems [11, 12, 13], e.g., image denoising [14] and graph signal processing (GSP) [15, 16]. As one of the most important transformations in the tensor field, the Tucker decomposition [11, 17] transforms a tensor into a core tensor of smaller size by a set of column orthogonal matrices. It can be understood as a higher order version of principal component analysis (PCA), and includes the higher order singular value decomposition (HOSVD) [18] as a special case.

In this paper, inspired by the application of the HOSVD to image denoising [14], we solve the point cloud denoising problem with a tensor approach. We first construct a 3rd order tensor based on the similarity between different local surface patches in a point cloud. Then we use the Tucker decomposition to compress the patch tensor, and take the core tensor as the frequency domain. This process already removes some noise, as in the PCA case. For better denoising performance, we further apply hard thresholding to the core tensor to remove more noise. Finally, we place back the fibers (a fiber of a tensor is obtained by fixing every index but one), and take the average where more than one estimate overlaps. This algorithm belongs to the non-local category, as it is based on the similarity between local surface patches.

The main contributions of this paper are the formulation of the point cloud denoising problem from a tensor point of view, and the combination of the Tucker decomposition and hard thresholding to solve it. The experiments show that the proposed algorithm outperforms the state-of-the-art GLR algorithm when the Gaussian noise is high (σ = 0.1). To the best of our knowledge, this is the first time the Tucker decomposition has been applied to the point cloud denoising problem.

This paper is organized as follows. In Section 2, we formulate the point cloud denoising problem, and propose a Tucker decomposition based algorithm to solve it. In Section 3, we conduct some experiments to evaluate the performance of the proposed algorithm, and discuss the results. Section 4 includes the conclusion and future work.

Now we give some notations before further discussion. In this paper, we denote by $\mathbb{R}^{n_1\times n_2\times n_3}$ the linear space of 3rd order real tensors. We denote by $\|\cdot\|$ the Frobenius norm of a tensor or a matrix, or the Euclidean norm of a vector. Tensors, matrices, and vectors will be denoted by bold calligraphic letters, e.g., $\mathcal{A}$, bold uppercase letters, e.g., $\mathbf{A}$, and bold lowercase letters, e.g., $\mathbf{a}$, respectively; the corresponding entries will be denoted by $a_{ijk}$, $a_{ij}$ and $a_{i}$. Let $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$ be a 3rd order tensor and $\mathbf{U}\in\mathbb{R}^{m\times n_1}$ be a matrix. We follow the definitions and notations in [11]; e.g., the 1-mode product is

$$(\mathcal{A}\times_1\mathbf{U})_{j i_2 i_3}=\sum_{i_1=1}^{n_1} a_{i_1 i_2 i_3}\, u_{j i_1}.$$
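For concreteness, the 1-mode product can be checked with a few lines of numpy. This is a small illustrative sketch using the reconstructed notation above; it is not part of the TUDE algorithm itself, and the function name is ours.

```python
import numpy as np

def mode1_product(A, U):
    """(A x_1 U)[j, i2, i3] = sum_{i1} a[i1, i2, i3] * u[j, i1]."""
    return np.tensordot(U, A, axes=([1], [0]))

A = np.random.randn(4, 3, 5)      # a small 3rd order tensor
U = np.random.randn(2, 4)         # in the Tucker setting U would be column orthogonal
print(mode1_product(A, U).shape)  # (2, 3, 5)
```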

2 Point cloud denoising based on Tucker decomposition

2.1 Problem formulation

Assume that $\mathcal{P}$ is a noisy point cloud, i.e., a set of unstructured spatial points, and $\mathbf{P}\in\mathbb{R}^{n\times 3}$ is the corresponding position matrix satisfying

$$\mathbf{P}=\mathbf{P}^{*}+\mathbf{E}, \qquad (1)$$

where $\mathbf{P}^{*}$ is the true position matrix and $\mathbf{E}$ is a Gaussian noise matrix with zero mean and standard deviation $\sigma$. In this paper, we study the point cloud denoising problem, i.e., recovering $\mathbf{P}^{*}$ from $\mathbf{P}$.

To solve this problem, we first represent the local surface patches of the point cloud as matrices of the same size, and stack similar patch matrices into a 3rd order patch tensor, based on the similarity between different local surface patches. Suppose that $\mathcal{A}\in\mathbb{R}^{n_1\times n_2\times n_3}$ is such a patch tensor. Then we formulate problem (1) as the following Tucker decomposition [11, 17] problem

$$\min_{\mathcal{B},\,\mathbf{U}_1,\mathbf{U}_2,\mathbf{U}_3}\;\bigl\|\mathcal{A}-\mathcal{B}\times_1\mathbf{U}_1\times_2\mathbf{U}_2\times_3\mathbf{U}_3\bigr\|^{2}, \qquad (2)$$

where $\mathcal{B}\in\mathbb{R}^{r_1\times r_2\times r_3}$ with $r_k\le n_k$, and $\mathbf{U}_1,\mathbf{U}_2,\mathbf{U}_3$ are column orthogonal matrices. This is in fact a higher order PCA that compresses $\mathcal{A}$ to a smaller size. The compressed tensor $\mathcal{B}$ is the core tensor, and the column orthogonal matrices are the factor matrices. When $\mathcal{B}$ and $\mathcal{A}$ are of the same size, this problem can be solved exactly by the HOSVD. When $\mathcal{B}$ has a smaller size, it can be solved efficiently by the higher order orthogonal iterations (HOOI) [19] method. In this paper, we use the HOOI algorithm to solve problem (2), and interpret the core tensor as the frequency domain, similar to the PCA case. Then, by hard thresholding on the core tensor, we remove the noise of the point cloud.
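As an illustration of how problem (2) can be solved, the following is a minimal HOOI sketch in numpy. It assumes a dense 3rd order array and a prescribed core size; the function names (`unfold`, `mode_product`, `tucker_to_tensor`, `hooi`) are ours, and an off-the-shelf Tucker routine could be substituted.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3rd order tensor (mode in {0, 1, 2})."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    """n-mode product of a 3rd order tensor T with a matrix U along `mode`."""
    return np.moveaxis(np.tensordot(U, T, axes=([1], [mode])), 0, mode)

def tucker_to_tensor(core, Us):
    """Reconstruct core x_1 U1 x_2 U2 x_3 U3."""
    T = core
    for k, U in enumerate(Us):
        T = mode_product(T, U, k)
    return T

def hooi(A, ranks, n_iter=20):
    """Higher order orthogonal iterations (HOOI) [19] for problem (2).
    Returns the core tensor and the column orthogonal factor matrices."""
    # HOSVD initialization: leading left singular vectors of each unfolding.
    Us = [np.linalg.svd(unfold(A, k), full_matrices=False)[0][:, :r]
          for k, r in enumerate(ranks)]
    for _ in range(n_iter):
        for k in range(3):
            # Project A onto the current factors of the other two modes,
            # then refresh U_k from the dominant subspace of the k-unfolding.
            Y = A
            for j in range(3):
                if j != k:
                    Y = mode_product(Y, Us[j].T, j)
            Us[k] = np.linalg.svd(unfold(Y, k), full_matrices=False)[0][:, :ranks[k]]
    core = A
    for k in range(3):
        core = mode_product(core, Us[k].T, k)
    return core, Us
```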

2.2 Tucker decomposition based algorithm

In this subsection, we mainly develop the Tucker decomposition based point cloud denoising (TUDE) algorithm. All the details of this algorithm are summarized in Algorithm 1.

2.2.1 Determining the patch matrices

We first choose a subset of the point cloud as the set of seed points by the downsampling method, which makes the seed points sampled uniformly. For each seed point, we choose its K nearest points in the point cloud, and sort them into a K × 3 patch matrix according to their Euclidean distances to the seed point. The value of K should be large enough to guarantee that the union of all the points in the patch matrices covers the whole point cloud.
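A possible implementation of this step is sketched below, assuming a KD-tree for the nearest-neighbor queries and a simple uniform stride as a stand-in for the paper's downsampling method, which is not specified in detail here.

```python
import numpy as np
from scipy.spatial import cKDTree

def choose_seeds(P, n_seeds):
    """Illustrative seed selection: a uniform stride over the point indices.
    The paper uses a downsampling method that yields uniformly sampled seeds;
    any such scheme can be substituted here."""
    step = max(1, len(P) // n_seeds)
    return np.arange(0, len(P), step)

def build_patches(P, seed_idx, K):
    """For each seed point, collect its K nearest points in P and sort them
    by Euclidean distance to the seed, giving a K x 3 patch matrix."""
    tree = cKDTree(P)
    patches, index_sets = [], []
    for s in seed_idx:
        _, idx = tree.query(P[s], k=K)   # neighbors returned in ascending distance
        patches.append(P[idx])           # K x 3 patch matrix
        index_sets.append(idx)           # remember which original points it covers
    return np.stack(patches), index_sets
```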

2.2.2 Determining the groups of similar patch matrices

To make the similarity between patch matrices rotation invariant, we use a distance based on the iterative closest point (ICP) cost function, as in [20].

Given a threshold value, for each patch matrix we find all the patch matrices whose average distance to it is smaller than the threshold. In the process of solving the ICP problem, we obtain a rigid transformation for each such patch matrix, and we put all of these transformed patch matrices into a group. In other words, the average distance between each patch matrix and the reference one in the same group is always smaller than the threshold. To guarantee the speed, we restrict this search to a search region N.
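The following sketch illustrates this grouping step under a simplifying assumption: since the patch matrices have the same size and their rows are sorted by distance to the seed, it aligns them with a single least-squares rigid (Kabsch) fit rather than a full iterative ICP, so the distance below is only a stand-in for the ICP cost used in [20]. The threshold `tau` and the candidate set (e.g., the N nearest patches) are inputs, and all names are ours.

```python
import numpy as np

def rigid_align(Q, P):
    """Least-squares rigid transform (R, t) mapping patch Q onto patch P,
    assuming a fixed row-to-row correspondence between the K x 3 matrices.
    A full ICP, as in [20], would re-estimate the correspondence iteratively."""
    cq, cp = Q.mean(axis=0), P.mean(axis=0)
    H = (Q - cq).T @ (P - cp)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
    return R, cp - R @ cq

def patch_distance(Q, P):
    """Average point distance between P and the aligned copy of Q."""
    R, t = rigid_align(Q, P)
    Q_aligned = Q @ R.T + t
    return np.mean(np.linalg.norm(Q_aligned - P, axis=1)), Q_aligned

def group_similar(patches, ref_idx, candidate_idx, tau):
    """Group the candidates whose average distance to the reference patch is
    below the threshold tau, keeping their aligned (transformed) copies."""
    ref = patches[ref_idx]
    group, members = [ref], [ref_idx]
    for j in candidate_idx:
        d, aligned = patch_distance(patches[j], ref)
        if d < tau:
            group.append(aligned)
            members.append(j)
    return np.stack(group), members
```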

2.2.3 Patch tensor denoising

For each group of similar patch matrices, we stack them into a 3rd order patch tensor. Then we compute the Tucker decomposition (2) of this tensor by the HOOI algorithm, and apply hard thresholding to the core tensor: we keep the entries whose absolute value is larger than the largest absolute value multiplied by a thresholding parameter, and set the other entries to zero. Then, by the inverse transformation, we obtain the denoised tensor.
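A sketch of this denoising step is given below, reusing the `hooi` and `tucker_to_tensor` helpers sketched after problem (2); the name `gamma` for the thresholding parameter is ours, since the paper's symbol is not preserved in this copy.

```python
import numpy as np
# `hooi` and `tucker_to_tensor` are the helper functions sketched after
# problem (2); here they are assumed to be available in the same module.

def denoise_patch_tensor(A, ranks, gamma):
    """Tucker-compress the patch tensor A, hard threshold its core tensor,
    and transform back with the factor matrices."""
    core, Us = hooi(A, ranks)
    # Keep only entries whose magnitude exceeds gamma times the largest magnitude.
    cutoff = gamma * np.max(np.abs(core))
    core = np.where(np.abs(core) > cutoff, core, 0.0)
    return tucker_to_tensor(core, Us)
```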

2.2.4 Aggregation

This step places all the fibers of the denoised tensors back to their original positions. For each patch matrix of the denoised tensors, we first invert the transformation determined while solving the ICP problem. Then we place back all the patch matrices. It is very likely that we obtain more than one estimate for an original patch matrix; in this case, we take the average of the overlapping estimates. Finally, we place back all the row vectors of the patch matrices. There may also be several estimates for a single point position, when a point appears in many patch matrices simultaneously, and we take the average in the same way.
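A sketch of the averaging is given below, assuming each denoised patch comes with the indices of the original points it covers and has already been mapped back by the inverse of its ICP transformation; points never covered simply keep their noisy positions in this sketch.

```python
import numpy as np

def aggregate(P_noisy, estimates):
    """Average all position estimates that fall on the same original point.

    `estimates` is an iterable of (indices, positions) pairs: `indices` are the
    original point indices covered by one denoised patch matrix, and `positions`
    is the corresponding K x 3 block, already mapped back by the inverse of its
    ICP transformation."""
    acc = np.zeros_like(P_noisy)
    cnt = np.zeros(len(P_noisy))
    for idx, pos in estimates:
        np.add.at(acc, idx, pos)   # accumulate overlapping estimates per point
        np.add.at(cnt, idx, 1.0)
    out = P_noisy.copy()           # uncovered points keep their noisy position
    covered = cnt > 0
    out[covered] = acc[covered] / cnt[covered][:, None]
    return out
```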

Input: noisy point cloud;
  K, the number of points in each patch matrix;
  a threshold for finding similar patch matrices;
  N, the search region;
  the size of the core tensor;
  a parameter for the hard thresholding.
Output: denoised point cloud.
Initialisation: represent the point cloud as a position matrix.
Phase I: determine the patch matrices.
  Choose a set of seed points by the downsampling method;
  for each seed point do
     choose the K nearest points in the point cloud;
     sort them into a patch matrix by their distances to the seed point.
  end for
Phase II: determine the groups of similar patch matrices.
  for each patch matrix do
     find the group of similar patch matrices whose average ICP distance is smaller than the threshold, within a search region N.
  end for
Phase III: patch tensor denoising.
  for each group of similar patch matrices do
     stack them into a patch tensor;
     if the size of the patch tensor is greater than that of the core tensor then
        calculate the Tucker decomposition (2);
        apply the hard thresholding to the core tensor;
        invert the transformations.
     end if
  end for
Phase IV: aggregation.
  Place back all the fibers of the denoised tensors;
  take the average where more than one estimate overlaps.
Algorithm 1: Tucker decomposition based point cloud denoising (TUDE) algorithm.

3 Experimental evaluation

In this section, we compare the proposed TUDE algorithm with some existing algorithms: the APSS [6], RIMLS [7], and GLR [5] algorithms. The APSS and RIMLS algorithms are implemented with MeshLab software [21]. The GLR algorithm is implemented with Matlab. The TUDE algorithm is implemented with Python.

3.1 Experimental setup

We use the Gargoyle, DC and Daratech point cloud models from [20, 22, 5] to conduct the experiments. The numbers of points, and the numbers of seed points after the downsampling process, for these three models are 58611, 56645, 32003 and 28361, 27496, 15475, respectively. Each model is corrupted by additive Gaussian noise with standard deviation σ = 0.04, 0.05, 0.08 and 0.1. We use the mean square error (MSE) from [5] to evaluate the denoising performance.
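For reproducibility, the sketch below shows how noisy inputs following the model (1) and a simple error metric can be generated. The symmetric nearest-neighbor MSE here is only a stand-in: the exact MSE definition used in [5] may differ in detail.

```python
import numpy as np
from scipy.spatial import cKDTree

def add_gaussian_noise(P, sigma, seed=0):
    """Corrupt a position matrix with i.i.d. zero-mean Gaussian noise, as in (1)."""
    rng = np.random.default_rng(seed)
    return P + sigma * rng.standard_normal(P.shape)

def nn_mse(P_denoised, P_true):
    """Symmetric nearest-neighbor mean squared error between two point sets;
    a stand-in for the MSE of [5], whose exact definition may differ."""
    d1, _ = cKDTree(P_true).query(P_denoised)
    d2, _ = cKDTree(P_denoised).query(P_true)
    return 0.5 * (np.mean(d1 ** 2) + np.mean(d2 ** 2))
```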

In the proposed TUDE algorithm, we always use the same search region N = 20, core tensor size and hard thresholding parameter, and set the patch size K according to the noise level σ. In the GLR algorithm, we tune the parameters separately for each noise level, as we find this setting generally gives the best results. In the APSS and RIMLS algorithms, we vary the MLS filter scale and choose the best result.

3.2 Experimental results

The experimental results are shown in Tables 1–4, where "—" means that the MSE does not decrease after applying the algorithm. A visualization of the TUDE denoising result on the Gargoyle model is shown in Figure 1, where we can see that the denoised model is more compact and regular than the noisy one.

Model Noisy APSS RIMLS GLR TUDE
Gargoyle 0.367 0.258 0.275 0.251 0.283
DC 0.338 0.227 0.245 0.217 0.248
Daratech 0.348 0.264 0.284 0.269 0.301
Table 1: MSE for different models (σ = 0.04).
Model Noisy APSS RIMLS GLR TUDE
Gargoyle 0.413 0.281 0.298 0.269 0.301
DC 0.381 0.248 0.266 0.231 0.265
Daratech 0.387 0.328 0.363 0.308 0.331
Table 2: MSE for different models (σ = 0.05).
Model Noisy APSS RIMLS GLR TUDE
Gargoyle 0.539 0.393 0.432 0.348 0.367
DC 0.503 0.377 0.409 0.317 0.341
Daratech 0.482 0.458 — 0.384 0.388
Table 3: MSE for different models (σ = 0.08).
Model Noisy APSS RIMLS GLR TUDE
Gargoyle 0.619 0.531 0.583 0.436 0.416
DC 0.577 0.514 0.556 0.400 0.392
Daratech 0.531 — — 0.437 0.418
Table 4: MSE for different models (σ = 0.1).
Figure 1: A visualization of the TUDE denoising result on the Gargoyle model: (a) the true model; (b) the noisy model; (c) the model denoised by the TUDE algorithm.

3.3 Discussions

We first make some comparisons based on Tables 1–4. (1) Compared with the APSS algorithm, TUDE gives better results at the two higher noise levels (Tables 3 and 4), while APSS is better at the two lower levels (Tables 1 and 2). (2) Compared with the RIMLS algorithm, TUDE gives better results at the two higher noise levels (Tables 3 and 4), comparable results in Table 2, and RIMLS is better in Table 1. (3) Compared with the state-of-the-art GLR algorithm, TUDE gives better results at the highest noise level (σ = 0.1, Table 4), while GLR is better at the lower noise levels (Tables 1–3). We can therefore summarize that the proposed TUDE algorithm performs better when tackling the point cloud denoising problem with high noise.

This can be explained by the following facts. (1) The APSS and RIMLS algorithms are both based on the local reference domains of the noisy points, which are influenced more heavily by high noise, whereas the GLR and TUDE algorithms belong to the non-local category and thus still work well under high noise. (2) The GLR algorithm uses projection onto a reference plane to find correspondences [5], which may make it more sensitive to high noise than the TUDE algorithm. (3) In the TUDE algorithm, after the denoising effect of the Tucker compression (i.e., higher order PCA), we further apply hard thresholding to the core tensor. In the experiments, we find that this gives much better results than using the Tucker decomposition alone. The combination of these two steps may also be an important reason why TUDE works better in the high noise case.

The proposed TUDE algorithm is also competitive in speed. In fact, the running time of each experiment is on the order of minutes, both on the Daratech model and on the larger Gargoyle and DC models. This is because (1) TUDE is not an iterative algorithm, and (2) the main procedure in TUDE is the HOOI algorithm, which is highly efficient [23].

Now we discuss the parameters of the proposed TUDE algorithm. (1) There is a strong positive correlation between the patch matrix size K and the Gaussian noise level. This is reasonable, since we need larger surface patches when the noise is higher. (2) We always set N = 20 in the experiments. In fact, the final MSE will improve if we increase N; however, this makes the algorithm slower. On these three models, the speed becomes too slow (more than one hour) if N exceeds 200. (3) We use the same setting for the remaining parameters in all experiments. On the Daratech model, however, we find that a smaller value gives a better result. One possible reason is that the Daratech model has more flat surfaces, while the other two models have more rounded surfaces.

In the experiments, we find that the hard thresholding sometimes keeps only one entry (the one with the largest absolute value) and eliminates all the others. In this case, it is natural to ask whether we could get the same result by directly choosing a core tensor of size 1×1×1, which corresponds to the best rank-1 approximation [19]. The experiments show that this is not the case: taking the rank-1 approximation directly generally gives worse results than the approach of this paper, i.e., combining the Tucker decomposition and hard thresholding is better.
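A small synthetic sketch of how such an ablation can be scripted is given below, reusing the hypothetical helpers from Section 2; the data and parameters here are arbitrary, so the printed numbers carry no weight by themselves.

```python
import numpy as np
# `hooi`, `tucker_to_tensor` and `denoise_patch_tensor` are the helper
# functions sketched in Section 2; the data below is purely synthetic.

rng = np.random.default_rng(1)
core = rng.standard_normal((4, 3, 4))
factors = [np.linalg.qr(rng.standard_normal((n, r)))[0]
           for n, r in [(30, 4), (3, 3), (10, 4)]]
clean = tucker_to_tensor(core, factors)            # a low multilinear rank "patch tensor"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

rank1 = tucker_to_tensor(*hooi(noisy, (1, 1, 1)))  # best rank-1 approximation [19]
tude = denoise_patch_tensor(noisy, ranks=(4, 3, 4), gamma=0.1)  # Tucker + hard thresholding

print(np.linalg.norm(rank1 - clean), np.linalg.norm(tude - clean))
```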

4 Conclusion

In this paper, we propose a point cloud denoising algorithm that uses the tensor Tucker decomposition to exploit the self-similarity between local surface patches in a point cloud. After computing the Tucker decomposition by the HOOI algorithm, we apply hard thresholding to the compressed core tensor to remove the noise. Finally, all the points are placed back, and the average is taken where multiple estimates overlap. The experiments show that this algorithm is competitive in speed, and outperforms the state-of-the-art GLR algorithm when the Gaussian noise is high (σ = 0.1), while the GLR algorithm is better at lower noise levels (σ = 0.04, 0.05, 0.08). In future work, we will try to build a nonlinear model on top of the TUDE algorithm to obtain better denoising results.

5 Acknowledgement

The authors would like to thank Gene Cheung for providing the point cloud models and the Matlab code of the GLR algorithm, and Hei Victor Cheng for valuable discussions about the algorithm setting.

References

  • [1] Liang Cheng, Song Chen, Xiaoqiang Liu, Hao Xu, Yang Wu, Manchun Li, and Yanming Chen, “Registration of laser scanning point clouds: A review,” Sensors, vol. 18, no. 5, pp. 1641, 2018.
  • [2] François Lozes, Abderrahim Elmoataz, and Olivier Lézoray, “PDE-based graph signal processing for 3D color point clouds: Opportunities for cultural heritage,” IEEE Signal Processing Magazine, vol. 32, no. 4, pp. 103–111, 2015.
  • [3] Kazuo Sugimoto, Robert A Cohen, Dong Tian, and Anthony Vetro, “Trends in efficient representation of 3d point clouds,” IEEE, 2017, pp. 364–369.
  • [4] Xian-Feng Han, Jesse S Jin, Ming-Jie Wang, Wei Jiang, Lei Gao, and Liping Xiao, “A review of algorithms for filtering the 3d point cloud,” Signal Processing: Image Communication, vol. 57, pp. 103–112, 2017.
  • [5] Jin Zeng, Gene Cheung, Michael Ng, Jiahao Pang, and Cheng Yang, “3d point cloud denoising using graph laplacian regularization of a low dimensional manifold model,” arXiv:1803.07252, 2018.
  • [6] Gaël Guennebaud and Markus Gross, “Algebraic point set surfaces,” ACM Transactions on Graphics (TOG), vol. 26, no. 3, pp. 23, 2007.
  • [7] A Cengiz Öztireli, Gael Guennebaud, and Markus Gross, “Feature preserving point set surfaces based on non-linear kernel regression,” Computer Graphics Forum, vol. 28, no. 2, pp. 493–501, 2009.
  • [8] Stanley Osher, Zuoqiang Shi, and Wei Zhu, “Low dimensional manifold model for image processing,” SIAM Journal on Imaging Sciences, vol. 10, no. 4, pp. 1669–1690, 2017.
  • [9] Lihua Gui, Gaochao Cui, Qibin Zhao, Dongsheng Wang, Andrzej Cichocki, and Jianting Cao, “Video denoising using low rank tensor decomposition,” in Ninth International Conference on Machine Vision (ICMV 2016). International Society for Optics and Photonics, 2017, vol. 10341, p. 103410V.
  • [10] Chang Li, Yong Ma, Jun Huang, Xiaoguang Mei, and Jiayi Ma, “Hyperspectral image denoising using the robust low-rank tensor recovery,” JOSA A, vol. 32, no. 9, pp. 1604–1612, 2015.
  • [11] Tamara G Kolda and Brett W Bader, “Tensor decompositions and applications,” SIAM review, vol. 51, no. 3, pp. 455–500, 2009.
  • [12] Pierre Comon, “Tensors: a brief introduction,” IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 44–53, 2014.
  • [13] Nicholas D Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E Papalexakis, and Christos Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551–3582, 2017.
  • [14] Ajit Rajwade, Anand Rangarajan, and Arunava Banerjee, “Image denoising using the higher order singular value decomposition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 4, pp. 849–862, 2013.
  • [15] Nauman Shahid, Scalable Low-rank Matrix and Tensor Decomposition on Graphs, Ph.D. thesis, Ecole Polytechnique Fédérale de Lausanne, 2017.
  • [16] Adnan Gavili and Xiao-Ping Zhang, “On the shift operator, graph frequency, and optimal filtering in graph signal processing,” IEEE Transactions on Signal Processing, vol. 65, no. 23, pp. 6303–6318, 2017.
  • [17] Ledyard R Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
  • [18] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle, “A multilinear singular value decomposition,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
  • [19] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle, “On the best rank-1 and rank-(R1, R2, …, RN) approximation of higher-order tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324–1342, 2000.
  • [20] Guy Rosman, Anastasia Dubrovina, and Ron Kimmel, “Patch-collaborative spectral point-cloud denoising,” in Computer Graphics Forum. Wiley Online Library, 2013, vol. 32, pp. 1–12.
  • [21] Paolo Cignoni, Marco Callieri, Massimiliano Corsini, Matteo Dellepiane, Fabio Ganovelli, and Guido Ranzuglia, “MeshLab: an open-source mesh processing tool,” in Eurographics Italian Chapter Conference, 2008, vol. 2008, pp. 129–136.
  • [22] Enrico Mattei and Alexey Castrodad, “Point cloud denoising via moving rpca,” Computer Graphics Forum, vol. 36, no. 8, pp. 123–137, 2017.
  • [23] Anh-Huy Phan, Andrzej Cichocki, and Petr Tichavsky, “On fast algorithms for orthogonal tucker decomposition,” in IEEE ICASSP 2014, pp. 6766–6770.