Point clouds are widely applied in many fields such as cultural heritage, immersive communication and autonomous navigation . Immersive visual communication, such as virtual reality (VR) and augmented reality (AR) applications, offers new capabilities and experiences and is gaining momentum in the industry . The core of achieving this is constructing an effective and efficient 3D scene capture, compression and communication system . Autonomous navigation and cultural heritage reconstruction applications require similar capabilities and even higher point cloud resolution and fidelity.
Point Cloud Compression (PCC) was established as a working group under MPEG to develop novel solutions for compressing 3D geometry and attribute information. Compression is a key enabler, as well as a bottleneck, in achieving the immersive communication and autonomous driving vision.
The traditional method to generate 3D models from point clouds uses triangle or polygon meshes to reconstruct the underlying surface model . This method needs to estimate the connectivity among points, which incurs high complexity and may introduce artifacts. As 3D rendering technologies and computing hardware develop rapidly, producing 3D models from large numbers of discrete points has become more practical. 3D point clouds have gradually been favored over meshes for representing the surfaces of 3D objects and scenes.
A point cloud usually contains millions of points, each associated with a geometry position and attribute information. Point clouds are irregularly distributed in 3D space, without points arranged on a structured grid, so traditional 2D picture and video coding tools cannot work effectively. A key challenge in 3D point cloud compression is how to exploit the spatial and temporal correlation among massive discrete points associated with 3D positions and attributes.
There are three main types of point cloud compression: geometry compression, attribute compression and dynamic motion-compensated compression. Geometry compression aims to code the 3D point coordinates. Attribute compression intends to reduce the redundancy among point cloud attributes. Dynamic motion-compensated compression targets the compression of dynamic point cloud sequences. In this paper, we focus on intra-frame compression of point cloud color attributes.
Zhang’s GFT-based scheme , Mekuria’s 2D-mapping method  and de Queiroz’s RAHT scheme  were three representative works on point cloud attribute compression, all focused on improving the transform. Zhang et al. first introduced the GFT to code point cloud attributes and achieved better coding performance than traditional octree-based methods, but the framework of the GFT method was not well optimized. In , Mekuria et al. projected point cloud color attributes onto 2D grids and used a JPEG codec to compress the grids. The method was efficient but could introduce artifacts because of the 3D-to-2D mapping process. The state of the art was de Queiroz’s work in . They devised an original hierarchical sub-band transform that was more computationally efficient than the GFT scheme, although the GFT performed better than RAHT on attribute compression.
In fact, the compression performance of a point cloud attribute encoder depends on many coding components. In this paper, we propose an innovative hybrid point cloud attribute coding scheme, embodied in layered structure generation, block-based intra prediction, adaptive GFT-based transform, and an optimized reordering scan before entropy coding. First, a slice-partitioning scheme and a block-division method are adopted to generate the layered structure. Second, on the layered structure, we introduce an efficient block-based intra prediction scheme, which provides a DC mode and five angular modes; the sum of absolute transformed differences (SATD) is used to choose the best mode. Third, two transform modes, GFT and DCT, are adaptively selected via Lagrangian optimization to achieve better transform efficiency for different types of point clouds. The Lagrange multiplier is derived off-line based on the statistics of color attribute coding. Before entropy coding, multiple reordering scan modes are designed to improve coding efficiency. For additional point cloud attributes, such as normal information, this coding framework can also perform effectively and efficiently.
The rest of this paper is organized as follows. Section 2 presents related works. An overview of the proposed coding scheme is given in Section 3. The point cloud layered structure, block-based intra prediction, adaptive GFT-based transform and the reordering scan scheme are detailed in Sections 4, 5, 6 and 7, respectively. In Section 8, we present experimental results to evaluate the proposed scheme. Finally, we conclude in Section 9.
2 Related Works
There is existing work on point cloud attribute compression. In order to deal with the massive discrete points, the first step is to set up a regular structure for point clouds. Schnabel in  first applied the octree structure to static point cloud single-rate compression. Later, the octree method was further developed for progressive point cloud compression in  and dynamic point cloud coding in . The kd-tree was another popular method to represent point clouds. Devillers in  adopted the kd-tree approach to recursively subdivide the bounding box of a point cloud. Shao in  devised an improved kd-tree scheme for uniform partition without empty blocks. There were other representations as well. Merry in  built a minimum spanning tree for point cloud single-rate compression, but it performed very poorly on models with disjoint components. Fan in  constructed a level-of-detail (LOD) hierarchy through an iterative point clustering process, and Anis in  modeled the point cloud based on consistently evolving subdivisional triangular meshes.
Existing intra prediction schemes for attribute compression did not work well. Inspired by traditional hybrid video coding structures such as H.264/AVC  and HEVC , Cohen et al. in  proposed a 3D intra prediction method with octree partition. They projected the reconstructed attributes in neighboring blocks onto the adjacent edge planes of the current block and adopted the projected values as references. The prediction performance depended on the blocks after octree partition, since the number of points per block was usually different. Merry in  predicted future vertices using a linear predictor on the basis of a spanning tree, which was resource-intensive to generate.
Many improvements in attribute compression were based on the transform. Mekuria in  mapped color attributes to a 2D grid based on a depth-first octree traversal and used the DCT-based JPEG codec to encode point cloud colors. The scheme made some progress in point cloud attribute compression, but it introduced blocking artifacts. The GFT was first proposed in . Zhang et al. formed a graph in each octree leaf node by using edges to connect nearby occupied voxels at most one unit apart; the GFT on the graph was then used to encode point cloud attributes. Results showed that the GFT had better coding performance than the DCT, but some problems remained to be solved, such as the sub-graph problem. Cohen in  adopted the k-nearest-neighbor (KNN) method to connect more distant points in a graph; nevertheless, experiments showed that the KNN method cannot solve sub-graph problems thoroughly. A compression framework combining a kd-tree structure and Laplacian-sparsity-optimized GFT was proposed in . It showed that the optimized GFT achieved better performance than the general GFT, but it did not run well for all types of point cloud datasets. The RAHT-based method in  is the state of the art for point cloud intra-frame compression. De Queiroz et al. devised a hierarchical sub-band transform and Laplace-distribution-assumed arithmetic coding.
The MPEG-3DG Ad Hoc Point Cloud Coding (PCC) Group is focusing on developing point cloud compression standards and has made much progress on static, dynamic, and dynamically acquired point cloud compression. The latest test models for the three categories released by MPEG in  were the RAHT-based compression scheme from 8i, the video-based coding framework published by Apple, and hierarchical coding tools also from Apple, respectively.
3 Overview of Proposed Attribute Coding Scheme
We assume that geometry has been coded via a separate pipeline and that the geometry decoder passes decoded geometry as side information to the attribute encoder. Without loss of generality, we use color attributes as the example of point cloud attributes in this paper.
A schematic overview of the complete point cloud attribute compression procedures is illustrated in Figure 1. The proposed mechanism is mainly embodied in the layered structure generation, block-based intra prediction, adaptive transform, reordering scan after quantization and entropy coding.
A static point cloud is a frame. A slice-partitioning scheme is devised to segment a point cloud into several slices. At the slice level, the kd-tree method is adopted to divide each slice into macroblocks. Leaf nodes of the hierarchical kd-tree are blocks. As Figure 2 shows, frames, slices, macroblocks and blocks constitute the layered structure of a point cloud.
Based on the kd-tree hierarchical structure, blocks are numbered in breadth-first traversal order. Then, block-based intra prediction is introduced to reduce redundancy among adjacent blocks: several intra modes are provided and a mode decision scheme is designed to choose the best mode. The adaptive transform tool supports optimized GFT and DCT modes, and a Lagrangian-based method is devised for mode decision. The combination of GFT and DCT improves transform efficiency over either single mode and achieves better coding performance for different types of point clouds. After uniform quantization, point cloud coefficients need to be scanned into a one-dimensional data stream. Multiple reordering scan modes are introduced to improve the efficiency of entropy coding.
4 Point Cloud Layered Structure
A layered data structure is a fundamental representation of traditional video sequences. A video frame is divided into several slices, which are independently encoded, and each slice is flexibly partitioned into macroblocks. This enhances the robustness of the video encoder and improves coding efficiency.
Inspired by the layered structure of video coding, we devise a scheme to generate a suitable layered structure for point clouds. As Figure 2 shows, the scheme segments a point cloud frame into three layers: slices, macroblocks and blocks. The continuity of color attributes among adjacent points is the key to reducing redundancy, so the layered structure generation scheme clusters points with similar color attributes into the same slice or block. This structure is mirrored in the coded point cloud attribute bitstream.
Regarding the slice-partition scheme, we propose to estimate the color continuity and separate color non-smooth areas of the current point cloud into slices. First, we adopt a general kd-tree method to get coding blocks and use the color variance of each block to represent color continuity. Different thresholds are set to rank the continuity, and we follow the rank to split a complex point cloud into several slices. In this paper, we use a two-slice partition scheme as an example. If the variance in a block is larger than a threshold, we regard the block as non-smooth, and if the proportion of non-smooth blocks among all point cloud blocks is larger than a second threshold, we separate the points with non-smooth color from all blocks as one slice and the remainder as another slice. Examples of the slice partition on two point clouds are presented in Figure 3. Both thresholds are trained off-line based on the statistics of color attribute coding. For a more complex point cloud, different thresholds are set and several slice partitions are supported.
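The two-slice decision above can be sketched as follows; the threshold values here are placeholders for illustration, not the paper's off-line trained values:

```python
import numpy as np

# Placeholder thresholds; the paper trains these off-line on color statistics.
T_VAR = 100.0    # per-block color-variance threshold marking a block non-smooth
T_PROP = 0.3     # proportion of non-smooth blocks that triggers slicing

def two_slice_partition(blocks):
    """Split a list of per-block color arrays into (non_smooth, smooth) slices.

    Returns (blocks, []) when the cloud is smooth enough to stay one slice.
    """
    flags = [float(np.var(b)) > T_VAR for b in blocks]
    if np.mean(flags) <= T_PROP:
        return blocks, []
    non_smooth = [b for b, f in zip(blocks, flags) if f]
    smooth = [b for b, f in zip(blocks, flags) if not f]
    return non_smooth, smooth
```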
A kd-tree is a binary tree representing a hierarchical subdivision of a k-dimensional space. As Figure 1 shows, we apply kd-tree partition to point cloud slices to get macroblocks and blocks. The essence of the kd-tree scheme in this paper is geometry-adaptive uniform segmentation of point cloud slices. While building each level of the kd-tree, the choice of the dimension to split and the splitting point are the two major factors affecting the data structure . Among the x, y, z coordinate axes, we choose the splitting dimension with the largest geometry variance, which is regarded as the principal distribution direction of the points: along this direction, points are more discretely distributed and have weaker geometry correlations among neighbors. The midpoint in the splitting dimension is set as the splitting point, so that the two resulting parts have almost the same number of points. The recursive partition stops when the number of divisions reaches the predetermined kd-tree depth, yielding the leaf nodes.
After the tree partition, we regard leaf nodes as blocks and index them following a breadth-first traversal of the kd-tree. The indexes determine the block coding order. The process of kd-tree partition and the indexing of blocks are shown in Figure 4. A kd-tree of depth d yields 2^d coding blocks with indexes 1, 2, …, 2^d.
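A minimal sketch of the geometry-adaptive kd-tree partition described above, assuming a median split along the highest-variance axis:

```python
import numpy as np

def kdtree_blocks(points, depth):
    """Recursive geometry-adaptive kd-tree partition of an (N, 3) array.

    Split along the axis with the largest variance (the principal
    distribution direction), at the median point so both halves hold
    almost the same number of points; depth d yields 2**d blocks.
    """
    if depth == 0:
        return [points]
    axis = int(np.argmax(np.var(points, axis=0)))  # x, y or z
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    return (kdtree_blocks(points[order[:mid]], depth - 1)
            + kdtree_blocks(points[order[mid:]], depth - 1))
```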
Considering the characteristics of the human visual system, the original point colors in blocks are transformed from the RGB color space to the YUV color space according to ITU-R Rec. BT.709.
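The conversion can be sketched with the standard BT.709 luma coefficients; the 8-bit offset and scaling conventions are omitted in this sketch:

```python
import numpy as np

# BT.709 luma coefficients; full-range conversion without the 8-bit
# offset/scaling used in practice.
M_709 = np.array([[ 0.2126,  0.7152,  0.0722],   # Y
                  [-0.1146, -0.3854,  0.5000],   # U (Cb)
                  [ 0.5000, -0.4542, -0.0458]])  # V (Cr)

def rgb_to_yuv709(rgb):
    """Convert an (N, 3) array of RGB colors in [0, 1] to YUV per BT.709."""
    return rgb @ M_709.T
```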
5 Block-based Intra Prediction
The purpose of intra prediction is to make full use of the correlation among adjacent points to reduce the redundancy in color attributes. As Table 1 shows, the number of points per block may differ, and points in each block are discretely and irregularly distributed in 3D space. Therefore, it is difficult to directly adopt the intra prediction schemes of traditional video coding for point cloud attribute compression.
After the geometry-adaptive kd-tree partition, we get a series of numbered blocks. Based on the block structure, these blocks are regarded as one row of frame coding units in a certain order. The average color of the points contained in each block represents the block's color attributes and serves as the reference for predicting subsequent blocks.
Multiple angular intra prediction modes referring to forward blocks and macroblocks are implemented on this series of numbered coding blocks. A mode decision scheme is devised to choose the best prediction mode for each block.
5.1 Several Intra Prediction Modes
We propose multiple intra prediction modes on the blocks: a DC prediction mode, three angular modes referring to three forward blocks and two angular modes referring to two forward macroblocks. When the three forward blocks and two forward macroblocks are available, six prediction modes are adopted to reduce the spatial redundancy of a block's color attributes.
An example of the block intra prediction process is presented in Figure 5, where the black circle represents the current block; its five angular prediction references are also marked.
Mode 5, the DC prediction mode, uses fixed values as the references for the Y, U and V components of the current block. The first coding block must adopt the DC mode.
Modes 0, 1 and 2 are three angular modes referring to three forward blocks. At the block layer, exploiting the correlation of adjacent spatial positions is an effective way to reduce the redundancy in color attributes.
Modes 3 and 4 are two angular modes referring to two forward macroblocks. A macroblock is the parent node of two child nodes. Our intra prediction framework supports not only prediction among adjacent leaf nodes but also this "parent-child" prediction. The flexible prediction scheme is beneficial for attribute compression on different types of point clouds.
Each block applies the available prediction modes to obtain residuals. The prediction residual of a block is

(dY, dU, dV) = (Y - Yp, U - Up, V - Vp),

where Yp, Up and Vp are the prediction references for the current block's color attributes Y, U and V.
The prediction residuals are used in mode decision to select the best intra prediction mode.
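A minimal sketch of the residual computation, assuming the reference of an angular mode is the mean color of the referenced forward block and using a hypothetical fixed mid-gray DC value:

```python
import numpy as np

DC_REF = np.array([128.0, 128.0, 128.0])  # assumed fixed DC reference (YUV)

def block_residual(block_yuv, ref_means, mode):
    """Residual of an (N, 3) YUV block under one prediction mode.

    Modes 0-4 subtract the mean color of one of the (up to) five forward
    blocks/macroblocks; mode 5 (DC) subtracts a fixed value.
    """
    ref = DC_REF if mode == 5 else ref_means[mode]
    return block_yuv - ref  # the (3,) reference broadcasts over all points
```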
5.2 Prediction Mode Decision
In rate-distortion optimization (RDO) mode selection on traditional video coding, the real encoding rate and distortion value usually need to be calculated by performing transform, quantization, entropy coding, inverse quantization and inverse transform. The process is time-consuming and computationally complex.
Inspired by the fast intra mode decision method for H.264/AVC in , we adopt SATD as the cost criterion for point cloud intra mode decision, in order to reduce computation cost while maintaining coding performance.
SATD is the sum of the absolute transformed differences between the current block's attributes and the reference values, which jointly reflects rate and distortion. In our scheme, after intra prediction on the current block, the block attribute residuals are transformed by the DCT, and the SATD is calculated to estimate the prediction performance:

SATD = sum_i |DCT(r)_i|,

where r denotes the prediction residual vector.
The smaller the SATD, the better the prediction performs. The intra prediction mode with the smallest SATD will be chosen as the best intra prediction mode.
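The SATD mode decision can be sketched as follows; `dct_ii` is a NumPy-only orthonormal DCT-II helper written for this sketch:

```python
import numpy as np

def dct_ii(x):
    """Orthonormal type-II DCT of a 1-D vector (NumPy only)."""
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    scale = np.where(k == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
    return (scale * basis) @ x

def satd(residual):
    """Sum of absolute transformed differences of a residual vector."""
    return float(np.abs(dct_ii(residual)).sum())

def best_mode(residuals):
    """Mode decision: the mode with the smallest SATD wins."""
    return int(np.argmin([satd(r) for r in residuals]))
```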
6 Adaptive Transform
6.1 Two Transform Modes
Graphs have flexible geometric structures and are natural representations of irregular 3D point clouds. Composed of vertices and edges, graphs preserve the underlying information about the real 3D structure and the correlations among points. The graph transform scheme in  uses kd-tree partition to get coding blocks and adopts a Laplacian-sparsity-optimized GFT in each block, which is demonstrated to achieve better transform efficiency than other works. In this paper, we adopt the optimized GFT as one of the transform modes.
For each coding block, we form a graph by connecting points with edges. Define the graph as G = (V, E), where V represents the nodes in the graph and E represents the set of edges.
In the adjacency matrix A, the edge weight a_ij describes the correlation between two nodes i and j through their geometry distance:

a_ij = exp(-d_ij^2 / sigma^2) if d_ij <= tau, and 0 otherwise,

where d_ij is the Euclidean distance between nodes i and j, sigma^2 denotes the variance of the graph node distances and tau is the Euclidean distance threshold between two nodes.
The degree matrix D is a diagonal matrix indicating the connectivity degree of each point, with entries d_ii = sum_j a_ij.
We choose the Laplacian matrix L = D - A, presented in Equation (6), as the graph shift operator.
The graph transform matrix Phi in Equation (7) is the eigenvector matrix of the Laplacian, L = Phi * Lambda * Phi^T, where Lambda is a diagonal matrix containing the eigenvalues of L. The graph transform matrix is then used to decorrelate point cloud attributes from the spatial domain into the graph spectral domain.
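A sketch of the block-level GFT construction above; the Gaussian kernel form and the cutoff value `tau` are assumptions for illustration:

```python
import numpy as np

def gft_basis(coords, tau=2.0):
    """Graph transform basis for one block from point geometry.

    Edge weights use a Gaussian kernel on Euclidean distance with a
    cutoff `tau` (assumed value); the basis is the eigenvector matrix
    of the combinatorial Laplacian L = D - A.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sigma2 = d[d > 0].var() + 1e-12        # variance term in the kernel
    A = np.exp(-d ** 2 / sigma2) * (d <= tau)
    np.fill_diagonal(A, 0.0)               # no self-loops
    L = np.diag(A.sum(axis=1)) - A         # Laplacian: degree minus adjacency
    _, Phi = np.linalg.eigh(L)             # symmetric => orthonormal columns
    return Phi

def gft(attrs, Phi):
    """Project per-point attributes onto the graph spectral basis."""
    return Phi.T @ attrs
```

Because `Phi` is orthonormal, the transform is perfectly invertible via `Phi @ coeffs`.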
However, the GFT is not suitable for all types of point clouds. For sparse point clouds, Euclidean distances among points are generally large, so it is difficult to construct an effective graph by estimating the underlying relationship between relative geometry positions and color attributes; an efficient graph transform matrix cannot be derived in this circumstance. For point clouds with complex geometry but flat color attributes, the GFT scheme also cannot work well, because it depends too heavily on the point cloud geometry information. The DCT scheme, however, can handle these kinds of point clouds well.
General one-dimensional DCT is the other transform mode in our transform scheme, which is combined with optimized GFT to deal with different types of point clouds.
6.2 Lagrangian-based Mode Decision
To achieve better transform efficiency for different point clouds, it is necessary to devise a mode decision scheme that adaptively selects the best transform mode. Different transform modes have different rate-distortion characteristics, and the goal of mode decision is to optimize overall performance and reach the best balance between rate and distortion: minimize the distortion D subject to a constraint R_c on the number of bits used R, i.e.,

min D  subject to  R <= R_c.
In traditional video coding, the Lagrange multiplier optimization technique is usually adopted to solve the optimization task in Equation (8). The Lagrangian formulation of the minimization problem is

J = D + λR,

where J is the RD cost of a certain mode and λ is the Lagrange multiplier balancing the trade-off between the rate R and the distortion D. The mode with the minimum J is chosen as the best mode.
Before we can calculate the RD cost in Function (9), the bitrate and distortion of each mode need to be estimated and the Lagrange multiplier needs to be determined.
6.2.1 Rate and Distortion Estimation
The bitrate of a transform mode is measured in bits per point (bpp) of the transformed attribute coefficients after quantization and entropy coding. In addition, extra bits are spent signaling the mode information.
The distortion of a transform mode is estimated by the mean square error (MSE) between YUV components before transform and the reconstructed YUV components after the processing of transform, quantization, inverse quantization and inverse transform shown in Figure 1.
6.2.2 Lagrangian Multiplier Derivation based on the λ-Q Model
In traditional video coding, Lagrangian optimization is a well-established approach to mode decision, and the Lagrange multiplier λ can be trained off-line as a mathematical function of the quantization step Q.
In this paper, we refer to a point cloud Lagrangian optimization method in  to derive Lagrange multiplier for our proposed attribute compression scheme.
The goal of transform mode decision is to find the mode with the minimum RD cost in Function (9). To solve the optimization task, we take the derivative of J with respect to the quantization step in Function (10). Setting the derivative to zero yields the Lagrange multiplier λ = -dD/dR in Function (11), at which the minimum RD cost is achieved; that is, λ is the slope of the RD curve.
We choose some typical point cloud datasets as test sets for our compression scheme and record rate and distortion values at several different quantization steps. Fitted RD curves based on those operating points are presented in Figure 6. We then use the slope of the rate and distortion differences between the current operating point and its neighbors to approximate the Lagrange multiplier in Function (11). The mathematical relationship between the Lagrange multiplier and the quantization step is estimated from the statistics of point cloud color coding performance.
Therefore, the λ-Q model for our proposed compression scheme can be approximated as

λ = c1 · Qstep^c2,

where c1 = 0.14 and c2 = 1.72.
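Assuming the power-law form of the λ-Q model with the constants quoted above, the transform mode decision can be sketched as:

```python
def lagrange_multiplier(q_step, c1=0.14, c2=1.72):
    """Assumed power-law lambda-Q model with the off-line trained constants."""
    return c1 * q_step ** c2

def select_transform_mode(modes, q_step):
    """Pick the transform mode minimizing J = D + lambda * R.

    `modes` maps a mode name to its (distortion, rate) estimate.
    """
    lam = lagrange_multiplier(q_step)
    return min(modes, key=lambda m: modes[m][0] + lam * modes[m][1])
```

Note how a larger quantization step raises λ, shifting the decision toward modes that save rate at the expense of distortion.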
In statistics, the coefficient of determination R² usually serves as the measure of goodness of fit. In our trained model, R² is 0.98, which implies that λ and Qstep are well fitted by the curve in Figure 7.
Based on the trained model, Lagrangian optimization can be achieved to adaptively select the best transform mode for different blocks of the point cloud.
7 Reordering Scan of Quantized Coefficients
After transform and uniform quantization, the YUV coefficients of all points in the point cloud have been processed. For a block, the color information is presented as a coefficient matrix in which each element is a luminance or chrominance component of a certain point. Before entropy coding, all the YUV coefficients of a block need to be scanned and reordered into a one-dimensional coefficient stream.
To improve entropy coding efficiency by increasing the length of continuous zero runs, seven reordering scan modes are supported for every quantized coefficient block. A schematic diagram of the seven scan modes is presented in Figure 8. The three elements in each row are the Y, U and V components of a certain point, respectively. The red point marks the beginning of the scan process.
Mode 0 is a raster scan: it horizontally scans the first point's YUV components and then the next point's components. Modes 1 to 6 vertically scan one kind of component across all points and follow a specified path to scan the other components; they differ in the scan starting point and the scan order of the YUV components.
After the reordering scan, the continuous run of zeros at the end of the stream is discarded to cut unnecessary bit expense. The reordering scan mode with the longest trailing zero run is selected.
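The scan and trailing-zero logic can be sketched as follows; only the raster mode and one component-wise variant are shown, and the per-mode starting points are omitted:

```python
import numpy as np

def raster_scan(coeffs):
    """Mode 0: scan each point's Y, U, V components in turn."""
    return coeffs.reshape(-1)

def component_scan(coeffs, order=(0, 1, 2)):
    """Component-wise scan: one whole column (Y, U or V) at a time.

    The paper's modes 1-6 also vary the starting point of the path,
    which is omitted in this sketch.
    """
    return np.concatenate([coeffs[:, c] for c in order])

def trailing_zeros(stream):
    """Length of the zero run at the end of a scanned stream."""
    nz = np.flatnonzero(stream)
    return len(stream) if nz.size == 0 else len(stream) - 1 - int(nz[-1])
```

The encoder would run every supported scan, keep the one with the largest `trailing_zeros` value, and drop that run before entropy coding.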
8 Experimental Results
To compare the performance of the proposed method with the state-of-the-art RAHT scheme  and the latest TMC1 anchor released at the MPEG 121st meeting, we conducted extensive tests on different point cloud frames. According to the MPEG PCC PSNR evaluation proposal  and the BD-BR performance evaluation scheme of traditional video coding , our method achieves significant coding performance improvements over both the RAHT scheme and the MPEG TMC1 anchor.
For the comparison with the RAHT scheme, we adopt the four datasets tested in  and three other datasets. For the comparison with the MPEG TMC1 anchor, we follow the common test conditions (CTC) for category 1.2 in , using datasets of class A and class B.
8.2 Implementation Details
For kd-tree partition, considering the compression performance and the computation complexity, the number of points in each coding block is limited to the empirical range (100, 200). The details of kd-tree partition on two point clouds are shown in Table 1.
To train the λ-Q model, we use datasets from the RAHT work and from MPEG classes A and B to estimate the Lagrange multiplier.
We adopt uniform quantization and arithmetic entropy coding on the transformed attribute residuals. The intra prediction modes, transform modes, scan modes, the number of discarded zeros for each block and the block attribute residuals are encoded into the bitstream. Different quantization steps are used to reach the test bitrate points and obtain different pairs of bitrate and PSNR. We use bits per point (bpp) to measure the total bitrate of the Y, U and V components for each point. PSNR is calculated on the Y component using the evaluation metric from the MPEG PCC standard proposal.
8.3 Objective Coding Performance Evaluation
The performance comparison between the proposed method and the RAHT scheme is tabulated in Table 2. The results show that the proposed method obtains a 37.95% BD-rate gain in the luma component and 26.83% and 23.34% BD-rate gains in the two chroma components, respectively; on average, a 29.37% BD-rate gain is achieved. Moreover, rate-distortion comparisons for four point clouds are shown in Figure 9. Our proposed method achieves significant coding performance improvements over the RAHT scheme on those datasets, and the PSNR gain for the Y component can be up to 4 dB.
The performance comparison between the proposed method and the MPEG TMC1 anchor is tabulated in Table 3. The results show that the proposed method achieves a 5.37% BD-rate gain in the luma component and 21.01% and 22.74% BD-rate gains in the two chroma components, respectively; on average, a 16.37% BD-rate gain is achieved. Moreover, rate-distortion comparisons for four point cloud contents are shown in Figure 10. Because of the well-established intra prediction scheme, our proposed method outperforms TMC1 at low bit rates. On one point cloud, the coding performance at high bit rates is also better than the TMC1 anchor; for the other three datasets, some improvements to our transform scheme remain as future work. Overall, our proposed method performs better than the TMC1 anchor in Table 3.
8.4 Ablation Study
To further analyze the contributions of the different coding tools in our proposed scheme, namely slice partition, block-based intra prediction, adaptive transform and reordering scan, we conduct an ablation study.
We construct five models, described as follows. The first model, V1 in , is regarded as the baseline of our work; it adopts the kd-tree and the optimized GFT to compress point cloud attributes. The second model, V2, adds the adaptive transform tool to the baseline, and the third model, V3, adds the intra prediction tool to V2. The slice partition scheme is adopted in the V4 model. On the basis of V4, we introduce the reordering scan method to process the bitstream, which completes our full framework for point cloud attribute compression.
The comparison of coding performance among the five models on a point cloud is presented in Figure 11. Experimental results show that all four coding tools bring gains in R-D performance.
9 Conclusion
In this paper, we propose an efficient hybrid point cloud attribute compression scheme. The novelty of the proposed scheme lies in the layered structure generation and block-based intra prediction. Moreover, the adaptive GFT-based transform is Lagrangian optimized, and the Lagrange multiplier is derived off-line based on the statistics of color attribute coding. Multiple reordering scan modes are designed to improve entropy coding efficiency. Experimental results demonstrate that our method performs significantly better than the state-of-the-art RAHT system, achieving a 29.37% BD-rate gain on average. Compared with the TMC1 anchor's coding results at the MPEG 121st meeting, a 16.37% BD-rate gain is obtained on average.
-  Christian Tulvan, Rufael Mekuria, Li Zhu, and Laserre Sebastien, “Use cases for point cloud compression,” ISO/IEC JTC1/SC29/WG11 MPEG, p. N16331, 2016.
-  Wanmin Wu and Cha Zhang, “Immersive 3d communication,” in Proceedings of the 22nd ACM international conference on Multimedia. ACM, 2014, pp. 1229–1230.
-  Yu-Hsun Lin, “3d multimedia signal processing,” in Proceedings of the 20th ACM international conference on Multimedia. ACM, 2012, pp. 1445–1448.
-  Niloy J. Mitra, A. N. Nguyen, and Leonidas Guibas, “Estimating surface normals in noisy point cloud data,” International Journal of Computational Geometry & Applications, vol. 14, no. 04n05, pp. 0400147–, 2008.
-  Charles Loop, Cha Zhang, and Zhengyou Zhang, “Real-time high-resolution sparse voxelization with application to image-based modeling,” in High-Performance Graphics Conference, 2013, pp. 73–79.
-  Cha Zhang, Dinei Florêncio, and Charles Loop, “Point cloud attribute compression with graph transform,” in IEEE International Conference on Image Processing, 2014, pp. 2066–2070.
-  Rufael Mekuria, K. Blom, and P. Cesar, “Design, implementation and evaluation of a point cloud codec for tele-immersive video,” IEEE Transactions on Circuits & Systems for Video Technology, vol. PP, no. 99, pp. 1–1, 2016.
-  Ricardo L. De Queiroz and Philip A. Chou, Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform, IEEE Press, 2016.
-  Ruwen Schnabel and Reinhard Klein, “Octree-based point-cloud compression,” in Eurographics / IEEE Vgtc Conference on Point-Based Graphics, 2006, pp. 111–121.
-  Yan Huang, Jingliang Peng, C. C. Jay Kuo, and M. Gopi, “A generic scheme for progressive point cloud coding,” IEEE Transactions on Visualization & Computer Graphics, vol. 14, no. 2, pp. 440–453, 2008.
-  Julius Kammerl, Nico Blodow, Radu Bogdan Rusu, and Suat Gedikli, “Real-time compression of point cloud streams,” in IEEE International Conference on Robotics and Automation, 2012, pp. 778–785.
-  Olivier Devillers and Pierre Marie Gandoin, “Geometric compression for interactive transmission,” in Visualization 2000. Proceedings, 2000, pp. 319–326.
-  Yiting Shao, Zhaobin Zhang, Zhu Li, Kui Fan, and Ge Li, “Attribute compression of 3d point clouds using laplacian sparsity optimized graph transform,” arXiv preprint arXiv:1710.03532, 2017.
-  Bruce Merry, Patrick Marais, and Gain James, “Compression of dense and regular point clouds,” Computer Graphics Forum, vol. 25, no. 4, pp. 709–716, 2006.
-  Yuxue Fan, Yan Huang, and Jingliang Peng, “Point cloud compression based on hierarchical point clustering,” in Signal and Information Processing Association Summit and Conference, 2013, pp. 1–7.
-  Aamir Anis, Philip A. Chou, and Antonio Ortega, “Compression of dynamic 3d point clouds using subdivisional meshes and graph wavelet transforms,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2016, pp. 6360–6364.
-  T Wiegand, G. J Sullivan, G Bjontegaard, and A Luthra, “Overview of the h.264/avc video coding standard,” IEEE Transactions on Circuits Systems for Video Technology, vol. 13, no. 7, pp. 560–576, 2003.
-  G. J. Sullivan, J. Ohm, Woo Jin Han, and T. Wiegand, “Overview of the high efficiency video coding (hevc) standard,” IEEE Transactions on Circuits Systems for Video Technology, vol. 22, no. 12, pp. 1649–1668, 2012.
-  Robert A. Cohen, Dong Tian, and Anthony Vetro, “Point cloud attribute compression using 3-d intra prediction and shape-adaptive transforms,” in Data Compression Conference, 2016, pp. 141–150.
-  Robert A. Cohen, Dong Tian, and Anthony Vetro, “Attribute compression for sparse point clouds using graph transforms,” in IEEE International Conference on Image Processing, 2016, pp. 1374–1378.
-  Preda Marius, “Report on pcc cfp answers,” ISO/IEC JTC1/SC29/WG11 MPEG, p. w17251, 2017.
-  K. R. Zalik, “An efficient k’-means clustering algorithm,” Pattern Recognition Letters, vol. 29, no. 9, pp. 1385–1391, 2008.
-  Y. Lee, Y. Sun, and Y. Lin, “Satd-based intra mode decision for h.264/avc video coding,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 3, pp. 463–469, March 2010.
-  Gary J Sullivan and Thomas Wiegand, “Rate-distortion optimization for video compression,” IEEE Signal Processing Magazine, vol. 15, no. 6, pp. 74–90, 1998.
-  Yiqun Xu, Shanshe Wang, Xinfeng Zhang, Shiqi Wang, Nan Zhang, Siwei Ma, and Wen Gao, “Rate-distortion optimized scan for point cloud color compression,” in Visual Communications and Image Processing (VCIP), 2017 IEEE. IEEE, 2017, pp. 1–4.
-  D. Tian, H. Ochimizu, C. Feng, R. Cohen, and A. Vetro, “Evaluation metrics for point cloud compression,” in m39966,MPEG January 2017,Geneva, 2017.
-  Bjontegaard Gisle, “Improvements of the bd-psnr model,” VCEG-AI11, July 2008.
-  Sebastian Schwarz, Philip A Chou, and Madhukar Budagavi, “Common test conditions for point cloud compression,” ISO/IEC JTC1/SC29/WG11 MPEG, p. N17354, 2018 January.