1 Introduction
Thanks to recent developments in 3D sensing technology, point clouds have become a useful representation of holograms that enables free-viewpoint viewing. They have been used in many fields such as virtual/augmented/mixed reality (VR/AR/MR), smart cities, robotics and automated driving[24]. Point cloud compression is therefore an increasingly important technique for efficiently processing and transmitting this type of data, and it has attracted much attention from researchers as well as the MPEG Point Cloud Compression (PCC) group[24]. Geometry compression and attribute compression are the two fundamental problems of static point cloud compression. Geometry compression aims at compressing the point locations, while attribute compression targets reducing the redundancy among points' attribute values, given the point locations. This paper focuses on geometry compression.
A point cloud is a set of unordered points that are irregularly distributed in Euclidean space. Because there is no uniform grid structure as in 2D pictures, traditional image and video compression schemes cannot work effectively on it. In recent years, many researchers have been dedicated to developing methods for it. Octrees[12][18][23] are usually used to compress the geometry information of point clouds, and they have been developed for both intra- and inter-frame coding. In the MPEG PCC group, point cloud compression is divided into two profiles: a video-coding-based method named V-PCC and a geometry-based method named G-PCC. All these methods are carefully designed by human experts who apply various heuristics to reduce the amount of information that needs to be preserved and to transform the resulting code in a way that is amenable to lossless compression. However, when designing a point cloud codec, human experts usually focus on a specific type of point cloud and tend to make assumptions about its features because of the diversity of point clouds. For example, an early version of the G-PCC reference software, TMC13, was divided into two parts, one for compressing point clouds belonging to category 1 and the other for compressing point clouds belonging to category 3, which shows how difficult it is to build a universal point cloud codec. Therefore, given a particular type of point cloud, designing a codec that adapts quickly to the characteristics of such point clouds and achieves better compression efficiency is a problem worth exploring.
In recent years, 3D machine learning has made great progress in high-level vision tasks such as classification and detection[6][22][30]. A natural question is whether we can employ this useful class of methods to further develop point cloud codecs, especially for types of point clouds for which no carefully designed codec exists. Usually, the design of a new point cloud codec can take years, but a compression framework based on neural networks may be able to adapt much more quickly to those niche tasks.
In this work, we consider point cloud compression as an analysis/synthesis problem with a bottleneck in the middle. A number of works aim to teach neural networks to discover compressive representations. Achlioptas et al.[1] proposed an end-to-end deep autoencoder that directly takes point clouds as input. Yang et al.[29] further proposed a graph-based encoder and a folding-based decoder. These autoencoders are able to extract compressive representations from point clouds, as measured by transfer classification accuracy. These works motivate us to develop a novel autoencoder-based lossy geometry point cloud codec. Our proposed architecture consists of four modules: a PointNet-based encoder, a uniform quantizer, an entropy estimation block and a nonlinear synthesis transformation module. To the best of our knowledge, it is the first autoencoder-based geometry compression codec that directly takes point clouds as input rather than voxel grids or collections of images.
2 Related works
Several approaches for point cloud geometry compression have been proposed in the literature. Sparse Voxel Octrees (SVOs), also called octrees[12][18], are commonly used to compress the geometry information of point clouds[23][10][8][19][13]. Schnabel et al.[23] first used octrees in point cloud compression; their work predicts occupancy codes by surface approximations and uses the octree structure to encode color information. Huang et al.[10] further developed this approach for progressive point cloud coding, reducing the entropy by bit reordering in the subdivision bytes. Their method also includes attribute coding, such as color coding based on frequency of occurrence and normal coding using spherical quantization. Several methods adopt inter- and intra-frame coding in point cloud geometry compression[8][7]. Kammerl et al.[13] developed a prediction octree and used XOR as an inter-coding tool. Mekuria et al.[19] further proposed an octree-based intra and inter coding system. In MPEG PCC[24], the depth of the octree is constrained to a certain level and each leaf node of the octree is regarded as a single point, a list of points or a geometric model. Triangulations, also called triangle soups, serve as the geometric model in PCC. Pavez et al.[21] first explored the polygon soup representation of geometry for point cloud compression.
Recently, 3D machine learning has made great progress. Deep networks that directly handle the points in a point set are state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. Qi et al.[6][22] first proposed a deep neural network that directly takes point clouds as input. After that, many other networks were proposed for high-level analysis problems on point clouds[15][9][17][28]. There are also a few works that focus on 3D autoencoders. Achlioptas et al.[1] proposed an end-to-end deep autoencoder that directly takes point clouds as input. Yang et al.[29] further proposed a graph-based encoder and a folding-based decoder. To the best of our knowledge, there are few machine-learning-based works focusing on point cloud compression, but several autoencoder-based methods have been proposed to enhance the performance of image compression. Toderici et al.[27] proposed to use recurrent neural networks (RNNs) for image compression. Theis et al.[25] achieved multiple bit rates by learning a scaling parameter that changes the effective quantization coarseness. Ballé et al.[2][4][3] used a similar autoencoder architecture and replaced the non-differentiable quantization function with a continuous relaxation by adding uniform noise.
3 Formulation of point cloud geometry compression
We consider a point cloud codec generally as an analysis/synthesis problem with a bottleneck in the middle. The input point cloud is represented as a set of 3D points $X = \{x_1, x_2, \ldots, x_N\}$, where each point $x_i$ is a vector of its $(x, y, z)$ coordinates. In order to compress the input point cloud, the encoder transforms the input point cloud from Euclidean space into a higher-dimensional feature space. In the feature space, we can discard some tiny components by quantization, which reduces the redundancy in the information, and obtain the compressive representation as the latent code $\hat{y}$. Then, the decoder transforms the compressive representation from the feature space back into Euclidean space, and we get the reconstructed point set $\hat{X}$.
4 Proposed geometry compression architecture
In this section, we describe the proposed compression architecture; the details of each component are discussed in the subsections. Our proposed architecture consists of four modules: a PointNet-based encoder, a uniform quantizer, an entropy estimation block and a nonlinear synthesis transformation module. We adopt an autoencoder[16] as our basic compression platform. The structure of the autoencoder is shown in Figure 1. First, the input point cloud is downsampled by the sampling layer to create a point cloud with a different point density. Then, the downsampled point set goes through the autoencoder-based codec. The codec consists of an encoder $E$ that takes an unordered point set as input and produces a compressive representation, a quantizer $Q$, and a decoder $D$ that takes the quantized representation produced by $Q$ and produces a reconstructed point cloud. Thus, our compression architecture can be formulated as:
$\hat{X} = D(Q(E(X)))$   (1)
where $X$ is the original unordered point set and $\hat{X}$ is the reconstructed point cloud.
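As a structural illustration (not the actual network), the pipeline of Equation (1) can be sketched with toy stand-ins for each module. The functions `encode`, `quantize` and `decode` below are hypothetical placeholders for the learned $E$, the uniform quantizer $Q$ and the learned $D$; only the PointNet-style feature-wise max pooling mirrors the real design:

```python
# Toy stand-in for the X_hat = D(Q(E(X))) pipeline of Eq. (1).
# The real encoder/decoder are neural networks; here E is a fixed
# per-point map followed by a feature-wise max (which makes it
# invariant to the order of input points), Q is uniform rounding,
# and D is a fixed toy "synthesis" back to a small point set.

def encode(points):
    """Shared per-point map + feature-wise max pooling (order-invariant)."""
    feats = [(x + y + z, x * 2.0, z - y) for (x, y, z) in points]
    return tuple(max(f[i] for f in feats) for i in range(3))

def quantize(latent, step=0.5):
    """Uniform scalar quantization of the latent code."""
    return tuple(round(v / step) * step for v in latent)

def decode(code):
    """Toy synthesis: emit a fixed number of points from the code."""
    a, b, c = code
    return [(a, b, c), (a / 2, b / 2, c / 2)]

cloud = [(0.1, 0.4, 0.9), (0.7, 0.2, 0.3)]
rec = decode(quantize(encode(cloud)))
# Permutation invariance: reversing the input point order gives the
# same latent code, hence the same reconstruction.
assert encode(cloud) == encode(list(reversed(cloud)))
```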
4.1 Sampling layer
In G-PCC, the octree-based geometry coding uses a quantization scale to control the lossy geometry compression[24]. Let $X$ be the set of 3D positions associated with the points of the input point cloud. The G-PCC encoder computes the quantized positions $\tilde{X}$ as follows:
$\tilde{X} = \mathrm{round}\big((X - T) \times s\big)$   (2)
where the scale $s$ and the translation $T$ are user-defined parameters that are signaled in the bitstream. After quantization, there will be many duplicate points sharing the same quantized positions. A common approach is to merge those duplicate points, which reduces the number of points in the input point cloud.
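The quantize-then-merge step can be sketched as follows. This is a minimal illustration assuming the form $\mathrm{round}((X - T) \times s)$ with a scalar scale, not the exact G-PCC bitstream syntax:

```python
def quantize_positions(points, scale, translation=(0.0, 0.0, 0.0)):
    """Quantize 3D positions as round((X - T) * s)."""
    tx, ty, tz = translation
    return [(round((x - tx) * scale),
             round((y - ty) * scale),
             round((z - tz) * scale)) for (x, y, z) in points]

def merge_duplicates(quantized):
    """Merge points sharing the same quantized position,
    preserving first-seen order."""
    return list(dict.fromkeys(quantized))

cloud = [(0.10, 0.20, 0.30), (0.12, 0.19, 0.31), (0.90, 0.10, 0.50)]
q = quantize_positions(cloud, scale=10.0)
# The first two points collapse onto the same grid cell and are merged,
# so the output cloud has fewer points than the input.
merged = merge_duplicates(q)
```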
Inspired by G-PCC, we use a sampling layer[22] to achieve the downsampling step. Given input points $\{x_1, x_2, \ldots, x_N\}$, we adopt iterative farthest point sampling (FPS)[22] to select a subset of points $\{x_{i_1}, x_{i_2}, \ldots, x_{i_M}\}$, such that each $x_{i_j}$ is the farthest point (in metric distance) from the already selected set $\{x_{i_1}, \ldots, x_{i_{j-1}}\}$ among the remaining points. In contrast to random sampling, the point density of the resulting point set is more uniform, which better preserves the shape characteristics of the original object.
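A minimal reference implementation of the FPS selection rule described above, seeded at the first point (an O(NM) loop for clarity, not the optimized sampling layer of [22]):

```python
import math

def farthest_point_sampling(points, m):
    """Select m points: start from the first point, then repeatedly add
    the point whose distance to the current subset is largest."""
    def dist(a, b):
        return math.dist(a, b)  # Euclidean distance (Python 3.8+)

    selected = [points[0]]
    # Minimum distance from every point to the selected subset so far.
    min_d = [dist(p, points[0]) for p in points]
    while len(selected) < m:
        idx = max(range(len(points)), key=lambda i: min_d[i])
        selected.append(points[idx])
        for i, p in enumerate(points):
            min_d[i] = min(min_d[i], dist(p, points[idx]))
    return selected

pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
# FPS skips the near-duplicate (0.1, 0, 0) in favor of spread-out points.
sub = farthest_point_sampling(pts, 3)
```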
4.2 Encoder and Decoder
Generally, an autoencoder can be regarded as an analysis function, $y = f_\phi(x)$, and a synthesis function, $\hat{x} = g_\theta(y)$, where $x$, $\hat{x}$ and $y$ are the original point cloud, the reconstructed point cloud and the compressed data, respectively. $\phi$ and $\theta$ are the optimized parameters of the analysis and synthesis functions.
To learn the encoded compressive representation, we consider the PointNet architecture[6][1]. There are $N$ points in the input point set (an $N \times 3$ matrix). Every point is encoded by several 1D convolution layers with kernel size 1. Each convolution layer is followed by a ReLU[20] and a batch-normalization layer[11]. In order to make the model invariant to input permutation, a feature-wise maximum layer follows the last convolutional layer to produce a $k$-dimensional latent code. This latent code is quantized, and the quantized latent code is encoded by the entropy encoder to obtain the final bitstream. In our experiments, we use five 1D convolutional layers, and the number of filters in each layer is 64, 128, 128, 256 and $k$, respectively, where $k$ is determined by the number of input points (see the experimental results for details).
Currently, there are two kinds of decoders for point clouds: the fully-connected decoder[1] and the folding-based decoder[29]. Both decoders are able to produce reconstructed point clouds. The folding-based decoder has a much smaller parameter size than the fully-connected decoder, but it introduces more hyperparameters, such as the number of grid points and the interval between grid points. Thus, we choose the fully-connected decoder, which interprets the latent code using three fully-connected layers to produce the reconstructed point cloud. Each fully-connected layer is followed by a ReLU, and the number of nodes in each layer is 256, 256 and $3M$ (three coordinates per output point), respectively.
4.3 Quantization
To reduce the amount of information needed for storage and transmission, quantization is an essential step in media compression. However, the derivative of a quantization function such as rounding is zero or undefined everywhere. Theis et al.[25] replaced the derivative in the backward pass of backpropagation with the derivative of a smooth approximation $r(y)$, so the effective derivative of the rounding function becomes:
$\frac{d}{dy}\,\mathrm{round}(y) := \frac{d}{dy}\, r(y)$   (3)
During backpropagation, the derivative of the rounding function is computed by Equation (3), while the rounding function itself is not replaced by the smooth approximation during the forward pass[25]. This is because if we replaced the rounding function with the smooth approximation completely, the decoder might learn to invert the smooth approximation, thereby affecting the entropy bottleneck layer that learns the entropy model of the latent code. In [25], the identity $r(y) = y$ (i.e., a constant derivative of 1) was found to work as well as more sophisticated choices.
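Numerically, this straight-through trick amounts to using the true rounding in the forward pass while differentiating as if rounding were the identity. A hand-written sketch (no autodiff framework assumed), minimizing a toy squared error through the quantizer:

```python
def forward(y, target):
    """Forward pass: quantize with real rounding, then squared error."""
    y_hat = float(round(y))
    loss = (y_hat - target) ** 2
    return y_hat, loss

def backward_straight_through(y, target):
    """Backward pass: chain rule with d round(y)/dy := 1, i.e. Eq. (3)
    with the identity smooth approximation r(y) = y."""
    y_hat = float(round(y))
    dloss_dyhat = 2.0 * (y_hat - target)
    dyhat_dy = 1.0  # straight-through: pretend rounding is the identity
    return dloss_dyhat * dyhat_dy

y, target = 1.7, 3.0
y_hat, loss = forward(y, target)
grad = backward_straight_through(y, target)
# The gradient is nonzero and points y toward the target, whereas the
# true derivative of round() would be zero almost everywhere.
```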
4.4 Rate-distortion Loss
In the lossy compression problem, one must trade off between the entropy of the discretized latent code (the rate $R$) and the error caused by the compression (the distortion $D$). Our goal is therefore to minimize the weighted sum of the rate and the distortion,
$L = R + \lambda D$   (4)
over the parameters of the encoder, the decoder and the rate estimation model that will be discussed later, where $\lambda$ controls the trade-off.
Entropy rate estimation has been studied by many researchers who use neural networks to compress images[25][2][4]. In our architecture, the encoder transforms the input point cloud $X$ into a latent representation $y$, using a nonlinear function $f_\phi$ with parameters $\phi$. The latent code $y$ is then quantized by the quantizer $Q$ to form $\hat{y}$. Since its values are discrete, $\hat{y}$ can be losslessly compressed by entropy coding techniques such as arithmetic coding. The rate of the discrete code $\hat{y}$, $R$, is lower-bounded by the entropy of the discrete probability distribution of $\hat{y}$, $H(\hat{y})$, that is:
$R \geq H(\hat{y}) = \mathbb{E}_{\hat{y} \sim m}\!\left[-\log_2 m(\hat{y})\right]$   (5)
where $m$ is the actual marginal distribution of the discrete latent code. However, $m$ is unknown to us, and we need to estimate it by building a probability model according to some prior knowledge. Suppose we obtain an estimate $p_{\hat{y}}$ of the probability model. Then the actual rate is given by the Shannon cross entropy between the marginal $m$ and the prior $p_{\hat{y}}$:
$R = \mathbb{E}_{\hat{y} \sim m}\!\left[-\log_2 p_{\hat{y}}(\hat{y})\right]$   (6)
Therefore, if the estimated model distribution is identical to the actual marginal distribution, the rate is minimal and the estimated rate is the most accurate. Similar to [4], we use the entropy bottleneck layer (https://tensorflow.github.io/compression/docs/entropy_bottleneck.html), which models the prior using a non-parametric, fully factorized density model:
$p_{\hat{y}\mid\psi}(\hat{y} \mid \psi) = \prod_i \left( p_{y_i\mid\psi^{(i)}}\!\left(\psi^{(i)}\right) * \mathcal{U}\!\left(-\tfrac{1}{2}, \tfrac{1}{2}\right) \right)\!(\hat{y}_i)$   (7)
where the vectors $\psi^{(i)}$ represent the parameters of each univariate distribution $p_{y_i\mid\psi^{(i)}}$. Note that each non-parametric density is convolved with a standard uniform density, which enables a better match of the prior to the marginal[4].
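The relationship between Equations (5) and (6) can be checked numerically: coding with a mismatched prior always costs at least the entropy of the true marginal (Gibbs' inequality). A small illustration with a hypothetical three-symbol latent alphabet:

```python
import math

def entropy(m):
    """H(m) = -sum m(s) log2 m(s): the rate lower bound of Eq. (5)."""
    return -sum(p * math.log2(p) for p in m.values() if p > 0)

def cross_entropy(m, p):
    """Expected bits when coding symbols drawn from m with model p,
    the actual rate of Eq. (6)."""
    return -sum(m[s] * math.log2(p[s]) for s in m)

m = {-1: 0.25, 0: 0.5, 1: 0.25}   # true marginal of the latent (assumed)
p = {-1: 0.2, 0: 0.6, 1: 0.2}     # estimated prior (assumed)

h = entropy(m)            # 1.5 bits
r = cross_entropy(m, p)   # > 1.5 bits: the model mismatch costs extra rate
```

When the prior equals the marginal, the cross entropy collapses to the entropy, which is exactly the "identical distributions minimize the rate" statement above.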
The distortion is computed by the Chamfer distance. Suppose there are $N$ points in the original point cloud, represented by an $N \times 3$ matrix whose rows are the 3D positions $(x, y, z)$, and the reconstructed point cloud is represented by an $M \times 3$ matrix. The number of original points $N$ may differ from $M$ because of the lossy compression. Let the original point cloud be $X$ and the reconstructed point set be $\hat{X}$. Then, the reconstruction error is computed by the Chamfer distance:
$d_{CH}(X, \hat{X}) = \frac{1}{N} \sum_{x \in X} \min_{\hat{x} \in \hat{X}} \lVert x - \hat{x} \rVert_2^2 + \frac{1}{M} \sum_{\hat{x} \in \hat{X}} \min_{x \in X} \lVert \hat{x} - x \rVert_2^2$   (8)
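The symmetric Chamfer distance is straightforward to implement directly; a minimal version, assuming the per-set-averaged squared-distance form (implementations in the literature differ in normalization):

```python
def chamfer_distance(X, Xhat):
    """Symmetric Chamfer distance between two 3D point sets:
    average squared distance to the nearest neighbour, in both
    directions, summed."""
    def sq(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    fwd = sum(min(sq(x, xh) for xh in Xhat) for x in X) / len(X)
    bwd = sum(min(sq(xh, x) for x in X) for xh in Xhat) / len(Xhat)
    return fwd + bwd

X = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
Xhat = [(0.0, 0.0, 0.0)]
# Forward term: (0 + 1)/2; backward term: 0/1.
d = chamfer_distance(X, Xhat)
```

Note that, unlike a per-point correspondence loss, this metric is well defined even when the two sets have different sizes, which is exactly why it suits a lossy codec whose output point count may differ from the input.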
Finally, our rate-distortion loss function is:
$L = \mathbb{E}\!\left[-\log_2 p_{\hat{y}}(\hat{y})\right] + \lambda\, \mathbb{E}\!\left[ d_{CH}\!\left(X, g_\theta(\hat{y})\right) \right]$   (9)
where $f_\phi$ is the nonlinear function of the encoder, $g_\theta$ is the nonlinear function of the decoder, $p_{\hat{y}}$ is the estimated probability model and $\hat{y} = Q(f_\phi(X))$. The expectations are approximated by averages over a training set of point clouds.
5 Experimental results
5.1 Datasets
Since our data-driven method requires a large number of point clouds for training, we use the ShapeNet dataset[5]. Shapes from the ShapeNet dataset are axis-aligned and centered in the unit sphere. The point cloud version of the ShapeNet dataset is obtained by uniformly sampling points on the triangles of the mesh models in the dataset. Unless stated otherwise, we train models with point clouds from a single object class, and the train/test split is 90%/10%.
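Uniform surface sampling means choosing triangles with probability proportional to their area and then drawing uniform barycentric points inside them; a minimal sketch of this standard procedure (the actual ShapeNet preprocessing pipeline may differ in details):

```python
import math
import random

def triangle_area(a, b, c):
    """Triangle area via the cross product of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def sample_mesh(triangles, n, rng=random.Random(0)):
    """Draw n points uniformly over the surface of a triangle mesh."""
    areas = [triangle_area(*t) for t in triangles]
    points = []
    for _ in range(n):
        a, b, c = rng.choices(triangles, weights=areas)[0]
        # Uniform barycentric coordinates; the square-root trick avoids
        # clustering toward one vertex.
        r1, r2 = rng.random(), rng.random()
        s = math.sqrt(r1)
        w0, w1, w2 = 1 - s, s * (1 - r2), s * r2
        points.append(tuple(w0 * a[i] + w1 * b[i] + w2 * c[i]
                            for i in range(3)))
    return points

tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
pts = sample_mesh(tri, 100)  # 100 points on the unit right triangle
```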
5.2 Implementation Details
The number of points in the original point cloud is 2048 and the latent code dimension is 512. To better compare with point clouds compressed by TMC13, we downsample the original point cloud to 1024, 512, 256 and 128 points, with corresponding latent code sizes of 256, 128, 64 and 32. Because TMC13 cannot compress normalized point clouds directly, we expand the normalized point clouds to be large enough that TMC13 can compress them properly. To be fair, we perform the same operation when compressing point clouds with our model: we add an extra normalization before feeding point clouds into our network and expand the reconstructed point clouds when computing their distortion.
We implement our model on the Python 2.7 platform with TensorFlow 1.12. We run our model on a computer with an i7-8700 CPU and a GTX 1070 GPU (with 8 GB of memory). We use the Adam optimizer[14] to train our network with a learning rate of 0.0005. We train the model with entropy optimization for 1200 epochs and the model without entropy optimization for 500 epochs. The batch size is 8, as limited by our GPU memory.
5.3 Compression results
We compare our method with the latest TMC13 anchor released at the 125th MPEG meeting. We experiment on four categories of point clouds: chair, airplane, table and car, which cover a rich variety of shapes. The chair category contains 6101 point clouds for training and 678 for testing; the airplane category 3640 for training and 405 for testing; the table category 7658 for training and 851 for testing; and the car category 6747 for training and 750 for testing. Rate-distortion performance for each category is shown in Figure 2. To avoid unfairly penalizing TMC13 due to the unavoidable cost of file headers, we exclude the header size from the bitstream produced by TMC13. The distortion in Figure 2 is the point-to-point geometry PSNR obtained from the MPEG pc_error tool[26]. Rate-distortion curves are obtained by averaging over all test point clouds. The results show that our method outperforms TMC13 on all categories of point clouds at all bitrates; on average, a 73.15% BD-rate gain is achieved.
In Figure 3 we show some test point clouds compressed to low bit rates. In line with the objective results, we find that our method produces fewer bits per point than TMC13 at similar PSNR reconstruction quality. The point clouds reconstructed by the proposed method are also denser than those compressed by TMC13.
To further analyze the entropy estimation module in our method, we conduct a simple ablation study. We consider the model without the entropy bottleneck layer as our baseline. The comparison of the RD curves of the baseline and our proposed model on the chair category is presented in Figure 4. The results show that the entropy estimation effectively reduces the size of the bitstream, yielding a 19.3% BD-rate gain.
6 Conclusion
In this paper, we propose a general deep autoencoder-based architecture for lossy geometry point cloud compression. Compared with hand-crafted codecs, this approach not only achieves better coding efficiency but can also adapt much more quickly to new media contents and formats. Experimental evaluation demonstrates that, on the given benchmark, the proposed model outperforms TMC13 in rate-distortion performance, achieving a 73.15% BD-rate gain on average.
To the best of our knowledge, this is the first autoencoder-based geometry compression codec that directly takes point clouds as input rather than voxel grids or collections of images. The algorithms we present may also be extended to attribute compression of point clouds or even point cloud sequence compression. To encourage future work, we will make all the materials public.
References
 [1] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas. Learning representations and generative models for 3d point clouds, 2018.
 [2] J. Ballé, V. Laparra, and E. P. Simoncelli. End-to-end optimized image compression, 2016.
 [3] J. Ballé, V. Laparra, and E. P. Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. In Picture Coding Symposium, 2016.
 [4] J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston. Variational image compression with a scale hyperprior, 2018.
 [5] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015.
 [6] R. Q. Charles, S. Hao, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In IEEE Conference on Computer Vision & Pattern Recognition, 2017.
 [7] R. L. de Queiroz and P. A. Chou. Motion-compensated compression of point cloud video. In 2017 IEEE International Conference on Image Processing (ICIP), pages 1417–1421, Sep. 2017.
 [8] D. C. Garcia and R. L. de Queiroz. Context-based octree coding for point-cloud video. In 2017 IEEE International Conference on Image Processing (ICIP), pages 1412–1416, Sep. 2017.
 [9] B. S. Hua, M. K. Tran, and S. K. Yeung. Pointwise convolutional neural network, 2017.
 [10] Y. Huang, J. Peng, C. C. Kuo, and M. Gopi. A generic scheme for progressive point cloud coding. IEEE Transactions on Visualization & Computer Graphics, 14(2):440–453, 2008.
 [11] S. Ioffe and C. Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on International Conference on Machine Learning, 2015.
 [12] C. L. Jackins and S. L. Tanimoto. Oct-trees and their use in representing three-dimensional objects. Computer Graphics & Image Processing, 14(3):249–270, 1980.
 [13] J. Kammerl, N. Blodow, R. B. Rusu, M. Beetz, E. Steinbach, and S. Gedikli. Real-time compression of point cloud streams. In IEEE International Conference on Robotics & Automation, 2012.
 [14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 [15] R. Klokov and V. Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
 [16] A. Krizhevsky and G. E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.
 [17] Y. Li, R. Bu, M. Sun, and B. Chen. PointCNN, 2018.
 [18] D. Meagher. Geometric modeling using octree encoding. Computer Graphics & Image Processing, 19(2):129–147, 1982.
 [19] R. Mekuria, K. Blom, and P. Cesar. Design, implementation and evaluation of a point cloud codec for tele-immersive video. IEEE Transactions on Circuits & Systems for Video Technology, PP(99):1–1, 2016.
 [20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In International Conference on International Conference on Machine Learning, 2010.
 [21] E. Pavez and P. A. Chou. Dynamic polygon cloud compression. In IEEE International Conference on Acoustics, 2017.
 [22] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5099–5108. Curran Associates, Inc., 2017.
 [23] R. Schnabel and R. Klein. Octree-based point-cloud compression. In Eurographics, 2006.
 [24] S. Schwarz, M. Preda, V. Baroncini, M. Budagavi, P. Cesar, P. Chou, R. Cohen, M. Krivokuća, S. Lasserre, Z. Li, J. Llach, K. Mammou, R. Mekuria, O. Nakagami, E. Siahaan, A. Tabatabai, A. M. Tourapis, and V. Zakharchenko. Emerging MPEG standards for point cloud compression. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, PP:1–1, 12 2018.
 [25] L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. CoRR, abs/1703.00395, 2017.
 [26] D. Tian, H. Ochimizu, C. Feng, R. Cohen, and A. Vetro. Geometric distortion metrics for point cloud compression. pages 3460–3464, 09 2017.
 [27] G. Toderici, S. M. O’Malley, S. J. Hwang, D. Vincent, D. Minnen, S. Baluja, M. Covell, and R. Sukthankar. Variable rate image compression with recurrent neural networks. Computer Science, 2015.
 [28] W. Wang, R. Yu, Q. Huang, and U. Neumann. SGPN: Similarity group proposal network for 3d point cloud instance segmentation, 2017.
 [29] Y. Yang, C. Feng, Y. Shen, and D. Tian. Foldingnet: Point cloud autoencoder via deep grid deformation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 [30] Y. Zhou and O. Tuzel. Voxelnet: Endtoend learning for point cloud based 3d object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.