Recently, 3D data generation problems based on deep neural networks have attracted significant research interest and have been addressed through various approaches, including image-to-point cloud [11, 19], image-to-voxel, image-to-mesh, point cloud-to-voxel [6, 50], and point cloud-to-point cloud. The generated 3D data has been used to achieve outstanding performance in a wide range of computer vision applications (e.g., segmentation [30, 37, 44], volumetric shape representation, object detection [4, 35, 24], contour detection, classification [29, 34], and scene understanding [20, 43]).
However, little effort has been devoted to the development of generative adversarial networks (GANs) that can generate 3D point clouds in an unsupervised manner. To the best of our knowledge, the only works on GANs for transforming random latent codes (i.e., vectors) into 3D point clouds are  and . The method in  generates point clouds using only fully connected layers. The method in  exploits local topology by using k-nearest neighbor techniques to produce geometrically accurate point clouds. However, it suffers from high computational complexity as the number of dynamic graph updates increases. Additionally, it can only generate a limited number of object categories (e.g., chair, airplane, and sofa) using point clouds.
In this paper, we present a novel method called tree-GAN that can generate 3D point clouds from random latent codes in an unsupervised manner. It can also generate multi-class 3D point clouds without training on each class separately. To achieve state-of-the-art performance in terms of both accuracy and computational efficiency, we propose a novel tree-structured graph convolution network (TreeGCN) as a generator for tree-GAN. The proposed TreeGCN preserves the ancestor information of each point and utilizes this information to extract new points via graph convolutions. A branching process and a loop term with supports in the TreeGCN further enhance the representation power of points. These two properties enable the TreeGCN to produce more accurate point clouds and express more diverse object categories. Additionally, we demonstrate that using the ancestors of features in the TreeGCN is computationally more efficient than using the neighbors of features in traditional GCNs. Fig. 1 shows the effectiveness of our tree-GAN.
The main contributions of this paper are threefold.
We present the novel tree-GAN method, which is a deep generative model that can generate multi-class 3D point clouds in unsupervised settings (Section 3).
We introduce the TreeGCN based generator. The performance of traditional GCNs can be improved significantly by adopting the proposed tree structures for graph convolutions. Based on the proposed tree structures, tree-GAN can generate parts of objects by selecting particular ancestors (Section 4).
We mathematically interpret the TreeGCN and highlight its desirable properties (Section 5).
2 Related Work
Graph Convolutional Networks: Over the past few years, a number of works have focused on the generalization of deep neural networks for graph problems [3, 9, 16, 25]. Defferrard et al.  proposed fast-learning convolutional filters for graph classification problems. Using these filters, they significantly accelerated the spectral decomposition process, which was one of the main computational bottlenecks in traditional graph convolution problems with large datasets. Kipf and Welling  introduced scalable GCNs based on first-order approximations of spectral graph convolutions for semi-supervised classification, in which convolution filters only use the information from neighboring vertices instead of the information from the entire network.
Because the aforementioned GCNs were originally designed for classification problems, the connectivity of graphs was assumed to be given as prior knowledge. However, this setting is not appropriate for problems of dynamic model generation. For example, in unsupervised settings for 3D point cloud generation, the topologies of 3D point clouds are non-deterministic. Even for the same class (e.g., chairs), 3D point clouds can be represented by various topologies. To represent the diverse topologies of 3D point clouds, our TreeGCN utilizes no prior knowledge regarding object models.
GANs for 3D Point Cloud Generation: GANs  for 2D image generation tasks have been widely studied with great success [10, 18, 23, 26, 31, 36, 41, 42, 48, 49], but GANs for 3D point cloud generation have rarely been studied in the computer vision field. Recently, Achlioptas et al.  proposed a GAN for 3D point clouds called r-GAN, whose generator is based on fully connected layers. Because fully connected layers cannot maintain structural information, r-GAN has difficulty generating realistic shapes with diversity. Valsesia et al.  used graph convolutions in the generators of GANs. At each graph convolution layer during training, adjacency matrices were dynamically constructed from the feature vectors of the vertices. Unlike traditional graph convolutions, the connectivity of a graph was not assumed to be given as prior knowledge. However, to extract this connectivity, computing the adjacency matrix at a single layer incurs computational complexity that is quadratic in the number of vertices. Therefore, this approach is intractable for multi-batch and multi-layer networks.
Similar to the method in , our tree-GAN requires no prior knowledge regarding the connectivity of a graph. However, unlike the method in , the tree-GAN is computationally efficient because it does not construct adjacency matrices. Instead, the tree-GAN uses ancestor information from the tree to exploit the connectivity of a graph, for which only a list describing the tree structure is needed.
Tree-structured Deep Networks: Tree structures have been adopted in various deep networks [5, 21, 27, 28, 32]. However, to the best of our knowledge, no previous methods have used tree structures for either graph convolutions or GANs. For example, Gadelha et al. 
used tree-structured networks to generate 3D point clouds via a variational autoencoder (VAE). However, this method relied on the assumption that the inputs are 1D-ordered lists of points obtained by space-partitioning algorithms such as k-dimensional trees and random projection trees. Thus, it required additional preprocessing steps for a valid implementation. Because its network comprised only 1D convolution layers, the method could not extract meaningful information from unordered 3D point clouds. In contrast, the proposed tree-GAN can not only deal with unordered points but also extract semantic parts of objects.
3 3D Point Cloud GAN
The loss function of the generator, $\mathcal{L}_G$, is defined as
$$\mathcal{L}_G = -\mathbb{E}_{z \sim Z}\left[D(G(z))\right], \tag{1}$$
where $G$ and $D$ denote the generator and discriminator, respectively, and $Z$ represents a latent code distribution. We design $Z$ with a normal distribution, $\mathcal{N}(0, I)$. The loss function of the discriminator, $\mathcal{L}_D$, is defined as
$$\mathcal{L}_D = \mathbb{E}_{z \sim Z}\left[D(G(z))\right] - \mathbb{E}_{x \sim R}\left[D(x)\right] + \lambda_{gp}\,\mathbb{E}_{\hat{x}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right], \tag{2}$$
where $\hat{x}$ are sampled from line segments between real and fake point clouds, $G(z)$ and $x$ denote generated and real point clouds, respectively, and $R$ represents a real data distribution. In (2), we use a gradient penalty to satisfy the 1-Lipschitz condition , where $\lambda_{gp}$ is a weighting parameter.
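As a minimal sketch of the two losses, the following NumPy toy uses a linear critic $D(x) = \langle w, x \rangle$, whose input gradient is $w$ itself, so the gradient penalty has a closed form. All names here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic D(x) = <w, x>; its input gradient is w itself."""
    return float(np.dot(w, x))

def generator_loss(z_batch, G, D, w):
    """L_G = -E_z[D(G(z))]."""
    return -np.mean([D(G(z), w) for z in z_batch])

def discriminator_loss(z_batch, x_batch, G, D, w, lam=10.0):
    """L_D = E_z[D(G(z))] - E_x[D(x)] + lam * E[(||grad D(x_hat)|| - 1)^2]."""
    fake_term = np.mean([D(G(z), w) for z in z_batch])
    real_term = np.mean([D(x, w) for x in x_batch])
    # x_hat lies on line segments between real and fake samples; for a
    # linear critic, grad_x D(x_hat) = w regardless of x_hat.
    gp = (np.linalg.norm(w) - 1.0) ** 2
    return fake_term - real_term + lam * gp

G = lambda z: 2.0 * z            # toy "generator"
w = np.array([0.6, 0.8])         # ||w|| = 1, so the gradient penalty vanishes
z_batch = rng.normal(size=(4, 2))
x_batch = rng.normal(size=(4, 2))
print(discriminator_loss(z_batch, x_batch, G, critic, w))
```

With a real network, the gradient term would of course be obtained by automatic differentiation rather than in closed form.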
4 Proposed TreeGCN
$$p_i^{l+1} = \sigma\left(W^l p_i^l + \sum_{q_j^l \in N(p_i^l)} U^l q_j^l + b^l\right), \tag{3}$$
where $\sigma$ is the activation unit, $p_i^l$ is the $i$-th node in the graph (i.e., a 3D coordinate of a point cloud) at the $l$-th layer, $q_j^l$ is the $j$-th neighbor of $p_i^l$, and $N(p_i^l)$ is the set of all neighbors of $p_i^l$. Then, the real and generated point clouds $x$ and $G(z)$ in (2) can each be represented by a point set $\{p_i^L\}_{i=1}^{n}$, where $L$ is the final layer and $n$ is the number of points at $L$.
During training, GCNs find the best weights $W^l$ and $U^l$ and the best bias $b^l$ at each layer, and then generate 3D coordinates for point clouds by using these parameters to ensure similarity to real point clouds. The first and second terms in (3) are called the loop and neighbors terms, respectively.
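A conventional GCN update of this form (loop term plus neighbors term, followed by a nonlinearity) can be sketched in a few lines of NumPy; the names and the choice of ReLU as the activation are illustrative.

```python
import numpy as np

def gcn_layer(points, neighbors, W, U, b):
    """One conventional GCN layer, as in (3):
    p_i' = sigma(W p_i + sum_{q_j in N(p_i)} U q_j + b)."""
    out = []
    for i, p in enumerate(points):
        agg = W @ p + b
        for j in neighbors[i]:
            agg = agg + U @ points[j]
        out.append(np.maximum(0.0, agg))   # ReLU as the activation sigma
    return np.stack(out)

# Tiny example: 3 points in a 2-D feature space with a fixed neighbor list.
pts = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
nbrs = {0: [1], 1: [0, 2], 2: [1]}
W = np.eye(2); U = 0.5 * np.eye(2); b = np.zeros(2)
print(gcn_layer(pts, nbrs, W, U, b))
```

Note that the neighbor list `nbrs` must be given in advance, which is exactly the prior-knowledge requirement that the TreeGCN removes.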
To enhance a conventional GCN such as that used in , we propose a novel GCN augmented with tree structures (i.e., TreeGCN). The proposed TreeGCN introduces a tree structure for hierarchical GCNs by passing information from the ancestors of vertices to their descendants. The main unique characteristic of the TreeGCN is that each vertex updates its value by referring to the values of its ancestors in the tree instead of those of its neighbors. Traditional GCNs, such as that defined in (3), can be considered as methods that only refer to neighbors at a single depth. The proposed graph convolution is defined as
$$p_i^{l+1} = \sigma\left(F_K^l(p_i^l) + \sum_{q_j \in A(p_i^l)} U_j^l q_j + b^l\right), \tag{4}$$
where there are two major differences compared to (3). One is an improved conventional loop term using a sub-network $F_K^l$, where $p_i^{l+1}$ is generated by supports from $p_i^l$. We call this term a loop with $K$ supports, as explained in Section 4.1. The other difference is the consideration of the values from $A(p_i^l)$, all ancestors in the tree, to update the value of the current point, where $A(p_i^l)$ denotes the set of all ancestors of $p_i^l$. We call this the ancestor term, as explained in Section 4.1.
4.1 Advanced Graph Convolution
The proposed tree-structured graph convolution (i.e., GraphConv in Fig. 1) aims to modify the coordinates of points using the loop with $K$ supports and the ancestor term.
Loop with $K$ supports: As a replacement for the single-parameter loop term in (3), we define the loop term $F_K^l$ in (5) through a fully connected layer containing $K$ nodes. A conventional GCN using first-order approximations adopts a single parameter $W^l$ in its loop term to generate the next point from the current point. However, for large graphs, the representation capacity of a single parameter is insufficient for describing a complex point distribution. Therefore, our loop term utilizes $K$ supports to represent a more complex distribution of points, as illustrated in Fig. 2.
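A minimal sketch of a loop term with $K$ supports follows; this is one plausible reading in which a point is lifted to $K$ support activations by a fully connected layer and then combined back into the point feature space. The exact sub-network $F_K$ may differ from this, and all names are ours.

```python
import numpy as np

def loop_with_k_supports(p, W_in, W_out):
    """Illustrative loop term F_K(p): lift p to K support features via a
    fully connected layer (K = W_in.shape[0]), apply a nonlinearity, and
    project back. A single-parameter loop would instead be just W @ p."""
    h = np.maximum(0.0, W_in @ p)   # K support activations
    return W_out @ h                # combine supports into the next point

rng = np.random.default_rng(1)
K, d = 10, 3
W_in = rng.normal(size=(K, d))
W_out = rng.normal(size=(d, K))
p = rng.normal(size=d)
print(loop_with_k_supports(p, W_in, W_out).shape)
```

The point of the construction is only that $K$ learned supports give the loop term more capacity than one shared matrix.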
Ancestor term: For graph convolution, knowing the connectivity of a graph is very important because this information allows a GCN to propagate useful information from a vertex to other connected vertices. However, in our point cloud generation setting, it is impossible to use prior knowledge regarding connectivity because we must be able to generate diverse typologies of point clouds, even for the same object category. Therefore, the dynamic 3D point generation problem cannot be addressed using traditional GCNs because such networks assume that the connectivity of a graph is given. As a replacement for the neighbor term in (3), we define the ancestor term in (4) as follows:
where denotes the set of all ancestors of . This term combines all information from the ancestors through a linear mapping . Because each ancestor belongs to a different feature space at a different layer, our ancestor term can fuse all information from previous layers and different feature spaces. To generate the next point, the current point refers to its ancestors in various feature spaces to find the best mapping to combine ancestor information effectively. By using this new ancestor term, our tree-GAN obtains several desirable mathematical properties, as explained in Section 5. Fig.3 illustrates the graph convolution process with the ancestor term.
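The ancestor aggregation can be sketched directly: each ancestor carries features from an earlier layer (possibly of a different dimension), and a per-ancestor linear map projects it into the current feature space before summation. Names are illustrative.

```python
import numpy as np

def ancestor_term(ancestors, U_list, b):
    """Ancestor term of (4): sum_j U_j q_j + b, where q_j are the features
    of the current point's ancestors at earlier layers. Each ancestor gets
    its own linear map U_j, so features from different layers (and feature
    spaces) can be fused into one update."""
    agg = b.copy()
    for U, q in zip(U_list, ancestors):
        agg += U @ q
    return agg

rng = np.random.default_rng(2)
d = 3
ancestors = [rng.normal(size=2), rng.normal(size=4)]       # different dims per layer
U_list = [rng.normal(size=(d, 2)), rng.normal(size=(d, 4))]
print(ancestor_term(ancestors, U_list, np.zeros(d)))
```

Because only the ancestor list is needed, no adjacency matrix is ever built.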
Branching is a procedure for increasing the total number of points and is similar to up-sampling in 2D convolution. In branching, a learned matrix $B^l$ transforms a single point into $d_l$ child points, where $d_l > 1$ is the branching degree and the $j$-th child is produced by $b_j^l$, the $j$-th column of $B^l$. Then, the total number of points at the $(l+1)$-th layer is $d_l \cdot n_l$, where $n_l$ is the number of points at the $l$-th layer. In our experiments, we use different branching degrees for different layers. Note that the number of points in the final layer is the product of the branching degrees of all layers. Fig. 4 presents an example of branching.
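The branching step can be sketched as follows. Here each child is produced by its own small matrix applied to the parent point; the exact parameterization of $B^l$ in the paper may differ, so treat the shapes and names as illustrative.

```python
import numpy as np

def branch(points, B):
    """Branching: map each of the n_l points to d_l children. B has shape
    (d_l, feat, feat); child j of point p is B[j] @ p, so the output holds
    d_l * n_l points (children are grouped per parent). Illustrative sketch."""
    children = [Bj @ p for p in points for Bj in B]
    return np.stack(children)

rng = np.random.default_rng(3)
n_l, feat, d_l = 4, 3, 2
pts = rng.normal(size=(n_l, feat))
B = rng.normal(size=(d_l, feat, feat))
out = branch(pts, B)
print(out.shape)
```

Chaining such layers multiplies the point count by each layer's degree, which is why the final count is the product of all branching degrees.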
5 Mathematical Properties
In this section, we mathematically analyze the geometric relationships between generated points and demonstrate how these relationships are formulated in the output Euclidean space via tree-structured graph convolutions.
Let $p_j$ and $p_k$ in (4) be generated points that share the same parent and different parents, respectively, with $p_i$ in the final layer $L$. Let $F_K^l(\cdot)$ in (5) be the loop of a point at the $l$-th layer. Hereafter, we omit the superscripts of $p_i$, $p_j$, and $p_k$ if the superscript indicates the final layer $L$. Then,
where $A(p_i)$ are all ancestors of point $p_i$ up to the $l$-th layer. For simplicity, we ignore the branching process and the bias term in (5).
Based on Proposition 1, we can prove that the following two statements are true:
The geometric distance between two points is determined by the number of shared ancestors. If two points $p_i$ and $p_k$ have different ancestors, then the geometric distance between these points is calculated as the sum of the differences between their ancestors in each layer and the differences between their loops. Thus, as their ancestors become increasingly different, the geometric distance between the two points increases.
Geometrically related points share the same ancestors. If two points $p_i$ and $p_j$ share the same ancestors, the geometric distance between these points is affected only by their loops, as shown in (8). Thus, the geometric distance between points with the same ancestors in (8) can decrease compared to that between points with different ancestors in (9).
6 Fréchet Point Cloud Distance
For quantitative comparisons between GANs, we require evaluation metrics that can accurately measure the quality of the 3D point clouds generated by GANs. In the case of 2D data generation problems, the Fréchet inception distance (FID)  is the most common metric. FID adopts a pre-trained Inception-v3 model  to utilize its feature space for evaluation. Although the conventional metrics proposed by Achlioptas et al.  can be used to evaluate the quality of generated points by directly measuring matching distances (e.g., MMD or CD) between real and generated point clouds, they can be considered sub-optimal because the goal of a GAN is not to generate samples similar to particular real samples, but to generate a synthetic probability measure that is as close as possible to the real probability measure. This perspective has been explored in unsupervised 2D image generation tasks using GANs [33, 17]. Therefore, we propose a novel evaluation metric for generated 3D point clouds called the Fréchet point cloud distance (FPD).
Similar to FID, the proposed FPD calculates the 2-Wasserstein distance between real and fake Gaussian measures in the feature spaces extracted by PointNet , as follows:
$$\text{FPD} = \lVert m_{\mathbb{P}} - m_{\mathbb{Q}} \rVert_2^2 + \operatorname{Tr}\left( \Sigma_{\mathbb{P}} + \Sigma_{\mathbb{Q}} - 2 \left( \Sigma_{\mathbb{P}} \Sigma_{\mathbb{Q}} \right)^{1/2} \right), \tag{10}$$
where $m_{\mathbb{P}}$ and $\Sigma_{\mathbb{P}}$ are the mean vector and covariance matrix of the features calculated from real point clouds, respectively, and $m_{\mathbb{Q}}$ and $\Sigma_{\mathbb{Q}}$ are the mean vector and covariance matrix calculated from generated point clouds, respectively. In (10), $\operatorname{Tr}(A)$ is the sum of the elements along the main diagonal of matrix $A$. In this paper, for evaluation purposes, we use both the conventional evaluation metrics  and the proposed FPD.
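Equation (10) can be computed in plain NumPy. The trace of $(\Sigma_{\mathbb{P}} \Sigma_{\mathbb{Q}})^{1/2}$ is evaluated through the equivalent symmetric form $(\Sigma_{\mathbb{Q}}^{1/2} \Sigma_{\mathbb{P}} \Sigma_{\mathbb{Q}}^{1/2})^{1/2}$, which is numerically safe via eigendecomposition. In practice the features would come from a pre-trained PointNet; here any feature matrix of shape (n, d) works, and the random features are only for illustration.

```python
import numpy as np

def _sqrtm_psd(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    vals = np.clip(vals, 0.0, None)      # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fpd(feats_real, feats_fake):
    """2-Wasserstein distance between Gaussians fitted to two feature sets:
    ||m_r - m_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^{1/2}), as in (10)."""
    m_r, m_f = feats_real.mean(0), feats_fake.mean(0)
    S_r = np.cov(feats_real, rowvar=False)
    S_f = np.cov(feats_fake, rowvar=False)
    S_f_half = _sqrtm_psd(S_f)
    covmean = _sqrtm_psd(S_f_half @ S_r @ S_f_half)
    return float(np.sum((m_r - m_f) ** 2) + np.trace(S_r + S_f - 2.0 * covmean))

rng = np.random.default_rng(4)
x = rng.normal(size=(500, 8))
print(fpd(x, x))   # identical feature sets -> distance ~ 0
```

As with FID, a lower FPD indicates that the generated feature distribution is closer to the real one.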
7 Experimental Results
Implementation details: We used the Adam optimizer for both the generator and discriminator networks. In the generator, we used LeakyReLU as the nonlinearity function without batch normalization. The network architecture of the discriminator was the same as that of r-GAN . The discriminator was updated five times per iteration, while the generator was updated once per iteration. As shown in Fig. 1, a latent vector sampled from a normal distribution acts as the input. Seven layers were used for the TreeGCN, and the loop term of the TreeGCN in (5) used multiple supports. The total number of points in the final layer was fixed across all experiments.
Comparison: There are only two conventional GANs for 3D point cloud generation: r-GAN  and the GAN proposed by Valsesia et al. . Thus, the proposed tree-GAN was compared to these two GANs. While the conventional GANs in [39, 1] train separate networks for each class, our tree-GAN trains only a single network for multiple classes of objects.
Evaluation metrics: We evaluated the tree-GAN using ShapeNet (https://www.shapenet.org/), a large-scale dataset of 3D shapes. Evaluations were conducted in terms of the proposed FPD (Section 6) and the metrics used by Achlioptas et al. . As a reference model for FPD, we used the classification module of PointNet  because it can handle partial inputs of objects. This property is suitable for FPD because generated point clouds gradually form shapes, meaning point clouds can be only partially complete during training. For the implementation of FPD, we first trained the classification module and then extracted a feature vector from the output of its dense layers to calculate the mean and covariance in (10).
|Method|JSD|MMD-CD|MMD-EMD|COV-CD|COV-EMD|
|Valsesia et al. (no up.)|0.119|0.0033|0.104|26|20|
|Valsesia et al. (up.)|0.100|0.0029|0.097|30|26|
|Valsesia et al. (no up.)|0.164|0.0010|0.102|24|13|
|Valsesia et al. (up.)|0.083|0.0008|0.071|31|14|
7.1 Ablation Study
We analyze the proposed tree-GAN and examine its useful properties, namely unsupervised semantic part generation, latent space representation via interpolation, and branching.
Unsupervised semantic part generation: Our tree-GAN can generate point clouds for different semantic parts, even with no prior knowledge regarding those parts during training. The tree-GAN can perform this semantic generation owing to its tree-structured graph convolution, which is a unique characteristic among GAN-based 3D point cloud methods. As stated in Proposition 1, the geometric distance between points is determined by their ancestors in the tree. Different ancestors imply geometrically different families of points. Therefore, by selecting different ancestors, our tree-GAN can generate semantically different parts of point clouds. Note that these geometric families of points are consistent between different latent code inputs. For example, let $z_1$ and $z_2$ be sampled latent codes, and let $G(z_1)$ and $G(z_2)$ be their corresponding generated point clouds. Let $I$ be a certain subset of point indices; then $G(z)_I$ denotes the subset of points of $G(z)$ with indices in $I$. As shown in Fig. 5, selecting the same subset of indices yields the same semantic parts, even though the latent code inputs are different. For example, all red points indexed by one subset (e.g., $G(z_1)_{I_1}$ and $G(z_2)_{I_1}$) represent the cockpits of airplanes, while all blue points indexed by another subset (e.g., $G(z_1)_{I_2}$ and $G(z_2)_{I_2}$) represent the tails of airplanes.
From this ablation study, we can verify that the differences between ancestors determine the semantic differences between points and that two points with the same ancestor (e.g., green and purple points in the left wings in Fig. 5) maintain their relative distances for different latent codes.
Interpolation: We interpolated 3D point clouds by setting the input latent code to $z = (1 - \alpha) z_1 + \alpha z_2$ for six values of $\alpha$. The leftmost and rightmost point clouds of the airplanes in Fig. 5 were generated from the two endpoint latent codes. Our tree-GAN can also generate realistic interpolations between two point clouds.
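This kind of latent interpolation is a one-liner; the sketch below assumes the common linear convention $z = (1 - \alpha) z_1 + \alpha z_2$, with each interpolated code then fed to the trained generator.

```python
import numpy as np

def interpolate_latents(z1, z2, alphas):
    """Linearly interpolate between two latent codes: z = (1 - a) z1 + a z2.
    Each returned code would be passed through the trained generator."""
    return [(1.0 - a) * z1 + a * z2 for a in alphas]

rng = np.random.default_rng(5)
z1, z2 = rng.normal(size=96), rng.normal(size=96)
codes = interpolate_latents(z1, z2, np.linspace(0.0, 1.0, 6))
print(len(codes))   # six codes; the endpoints equal z1 and z2
```

Smooth transitions between the resulting point clouds indicate that the generator's latent space is well structured.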
Branching strategy: We conducted experiments to show that the convergence dynamics of the proposed metric are not sensitive to the branching strategy. As in the other experiments, the total number of generated points was fixed, but different branching degrees were used. Please refer to the convergence graphs in the supplementary materials.
7.2 Comparisons with Other GANs
The proposed tree-GAN was quantitatively and qualitatively compared to other state-of-the-art GANs for point cloud generation, in terms of both accuracy and computational efficiency. The supplementary materials contain more results and comparisons for 3D point cloud generation.
Comparisons: Tables 1 and 2 contain quantitative comparisons in terms of the metrics used by Achlioptas et al.  (i.e., JSD, MMD-CD, MMD-EMD, COV-CD, and COV-EMD) and the proposed FPD, respectively. The proposed tree-GAN consistently outperforms other GANs by a large margin in terms of all metrics, demonstrating the effectiveness of the proposed TreeGCN.
For qualitative comparisons, we equally divided the entire index set into four subsets and painted the points in each subset with the same color. Although the real 3D point clouds were unordered, as shown in Figs. 1 and 6, our tree-GAN successfully generated 3D point clouds with intuitive semantic meaning without any prior knowledge, whereas r-GAN failed to generate semantically ordered point clouds. Additionally, our tree-GAN could generate detailed and complex parts of objects, whereas r-GAN generated more dispersed point distributions. Fig. 6 presents qualitative results of our tree-GAN. The tree-GAN generated realistic point clouds for multiple object categories and produced very diverse topologies of point clouds for each class.
Computational cost: In methods using static links for graph convolution, adjacency matrices are typically used for the convolution of vertices. Although these methods are known to produce good results for graph data, prior knowledge regarding connectivity is required. In other methods that use dynamic links for graph convolution, adjacency matrices must instead be constructed from the vertices at every convolution layer to derive connectivity information. For example, let $L$, $B$, and $n_l$ denote the number of layers, the batch size, and the vertex count of the output graph at the $l$-th layer, respectively. The dynamic-link methods described above require additional computations to utilize connectivity information, with time and memory resources on the order of $B \sum_{l=1}^{L} n_l^2$. In contrast, our TreeGCN does not require prior connectivity information, as static-link methods do, and does not require additional adjacency computation, as dynamic-link methods do; it maintains only ancestor lists, whose cost grows linearly with the number of vertices per layer. Therefore, our network uses time and memory resources much more efficiently.
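The gap between the two regimes can be made concrete with a toy count; the layer sizes and unit constants below are illustrative, not measurements.

```python
def dynamic_graph_cost(batch, verts_per_layer):
    """Adjacency construction cost for dynamic-link graph convolutions:
    quadratic in the vertex count at every layer (unit constants omitted)."""
    return batch * sum(n * n for n in verts_per_layer)

def tree_gcn_cost(batch, verts_per_layer):
    """TreeGCN bookkeeping: only an ancestor list per vertex, i.e., linear
    in the vertex count at every layer (unit constants omitted)."""
    return batch * sum(verts_per_layer)

# Example vertex counts per layer for a branching generator (illustrative).
layers = [1, 2, 8, 64, 512, 2048]
print(dynamic_graph_cost(16, layers), tree_gcn_cost(16, layers))
```

Even for these modest sizes, the quadratic adjacency cost dominates by several orders of magnitude, which is why dynamic-link methods become intractable for multi-batch, multi-layer networks.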
In this paper, we proposed a generative adversarial network called tree-GAN that can generate 3D point clouds in an unsupervised manner. The proposed generator for tree-GAN, called the TreeGCN, performs graph convolutions based on tree structures. The TreeGCN utilizes ancestor information from a tree and employs multiple supports to represent 3D point clouds. Thus, the proposed tree-GAN outperforms other GAN-based point cloud generation methods in terms of accuracy and computational efficiency. Through various experiments, we demonstrated that the tree-GAN can generate semantic parts of objects without any prior knowledge and can represent 3D point clouds in latent spaces via interpolation.
-  P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas. Learning representations and generative models for 3D point clouds. In ICLR, 2018.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
-  J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.
-  X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun. 3D object proposals for accurate object class detection. In NIPS, 2015.
-  Z. Cheng, C. Yuan, J. Li, and H. Yang. TreeNet: Learning sentence representations with unconstrained tree structure. In IJCAI, 2018.
-  A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, 2017.
-  S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. In STOC, 2008.
-  M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.
-  D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NIPS, 2015.
-  K. Ehsani, R. Mottaghi, and A. Farhadi. SeGAN: Segmenting and generating the invisible. In CVPR, 2018.
-  H. Fan, H. Su, and L. Guibas. A point set generation network for 3D object reconstruction from a single image. In CVPR, 2017.
-  M. Gadelha, R. Wang, and S. Maji. Multiresolution tree networks for 3d point cloud processing. In ECCV, 2018.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein GANs. In NIPS, 2017.
-  T. Hackel, J. D. Wegner, and K. Schindler. Contour detection in unstructured 3d point clouds. In CVPR, 2016.
-  M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
-  M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, 2017.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial nets. In CVPR, 2017.
-  L. Jiang, S. Shi, X. Qi, and J. Jia. GAL: Geometric adversarial loss for single-view 3D-object reconstruction. In ECCV, 2018.
-  B.-S. Kim, P. Kohli, and S. Savarese. 3D scene understanding by voxel-CRF. In CVPR, 2013.
-  J. Kim, Y. Park, G. Kim, and S. J. Hwang. Splitnet: Learning to semantically split deep networks for parameter reduction and model parallelization. In ICML, 2017.
-  T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. ICLR, 2017.
-  C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
-  Y. Li, S. Pirk, H. Su, C. R. Qi, and L. J. Guibas. FPNN: Field probing neural networks for 3D data. In NIPS, 2016.
-  Y. Li, D. Tarlow, M. Brockschmidt, and R. S. Zemel. Gated graph sequence neural networks. In ICLR, 2016.
-  J. Lin, Y. Xia, T. Qin, Z. Chen, and T.-Y. Liu. Conditional image-to-image translation. In CVPR, 2018.
-  C. Murdock, Z. Li, H. Zhou, and T. Duerig. Blockout: Dynamic model selection for hierarchical deep networks. In CVPR, 2016.
-  H. Nam, M. Baek, and B. Han. Modeling and propagating CNNs in a tree structure for visual tracking. arXiv preprint arXiv:1608.07242, 2016.
-  C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, 2017.
-  C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017.
-  S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
-  D. Roy, P. Panda, and K. Roy. Tree-CNN: A deep convolutional neural network for lifelong learning. arXiv preprint arXiv:1802.05800, 2018.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and X. Chen. Improved techniques for training GANs. In NIPS. 2016.
-  R. Socher, B. Huval, B. Bath, C. D. Manning, and A. Y. Ng. Convolutional-recursive deep learning for 3D object classification. In NIPS, 2012.
-  S. Song and J. Xiao. Deep sliding shapes for amodal 3D object detection in RGB-D images. In CVPR, 2016.
-  Y. Song, C. Ma, X. Wu, L. Gong, L. Bao, W. Zuo, C. Shen, R. Lau, and M.-H. Yang. VITAL: Visual tracking via adversarial learning. In CVPR, 2018.
-  S. C. Stein, M. Schoeler, J. Papon, and F. Worgotter. Object partitioning using local convexity. In CVPR, 2014.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
-  D. Valsesia, G. Fracastoro, and E. Magli. Learning localized generative models for 3D point clouds via graph convolution. In ICLR, 2019.
-  N. Wang, Y. Zhang, Z. Li, Y. Fu, W. Liu, and Y.-G. Jiang. Pixel2Mesh: Generating 3d mesh models from single RGB images. In ECCV, 2018.
-  X. Wang and A. Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
-  X. Wang, A. Shrivastava, and A. Gupta. A-Fast-RCNN: Hard positive generation via adversary for object detection. In CVPR, 2017.
-  Y. Wang, R. Ji, and S.-F. Chang. Label propagation from ImageNet to 3D point clouds. In CVPR, 2013.
-  Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph CNN for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.
-  J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS, 2016.
-  Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In CVPR, 2015.
-  Y. Yang, C. Feng, Y. Shen, and D. Tian. FoldingNet: Point cloud auto-encoder via deep grid deformation. In CVPR, 2018.
-  J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Generative image inpainting with contextual attention. In CVPR, 2018.
-  Z. Zhang, L. Yang, and Y. Zheng. Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network. In CVPR, 2018.
-  Y. Zhou and O. Tuzel. VoxelNet: End-to-end learning for point cloud based 3D object detection. In CVPR, 2018.