1 Introduction
Recently, 3D data generation problems based on deep neural networks have attracted significant research interest and have been addressed through various approaches, including image-to-point cloud [11, 19], image-to-voxel [45], image-to-mesh [40], point cloud-to-voxel [6, 50], and point cloud-to-point cloud [47]. The generated 3D data has been used to achieve outstanding performance in a wide range of computer vision applications (e.g., segmentation [30, 37, 44], volumetric shape representation [46], object detection [4, 24, 35], contour detection [15], classification [29, 34], and scene understanding [20, 43]). However, little effort has been devoted to the development of generative adversarial networks (GANs) that can generate 3D point clouds in an unsupervised manner. To the best of our knowledge, the only works on GANs for transforming random latent codes (i.e., vectors) into 3D point clouds are [1] and [39]. The method in [1] generates point clouds using only fully connected layers. The method in [39] exploits local topology by using nearest neighbor techniques to produce geometrically accurate point clouds. However, it suffers from high computational complexity as the number of dynamic graph updates increases. Additionally, it can only generate a limited number of object categories (e.g., chair, airplane, and sofa) using point clouds.
In this paper, we present a novel method called treeGAN that can generate 3D point clouds from random latent codes in an unsupervised manner. It can also generate multi-class 3D point clouds without training on each class separately (unlike [39]). To achieve state-of-the-art performance in terms of both accuracy and computational efficiency, we propose a novel tree-structured graph convolution network (TreeGCN) as the generator for treeGAN. The proposed TreeGCN preserves the ancestor information of each point and utilizes this information to extract new points via graph convolutions. A branching process and a loop term with supports in the TreeGCN further enhance the representation power of points. These two properties enable the TreeGCN to produce more accurate point clouds and express more diverse object categories. Additionally, we demonstrate that using the ancestors of features in the TreeGCN is computationally more efficient than using the neighbors of features in traditional GCNs. The figure on the first page shows the effectiveness of our treeGAN.
The main contributions of this paper are threefold.
We present the novel treeGAN method, which is a deep generative model that can generate multiclass 3D point clouds in unsupervised settings (Section 3).
We introduce the TreeGCN-based generator. The performance of traditional GCNs can be improved significantly by adopting the proposed tree structures for graph convolutions. Based on the proposed tree structures, treeGAN can generate parts of objects by selecting particular ancestors (Section 4).
We mathematically interpret the TreeGCN and highlight its desirable properties (Section 5).
2 Related Work
Graph Convolutional Networks: Over the past few years, a number of works have focused on the generalization of deep neural networks to graph problems [3, 9, 16, 25]. Defferrard et al. [8] proposed fast localized convolutional filters for graph classification problems. Using these filters, they significantly accelerated the spectral decomposition process, which was one of the main computational bottlenecks in traditional graph convolution problems with large datasets. Kipf and Welling [22] introduced scalable GCNs based on first-order approximations of spectral graph convolutions for semi-supervised classification, in which convolution filters only use information from neighboring vertices instead of information from the entire network.
Because the aforementioned GCNs were originally designed for classification problems, the connectivity of graphs was assumed to be given as prior knowledge. However, this setting is not appropriate for dynamic model generation problems. For example, in unsupervised settings for 3D point cloud generation, the topologies of 3D point clouds are non-deterministic. Even for the same class (e.g., chairs), 3D point clouds can be represented by various topologies. To represent the diverse topologies of 3D point clouds, our TreeGCN utilizes no prior knowledge regarding object models.
GANs for 3D Point Cloud Generation: GANs [13] for 2D image generation tasks have been widely studied with great success [10, 18, 23, 26, 31, 36, 41, 42, 48, 49], but GANs for 3D point cloud generation have rarely been studied in the computer vision field. Recently, Achlioptas et al. [1] proposed a GAN for 3D point clouds called r-GAN, whose generator is based on fully connected layers. Because fully connected layers cannot maintain structural information, r-GAN has difficulty generating realistic shapes with diversity. Valsesia et al. [39] used graph convolutions in the generators of GANs. At each graph convolution layer during training, adjacency matrices were dynamically constructed using the feature vectors from each vertex. Unlike traditional graph convolutions, the connectivity of the graph was not assumed to be given as prior knowledge. However, to extract the connectivity of the graph, computing the adjacency matrix at a single layer incurs $O(n^2)$ computational complexity, where $n$ denotes the number of vertices. Therefore, this approach is intractable for multi-batch and multi-layer networks.
Similar to the method in [39], our treeGAN requires no prior knowledge regarding the connectivity of a graph. However, unlike the method in [39], the treeGAN is computationally efficient because it does not construct adjacency matrices. Instead, the treeGAN uses ancestor information from the tree to exploit the connectivity of the graph, for which only a list representation of the tree structure is needed.
Tree-structured Deep Networks: There have been several attempts to represent convolutional neural networks or long short-term memory networks using tree structures [5, 21, 27, 28, 32]. However, to the best of our knowledge, no previous methods have used tree structures for either graph convolutions or GANs. For example, Gadelha et al. [12] used tree-structured networks to generate 3D point clouds via a variational autoencoder (VAE). However, this method required the assumption that inputs are 1D-ordered lists of points obtained by space-partitioning algorithms such as the k-d tree and random projection tree [7]. Thus, it required additional preprocessing steps for valid implementations. Because its network only comprised 1D convolution layers, the method could not extract meaningful information from unordered 3D point clouds. In contrast, the proposed treeGAN can not only deal with unordered points, but can also extract semantic parts of objects.
3 3D Point Cloud GAN
Fig. 1 presents the pipeline of the proposed treeGAN. To generate 3D point clouds from a latent code $z$, we utilize the objective function introduced in Wasserstein GAN [2]. The loss function of the generator, $\mathcal{L}_G$, is defined as

$\mathcal{L}_G = -\mathbb{E}_{z \sim Z}\big[D(G(z))\big]$,   (1)

where $G$ and $D$ denote the generator and discriminator, respectively, and $Z$ represents the latent code distribution. We design $Z$ as a normal distribution, $\mathcal{N}(0, I)$. The loss function of the discriminator, $\mathcal{L}_D$, is defined as

$\mathcal{L}_D = \mathbb{E}_{\tilde{x}}\big[D(\tilde{x})\big] - \mathbb{E}_{x \sim R}\big[D(x)\big] + \lambda_{gp}\,\mathbb{E}_{\hat{x}}\big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\big]$,   (2)

where $\hat{x}$ are sampled from line segments between real and fake point clouds, $\tilde{x} = G(z)$ and $x$ denote generated and real point clouds, respectively, and $R$ represents the real data distribution. In (2), we use a gradient penalty to satisfy the Lipschitz condition [14], where $\lambda_{gp}$ is a weighting parameter.
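To make the two objectives concrete, the following minimal NumPy sketch evaluates (1) and (2) for a toy linear critic, for which the gradient in the penalty term is known in closed form. The critic, data shapes, and $\lambda_{gp} = 10$ here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic D(x) = <w, x>; its gradient w.r.t. x is w."""
    return x @ w

def generator_loss(fake, w):
    """Eq. (1): L_G = -E_z[D(G(z))]."""
    return -critic(fake, w).mean()

def discriminator_loss(real, fake, w, lam=10.0):
    """Eq. (2): WGAN critic loss with a gradient penalty on interpolates."""
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake   # points on line segments
    # For a linear critic, grad_x D(x_hat) = w at every sample x_hat.
    gp = (np.linalg.norm(w) - 1.0) ** 2
    return critic(fake, w).mean() - critic(real, w).mean() + lam * gp

w = rng.normal(size=3)
real = rng.normal(size=(8, 3))   # stand-ins for real/generated point clouds
fake = rng.normal(size=(8, 3))
l_g = generator_loss(fake, w)
l_d = discriminator_loss(real, fake, w)
```

In an actual implementation the critic is a network and the penalty gradient is obtained by automatic differentiation at the interpolated samples $\hat{x}$; the closed-form gradient here is only for readability.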
4 Proposed TreeGCN
To implement $G$ in (1), we consider multi-layer graph convolutions with first-order approximations of the Chebyshev expansion introduced by [22] as follows:

$p_i^{l+1} = \sigma\big(W^l p_i^l + \sum_{p_j \in N(p_i)} U^l p_j^l + b^l\big)$,   (3)

where $\sigma$ is the activation unit, $p_i^l$ is the $i$-th node in the graph (i.e., the 3D coordinate of a point) at the $l$-th layer, $p_j^l$ is the $j$-th neighbor of $p_i^l$, and $N(p_i)$ is the set of all neighbors of $p_i$. Then, $\tilde{x}$ and $x$ in (2) can be represented by the point sets $\{\tilde{p}_i^L\}_{i=1}^{n}$ and $\{p_i\}_{i=1}^{n}$, respectively, where $L$ is the final layer and $n$ is the number of points at layer $L$.
During training, GCNs find the best weights $W^l$ and $U^l$ and the best bias $b^l$ at each layer, then generate 3D coordinates for point clouds by using these parameters to ensure similarity to real point clouds. The first and second terms in (3) are called the loop and neighbors terms, respectively.
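As a reference point for the TreeGCN that follows, one first-order graph convolution layer of the form (3) can be sketched in NumPy as below; ReLU as $\sigma$, the random weights, and the chain-graph neighbor lists are illustrative assumptions:

```python
import numpy as np

def gcn_layer(P, neighbors, W, U, b):
    """One first-order graph convolution as in Eq. (3):
    p_i' = sigma(W p_i + sum_{j in N(i)} U p_j + b), with sigma = ReLU.
    P: (n, d) vertex features; neighbors: list of neighbor-index lists."""
    out = P @ W.T + b                 # loop term for every vertex
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:                # neighbors term
            out[i] += U @ P[j]
    return np.maximum(out, 0.0)       # ReLU

rng = np.random.default_rng(1)
n, d = 4, 3
P = rng.normal(size=(n, d))
neighbors = [[1], [0, 2], [1, 3], [2]]   # a simple chain graph (assumed)
W, U, b = rng.normal(size=(d, d)), rng.normal(size=(d, d)), np.zeros(d)
P_next = gcn_layer(P, neighbors, W, U, b)
```

Note that this layer needs the neighbor lists as prior knowledge, which is exactly the assumption the TreeGCN below removes.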
To enhance a conventional GCN such as that used in [22], we propose a novel GCN augmented with tree structures (i.e., TreeGCN). The proposed TreeGCN introduces a tree structure for hierarchical GCNs by passing information from ancestors to descendants of vertices. The main unique characteristic of the TreeGCN is that each vertex updates its value by referring to the values of its ancestors in the tree instead of those of its neighbors. Traditional GCNs, such as that defined in (3), can be considered as methods that only refer to neighbors at a single depth. The proposed graph convolution is defined as

$p_i^{l+1} = \sigma\big(F_K^l(p_i^l) + \sum_{q_j \in A(p_i)} U_j^l\, q_j + b^l\big)$,   (4)

where there are two major differences compared to (3). One is an improved conventional loop term using a sub-network $F_K^l$, in which the next point is generated with $K$ supports from $p_i^l$. We call this term a loop with supports, as explained in Section 4.1. The other difference is the consideration of values from all ancestors in the tree to update the value of the current point, where $A(p_i)$ denotes the set of all ancestors of $p_i$. We call this term the ancestor term, as explained in Section 4.1.
4.1 Advanced Graph Convolution
The proposed treestructured graph convolution (i.e., GraphConv in Fig.1) aims to modify the coordinates of points using the loop with supports and ancestor terms.
Loop term with supports: The goal of the new loop term in (4) is to propose the next point based on $K$ supports instead of using only the single parameter $W^l$ in (3) as follows:

$F_K^l(p_i^l) = W^l\, S_K^l(p_i^l)$,   (5)

where $S_K^l$ is a fully connected layer containing $K$ nodes and $W^l$ projects the resulting supports back to the point feature space. A conventional GCN using first-order approximations adopts a single parameter in its loop term to generate the next point from the current point. However, for large graphs, the representation capacity of a single parameter is insufficient for describing a complex point distribution. Therefore, our loop term utilizes $K$ supports to represent a more complex distribution of points, as illustrated in Fig. 2.
Ancestor term: For graph convolution, knowing the connectivity of a graph is very important because this information allows a GCN to propagate useful information from a vertex to other connected vertices. However, in our point cloud generation setting, it is impossible to use prior knowledge regarding connectivity because we must be able to generate diverse topologies of point clouds, even for the same object category. Therefore, the dynamic 3D point generation problem cannot be addressed using traditional GCNs because such networks assume that the connectivity of a graph is given. As a replacement for the neighbors term in (3), we define the ancestor term in (4) as follows:

$\sum_{q_j \in A(p_i)} U_j^l\, q_j$,   (6)

where $A(p_i)$ denotes the set of all ancestors of $p_i$. This term combines all information from the ancestors through the linear mappings $U_j^l$. Because each ancestor belongs to a different feature space at a different layer, our ancestor term can fuse all information from previous layers and different feature spaces. To generate the next point, the current point refers to its ancestors in various feature spaces to find the best mapping to combine ancestor information effectively. By using this new ancestor term, our treeGAN obtains several desirable mathematical properties, as explained in Section 5. Fig. 3 illustrates the graph convolution process with the ancestor term.
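Putting the two terms together, one TreeGCN update of the form (4) can be sketched as follows. The two-layer realization of the loop with supports (5) and the per-ancestor mappings are assumptions modeled on the description above, not the authors' exact implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def loop_with_supports(p, W1, W2):
    """Eq. (5) sketch: expand the point into K supports with one fully
    connected layer (W1), then project back to the feature space (W2)."""
    return W2 @ relu(W1 @ p)

def tree_gcn_layer(p, ancestors, W1, W2, U_list, b):
    """Eq. (4) sketch: loop with supports plus the ancestor term; each
    ancestor q_j (from a shallower layer) gets its own mapping U_j."""
    out = loop_with_supports(p, W1, W2) + b
    for U_j, q_j in zip(U_list, ancestors):
        out += U_j @ q_j
    return relu(out)

rng = np.random.default_rng(2)
d, K = 3, 10                       # feature dim and number of supports (assumed)
p = rng.normal(size=d)             # current point
ancestors = [rng.normal(size=d) for _ in range(2)]  # e.g., root and parent
W1 = rng.normal(size=(K * d, d))   # expansion to K supports
W2 = rng.normal(size=(d, K * d))   # projection back to the feature space
U_list = [rng.normal(size=(d, d)) for _ in ancestors]
b = np.zeros(d)
p_next = tree_gcn_layer(p, ancestors, W1, W2, U_list, b)
```

Only the list of each point's ancestors is needed, so no adjacency matrix is ever built.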
4.2 Branching
Branching is a procedure for increasing the total number of points and is similar to up-sampling in 2D convolution. In branching, a matrix $V^l$ transforms a single point into $d_l$ child points, where $d_l > 1$. Therefore,

$p_{i,k}^{l+1} = \big(V_k^l\big)^{\top} p_i^l, \quad k = 1, \dots, d_l$,   (7)

where $V_k^l$ denotes the $k$-th column block of matrix $V^l$. Then, the total number of points at the $(l+1)$-th layer is $n_{l+1} = d_l \cdot n_l$, where $n_l$ is the number of points at the $l$-th layer. In our experiments, we use different branching degrees $d_l$ for different layers. Note that the number of points in the final layer is $n_L = \prod_{l} d_l$. Fig. 4 presents an example of branching.
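A minimal sketch of branching, assuming the branching matrix is applied as one linear map whose output is reshaped into $d_l$ children per point (an assumed realization consistent with (7)):

```python
import numpy as np

def branch(P, V, degree):
    """Branching sketch, Eq. (7): each point spawns `degree` children via
    one linear map V of shape (d, degree * d), then a reshape."""
    n, d = P.shape
    children = P @ V                   # (n, degree * d)
    return children.reshape(n * degree, d)

rng = np.random.default_rng(3)
n, d, degree = 5, 3, 2                 # illustrative sizes
P = rng.normal(size=(n, d))
V = rng.normal(size=(d, degree * d))   # column blocks V_1, ..., V_degree
P_children = branch(P, V, degree)      # (n * degree, d)
```

Child $k$ of point $i$ is exactly $p_i^l$ multiplied by the $k$-th column block of $V$, so the layer grows the point count by a factor of `degree`.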
5 Mathematical Properties
In this section, we mathematically analyze the geometric relationships between generated points and demonstrate how these relationships are formulated in the output Euclidean space via treestructured graph convolutions.
Proposition 1.
Let $p_j$ and $p_k$ in (4) be generated points that share the same parents and different parents, respectively, with $p_i$ in the final layer $L$. Let $l_i^l$ in (5) be the loop of point $p_i$ at the $l$-th layer. Hereafter, we omit the superscripts of $p$ and $l$ if the superscript indicates the final layer $L$. Then,

$p_i - p_j = l_i - l_j$   (8)

and

$p_i - p_k = \sum_{l=1}^{L-1} U^l \big(a_i^l - a_k^l\big) + \big(l_i - l_k\big)$,   (9)

where $a_i^l$ and $a_k^l$ are the ancestors of points $p_i$ and $p_k$ at the $l$-th layer. For simplicity, we ignore the branching process and the bias term in (4).
Based on Proposition 1, we can prove that the following two statements are true:
The geometric distance between two points is determined by the number of shared ancestors. If two points $p_i$ and $p_k$ have different ancestors, then the geometric distance between these points is calculated as the sum of the differences between their ancestors in each layer and the difference between their loops, as shown in (9). Thus, as their ancestors become increasingly different, the geometric distance between the two points increases.
Geometrically related points share the same ancestors. If two points $p_i$ and $p_j$ share the same ancestors, the geometric distance between these points is affected only by their loops, as shown in (8). Thus, the geometric distance between points with the same ancestors in (8) tends to be smaller than that between points with different ancestors in (9).
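The two statements can be checked numerically on a linearized model (nonlinearity, bias, and branching ignored, as in Proposition 1); the dimensions, random weights, and ancestor chains below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
d, L = 3, 4
U = [rng.normal(size=(d, d)) for _ in range(L - 1)]  # shared ancestor maps

def final_point(ancestors, loop):
    """Linearized output point: mapped ancestors summed over layers,
    plus the point's own loop term (cf. Eqs. (8)-(9))."""
    return sum(U_l @ a_l for U_l, a_l in zip(U, ancestors)) + loop

anc_shared = [rng.normal(size=d) for _ in range(L - 1)]
anc_other = [rng.normal(size=d) for _ in range(L - 1)]
loop_i, loop_j, loop_k = (rng.normal(size=d) for _ in range(3))

p_i = final_point(anc_shared, loop_i)  # p_i and p_j share all ancestors
p_j = final_point(anc_shared, loop_j)
p_k = final_point(anc_other, loop_k)   # p_k has different ancestors
# Eq. (8): p_i - p_j reduces to the loop difference alone.
# Eq. (9): p_i - p_k additionally picks up the mapped ancestor differences.
```

Under this linearization, the ancestor contributions cancel exactly for points with identical ancestor chains, which is the content of (8).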
6 Fréchet Point Cloud Distance
For quantitative comparisons between GANs, we require evaluation metrics that can accurately measure the quality of the 3D point clouds generated by GANs. For 2D data generation problems, the Fréchet inception distance (FID) [17] is the most common metric. FID adopts a pre-trained Inception V3 model [38] to utilize its feature space for evaluation. Although the conventional metrics proposed by Achlioptas et al. [1] (e.g., MMD or CD) can be used to evaluate the quality of generated points by directly measuring matching distances between real and generated point clouds, they can be considered suboptimal metrics because the goal of a GAN is not to generate similar samples but to generate synthetic probability measures that are as close as possible to real probability measures. This perspective has been explored in unsupervised 2D image generation tasks using GANs [17, 33]. Therefore, we propose a novel evaluation metric for generated 3D point clouds called the Fréchet point cloud distance (FPD). Similar to FID, the proposed FPD calculates the 2-Wasserstein distance between real and fake Gaussian measures in the feature space extracted by PointNet [29] as follows:

$\mathrm{FPD} = \lVert m_r - m_g \rVert_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2\big(\Sigma_r \Sigma_g\big)^{1/2}\big)$,   (10)

where $m_r$ and $\Sigma_r$ are the mean vector and covariance matrix of the features calculated from real point clouds, respectively, and $m_g$ and $\Sigma_g$ are the mean vector and covariance matrix calculated from generated point clouds, respectively. In (10), $\mathrm{Tr}(A)$ is the sum of the elements along the main diagonal of matrix $A$. In this paper, for evaluation purposes, we use both the conventional evaluation metrics [1] and the proposed FPD.
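A sketch of computing (10) from feature statistics; the random features below stand in for PointNet activations, and the trace term uses the symmetric equivalent $\mathrm{Tr}\big((\Sigma_r^{1/2} \Sigma_g \Sigma_r^{1/2})^{1/2}\big)$ so that a plain eigendecomposition suffices:

```python
import numpy as np

def sqrtm_psd(A):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def fpd(m_r, S_r, m_g, S_g):
    """Eq. (10): squared 2-Wasserstein distance between two Gaussians."""
    sr = sqrtm_psd(S_r)
    covmean = sqrtm_psd(sr @ S_g @ sr)   # symmetric form of (S_r S_g)^(1/2)
    diff = m_r - m_g
    return float(diff @ diff + np.trace(S_r + S_g - 2.0 * covmean))

rng = np.random.default_rng(5)
feat_r = rng.normal(size=(200, 8))        # stand-in for real-feature batch
feat_g = rng.normal(size=(200, 8)) + 0.5  # stand-in for generated features
m_r, S_r = feat_r.mean(0), np.cov(feat_r, rowvar=False)
m_g, S_g = feat_g.mean(0), np.cov(feat_g, rowvar=False)
score = fpd(m_r, S_r, m_g, S_g)
```

As with FID, identical feature statistics yield a distance of zero, and the score grows as the two Gaussian fits drift apart.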
7 Experimental Results
Implementation details: We used the Adam optimizer for both the generator and discriminator networks. In the generator, we used LeakyReLU as the nonlinearity without batch normalization. The network architecture of the discriminator was the same as that of r-GAN [1]. The gradient penalty coefficient $\lambda_{gp}$ was set to 10, and the discriminator was updated five times per iteration, while the generator was updated once per iteration. As shown in Fig. 1, a latent vector $z$ was sampled from a normal distribution $\mathcal{N}(0, I)$ to act as an input. Seven layers were used for the TreeGCN. The loop term of the TreeGCN in (5) had $K = 10$ supports. The total number of points in the final layer was set to 2048.
Comparison: There are only two conventional GANs for 3D point cloud generation: r-GAN [1] and the GAN proposed by Valsesia et al. [39]. Thus, the proposed treeGAN was compared to these two GANs. While the conventional GANs in [1, 39] train separate networks for each class, our treeGAN trains only a single network for multiple classes of objects.
Evaluation metrics: We evaluated the treeGAN using ShapeNet (https://www.shapenet.org/), a large-scale dataset of 3D shapes. Evaluations were conducted in terms of the proposed FPD (Section 6) and the metrics used by Achlioptas et al. [1]. As a reference model for FPD, we used the classification module of PointNet [29] because it can handle partial inputs of objects. This property is suitable for FPD because generated point clouds gradually form shapes, meaning point clouds can be partially complete during training. For the implementation of FPD, we first trained the classification module and then extracted a feature vector from the output of its dense layers to calculate the mean and covariance in (10).
Table 1: Quantitative comparisons in terms of the metrics used by Achlioptas et al. [1] (JSD, MMD-CD, MMD-EMD, COV-CD, and COV-EMD).

Class            | Model                    | JSD   | MMD-CD | MMD-EMD | COV-CD | COV-EMD
-----------------|--------------------------|-------|--------|---------|--------|--------
Chair            | r-GAN (dense)            | 0.238 | 0.0029 | 0.136   | 33     | 13
Chair            | r-GAN (conv)             | 0.517 | 0.0030 | 0.223   | 23     | 4
Chair            | Valsesia et al. (no up.) | 0.119 | 0.0033 | 0.104   | 26     | 20
Chair            | Valsesia et al. (up.)    | 0.100 | 0.0029 | 0.097   | 30     | 26
Chair            | treeGAN (Ours)           | 0.119 | 0.0016 | 0.101   | 58     | 30
Airplane         | r-GAN (dense)            | 0.182 | 0.0009 | 0.094   | 31     | 9
Airplane         | r-GAN (conv)             | 0.350 | 0.0008 | 0.101   | 26     | 7
Airplane         | Valsesia et al. (no up.) | 0.164 | 0.0010 | 0.102   | 24     | 13
Airplane         | Valsesia et al. (up.)    | 0.083 | 0.0008 | 0.071   | 31     | 14
Airplane         | treeGAN (Ours)           | 0.097 | 0.0004 | 0.068   | 61     | 20
All (16 classes) | r-GAN (dense)            | 0.171 | 0.0021 | 0.155   | 58     | 29
All (16 classes) | treeGAN (Ours)           | 0.105 | 0.0018 | 0.107   | 66     | 39
7.1 Ablation Study
We analyze the proposed treeGAN and examine its useful properties, namely unsupervised semantic part generation, latent space representation via interpolation, and branching.
Unsupervised semantic part generation: Our treeGAN can generate point clouds for different semantic parts, even with no prior knowledge regarding those parts during training. The treeGAN can perform this semantic generation owing to its tree-structured graph convolution, which is a unique characteristic among GAN-based 3D point cloud methods. As stated in Proposition 1, the geometric distance between points is determined by their ancestors in the tree. Different ancestors imply geometrically different families of points. Therefore, by selecting different ancestors, our treeGAN can generate semantically different parts of point clouds. Note that these geometric families of points are consistent across different latent code inputs. For example, let $z_1$ and $z_2$ be sampled latent codes, and let $G(z_1)$ and $G(z_2)$ be their corresponding generated point clouds. Let $I$ be a certain subset of point indices. Then, $G(z)_I$ denotes the subset of $G(z)$ selected by the indices in $I$. As shown in Fig. 5, selecting the same subset of indices from different point clouds results in the same semantic parts, even though the latent code inputs are different. For example, all red points (e.g., in $G(z_1)_{I_1}$ and $G(z_2)_{I_1}$) represent the cockpits of airplanes, while all blue points (e.g., in $G(z_1)_{I_2}$ and $G(z_2)_{I_2}$) represent the tails of airplanes.
From this ablation study, we can verify that the differences between ancestors determine the semantic differences between points and that two points with the same ancestor (e.g., green and purple points in the left wings in Fig. 5) maintain their relative distances for different latent codes.
Interpolation: We interpolated 3D point clouds by setting the input latent code to $z = \alpha z_1 + (1 - \alpha) z_2$ for six values of $\alpha$. The leftmost and rightmost point clouds of the airplanes in Fig. 5 were generated by $z_1$ and $z_2$, respectively. Our treeGAN can generate realistic interpolations between two point clouds.
Branching strategy: We conducted experiments to show that the convergence dynamics of the proposed metric are not sensitive to different branching strategies. As in the other experiments, the total number of generated points was 2048, but different branching degrees were used. Please refer to the convergence graphs in the supplementary materials.
7.2 Comparisons with Other GANs
The proposed treeGAN was quantitatively and qualitatively compared to other state-of-the-art GANs for point cloud generation in terms of both accuracy and computational efficiency. The supplementary materials contain more results and comparisons for 3D point cloud generation.
Comparisons: Tables 1 and 2 contain quantitative comparisons in terms of the metrics used by Achlioptas et al. [1] (i.e., JSD, MMD-CD, MMD-EMD, COV-CD, and COV-EMD) and the proposed FPD, respectively. The proposed treeGAN consistently outperforms the other GANs by a large margin in terms of all metrics, demonstrating the effectiveness of the proposed TreeGCN.
For qualitative comparisons, we equally divided the entire index set into four subsets and painted the points in each subset with the same color. Although the real 3D point clouds were unordered, as shown in the figure on the first page and in Fig. 6, our treeGAN successfully generated 3D point clouds with intuitive semantic meaning without any prior knowledge, whereas r-GAN failed to generate semantically ordered point clouds. Additionally, our treeGAN could generate detailed and complex parts of objects, whereas r-GAN generated more dispersed point distributions. Fig. 6 presents qualitative results of our treeGAN. The treeGAN generated realistic point clouds for multiple object categories and produced very diverse topologies of point clouds for each class.
Computational cost: In methods using static links for graph convolution, adjacency matrices are typically used for the convolution of vertices. Although these methods are known to produce good results for graph data, prior knowledge regarding connectivity is required. In other methods that use dynamic links for graph convolution, adjacency matrices must be constructed from the vertices to derive connectivity information at every convolution layer instead of using prior knowledge. For example, let $L$, $B$, and $n_l$ denote the number of layers, the batch size, and the number of vertices of the output graph at the $l$-th layer, respectively. The methods described above require additional computations to utilize connectivity information; these computations require time and memory resources on the order of $O\big(B \sum_{l=1}^{L} n_l^2\big)$. In contrast, our TreeGCN does not require any prior connectivity information, unlike static link methods, and does not require this additional computation, unlike dynamic link methods. Therefore, our network uses time and memory resources much more efficiently, requiring only resources linear in the number of vertices to maintain the tree structure.
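A back-of-the-envelope comparison of the two cost regimes under assumed layer sizes and batch size (all numbers here are hypothetical, chosen only to illustrate the quadratic-versus-linear gap):

```python
# Dynamic-link methods build an n_l x n_l adjacency matrix at every layer,
# while a tree-based layer only touches each vertex's ancestor list, whose
# length is bounded by the layer depth.
layer_sizes = [1, 2, 4, 8, 16, 32, 2048]   # vertices per layer (example)
batch = 16                                  # batch size (example)

# O(B * sum_l n_l^2): per-layer adjacency construction.
adjacency_cost = batch * sum(n * n for n in layer_sizes)

# O(B * sum_l n_l * depth): ancestor lookups per vertex.
ancestor_cost = batch * sum(n * (l + 1) for l, n in enumerate(layer_sizes))
```

With a final layer of a few thousand points, the quadratic adjacency term dominates by orders of magnitude, which is why the adjacency-free TreeGCN scales to multi-batch, multi-layer training.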
8 Conclusion
In this paper, we proposed a generative adversarial network called treeGAN that can generate 3D point clouds in an unsupervised manner. The proposed generator for treeGAN, called TreeGCN, performs graph convolutions based on tree structures. The TreeGCN utilizes ancestor information from a tree and employs multiple supports to represent 3D point clouds. Thus, the proposed treeGAN outperforms other GAN-based point cloud generation methods in terms of accuracy and computational efficiency. Through various experiments, we demonstrated that the treeGAN can generate semantic parts of objects without any prior knowledge and can represent 3D point clouds in latent spaces via interpolation.
References
[1] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas. Learning representations and generative models for 3D point clouds. In ICLR, 2018.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[3] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.
[4] X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun. 3D object proposals for accurate object class detection. In NIPS, 2015.
[5] Z. Cheng, C. Yuan, J. Li, and H. Yang. TreeNet: Learning sentence representations with unconstrained tree structure. In IJCAI, 2018.
[6] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, 2017.
[7] S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. In STOC, 2008.
[8] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.
[9] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NIPS, 2015.
[10] K. Ehsani, R. Mottaghi, and A. Farhadi. SeGAN: Segmenting and generating the invisible. In CVPR, 2018.
[11] H. Fan, H. Su, and L. Guibas. A point set generation network for 3D object reconstruction from a single image. In CVPR, 2017.
[12] M. Gadelha, R. Wang, and S. Maji. Multiresolution tree networks for 3D point cloud processing. In ECCV, 2018.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[14] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. In NIPS, 2017.
[15] T. Hackel, J. D. Wegner, and K. Schindler. Contour detection in unstructured 3D point clouds. In CVPR, 2016.
[16] M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
[17] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, 2017.
[18] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial nets. In CVPR, 2017.
[19] L. Jiang, S. Shi, X. Qi, and J. Jia. GAL: Geometric adversarial loss for single-view 3D object reconstruction. In ECCV, 2018.
[20] B.-S. Kim, P. Kohli, and S. Savarese. 3D scene understanding by voxel-CRF. In CVPR, 2013.
[21] J. Kim, Y. Park, G. Kim, and S. J. Hwang. SplitNet: Learning to semantically split deep networks for parameter reduction and model parallelization. In ICML, 2017.
[22] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
[23] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
[24] Y. Li, S. Pirk, H. Su, C. R. Qi, and L. J. Guibas. FPNN: Field probing neural networks for 3D data. In NIPS, 2016.
[25] Y. Li, D. Tarlow, M. Brockschmidt, and R. S. Zemel. Gated graph sequence neural networks. In ICLR, 2016.
[26] J. Lin, Y. Xia, T. Qin, Z. Chen, and T.-Y. Liu. Conditional image-to-image translation. In CVPR, 2018.
[27] C. Murdock, Z. Li, H. Zhou, and T. Duerig. Blockout: Dynamic model selection for hierarchical deep networks. In CVPR, 2016.
[28] H. Nam, M. Baek, and B. Han. Modeling and propagating CNNs in a tree structure for visual tracking. arXiv preprint arXiv:1608.07242, 2016.
[29] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, 2017.
[30] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017.
[31] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
[32] D. Roy, P. Panda, and K. Roy. Tree-CNN: A deep convolutional neural network for lifelong learning. arXiv preprint arXiv:1802.05800, 2018.
[33] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
[34] R. Socher, B. Huval, B. Bath, C. D. Manning, and A. Y. Ng. Convolutional-recursive deep learning for 3D object classification. In NIPS, 2012.
[35] S. Song and J. Xiao. Deep sliding shapes for amodal 3D object detection in RGB-D images. In CVPR, 2016.
[36] Y. Song, C. Ma, X. Wu, L. Gong, L. Bao, W. Zuo, C. Shen, R. Lau, and M.-H. Yang. VITAL: Visual tracking via adversarial learning. In CVPR, 2018.
[37] S. C. Stein, M. Schoeler, J. Papon, and F. Worgotter. Object partitioning using local convexity. In CVPR, 2014.
[38] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
[39] D. Valsesia, G. Fracastoro, and E. Magli. Learning localized generative models for 3D point clouds via graph convolution. In ICLR, 2019.
[40] N. Wang, Y. Zhang, Z. Li, Y. Fu, W. Liu, and Y.-G. Jiang. Pixel2Mesh: Generating 3D mesh models from single RGB images. In ECCV, 2018.
[41] X. Wang and A. Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
[42] X. Wang, A. Shrivastava, and A. Gupta. A-Fast-RCNN: Hard positive generation via adversary for object detection. In CVPR, 2017.
[43] Y. Wang, R. Ji, and S.-F. Chang. Label propagation from ImageNet to 3D point clouds. In CVPR, 2013.
[44] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph CNN for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.
[45] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS, 2016.
[46] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In CVPR, 2015.
[47] Y. Yang, C. Feng, Y. Shen, and D. Tian. FoldingNet: Point cloud auto-encoder via deep grid deformation. In CVPR, 2018.
[48] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Generative image inpainting with contextual attention. In CVPR, 2018.
[49] Z. Zhang, L. Yang, and Y. Zheng. Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network. In CVPR, 2018.
[50] Y. Zhou and O. Tuzel. VoxelNet: End-to-end learning for point cloud based 3D object detection. In CVPR, 2018.