1 Introduction
In recent years, 3D shape datasets have become increasingly available on the Internet. Consequently, data-driven 3D shape analysis has been an active research topic in computer vision and graphics. Apart from traditional data-driven works such as [8], recent works have attempted to generalize deep neural networks from images to 3D shapes, including [33, 32, 19] for triangular meshes, [25] for point clouds, and [37, 22] for voxel data. In this paper, we concentrate on deep neural networks for triangular meshes. Unlike images, 3D meshes have complex and irregular connectivity. Most existing works keep the mesh connectivity unchanged from layer to layer, and thus cannot enlarge receptive fields by applying pooling operations.
As a generative network, the Variational Auto-Encoder (VAE) [17] has been widely used in various generation tasks, including generation, interpolation and exploration on triangular meshes [32]. The original MeshVAE [32] uses a fully connected network that requires a huge number of parameters, and its generalization ability is often weak. Although fully connected layers allow changes of mesh connectivity, these changes are irregular, so the layers cannot be followed by convolutional operations. Litany et al. [19] use a VAE with graph convolutions for model completion. Gao et al. [11] include a spatial convolutional mesh VAE in their pipeline. However, neither kind of convolution can change the connectivity of the mesh. The work [27] introduces sampling operations into convolutional neural networks (CNNs) on meshes, but their sampling strategy does not aggregate all the local neighborhood information when reducing the number of vertices. Therefore, to handle denser models and enhance the generalization ability of the network, it is necessary to design a pooling operation for meshes, similar to pooling for images, that reduces the number of network parameters. Moreover, the pooling operation should support further convolutions and allow recovery of the original resolution through a corresponding depooling operation.
In this paper we propose a VAE architecture with newly defined pooling operations. Our method uses mesh simplification to form a mesh hierarchy with different levels of detail, and achieves effective pooling by keeping track of the mapping between coarser and finer meshes. To avoid generating highly irregular triangles during simplification, we introduce a modified mesh simplification approach based on the classical algorithm [12]. The input to our network is a vertex-based deformation feature representation [10], which, unlike 3D coordinates, encodes deformations using deformation gradients defined on vertices. Our framework uses a collection of 3D shapes with the same connectivity to train the network. Meshes with consistent connectivity are commonly used for applications such as data-driven deformation [31], deformation transfer [11, 2], human shape generation [32] and shape completion [19], and can be easily obtained through consistent remeshing. We adopt graph convolutions [7] in our network. In all, our network follows a VAE architecture in which pooling operations and graph convolutions are applied. As we will show, our network not only has better generalization capability but can also handle much higher-resolution meshes, benefiting applications such as shape generation, interpolation and embedding.
2 Related Work
Deep Learning for 3D Shapes. Deep learning on 3D shapes has received increasing attention. Boscaini et al. [4, 3] generalize CNNs from the Euclidean to the non-Euclidean domain, which is useful for 3D shape analysis such as establishing correspondences. Bronstein et al. [6] give an overview of CNNs on non-Euclidean domains, including graphs and meshes. Masci et al. [21] propose the first mesh convolutional operations by applying filters to local patches represented in geodesic polar coordinates. Sinha et al. [29] convert 3D shapes to geometry images to obtain a Euclidean parameterization, on which standard CNNs can be applied. Maron et al. [20] parameterize a surface to a planar flat torus to define a natural convolution operator for CNNs on surfaces. Wang et al. [35, 36] propose octree-based convolutions for 3D shape analysis. Unlike local patches, geometry images, planar flat tori, or octree structures, our work performs convolutional operations using vertex features [10] as inputs.
To analyze meshes with the same connectivity but different geometry, the work [32] first introduced the VAE architecture to 3D mesh data and demonstrated its usefulness in various applications. Tan et al. [33] use a convolutional autoencoder to extract localized deformation components from mesh datasets with large-scale deformations. Gao et al. [11] propose a network that combines a convolutional mesh VAE with CycleGAN [39] for automatic unpaired shape deformation transfer. The works [33, 11] apply convolutional operations to meshes in the spatial domain, while [7, 13] extend CNNs to irregular graphs by construction in the spectral domain and show superior performance compared with spatial convolutions. Following [7, 38], our work also performs convolutional operations in the spectral domain.
While pooling operations have been widely used in deep networks for image processing, existing mesh-based VAE methods either do not support pooling [32, 11] or use a simple sampling process [27] that cannot aggregate all the local neighborhood information. In fact, the sampling approach in [27], although also based on a simplification algorithm, directly drops vertices and uses barycentric coordinates within a triangle to recover the lost vertices by interpolation. In contrast, our pooling operation aggregates local information by recording the simplification procedure, and can be directly reversed to obtain an effective depooling operation.
Uniform Sampling or Pooling Methods. For point clouds, PointNet++ [26] proposes a uniform sampling method for point-cloud-based neural networks. Using the same idea, TextureNet [14] conducts uniform sampling on the vertices of a mesh. This kind of sampling destroys the connections between vertices, turning mesh data into a point cloud, and cannot support further graph convolutions. Simplification methods can build a mesh hierarchy and thus can help construct mesh pooling operations. However, most simplification methods, such as [12], are shape-preserving but not uniform. Remeshing operations, such as [5], can build uniformly simplified meshes but lose the correspondence between hierarchy levels. We therefore modify the classical simplification algorithm [12] to simplify meshes more uniformly, and record the correspondences between the coarse and dense meshes for our newly defined mesh pooling and depooling operations.
Deforming Mesh Representations and Applications. A straightforward way to represent 3D meshes is to use the vertex coordinates of a 3D shape. However, vertex coordinates are neither translation invariant nor rotation invariant, making it difficult to learn large-scale deformations. We instead use a recent 3D shape deformation representation [10], which, compared with another widely used representation [9], has the benefit of recording deformations at vertices, making graph convolutions and pooling operations easier to realize.
Shape generation and interpolation are common applications on mesh data. Utilizing the VAE structure, MeshVAE [32] generates deformable mesh shapes, and the method in [27] generates 3D faces with vivid expressions from a latent space. These VAE-based methods [32, 27, 19] can also be used for shape interpolation. Shape interpolation is a well-researched topic; existing mesh interpolation methods can be largely categorized into geometry-based methods (e.g. [15]) and data-driven methods (e.g. [8]). The latter exploit the information hidden in the shape dataset and thus can produce more reasonable and reliable interpolation results. It has been shown that MeshVAE [32] achieves better results than existing data-driven methods [8]. However, [32] cannot deal with shapes that contain too many vertices (e.g. the elephant model from [31]), which restricts the resolution of the generated mesh shapes. Although the method in [27] performs well on face shapes, its reconstruction results on human bodies [34] are not satisfactory. Our work is also based on the VAE architecture, and can therefore naturally be used for shape generation and shape interpolation. Our framework improves over MeshVAE and has better generalization ability, as we will demonstrate later.
3 Feature Representation
To represent the shapes used in our network, as previously discussed, we utilize a recent deformation representation [10]. We assume that shapes in the dataset have the same mesh connectivity. This assumption is satisfied by many deformable object datasets [1, 34, 31], and the same connectivity can be achieved with consistent remeshing. Let N be the number of shapes in the dataset; the i-th shape is denoted S_i, and p_{i,v} is the v-th vertex on the i-th model. The deformation gradient T_{i,v} is a local affine transform that describes the local deformation around a vertex. Using polar decomposition, T_{i,v} = R_{i,v} S_{i,v}: the deformation gradient can be decomposed into a rotation matrix R_{i,v} and a scale/shear matrix S_{i,v}. By collecting the non-trivial entries of the rotation and scale/shear components, the deformation around the v-th vertex of the i-th shape can be represented as a 9-dimensional vector q_{i,v}. Following [32], we further apply a linear scaling that maps each element of q_{i,v} into the range of the tanh function, allowing tanh to be used as an activation function.
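The two steps above can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation: the polar decomposition is computed via SVD, and the exact scaling range used for the tanh activation is an assumption here.

```python
import numpy as np

def polar_decompose(T):
    """Split a 3x3 deformation gradient T into a rotation R and a symmetric
    scale/shear factor S such that T = R @ S (polar decomposition via SVD)."""
    U, sigma, Vt = np.linalg.svd(T)
    if np.linalg.det(U @ Vt) < 0:   # fold any reflection into the scale part
        U[:, -1] *= -1
        sigma[-1] *= -1
    R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S

def scale_features(F, lo=-0.95, hi=0.95):
    """Linearly map each feature dimension of F (n_vertices x 9) into [lo, hi]
    so a tanh activation can reproduce it (the exact range is an assumption)."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)
    return lo + (hi - lo) * (F - fmin) / span
```

Because S is symmetric, only its six upper-triangular entries are non-trivial, which together with the rotation parameters yields the 9-dimensional per-vertex feature.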
4 Our Framework
In this section we introduce the basic operations and network architecture used in our framework. We first describe our modified mesh simplification algorithm, which is used to build the mesh hierarchy for our pooling operations. We then introduce the graph convolutional operation, and finally summarize our network structure.
4.1 Mesh Simplification
We use mesh simplification to help build reliable pooling operations. For this purpose, mesh simplification not only creates a mesh hierarchy with different levels of detail, but also maintains the correspondences between coarser and finer meshes. Our simplification process is based on the classical method [12], which performs repeated edge contractions in an order determined by a metric measuring shape changes. However, the original approach cannot guarantee that the simplified mesh contains evenly distributed triangles. To achieve more effective pooling, each vertex in the coarser mesh should correspond to a similarly sized region.
Our observation is that edge length is an important indicator for this process. To avoid contracting long edges, we incorporate the edge length as one of the criteria for ordering the pairs of points to be simplified. The original work defines the error at a vertex v as a quadratic form Δ(v) = vᵀQv, where Q is the sum of the fundamental error quadrics introduced in [12]. For a given edge contraction (v₁, v₂) → v', they simply choose Q' = Q₁ + Q₂ as the new matrix approximating the error at v', so the error at v' is Δ(v') = v'ᵀ(Q₁ + Q₂)v'. We propose to add the new edge length to the original simplification error metric. Specifically, given an edge (v₁, v₂) to be contracted to a new vertex v', the total error is defined as:

    E(v') = v'ᵀ(Q₁ + Q₂)v' + λ max(l₁, l₂),    (1)

where l₁ = max_{u ∈ N(v₁)} ‖v' − u‖ (resp. l₂ = max_{u ∈ N(v₂)} ‖v' − u‖) is the maximum new edge length between v' and the neighbors of v₁ (resp. v₂), N(v₁) (resp. N(v₂)) is the set of neighboring vertices of v₁ (resp. v₂), and λ is a weight. Note that we only penalize the maximum edge length around newly created vertices, to effectively avoid triangles with overly long edges. In our experiments, we contract half of the vertices between adjacent levels of detail to support effective pooling. A representative simplification example is shown in Fig. 2, which clearly shows the effect of our modified simplification algorithm. The advantage of our modified simplification over the original one for pooling, and thus for shape reconstruction, is discussed in Section 5.1.
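A sketch of this modified contraction cost follows. It assumes the quadrics are the standard 4x4 QEM matrices of [12] and uses a placeholder value for the weight λ; it is an illustration, not the paper's code.

```python
import numpy as np

def contraction_error(v_new, Q1, Q2, nbrs1, nbrs2, lam=1.0):
    """Contraction cost of Eq. 1 (a sketch): the classical quadric error [12]
    plus lam times the longest new edge around the merged vertex.

    Q1/Q2: 4x4 error quadrics of the two endpoints.
    nbrs1/nbrs2: (k, 3) arrays of neighbour positions of v1 and v2.
    lam: edge-length weight (placeholder value, not the paper's setting)."""
    h = np.append(v_new, 1.0)                 # homogeneous coordinates
    quadric = float(h @ (Q1 + Q2) @ h)        # classical QEM term
    l1 = np.linalg.norm(nbrs1 - v_new, axis=1).max()
    l2 = np.linalg.norm(nbrs2 - v_new, axis=1).max()
    return quadric + lam * max(l1, l2)
```

Edges are then contracted in increasing order of this cost, so long-edge contractions are deferred and the coarse triangles stay more evenly sized.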
4.2 Pooling and Depooling
Mesh simplification is achieved by repeated edge contraction, i.e., contracting two adjacent vertices into a new vertex. We exploit this process to define our pooling operation in a way similar to image-based pooling. We use average pooling in our framework (alternative pooling operations can be defined similarly). As illustrated in Fig. 3, following an edge contraction step, we define the feature of a new vertex as the average of the features of the contracted vertices. This ensures that the pooling operation effectively operates on the relevant simplified regions. The process has two advantages: it preserves a correct topology to support multiple levels of convolution/pooling, and it makes the receptive field well defined.
Since our network has a decoder structure, we also need to define a corresponding depooling operation. We again take advantage of the simplification relationships and define depooling as the inverse operation: the feature of each vertex on the simplified mesh is assigned equally to the corresponding contracted vertices on the dense mesh.
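The pooling/depooling pair above reduces to simple bookkeeping once the contraction groups are recorded. A minimal sketch, where `groups[c]` is the (recorded) list of fine-mesh vertices merged into coarse vertex c:

```python
import numpy as np

def avg_pool(X_fine, groups):
    """Average pooling: groups[c] lists the fine-mesh vertex indices that were
    contracted into coarse vertex c during simplification."""
    return np.stack([X_fine[ids].mean(axis=0) for ids in groups])

def depool(X_coarse, groups, n_fine):
    """Depooling (the inverse): each coarse vertex's feature is assigned back
    to all of its contracted fine-mesh vertices."""
    X_fine = np.zeros((n_fine, X_coarse.shape[1]))
    for c, ids in enumerate(groups):
        X_fine[ids] = X_coarse[c]
    return X_fine
```

Because the coarse mesh is itself a valid mesh, further graph convolutions can be applied after pooling, and depooling restores the finer resolution in the decoder.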
4.3 Graph Convolution
To form a complete neural network architecture, we adopt the spectral graph convolutions introduced in [7]. Let x be the input and y the output of a convolution operation; x and y are matrices in which each row corresponds to a vertex and each column to a feature dimension. Let L denote the normalized graph Laplacian. The spectral graph convolution used in our network is then defined as

    y = Σ_{k=0}^{K−1} θ_k T_k(L̃) x,    (2)

where L̃ = 2L/λ_max − I is the rescaled Laplacian (with λ_max the largest eigenvalue of L), θ_k are the polynomial coefficients, and T_k(L̃) is the Chebyshev polynomial of order k evaluated at L̃.
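Eq. 2 can be evaluated with the standard Chebyshev recurrence T_k(x) = 2x T_{k−1}(x) − T_{k−2}(x). The sketch below uses scalar per-order coefficients for clarity; real layers as in [7] use a weight matrix per order.

```python
import numpy as np

def cheb_conv(x, L, theta):
    """Spectral graph convolution of Eq. 2 (scalar weights for clarity).

    x: (n, f) vertex features, L: (n, n) normalized graph Laplacian,
    theta: list of K polynomial coefficients."""
    lmax = float(np.linalg.eigvalsh(L).max())
    L_t = (2.0 / lmax) * L - np.eye(L.shape[0])   # spectrum rescaled to [-1, 1]
    T_prev, T_cur = x, L_t @ x                    # T_0(L~) x and T_1(L~) x
    y = theta[0] * T_prev
    if len(theta) > 1:
        y = y + theta[1] * T_cur
    for k in range(2, len(theta)):
        T_prev, T_cur = T_cur, 2.0 * (L_t @ T_cur) - T_prev  # Chebyshev recurrence
        y = y + theta[k] * T_cur
    return y
```

The filter is K-localized: each output vertex depends only on vertices within K − 1 hops, which is what makes combining such convolutions with pooling enlarge the receptive field.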
4.4 Network Structure
As illustrated in Fig. 1, our overall network is built on our average pooling operation and convolutional operation, with a variational autoencoder structure. The input to the encoder is the preprocessed feature matrix of shape n × 9, where n is the number of vertices and 9 is the dimension of the deformation representation.
Unlike the original MeshVAE [32], which uses fully connected layers, our approach benefits from graph convolutions and, especially, the newly defined pooling operations to massively reduce the number of parameters. The pooling operations also improve the generalization ability of the network.
Our network takes the preprocessed vertex features defined in [10] as input, which then go through two graph convolutional layers, followed by one pooling layer and another graph convolutional layer. To avoid overfitting, the last convolutional layer does not use any nonlinear activation function. The output of the last convolutional layer is mapped to a mean vector and a deviation vector by two different fully connected layers. The mean vector has no activation function, and the deviation vector uses sigmoid as its activation function.
To reconstruct the shape representation from the latent vector, the decoder is used, which essentially mirrors the encoder. For the decoder convolutional layers, we use the transposed weights of the corresponding encoder layers, with all layers using the same output activation function. Corresponding to the pooling operation, the depooling operation described in Section 4.2 maps features on a coarser mesh to a finer mesh. The output of the whole network has the same dimensions as the input, and can be rescaled back to the deformation representation to reconstruct the deformed shape.
To train the model to fit the large dimension of mesh features, we use the mean squared error (MSE) as the reconstruction loss. Combined with the KL-divergence [18], the total loss function for the model is defined as

    L = (α/N) Σ_{i=1}^{N} ‖X̂_i − X_i‖²_F + D_KL(q(z|X) ‖ p(z)),    (3)

where X_i and X̂_i represent the preprocessed features of the i-th model and the corresponding output of the network, ‖·‖_F is the Frobenius norm of a matrix, N is the number of shapes in the dataset, and α is a parameter adjusting the priority between the reconstruction loss and the KL-divergence. z is the latent vector, p(z) is the prior probability, q(z|X) is the posterior probability, and D_KL is the KL-divergence.
4.5 Conditional VAE
When the VAE is used for shape generation, it is often desirable to allow selecting the type of shape to be generated, especially for datasets containing shapes from different categories (such as men and women, or thin and fat bodies; see [24] for more examples). To achieve this, we follow [30] and add labels to the input and the latent vectors to extend our framework. In this case, our loss function becomes

    L = (α/N) Σ_{i=1}^{N} ‖X̂ᶜ_i − X_i‖²_F + D_KL(q(z|X, c) ‖ p(z|c)),    (4)

where X̂ᶜ_i is the output of the conditional VAE, and p(z|c) and q(z|X, c) are the conditional prior and posterior probabilities, respectively, given the condition label c.
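For a diagonal Gaussian posterior and a standard-normal prior, the KL term in these losses has a closed form, so the total loss of Eq. 3 can be sketched as follows (a NumPy illustration under those standard assumptions, not the paper's training code):

```python
import numpy as np

def vae_loss(X, X_hat, mu, log_var, alpha=1.0):
    """Total loss of Eq. 3 (sketch): alpha-weighted MSE reconstruction term
    averaged over the N shapes, plus the closed-form KL divergence between
    the Gaussian posterior N(mu, exp(log_var)) and the standard-normal prior."""
    N = X.shape[0]
    recon = (alpha / N) * np.sum((X_hat - X) ** 2)       # squared Frobenius norms
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl
```

The conditional loss of Eq. 4 has the same shape, with the posterior and prior additionally conditioned on the label c.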
4.6 Implementation Details
In our experiments, we contract half of the vertices between adjacent hierarchy levels, with the weight λ in Eq. 1, the Chebyshev order K in the graph convolutions, and the weight α in the total loss function fixed as hyperparameters. The latent space dimension is 128 for all our experiments. We also use regularization on the network weights to avoid overfitting, and train with the Adam optimizer [16].
Table 1: Average RMS reconstruction errors for unseen shapes under different convolution and pooling choices.

Dataset | Only Spatial Conv. | Only Spectral Conv. | [12] Simp. | Uniform | Graph Pooling | Mesh Sampling | Our Method
SCAPE   | 0.1086 | 0.0825 | 0.0898 | 0.0813 | 0.0824 | 0.0831 | 0.0763
Swing   | 0.0359 | 0.0282 | 0.0284 | 0.0281 | 0.0292 | 0.0298 | 0.0268
Fat     | 0.0362 | 0.0267 | 0.0285 | 0.0305 | 0.0253 | 0.0289 | 0.0249
Hand    | 0.0300 | 0.0284 | 0.0271 | 0.0280 | 0.0306 | 0.0278 | 0.0260
Table 2: Comparison with alternative network structures (average RMS error on unseen shapes).

Dataset | ‘CPCPC’ | ‘CCPCCPC’ | Our Network
SCAPE   | 0.0942  | 0.0863    | 0.0763
Table 3: Comparison with state-of-the-art mesh autoencoders (average RMS reconstruction error on unseen shapes; ‘–’ means not reported).

Dataset | #Vertices | Tan 2018 | Gao 2018 | Ranjan 2018 | Ours
SCAPE   | 12500 | –      | 0.1086 | 0.1095 | 0.0763
Swing   | 9971  | –      | 0.0359 | 0.0557 | 0.0268
Fat     | 6890  | 0.0308 | 0.0362 | 0.0324 | 0.0249
Hand    | 3573  | 0.0362 | 0.0300 | 0.0632 | 0.0260
Face    | 11849 | –      | 1.0619 | 1.1479 | 0.7257
Horse   | 8431  | –      | 0.0128 | 0.0510 | 0.0119
Camel   | 11063 | –      | 0.0134 | 0.0265 | 0.0115
5 Experiments
5.1 Framework Evaluation
To compare different network structures and settings, we use several shape deformation datasets: the SCAPE dataset [1], the Swing dataset [34], the Face dataset [23], the Horse and Camel datasets [31], Fat (ID 50002) from the MPI DYNA dataset [24], and a Hand dataset. Each dataset is randomly split into halves for training and testing. We test the capability of the network to generate unseen shapes and report the average RMS (root mean squared) errors.
Effect of Pooling. In Table 1 (Columns 3 and 8) we compare the RMS errors of reconstructing unseen shapes with and without pooling. The RMS error is consistently lower with pooling, which shows the benefit of our pooling and depooling operations.
Comparison with Spatial Convolutions. We compare spectral graph convolutions with alternative spatial convolutions, both in a network architecture similar to that of Fig. 1. The results in Table 1 (Columns 2 and 3) show that spectral graph convolutions give better results.
Comparisons with Other Pooling or Sampling Methods. To demonstrate the benefit of our simplification-based pooling operation, we compare it with pooling based on the original algorithm [12], the existing graph pooling method [28], and the mesh sampling operation introduced in [27]. We further compare with a representative uniform simplification method based on remeshing [5], which distributes vertices uniformly but loses geometric details. In contrast, our method aims for a uniform yet shape-preserving simplification, which leads to better results. As Table 1 shows, the RMS error of our pooling on unseen data is lower on average than that of [12]-based pooling, graph pooling [28], and mesh sampling [27], which indicates that our modified simplification algorithm is more effective for pooling and that our pooling generalizes better across multiple datasets.
Alternative Network Structures. In addition to the architecture shown in Fig. 1, we also consider alternative architectures, including ones with more convolutional and pooling layers (see Table 2). Our chosen architecture performs better than the alternatives, showing that it is flexible enough to learn from the data yet not so complicated as to overfit.
Comparison with State-of-the-Art. In Table 3, we compare our method with state-of-the-art mesh-based autoencoder architectures [11, 27, 32] in terms of RMS errors when reconstructing unseen shapes. Thanks to the spectral graph convolutions and our pooling, our method consistently reduces the reconstruction error on unseen data, showing superior generalizability. For example, compared with [11], which uses the same per-vertex features as ours, our network achieves lower average RMS reconstruction errors on the SCAPE and Face datasets. We further show qualitative reconstruction comparisons with [11] and [27] in Figs. 4, 5 and 6; these figures show that our method produces more accurate reconstructions than [11, 27]. In Table 4, we present two comparisons illustrating that our network requires far fewer parameters than the original MeshVAE.
5.2 Generation of Novel Models
Once our network is trained, we can use the latent space and the decoder to generate new shapes. We feed samples from the standard normal distribution to the trained decoder. As seen in Fig. 7, our network is capable of generating reasonable new shapes. To verify that the generated shapes do not exist in the dataset, we find the nearest shapes in the original datasets, based on the average per-vertex Euclidean distance, for visual comparison. The generated shapes are indeed new and different from any existing shape in the datasets. To demonstrate conditional random generation, we train the network on the DYNA dataset [24], using BMI + gender and motion as the conditions. As shown in Fig. 8, our method can randomly generate models conditioned on the body shape ‘50007’ (a male model with BMI 39.0) and on the action labeled ‘One Leg Jump’, including lifting the leg.
5.3 Mesh Interpolation
Our method can also be used for shape interpolation, since the VAE compresses complex 3D models into a low-dimensional latent vector. We test the capability of our framework to interpolate between two different shapes from the original dataset. First, we compute the mean outputs of the probabilistic encoder for the two shapes. Then, we linearly interpolate between the two latent vectors to generate a sequence of latent vectors for the probabilistic decoder. Finally, we use the decoder outputs to reconstruct a 3D deformation sequence. We compare our method on the SCAPE dataset [1] with MeshVAE [32] and a state-of-the-art data-driven deformation method [8], as shown in Figures 9 and 10 respectively. MeshVAE [32] produces interpolation results with obvious artifacts. The results of the data-driven method [8] tend to follow movement sequences from the original dataset with similar start and end states, leading to redundant motions such as the swing of the right arm. In contrast, our interpolation gives more reasonable motion sequences. In Fig. 11, we show a comparison with the method of [19], which produces artifacts, especially in the synthesized human hands. We show more interpolation results in Fig. 13, including sequences between newly generated models and models beyond human bodies.
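The interpolation procedure above amounts to linear blending in latent space followed by decoding. A minimal sketch, where `decode` is a stand-in for the trained probabilistic decoder:

```python
import numpy as np

def interpolate_shapes(z_a, z_b, steps, decode):
    """Shape interpolation: linearly interpolate between the encoder means
    z_a and z_b and decode every intermediate latent vector.  `decode` is a
    placeholder for the trained probabilistic decoder."""
    return [decode((1.0 - t) * z_a + t * z_b)
            for t in np.linspace(0.0, 1.0, steps)]
```

Each decoded output is a deformation-feature matrix that is rescaled back to the representation of [10] to reconstruct one frame of the sequence.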
To show the ability of our network to process denser meshes, we compare it with MeshVAE [32], which can only handle much lower-resolution meshes due to the high memory demands of its fully connected network. This is not a problem for our method, thanks to the convolution kernels and pooling operations, so our method can recover finer details in interpolation, reconstruction and random generation. A comparison example for interpolation is shown in Fig. 12.
5.4 Embedding
Our method can compress 3D shapes into low-dimensional vectors for visualization, allowing shapes to be embedded in a low-dimensional space. To better visualize the embedding, we use the two latent dimensions with the largest variances as the horizontal and vertical coordinates of each model in the 2D embedding graph. Our method separates the models according to their shapes, while placing models with similar poses close together. We use a representative motion sequence with different deformation types, namely a horse motion sequence from [31]; the Horse dataset [31] contains a galloping horse, which forms a cyclic sequence. As the embedding result in Fig. 14 shows, the embedded sequence forms a circle matching the original cyclic sequence, indicating that our network has good embedding ability. We also compare with t-SNE and PCA in Fig. 15: our result presents two circles, while t-SNE and PCA cannot reveal the intrinsic structure of the data.
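The 2D embedding step described above can be sketched as a simple variance-based coordinate selection over the encoder mean vectors:

```python
import numpy as np

def embed_2d(Z):
    """2D embedding: keep the two latent dimensions with the largest variance
    across the dataset as the plot coordinates.

    Z: (n_shapes, latent_dim) matrix of encoder mean vectors."""
    order = np.argsort(Z.var(axis=0))[::-1]   # dimensions by decreasing variance
    return Z[:, order[:2]]
```

Unlike t-SNE, this projection is linear and axis-aligned in the learned latent space, which is why a cyclic motion sequence can appear as a clean closed loop.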
6 Conclusions
In this paper we introduced a newly defined pooling operation based on a modified mesh simplification algorithm, and integrated it into a mesh variational autoencoder architecture that uses per-vertex feature representations as inputs and utilizes graph convolutions. Through extensive experiments we demonstrated that our generative model has better generalization ability. Compared to the original MeshVAE, our method can generate high-quality deformable models with richer details. Our experiments also show that our method outperforms state-of-the-art methods in various applications, including shape generation and shape interpolation. One limitation of our method is that it can only process meshes with shared connectivity. As future work, it is desirable to develop a framework capable of handling shapes with different topologies as input.
References
[1] (2005) SCAPE: shape completion and animation of people. ACM Transactions on Graphics 24(3), pp. 408–416.
[2] (2009) Semantic deformation transfer. ACM Transactions on Graphics 28(3), pp. 36:1–36:6.
[3] (2016) Anisotropic diffusion descriptors. Computer Graphics Forum 35(2), pp. 431–441.
[4] (2016) Learning shape correspondence with anisotropic convolutional neural networks. In NIPS, pp. 3189–3197.
[5] (2004) A remeshing approach to multiresolution modeling. In SGP, pp. 185–192.
[6] (2017) Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34(4), pp. 18–42.
[7] (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, pp. 3844–3852.
[8] (2017) Data-driven shape interpolation and morphing editing. Computer Graphics Forum 36(8), pp. 19–31.
[9] (2016) Efficient and flexible deformation representation for data-driven surface modeling. ACM Transactions on Graphics 35(5), pp. 158:1–158:17.
[10] (2017) Sparse data driven mesh deformation. arXiv preprint arXiv:1709.01250.
[11] (2018) Automatic unpaired shape deformation transfer. ACM Transactions on Graphics 37(6), pp. 1–15.
[12] (1997) Surface simplification using quadric error metrics. In SIGGRAPH, pp. 209–216.
[13] (2015) Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163.
[14] (2019) TextureNet: consistent local parametrizations for learning from high-resolution signals on meshes. In CVPR.
[15] (2017) Smooth interpolation of key frames in a Riemannian shell space. Computer Aided Geometric Design 52, pp. 313–328.
[16] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[17] (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[18] (1951) On information and sufficiency. The Annals of Mathematical Statistics 22(1), pp. 79–86.
[19] (2018) Deformable shape completion with graph convolutional autoencoders. In CVPR.
[20] (2017) Convolutional neural networks on surfaces via seamless toric covers. ACM Transactions on Graphics 36(4), 71.
[21] (2015) Geodesic convolutional neural networks on Riemannian manifolds. In ICCV Workshops, pp. 37–45.
[22] (2015) VoxNet: a 3D convolutional neural network for real-time object recognition. In IROS.
[23] (2013) Sparse localized deformation components. ACM Transactions on Graphics 32(6), 179.
[24] (2015) Dyna: a model of dynamic human shape in motion. ACM Transactions on Graphics 34(4), 120.
[25] (2016) PointNet: deep learning on point sets for 3D classification and segmentation. arXiv preprint arXiv:1612.00593.
[26] (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In NIPS, pp. 5099–5108.
[27] (2018) Generating 3D faces using convolutional mesh autoencoders. In ECCV, pp. 725–741.
[28] (2018) Mining point cloud local structures by kernel correlation and graph pooling. In CVPR.
[29] (2016) Deep learning 3D shape surfaces using geometry images. In ECCV, pp. 223–240.
[30] (2015) Learning structured output representation using deep conditional generative models. In NIPS, pp. 3483–3491.
[31] (2004) Deformation transfer for triangle meshes. ACM Transactions on Graphics 23(3), pp. 399–405.
[32] (2018) Variational autoencoders for deforming 3D mesh models. In CVPR.
[33] (2018) Mesh-based autoencoders for localized deformation component analysis. In AAAI.
[34] (2008) Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics 27(3), 97.
[35] (2017) O-CNN: octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics (SIGGRAPH) 36(4).
[36] (2018) Adaptive O-CNN: a patch-based deep representation of 3D shapes. ACM Transactions on Graphics (SIGGRAPH Asia) 37(6).
[37] (2016) Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS, pp. 82–90.
[38] (2017) SyncSpecCNN: synchronized spectral CNN for 3D shape segmentation. In CVPR, pp. 2282–2290.
[39] (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV.