
Picasso: A CUDA-based Library for Deep Learning over 3D Meshes

We present Picasso, a CUDA-based library comprising novel modules for deep learning over complex real-world 3D meshes. Hierarchical neural architectures have proved effective in multi-scale feature extraction which signifies the need for fast mesh decimation. However, existing methods rely on CPU-based implementations to obtain multi-resolution meshes. We design GPU-accelerated mesh decimation to facilitate network resolution reduction efficiently on-the-fly. Pooling and unpooling modules are defined on the vertex clusters gathered during decimation. For feature learning over meshes, Picasso contains three types of novel convolutions namely, facet2vertex, vertex2facet, and facet2facet convolution. Hence, it treats a mesh as a geometric structure comprising vertices and facets, rather than a spatial graph with edges as previous methods do. Picasso also incorporates a fuzzy mechanism in its filters for robustness to mesh sampling (vertex density). It exploits Gaussian mixtures to define fuzzy coefficients for the facet2vertex convolution, and barycentric interpolation to define the coefficients for the remaining two convolutions. In this release, we demonstrate the effectiveness of the proposed modules with competitive segmentation results on S3DIS. The library will be made public through



1 Introduction

Data in computer vision vary commonly from a homogeneous format in 2D projective space (e.g. images, videos) to a heterogeneous format in 3D Euclidean space (e.g. point clouds, meshes). The success of convolutional feature learning on homogeneous data [21, 28, 37, 38, 46, 47, 50, 55] has sparked research interest in geometric deep learning [6, 7, 14, 26, 54], which aims for equally effective feature learning on heterogeneous data. Due to the rise of autonomous driving and robotics, 3D deep learning has become an important branch of this research direction. Compared to 3D point clouds, 3D meshes convey richer geometric information about object surface and topology. Yet, the heterogeneous facet shapes and sizes, combined with unstructured vertex locations, make their adaptation to deep learning more difficult than for point clouds. This is why most approaches address real-world 3D scene understanding via convolutions on point clouds [31, 32, 33, 35, 43, 44]. However, point clouds fall short in preserving the structural details that are easily represented by meshes. A few works do learn features from meshes, but they are largely constrained to shape analysis on small synthetic models [5, 20, 39, 41, 45, 68]. These methods either apply convolutions throughout the network at a single mesh resolution (the input), or exploit inefficient CPU-based algorithms to decimate the mesh [16, 17, 51, 71]. Non-hierarchical network configurations and slow network coarsening are both problematic when dealing with real-world meshes because of their large-scale nature. This calls for mesh simplification methods that are fast and amenable to deep learning for real-world applications.

(a) The input mesh (b) VC:  ms (c) QEM:  ms (d) Ours:  ms
Figure 1: Runtime comparison of mesh simplification. The input mesh consists of vertices and facets. Different methods simplify it to similar mesh sizes. The numbers of vertices and facets in the decimated meshes are and for vertex clustering (VC) [51], and for the quadric error metrics (QEM) [16, 17], and and for our GPU-accelerated algorithm. The runtimes of VC, QEM, and our method are respectively  ms,  ms, and  ms. For VC and QEM, we use their popular implementations in Open3D [71]. For better visualization, we show only the wireframes of the meshes. Best viewed in color and enlarged.

We present a GPU-accelerated mesh simplification algorithm to facilitate the exploration of hierarchical architectures on meshes. The proposed method is not only fast in decimating small-scale watertight meshes from CAD modelling [4, 9, 11, 36], but is also efficient in simplifying large-scale unstructured real-world meshes [2, 8, 12, 22]. We perform all computations in parallel on the GPU, except for the grouping of vertex pairs to be contracted. Meanwhile, to increase compatibility with modern deep learning modules such as normalization [3, 24, 60, 66], we contract vertex clusters by controlling the desired vertex count of the decimated mesh. This also allows mesh-based modules to be exploited in conjunction with various point cloud based modules [52]. Our algorithm reduces the number of mesh vertices by half in each iteration. Figure 1 compares the runtime of our method with two well-founded decimation methods, VC [51] and QEM [17]. Notice that our method is faster than QEM. During the simplification, we record all vertex clustering information in a 1-D tensor. Based on this tensor, we also define max, average and weighted poolings, as well as unpooling.

Earlier attempts at convolution on meshes [5, 39, 41] explored local patch operators in hand-crafted coordinate systems. The development of spatial graph convolutions has led recent methods [20, 45, 52] to predominantly treat a (triangular) mesh as a special graph and convolve features of each vertex over its geodesic -ring neighborhood. In contrast, we study the mesh as a set of vertices and facets, following its natural geometric structure. To learn the features of each vertex, we aggregate context information from its adjacent facets. We refer to the resulting operation as facet2vertex convolution. Lei et al. [32] showed that a fuzzy mechanism makes a convolutional network robust to point density variation. Hence, we further exploit fuzzy coefficients in the facet2vertex convolution. Because facet normals are distributed strictly on the surface of a unit sphere, we associate learnable filter weights with Gaussian mixtures defined on that sphere. The parameters of the Gaussian components can optionally be kept fixed or made trainable within the network in our library. On the other hand, to learn the features of mesh facets, we introduce the vertex2facet and facet2facet convolutions. The former propagates features from the vertices of a facet to the facet itself, while the latter is applied when the facets of the input mesh are rendered with textures. We incorporate fuzziness into these two convolutions using barycentric interpolation. Together, the three proposed convolutions enable flexible vertex-wise and facet-wise feature learning on the mesh. We provide CUDA implementations of all the above modules and organize them into a self-contained library, named Picasso (paying homage to Pablo Picasso for cubism in paintings), to facilitate deep learning over unstructured real-world 3D meshes. We note that meshes and point clouds are tightly coupled, and it is more desirable to extract features from the two data modalities cooperatively rather than individually or competitively. DCM-Net [52] also validates this argument. For this reason, we additionally incorporate all the point cloud modules from Lei et al. [32, 33] in our library (with author permission). In this maiden release of our library, we demonstrate the promise of the proposed modules with competitive segmentation results on S3DIS [2]. The segmentation network is abbreviated as PicassoNet for consistency. We summarize the main contributions of our work below:

  • We present a fast GPU-accelerated 3D mesh decimation technique to reduce mesh resolution on-the-fly. A public implementation with complementary CUDA-based pooling and unpooling operations is provided.

  • We propose three novel convolution modules, i.e. facet2vertex, vertex2facet, and facet2facet, to alternately learn vertex-wise and facet-wise features on a mesh. Diverging from existing methods, we do not rely on the restrictive treatment of a mesh as an undirected graph with edges. Instead, our modules treat it as a geometric structure composed of vertices and facets, which is also a more conducive representation for digital devices.

  • With this paper, we release Picasso — a self-contained library for deep learning over unstructured real-world 3D meshes, along with synthetic watertight meshes. The provided anonymous github link will be made public for the broader research community.

2 Related Work

Convolutions on point clouds: 3D-CNNs [15, 18, 40, 49, 67] are the most intuitive solutions for transferring CNNs from images to point clouds. A few methods also explore similar regular-grid kernels in a transformed data domain [56, 57]. Permutation-invariant networks exploit multi-layer perceptrons (MLPs) and max-pooling to learn features from point clouds [27, 34, 43, 44, 48, 53, 65]. They demonstrate the effectiveness of taking point coordinates as input features. Graph-based networks allow convolutions to be performed in either the spectral or the spatial domain. However, the mandatory alignment of different graph Laplacians makes the application of spectral graph convolutions to point clouds more difficult than that of spatial graph convolutions [70]. As a pioneering work, ECC [54] exploited dynamic filters [13] to analyze point clouds with spatial graph convolutions. Subsequent works explored more effective filter or kernel parameterizations [19, 35, 62, 65, 69]. The recent discrete kernels [31, 32, 33, 59] are appealing alternatives to those dynamic methods as they avoid generating filters within the network. KPConv [59] reported more competitive results, while the spherical kernels [32, 33] are more memory and runtime efficient.

Convolutions on meshes: In the nascency of this direction, researchers generally performed convolutions over local patches defined in a hand-crafted coordinate system [5, 39, 41]. The coordinate system could be established either by geodesic level sets [39] or by surface normals and principal curvatures [5, 41]. In FeaStNet [61], Verma et al. capitalized on a learnable mapping between filter weights and graph neighborhoods to replace those hand-crafted local patches. TextureNet [23] takes surface meshes with high-resolution facet textures as input, and explores a 4-RoSy field to parameterize the mesh into local planar patches such that standard CNNs [28] are applicable. Ranjan et al. [14] proposed to learn human facial expressions with hierarchical mesh-based networks, using spectral graph convolutions. Schult et al. [52] proposed to extract features from meshes and point clouds simultaneously. Similar to [35, 54, 64, 65], they still conduct convolution using dynamically generated filters. Whereas most methods learn vertex-wise features, MeshCNN [20] defines a convolution that learns edge-wise features on a mesh. Generally, previous methods treat the mesh as an edge-based graph and employ geodesic convolutions over it. In this paper, we explore convolutions on the mesh following its own geometric structure, i.e. vertices and facets. To promote this more natural perspective, we also provide computation- and memory-optimized CUDA implementations for the forward and backward propagations of all the convolutions we propose.

Mesh decimation: Hierarchical networks allow convolutions to be applied on increasing receptive fields of the input data. To create such hierarchical architectures on point clouds, researchers usually exploit random point sampling or farthest point sampling (FPS) [33, 44, 65]. However, because of their inability to track vertex connections, these samplers are not applicable to mesh processing. Fortunately, mesh simplification is a well-studied topic in the graphics community, and several methods are available to decimate a mesh [16, 17, 51]. For example, Ranjan et al. [45] used the quadric error metrics [17] to simplify their synthetic facial meshes. Schult et al. [52] explored vertex clustering [51] and quadric error metrics [16, 17] to simplify their indoor room meshes. In particular, they made use of the simplification functions provided by Open3D [71], a popular library for 3D geometry processing in Python. However, the implementations in Open3D are CPU-based, which is not amenable to deep learning. In this work, we introduce a fast mesh decimation method based on the algorithm of Garland et al. [16, 17]. Their method simplifies a mesh through iterative contractions of vertex pairs, and demands that the quadric costs of affected vertex pairs be updated after each contraction. This progressive strategy makes the algorithm unsuitable for parallel deployment on GPUs. In comparison, we sort all the vertex pairs by their quadric errors only once, and group them into isolated vertex clusters. All other computations in our method are mutually independent and can be accelerated via parallel GPU computing. We represent the vertex cluster information as a 1-D tensor to facilitate the pooling and unpooling operations.
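The quadric error bookkeeping that both Garland et al.'s method and our variant build on can be sketched compactly. The following is an illustrative NumPy sketch, not the library's CUDA code; for simplicity it contracts an edge to its midpoint, whereas full QEM solves for the optimal target position.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """4x4 quadric Q = pp^T of the supporting plane of triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    plane = np.append(n, d)          # [a, b, c, d] with ax + by + cz + d = 0
    return np.outer(plane, plane)

def edge_cost(Q1, Q2, v1, v2):
    """Quadric cost of contracting (v1, v2); midpoint target is a simplification."""
    Q = Q1 + Q2
    v = np.append((v1 + v2) / 2.0, 1.0)  # homogeneous midpoint
    return float(v @ Q @ v)
```

A vertex quadric is accumulated from all its incident facet planes; a contraction whose target stays on those planes has zero cost, which is why low-cost edges are contracted first.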

3 GPU-Accelerated Mesh Decimation

To explore flexible deep neural networks for meshes, there is an increasing demand in the 3D community for a fast mesh decimation method. Quadric error simplification [17] performs iterative contraction of vertex pairs until the desired number of facets is obtained. This method is known to simplify meshes effectively while retaining their quality. However, it is not suited to parallel computing due to the implicit dependencies between its iterative contractions. Considering its high relevance, we reproduce the quadric error method as Algorithm 1. The iterative dependency of the method is clear from lines 7–13 of the algorithm. We extend [17] and propose a fast mesh decimation method that exploits the parallel computing power of a GPU. The main difference is that we no longer perform iterative contractions. Instead, we group the vertices into multiple isolated clusters based on their connections. Figure 2 provides a toy example to illustrate the clustering process. During clustering, we control the expected vertex number rather than the number of facets or edges. Since the contractions of isolated clusters are independent of each other, they can be executed on a GPU in parallel. More specifically, we initialize the candidate vertex pairs to be contracted using existing mesh edges only. The vertex clusters are established from disjoint vertex pairs among the candidates, before which we sort the candidates in ascending order of their quadric cost (a shuffling strategy is inserted into the sorting of quadrics to vary the decimated mesh across epochs). See Algorithm 2 for our method. We note that our core algorithm reduces the vertex number by roughly half per iteration. Yet, we can still handle an arbitrary number of output vertices, which is achieved by running the core algorithm multiple times. For clarity, we present Algorithm 2 with a single mesh taken as the input. In our library, the decimation function can take mini-batches of concatenated meshes as input. This improves its compatibility with deep learning, e.g. batch normalization [24] is easily applicable. The vertex clustering operations (lines 3–11 in Algorithm 2) are currently executed on the CPU, which helps in dedicating the GPU to the heavy computations. We note that consistency checks and penalties are excluded from our method in favor of better runtime efficiency. For brevity, we delegate the details of the complementary output parameters replace and mapping to the source code itself.

Input:  A triangular mesh , the number of output facets , threshold .
Output:  A decimated mesh .
1:  select any pair , that is an edge or as candidates
2:  compute the quadric of each vertex pair
3:  for each candidate pair ,  do
4:     (b) determine the target position of contraction
5:     (c) apply consistency checks and penalties
6:  end for
7:  repeat
8:     if the pair (, ) has the least quadric cost  then
9:        (b) perform contraction
10:        (c) update
11:        (d) for each affected pair (, ), recompute its target position and cost as in steps 3-6
12:     end if
13:  until 
14:  return
Algorithm 1 The original quadric error simplification
Figure 2: Illustration of edge-based vertex clustering. (a) shows an input mesh with 10 vertex pairs. (b) initializes the disjoint vertex pairs as clusters, shown in red and blue. (c) groups the remaining vertices into the vertex clusters they each connect to, resulting in the final vertex clusters and .

Pooling and unpooling: We record the vertex clustering information and the degenerated vertex information in two different 1-D tensors. The sizes of the two tensors are both identical to the input vertex number. They greatly facilitate the pooling and unpooling operations. Given the vertex clusters as neighborhoods, different pooling operations can be defined, such as sum, average, max, and weighted pooling. We provide average, max, and weighted pooling in the library. For unpooling, we simply interpolate the features of all vertices in a cluster from the features of the representative vertex of that cluster.
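To make the role of the 1-D tensor concrete, here is a minimal NumPy sketch of cluster-driven pooling and unpooling, assuming `cluster[i]` stores the output-vertex index of input vertex `i` (the names and the dense-array layout are our illustration, not the library's CUDA interface):

```python
import numpy as np

def max_pool(features, cluster):
    """features: (N, D); cluster: (N,) int tensor mapping input to output vertices."""
    C = cluster.max() + 1
    out = np.full((C, features.shape[1]), -np.inf)
    np.maximum.at(out, cluster, features)   # unbuffered per-cluster maximum
    return out

def avg_pool(features, cluster):
    """Average pooling; assumes every cluster id in [0, C) is non-empty."""
    C = cluster.max() + 1
    out = np.zeros((C, features.shape[1]))
    np.add.at(out, cluster, features)
    counts = np.bincount(cluster, minlength=C)
    return out / counts[:, None]

def unpool(coarse_features, cluster):
    """Copy each representative's feature back to all vertices of its cluster."""
    return coarse_features[cluster]
```

Because the tensor length equals the input vertex number, the same array serves both directions: indexing reduces for pooling and gathers for unpooling.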

Input:  A triangular mesh , the number of output vertices .
Output:  A decimated mesh ; vertex clustering information replace; vertex degeneration information mapping.
1:  select any pair , that is an edge.
2:  compute the quadric of each vertex pair
3:  sort all pairs in ascending order of their quadric costs
4:  set the number of vertices to remove as ; the number of vertices removed as
5:  for each pair (, do
6:     if , are not in any cluster and  then
7:        (a) form as a new cluster
8:        (b) set
9:     end if
10:  end for
11:  add each remaining vertex that is not in any cluster to an existing cluster containing a vertex connected to it.
12:  for each vertex cluster do
13:     (a) compute quadric
14:     (b) determine the target position of contraction
15:     (c) perform contraction
16:  end for
17:  return
Algorithm 2 Fast GPU-accelerated mesh simplification
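The clustering stage of Algorithm 2 (lines 3–11) can be sketched in plain Python as below. This is an illustration under our reading of the algorithm; the function and variable names are ours, and the released implementation differs (e.g. it shuffles ties and tracks quadrics explicitly).

```python
def cluster_vertices(edges_sorted, num_vertices, num_to_remove):
    """Greedy disjoint-pair clustering.
    edges_sorted: (u, v) pairs in ascending order of quadric cost."""
    cluster = [-1] * num_vertices
    removed, next_id = 0, 0
    # lines 5-10: seed clusters with disjoint low-cost pairs
    for u, v in edges_sorted:
        if cluster[u] < 0 and cluster[v] < 0 and removed < num_to_remove:
            cluster[u] = cluster[v] = next_id
            next_id += 1
            removed += 1            # each pair contraction removes one vertex
    # line 11: absorb leftover vertices into a connected cluster
    for u, v in edges_sorted:
        if cluster[u] < 0 and cluster[v] >= 0:
            cluster[u] = cluster[v]
        elif cluster[v] < 0 and cluster[u] >= 0:
            cluster[v] = cluster[u]
    # any still-isolated vertex keeps a cluster of its own
    for i in range(num_vertices):
        if cluster[i] < 0:
            cluster[i] = next_id
            next_id += 1
    return cluster
```

Since the seeded clusters are disjoint, their subsequent contractions (lines 12–16) are mutually independent, which is exactly what permits the parallel GPU execution described above.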

4 Mesh Convolution

Let be a triangular mesh. The input features of its vertices are , while the normals and areas of each facet are and . If the mesh is rendered, we denote the textures of a facet as , in which represents the texture resolution of the facet and relates to the colors. Since the facets usually have varying areas, we allow to vary across facets as well.

4.1 Vertex2Facet convolution

We compute the features of each facet based on the features of its vertices. In particular, we define a kernel composed of 3 filters associated with the three vertices of the triangular facet. Barycentric interpolation is used to incorporate the fuzzy scheme into the convolution. We determine the total number of interpolated points as


in which are the minimum and maximum facet areas of the mesh, while are hyper-parameters. We use in our experiments. Let be the vertex features, and be the filter weights. The barycentric coordinates satisfy . We use them to interpolate interior points uniformly on the facet, whose features and filter weights are computed respectively as


Here, is the total number of interpolated points defined based on the facet area. With and , we finally compute the feature of each facet as


Facet2Facet convolution: The facet2facet convolution is only applicable when the input mesh is rendered with textures. Each facet contains interpolated points with texture features of size . The definition of the facet2facet convolution is quite similar to that of the vertex2facet convolution. The main difference is that in the facet2facet convolution, there is no need to interpolate features for the interior points, since their features are already available. We only need to interpolate the filter weights , and then compute the facet features following Eq. (4).
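Since the equations above did not survive extraction, the following NumPy sketch records our reading of the vertex2facet convolution: corner features and corner filters are both barycentrically interpolated at the interior points, then multiplied and averaged. All names are ours and the exact normalization is an assumption; treat this as an illustration, not the library's definition.

```python
import numpy as np

def vertex2facet(vertex_feats, facets, filters, bary):
    """Sketch of a vertex2facet convolution with barycentric fuzziness.
    vertex_feats: (V, Cin); facets: (F, 3) vertex ids;
    filters: (3, Cin, Cout), one filter per triangle corner;
    bary: (T, 3) barycentric coordinates of T interpolated points."""
    out = []
    for tri in facets:
        x = vertex_feats[tri]                         # (3, Cin) corner features
        f = bary @ x                                  # (T, Cin) interpolated features
        w = np.einsum('tk,kio->tio', bary, filters)   # (T, Cin, Cout) interpolated filters
        y = np.einsum('ti,tio->o', f, w) / len(bary)  # average over interpolated points
        out.append(y)
    return np.stack(out)                              # (F, Cout) facet features
```

In this reading, facet2facet would skip the `f = bary @ x` step and plug in the per-point texture features directly, interpolating only the filter weights.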

(a) front (b) back (c) top (d) bottom
Figure 3: The distributions of surface normals of an arbitrary room. We show it from different views, including front, back, top and bottom. The six apparent clusters have centers approximating to .

4.2 Facet2Vertex Convolution

We compute vertex features from the features of their adjacent facets. Considering that the normal of each facet lies strictly on the surface of a unit sphere, we define the filter weights of our kernel by associating them with different positions on the sphere. Moreover, we observe that the normals of real-world meshes generally distribute in distinctive patterns on the unit sphere, especially for indoor meshes. This is related to human construction preferences. We show an example in Fig. 3, with data taken from S3DIS [2]. There are six main clustering patterns. They correspond to different normal directions, which are roughly . Consequently, we exploit a mixture of Gaussian components to divide the sphere surface and implicitly cluster the normals. Let the total number of Gaussian components be , and their expectations and covariance matrices be and . Using its normal , we compute the fuzzy coefficients of the facet as


For simplicity, we use a homogeneous diagonal matrix for in our experiments. Let the filter weights in the kernel be , the adjacent facets of vertex be , and the facet features be . The vertex feature is computed as


In our library, the expectations and covariance matrices of the Gaussians can be constant, or learnable together with the filter weights during training. In our experiments, we initialize the Gaussian means as regularly distributed points on the sphere and keep them constant, while allowing the covariance matrices to be learnable parameters. We note that the facet2vertex convolution is scale and translation invariant, but not rotation invariant, because its fuzzy coefficients are computed from facet normals.
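As the facet2vertex equations were also lost in extraction, here is a hedged NumPy sketch of one plausible form: Gaussian-mixture memberships of the facet normals select the filters, and each vertex averages the filtered features of its adjacent facets. The normalized-membership form and all names are our assumptions.

```python
import numpy as np

def fuzzy_coeffs(normals, mu, sigma):
    """Mixture membership of facet normals (hypothetical form).
    normals: (F, 3) unit normals; mu: (K, 3) component means on the sphere;
    sigma: scalar std (homogeneous diagonal covariance)."""
    d2 = ((normals[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (F, K)
    g = np.exp(-0.5 * d2 / sigma ** 2)
    return g / g.sum(axis=1, keepdims=True)   # coefficients sum to 1 per facet

def facet2vertex(facet_feats, vert_facets, coeffs, filters):
    """facet_feats: (F, Cin); vert_facets: list of adjacent facet ids per vertex;
    coeffs: (F, K) fuzzy coefficients; filters: (K, Cin, Cout)."""
    out = []
    for fids in vert_facets:
        w = np.einsum('fk,kio->fio', coeffs[fids], filters)       # fuzzy filters per facet
        y = np.einsum('fi,fio->o', facet_feats[fids], w) / len(fids)
        out.append(y)
    return np.stack(out)                                          # (V, Cout)
```

Because the coefficients depend only on normals, the operation is unchanged under scaling and translation of the mesh, matching the invariance noted above.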

Figure 4: Convolution operations introduced in Picasso. (a) The vertex2facet convolution propagates features from the original and interpolated points on a facet to the facet itself. (b) The facet2vertex convolution propagates features from the adjacent facets of a vertex to the vertex itself. We omit the illustration of the facet2facet convolution for its visual similarity to the vertex2facet convolution. (c) The vertex2vertex convolution is composed of a vertex2facet convolution followed by a facet2vertex convolution. We apply batch normalization to the vertex features.

Figure 5: Convolutional blocks of PicassoNet. The network consists of 5 mesh resolutions including the input, which relates to a network of 5 hierarchical layers. Convolutional blocks are used to learn features within a single hierarchical layer. We test the performance of different block configurations for feature learning. (a) The vanilla block consists of two vertex2vertex convolutions with a shortcut connection. (b) The efficient block contains two vertex2vertex convolutions as well as one point-based convolution before each feature concatenation. It can process meshes with 1 million facets per second at inference time, while maintaining highly competitive results. (c) The generic block uses vertex2vertex convolutions together with a single point-based convolution before the feature combination. We report on-par results with the state of the art on S3DIS [2] using the generic block with .

For all the proposed convolutions, we split the operation into channel-wise and depth-wise parts. Doing so is known to improve computational efficiency [10, 33]. Our mesh decimation technique forces the vertex number, rather than the facet number, of each decimated mesh in a batch to be the same. Therefore, we apply batch normalization to vertex features for stable computation of the batch statistics. Combining the vertex2facet and facet2vertex convolutions, we can perform a vertex2vertex convolution. Figure 4 illustrates the concepts of the vertex2facet and facet2vertex convolutions, along with a flow chart to elucidate the operations in the vertex2vertex convolution.

5 Network Architecture

Receptive field: All the convolutions we propose follow the natural organisation of vertex lists in the mesh facets. Therefore, the operations accumulate context from the first-order neighborhood. This is similar to the receptive field reached by convolution in standard CNNs for image processing. To let feature learning benefit from larger context, we apply multiple cascaded vertex2vertex convolutions in our network, which leads to a deeper neural network.

Disconnected components: Real-world meshes are not guaranteed to form a connected graph. It is probable that a mesh is composed of multiple disconnected components, which breaks context aggregation between the components for mesh convolutions. On the other hand, forcing the mesh to be connected can damage its geometric structure. In this work, we counter the potential issues of mesh connectivity using point cloud convolution, e.g. the fuzzy SPH3D [32]. By using range search [42], point cloud convolutions permit arbitrary receptive fields. This strategy enables our feature learning to go beyond a single connected component and learn features across multiple components.

The PicassoNet: To explore the proposed modules, we design a U-Net-like [50] architecture for the popular semantic segmentation task. For consistency, we name our network PicassoNet. The proposed network is composed of 5 mesh resolutions, corresponding to 5 hierarchical layers. We use convolution blocks to learn features within a single network resolution. Figure 5 shows the different block configurations considered in this work. The vanilla block uses only the mesh convolutions, while the efficient block uses mesh convolutions accompanied by a point-based convolution in parallel, before feature concatenation. PicassoNet based on the efficient block is able to process 1 million facets per second at inference time, while still reporting competitive results. The generic block uses an arbitrary number of mesh-based convolutions with one point-based convolution to extract features before the concatenation. Unlike DCM-Net [52], we do not tune the ratio of feature channels between mesh-based and point-based convolutions. We use identical feature channels for both convolutions, which corresponds to a fixed ratio of 0.5. We use max mesh pooling to obtain down-sampled features for the low-resolution layers. For efficiency, we apply the convolution blocks only in the encoder, whereas in the decoder, we use convolutions and mesh unpooling to upsample the features, somewhat similar to the methods in [32, 59].

6 Experiments

PicassoNet takes 6-dimensional input features , similar to many point cloud networks [32, 33, 59]. Although normals are readily available for both facets and vertices of meshes, we do not observe a noticeable performance gain from using vertex normals as input features in our experiments. To investigate the modules presented in the Picasso library, we focus on semantic segmentation of the real-world dataset S3DIS [2] using PicassoNet. We choose the S3DIS dataset due to the public availability of its training and testing sets. The color values are re-scaled to the range before being fed into the network. Our network is trained on a single GeForce RTX 2080 Ti GPU with the Adam optimizer [25]. For training, we adopt an initial learning rate of 0.001 with exponential decay. Specifically, we decay the learning rate by a factor of 0.7 every 20K batch updates. Throughout the experiments, we use 18 Gaussian components whose centers are located regularly on the unit sphere for all facet2vertex convolutions. See the supplementary material for their specific values. The standard deviations of these Gaussians are universally initialized to 0.25, which is determined by the nearest center distances between different Gaussians. We switch off training of the Gaussian expectations on purpose, because we observe that their values change very little in our experiments. Figure 6 provides an example showing the updates of the standard deviations of the Gaussians during training.
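The stated schedule (initial rate 0.001, decayed by 0.7 every 20K updates) reads as a staircase exponential decay; a one-line sketch of that interpretation, with names of our choosing:

```python
def lr_at(step, base=1e-3, decay=0.7, every=20000):
    """Staircase exponential decay, as we read the schedule in Sec. 6."""
    return base * decay ** (step // every)
```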

Figure 6: Visual comparison of the trained Gaussians at different mesh resolutions against their universal initialisation. The brightness of the colors also reflects the difference in values. The green markers indicate the center of each Gaussian. We show only the results at mesh resolutions . The following resolutions look visually similar to .

We use the fuzzy spherical kernel proposed in [32] for the point-based convolution in our efficient and generic blocks. The kernel size and neighborhood search configurations are identical to [32]. Our network uses batch size in training and takes point clouds of size as inputs. We exploit widely used data augmentations in our experiments, including random flipping, shifting, random scaling, noisy translation, random azimuth rotation and arbitrary rotation perturbations. We also apply random color dropping to augment the vertex textures. The shuffling strategy we inserted into the mesh simplification also plays a role in data augmentation. We apply these augmentations on-the-fly during network training.

Network configuration: We use identical feature configurations for the different convolution blocks shown in Fig. 5. Specifically, the desired vertex sizes of meshes , , , , are , , , , respectively. To construct the neighborhoods for the point-based convolution, we use increasing range search radii , , , , . The output feature sizes of the 5 hierarchical layers are 128, 128, 256, 256, 256, respectively. When mesh-based and point-based convolutions are both included, they each compute features of half the defined output feature channels. For example, when the output feature channels are set to 128, they each compute features of 64 channels, so their concatenated features have 128 channels. We set the multiplier of all convolutions to 1 except for the convolution performed on the inputs, which has . In the decoder, we exploit mesh unpooling for feature interpolation. The output feature channels of the convolutions are the same as their corresponding encoder features. For efficiency, PicassoNet classifies the features obtained at resolution in the decoder directly.

| Methods | OA | mAcc | mIoU | ceiling | floor | wall | beam | column | window | door | table | chair | sofa | bookcase | board | clutter |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PointNet [43] | - | 49.0 | 41.1 | 88.8 | 97.3 | 69.8 | 0.1 | 3.9 | 46.3 | 10.8 | 58.9 | 52.6 | 5.9 | 40.3 | 26.4 | 33.2 |
| SEGCloud [58] | - | 57.4 | 48.9 | 90.1 | 96.1 | 69.9 | 0.0 | 18.4 | 38.4 | 23.1 | 70.4 | 75.9 | 40.9 | 58.4 | 13.0 | 41.6 |
| Tangent-Conv [57] | 82.5 | 62.2 | 52.8 | - | - | - | - | - | - | - | - | - | - | - | - | - |
| SPG [30] | 86.4 | 66.5 | 58.0 | 89.4 | 96.9 | 78.1 | 0.0 | 42.8 | 48.9 | 61.6 | 75.4 | 84.7 | 52.6 | 69.8 | 2.1 | 52.2 |
| PointCNN [35] | 85.9 | 63.9 | 57.3 | 92.3 | 98.2 | 79.4 | 0.0 | 17.6 | 22.8 | 62.1 | 74.4 | 80.6 | 31.7 | 66.7 | 62.1 | 56.7 |
| SSP+SPG [29] | 87.9 | 68.2 | 61.7 | - | - | - | - | - | - | - | - | - | - | - | - | - |
| GACNet [62] | 87.8 | - | 62.9 | 92.3 | 98.3 | 81.9 | 0.0 | 20.4 | 59.1 | 40.9 | 78.5 | 85.8 | 61.7 | 70.8 | 74.7 | 52.8 |
| SPH3D-GCN [33] | 87.7 | 65.9 | 59.5 | 93.3 | 97.1 | 81.1 | 0.0 | 33.2 | 45.8 | 43.8 | 79.7 | 86.9 | 33.2 | 71.5 | 54.1 | 53.7 |
| KPConv [59] | - | 70.9 | 65.4 | 92.6 | 97.3 | 81.4 | 0.0 | 16.5 | 54.5 | 69.5 | 90.1 | 80.2 | 74.6 | 66.4 | 63.7 | 58.1 |
| SegGCN [32] | 88.2 | 70.4 | 63.6 | 93.7 | 98.6 | 80.6 | 0.0 | 28.5 | 42.6 | 74.5 | 80.9 | 88.7 | 69.0 | 71.3 | 44.4 | 54.3 |
| DCM-Net [52] | - | 71.2 | 64.0 | 92.1 | 96.8 | 78.6 | 0.0 | 21.6 | 61.7 | 54.6 | 78.9 | 88.7 | 68.1 | 72.3 | 66.5 | 52.4 |
| PicassoNet (Prop. vanilla) | 86.6 | 65.6 | 58.0 | 93.2 | 98.0 | 78.3 | 0.0 | 16.8 | 28.5 | 61.7 | 75.5 | 86.4 | 49.2 | 69.0 | 45.4 | 52.1 |
| PicassoNet (Prop. ) | 87.8 | 67.8 | 61.0 | 93.8 | 97.6 | 80.6 | 0.0 | 32.6 | 48.2 | 69.3 | 77.8 | 87.0 | 56.8 | 70.1 | 25.5 | 53.3 |
| PicassoNet (Prop. ) | 88.2 | 69.4 | 62.5 | 93.2 | 98.4 | 81.1 | 0.0 | 32.1 | 45.9 | 75.6 | 78.5 | 85.9 | 51.5 | 69.0 | 45.2 | 55.3 |
| PicassoNet (Prop. ) | 89.4 | 70.9 | 64.6 | 93.3 | 97.7 | 83.5 | 0.0 | 31.9 | 53.4 | 69.2 | 81.7 | 88.0 | 50.5 | 74.3 | 58.2 | 57.9 |

Table 1: Performance of different configurations of PicassoNet on the fifth fold (Area 5) of the S3DIS dataset. The results of PicassoNet slightly outperform SegGCN and DCM-Net, and are competitive with KPConv. See the supplementary for details of the input representations exploited by each method.

Phase | training | testing
batch size | 16 | 16 32 64 68 128
mesh size (vertices, facets) | (M, M) | (M, M) (M, M) (M, M) (M, M) (M, M)
runtime | ms | ms ms ms ms ms
testing runtime of SegGCN [32] | - | 153 ms 250 ms 470 ms 485 ms 930 ms

Table 2: Training and testing time of the efficient PicassoNet () for different mini-batch settings. The ‘mesh size’ indicates the total number of vertices and facets, as (vertices, facets), in the concatenated mesh of a batch. The bottom row provides the testing runtime of SegGCN under identical settings with pure point cloud input, for reference.

6.1 Semantic Segmentation

The Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset [2] is a real-world dataset composed of dense 3D point clouds but sparse 3D meshes of 6 large-scale indoor areas. The data was collected with a Matterport scanner from three different buildings on the Stanford campus. Semantic segmentation on this dataset amounts to classifying each point into one of 13 classes: ceiling, floor, wall, beam, column, window, door, table, chair, sofa, bookcase, board, and clutter. We follow the standard training/test split, using Area 5 as the test set and the remaining areas as the training set [30, 35, 43, 58, 63].

The evaluation metrics for this dataset comprise the Overall Accuracy (OA), the average Accuracy over all 13 classes (mAcc), and the class-wise Intersection over Union (IoU), together with its average (i.e. mIoU). mIoU is commonly considered a more reliable metric than the others. DCM-Net [52] used over-tessellation and interpolation to produce high-resolution meshes with labels from the original meshes in S3DIS. In contrast, we triangulate the labelled point clouds into triangular meshes using the algorithm from [1]. Our network takes fixed-size meshes cropped from each room mesh as input data. The experimental results for different block configurations of PicassoNet are provided in Table 1. PicassoNet () provides competitive results to KPConv [59], and slightly outperforms SegGCN [32] and DCM-Net [52]. Table 2 reports the training time of PicassoNet () for batch size 16, and its testing time for different batch sizes. PicassoNet () can process over a million facets per second on a commonly available single RTX 2080 Ti GPU.
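The metrics above can all be derived from a per-point confusion matrix; the following self-contained sketch (an illustration, not the official S3DIS evaluation script) shows how OA, mAcc, and mIoU are computed:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Compute OA, mAcc, and mIoU from per-point predictions via a
    confusion matrix (illustrative sketch, not an official script)."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)              # conf[i, j]: gt class i predicted as j
    tp = np.diag(conf).astype(float)
    oa = tp.sum() / conf.sum()                  # overall accuracy
    acc = tp / np.maximum(conf.sum(axis=1), 1)  # per-class accuracy
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp
    iou = tp / np.maximum(union, 1)             # per-class IoU
    return oa, acc.mean(), iou.mean()

gt = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
oa, macc, miou = segmentation_metrics(pred, gt, num_classes=3)
# oa = 4/6, per-class IoU = [1/3, 2/3, 1/2], so miou = 0.5
```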

6.2 Mesh Simplification Efficiency

We summarize the statistics of the vertex and facet sizes of the room meshes we generate for S3DIS, including the minimum, maximum, and average (vertex, facet) numbers; the standard deviations are large. Our fast mesh decimation method contributes largely to the efficiency of our network, e.g. over a million facets processed per second. We apply our simplification algorithm to all room samples in S3DIS to decimate each into a mesh of 65536 vertices. We compare the speed of our algorithm against the quadric error metrics (QEM) method [17] that our method builds upon. The results are shown in Fig. 7 (left plot), in which the horizontal axis indicates the input vertex sizes of the room meshes and the vertical axis reports the runtime of the algorithms. We also test the runtime of our algorithm by decimating the room meshes into different resolutions, corresponding to vertex sizes of 65536, 32768, 16384, 8192, 4096, 2048, 1024, 512, and 256. These results are reported in Fig. 7 (right plot). The figure shows that the runtime of our algorithm is essentially linear in the number of vertices of the input mesh. These conclusions about the efficiency of our simplification algorithm are generic, as we observe similar results consistently on other mesh datasets as well. See the supplementary for further results.
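To make the decimation interface concrete, the toy NumPy sketch below uses uniform grid-based vertex clustering in the spirit of [51] — not the paper's QEM-based GPU decimator — to show the kind of output involved: a coarse mesh plus a 1-D cluster tensor mapping each input vertex to its coarse vertex, which is what the pooling and unpooling modules consume:

```python
import numpy as np

def grid_cluster_decimate(vertices, faces, voxel):
    """Toy vertex-clustering decimation (NOT the paper's QEM-based
    contraction).  Returns the coarse mesh plus a 1-D `cluster` tensor
    mapping every input vertex to its coarse vertex."""
    keys = np.floor(vertices / voxel).astype(np.int64)
    keys -= keys.min(axis=0)                       # non-negative grid coords
    dims = keys.max(axis=0) + 1
    flat = (keys[:, 0] * dims[1] + keys[:, 1]) * dims[2] + keys[:, 2]
    _, cluster, counts = np.unique(flat, return_inverse=True, return_counts=True)
    coarse = np.zeros((counts.size, 3))            # coarse vertex = cluster mean
    np.add.at(coarse, cluster, vertices)
    coarse /= counts[:, None]
    f = cluster[faces]                             # remap facet corners
    keep = (f[:, 0] != f[:, 1]) & (f[:, 1] != f[:, 2]) & (f[:, 0] != f[:, 2])
    return coarse, f[keep], cluster

vertices = np.array([[0.0, 0, 0], [0.1, 0, 0], [1, 0, 0], [1, 1, 0]])
faces = np.array([[0, 1, 2], [1, 2, 3]])
coarse, new_faces, cluster = grid_cluster_decimate(vertices, faces, voxel=0.5)
# the first two vertices merge; one facet degenerates and is dropped
```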

7 Conclusion

We provide a fast CUDA-accelerated mesh decimation technique to help the 3D community explore hierarchical neural architectures on meshes. We propose three novel convolutions, namely vertex2facet, facet2facet, and facet2vertex, and provide CUDA implementations of their forward and backward passes. We also introduce the vertex2vertex convolution on top of the vertex2facet and facet2vertex convolutions. We use Gaussian mixtures and barycentric interpolation to incorporate fuzziness into the proposed convolutions. Our mesh simplification gathers vertex clustering information into a 1-D tensor, which is convenient for the CUDA-based pooling and unpooling operations. Most importantly, we introduce Picasso, a self-contained library for deep learning over 3D meshes and point clouds. We also present PicassoNet for semantic segmentation and test its performance on S3DIS, where it provides results on par with the state-of-the-art. The efficient version of PicassoNet can process over a million facets per second during inference while retaining competitive accuracy. We also observe that more mesh convolutions in the deeper version increase the receptive field and hence learn better features. In the future, we will maintain and upgrade the introduced library for efficiency and further functionalities.
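Once the vertex clustering is stored as a 1-D tensor, pooling and unpooling reduce to segment-wise reductions and gathers; the following NumPy stand-in (the library uses dedicated CUDA kernels instead) illustrates the idea:

```python
import numpy as np

def max_pool(features, cluster, num_clusters):
    """Max-pool per-vertex features onto the decimated mesh using the 1-D
    cluster tensor recorded during simplification (a NumPy stand-in for
    the library's CUDA pooling kernel)."""
    out = np.full((num_clusters, features.shape[1]), -np.inf)
    np.maximum.at(out, cluster, features)
    return out

def unpool(coarse_features, cluster):
    """Unpooling is a gather: each fine vertex copies the feature of the
    coarse vertex its cluster was contracted to."""
    return coarse_features[cluster]

feat = np.array([[1.0], [3.0], [2.0], [5.0]])
cluster = np.array([0, 0, 1, 1])   # vertices 0,1 -> coarse 0; 2,3 -> coarse 1
pooled = max_pool(feat, cluster, num_clusters=2)
restored = unpool(pooled, cluster)
```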

Figure 7: The left figure compares the runtime of our mesh decimation method with QEM. The data is plotted by decimating every sample in S3DIS to a mesh of 65536 vertices with both methods. We notice that QEM takes much more time than ours, usually in ‘seconds’, while our algorithm runs in ‘milliseconds’. The right figure shows the time taken by our simplification algorithm to decimate all mesh samples in S3DIS to different vertex sizes, including 65536, 32768, 16384, 8192, 4096, 2048, 1024, 512, and 256. The runtime of our algorithm remains essentially linear in the vertex size of the input mesh.

Acknowledgements: This work is supported by ARC Discovery Grant DP190102443.


  • [1] Surface Reconstruction from Scattered Points, MathWorks, 2020. Accessed 17-Nov-2020.
  • [2] Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3D semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1534–1543, 2016.
  • [3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • [4] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Piscataway, NJ, USA, June 2014. IEEE.
  • [5] Davide Boscaini, Jonathan Masci, Emanuele Rodolà, and Michael Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pages 3189–3197, 2016.
  • [6] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
  • [7] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations, 2014.
  • [8] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. International Conference on 3D Vision (3DV), 2017.
  • [9] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
  • [10] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258, 2017.
  • [11] Luca Cosmo, Emanuele Rodolà, Michael M Bronstein, Andrea Torsello, Daniel Cremers, and Y Sahillioglu. Shrec’16: Partial matching of deformable shapes. Proc. 3DOR, 2(9):12, 2016.
  • [12] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828–5839, 2017.
  • [13] Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems, 2016.
  • [14] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852, 2016.
  • [15] M. Engelcke, D. Rao, D. Zeng Wang, C. Hay Tong, and I. Posner. Vote3Deep: Fast object detection in 3D point clouds using efficient convolutional neural networks. In IEEE International Conference on Robotics and Automation, June 2017.
  • [16] Michael Garland. Quadric-based polygonal surface simplification [thesis]. Pittsburgh: Carnegie Mellon University, 1999.
  • [17] Michael Garland and Paul S Heckbert. Surface simplification using quadric error metrics. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 209–216, 1997.
  • [18] Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten. 3D semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9224–9232, 2018.
  • [19] Fabian Groh, Patrick Wieschollek, and Hendrik PA Lensch. Flex-convolution. In Asian Conference on Computer Vision, pages 105–122. Springer, 2018.
  • [20] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. Meshcnn: a network with an edge. ACM Transactions on Graphics (TOG), 38(4):1–12, 2019.
  • [21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [22] Binh-Son Hua, Quang-Hieu Pham, Duc Thanh Nguyen, Minh-Khoi Tran, Lap-Fai Yu, and Sai-Kit Yeung. Scenenn: A scene meshes dataset with annotations. In International Conference on 3D Vision (3DV), 2016.
  • [23] Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Nießner, and Leonidas J Guibas. Texturenet: Consistent local parametrizations for learning from high-resolution signals on meshes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4440–4449, 2019.
  • [24] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
  • [25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2015.
  • [26] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
  • [27] Roman Klokov and Victor Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In Proceedings of the IEEE International Conference on Computer Vision, pages 863–872. IEEE, 2017.
  • [28] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
  • [29] Loic Landrieu and Mohamed Boussaha. Point cloud oversegmentation with graph-structured deep metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
  • [30] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [31] Huan Lei, Naveed Akhtar, and Ajmal Mian. Octree guided cnn with spherical kernels for 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9631–9640, 2019.
  • [32] Huan Lei, Naveed Akhtar, and Ajmal Mian. Seggcn: Efficient 3d point cloud segmentation with fuzzy spherical kernel. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11611–11620, 2020.
  • [33] Huan Lei, Naveed Akhtar, and Ajmal Mian. Spherical kernel for efficient graph convolution on 3d point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
  • [34] Jiaxin Li, Ben M Chen, and Gim Hee Lee. So-net: Self-organizing network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9397–9406, 2018.
  • [35] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. In Advances in Neural Information Processing Systems, pages 820–830, 2018.
  • [36] Z Lian, A Godil, B Bustos, M Daoudi, J Hermans, S Kawamura, Y Kurita, G Lavoué, and P Suetens. Shape retrieval on non-rigid 3d watertight meshes. In Eurographics Workshop on 3D Object Retrieval (3DOR). Citeseer, 2011.
  • [37] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37, 2016.
  • [38] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  • [39] Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE international conference on computer vision workshops, pages 37–45, 2015.
  • [40] Daniel Maturana and Sebastian Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 922–928. IEEE, 2015.
  • [41] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115–5124, 2017.
  • [42] Franco P Preparata and Michael I Shamos. Computational geometry: an introduction. Springer Science & Business Media, 2012.
  • [43] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
  • [44] Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 2017.
  • [45] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J Black. Generating 3d faces using convolutional mesh autoencoders. In Proceedings of the European Conference on Computer Vision (ECCV), pages 704–720, 2018.
  • [46] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 779–788, 2016.
  • [47] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
  • [48] Dario Rethage, Johanna Wald, Jurgen Sturm, Nassir Navab, and Federico Tombari. Fully-convolutional point networks for large-scale point clouds. In Proceedings of the European Conference on Computer Vision (ECCV), pages 596–611, 2018.
  • [49] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. OctNet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3577–3586, 2017.
  • [50] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241, 2015.
  • [51] Jarek Rossignac and Paul Borrel. Multi-resolution 3d approximations for rendering complex scenes. In Modeling in computer graphics, pages 455–465. Springer, 1993.
  • [52] Jonas Schult, Francis Engelmann, Theodora Kontogianni, and Bastian Leibe. Dualconvmesh-net: Joint geodesic and euclidean convolutions on 3d meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8612–8622, 2020.
  • [53] Yiru Shen, Chen Feng, Yaoqing Yang, and Dong Tian. Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 4, 2018.
  • [54] Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [55] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, 2015.
  • [56] Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. SPLATNet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2530–2539, 2018.
  • [57] Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, and Qian-Yi Zhou. Tangent convolutions for dense prediction in 3d. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3887–3896, 2018.
  • [58] Lyne Tchapmi, Christopher Choy, Iro Armeni, JunYoung Gwak, and Silvio Savarese. Segcloud: Semantic segmentation of 3d point clouds. In International Conference on 3D Vision, pages 537–547. IEEE, 2017.
  • [59] Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. Proceedings of the IEEE International Conference on Computer Vision, 2019.
  • [60] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
  • [61] Nitika Verma, Edmond Boyer, and Jakob Verbeek. Feastnet: Feature-steered graph convolutions for 3d shape analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2598–2606, 2018.
  • [62] Lei Wang, Yuchun Huang, Yaolin Hou, Shenman Zhang, and Jie Shan. Graph attention convolution for point cloud semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition, June 2019.
  • [63] Lei Wang, Yuchun Huang, Yaolin Hou, Shenman Zhang, and Jie Shan. Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10296–10305, 2019.
  • [64] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.
  • [65] Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9621–9630, 2019.
  • [66] Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
  • [67] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
  • [68] Jin Xie, Yi Fang, Fan Zhu, and Edward Wong. Deepshape: Deep learned shape descriptor for 3d shape matching and retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1275–1283, 2015.
  • [69] Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao. Spidercnn: Deep learning on point sets with parameterized convolutional filters. In Proceedings of the European Conference on Computer Vision, pages 87–102, 2018.
  • [70] Li Yi, Hao Su, Xingwen Guo, and Leonidas J Guibas. Syncspeccnn: Synchronized spectral cnn for 3d shape segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2282–2290, 2017.
  • [71] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3d: A modern library for 3d data processing. arXiv preprint arXiv:1801.09847, 2018.

8 Picasso Library

We summarize the major modules included in the Picasso library in Fig. 8. All the novel operations introduced in this paper are colorized, while the previous point cloud based operations [32, 33] are left blank. We also compare the performance of the included convolutions for a very large input size in Table 3. For mesh-based convolutions, we use a batch of 16 meshes as input, each with 65536 vertices. For point-based convolutions, we use only the vertices of those meshes as input. Currently, 3D deep networks widely take data samples of 10000 points/vertices as input. For spatial graph convolutions, the graph construction based on neighborhood search takes a significant amount of time, especially when the input size is large; for instance, see Table 10 of SPH3D-GCN [33]. Our mesh-based modules take advantage of the readily available geodesic connections of a mesh and thereby save the graph construction time. It is worth noting that we estimate the runtime under Tensorflow 2.
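The saving comes from the fact that a mesh's face table already encodes vertex neighborhoods, so no spatial range search is required; a hypothetical NumPy sketch of deriving per-vertex incident-facet lists (in a CSR-like flat layout) directly from the faces:

```python
import numpy as np

def vertex_facet_lists(faces, num_vertices):
    """Build, for each vertex, the list of facets containing it, read
    directly from the face table (geodesic connectivity) -- no range
    search over 3D positions is involved.  The facets of vertex i are
    flat[offsets[i]:offsets[i + 1]]."""
    v = faces.reshape(-1)                       # vertex index of each corner
    f = np.repeat(np.arange(len(faces)), 3)     # facet index of each corner
    order = np.argsort(v, kind="stable")
    counts = np.bincount(v, minlength=num_vertices)
    offsets = np.concatenate([[0], np.cumsum(counts)])
    return f[order], offsets

faces = np.array([[0, 1, 2], [1, 2, 3]])
flat, offsets = vertex_facet_lists(faces, num_vertices=4)
# vertex 1 belongs to facets 0 and 1: flat[offsets[1]:offsets[2]] == [0, 1]
```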

Figure 8: Deep learning modules included in the Picasso Library.
Convolution Type | batch size | input vertex/point size | kernel configuration | runtime (ms)
facet2facet | 16 | | 3 6 8 | 125
vertex2facet | 16 | | 3 6 8 | 89
facet2vertex | 16 | | 12 48 2 | 195
hard SPH3D [33] | 16 | | 48 2 | 118 + graph construction
fuzzy SPH3D [32] | 16 | | 48 2 | 188 + graph construction
Table 3: Runtime comparison of different mesh-based and point-based convolutions. The kernel configuration lists the total number of filters in the kernel, the number of input feature channels, and the multiplier of the separable convolutions. Ignoring graph construction, the speed of mesh-based convolutions is similar to that of point-based convolutions. However, point-based convolutions require the graph to be pre-constructed, which takes a significant amount of time when the input size is large. Mesh-based convolutions are therefore preferable as the input size grows.

9 Quadric error computation

The supporting plane of an arbitrary facet f can be denoted as n^T x + d = 0, where x is a point in 3D space, n is the unit facet normal, and d is the intercept. Following the standard quadric error metric [17], the quadric of each facet is defined as Q_f = (A_f, b_f, c_f) = (n n^T, d n, d^2). It associates a value, named the quadric error, to an arbitrary point x in space, computed as

Q_f(x) = x^T A_f x + 2 b_f^T x + c_f = (n^T x + d)^2.

The quadric of each vertex v is an accumulation of the quadrics of its adjacent facets F(v), represented as

Q_v = \sum_{f \in F(v)} Q_f.

The quadric of each vertex cluster C to be contracted is an accumulation of the quadrics of all the vertices in it, that is

Q_C = \sum_{v \in C} Q_v = (A, b, c).

The optimal vertex placement v* of each cluster after contraction is ideally computed as the minimizer of Q_C, i.e. the solution of

A v* = -b, (11)

for which we determine whether A is a full-rank matrix by checking the reciprocal of its condition number. However, we still observe numerical instability in the decimated mesh. We therefore replace the computation of v* by that of Eq. (12). Open3D [71] computes v* in the same way. This improves the final results noticeably. See Table 4 for the performance comparison of PicassoNet () on S3DIS Area 5 using Eq. (11) and Eq. (12).
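Under the standard QEM formulation of [17], the facet quadrics and the conditioned optimal placement can be written in a few lines of NumPy; this is an illustrative re-derivation, not the library's CUDA implementation:

```python
import numpy as np

def facet_quadric(v0, v1, v2):
    """Quadric (A, b, c) of a triangle's supporting plane n^T x + d = 0,
    so that the error at a point x is x^T A x + 2 b^T x + c = (n^T x + d)^2."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)                 # unit facet normal
    d = -n @ v0                               # plane intercept
    return np.outer(n, n), d * n, d * d

def optimal_placement(quadrics, fallback):
    """Minimize the accumulated cluster quadric: solve A v = -b when A is
    well conditioned (reciprocal condition number check), otherwise fall
    back to a provided point, mirroring the stability check above."""
    A = sum(q[0] for q in quadrics)
    b = sum(q[1] for q in quadrics)
    if 1.0 / np.linalg.cond(A) > 1e-10:
        return np.linalg.solve(A, -b)
    return fallback

# three mutually orthogonal planes x=1, y=1, z=1 intersect at (1, 1, 1)
qx = facet_quadric(np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([1.0, 0, 1]))
qy = facet_quadric(np.array([0.0, 1, 0]), np.array([0.0, 1, 1]), np.array([1.0, 1, 0]))
qz = facet_quadric(np.array([0.0, 0, 1]), np.array([1.0, 0, 1]), np.array([0.0, 1, 1]))
v_opt = optimal_placement([qx, qy, qz], fallback=np.zeros(3))
```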

Methods | OA | mAcc | mIoU | ceiling floor wall beam column window door table chair sofa bookcase board clutter
PicassoNet (Eq. 11) | 88.2 | 69.4 | 62.5 | 93.2 98.4 81.1 0.0 32.1 45.9 75.6 78.5 85.9 51.5 69.0 45.2 55.3
PicassoNet (Eq. 12) | 88.7 | 69.5 | 63.1 | 94.8 98.4 81.4 0.0 30.4 53.7 71.2 76.8 87.5 48.1 69.9 51.2 57.2

Table 4: Performance of PicassoNet () on the fifth fold (Area 5) of the S3DIS dataset under different computations of the optimal vertex placement, i.e. Eq. (11) versus Eq. (12).

10 Backward propagations of different convolutions

In the main paper, we present the forward computations of the vertex2facet and facet2vertex convolutions in Eqs. (1)–(3) and Eqs. (4)–(6), respectively. In this section, we analyze their backward propagations in Eqs. (13) and (14), correspondingly. The computations of the facet2facet convolution are similar to those of the vertex2facet convolution; we hence omit them to avoid redundancy. We also provide the related forward computations in Eqs. (13) and (14) as references. Please refer to the main paper for the notations. Finally, as the GMM-based fuzzy coefficients are computed with built-in Tensorflow functions, there is no need to derive their backward propagations manually: Tensorflow differentiates them automatically.

vertex2facet  kernel: (13)
facet2vertex kernel: (14)
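Since the fuzzy coefficients come from a Gaussian mixture over facet normals, their forward computation is short; the hedged NumPy sketch below illustrates the idea, where the component means `mu` and the shared `sigma` are illustrative placeholders rather than the library's learned parameters, and in the actual library TensorFlow differentiates this computation automatically:

```python
import numpy as np

def fuzzy_coefficients(normals, mu, sigma):
    """Soft GMM membership of each facet normal over T mixture components,
    normalized so each facet distributes unit weight across the filters.
    `mu` and `sigma` are illustrative placeholders, not learned values."""
    d2 = ((normals[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (F, T)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
normals = rng.standard_normal((4, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
mu = rng.standard_normal((8, 3))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)
coeff = fuzzy_coefficients(normals, mu, sigma=0.5)  # rows sum to 1
```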

11 6-folds results on S3DIS

We report the 6-fold results of PicassoNet () on the S3DIS dataset in Table 5. PicassoNet, using only a small number of mesh convolution layers in each block, is already competitive with KPConv [59] and DCM-Net [52].

Methods | OA | mAcc | mIoU | ceiling floor wall beam column window door table chair sofa bookcase board clutter
(All 6 Folds)
PointNet [43] | 78.5 | 66.2 | 47.6 | 88.0 88.7 69.3 42.4 23.1 47.5 51.6 42.0 54.1 38.2 9.6 29.4 35.2
Engelmann et al. [1a] | 81.1 | 66.4 | 49.7 | 90.3 92.1 67.9 44.7 24.2 52.3 51.2 47.4 58.1 39.0 6.9 30.0 41.9
SPG [30] | 85.5 | 73.0 | 62.1 | 89.9 95.1 76.4 62.8 47.1 55.3 68.4 73.5 69.2 63.2 45.9 8.7 52.9
PointCNN [35] | 88.1 | 75.6 | 65.4 | 94.8 97.3 75.8 63.3 51.7 58.4 57.2 71.6 69.1 39.1 61.2 52.2 58.6
SSP+SPG [29] | 87.9 | 78.3 | 68.4 | -
DeepGCN [2a] | 85.9 | - | 60.0 | 93.1 95.3 78.2 33.9 37.4 56.1 68.2 64.9 61.0 34.6 51.5 51.1 54.4
KPConv [59] | - | 79.1 | 70.6 | 93.6 92.4 83.1 63.9 54.3 66.1 76.6 57.8 64.0 69.3 74.9 61.3 60.3
SPH3D-GCN [33] | 88.6 | 77.9 | 68.9 | 93.3 96.2 81.9 58.6 55.9 55.9 71.7 72.1 82.4 48.5 64.5 54.8 60.4
SegGCN [32] | 87.8 | 77.1 | 68.5 | 92.5 97.6 78.9 44.6 58.2 53.7 67.3 74.6 83.9 68.0 65.7 46.8 58.5
DCM-Net [52] | - | 80.7 | 69.7 | 93.7 96.6 81.2 44.6 44.9 73.0 73.8 71.4 74.3 63.3 63.9 63.0 61.9
PicassoNet (Prop.) | 89.0 | 78.8 | 69.8 | 93.7 96.6 82.2 57.8 54.9 59.9 77.1 71.5 82.5 51.8 63.8 53.9 61.7

Table 5: Performance of PicassoNet () on the entire 6 folds of S3DIS. PicassoNet, using only a small number of mesh convolution layers in each convolution block, is competitive with KPConv and DCM-Net.

12 Results on ScanNet

Originally, we trained PicassoNet using cropped meshes generated from the full-resolution meshes provided in the ScanNet dataset [12], which produced mediocre results. We found that, as also observed in [52], this is caused by high-frequency signals in noisy areas. We hence voxelize the full-resolution meshes in ScanNet [12] with a fixed grid size and repeat the experiment. The performance of PicassoNet () on the full and voxelized validation sets is reported in Table 6. Similar results are expected on the test set of ScanNet.

Method | mIoU | floor wall chair sofa table door cab bed desk toil sink wind pic bkshf curt show cntr fridg bath other
(Validation set)
DCM-Net (VC, rad/geo) | 62.8 | -
SPH3D-GCN (full) | 60.0 | 93.9 76.6 86.4 76.7 69.0 40.1 55.6 68.4 55.3 85.3 58.8 51.0 6.8 42.2 57.2 64.8 56.2 38.9 79.5 36.1
SPH3D-GCN (voxelized) | 61.2 | 95.5 78.3 86.9 78.9 71.4 41.6 57.1 70.6 57.4 87.0 58.9 49.2 7.7 42.2 57.8 63.8 58.7 42.2 82.1 37.2
PicassoNet (full) | 62.5 | 94.4 78.0 86.5 75.3 69.4 47.4 54.1 68.8 56.6 88.4 62.6 50.6 18.2 54.4 63.4 64.8 54.1 41.6 80.4 40.3
PicassoNet (voxelized) | 64.3 | 96.1 80.4 87.0 78.2 73.2 50.3 56.6 72.0 59.1 90.0 64.1 49.9 19.2 53.9 63.9 63.7 57.1 44.9 83.5 42.0
(Test set)
ScanNet [12] | 30.6 | 78.6 43.7 52.4 34.8 30.0 18.9 31.1 36.6 34.2 46.0 31.8 18.2 10.2 50.1 0.2 15.2 21.1 24.5 20.3 14.5
PointNet++ [44] | 33.9 | 67.7 52.3 36.0 34.6 23.2 26.1 25.6 47.8 27.8 54.8 36.4 25.2 11.7 45.8 24.7 14.5 25.0 21.2 58.4 18.3
SPLATNET [56] | 39.3 | 92.7 69.9 65.6 51.0 38.3 19.7 31.1 51.1 32.8 59.3 27.1 26.7 0.0 60.6 40.5 24.9 24.5 0.1 47.2 22.7
Tangent-Conv [57] | 43.8 | 91.8 63.3 64.5 56.2 42.7 27.9 36.9 64.6 28.2 61.9 48.7 35.2 14.7 47.4 25.8 29.4 35.3 28.3 43.7 29.8
PointCNN [35] | 45.8 | 94.4 70.9 71.5 54.5 45.6 31.9 32.1 61.1 32.8 75.5 48.4 47.5 16.4 35.6 37.6 22.9 29.9 21.6 57.7 28.5
PointConv [65] | 55.6 | 94.4 76.2 73.9 63.9 50.5 44.5 47.2 64.0 41.8 82.7 54.0 51.5 18.5 57.4 43.3 57.5 43.0 46.4 63.6 37.2
SPH3D-GCN [33] | 61.0 | 93.5 77.3 79.2 70.5 54.9 50.7 53.2 77.2 57.0 85.9 60.2 53.4 4.6 48.9 64.3 70.2 40.4 51.0 85.8 41.4
KPConv [59] | 68.4 | 93.5 81.9 81.4 78.5 61.4 59.4 64.7 75.8 60.5 88.2 69.0 63.2 18.1 78.4 77.2 80.5 47.3 58.7 84.7 45.0
SegGCN [32] | 58.9 | 93.6 77.1 78.9 70.0 56.3 48.4 51.4 73.1 57.3 87.4 59.4 49.3 6.1 53.9 46.7 50.7 44.8 50.1 83.3 39.6
DCM-Net [52] | 65.8 | 94.1 80.3 81.3 72.7 56.8 52.4 61.9 70.2 49.4 82.6 67.5 63.7 29.8 80.6 69.3 82.1 46.8 51.0 77.8 44.9

Table 6: 3D semantic segmentation performance of PicassoNet () on the ScanNet validation set. ‘full’ refers to results on the full-resolution meshes, while ‘voxelized’ refers to results on the meshes voxelized at a fixed grid size. Similar results are expected on the test set of ScanNet. For reference, we provide the results of DCM-Net and SPH3D-GCN on the validation set, as well as the results of popular approaches on the test set.

13 Mesh Simplification Efficiency

We summarize the statistics of the vertex and facet sizes of the full-resolution meshes in ScanNet [12], including the minimum, maximum, and average (vertex, facet) numbers and their standard deviations. Similarly, we apply the proposed simplification algorithm to decimate all room samples in ScanNet into meshes of 65536 vertices. The runtime comparison between our algorithm and QEM is shown in the left plot of Fig. 9. We also test the runtime of our algorithm while decimating the room meshes into different resolutions, using configurations identical to those for S3DIS. The results are shown in the right plot of Fig. 9. The figure shows that our conclusions from the S3DIS dataset hold consistently for ScanNet.

Figure 9: The left figure compares the runtime of our mesh decimation method with QEM. The data is plotted by decimating every sample in ScanNet to a mesh of 65536 vertices by both methods. The right figure shows the time taken by our simplification algorithm to decimate all mesh samples in ScanNet to different vertex sizes, including 65536, 32768, 16384, 8192, 4096, 2048, 1024, 512, and 256.
Methods | Neural Element (Voxel / Point / Super-Point) | Input Features (Geometric / Texture)
SPG [30] | | [position, observation, geometrics]
SSP+SPG [29] | | [position, radiometry]
SEGCloud [58] | | occupancy
PointNet [43] | |
Tangent-Conv [57] | | [distance to tangent plane, height, normal]
PointCNN [35] | |
GACNet [62] | | [height, eigenvalues]
SPH3D-GCN [33] | |
KPConv [59] | |
SegGCN [32] | |
DCM-Net [52] | |
PicassoNet (Proposed) | |
Table 7: Input representations of different methods on the S3DIS dataset.

[1a] Francis Engelmann, Theodora Kontogianni, Alexander Hermans, and Bastian Leibe. Exploring spatial context for 3D semantic segmentation of point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 716–724, 2017.
[2a] Guohao Li, Matthias Müller, Ali Thabet, and Bernard Ghanem. DeepGCNs: Can GCNs go as deep as CNNs? In Proceedings of the IEEE International Conference on Computer Vision, 2019.