Exploiting Local Geometry for Feature and Graph Construction for Better 3D Point Cloud Processing with Graph Neural Networks

03/28/2021, by Siddharth Srivastava, et al.

We propose simple yet effective improvements in point representations and local neighborhood graph construction within the general framework of graph neural networks (GNNs) for 3D point cloud processing. As a first contribution, we propose to augment the vertex representations with important local geometric information of the points, followed by nonlinear projection using an MLP. As a second contribution, we propose to improve the graph construction for GNNs for 3D point clouds. The existing methods work with a k-NN based approach for constructing the local neighborhood graph. We argue that it might lead to reduced coverage in the case of dense sampling by sensors in some regions of the scene. The proposed method aims to counter such problems and improve coverage in such cases. As the traditional GNNs were designed to work with general graphs, where vertices may have no geometric interpretation, we see both our proposals as augmenting the general graphs to incorporate the geometric nature of 3D point clouds. While being simple, we demonstrate on multiple challenging benchmarks, with relatively clean CAD models as well as with real world noisy scans, that the proposed method achieves state-of-the-art results for 3D classification (ModelNet40), part segmentation (ShapeNet) and semantic segmentation (Stanford 3D Indoor Scenes Dataset). We also show that the proposed network achieves faster training convergence, i.e. ∼40% fewer training epochs. Project page: https://siddharthsrivastava.github.io/publication/geomgcnn/


I. Introduction

Significant progress has been made recently towards designing neural network architectures for directly processing unstructured 3D point clouds [13]. Since 3D point clouds are obtained as the native output from commonly available scanning sensors, the ability to process them and apply machine learning methods to them directly is interesting, and allows for many useful applications [25]. Like many other input modalities, both handcrafted and learning based methods are available for extracting meaningful information from 3D point clouds. The current state-of-the-art techniques for extracting such information are mostly based on deep neural networks.

Fig. 1: We propose two components which work within the framework of graph neural networks for 3D point cloud processing. First, we propose to augment the point representations, not just with point features (e.g. coordinates) but also with local geometric information such as the in-plane and out-of-plane distances (shown on the left), w.r.t. the tangent plane of a point, to its nearby points. We further propose to nonlinearly project the resulting feature with a small neural network. Second, we propose to construct the local neighborhood graph in a geometrically aware way. While in the usual k-NN based construction, the local graph can get biased towards a certain local neighborhood in the case of selective dense sampling, the proposed method aims to avoid that (shown in the two right blocks) by selecting the neighbors in a geometrically aware fashion. Best viewed in color.

As the point clouds are unordered, it is difficult to use CNNs with them. The popular ways of overcoming the problem of unstructured point clouds have been to either (i) voxelize them, e.g. [29], or (ii) learn a permutation invariant mapping, e.g. [31]. Point clouds can also be seen as graphs with potential edges defined between neighboring points. Therefore, utilizing graph neural networks (GNNs) for processing point cloud data emerges as a natural choice. Most of the earlier GNN methods [41] considered the points in isolation, and did not explicitly utilize the information from local neighborhoods. While such local geometric information can be helpful for higher level tasks, these methods expected the network to learn the important intricacies directly from isolated points as inputs. However, the Dynamic EdgeConv network [39] showed that local geometric properties are important for learning feature representations from point clouds. They proposed to use the differences between the 3D points and their neighbors, which can be interpreted as 3D direction vectors, to encode the local structure. In addition, they converted the point cloud to a graph using a k-NN based construction, which is another way of encoding local geometric structure for use by the method. By doing this they were able to close the gap between GNN based methods and other state-of-the-art point cloud processing methods.

Along the lines of Dynamic EdgeConv, we believe that presenting more geometric information, encoded in a task relevant way, is important for the success of GNNs for point cloud processing. The generic graph networks [41] were designed to work with general graphs, e.g. citation networks, social network graphs, etc. In such graphs there is no notion of geometric relations between vertices. However, point cloud data is inherently geometric, and one could expect the performance to increase when the two natural properties of the point cloud data, i.e. its graph-like structure and its geometric information, are exploited together. We see the Dynamic EdgeConv work as a step in this direction, and propose to go further along it.

We propose two novel, simple but extremely effective, components within the framework of general GNNs, based on the intuitions above, as shown in Figure 1. First, we propose to encode detailed local geometric information about the points, inspired by traditional handcrafted 3D descriptors. Further, to convert the raw geometric information to a form which is easily exploited, we propose to use a multi-layer perceptron (MLP) to nonlinearly project the point features into another space. We then present the projected representations as inputs to the graph network. This goes beyond the usual recommendation in previous works, where the possibility of adding more information about the point, e.g. color or depth, in addition to its 3D coordinates, has often been mentioned. We encode not only more information about the point, but also information derived from the local neighborhood around the point, shown to be beneficial by previous works on handcrafted 3D descriptors. We also project such information nonlinearly to make it better exploitable by the GNN. While, in theory, such information can be extracted by the neural network from 3D point coordinate inputs, we believe that presenting this information upfront allows the network to start from a better point, and even regularizes the learning.

Second, we propose a geometrically informed graph construction which goes beyond the usual k-NN based construction. We constrain the sequential selection of points to be added to a local graph based on their angles and distances with respect to already selected points, while considering the point of interest as the center of reference. This increases the coverage of the local connectivity, and especially addresses cases where the points might be very densely sampled in one neighborhood, while relatively sparsely sampled in another.

We show empirically that both the contributions, while being relatively simple, are significant and help improve the performance of the baseline GNN. We report results on multiple publicly available challenging benchmarks of point clouds including both CAD models and real world 3D scans.

The results on the real world datasets help us make the point that the local geometry extraction methods work despite the noise in real world scans.

In summary, the contributions of this work are:

  • We propose to represent the vertices in a point cloud derived graph, corresponding to the 3D points, with geometric properties computed with respect to their local neighborhoods.

  • We propose a geometrically informed construction of the local neighborhood graph, instead of using a naïve k-NN based initialization.

  • We give extensive empirical results on five public benchmarks i.e. classification on ModelNet40 [40], part segmentation on ShapeNet [44] and semantic segmentation on ScanNet [8], S3DIS [1] and Paris-Lille-3D [33], reporting state-of-the-art results on three of them, while competing with both graph based as well as other neural networks for point cloud processing. We also give exhaustive ablation results and indicative qualitative results to fully validate the method.

II. Related Works

Many tasks in 3D, such as classification and segmentation, can be solved by exploiting the local geometric structure of point clouds. In earlier efforts, handcrafted descriptors did this by encoding various properties of local regions in point clouds. The handcrafted descriptors, in general, fall into two categories: the first is based on the spatial distribution of the points, while the second directly utilizes geometric properties of the points on the surface.

Most of the handcrafted descriptors are based on finding a local reference frame which provides robustness to various types of transformations of the point cloud. Spin Image [14] uses the in-plane and out-of-plane distances of neighborhood points within the support region of the keypoint to discretize the space and construct the descriptor. Local Surface Patches [5, 6] utilizes the shape index [17] and the angle between normals to accumulate points, forming the final descriptor.

3D Binary Signatures [35] uses binary comparisons of geometrical properties in a local neighborhood. We refer the reader to Guo et al. [11] for a comprehensive survey of handcrafted 3D local descriptors.

Further, learning based methods have also been applied to obtain both local and global descriptors in 3D. Compact Geometric Features (CGF) [15] learns an embedding to a low dimensional space where the similar points are closer. DLAN [10] performs aggregation of local 3D features using deep neural network for the task of retrieval. DeepPoint3D [36] learns discriminative local descriptors by using deep metric learning.

While 3D local descriptors are still preferred for tasks such as point cloud registration, 3D deep networks trained for generating a global descriptor have provided state-of-the-art results on various classification, segmentation and retrieval benchmarks. The deep networks applied to these tasks are primarily based on volumetric representations (voxels, meshes, point clouds) and/or multi-view renderings of 3D shapes. We focus on techniques which directly process raw 3D point clouds as we work with them as well.

PointNet [31] directly processes 3D point clouds by aggregating information using a symmetric function. PointNet++ [32] applies hierarchical learning to PointNet to obtain geometrically more robust features. Kd-Net [16] constructs a network of kd-trees for parameter learning and sharing. PCNN [2], RS-CNN [26], LP-3DCNN [18], MinkowskiNet [7], FKAConv [4] and PointASNL [43] propose modifications to convolution kernels or layers to produce features.

Recently, graph based neural networks have shown good performance on such unstructured data [22, 38, 24]. The Dynamic Graph CNN (DGCNN) [39] showed that by representing the local neighborhood of a point as a graph, it can achieve better performance than the previous techniques. Moreover, many previous works such as PointNet and PointNet++ become special cases of it with various edge convolution functions. LDGCNN [46] extends DGCNN by hierarchically linking features from the dynamic graphs in DGCNN. The authors in [21, 20] propose modifications to kernels to capture geometric relationships between points. Our proposals are mainly within the purview of GNNs; however, we extensively compare with most of the existing 3D point cloud based methods and show better results.

Fig. 2: Illustration of the proposed method. The input point cloud is processed point by point to generate an eventual graph, with vertices represented as real vectors and having directed edges denoting local neighborhoods. The representation includes local geometric properties of the points (which are relative to other points in their neighborhood), along with their coordinates. The graph construction is geometrically aware and counters oversampling in some regions of the space. The finally constructed graph can then be used with a graph neural network for supervised learning, e.g. for classification and part segmentation. Best viewed in color.

III. Approach

We are interested in processing a set $\mathcal{X} = \{x_1, \ldots, x_n\} \subset \mathbb{R}^3$, i.e. a point cloud, of 3D points. We follow the general graph neural network (GNN) based processing, where we work with a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with the points being the vertices and the edges being potential connections between them. The vertices in $\mathcal{V}$ are represented by a $d$-dimensional feature vector, which is often just the 3D coordinates of the points, and the set of edges $\mathcal{E}$ is usually constructed with a k-NN based graph construction algorithm. We now describe our two contributions pertaining to (i) the representation of the points using local geometric information and (ii) constructing the graph edges with a geometrically constrained algorithm. Figure 2 gives an overall block diagram of our full system.

III-A Local Geometric Representation of Points

The majority of the GNN based previous works use a basic representation of points, i.e. their 3D coordinates, and feed that to the GNN to extract more complex interactions between neighboring points. Since GNNs, e.g. [41], were designed for general graphs which may not have any geometric properties, e.g. social network graphs, they do not have any explicit mechanism to encode geometry. More recent works on point cloud processing have started exploiting the geometric nature of 3D data more explicitly. As a specific case, EdgeConv [39] uses an edge function of the form $e_{ij} = h_{\Theta}(x_i, x_j - x_i)$, where $x_i$ is the point under consideration and $x_j$ is one of its neighbors; $h_{\Theta}$ is a nonlinear function and $\Theta$ is a set of learnable parameters. This function thus specifically uses a global description of the point, i.e. the point coordinates themselves, and a local description, which is the direction vector from the point to a nearby point. Providing such explicit local geometric description as input to the network layers helps EdgeConv outperform the baseline GNNs.
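As a concrete illustration, here is a minimal PyTorch sketch of such an edge function; the single Linear+ReLU layer and its width are illustrative placeholders, not the configuration of EdgeConv or of our network.

```python
import torch
import torch.nn as nn

class EdgeFunction(nn.Module):
    """EdgeConv-style edge feature e_ij = h_Theta(x_i, x_j - x_i).

    A sketch: h_Theta is a single Linear+ReLU here; the width (64)
    is a placeholder, not the configuration used in the paper.
    """

    def __init__(self, in_dim=3, out_dim=64):
        super().__init__()
        # h_Theta acts on the concatenation [x_i, x_j - x_i]
        self.h = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x_i, x_j):
        # x_i: (E, in_dim) edge source points, x_j: (E, in_dim) their neighbors
        return self.h(torch.cat([x_i, x_j - x_i], dim=-1))
```

In EdgeConv, the per-edge features are then aggregated (e.g. by max) over each point's neighborhood to produce the new vertex feature.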

We go further in this direction and propose to use a detailed geometric description of the local neighborhood of each point. Significant ingenuity was spent on the design of handcrafted 3D features (see [11] for a comparison) before deep neural networks became competitive for 3D point cloud processing. We take inspiration from such works in deriving the geometric descriptions of the points.

We include the following components in the representation of each point. As used in previous works, the first part is the plain 3D coordinates of the point, followed by the normal vector at the point. Then we take inspiration from the Spin Image descriptor [14] and add the in-plane and out-of-plane distances w.r.t. the point. Finally, inspired by the Local Surface Patch descriptor [5, 6], we include the shape index as well.

Hence, compared to a simple 3D coordinate based description of the points, we represent them with a 9D descriptor containing more local contextual geometric information. We then pass this 9D descriptor through a small neural network to nonlinearly project it into another space which is better adapted to be used with the GNN. Finally, we pass the nonlinearly projected point representation to the graph neural network.
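To make the descriptor concrete, the following NumPy sketch assembles the 9D representation for a single point. It assumes the unit normal and the principal curvatures at the point have already been estimated (e.g. by local plane or quadric fitting), and it aggregates the per-neighbor in-plane and out-of-plane distances by averaging; the aggregation choice is an assumption, as it is not spelled out above.

```python
import numpy as np

def point_descriptor(p, n, neighbors, k1, k2):
    """9D point representation: [xyz, normal, mean in-plane distance,
    mean |out-of-plane| distance, shape index].

    A sketch under stated assumptions: n is a unit normal, (k1, k2)
    are principal curvatures with k1 >= k2, neighbors is a (k, 3)
    array of nearby points; averaging the per-neighbor distances is
    our assumed aggregation.
    """
    d = neighbors - p                                   # (k, 3) offsets to neighbors
    beta = d @ n                                        # signed out-of-plane distances
    alpha = np.sqrt(np.maximum((d * d).sum(-1) - beta ** 2, 0.0))  # in-plane distances
    # shape index of Koenderink and van Doorn [17], in the [0, 1] form of [5, 6]
    si = 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    return np.concatenate([p, n, [alpha.mean(), np.abs(beta).mean(), si]])
```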

While one can argue that such augmentation may be redundant, as the GNN should be able to extract it from the 3D coordinates alone, we postulate that providing these nonlinearly projected features, inspired by successful handcrafted descriptors, makes the job of the GNN easier. Another argument could be that these descriptors might be very task specific and hence might not be needed for a task of interest. In that case, we would argue that it should be possible for the GNN to essentially ignore them by learning zero weights for the edges corresponding to these input units in the neural network. While this is a seemingly simple augmentation, we show empirically in Section IV that adding such explicit geometric information does, in fact, help the tasks of classification, part segmentation and semantic segmentation on challenging benchmarks, based both on clean CAD models of objects as well as on real world noisy scans of indoor and outdoor spaces.

III-B Geometrically Constrained Neighborhood Graph Construction

Once the feature description is available, the next step is to define the directed edges between the points to fully construct the graph. We now detail our second contribution pertaining to the construction of the graph.

Previous methods construct the graphs using a k-NN scheme. Each point is considered, and directed edges are added from the point to its k nearest neighbors. The graph thus constructed is given as input to the GNN which performs the final task.

A k-NN based graph construction scheme, while being intuitive and easy to implement, has an inherent problem: it is not robust to differences in sampling frequency, which happen often in practice. An object may be densely sampled at one location in the scene, but the same object placed at another location might only be sparsely sampled. Such scenarios occur, for instance, in LiDAR based scanning, where the spatial sampling frequency is higher when the object is nearer to the sensor and lower when the object is farther. In such cases, k-NN may not cover the object sufficiently when the sampling density is high, as all the nearest neighbors might be very close together, as shown in Figure 1 (center), and fall on the same subregion of the object. Hence, we propose a locality adaptive graph construction algorithm which promotes coverage in the local graph.

Fig. 3: The local neighborhood graph construction constraints in 2D: once a point $x_j$ is selected for a point $x_i$, points from the yellow region, i.e. within an angle $\theta$ of the ray from $x_i$ through $x_j$ and within a distance $d$, are discarded. This encourages coverage in the local neighborhood graph and tackles densely sampled parts. Best viewed in color.

The design of the graph construction is inspired by log-polar histograms, which have been popular for many engineering applications. The main element is illustrated in Figure 3 for the case of 2D points. The construction iterates over the points one by one, and considers the sorted nearest neighbors of each point. However, instead of selecting the k-NN points, it greedily selects the neighbors as follows. At point $x_i$, as soon as it selects a point $x_j$, it removes all points in the region within an angle $\theta$ from the line through $x_i$ and $x_j$, with origin at $x_i$, and with distance less than $d$, with both $\theta$ and $d$ being tunable hyperparameters; this is the region marked in yellow in Figure 3. The simple intuition is that once a point from a local neighborhood has been sampled, we look for points which are sufficiently far away from that local neighborhood. This ensures coverage in the local graph construction and improves robustness to sensor sampling frequency artefacts.

To illustrate the construction with a small toy example, we refer to Figure 2 (step 2). In the usual k-NN based graph construction (with k = 6), the selected neighbors would all concentrate in one region of the space around the point of interest. In contrast, with the proposed method, with appropriate $\theta$ and $d$, the selection of redundant nearby points is avoided once one of them has been picked, and the final six selected points are well distributed in the space around the point. Hence the problem of oversampling in some regions by the sensor can be avoided.
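The sketch below spells out this greedy selection in NumPy. It follows our reading of Figure 3 (the discarded wedge is measured from the point of interest), and it simply returns fewer than k neighbors when no candidates survive, since the fallback behavior is not specified above.

```python
import numpy as np

def constrained_neighbors(points, i, k, theta, d):
    """Greedy geometry-aware neighbor selection for point i (Sec. III-B).

    Candidates are visited nearest-first; after selecting a neighbor j,
    remaining candidates inside the wedge within angle `theta` of the
    ray i -> j and closer to i than `d` are discarded (our reading of
    Fig. 3; `theta` and `d` are the tunable hyperparameters).
    """
    p = points[i]
    offsets = points - p
    dist = np.linalg.norm(offsets, axis=1)
    order = [j for j in np.argsort(dist) if j != i]   # nearest first
    selected, blocked = [], set()
    for j in order:
        if j in blocked:
            continue
        selected.append(j)
        if len(selected) == k:
            break
        u = offsets[j] / (dist[j] + 1e-12)            # ray direction i -> j
        for q in order:
            if q != j and q not in blocked and dist[q] < d \
                    and offsets[q] @ u > np.cos(theta) * dist[q]:
                blocked.add(q)                        # inside the discarded wedge
    return selected                                   # edge targets for vertex i
```

With theta = 0 or d = 0 no candidate is ever discarded, and the construction reduces to the usual k-NN selection.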

III-C Use with GNN

Given the above point descriptions, as well as the graph edge construction, we have a fully specified graph derived from the point cloud data. This graph is general enough to be used with any GNN algorithm, while being specific to 3D point clouds, as the vertex descriptions contain local geometric information and the edges have been constructed in a geometrically constrained way. We stress here that while we give results with one specific choice of GNN in this paper, our graph can be used with any available GNN working on the input graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$.

IV. Experimental Results

We now give the results of the proposed methods on challenging benchmarks.

We denote the proposed methods as (i) ours (representation), when we incorporate the proposed local geometric representation, and (ii) ours (representation + graph) when we incorporate the proposed geometrically constrained graph construction as well.

Iv-a Datasets and Experimental Setup

We evaluate the proposed method on the following datasets: ModelNet40 [40], ShapeNet [44], ScanNet [8], Stanford Large-Scale 3D Indoor Spaces (S3DIS) scans [1], and Paris-Lille-3D (PL3D) [33]. We follow the standard settings for training and evaluation on the respective datasets. We use the Dynamic Graph CNN (DGCNN) [39] as the baseline network. Further, the input to the projection MLP is the 9D vector; the MLP contains two fully connected layers.
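A sketch of this projection MLP in PyTorch is given below; the 64-unit widths are placeholders, not the paper's exact unit counts.

```python
import torch.nn as nn

# Projection MLP for the 9D geometric point descriptor (Sec. III-A):
# two fully connected layers, as described above. The widths (64) are
# placeholders, not the authors' values.
projection_mlp = nn.Sequential(
    nn.Linear(9, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
# The projected per-point features then serve as the vertex
# representations fed to the baseline DGCNN.
```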

IV-B Parametric Study

Table I gives the performance of the methods with different numbers of nearest neighbors. We see that for the baseline method, DGCNN with k-NN based graph construction, the performance initially increases with an increase in k but then drops a little, e.g. mean class accuracy goes from 88.0% to 90.2% as k goes from 5 to 20, but then drops to 89.4% for k = 40. The authors of DGCNN [39] attributed this to the fact that for large k the Euclidean distance is not able to approximate the geodesic distance sufficiently well. However, for the proposed method the performance improves by a modest amount and does not drop, i.e. 89.8%, 91.6%, 93.1% and 93.8% for k = 5, 10, 20 and 40 respectively. This indicates that with explicit geometric information provided as input, the geometry is relatively better maintained cf. DGCNN. As the performance increase from k = 20 to k = 40 is modest for our method, we use k = 20 as a good tradeoff between performance and time.

The values of $\theta$ and $d$ were set based on validation experiments on a subset of the training and validation data, for all experiments reported.

# NN | DGCNN (k-NN) | ours (repr.) + k-NN | ours (graph) | ours (repr. + graph)
     | m-acc(%)  ov-acc(%) | m-acc(%)  ov-acc(%) | m-acc(%)  ov-acc(%) | m-acc(%)  ov-acc(%)
5    | 88.0  90.5 | 88.9  91.1 | 88.3  90.8 | 89.8  91.9
10   | 88.9  91.4 | 90.1  92.4 | 89.2  92.6 | 91.6  93.9
20   | 90.2  92.9 | 92.0  95.1 | 90.8  94.4 | 93.1  95.9
40   | 89.4  92.4 | 91.4  94.6 | 91.2  95.1 | 93.8  96.6
TABLE I: Results on ModelNet40 classification with varying number of nearest neighbors (# NN) for graph construction.
Method | ModelNet40 m-acc(%) | ModelNet40 ov-acc(%) | ShapeNet mIoU(%)
PointNet [31] | 86.0 | 89.2 | 83.7
PointNet++ [32] | - | 90.7 | 85.1
Kd-Net [16] | - | 90.6 | 82.3
DensePoint [47] | - | 93.2 | 86.4
LP-3DCNN [18] | - | 92.1 | -
DGCNN [39] | 90.2 | 92.9 | 85.2
DGCNN (2048 pts) [39] | 90.7 | 93.5 | -
LDGCNN [46] | 90.3 | 92.9 | 85.1
KPConv-rigid [37] | - | 92.9 | 86.2
KPConv-deform [37] | - | 92.7 | 86.4
GSN [42] | - | 92.9 | -
GSN (2048 pts) [42] | - | 93.3 | -
LS [28] | - | - | 88.8
PointASNL [43] | - | 93.2 | -
SPH3D-GCN (2048 pts) [21] | 88.5 | 91.4 | 86.8
FKAConv [4] | 89.9 | 92.5 | -
FKAConv (2048 pts) [4] | 89.7 | 92.5 | 85.7
ConvPoint [3] | 88.5 | 91.8 | 85.8
ConvPoint (2048 pts) [3] | 89.6 | 92.5 | -
Ours (repr.) | 92.0 | 95.1 | 87.4
Ours (repr. + graph) | 93.1 | 95.9 | 89.1
Ours (repr. + graph, 2048 pts) | 94.1 | 96.9 | -
TABLE II: Comparison with state-of-the-art methods on ModelNet40 classification and ShapeNet part segmentation datasets.

IV-C Comparison with State of the Art


Classification on ModelNet40. The classification results on the ModelNet40 dataset are shown in Table II, with comparison against methods based on 3D input. In the last block of the table, the first two rows of our method use 1024 points while the third (last) uses 2048 points. Our full method achieves the best results, e.g. 95.9% overall accuracy, compared to many recent competitive methods such as PointASNL (93.2%), RS-CNN, and the baseline DGCNN (92.9%). We also see that our method improves more when the point sampling is increased from 1024 to 2048 points, with mean accuracy going from 93.1% to 94.1%, cf. DGCNN going from 90.2% to 90.7% for the same settings.


ShapeNet Part Segmentation. Table II (column 4) shows results on ShapeNet part segmentation. Similar to the classification case, we achieve better results than many recent methods. Our representation only method achieves 87.4% mIoU cf. 85.2% of DGCNN [39], while our full, representation and graph construction based method achieves 89.1%, which is higher than the recently proposed learning to segment (LS) method by 0.3%. However, LS projects the 3D points to 2D images and uses U-Net segmentation, while the proposed method directly computes the part labels on 3D points.


Semantic Segmentation. Table III shows results of semantic segmentation on the ScanNet, Stanford Indoor Spaces (S3DIS) and Paris-Lille-3D (PL3D) datasets. On the S3DIS dataset, we outperform all the compared methods w.r.t. mIoU and overall accuracy (ov-acc) on the Area-5 protocol, and w.r.t. overall accuracy on the k-fold evaluation protocol. Further, we also outperform the other methods w.r.t. overall accuracy on the PL3D dataset. We outperform the next best methods on PL3D (KPConv-deform, ConvPoint) by 2.6% on mIoU while lagging behind FKAConv by 4.2%; however, we outperform FKAConv on S3DIS (k-fold) by 1.7% w.r.t. mIoU. On ScanNet we achieve an mIoU of 72.4%, lagging behind MinkowskiNet (73.6%) and Virtual MVFusion (74.6%); however, we outperform these methods on S3DIS (Area 5) by 4.0% and 4.1% mIoU respectively.

Overall, we observe that our method consistently ranks amongst the top-3 methods on all the compared benchmarks, demonstrating the robustness of the proposed modifications both on datasets with significant noise, e.g. ScanNet, and on well defined CAD models, e.g. ModelNet40.

Method | ScanNet mIoU | S3DIS (k-fold) mIoU  ov-acc | S3DIS (Area 5) mIoU  ov-acc | PL3D mIoU  ov-acc
PointNet [31] | - | 47.6  78.6 | 41.1  - | 40.2  93.9
PointNet++ [32] | 33.9 | 54.5  81.0 | -  - | 36.1  88.7
PointCNN [23] | 45.8 | 65.4  88.1 | 57.3  85.9 | 65.4  -
KPConv-rigid [37] | 68.6 | 69.6  - | 65.4  - | 72.3  -
KPConv-deform [37] | 68.4 | 70.6  - | 67.1  - | 75.9  -
DGCNN [39] | - | 56.1  84.1 | -  - | 62.5  97.0
MinkowskiNet [7] | 73.6 | -  - | 65.4  - | -  -
HDGCNN [24] | - | 66.9  - | 59.3  - | 68.3  -
DPC [9] | 59.2 | -  - | 61.3  86.8 | -  -
Virtual MVFusion [19] | 74.6 | -  - | 65.3  - | -  -
JSENet [12] | 69.9 | -  - | 67.7  - | -  -
FusionNet [45] | 68.8 | -  - | 65.38  - | -  -
DCMNet [34] | 65.8 | 69.7  - | 64.0  - | -  -
PointASNL [43] | 63.0 | 68.7  - | -  - | -  -
SPH3D-GCN [21] | 61.0 | 68.9  88.6 | 59.5  87.7 | -  -
SegGCN [20] | 58.9 | 68.5  87.8 | 63.6  88.2 | -  -
ResGCN-28 [22] | - | 60.0  85.9 | -  - | -  -
FKAConv [4] | - | 68.4  - | -  - | 82.7  -
ConvPoint [3] | - | 69.2  88.8 | -  - | 75.9  -
Ours (repr. + graph) | 72.4 | 70.1  89.8 | 69.4  89.4 | 78.5  98.0
RANK | 3 | 2  1 | 1  1 | 2  1
TABLE III: Comparison with state-of-the-art methods for 3D scene segmentation on ScanNet v2, Stanford 3D Indoor Spaces (S3DIS) and Paris-Lille-3D (PL3D) datasets.

IV-D Qualitative Results

Figure 4 shows semantic segmentation results on S3DIS. In general, we observe that the proposed technique provides segmentation closer to the ground truth, especially at the boundaries. In the first row, we observe that the structures on the wall are better delineated by the proposed method as compared to DGCNN, especially around the edges. In the box at the center, we observe that the proposed method labels the points behind the table correctly, while DGCNN incorrectly labels them as wall. Similarly, in the second row, the labels in the box at the top are closer to the ground truth, with a clear separation from the wall. In the bottom box, we observe that the points on the table are correctly labelled by our method, while DGCNN labels many of them as wall. These results indicate that the proposed method is robust in narrow regions and on fine structures, due to the explicit introduction of geometric features to the GNN.

Fig. 4: Qualitative results on the S3DIS dataset; notice the fine structures captured by our method and missed by DGCNN in the annotated black boxes. Best viewed zoomed in, in color.
Configuration | ModelNet40 m-acc  ov-acc | ShapeNet mIoU | S3DIS mIoU  ov-acc
baseline (DGCNN) | 90.2  92.9 | 85.2 | 56.1  84.1
+ geometric feature set I | 90.8  93.4 | 86.1 | 57.8  84.7
+ geometric feature set II | 90.7  93.8 | 85.9 | 57.8  85.1
+ graph construction only | 90.8  94.4 | 86.2 | 59.4  86.6
+ all geometric features (repr.) | 92.1  95.1 | 87.1 | 61.2  87.4
+ feature set I + graph | 91.8  94.9 | 86.8 | 62.1  87.4
+ feature set II + graph | 92.0  94.0 | 86.6 | 65.5  88.1
full (repr. + graph) | 93.1  95.9 | 89.1 | 69.4  89.4
TABLE IV: Ablation studies on the impact of the various components of the proposed method. The two geometric feature sets are the groups of local features of Sec. III-A, ablated separately.

IV-E Ablation Experiments

We perform ablation experiments to give the complete results with and without the different components of the proposed method. For the ablation on the absence of the various local features, we set the corresponding elements of the input vector to the MLP to zero and train the projection MLP as usual.


Ablation on Classification. In Table IV (ModelNet40 columns), starting from a baseline of 92.9% overall accuracy, we see that the local geometric features help individually, but their combination is better, at 95.1%. Similarly, the graph construction method alone gives 94.4%, but improves with each of the geometric features added, and the performance is best when all the features as well as the graph construction are used together, at 95.9%.


Ablation on Part Segmentation. In Table IV (ShapeNet column), we see similar trends as in the classification case, where each of the geometric features contributes (85.9% and 86.1% vs. the baseline), the graph construction helps when used alone (86.2%), and the full combination achieves the best performance at 89.1% mIoU, cf. the baseline at 85.2% mIoU.


Ablation on Semantic Segmentation. In Table IV (S3DIS columns), we observe that with the graph construction only, the mIoU increases from 56.1% to 59.4%, while it increases to 61.2% with only the combination of local features. However, the full combination provides the best performance, at an mIoU of 69.4% and an overall accuracy of 89.4%.

Configuration | ModelNet40 tr. epochs (frac.) | ShapeNet tr. epochs (frac.) | S3DIS tr. epochs (frac.)
baseline (DGCNN) | 233 (1.00) | 190 (1.00) | 258 (1.00)
ours (repr.) | 205 (0.87) | 174 (0.91) | 243 (0.94)
ours (graph) | 164 (0.70) | 151 (0.79) | 215 (0.83)
ours (repr. + graph) | 139 (0.60) | 133 (0.70) | 191 (0.73)
TABLE V: Ablation study on the training convergence of the models.

Ablation on Model Convergence. The number of epochs required for the different models to converge is shown in Table V. We observe that using the proposed geometric representation results in a reduction of 28, 16 and 15 epochs for classification, part segmentation and semantic segmentation respectively. With the proposed geometrically aware graph construction, we observe a reduction of 30% for classification, 21% for part segmentation and 17% for semantic segmentation. Hence, providing geometrically refined local connectivity information to the graph results in a significant reduction in the number of epochs required by the network. Finally, the combination of the proposed geometric representation and the proposed graph construction results in a reduction of 40%, 30% and 27% of the epochs required for the respective tasks.

The actual training time required to train our full method on ModelNet40 is nearly 40% lower than that required by DGCNN, in line with the epoch reduction of 40%, i.e. fewer epochs translate directly into a reduction in training time.


Ablation on MLP. With only the 3D coordinates as input to the MLP, the m-acc on ModelNet40 is close to that of DGCNN (90.2%). With the 9D vector given directly to DGCNN without the MLP, the m-acc is lower than the 92.0% obtained with the MLP (Table II, Ours (repr.)). This shows that the MLP is able to extract a robust geometric representation in the feature space.

V. Conclusion

We proposed two simple but important contributions, to be used within the framework of graph neural networks (GNNs) for point cloud processing, on (i) the representation of points using local geometric properties, and (ii) the construction of the neighborhood graph using geometric constraints to improve coverage. We also reported highly competitive results, better than many recent state-of-the-art methods, on both synthetic and real-world datasets, showing the robustness of the proposed approach to noisy data. Our work highlights the fact that the current generation of GNN based methods for 3D point cloud processing are not able to fully capture the local geometric information, and hence benefit from having it as explicit input. Further, Liu et al. [27] recently showed that the local aggregation operators used in various point cloud processing techniques, if carefully tuned, provide similar performances. Parida et al. [30] showed that using region/point based properties from echoes or the type of material can help in learning more robust representations. Therefore, the proposed modifications, with an explicit prior on geometry, can be extended to other point cloud based deep networks and potentially motivate future work on simpler and more efficient networks for processing point clouds.

References

  • [1] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis, M. Fischer, and S. Savarese (2016) 3D semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1534–1543.
  • [2] M. Atzmon, H. Maron, and Y. Lipman (2018) Point convolutional neural networks by extension operators. ACM Transactions on Graphics.
  • [3] A. Boulch (2020) ConvPoint: continuous convolutions for point cloud processing. Computers & Graphics.
  • [4] A. Boulch et al. (2020) FKAConv: feature-kernel alignment for point cloud convolution. arXiv preprint arXiv:2004.04462.
  • [5] H. Chen and B. Bhanu (2007) 3D free-form object recognition in range images using local surface patches. Pattern Recognition Letters 28 (10), pp. 1252–1262.
  • [6] H. Chen and B. Bhanu (2007) Human ear recognition in 3D. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (4), pp. 718–737.
  • [7] C. Choy et al. (2019) 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3075–3084.
  • [8] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner (2017) ScanNet: richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5828–5839.
  • [9] F. Engelmann et al. (2019) Dilated point convolutions: on the receptive field of point convolutions. ICRA.
  • [10] T. Furuya and R. Ohbuchi (2016) Deep aggregation of local 3D geometric features for 3D model retrieval. In BMVC.
  • [11] Y. Guo, M. Bennamoun, F. Sohel, M. Lu, J. Wan, and N. M. Kwok (2016) A comprehensive performance evaluation of 3D local feature descriptors. International Journal of Computer Vision 116 (1), pp. 66–89.
  • [12] Z. Hu, M. Zhen, X. Bai, H. Fu, and C. Tai (2020) JSENet: joint semantic segmentation and edge detection network for 3D point clouds. ECCV.
  • [13] A. Ioannidou, E. Chatzilari, S. Nikolopoulos, and I. Kompatsiaris (2017) Deep learning advances in computer vision with 3D data: a survey. ACM Computing Surveys (CSUR) 50 (2), pp. 20.
  • [14] A. E. Johnson and M. Hebert (1998) Surface matching for object recognition in complex three-dimensional scenes. Image and Vision Computing 16 (9-10), pp. 635–651.
  • [15] M. Khoury, Q. Zhou, and V. Koltun (2017) Learning compact geometric features. In Proceedings of the IEEE International Conference on Computer Vision.
  • [16] R. Klokov and V. Lempitsky (2017) Escape from cells: deep kd-networks for the recognition of 3D point cloud models. In Proceedings of the IEEE International Conference on Computer Vision.
  • [17] J. J. Koenderink and A. J. van Doorn (1992) Surface shape and curvature scales. Image and Vision Computing 10 (8), pp. 557–564.
  • [18] S. Kumawat and S. Raman (2019) LP-3DCNN: unveiling local phase in 3D convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4903–4912.
  • [19] A. Kundu, X. Yin, A. Fathi, D. Ross, B. Brewington, T. Funkhouser, and C. Pantofaru (2020) Virtual multi-view fusion for 3D semantic segmentation. ECCV.
  • [20] H. Lei, N. Akhtar, and A. Mian (2020) SegGCN: efficient 3D point cloud segmentation with fuzzy spherical kernel. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11611–11620.
  • [21] H. Lei, N. Akhtar, and A. Mian (2020) Spherical kernel for efficient graph convolution on 3D point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • [22] G. Li, M. Muller, A. Thabet, and B. Ghanem (2019) DeepGCNs: can GCNs go as deep as CNNs? In Proceedings of the IEEE International Conference on Computer Vision, pp. 9267–9276.
  • [23] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen (2018) PointCNN: convolution on X-transformed points. In Advances in Neural Information Processing Systems, pp. 820–830.
  • [24] Z. Liang, M. Yang, L. Deng, C. Wang, and B. Wang (2019) Hierarchical depthwise graph convolutional neural network for 3D semantic segmentation of point clouds. In 2019 International Conference on Robotics and Automation (ICRA), pp. 8152–8158.
  • [25] W. Liu, J. Sun, W. Li, T. Hu, and P. Wang (2019) Deep learning on point clouds and its application: a survey. Sensors 19 (19), pp. 4188.
  • [26] Y. Liu, B. Fan, S. Xiang, and C. Pan (2019) Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8895–8904.
  • [27] Z. Liu, H. Hu, Y. Cao, Z. Zhang, and X. Tong (2020) A closer look at local aggregation operators in point cloud analysis. ECCV.
  • [28] Y. Lyu, X. Huang, and Z. Zhang (2020) Learning to segment 3D point clouds in 2D image space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12255–12264.
  • [29] D. Maturana and S. Scherer (2015) VoxNet: a 3D convolutional neural network for real-time object recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 922–928.
  • [30] K. K. Parida, S. Srivastava, and G. Sharma (2021) Beyond image to depth: improving depth prediction using echoes.
  • [31] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • [32] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems.
  • [33] X. Roynard, J. Deschaud, and F. Goulette (2018) Paris-Lille-3D: a large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification. The International Journal of Robotics Research 37 (6), pp. 545–557.
  • [34] J. Schult, F. Engelmann, T. Kontogianni, and B. Leibe (2020) DualConvMesh-Net: joint geodesic and euclidean convolutions on 3D meshes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8612–8622.
  • [35] S. Srivastava and B. Lall (2016) 3D binary signatures. In Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, pp. 77.
  • [36] S. Srivastava and B. Lall (2019) DeepPoint3D: learning discriminative local descriptors using deep metric learning on 3D point clouds. Pattern Recognition Letters.
  • [37] H. Thomas, C. R. Qi, J. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas (2019) KPConv: flexible and deformable convolution for point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6411–6420.
  • [38] L. Wang, Y. Huang, Y. Hou, S. Zhang, and J. Shan (2019) Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10296–10305.
  • [39] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon (2019) Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG) 38 (5), pp. 146.
  • [40] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao (2015) 3D ShapeNets: a deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912–1920.
  • [41] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu (2020) A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems.
  • [42] M. Xu, Z. Zhou, and Y. Qiao (2020) Geometry sharing network for 3D point cloud classification and segmentation. In AAAI, pp. 12500–12507.
  • [43] X. Yan, C. Zheng, Z. Li, S. Wang, and S. Cui (2020) PointASNL: robust point clouds processing using nonlocal neural networks with adaptive sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5589–5598.
  • [44] L. Yi, V. G. Kim, D. Ceylan, I. Shen, M. Yan, H. Su, C. Lu, Q. Huang, A. Sheffer, L. Guibas, et al. (2016) A scalable active framework for region annotation in 3D shape collections. ACM Transactions on Graphics (TOG) 35 (6), pp. 210.
  • [45] F. Zhang, J. Fang, B. Wah, and P. Torr (2020) Deep FusionNet for point cloud semantic segmentation. ECCV.
  • [46] K. Zhang, M. Hao, J. Wang, C. W. de Silva, and C. Fu (2019) Linked dynamic graph CNN: learning on point cloud via linking hierarchical features. arXiv preprint arXiv:1904.10014.
  • [47] Y. Zhao, T. Birdal, H. Deng, and F. Tombari (2019) 3D point capsule networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1009–1018.