The recent advances of 3D technologies in autonomous driving, robotics, and reverse engineering have attracted growing attention to understanding and analyzing 3D data. Often in the form of point clouds, 3D data is a set of unordered 3D points, sometimes with additional features (e.g., RGB, normals) defined on each point. Due to the inherent unorderedness and irregularity of 3D data, it is difficult to extend traditional convolutional neural networks to it, because a permutation of the order of 3D points should not affect their geometric structure.
Multiple approaches [pointnet, pointnet++, dgcnn, pointweb] have been developed recently to process raw point clouds. These approaches can be grouped into several categories according to the point correlation they exploit for feature learning. (1) Self correlation. A seminal work on point cloud feature learning is PointNet [pointnet], which applies shared Multi-Layer Perceptrons (MLPs) to each point individually, then gathers a global representation with pooling functions. Its descriptive ability is limited because it cannot incorporate contextual information from each point's neighbors. (2) Local correlation. PointNet++ [pointnet++]
improves PointNet by applying it recursively on partitioned point sets and encoding local features with max pooling. PointNet++ essentially still treats each point individually and hence neglects the correlations between a point and its neighbors. Later, DGCNN [dgcnn] employs shared MLPs to process the edges linking a centroid point and its neighboring points. But because the edges only connect to the centroid point, the neighboring correlation is not fully exploited. A more recent approach, PointWeb [pointweb], considers a point's correlations by weighting all edges among its neighbors. However, although PointWeb explores the complete correlation within a point's neighborhood, it stores redundant connections between points of different labels. Furthermore, to the best of our knowledge, none of these existing approaches considers non-local correlation among distant points, which are not neighbors but could potentially exhibit long-range dependencies.
Representing a point cloud with a graph structure is appropriate for correlation learning. Some graph-based methods [dynamicedge, dynamicfilters] use graph convolutions to analyze point clouds over local regions; however, explicit correlations of the neighboring points are not well captured by the predefined convolution operations, and this limits their feature modeling capability.
To better leverage correlations of points at different levels, we formulate a novel high-dimensional node graph model, named Point2Node, to learn correlations among 3D points. Point2Node includes a Dynamic Node Correlation module, which constructs a high-dimensional node graph to sequentially explore correlations at different levels. Specifically, it explores self correlation between a point's different channels, local correlation between neighboring points, and non-local correlation among distant points with long-range dependencies. Meanwhile, we dynamically update nodes after each correlation learning step to enhance node characteristics. Furthermore, to aggregate more discriminative features, our Point2Node employs a data-aware gate embedding, named Adaptive Feature Aggregation, to pass or block channels of the high-dimensional nodes from different correlation learning, thereby generating new nodes that balance self, local, and non-local correlations. Experimental results on various datasets show better feature representation capability. Our contributions are summarized as follows:
We propose a Point2Node model to dynamically reason about correlations at different levels, including self correlation, local correlation, and non-local correlation. This model largely enhances node characteristics through correlation learning at different scales, especially by embedding long-range correlation.
We introduce a data-aware gate mechanism, which self-adaptively aggregates features from different correlation learning and retains the effective components at each level, resulting in more discriminative features.
Our end-to-end Point2Node model achieves state-of-the-art performance on various point cloud benchmarks, demonstrating its effectiveness and generalization ability.
View-based and volumetric methods. A classic category of deep-learning-based 3D representations uses the multi-view approach [voandmulti, multiview, projective], where a point cloud is projected to a collection of images rendered from different views. By this means, point data can be recognized using conventional convolutions on 2D images. However, these view-based methods achieve dominant performance only in classification; scene segmentation is nontrivial due to the loss of the discriminative geometric relationships of the 3D points during 2D projection.
Influenced by the success of traditional convolution on regular data formats (e.g., 2D images), another strategy is to generalize CNNs to 3D voxel structures [voxnet, modelnet, scannet] by assigning 3D points to regular volumetric occupancy grids. Due to the intractable computational cost, these methods are usually constrained to low spatial resolutions. To overcome this limitation, Kd-Net [kdnet] and OctNet [ocnet] skip the computation on empty voxels so that they can handle higher-resolution grids. However, some geometric information is inevitably lost during voxelization. To avoid this loss, unlike view-based and volumetric methods, our method processes 3D point clouds directly.
Point cloud-based methods. The pioneering method in this category is PointNet [pointnet], which considers each point independently with shared MLPs for permutation invariance and aggregates a global signature with max-pooling. PointNet is limited in capturing fine-grained information in local regions. PointNet++ [pointnet++] extends PointNet by exploiting a hierarchical structure to incorporate local dependencies with Farthest Point Sampling (FPS) and a geometric grouping strategy. DGCNN [dgcnn] proposes an edge-conv operation which encodes the edges between the center point and its k nearest neighbors with shared MLPs. PointWeb [pointweb] enriches point features by weighting all edges in local regions with point-pair differences. Our method also exploits local edges, but we explore the adaptive connection of similar nodes in local regions to avoid redundant connections between two points of different labels, and this leads to better shape-awareness.
A few other methods extend the convolution operator to process unordered points. PCCN [PCCN] proposes parametric continuous convolutions that exploit parameterized kernel functions on point clouds. By reordering and weighting local points with an X-transformation, PointCNN [PointCNN] exploits a canonical order of points and then applies convolution. PointConv [pointconv] extends the dynamic filter to a new convolution operation and builds deep convolutional networks for point clouds. A-CNN [A-CNN] employs annular convolution to capture the local neighborhood geometry of each point. Unlike these methods that apply convolutions to point clouds, we focus on correlation learning among 3D points.
Graph-based methods. Because graphs can flexibly represent irregular point clouds, there is increasing interest in generalizing convolutions to graphs. According to the domain in which convolutions operate, graph-based methods can be categorized into spectral and non-spectral methods. Spectral methods [semisuper, syncspecnn] perform convolution in the spectral representation, which is constructed from a graph Laplacian matrix. The major limitation of these methods is that they require a fixed graph structure, so they have difficulty adapting to point clouds, whose graph structures vary. Instead, we perform correlation learning on dynamic nodes, which is not limited by varying graph structures. Alternatively, non-spectral methods [dynamicedge, diffusion, dynamicfilters] perform convolution-like operations on spatially close nodes of a graph, thereby propagating information along neighboring nodes. However, hand-crafted ways of encoding local contextual information limit their feature modeling capability; instead, for more fine-grained contextual information in local regions, we explore the direct correlations between local nodes.
We illustrate our end-to-end Point2Node framework in Fig. 1. Given a set of unordered points, where N is the number of points and each point carries its xyz coordinates plus additional properties (e.g., RGB or normal information), we construct their high-level representation through Feature Extraction, forming high-dimensional nodes X = {x_1, ..., x_N}, where each node x_i is a learned multi-channel feature vector of a point, and C is the dimension of each node. The Feature Extraction module employs the X-Conv operator from PointCNN [PointCNN] and builds a U-Net [U-net] structure for feature learning. Then, through a Dynamic Node Correlation (DNC) module, we explore point correlations at different levels and dynamically generate high-dimensional nodes X^s, X^l, and X^g, respectively. Simultaneously, we build an Adaptive Feature Aggregation (AFA) module to self-adaptively aggregate the features of the nodes learned at different levels. Finally, the high-dimensional nodes are fed into fully-connected layers for label assignment.
Dynamic Node Correlation Module
To fully explore correlations among points at different levels, we propose a Dynamic Node Correlation module. Given the initial high-dimensional nodes X, we introduce cross-channel edges and cross-node edges. Cross-channel edges are defined between two channels of the same node, and they are used to explore correlation between that node's different channels. Cross-node edges are defined between different nodes and have two types: between neighboring (spatially close) nodes, local edges are defined to explore local contextual information, while between non-local (spatially distant) nodes, non-local edges are defined to exploit latent long-range dependencies. Therefore, three types of correlation, Self Correlation, Local Correlation, and Non-local Correlation, are learned in this DNC module. Meanwhile, we dynamically update the high-dimensional nodes after each correlation learning step. The learning of these three types of correlation is elaborated in the following sections.
We denote the dependencies among channels of the same node as self correlation; such dependencies are informative for feature representation. To model the self correlation of each node, we could construct edges between all pairs of channels. However, instead of computing each channel's correlation weights w.r.t. every other channel, we make a simplification: we use convolution to integrate these weights across channels and encode them into a single weight per channel.
Channel Weights. Inspired by traditional 1D convolution, which aggregates all channels of one pixel in a 2D image, we similarly encode each channel's correlations into one weight through a Multi-Layer Perceptron (MLP). To reduce the number of parameters, we implement the MLP with 1D separable convolutions [xception]. Consequently, for each C-dimensional node x_i, we calculate a C-dimensional channel weight w_i.
To make the weight coefficients comparable between different channels, we perform a normalization, where w_i^j denotes the j-th component of w_i and w̄_i^j the normalized w_i^j.
After correlation learning, we update the nodes with residual connections, where ⊕ and ⊗ denote channel-wise summation and multiplication, respectively, and α denotes a learnable scale parameter. This update transforms the nodes X into new nodes X^s.
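The self-correlation update above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: a single linear map stands in for the channel-weight MLP, a softmax is assumed for the normalization step, and the learnable scale α is a fixed scalar.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax along the last axis."""
    e = np.exp(w - w.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_correlation(X, W_mlp, alpha=0.1):
    """Self-correlation update for nodes X of shape (N, C).

    W_mlp : (C, C) matrix standing in for the channel-weight MLP.
    alpha : stand-in for the learnable scale parameter.
    Returns updated nodes X_s of the same shape.
    """
    w = X @ W_mlp                    # per-node channel weights, (N, C)
    w_bar = softmax(w)               # normalize weights across channels
    return X + alpha * (w_bar * X)   # residual channel-wise update

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))          # 5 nodes with 8 channels each
W = rng.normal(size=(8, 8))
X_s = self_correlation(X, W)
```

Note that, unlike a shared convolution, the mask w_bar differs per node, which is what lets the update embody each node's individual characteristics.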
Note that unlike directly transforming X to X^s with convolutions, which share weights among all the nodes, our self correlation module learns a different correlation for each node and can thus embody each node's individual characteristics.
To fully capture local contextual information, which is of central importance in point cloud understanding, we also build a new Local Correlation sub-module. Existing approaches [dgcnn, pointweb] usually construct partial or complete edge sets over local regions for contextual information, but these tend to store redundant connections between points of different labels. In contrast, our sub-module adaptively connects locally similar nodes to construct a local graph, whose adjacency matrix weights the correlations of relevant node pairs.
Local Graph. Given the nodes X, we consider each node x_i as a centroid and construct a local graph. To increase the receptive field without redundant computational cost, we employ the dilated KNN [PointCNN] to search for neighboring nodes, forming Dilated K-Nearest Neighbor Graphs (DKNNG) in local regions. Specifically, we sample K nodes at equal intervals from the top K × d neighbor nodes, where K denotes the number of samples and d is the dilation rate.
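The dilated neighbor sampling can be sketched as follows, using a brute-force NumPy search (the helper name dilated_knn and the exhaustive distance computation are ours for illustration; an implementation would typically use a spatial index).

```python
import numpy as np

def dilated_knn(points, k, d):
    """Dilated k-nearest neighbors: from each point's top k*d neighbors
    (excluding the point itself), keep every d-th one, yielding k samples.

    points : (N, 3) array of coordinates.
    Returns an (N, k) array of neighbor indices.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)               # (N, N) pairwise distances
    order = np.argsort(dist, axis=-1)[:, 1:k * d + 1]  # drop self at index 0
    return order[:, ::d]                               # sample at interval d

pts = np.random.default_rng(1).normal(size=(20, 3))
idx = dilated_knn(pts, k=4, d=2)    # 4 neighbors drawn from the top 8
```

With d = 1 this reduces to plain KNN; larger d widens the receptive field at the same neighborhood size K.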
Adjacency Matrix. For the local correlation learning of node x_i, given its DKNNG with neighboring node set N(i) of the centroid node x_i, we construct an adjacency matrix for all the directed edges in the graph. Inspired by the Non-Local Neural Network [non-local], which has proven effective in capturing pixel correlations in 2D images, we generalize its non-local operation to our graph structure to generate the adjacency matrix. Please refer to Appendix A for more detail about this generalization.
Specifically, we first map the nodes to a low-dimensional feature space with two 1-D convolutional mapping functions, φ and ψ, resulting in low-dimensional node representations, where r is the reduction rate, a hyper-parameter shared throughout the paper. With these two representations, we compute the adjacency matrix by matrix multiplication.
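The adjacency computation for one local graph can be sketched as follows; a NumPy illustration in which plain matrices stand in for the 1-D convolutional mapping functions φ and ψ, and the subsequent pruning step is omitted.

```python
import numpy as np

def local_adjacency(X_nbr, Phi, Psi):
    """Adjacency weights for the directed edges of one local graph.

    X_nbr : (K, C) features of the K neighbors of a centroid node.
    Phi, Psi : (C, C // r) matrices standing in for the mapping
               functions phi and psi (r is the reduction rate).
    Returns a (K, K) matrix of raw edge weights.
    """
    P = X_nbr @ Phi      # (K, C/r) low-dimensional representation
    Q = X_nbr @ Psi      # (K, C/r)
    return P @ Q.T       # (K, K) pairwise correlation by matrix product

rng = np.random.default_rng(2)
K, C, r = 8, 16, 4
Xn = rng.normal(size=(K, C))
A = local_adjacency(Xn,
                    rng.normal(size=(C, C // r)),
                    rng.normal(size=(C, C // r)))
```

The reduction rate r keeps the (K, K) correlation computation cheap, since the dot products are taken in C/r dimensions rather than C.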
Next, we prune the adjacency matrix (Eq. 6) to cut off redundant connections that link nodes of different labels, where the masked matrix stores the final weight of each edge linking the i-th and j-th nodes in the neighboring node set N(i).
To better exploit latent long-range dependencies, which are essential for capturing the global characteristics of point clouds, we also propose a Non-local Correlation sub-module. Traditional point cloud networks usually construct global representations by stacking multiple layers of local operations, but such high-level global representations can hardly capture direct correlations between spatially distant nodes. To model such non-local correlation, our proposed module adaptively connects non-local nodes to construct a global graph, whose adjacency matrix reveals latent long-range dependencies between nodes.
Adjacency Matrix. Given the nodes, we construct a global graph and its adjacency matrix in a similar way to local correlation: we compute low-dimensional representations with the mapping functions φ and ψ, then compose the adjacency matrix by matrix multiplication.
Unlike local correlation, non-local correlation learning can explore dependencies between nodes that have different features at different scales. Therefore, to make the weight coefficients comparable, we perform a normalization on the adjacency matrix, where each normalized entry is the weight of the edge linking the corresponding pair of nodes.
Updating Nodes. We update all the nodes with the correlation function, resulting in the new nodes X^g.
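The non-local branch can be sketched end to end as follows. This NumPy illustration assumes a row-wise softmax for the adjacency normalization and a residual form for the node update, with plain matrices standing in for φ and ψ and a fixed scalar for any learnable scale.

```python
import numpy as np

def nonlocal_correlation(X, Phi, Psi, beta=0.1):
    """Non-local correlation over all N nodes (a sketch).

    X        : (N, C) node features.
    Phi, Psi : (C, C // r) matrices standing in for the mapping functions.
    beta     : stand-in for a learnable scale on the residual update.
    Returns updated nodes of shape (N, C).
    """
    A = (X @ Phi) @ (X @ Psi).T                  # (N, N) raw edge weights
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)        # row-wise softmax normalization
    return X + beta * (A @ X)                    # residual update with aggregated features

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 8)) * 0.1
Xg = nonlocal_correlation(X,
                          rng.normal(size=(8, 4)),
                          rng.normal(size=(8, 4)))
```

Unlike the local branch, the (N, N) adjacency here connects every pair of nodes, so each update can draw on spatially distant nodes directly rather than through stacked local operations.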
Note that all three aforementioned sub-modules perform node-wise operations. These operations are permutation-equivariant (see the proof in Appendix B) and hence can effectively process unordered 3D points.
Adaptive Feature Aggregation
Traditional neural networks aggregate different features using linear aggregation such as skip connections. But linear aggregations simply mix up the channels of the features and are not data-adaptive. To better reflect the characteristics of a given point cloud and make the aggregation data-aware, we propose an Adaptive Feature Aggregation (AFA) module (note that because the self correlation sub-module already enhances cross-channel attention by linear aggregation, we do not place an AFA after self correlation, as Fig. 1 shows) and use a gate mechanism to integrate node features according to their characteristics. To model the channel-wise characteristics and their relative importance, we consider two models: a parameter-free and a parameterized characteristic description.
Parameter-free Characteristic Modeling. Given the high-dimensional nodes X^l and X^g obtained from different correlation learning, we first describe their characteristics using global node average pooling, obtaining channel-wise descriptors whose j-th components average the j-th channel over all nodes of X^l and X^g, respectively.
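The parameter-free descriptors can be sketched as follows; a small NumPy illustration of global node average pooling (the helper name channel_descriptors is ours).

```python
import numpy as np

def channel_descriptors(Xl, Xg):
    """Global node average pooling: reduce two (N, C) node sets to
    channel-wise descriptors of shape (C,), one per node set."""
    return Xl.mean(axis=0), Xg.mean(axis=0)

rng = np.random.default_rng(4)
Xl = rng.normal(size=(5, 6))   # nodes from local correlation learning
Xg = rng.normal(size=(5, 6))   # nodes from non-local correlation learning
dl, dg = channel_descriptors(Xl, Xg)
```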
Parameterized Characteristic Modeling. To further explore the characteristics of the high-dimensional nodes, we introduce learnable parameters into the characteristic modeling. Specifically, we employ different MLPs to transform the channel-wise descriptors into new descriptors. To reduce computational costs, we simplify each MLP to one dimension-reduction convolution layer followed by one dimension-increasing convolution layer.
Gate Mechanism. Informative descriptions of the aggregated node features consist of discriminative correlation representations from different levels, but linear aggregations mix these representations up. To combine features in a non-linear and data-adaptive way, we construct a gate mechanism (Fig. 2) by applying a softmax to the compact features, yielding channel-wise masks for the two node sets that sum to 1 on each channel.
Aggregation. Finally, we aggregate the ultimate nodes by combining X^l and X^g channel-wise with their masks.
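The gate mechanism and the final aggregation can be sketched together as follows; a NumPy illustration assuming a channel-wise two-way softmax over the descriptors, so the two masks are complementary on every channel.

```python
import numpy as np

def gated_aggregation(Xl, Xg, dl, dg):
    """Gate mechanism sketch: per channel, a softmax over the two
    descriptors yields complementary masks weighting the two node sets.

    Xl, Xg : (N, C) nodes from local / non-local correlation learning.
    dl, dg : (C,) channel-wise descriptors of Xl and Xg.
    Returns aggregated nodes of shape (N, C).
    """
    e = np.exp(np.stack([dl, dg]) - np.maximum(dl, dg))  # stable pairwise softmax
    ml = e[0] / e.sum(axis=0)                            # mask for Xl
    mg = e[1] / e.sum(axis=0)                            # mask for Xg; ml + mg == 1
    return ml * Xl + mg * Xg                             # channel-wise gated sum

rng = np.random.default_rng(4)
Xl, Xg = rng.normal(size=(5, 6)), rng.normal(size=(5, 6))
dl, dg = Xl.mean(axis=0), Xg.mean(axis=0)
out = gated_aggregation(Xl, Xg, dl, dg)
```

Unlike a skip connection, the per-channel masks depend on the input descriptors, so each channel of the output can lean toward whichever correlation level is more informative for the given point cloud.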
This AFA module performs node-wise operations, which are again permutation-equivariant (A proof is given in Appendix C) and suitable for unordered point clouds.
We evaluate our model on multiple point cloud benchmarks, including Stanford Large-Scale 3D Indoor Space (S3DIS) [s3dis] for semantic segmentation, ScanNet [scannet] for semantic voxel labeling, and ModelNet40 [modelnet] for shape classification.
S3DIS Semantic Segmentation
Dataset and Metric. The S3DIS [s3dis] dataset contains six sets of point cloud data from three different buildings (including 271 rooms). Each point, with xyz coordinates and RGB features, is annotated with one semantic label from 13 categories. We followed the same data pre-processing approaches given in PointCNN [PointCNN].
We conducted our experiments with 6-fold cross validation [pointnet] and with Area 5 held out for testing [segcloud]. Because there are no overlaps between Area 5 and the other areas, experiments on Area 5 can measure the framework's generalizability. For the evaluation metrics, we calculated the mean of class-wise intersection over union (mIoU), the mean of class-wise accuracy (mAcc), and the overall point-wise accuracy (OA).
Performance Comparison. The quantitative results are given in Table 1 and Table 2. Our Point2Node performs better than the other competitive methods under both evaluation settings. In particular, although the dataset contains objects of various sizes, our approach achieves considerable gains on both relatively small objects (e.g., chair, table, and board) and large structures (e.g., ceiling, wall, column, and bookcase), which demonstrates that our model can capture discriminative features at both the local and global levels. For similar geometric structures such as column vs. wall, our model also works very well, which indicates that it is capable of distinguishing ambiguous structures thanks to the latent correlations it captures between these structures and the other structures of the scene. Some qualitative results on different scenes are visualized in Fig. 3.
For ablation studies, we integrate each correlation learning sub-module and its corresponding feature aggregation method, forming the self correlation block, local correlation block, and non-local correlation block, respectively. Our baseline method employs the X-Conv operator [PointCNN] and builds an Encoder-Decoder structure without any correlation learning block. We conducted ablation studies on Area 5 of S3DIS.
To show the effectiveness of each proposed block in our model, we separately added each block after the Encoder-Decoder structure while keeping the rest of the baseline unchanged, forming Self only, Local only, and Non-local only. Our proposed Point2Node employs all three blocks to form a more powerful framework. As shown in Table 3, each block plays an important role in our framework. Moreover, compared with the baseline, the performance improvement of Point2Node (6.26%) exceeds the simple summation of the separate blocks' improvements (5.02%), demonstrating that our sequential structure further enhances the node features.
To study the effectiveness of the sequential dynamic node update strategy in our model, we compared our original sequential structure with parallel structural variants. Specifically, (1) placing the three blocks in parallel, denoted "Parallel_1", updates the nodes just once after the parallel blocks; (2) placing the local correlation block and non-local correlation block in parallel, denoted "Parallel_2", updates the nodes after self correlation and again after the parallel blocks. As shown in Table 3, "Parallel_2" performs better than "Parallel_1". The proposed Point2Node model, with sequential and fully updated nodes, achieves the best performance, which indicates the effectiveness of this node update mechanism.
To show that our adaptive aggregation leads to more discriminative features than linear aggregation, we replaced the adaptive aggregation of the local correlation block and non-local correlation block in the Point2Node architecture with linear aggregation, forming a new network (LA). For a fair comparison, we also employed the parameter-free characteristics of the AFA module to construct a parameter-free adaptive aggregation, named "Parameter-free AA". The results demonstrate that our adaptive aggregation captures more discriminative features than linear aggregation does, and that the parameterized adaptive aggregation (Point2Node) is more effective than the parameter-free AA.
As shown in Table 4, we significantly improve the performance (+11.03%) while adding very few parameters (+1.46%). In our Point2Node, we frequently employ different MLPs, and we control the reduction rate r to reduce the parameters in adjacency matrix computation and parameterized characteristic modeling.
ScanNet Semantic Voxel Labeling
The ScanNet [scannet] dataset contains over 1,500 scanned and reconstructed indoor scenes with 21 semantic categories. The dataset provides a 1,201/312 scene split for training and testing. We followed the same data pre-processing strategies as with S3DIS, and report the per-voxel accuracy (OA) as the evaluation metric. Table 5 shows the comparison between our Point2Node and other competitive methods. Our method achieves state-of-the-art performance, owing to its more effective feature learning.
ModelNet40 Shape Classification
The ModelNet40 [modelnet] dataset is created using 12,311 3D mesh models from 40 man-made object categories, with an official 9,843/2,468 split for training and testing. Following PointCNN [PointCNN], we uniformly sampled 1,024 points from each mesh without normal information to train our model. For classification, we removed the decoder in the Point2Node model and reported the overall accuracy (OA) as the evaluation metric.
To make a fair comparison with previous approaches that may take different information as input, we list the detailed experimental settings and results in Table 6. Point2Node achieves the highest accuracy among the xyz-input methods. Furthermore, our xyz-input model outperforms most additional-input methods, even though it performs slightly worse than the best additional-input method, SO-Net [sonet]. These experimental results further demonstrate the effectiveness of our model in point cloud understanding.
We present a novel framework, Point2Node, for correlation learning among 3D points. Point2Node dynamically enhances the representation capability of nodes by exploiting correlations at different levels, and adaptively aggregates more discriminative features to better understand point clouds. Ablation studies show that our performance gain comes from the new correlation learning modules. Results from a series of experiments demonstrate the effectiveness and generalization ability of our method.
Limitations. The correlation learning in Point2Node is based on refining the features (currently PointCNN [PointCNN] is used) extracted on nodes. While it can be considered as a general feature enhancement tool, the final performance is also limited by the capability of the pre-determined features. An ideal strategy would be to integrate the feature learning and correlation learning into a joint pipeline, so that the inter-channel and inter-node correlations could impact the feature extraction. We will explore the design of such an end-to-end network in the future.
This work was supported by the National Natural Science Foundation of China (Grants No. 61771413, and U1605254).
A. Relationship to standard Non-local operations
Standard Non-Local operations [non-local] are widely employed in recent literature to capture long-range dependencies in 2D images and videos. To capture correlations among pixels in videos, these operations map input features with 3D convolutions, and construct the adjacency matrix with reshape operations and matrix multiplication.
In our paper, to capture the correlation among nodes, we generalize standard Non-Local operations to the graph domain, where an adjacency matrix more naturally weights edges. We not only employ Non-local operations on non-local nodes to capture long-range dependencies, but also extend them to neighboring nodes to encode local contextual information.
Moreover, our computation is less expensive. In [non-local], the video data fed to the network has dimension T × H × W × C, where T is the temporal length, H the height, W the width, and C the channels of the video. Our node tensor has dimension N × C, where N is the number of nodes and C is the channel dimension of each node. Because the 3D convolutions are replaced with 1D convolutions and the frequent "reshape" operations are not needed, our computation is much more efficient.
B. Proof of Permutation Equivariance of Dynamic Nodes Correlation
Following [pat], we prove the permutation equivariance of our proposed modules, and provide the proof process as follows.
Lemma 1 (Permutation matrix and permutation function). A permutation matrix P of size N × N defines a permutation function:
Lemma 2. Given input X and a permutation matrix P of size N × N, consider the corresponding permutation function; using Eq. 17, we get:
Proposition 1. The Self Correlation (SC) operation is permutation-equivariant, i.e., given input X and a permutation matrix P of size N × N,
where MLP is a node-wise function that shares weights among all the nodes,
using Eq. 19, we get:
Proposition 2. The Local Correlation (LC) operation is permutation-equivariant, i.e., given input X and a permutation matrix P of size N × N,
where the Dilated K-Nearest Neighbors (DKNN) operation is a node-wise operation,
φ and ψ are both node-wise functions that share weights among all the nodes,
and the Local Nodes Max-Pooling (LNMP) operation is invariant to the local nodes' order and permutation-equivariant to the input order,
using Eq. 18, we get:
Proposition 3. The Non-local Correlation (NLC) operation is permutation-equivariant, i.e., given input X and a permutation matrix P of size N × N,
where φ and ψ are both node-wise functions that share weights among all the nodes,
using Eq. 18, we get:
C. Proof of Permutation Equivariance of Adaptive Feature Aggregation
Proposition 1. The Adaptive Feature Aggregation (AFA) operation is permutation-equivariant, i.e., given inputs X^l and X^g and a permutation matrix P of size N × N,
where the Global Node Pooling (GNP) operation is invariant to the input order,
Gate operation is shared among all the points,
in this way,
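The two properties these proofs rely on can also be checked numerically. The following NumPy sketch (a sanity check, not a proof) verifies that a shared node-wise MLP commutes with a permutation matrix, while global average pooling is unaffected by it.

```python
import numpy as np

# Numeric check: a shared node-wise map is permutation-equivariant,
# and global average pooling is permutation-invariant.
rng = np.random.default_rng(5)
X = rng.normal(size=(7, 4))          # 7 nodes, 4 channels
W = rng.normal(size=(4, 4))          # weights shared by every node
P = np.eye(7)[rng.permutation(7)]    # a random permutation matrix

def mlp(Z):
    """One-layer node-wise MLP (shared weights, ReLU)."""
    return np.maximum(Z @ W, 0)

# Equivariance: permuting then mapping equals mapping then permuting.
equivariant = np.allclose(mlp(P @ X), P @ mlp(X))
# Invariance: pooling over nodes ignores their order.
invariant = np.allclose((P @ X).mean(axis=0), X.mean(axis=0))
```

Because every row of X is transformed by the same W and the ReLU acts element-wise, permuting the rows before or after the map gives the same result, which is exactly the node-wise sharing argument used in Propositions 1 to 3.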