Introduction
The recent advance of 3D technologies in autonomous driving, robotics, and reverse engineering has attracted growing attention to understanding and analyzing 3D data. Often in the form of point clouds, 3D data is a set of unordered 3D points, sometimes with additional features (e.g., RGB, normal) defined on each point. Due to the inherently unordered and irregular nature of 3D data, it is difficult to extend traditional convolutional neural networks to it, because a permutation in the order of 3D points should not affect their geometric structures.
Multiple approaches [pointnet, pointnet++, dgcnn, pointweb] have been developed recently to process raw point clouds. These approaches can be grouped into several categories according to the point correlation they exploit for feature learning. (1) Self correlation. A seminal work on point cloud feature learning is PointNet [pointnet], which applies shared Multi-Layer Perceptrons (MLPs) to each point individually, then gathers a global representation with pooling functions. Its descriptive ability is limited because it cannot incorporate contextual information from each point's neighbors. (2) Local correlation. PointNet++ [pointnet++] improves PointNet by applying it recursively on partitioned point sets and encoding local features with max pooling. PointNet++ essentially still treats each point individually and hence neglects the correlations among neighboring points. Later, DGCNN [dgcnn] employs shared MLPs to process edges linking a centroid point to its neighboring points. But because the edges only connect to the centroid point, the neighboring correlation is not fully exploited. The more recent PointWeb [pointweb] considers a point's correlation by weighting all edges among its neighbors. However, although PointWeb explores the complete correlation within a point's neighborhood, it stores redundant connections between points of different labels. Furthermore, to the best of our knowledge, none of these existing approaches considers non-local correlation among distant points that are not neighbors but could exhibit long-range dependencies.
Representing a point cloud with a graph structure is appropriate for correlation learning. Some graph-based methods [dynamicedge, dynamicfilters] use graph convolutions to analyze point clouds in local regions; however, explicit correlations among neighboring points are not well captured by the predefined convolution operations, and this limits their feature modeling capability.
To better leverage correlations of points at different levels, we formulate a novel high-dimensional node graph model, named Point2Node, to learn correlations among 3D points. Point2Node includes a Dynamic Node Correlation module, which constructs a high-dimensional node graph to sequentially explore correlations at different levels. Specifically, it explores self correlation between a point's different channels, local correlation between neighboring points, and non-local correlation among distant points with long-range dependencies. Meanwhile, we dynamically update the nodes after each correlation learning step to enhance node characteristics. Furthermore, to aggregate more discriminative features, Point2Node employs a data-aware gate embedding, named Adaptive Feature Aggregation, to pass or block channels of the high-dimensional nodes produced by the different correlation learning steps, thereby generating new nodes that balance self, local, and non-local correlation. Experimental results on various datasets show better feature representation capability. Our contributions are summarized as follows:

We propose a Point2Node model to dynamically reason about correlations at different levels, including self correlation, local correlation, and non-local correlation. This model largely enhances node characteristics through correlation learning at different scales, especially by embedding long-range correlation.

We introduce a data-aware gate mechanism, which self-adaptively aggregates features from different correlation learning steps and retains the effective components at each level, resulting in more discriminative features.

Our end-to-end Point2Node model achieves state-of-the-art performance on various point cloud benchmarks, demonstrating its effectiveness and generalization ability.
Related Work
View-based and volumetric methods.
A classic category of deep-learning-based 3D representations is the multi-view approach [voandmulti, multiview, projective], where a point cloud is projected to a collection of images rendered from different views. By this means, point data can be recognized using conventional convolutions on 2D images. However, these view-based methods only achieve dominant performance in classification. Scene segmentation is nontrivial because the discriminative geometric relationships among the 3D points are lost during 2D projection.
Influenced by the success of traditional convolution on regular data formats (e.g., 2D images), another strategy is to generalize CNNs to 3D voxel structures [voxnet, modelnet, scannet], assigning 3D points to regular volumetric occupancy grids. Due to the intractable computational cost, these methods are usually constrained to low spatial resolution. To overcome this limitation, Kd-Net [kdnet] and OctNet [ocnet] skip the computation on empty voxels so that they can handle higher-resolution grids. However, some geometric information is inevitably lost during voxelization. To avoid this loss, unlike view-based and volumetric methods, our method processes 3D point clouds directly.
Point cloud-based methods. The pioneering method in this category is PointNet [pointnet], which considers each point independently with shared MLPs for permutation invariance and aggregates a global signature with max pooling. PointNet is limited in capturing fine-grained information in local regions. PointNet++ [pointnet++] extends PointNet by exploiting a hierarchical structure to incorporate local dependencies with Farthest Point Sampling (FPS) and a geometric grouping strategy. DGCNN [dgcnn] proposes an edge-conv operation that encodes the edges between a center point and its k nearest neighbors with shared MLPs. PointWeb [pointweb] enriches point features by weighting all edges in local regions with point-pair differences. Our method also exploits local edges, but we explore adaptive connections of similar nodes in local regions to avoid redundant connections between points of different labels, which leads to better shape awareness.
A few other methods extend the convolution operator to process unordered points. PCCN [PCCN] proposes parametric continuous convolutions that exploit parameterized kernel functions on point clouds. By reordering and weighting local points with a learned transformation, PointCNN [PointCNN] exploits a canonical order of points and then applies convolution. PointConv [pointconv] extends dynamic filters to a new convolution operation and builds deep convolutional networks for point clouds. ACNN [ACNN] employs annular convolutions to capture the local neighborhood geometry of each point. Unlike these methods that apply convolutions to point clouds, we focus on correlation learning among the 3D points.
Graph-based methods. Because graphs flexibly represent irregular point clouds, there is increasing interest in generalizing convolutions to graphs. According to the domain in which the convolutions operate, graph-based methods can be categorized into spectral and non-spectral methods. Spectral methods [semisuper, syncspecnn] perform convolution in the spectral representation, which is constructed from the graph Laplacian matrix. The major limitation of these methods is that they require a fixed graph structure, so they have difficulty adapting to point clouds, whose graph structures vary. Instead, we perform correlation learning on dynamic nodes, which is not limited by varying graph structures. Alternatively, non-spectral methods [dynamicedge, diffusion, dynamicfilters] perform convolution-like operations on the spatially close nodes of a graph, thereby propagating information along neighboring nodes. But hand-crafted ways of encoding local contextual information limit their feature modeling capability; instead, to obtain more fine-grained contextual information in local regions, we explore the direct correlation between local nodes.
Method
Overview
We illustrate our end-to-end Point2Node framework in Fig. 1. Given a set of unordered points $P = \{p_1, \dots, p_N\}$, where $N$ is the number of points and each point carries 3D coordinates and possibly additional properties (e.g., RGB or normal information), we first construct their high-level representation through Feature Extraction, forming high-dimensional nodes $V = \{v_1, \dots, v_N\}$, where each node $v_i \in \mathbb{R}^C$ is a learned multi-channel feature vector of point $p_i$, and $C$ is the dimension of each node. The Feature Extraction module employs the X-Conv operator from PointCNN [PointCNN] and builds a U-Net [Unet] structure for feature learning. Then, through a Dynamic Node Correlation (DNC) module, we explore point correlations at different levels and dynamically generate high-dimensional nodes $V_s$, $V_l$, and $V_{nl}$, respectively. Simultaneously, we build an Adaptive Feature Aggregation (AFA) module to self-adaptively aggregate the features of nodes learned at the different levels. Finally, the high-dimensional nodes are fed into fully-connected layers for label assignment.
Dynamic Node Correlation Module
To fully explore correlations among the points at different levels, we propose a Dynamic Node Correlation module. Given the initial high-dimensional nodes, we introduce cross-channel edges and cross-node edges. Cross-channel edges are defined between two channels of the same node and are used to explore correlation between that node's different channels. Cross-node edges are defined between different nodes and have two types: between neighboring (spatially close) nodes, local edges are defined to explore local contextual information, while between non-local (spatially distant) nodes, non-local edges are defined to exploit latent long-range dependencies. Therefore, three types of correlation, Self Correlation, Local Correlation, and Non-local Correlation, are learned in this DNC module. Meanwhile, we dynamically update the high-dimensional nodes after each correlation learning step. The learning of these three types of correlation is elaborated in the following sections.
Self Correlation
Self correlation denotes the dependencies among channels of the same node. Such dependencies are informative for feature representation. To model the self correlation of each node, we could construct edges between all pairs of channels. However, for each channel, we do not compute its correlation weights w.r.t. every other channel; instead, we make a simplification and use a convolution to integrate these weights from different channels, encoding them into a single weight.
Channel Weights. Inspired by traditional 1D convolution, which aggregates all channels of one pixel in a 2D image, we similarly encode each channel's correlations into one weight through a Multi-Layer Perceptron (MLP). To reduce the number of parameters, we implement the MLP with 1D separable convolutions [xception]. Consequently, for each $C$-dimensional node $v_i$, we can calculate a $C$-dimensional channel weight $w_i$:
$$w_i = \mathrm{MLP}(v_i) \qquad (1)$$
To make the weight coefficients comparable between different channels, we perform a normalization using softmax:
$$\hat{w}_i^{\,c} = \frac{\exp(w_i^{\,c})}{\sum_{k=1}^{C}\exp(w_i^{\,k})} \qquad (2)$$
where $w_i^{\,c}$ is the $c$-th component of $w_i$, and $\hat{w}_i$ is the normalized $w_i$.
Updating Nodes.
After correlation learning, we update the nodes with residual connections:
$$v_i' = v_i \oplus (\alpha \cdot \hat{w}_i \otimes v_i) \qquad (3)$$
where $\oplus$ and $\otimes$ denote channel-wise summation and multiplication, respectively, and $\alpha$ denotes a learnable scale parameter. This update transforms the nodes $V$ into new nodes $V_s$.
Note that unlike directly transforming $V$ to $V_s$ by convolutions, which share weights among all the nodes, our self correlation module learns a different correlation for each node and can thus embody each node's individual characteristics.
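The three steps above can be sketched in NumPy. The single random linear map standing in for the shared MLP and the fixed value of the scale parameter are assumptions for illustration only; the paper implements the MLP with 1D separable convolutions and learns the scale.

```python
import numpy as np

def self_correlation(V, W_mlp, alpha=0.1):
    """Self correlation update for nodes V of shape (N, C).

    W_mlp is a (C, C) linear map standing in for the shared per-node MLP
    (an assumption; the paper uses 1D separable convolutions), and alpha
    is a fixed stand-in for the learnable scale parameter.
    """
    w = V @ W_mlp                                  # Eq. (1): channel weights per node
    w = np.exp(w - w.max(axis=1, keepdims=True))   # numerically stable softmax
    w_hat = w / w.sum(axis=1, keepdims=True)       # Eq. (2): normalize over channels
    return V + alpha * (w_hat * V)                 # Eq. (3): channel-wise residual update

rng = np.random.default_rng(0)
V = rng.normal(size=(6, 8))                        # 6 nodes, 8 channels
V_s = self_correlation(V, rng.normal(size=(8, 8)))
```

Because every step is node-wise, permuting the rows of V simply permutes the rows of the output, which is the permutation equivariance argued in Appendix B.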
Local Correlation
To fully capture local contextual information, which is of central importance in point cloud understanding, we also build a new Local Correlation submodule. Existing approaches [dgcnn, pointweb] usually construct partial or complete edge sets over local regions for contextual information, but these tend to store redundant connections between points of different labels. In contrast, our submodule adaptively connects locally similar nodes to construct a local graph, whose adjacency matrix weights the correlations of relevant node pairs.
Local Graph. Given $V_s$, we consider each node as a centroid and construct a local graph. To increase the receptive field without redundant computational cost, we employ the dilated KNN [PointCNN] to search for neighboring nodes, forming Dilated K Nearest Neighbor Graphs (DKNNG) in local regions. Specifically, we sample $K$ nodes at equal intervals from the top $K \times D$ neighbor nodes, where $K$ denotes the number of samples and $D$ is the dilation rate.
Adjacency Matrix. For the local correlation learning of node $v_i$, given its DKNNG, where $N(v_i)$ is the neighboring node set of centroid node $v_i$, we construct an adjacency matrix for all the directed edges among $N(v_i)$. Inspired by the Non-Local Neural Network [nonlocal], which has proven effective in capturing pixel correlations in 2D images, we generalize its non-local operation to our graph structure to generate the adjacency matrix. Please refer to Appendix A for more detail about our generalization.
Specifically, we first map $N(v_i)$ to a low-dimensional feature space with two 1D convolutional mapping functions, $\theta$ and $\phi$, resulting in low-dimensional node representations $\theta(N(v_i))$ and $\phi(N(v_i))$, respectively:
$$\theta(N(v_i)) = \mathrm{Conv}_{\theta}(N(v_i)), \quad \phi(N(v_i)) = \mathrm{Conv}_{\phi}(N(v_i)) \qquad (4)$$
which reduce the channel dimension from $C$ to $C/r$, where $r$ is the reduction rate, a hyperparameter shared throughout the paper. With $\theta(N(v_i))$ and $\phi(N(v_i))$, we compute the adjacency matrix $A_i$ by matrix multiplication:
$$A_i = \theta(N(v_i))\, \phi(N(v_i))^{\mathsf{T}} \qquad (5)$$
where the product denotes an ordinary matrix multiplication.
Next, we prune the adjacency matrix with a masking function (Eq. 6) to cut off redundant connections that link nodes of different labels, resulting in the masked adjacency matrix $\hat{A}_i$:
(6)
where $\hat{A}_i$ is the masked $A_i$, and $\hat{a}_{j,k}$ is the final weight of the edge linking the $j$-th node and $k$-th node in the neighboring node set $N(v_i)$.
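A minimal sketch of the local branch follows, under two stated assumptions: the mapping functions are simple linear projections standing in for the 1D convolutions, and a row-wise softmax stands in for the pruning function, whose exact form is not recoverable from the text above.

```python
import numpy as np

def dilated_knn(points, K, D):
    """Dilated KNN: keep every D-th of each point's K*D nearest neighbors."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    order = np.argsort(d2, axis=1)[:, 1:K * D + 1]   # K*D nearest, self excluded
    return order[:, ::D]                             # equal-interval sampling -> K

def local_adjacency(neighbors, W_theta, W_phi):
    """Adjacency over one centroid's neighbor set (Eqs. 4-6).

    W_theta / W_phi are (C, C // r) linear stand-ins for the 1D-conv
    mappings theta and phi; the row-wise softmax stands in for the
    pruning function of Eq. (6). Both are assumptions for illustration.
    """
    t = neighbors @ W_theta                 # Eq. (4): theta, (K, C // r)
    p = neighbors @ W_phi                   # Eq. (4): phi,   (K, C // r)
    A = t @ p.T                             # Eq. (5): raw (K, K) edge weights
    A = np.exp(A - A.max(axis=1, keepdims=True))
    return A / A.sum(axis=1, keepdims=True)  # assumed pruning: suppress weak edges

rng = np.random.default_rng(2)
pts = rng.normal(size=(32, 3))              # 32 points in 3D
feats = rng.normal(size=(32, 16))           # their C = 16 node features
idx = dilated_knn(pts, K=8, D=2)            # (32, 8) neighbor indices
A_hat = local_adjacency(feats[idx[0]],
                        rng.normal(size=(16, 4)), rng.normal(size=(16, 4)))
```

With K = 8 and D = 2, each neighborhood spans the 16 nearest points while only 8 are kept, which is the enlarged receptive field the dilated sampling is after.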
Non-local Correlation
To better exploit latent long-range dependencies, which are essential for capturing the global characteristics of point clouds, we also propose a Non-local Correlation submodule. Traditional point cloud networks usually construct global representations by stacking multiple layers of local operations, but such a high-level global representation can hardly capture the direct correlation between spatially distant nodes. To model such non-local correlation, our proposed module adaptively connects non-local nodes to construct a global graph, whose adjacency matrix reveals latent long-range dependencies between nodes.
Adjacency Matrix. Given the nodes $V_l$, we construct a global graph and an adjacency matrix $A$. $A$ is constructed in a similar way to the local correlation. We construct the low-dimensional representations $\theta(V_l)$ and $\phi(V_l)$ from the mapping functions $\theta$ and $\phi$:
$$\theta(V_l) = \mathrm{Conv}_{\theta}(V_l), \quad \phi(V_l) = \mathrm{Conv}_{\phi}(V_l) \qquad (9)$$
then compose the adjacency matrix by
$$A = \theta(V_l)\, \phi(V_l)^{\mathsf{T}} \qquad (10)$$
Unlike local correlation, non-local correlation learning can explore dependencies between nodes whose features differ at different scales. Therefore, to make the weight coefficients comparable, we perform a softmax normalization on the adjacency matrix $A$:
$$\hat{a}_{i,j} = \frac{\exp(a_{i,j})}{\sum_{j=1}^{N}\exp(a_{i,j})} \qquad (11)$$
where $\hat{a}_{i,j}$ is the normalized weight of the edge linking nodes $v_i$ and $v_j$.
Updating Nodes. We update all the nodes with the correlation function, resulting in $V_{nl}$:
(12) 
Note that all three aforementioned submodules perform node-wise operations. These operations are permutation-equivariant (see the proof in Appendix B) and hence can effectively process unordered 3D points.
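A minimal sketch of the non-local branch, again with linear stand-ins for the mapping functions; the residual form of the node update in Eq. (12) mirrors the self correlation update and is an assumption here, as is the fixed scale.

```python
import numpy as np

def nonlocal_correlation(V, W_theta, W_phi, beta=0.1):
    """Non-local correlation over all N nodes (Eqs. 9-12).

    V: (N, C) node features. W_theta / W_phi: (C, C // r) linear
    stand-ins for the 1D-conv mappings. beta: stand-in for a learnable
    scale; the residual update form is an assumption for illustration.
    """
    A = (V @ W_theta) @ (V @ W_phi).T               # Eqs. (9)-(10): (N, N) weights
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A_hat = A / A.sum(axis=1, keepdims=True)        # Eq. (11): softmax per node
    return V + beta * (A_hat @ V)                   # Eq. (12): add long-range context
```

Every node attends to all others in a single step, so distant but similar structures (e.g., two columns in the same room) can reinforce each other without stacking many local layers.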
Adaptive Feature Aggregation
Traditional neural networks aggregate different features using linear aggregation such as skip connections. But linear aggregation simply mixes up the channels of the features and is not data-adaptive. To better reflect the characteristics of a given point cloud and make the aggregation data-aware, we propose an Adaptive Feature Aggregation (AFA) module^{1} and use a gate mechanism to integrate node features according to their characteristics. To model the channel-wise characteristics and their relative importance, we consider two models: a parameter-free and a parameterized description. ^{1}Note that because the self correlation submodule already enhances the cross-channel attention by linear aggregation, we do not place an AFA after self correlation, as Fig. 1 shows.
Parameter-free Characteristic Modeling. Given high-dimensional nodes $V_l$ and $V_{nl}$ obtained from the different correlation learning steps, we first describe their characteristics using a global node average pooling, obtaining channel-wise descriptors $s_l$ and $s_{nl}$:
$$s_l^{\,c} = \frac{1}{N}\sum_{i=1}^{N} v_{l,i}^{\,c}, \quad s_{nl}^{\,c} = \frac{1}{N}\sum_{i=1}^{N} v_{nl,i}^{\,c} \qquad (13)$$
where $s_l^{\,c}$ refers to the $c$-th channel of $s_l$, $v_{l,i}^{\,c}$ denotes the $c$-th channel of the $i$-th node in $V_l$, and $v_{nl,i}^{\,c}$ denotes the $c$-th channel of the $i$-th node in $V_{nl}$.
Parameterized Characteristic Modeling. To further explore the characteristics of the high-dimensional nodes, we introduce learnable parameters into the characteristics. Specifically, we employ different MLPs to transform the channel-wise descriptors $s_l$ and $s_{nl}$ into new descriptors $t_l$ and $t_{nl}$. To reduce computational cost, we implement each MLP as one dimension-reduction convolution followed by one dimension-increasing convolution layer:
$$t_l = \mathrm{MLP}_l(s_l), \quad t_{nl} = \mathrm{MLP}_{nl}(s_{nl}) \qquad (14)$$
Gate Mechanism. Informative descriptions of the aggregated node features consist of discriminative correlation representations from different levels, and linear aggregation would mix these representations up. To combine the features in a nonlinear and data-adaptive way, we construct a gate mechanism (Fig. 2) by applying a softmax on the compact descriptors $t_l$ and $t_{nl}$:
$$\hat{t}_l^{\,c} = \frac{\exp(t_l^{\,c})}{\exp(t_l^{\,c}) + \exp(t_{nl}^{\,c})}, \quad \hat{t}_{nl}^{\,c} = \frac{\exp(t_{nl}^{\,c})}{\exp(t_l^{\,c}) + \exp(t_{nl}^{\,c})} \qquad (15)$$
where $\hat{t}_l^{\,c}$ and $\hat{t}_{nl}^{\,c}$ denote the masked $t_l$ and $t_{nl}$ on the $c$-th channel, and $\hat{t}_l^{\,c} + \hat{t}_{nl}^{\,c} = 1$.
Aggregation. Finally, we aggregate the ultimate nodes $V$ using the masked $\hat{t}_l$ and $\hat{t}_{nl}$:
$$V^{c} = \hat{t}_l^{\,c} \cdot V_l^{c} + \hat{t}_{nl}^{\,c} \cdot V_{nl}^{c} \qquad (16)$$
where $V^{c}$ denotes the $c$-th channel set of $V$.
This AFA module performs node-wise operations, which are again permutation-equivariant (a proof is given in Appendix C) and suitable for unordered point clouds.
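The AFA gate can be sketched as follows; using the identity in place of the dimension-reduction/increase MLPs of Eq. (14) is an assumption made purely to keep the sketch self-contained.

```python
import numpy as np

def adaptive_feature_aggregation(V_l, V_nl, mlp=lambda s: s):
    """Gate-based channel-wise aggregation of two node sets (Eqs. 13-16).

    V_l, V_nl: (N, C) node features from local / non-local learning.
    mlp: transform for the channel descriptors; the identity default is
    a stand-in for the paper's bottleneck MLPs (an assumption).
    """
    s_l = mlp(V_l.mean(axis=0))            # Eqs. (13)-(14): descriptor for V_l
    s_nl = mlp(V_nl.mean(axis=0))          # Eqs. (13)-(14): descriptor for V_nl
    e_l, e_nl = np.exp(s_l), np.exp(s_nl)
    g_l = e_l / (e_l + e_nl)               # Eq. (15): per-channel softmax gate
    return g_l * V_l + (1.0 - g_l) * V_nl  # Eq. (16): gated channel-wise sum
```

Per channel the two gates sum to one, so each output channel is a convex combination of the corresponding channels of V_l and V_nl, chosen by the data rather than by a fixed skip connection.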
Experiments
We evaluate our model on multiple point cloud benchmarks, including Stanford Large-Scale 3D Indoor Space (S3DIS) [s3dis] for semantic segmentation, ScanNet [scannet] for semantic voxel labeling, and ModelNet40 [modelnet] for shape classification.
S3DIS Semantic Segmentation
Methods  OA  mAcc  mIoU  ceiling  flooring  wall  beam  column  window  door  table  chair  sofa  bookcase  board  clutter 

PointNet  78.50  66.20  47.80  88.00  88.70  69.30  42.40  23.10  47.50  51.60  54.10  42.00  9.60  38.20  29.40  35.20 
SPGraph  85.50  73.00  62.10  89.90  95.10  76.40  62.80  47.10  55.30  68.40  69.20  73.50  45.90  63.20  8.70  52.90 
RSNet    66.45  56.47  92.48  92.83  78.56  32.75  34.37  51.62  68.11  59.72  60.13  16.42  50.22  44.85  52.03 
DGCNN  84.10    56.10                           
PointCNN  88.14  75.61  65.39  94.80  97.30  75.80  63.30  51.70  58.40  57.20  69.10  71.60  61.20  39.10  52.20  58.60 
PointWeb  87.31  76.19  66.73  93.54  94.21  80.84  52.44  41.33  64.89  68.13  71.35  67.05  50.34  62.68  62.20  58.49 
Point2Node  89.01  79.10  70.00  94.08  97.28  83.42  62.68  52.28  72.31  64.30  75.77  70.78  65.73  49.83  60.26  60.90 
Methods  OA  mAcc  mIoU  ceiling  flooring  wall  beam  column  window  door  table  chair  sofa  bookcase  board  clutter 

PointNet    48.98  41.09  88.80  97.33  69.80  0.05  3.92  46.26  10.76  58.93  52.61  5.85  40.28  26.38  33.22 
SegCloud    57.35  48.92  90.06  96.05  69.86  0.00  18.37  38.35  23.12  70.40  75.89  40.88  58.42  12.96  41.60 
SPGraph  86.38  66.50  58.04  89.35  96.87  78.12  0.00  42.81  48.93  61.58  84.66  75.41  69.84  52.60  2.10  52.22 
PointCNN  85.91  63.86  57.26  92.31  98.24  79.41  0.00  17.60  22.77  62.09  74.39  80.59  31.67  66.67  62.05  56.74 
PCCN    67.01  58.27  92.26  96.20  75.89  0.27  5.98  69.49  63.45  66.87  65.63  47.28  68.91  59.10  46.22 
PointWeb  86.97  66.64  60.28  91.95  98.48  79.39  0.00  21.11  59.72  34.81  76.33  88.27  46.89  69.30  64.91  52.46 
GACNet  87.79    62.85  92.28  98.27  81.90  0.00  20.35  59.07  40.85  85.80  78.54  70.75  61.70  74.66  52.82 
Point2Node  88.81  70.02  62.96  93.88  98.26  83.30  0.00  35.65  55.31  58.78  79.51  84.67  44.07  71.13  58.72  55.17 
Dataset and Metric. The S3DIS [s3dis] dataset contains six areas of point cloud data from three different buildings (271 rooms in total). Each point, with xyz coordinates and RGB features, is annotated with one semantic label from 13 categories. We followed the same data preprocessing approach as PointCNN [PointCNN].
We conducted our experiments with 6-fold cross validation [pointnet] and with validation on the fifth fold (Area 5) [segcloud]. Because there are no overlaps between Area 5 and the other areas, experiments on Area 5 measure the framework's generalizability. For the evaluation metrics, we calculated the mean of class-wise intersection over union (mIoU), the mean of class-wise accuracy (mAcc), and the overall point-wise accuracy (OA).
Performance Comparison. The quantitative results are given in Table 1 and Table 2. Our Point2Node performs better than the other competitive methods under both evaluation settings. In particular, although the dataset contains objects of various sizes, our approach achieves considerable gains on both relatively small objects (e.g., chair, table, and board) and big structures (e.g., ceiling, wall, column, and bookcase), which demonstrates that our model can capture discriminative features at both the local and the global level. For similar geometric structures such as column vs. wall, our model also works very well, which indicates that it is capable of distinguishing ambiguous structures thanks to the latent correlations it captures between those structures and others in the scene. Some qualitative results on different scenes are visualized in Fig. 3.
Ablation Study
For the ablation studies, we integrate each correlation learning submodule with its corresponding feature aggregation method, forming a self correlation block, a local correlation block, and a non-local correlation block, respectively. Our baseline method employs the X-Conv operator [PointCNN] and builds an Encoder-Decoder structure without any correlation learning block. We conducted the ablation studies on Area 5 of S3DIS.
To show the effectiveness of each proposed block in our model, we separately added each block after the Encoder-Decoder structure while keeping the rest of the baseline unchanged, forming Self only, Local only, and Non-local only. Our proposed Point2Node employs all three blocks to form a more powerful framework. As shown in Table 3, each block plays an important role in our framework. Moreover, compared with the baseline, the performance improvement of Point2Node (6.26%) exceeds the simple summation of the separate blocks' improvements (5.02%), demonstrating that our sequential structure further enhances the node features.
Ablation studies  mIoU  ΔmIoU

Baseline  56.70  +0.00
Self only  57.45  +0.75
Local only  57.91  +1.23
Non-local only  59.74  +3.04
Parallel_1  59.51  +2.81
Parallel_2  60.57  +3.87
LA  58.17  +1.47
Parameter-free AA  60.57  +3.87
Point2Node  62.96  +6.26
To study the effectiveness of the sequential dynamic node update strategy in our model, we compared our original sequential structure with parallel variants. Specifically, (1) placing all three blocks in parallel, denoted "Parallel_1", the nodes are updated just once after these parallel blocks; (2) placing the local correlation block and non-local correlation block in parallel, denoted "Parallel_2", the nodes are updated after self correlation and again after the parallel blocks. As shown in Table 3, "Parallel_2" performs better than "Parallel_1". The proposed Point2Node model, with sequential and fully updated nodes, performs best, which indicates the effectiveness of this node update mechanism.
To show that our adaptive aggregation leads to more discriminative features than linear aggregation, we replaced the adaptive aggregation of the local correlation block and non-local correlation block with linear aggregation in the Point2Node architecture, forming a new network (LA). For a fair comparison, we also employed the parameter-free characteristics of the AFA module to construct a parameter-free adaptive aggregation, named "Parameter-free AA". The results demonstrate that our adaptive aggregation captures more discriminative features than linear aggregation does, and that the parameterized adaptive aggregation (Point2Node) is more effective than the parameter-free one.
Methods  model size  mIoU

Baseline  10.98M  56.70
Point2Node  11.14M (+1.46%)  62.96 (+11.04%)
As shown in Table 4, we significantly improve the performance (+11.04%, relative) while adding very few parameters (+1.46%). In our Point2Node, we frequently employ different MLPs, and we control the reduction rate $r$ to reduce the parameters used in the adjacency matrix computation and the parameterized characteristics.
Methods  OA

3DCNN [3dcnn]  73.0 
PointNet [pointnet]  73.9 
TCDP [tcdp]  80.9 
PointNet++ [pointnet++]  84.5 
PointCNN [PointCNN]  85.1 
ACNN [ACNN]  85.4 
PointWeb [pointweb]  85.9 
Point2Node  86.3 
ScanNet Semantic Voxel Labeling
The ScanNet [scannet] dataset contains over 1,500 scanned and reconstructed indoor scenes with 21 semantic categories. The dataset provides a 1,201/312 scene split for training and testing. We followed the same data preprocessing strategies as for S3DIS and report the per-voxel accuracy (OA) as the evaluation metric. Table 5 shows the comparison between our Point2Node and other competitive methods. Our method achieves state-of-the-art performance, owing to its more effective feature learning.
Methods  input  #points  OA 

PointNet [pointnet]  89.2  
SCN [scnet]  90.0  
PointNet++ [pointnet++]  90.7  
KCNet [KCNet]  91.0  
PointCNN [PointCNN]  92.2  
DGCNN [dgcnn]  92.2  
PCNN [pcnn]  92.3  
Point2Sequence [Point2Sequence]  92.6  
ACNN [ACNN]  92.6  
Point2Node  93.0  
SONet [sonet]  90.9  
PointNet++ [pointnet++]  91.9  
PointWeb [pointweb]  92.3  
PointConv [pointconv]  92.5  
SpiderCNN [SpiderCNN]  92.4  
SONet [sonet]  93.4 
ModelNet40 Shape Classification
The ModelNet40 [modelnet] dataset consists of 12,311 3D mesh models from 40 man-made object categories, with an official 9,843/2,468 split for training and testing. Following PointCNN [PointCNN], we uniformly sampled 1,024 points from each mesh, without normal information, to train our model. For classification, we removed the decoder from the Point2Node model and report the overall accuracy (OA) as the evaluation metric.
To make a fair comparison with previous approaches that may take different information as input, we list the detailed experimental settings and results in Table 6. Point2Node achieves the highest accuracy among the methods taking the same input. Furthermore, our model outperforms most additional-input methods, performing only slightly worse than the best additional-input method, SONet [sonet]. These experimental results further demonstrate the effectiveness of our model in point cloud understanding.
Conclusions
We present a novel framework, Point2Node, for correlation learning among 3D points. Point2Node dynamically enhances the representation capability of nodes by exploiting correlations at different levels and adaptively aggregates more discriminative features to better understand point clouds. Ablation studies show that our performance gain comes from the new correlation learning modules. Results from a series of experiments demonstrate the effectiveness and generalization ability of our method.
Limitations. The correlation learning in Point2Node is based on refining features (currently those of PointCNN [PointCNN]) extracted on the nodes. While it can be considered a general feature enhancement tool, the final performance is also limited by the capability of the predetermined features. An ideal strategy would be to integrate feature learning and correlation learning into a joint pipeline, so that the inter-channel and inter-node correlations could influence feature extraction. We will explore the design of such an end-to-end network in the future.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grants No. 61771413, and U1605254).
References
Appendix
A. Relationship to Standard Non-local Operations
Standard non-local operations [nonlocal] are widely employed in recent literature to capture long-range dependencies in 2D images and videos. To capture correlations among pixels in videos, these operations map input features with 3D convolutions and construct the adjacency matrix with reshape operations and matrix multiplication.
In our paper, to capture the correlations among nodes, we generalize standard non-local operations to the graph domain, where the adjacency matrix more naturally weights the edges. We not only employ non-local operations on non-local nodes to capture long-range dependencies, but also extend them to neighboring nodes to encode local contextual information.
Moreover, our computation is less expensive. In [nonlocal], the dimension of the video data fed to the network is $T \times H \times W \times C$, where $T$ is the temporal length, $H$ the height, $W$ the width, and $C$ the number of channels. The dimension of our node tensor is $N \times C$, where $N$ is the number of nodes and $C$ is the number of channels per node. Because the 3D convolutions are replaced with 1D convolutions, and the frequent "reshape" operations are not needed, our computation is much more efficient.
B. Proof of Permutation Equivariance of Dynamic Node Correlation
Following [pat], we prove the permutation equivariance of our proposed modules as follows.
Lemma 1 (Permutation matrix and permutation function). A permutation matrix $P$ of size $N \times N$ defines a permutation function:
(17) 
Lemma 2. Given input $X$ and a permutation matrix $P$ of size $N \times N$,
(18) 
Proof.
Consider a permutation function for the input; using Eq. 17, we get:
which implies,
similarly,
(19) 
(20) 
Proposition 1. The Self Correlation (SC) operation is permutation-equivariant, i.e., given input $V$ and a permutation matrix $P$ of size $N \times N$,
Proof.
where the MLP is a node-wise function that shares weights among all the nodes;
using Eq. 19, we get:
Proposition 2. The Local Correlation (LC) operation is permutation-equivariant, i.e., given input $V$ and a permutation matrix $P$ of size $N \times N$,
Proof.
where the Dilated K Nearest Neighbors (DKNN) operation is a node-wise operation,
the two mapping functions are node-wise functions that share weights among all the nodes,
and the Local Nodes Max-Pooling (LNMP) operation is invariant to the local nodes' order and permutation-equivariant to the input order;
using Eq. 18, we get:
Proposition 3. The Non-local Correlation (NLC) operation is permutation-equivariant, i.e., given input $V$ and a permutation matrix $P$ of size $N \times N$,
Proof.
where the two mapping functions are node-wise functions that share weights among all the nodes;
using Eq. 18, we get:
C. Proof of Permutation Equivariance of Adaptive Feature Aggregation
Proposition 1. The Adaptive Feature Aggregation (AFA) operation is permutation-equivariant, i.e., given inputs $V_l$ and $V_{nl}$ and a permutation matrix $P$ of size $N \times N$,
Proof.
where the Global Node Pooling (GNP) operation is invariant to the input order,
and the Gate operation is shared among all the points;
in this way,
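Since most of the symbolic steps in the proofs above were lost in extraction, a quick numerical check of the two claims, under the same linear stand-in assumptions used earlier, may be helpful:

```python
import numpy as np

rng = np.random.default_rng(3)
N, C = 10, 6
V = rng.normal(size=(N, C))
Wt, Wp = rng.normal(size=(C, C)), rng.normal(size=(C, C))
perm = rng.permutation(N)

def nlc(X):
    # non-local-correlation-style update with a softmax-normalized adjacency
    A = (X @ Wt) @ (X @ Wp).T
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)
    return X + A @ X

# Proposition 3: permuting the input rows permutes the output rows identically
assert np.allclose(nlc(V[perm]), nlc(V)[perm])

# Appendix C: global node pooling is invariant to the input order
assert np.allclose(V[perm].mean(axis=0), V.mean(axis=0))
```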