Neighborhood Enlargement in Graph Neural Networks

A Graph Neural Network (GNN) is an effective framework for representation learning and prediction on graph-structured data. A neighborhood aggregation scheme is applied in the training of GNNs and their variants, in which the representation of each node is computed by recursively aggregating and transforming the representations of its neighboring nodes. A variety of GNNs and variants have been built and have achieved state-of-the-art results on both node and graph classification tasks. However, despite the common neighborhood used in state-of-the-art GNN models, there is little analysis of the properties of the neighborhood in the neighborhood aggregation scheme. Here, we analyze the properties of the nodes, edges, and neighborhoods of the graph model. Our results characterize the efficiency of the common neighborhood used in state-of-the-art GNNs and show that it is not sufficient for the representation learning of the nodes. We propose a simple enlarged neighborhood which is likely to be more sufficient. We empirically validate our theoretical analysis on a number of graph classification benchmarks and demonstrate that our methods achieve state-of-the-art performance on the listed benchmarks. The implementation code is available at <https://github.com/CODE-SUBMIT/Neighborhood-Enlargement-in-Graph-Network>.


1 Introduction

Graph-structured data such as molecules and social, biological, and financial networks are learned through deep models with effective representations of the graph structure hamilton2017inductive . A variety of Graph Neural Networks (GNNs) have recently been developed for the representation learning of graph structures li2015gated ; hamilton2017representation ; kipf2016semi ; velivckovic2017graph . A recursive neighborhood aggregation scheme is broadly followed in the learning of Graph Neural Networks: each node aggregates the feature representations of its neighbors to compute its new feature representation. Iteratively, after k iterations of aggregation, each node is represented by a feature vector which captures the structural information within the node's k-hop neighborhood. A summary operation such as pooling is then applied to the learned representations of the nodes, which yields the representation of the entire graph.

Many GNN variants applying a variety of neighborhood aggregations, node-level operations, and graph-level poolings have been proposed scarselli2009graph ; battaglia2016interaction ; xu2018representation ; verma2018graph ; santoro2017simple . Structural information is learned in many tasks, including node classification, link prediction, and graph classification, and these models have achieved state-of-the-art performance. However, in the iterative neighborhood aggregation scheme, there is little analysis of the properties of the graph-structured data during representation learning. Formal analysis of the graph models during learning is also limited.

A theoretical framework for analyzing the properties of graph models is proposed. We formally characterize the properties of graph models in the iterative neighborhood aggregation scheme. In detail, we first introduce the Markov blanket, the second neighborhood, the bidirected graph, and extraverted edges. The close connection between the bidirected graph and the graph model is then analyzed. Furthermore, we show that a Markov blanket is generally larger than the common neighborhood used in recent GNN models atwood2016diffusion ; niepert2016learning ; zhang2018end ; yanardag2015deep ; normalization2015accelerating ; xu2018representation . We then analyze the effectiveness of the proposed second neighborhood and explore a strategy for the iterative neighborhood aggregation scheme in the learning of GNN models.

To formalize the above analysis mathematically, we first show that in the iterative neighborhood aggregation scheme, the graph model is very likely to be a bidirected graph with extraverted edges for a variety of GNN models. Then, the common neighborhood is smaller than the Markov blanket in the graph; hence, the common neighborhood is not sufficient to represent each node in the graph. We further show that the second neighborhood is very likely to be the Markov blanket.

Our main results are summarized as follows:

• We establish conditions for the learning of GNNs in the iterative neighborhood aggregation scheme and show that the graph model is very likely to be a bidirected graph if the conditions hold.

• In the iterative neighborhood aggregation scheme, we show that the common neighborhood is not sufficient for the learning of GNNs when the conditions in (1) hold.

• We identify the efficiency of the second neighborhood in the learning of GNNs when the conditions in (1) hold, and precisely characterize the kinds of graph structures capturing this efficiency.

• We develop a simple neighborhood-enlargement strategy and apply it to the graph structures.

The proposed method is validated via experiments on graph classification datasets, where the strategy of setting the neighborhood for each node is crucial to capturing the graph structures. In detail, the performance of GNNs with various aggregation functions atwood2016diffusion ; niepert2016learning ; zhang2018end ; xu2018powerful and WL tests shervashidze2011weisfeiler is compared with the proposed method. Our results confirm the efficiency of the proposed neighborhood enlargement for the learning of GNNs in the iterative neighborhood aggregation scheme. In addition, the proposed neighborhood-enlargement method outperforms the other baselines in test-set accuracy, achieving state-of-the-art performance on many graph classification benchmarks.

2 Related Work

In the following, both empirical and theoretical related work is reviewed.

2.1 Empirical study

The design of new GNNs is often based on empirical intuition. A GNN model was built to directly process most of the practically useful types of graphs scarselli2009graph ; the interaction network was introduced to reason about how objects in complex systems interact battaglia2016interaction ; in the context of spectral graph theory, a formulation of CNNs was developed defferrard2016convolutional ; circular fingerprints were applied to develop a standard molecular feature extraction to learn a graph duvenaud2015convolutional ; a general inductive framework that leverages node feature information for efficient generation of node embeddings was developed hamilton2017inductive ; an encoding of the molecular graph was used to build graph convolutions kearnes2016molecular ; a diffusion-convolution representation can be learned from graph-structured data for node classification atwood2016diffusion ; analogous to image-based convolutional networks, a general approach to extracting locally connected regions from graphs was developed niepert2016learning ; and both a localized graph convolution module and a novel SortPooling layer were designed for a deep graph model zhang2018end .

2.2 Theoretical study

Besides the empirical success of GNNs, mathematical studies of GNN properties have been made. The earliest GNN model can approximate measurable functions in probability scarselli2009graph . The RKHS of graph kernels was applied to build a graph architecture lei2017deriving . The Graph Isomorphism Network is theoretically motivated, and a theoretical framework for the sum aggregation was built to learn the graph network xu2018powerful . Besides, models based on classical theory have also been developed. For example, the WL subtree kernel shervashidze2011weisfeiler , which rests on the theoretical basis of the Weisfeiler-Lehman (WL) graph isomorphism test weisfeiler1968reduction , was developed; a differentiable graph pooling module was built to generate hierarchical representations of graphs ying2018hierarchical ; and a feature representation with an SVM classifier was applied to increase the classification accuracy over CNN algorithms and traditional graph kernels ivanov2018anonymous .

3 Preliminaries

Our notation is introduced here along with common GNN models. Let G = (V, E) denote a graph, where V is a set of vertices and E is a set of edges. Let X_v denote the feature vector of each node v ∈ V. In general, graph neural networks learn the representation of each node to provide solutions to the following two tasks: Node Classification, where y_v denotes the label of each node v and the goal is to learn a function mapping v to y_v; and Graph Classification, where y_G denotes the label of each graph G. Similarly, the goal is to learn a function mapping h_G to y_G, where h_G denotes the feature vector of the graph G.

GNN models atwood2016diffusion ; niepert2016learning ; zhang2018end ; yanardag2015deep ; normalization2015accelerating ; xu2018representation learn the representation of each node v, h_v, or of the graph G, h_G. Neighborhood aggregation is currently applied in the learning process of GNNs: the representation of each node is iteratively updated through aggregating the representations of its neighbors. In detail, to iteratively update h_v, h_v is first initialized as X_v, and the node aggregates the representations of its neighbors by applying an aggregation function which operates over the set of nodes N(v). Each node is then assigned a new representation vector, which is combined with its previous representation vector.

Formally, at the k-th layer of GNN models,

 h^k_{N(v)} = AGGREGATE_k({h^{k−1}_μ, ∀μ ∈ N(v)}) (1)
 h^k_v = σ(W_k ∙ COMBINE(h^{k−1}_v, h^k_{N(v)})) (2)

where h^k_v is the feature vector of node v at the k-th iteration/layer, N(v) is the set of nodes adjacent to v (commonly, the neighborhood of v), and h^k_{N(v)} is the aggregated representation of the neighbors of node v at the k-th iteration/layer. As far as we know, the variants of GNN models apply different architectures for AGGREGATE.
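The update in Equations (1)-(2) can be sketched in plain NumPy. This is an illustrative implementation only, not the released code: it assumes sum aggregation with concatenation as COMBINE, and the weight matrix is a fixed placeholder rather than a learned parameter.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_layer(h, neighbors, W):
    """One iteration of the neighborhood aggregation scheme.

    h:         dict node -> feature vector h^{k-1}
    neighbors: dict node -> list of adjacent nodes, N(v)
    W:         weight matrix applied after combining (Eq. 2)
    """
    new_h = {}
    for v, nbrs in neighbors.items():
        # Eq. (1): sum-aggregate the neighbors' previous representations
        h_nv = np.sum([h[u] for u in nbrs], axis=0)
        # Eq. (2): COMBINE by concatenation, then a linear map and nonlinearity
        new_h[v] = relu(W @ np.concatenate([h[v], h_nv]))
    return new_h

# Toy triangle graph with 2-dimensional node features
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
h0 = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([1.0, 1.0])}
W = np.full((2, 4), 0.5)  # placeholder weights; a real model learns W
h1 = gnn_layer(h0, neighbors, W)
```

Stacking k such layers gives each node a receptive field covering its k-hop neighborhood, as described in the introduction.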

The formulation for the pooling variant of GraphSAGE hamilton2017inductive is applied as

 h^k_{N(v)} = MAX({σ(W_pool ∙ h^{k−1}_μ + b), ∀μ ∈ N(v)}) (3)

where W_pool is a learned matrix and MAX denotes the operation of element-wise max-pooling. Similarly, the MAX operation can be replaced with the MEAN operation kipf2016semi , where MEAN denotes element-wise mean-pooling. Furthermore, the SUM operation xu2018powerful is also applied, in which the representations of the nodes in the neighborhood are accumulated as follows:

 h^k_{N(v)} = ∑_{μ∈N(v)} h^{k−1}_μ. (4)

The discriminative/representational power of the Graph Isomorphism Network (GIN) xu2018powerful is shown to be equal to the power of the WL test weisfeiler1968reduction . Besides, many other GNN variants can be represented similarly by Equations (1)-(4). For both node and graph classification, the node representation is used for prediction.
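The gap in discriminative power between the SUM aggregator of Equation (4) and the MEAN/MAX aggregators can be checked on a tiny example, following the multiset argument of xu2018powerful : mean and max cannot distinguish a multiset of neighbor features from the same multiset with its elements repeated, while sum can. This is an illustrative check, not code from the paper.

```python
import numpy as np

# Two multisets of neighbor features: {x} versus {x, x}
a = np.array([[1.0, 0.0]])
b = np.array([[1.0, 0.0], [1.0, 0.0]])

mean_equal = np.allclose(a.mean(axis=0), b.mean(axis=0))  # mean collapses multiplicity
max_equal = np.allclose(a.max(axis=0), b.max(axis=0))     # max collapses multiplicity
sum_equal = np.allclose(a.sum(axis=0), b.sum(axis=0))     # sum preserves multiplicity
```

Mean and max report the two neighborhoods as identical, whereas sum separates them, which is why GIN adopts sum aggregation.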

4 Proposed Approach

Before introducing the proposed work, several definitions are given. Then, in the iterative neighborhood aggregation scheme, the properties of nodes, edges, and neighborhoods in GNNs are analyzed. In particular, the common neighborhood applied in the state-of-the-art GNNs xu2018powerful ; kipf2016semi ; atwood2016diffusion ; niepert2016learning ; zhang2018end is not sufficient for the representation of nodes in the graph. Therefore, we propose a second neighborhood which is more sufficient than the common one.

The following definitions precede the theoretical analysis of the properties of nodes, edges, and neighborhoods in GNNs.

Definition 1 (Joint neighborhood function and simultaneous learning)

In the iterative neighborhood aggregation scheme, a GNN model learns the representations of the nodes through Equations (1)-(4) with variants of the aggregation operation. If the operation is the sum or mean operation, the update at the k-th layer of the GNN model is a joint function of the previous representations of the node and all of its neighbors. If the operation is the max operation, the update similarly selects one representation from the neighborhood through the max operation. Then, in the learning of the current state-of-the-art GNNs xu2018powerful ; kipf2016semi ; atwood2016diffusion ; niepert2016learning ; zhang2018end , the representations of all nodes are learned simultaneously in each iteration; that is, the joint neighborhood function of each node is learned simultaneously.

Definition 2 (Markov blanket)

A Markov blanket is a generalized concept of a set for a node in a graphical model, containing all the variables that shield the node from the rest of the network. More formally, the Markov blanket of a node v in a graph model is the set of nodes composed of its parents, its children, and its children's other parents; it is denoted by MB(v). Moreover, every node in the graph outside MB(v) is conditionally independent of v when conditioned on the set MB(v). Formally, for distinct nodes v and u with u ∉ MB(v), P(v | MB(v), u) = P(v | MB(v)).
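Definition 2 can be made concrete with a small helper that reads a directed graph as parent sets. The function name and the graph representation are ours, for illustration only.

```python
def markov_blanket(v, parents):
    """Markov blanket of v: its parents, its children, and its
    children's other parents (co-parents).

    parents: dict node -> set of parent nodes
    """
    children = {u for u, ps in parents.items() if v in ps}
    mb = set(parents.get(v, set())) | children
    for c in children:
        mb |= parents[c] - {v}  # the children's other parents
    return mb

# Collider example: a -> v -> c <- b, so MB(v) = {a, c, b}
parents = {"a": set(), "b": set(), "v": {"a"}, "c": {"v", "b"}}
mb_v = markov_blanket("v", parents)
```

Note that the co-parent b is in MB(v) even though it is not adjacent to v, which is exactly the gap between the Markov blanket and the common neighborhood analyzed below.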

Definition 3 (Neighborhood and second neighborhood)

A neighborhood of a vertex v in a graph G is the subgraph of G induced by all vertices adjacent to v, i.e., the graph composed of the vertices adjacent to v and all edges connecting vertices adjacent to v; it is denoted by N(v). Formally, if the length of every edge in the graph is set to one, then the common neighborhood contains all vertices at distance one from v and all edges connecting the corresponding nodes; it is denoted by N1(v). The second neighborhood of v in the graph contains all vertices at distance two from v and all edges connecting the corresponding nodes; it is denoted by N2(v).
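Under Definition 3, the common and second neighborhoods are the distance-one and distance-two vertex sets, which a two-level breadth-first search recovers. A minimal sketch (the function name is ours):

```python
from collections import deque

def neighborhood_sets(adj, v):
    """Return (N1, N2): vertices at distance exactly one and two from v."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == 2:
            continue  # no need to look past distance two
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    n1 = {u for u, d in dist.items() if d == 1}
    n2 = {u for u, d in dist.items() if d == 2}
    return n1, n2

# Path graph 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n1, n2 = neighborhood_sets(adj, 0)
```

For node 0 of the path graph, N1(0) = {1} and N2(0) = {2}; node 3 is at distance three and belongs to neither set.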

Definition 4

(Bidirected graph and extraverted edge). A bidirected graph is a graph in which each edge is given an independent orientation at each end. In particular, an extraverted edge is an edge whose arrows point away from the vertices at both of its ends.

Let G = (V, E) denote a graph, where V is a set of vertices and E is a set of edges. The graph G is very likely to be bidirected if condition (a) holds, and very likely to be non-bidirected if condition (b) holds.

(a). In the iterative neighborhood aggregation scheme with the mean or sum operation, written in the form of a joint neighborhood function over all neighbors, the graph G is likely to be a bidirected graph in the learning of the representation of each node.

(b). In the iterative neighborhood aggregation scheme with the max operation, written in the form of a joint neighborhood function in which one representation is chosen through the max operation, the graph G is not likely to be a bidirected graph in the learning of the representation of each node.

4.1 Analysis on condition (a)

Formally, at the k-th layer of GNN models with the sum operation, consider a node v and a node μ ∈ N(v), and suppose that the node v does not get information from μ. Then, following the sum operation for the node v, the representation of μ would not contribute to h^k_{N(v)}, so μ would not belong to N(v). This contradicts the fact that μ is adjacent to v. Therefore, v gets information from μ, and, directly forward, for every μ ∈ N(v), v and μ get information from each other.

Similarly, at the k-th layer of GNN models with the mean operation, the mean operation can be represented as a linear function of the sum operation, scaled by the total number of representation vectors in N(v). Therefore, the node v and the node μ get information from each other if these two nodes are adjacent.

Note that, as the node v and the node μ get information from each other, the edges in the graph G are extraverted edges.

4.2 Analysis on condition (b)

However, at the k-th layer of GNN models with the max operation, suppose G is a bidirected graph with extraverted edges in the learning of the representation of v. Then, for every μ ∈ N(v), v gets information from μ. This contradicts the max operation, under which v only gets information from one node in N(v). Therefore, in the iterative neighborhood aggregation scheme applying the max operation to learn GNN models, G is not likely to be a bidirected graph with extraverted edges.

4.3 Common neighborhood is not sufficient

In a bidirected graph G, a node v is conditionally dependent on three sets of nodes: its parents, its children, and its children's other parents, which together form the Markov blanket MB(v) of the node v.

Formally, for a node v in a bidirected graph G and a node μ ∈ N1(v), it follows directly that μ ∈ MB(v): μ is either a parent or a child of v. Moreover, a node u which shares an edge with at least one node in N1(v) is linked with v through that node, so u can be a co-parent of a child of v.

Besides, such a node u ∈ MB(v) need not belong to N1(v): as G is a bidirected graph, the node v gets information from the node u, yet u, which links with a node in N1(v), may be linked with v only at distance two. Therefore, N1(v) is a strict subset of MB(v).

Therefore, in the neighborhood aggregation scheme for the learning of GNN models, the common neighborhood N1(v) is not sufficient for the representation of the node v.

4.4 Second neighborhood is likely to be a Markov blanket

In a bidirected graph G, for every node v ∈ V, the Markov blanket of the node v, MB(v), is a subset of the enlarged neighborhood N1(v) ∪ N2(v).

Formally, suppose a node u ∈ MB(v) which is outside N1(v) ∪ N2(v). As G is a bidirected graph, the node u is linked with the node v through two edges and an intermediate node w. Denote the edge linking the node u and the node w as e1, and the edge linking w and v as e2.

As e1 and e2 are extraverted edges, w could be a child of v and u could be a parent of w. Therefore, u should be in N2(v). This contradicts the hypothesis that u is outside N1(v) ∪ N2(v).

From the other side, in a bidirected graph G, for every node v ∈ V, the enlarged neighborhood N1(v) ∪ N2(v) is a subset of the Markov blanket MB(v) of the node v.

Similarly, suppose a node u ∈ N1(v) ∪ N2(v) which is outside MB(v). Then u could be a parent of the node v, or a parent of one of v's children. If u is a parent of v, then u is directly linked with v, so u ∈ MB(v), which contradicts the hypothesis that u is outside MB(v). If u is a parent of one of v's children, then there is a node w which is a child of v and is linked with u, so u ∈ MB(v). This again contradicts the hypothesis that u is outside MB(v).

Therefore, in a bidirected graph G, for every node v ∈ V, the enlarged neighborhood N1(v) ∪ N2(v) is very likely to be equal to MB(v).

4.5 Enlarging the neighborhood

Following the above analysis, in the learning of the graph under the iterative neighborhood aggregation scheme in which condition (a) holds, the common neighborhood is not sufficient for learning the representation vector of each node. We propose an enlarged neighborhood, defined as N1(v) ∪ N2(v), which can be applied in the neighborhood aggregation scheme.

Besides, in case N2(v) is very large and contains many nodes, random sampling is applied to choose nodes from the second neighborhood; the chosen nodes are then used together with all nodes in N1(v).
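A minimal sketch of the enlargement-with-sampling step described above. The function name and the cap parameter `max_second` are our assumptions for illustration; the paper does not specify this interface.

```python
import random

def enlarged_neighborhood(adj, v, max_second=None, rng=random):
    """N1(v) united with N2(v); if the second neighborhood is large,
    keep only a random sample of it (max_second is an assumed cap)."""
    n1 = set(adj[v])
    n2 = {w for u in n1 for w in adj[u]} - n1 - {v}
    if max_second is not None and len(n2) > max_second:
        n2 = set(rng.sample(sorted(n2), max_second))
    return n1 | n2

# Star of short paths: 0 is adjacent to 1, 2, 3; each i is adjacent to i + 10
adj = {0: [1, 2, 3], 1: [0, 11], 2: [0, 12], 3: [0, 13],
       11: [1], 12: [2], 13: [3]}
full = enlarged_neighborhood(adj, 0)
capped = enlarged_neighborhood(adj, 0, max_second=1)
```

With no cap, node 0 aggregates over all six nodes within two hops; with `max_second=1`, it keeps its three direct neighbors plus one randomly sampled distance-two node.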

5 Experiments

Dataset. We evaluate on graph classification benchmarks: bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and social datasets (COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI5K) yanardag2015deep . For a fair comparison with the state-of-the-art, we use the same node settings. In detail, the node features are the categorical input features for the bioinformatics graphs, while node features are created for the social networks: all node feature vectors are set to be identical for the REDDIT datasets, and one-hot encodings of node degrees are used for the other social networks.

Experiment configurations. Our proposed neighborhood-enlargement strategy is evaluated under two variants: (1) the second neighborhood is applied in GINs xu2018powerful ; (2) nodes are randomly chosen and added to the common neighborhood in GNNs atwood2016diffusion ; niepert2016learning ; zhang2018end . Both methods are applied to train GINs for the classification experiments. As we will see, both methods show strong empirical performance: not only do they outperform the state-of-the-art, but they are also easy to implement. For the model configuration of the baseline GNNs atwood2016diffusion ; niepert2016learning ; zhang2018end and GINs xu2018powerful , sum, mean, and max-pooling are applied in the aggregation. Besides, MLPs with 1-layer perceptrons, a linear mapping, and ReLU are applied in the GNNs; GCN applies mean-1-layer and GraphSAGE hamilton2017inductive applies max-1-layer, respectively. The same graph-level readout (READOUT) is applied in all models in the experiments; specifically, sum readout and mean readout are applied on the bioinformatics and social datasets, respectively.
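The graph-level readout mentioned above is a permutation-invariant pooling over the final node representations. A minimal sketch (the function name is ours):

```python
import numpy as np

def readout(node_reprs, mode="sum"):
    """Pool node representations into one graph-level vector:
    sum readout (bioinformatics datasets) or mean readout (social datasets)."""
    H = np.stack(list(node_reprs.values()))
    return H.sum(axis=0) if mode == "sum" else H.mean(axis=0)

h_final = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
h_graph_sum = readout(h_final, "sum")
h_graph_mean = readout(h_final, "mean")
```

Both variants are invariant to node ordering, which is required for the graph-level vector to be a well-defined representation of the graph.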

For a fair comparison, 10-fold cross-validation is performed for all datasets. Both the average and standard deviation of validation accuracies across the 10 folds are reported. For all experiments, the numbers of GNN layers and MLPs follow the baselines. Batch normalization is applied to each hidden layer, and the Adam optimizer is used for training on all datasets; the initial learning rate and its decay schedule likewise follow the baselines. The other hyper-parameters for tuning each graph dataset are also the same as in the baseline models, including the batch size, the dropout ratio after the dense layer, the number of epochs, and the number of hidden units (one setting for social graphs and another for bioinformatics graphs). The best cross-validation accuracy averaged over the folds is selected for each epoch.

Baseline models. The state-of-the-art baseline models, including GNN variants, GIN variants, kernel-based models, and classical models, are compared on the graph classification tasks: (1) state-of-the-art deep learning graph network architectures, i.e., Diffusion-Convolutional Neural Networks (DCNN) atwood2016diffusion , PATCHY-SAN niepert2016learning , Deep Graph CNN (DGCNN) zhang2018end , and GraphSAGE kipf2016semi ; (2) state-of-the-art deep learning Graph Isomorphism Networks xu2018powerful , i.e., GIN-ε, which learns ε, and GIN-0, in which ε is fixed to 0, as well as the mean-MLPs, mean-1-layer, max-MLPs, and sum-1-layer variants, which replace the GIN-0 aggregation with mean or max-pooling, or replace MLPs with 1-layer perceptrons; (3) the WL subtree kernel-based model shervashidze2011weisfeiler , where a C-SVM is used as the classifier and the number of WL iterations is a tuned hyper-parameter; (4) Anonymous Walk Embeddings (AWL) ivanov2018anonymous .

Results. We validate our theoretical analysis of the neighborhood aggregation of GNNs and the proposed methods by comparing test accuracies. Although our theoretical analysis does not directly speak to the generalization and prediction ability of GNNs, it is expected that our proposed methods, based on the above analysis, can more accurately capture the graph structures of interest in comparison with the state-of-the-art on the listed benchmarks.

First, the proposed neighborhood-enlargement strategy applied on GIN-0 (NESG) outperforms the listed state-of-the-art GNNs, GINs, and other models on all the listed datasets, including the bioinformatics and social graph datasets. In detail, NESG outperforms the listed GNN variants, classical graph models, and GIN variants on each of the bioinformatics datasets (MUTAG, PROTEINS, PTC, and NCI1) and each of the social datasets (IMDB-B, COLLAB, RDT-M5K, and IMDB-M), with test accuracy higher in every case than the best performance among the baseline graph models.

6 Conclusion

In this paper, within the iterative neighborhood aggregation scheme, we analyzed the properties of the nodes, edges, and neighborhoods of GNNs. On the basis of our analysis, we identified the insufficiency of the common neighborhood applied in the state-of-the-art GNNs and GINs, and therefore proposed a new neighborhood which is more sufficient than the common one. However, our experiments show that this new neighborhood incurs a high computational cost when the graphs contain several hundred nodes, and the achieved performance is still far from satisfactory on some benchmarks. Therefore, future work will aim to reduce the computation and further improve performance.