Sample Code for Graph Partition Neural Networks
We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with several partitioning algorithms and also propose a novel variant for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.
Graphs are a flexible way of encoding data, and many tasks can be cast as learning from graph-structured inputs. Examples include prediction of properties of chemical molecules 
, answering questions about knowledge graphs
, natural language processing with parse-structured inputs (trees or richer structures like Abstract Meaning Representations), predicting properties of data structures or source code in programming languages [22, 2], and making predictions from scene graphs. Sequence data can be seen as a special case of a simple chain-structured graph. Thus, we are interested in training high-capacity neural network-like models on these types of graph-structured inputs. Graph Neural Networks (GNNs) [14, 29, 22, 28, 21, 11, 42] are one of the best contenders, although there has been much recent interest in applying other neural network-like models to graph data, including generalizations of convolutional architectures [9, 18, 30]. Gilmer et al. recently reviewed and unified many of these models.
An important issue that has not received much attention in GNN models is how information gets propagated across the graph. There are often scenarios in which information has to be propagated over long distances across a graph, e.g., when we have long sequences augmented with additional relationships between elements of the sequence, like in text, programming language source code, or temporal streams. The simplest approach, and the one adopted by almost all graph-based neural networks, is to follow synchronous message-passing systems from distributed computing theory. Specifically, inference is executed as a sequence of rounds: in each round, every node sends messages to all of its neighbors, the messages are delivered, and every node does some computation based on the received messages. While this approach has the benefit of being simple and easy to implement, it is especially inefficient when the task requires spreading information across long distances in the graph. For example, in processing sequence data, if we were to employ the above schedule for a sequence of length N, it would take N rounds, and thus on the order of N² messages, to propagate information from the beginning of the sequence to the end, and during training all messages must be stored in memory. In contrast, the common practice with sequence data is to use a forward pass followed by a backward pass at a cost of O(N) messages.
One possible approach for tackling this problem is to propagate information over the graph following some pre-specified sequential order, as in Bidirectional LSTMs. However, this sequential solution has several issues. First, if a graph used for training has large diameter, the unrolled GNN computational graph will be large (cf. Bidirectional LSTMs on long sequences). This leads to fundamental issues with learning (e.g., vanishing/exploding gradients) and implementation difficulties (i.e., resource constraints). Second, sequential schedules are typically less amenable to efficient acceleration on parallel hardware. More recently, Gilmer et al.  attempted to tackle the first problem by introducing a “dummy node” with connections to all nodes in the input graph, meaning that all nodes are at most two steps away from each other. However, we note that the graph structure itself often contains important information, which is modified by adding additional nodes and edges.
In this work, we propose graph partition neural networks (GPNN) that exploit a propagation schedule combining features of synchronous and sequential propagation schedules. Concretely, we first partition the graph into disjoint subgraphs and a cut set, and then alternate steps of synchronous propagation within subgraphs with synchronous propagation within the cut set. In Sect. 3, we discuss different propagation schedules on an example, showing that GPNNs can be substantially more efficient than standard GNNs, and then present our model formally. Finally, we evaluate our model in Sect. 4 on a variety of semi-supervised classification benchmarks. The empirical results suggest that our models are either superior to or comparable with state-of-the-art learning systems on graphs.
There are many neural network models for handling graph-structured inputs. They can be roughly categorized into generalizations of recurrent neural networks (RNNs) [13, 14, 29, 34, 37, 22, 25, 28, 21]
and generalizations of convolutional neural networks (CNNs)[7, 9, 18, 30]. Gilmer et al.  provide a good review and unification of many of these models, and they present some additional model variations that lead to strong empirical results in making predictions from molecule-structured inputs.
In RNN-like models, the standard approach is to propagate information using a synchronous schedule. In convolution-like models, the node updates mimic standard convolutions where all nodes in a layer are updated as functions of neighboring node states in the previous layer. This leads to information propagating across the graph in the same pattern as synchronous schedules. While our focus has been mainly on the RNN-like model of Li et al. , it would be interesting to apply our schedules to the other models as well.
Some of the RNN-based neural network models operate on restricted classes of graphs and employ sequential or sequential-like schedules. For example, recursive neural networks [13, 33] and tree-LSTMs have bidirectional variants that use fully sequential schedules. The agent modeling of Sukhbaatar et al. can be viewed as a GNN model with a sequential schedule, where messages are passed inwards towards a master node that aggregates messages from different agents, and then outwards from the master node to all the agents. The difference in our work is the focus on graphs with arbitrary structure (not necessarily a sequence or tree). Recently, Marino et al. developed an attention-like mechanism to dynamically select a subset of graph nodes to propagate information from, but the propagation is synchronous amongst selected nodes.
Recently, Hamilton et al. proposed the graph sample and aggregate (GraphSAGE) method. It first samples a neighborhood graph for each node, which can be regarded as overlapping partitions of the original graph; an improved graph convolutional network (GCN) is then applied to each neighborhood graph independently. They show that this partition-based strategy facilitates unsupervised representation learning on large scale graphs.
An area where scheduling has been studied extensively is in the probabilistic inference literature. It is common to decompose a graph into a set of spanning trees and sequentially update the tree structures . Graph partition based schedules have been explored in belief propagation (BP) , generalized belief propagation (GBP) [48, 43], generalized mean-field inference [45, 46] and dual decomposition based inference [19, 49]. In generalized mean-field inference , a graph partition algorithm, e.g., graph cut, is applied to obtain the clusters of nodes. A sequential update schedule among clusters is adopted to perform variational inference. Zhang et al.  adopt a partition-based strategy to better distribute the dual decomposition based message passing algorithm for high order MRF. The junction tree algorithm  can also be viewed as a partition based inference where the partition is obtained by finding the maximum spanning tree on the weighted clique graph. Each node of the junction tree corresponds to a cluster of nodes, i.e., maximal clique, in the original graph. A sequential update can then be executed on the junction tree. See also [10, 38, 36] for more discussion of sequential updates in the context of belief propagation. Finally, the question of sequential versus synchronous updates arises in numerical linear algebra. Jacobi iteration uses a synchronous update while Gauss-Seidel applies the same algorithm but according to a sequential schedule.
In this section, we briefly recapitulate graph neural networks (GNNs) and then describe our graph partition neural networks (GPNN). A graph G = (V, E) has nodes V and directed edges E ⊆ V × V. We focus on directed graphs, as our approach readily applies to undirected graphs by splitting any undirected edge into two directed edges. We denote the out-going neighborhood of a node v as N_out(v) = {u ∈ V | (v, u) ∈ E}, and similarly, the incoming neighborhood as N_in(v) = {u ∈ V | (u, v) ∈ E}. We associate an edge type c_(u,v) ∈ {1, …, C} with every edge (u, v), where C is some pre-specified total number of edge types. Such edge types are used to encode different relationships between nodes. Note that one can also associate multiple edge types with the same edge, which results in a multi-graph. We assume one edge type per directed edge to simplify the notation here, but the model can be easily generalized to the multi-edge case.
Graph neural networks [29, 22] can be viewed as an extension of recurrent neural networks (RNNs) to arbitrary graphs. Each node v in the graph is associated with an initial state vector h_v^0 at time step 0. Initial state vectors can be observed features or annotations as in . At time step t, an outgoing message is computed for each edge by transforming the source state according to the edge type, i.e.,

m_(u,v)^t = M_{c_(u,v)}(h_u^t),

where M_c is a message function, which could be the identity or a fully connected neural network. Note the subscript c, indicating that different edges of the same type share the same instance of the message function. We then aggregate all messages at the receiving nodes, i.e.,

m̄_v^t = A({m_(u,v)^t | u ∈ N_in(v)}),

where A is the aggregation function, which may be a summation, average or max-pooling function. Finally, every node updates its state vector based on its current state vector and the aggregated message, i.e.,

h_v^{t+1} = U(h_v^t, m̄_v^t),

where U is the update function, which may be a gated recurrent unit (GRU), a long short-term memory (LSTM) unit, or a fully connected network. Note that all nodes share the same instance of the update function. The described propagation step is repeatedly applied for a fixed number of time steps T to obtain final state vectors h_v^T. A node classification task can then be implemented by feeding these state vectors to a fully connected neural network which is shared by all nodes. Back-propagation through time (BPTT) is typically adopted for learning the model.
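The propagation step above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the sum stands in for the aggregation function A, a linear map per edge type stands in for M_c, and a simple additive tanh cell stands in for the GRU update; the name `gnn_step` is ours.

```python
import numpy as np

def gnn_step(h, edges, msg_weights):
    """One synchronous GNN propagation step (illustrative sketch).

    h           : dict node -> state vector (1-D np.ndarray)
    edges       : list of (src, dst, edge_type) triples
    msg_weights : dict edge_type -> weight matrix (stands in for the message function M_c)
    """
    dim = len(next(iter(h.values())))
    agg = {v: np.zeros(dim) for v in h}
    # 1) compute and deliver a message along every edge
    for src, dst, etype in edges:
        msg = msg_weights[etype] @ h[src]   # message function, shared per edge type
        agg[dst] += msg                     # sum aggregation at the receiver
    # 2) every node updates its state from its old state and the aggregated message
    return {v: np.tanh(h[v] + agg[v]) for v in h}
```

Running this function repeatedly for T steps yields the final state vectors used for classification.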
The above inference process is described from the perspective of an individual node. If we look at the same process from the graph view, we observe a synchronous schedule in which all nodes receive and send messages at the same time, cf. the illustration in Fig. 1(d). A natural question is to consider different propagation schedules in which not all nodes in the graph send messages at the same time, e.g., sequential schedules, in which nodes are ordered in some linear sequence and messages are sent only from one node at a time. A mix of the two ideas leads to our Graph Partition Neural Networks (GPNN), which we will discuss before elaborating on how to partition graphs appropriately. Finally, we discuss how to handle initial node labels and node classification tasks.
We first consider the example graph in Fig. 1(a). A corresponding computational graph that shows how information is propagated from one time step to the next using the standard (synchronous) propagation schedule is shown in Fig. 1(d). The example graph's diameter is 5, and it hence requires at least 5 propagation steps to spread information over the whole graph. Fig. 1(c) instead shows two possible sequential schedules that propagate information between two distant nodes, one for each direction. These visualizations show that (i) a full synchronous propagation schedule requires significant computation at each step, and (ii) a sequential propagation schedule, in which we only propagate along sequences of nodes, results in very sparse and deep computational graphs. Moreover, we found experimentally that sequential schedules require multiple propagation rounds across the whole graph, resulting in an even deeper computational graph.
In order to achieve both efficient propagation and tractable learning, we propose a new propagation schedule that follows a divide and conquer strategy. In particular, we first partition the graph into disjunct subgraphs. We will explain the details of how to compute graph partitions below. For now, we assume that we already have subgraphs such that each subgraph contains a subset of nodes and the edges induced by this subset. We will also have a cut set, i.e., the set of edges that connect different subgraphs. One possible partition of our example is visualized in Fig. 1 (b).
In GPNNs, we alternate between propagating information in parallel local to each subgraph (making use of highly parallel computing units such as GPUs) and propagating messages between subgraphs. Our propagation schedule is shown in Alg. 1. To understand the benefit of this schedule, consider a broadcasting problem over the example graph in Fig. 1. When information from any one node has reached all other nodes in the graph for the first time, this problem is considered as solved. We will compare the number of messages required to solve this problem for different propagation schedules.
Synchronous propagation: Fig. 1(d) shows that a synchronous step requires 10 messages. Broadcasting requires sufficient propagation steps to cover the graph diameter (in this case, 5), giving a total of 5 × 10 = 50 messages.
Partitioned propagation: For simplicity, we analyze the case in which each intra-subgraph phase runs for D_max steps, where D_max is the maximum diameter of the subgraphs. Using the partitioning in Fig. 1(e), each step of intra-subgraph propagation requires 8 messages, so after D_max steps (8 · D_max messages) the broadcast problem is solved within each subgraph. Inter-subgraph propagation requires 2 messages in this example, giving 8 · D_max + 2 messages per outer loop iteration in Alg. 1. With enough outer iterations to broadcast between all subgraphs, the total message count remains well below that of the synchronous schedule on this example.
In general, our propagation schedule requires no more messages than the synchronous schedule to solve broadcast (if the number of subgraphs is set to 1 or to the number of nodes, our schedule reduces to the synchronous one). We analyze the number of messages required to solve the broadcast problem on chain graphs in detail in Sect. A.1. Overall, our method avoids the large number of messages required by synchronous schedules, while avoiding the very deep computational graphs required by sequential schedules. Our experiments in Sect. 4 show that this makes learning tractable even on extremely large graphs.
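The alternating schedule of Alg. 1 can be summarized in code. This is a sketch with hypothetical helper names (`gpnn_propagate`, `step_fn`), not the released implementation; `step_fn` is any single synchronous propagation step over a given edge set.

```python
def gpnn_propagate(h, subgraph_edges, cut_edges, num_outer, intra_steps, step_fn):
    """Alternate intra-subgraph and inter-subgraph propagation (sketch of Alg. 1).

    h              : dict node -> state
    subgraph_edges : list of edge lists, one per subgraph
    cut_edges      : edges connecting different subgraphs
    step_fn        : function (states, edges) -> states, one synchronous step
    """
    for _ in range(num_outer):
        # propagate locally inside every subgraph (parallelizable across subgraphs)
        for _ in range(intra_steps):
            for edges in subgraph_edges:
                h = step_fn(h, edges)
        # then exchange information across subgraphs along the cut set
        h = step_fn(h, cut_edges)
    return h
```

With a max-passing `step_fn`, two outer iterations broadcast a value across a 4-node chain split into two sub-chains, mirroring the broadcast argument above.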
We now investigate how to construct graph partitions. First, since partition problems in graph theory are typically NP-hard, we only look for approximations in practice. A simple approach is to re-use the classical spectral partition method. Specifically, we follow the normalized cut method in  and use the random walk normalized graph Laplacian matrix L_rw = I − D⁻¹W, where I is the identity matrix, D is the degree matrix and W is the weight matrix of the graph (i.e., the adjacency matrix if no weights are present).
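As a small illustration (our own code, not the paper's), the random walk normalized Laplacian can be computed directly from the weight matrix:

```python
import numpy as np

def random_walk_laplacian(W):
    """L_rw = I - D^{-1} W for a weighted adjacency matrix W (illustrative sketch)."""
    d = W.sum(axis=1)                    # node degrees (row sums of W)
    d[d == 0] = 1.0                      # guard isolated nodes against division by zero
    return np.eye(W.shape[0]) - W / d[:, None]
```

The spectral method then clusters nodes using the eigenvectors of this matrix associated with its smallest eigenvalues. Note that each row of L_rw sums to zero, since D⁻¹W is row-stochastic.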
However, the spectral partition method is slow and hard to scale to large graphs. For performance reasons, we developed the following heuristic based on a multi-seed flood fill partition algorithm, as listed in Alg. 2. We first randomly sample the initial seed nodes, biased towards nodes which are labeled and have a large out-degree. We maintain a global dictionary assigning nodes to subgraphs, and initially assign each selected seed node to its own subgraph. We then grow the dictionary using flood fill, attaching unassigned nodes that are direct neighbors of a subgraph to that subgraph. To avoid bias towards the first subgraph, we randomly permute the order of subgraphs at the beginning of each round. This procedure is applied repeatedly until no subgraph grows anymore. There may still be disconnected components left in the graph, which we assign to the smallest subgraph found so far to balance subgraph sizes.
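A minimal sketch of such a multi-seed flood fill follows. It simplifies Alg. 2 in two ways we should flag: the seeds are passed in rather than sampled by label and out-degree, and each subgraph grows by one BFS layer per round.

```python
import random

def multi_seed_flood_fill(adj, seeds):
    """Grow one subgraph per seed, one BFS layer per round (sketch of Alg. 2).

    adj   : dict node -> list of neighbor nodes
    seeds : initial seed nodes, one per subgraph
    Returns a dict mapping every node to a subgraph index.
    """
    assign = {s: i for i, s in enumerate(seeds)}
    frontiers = [[s] for s in seeds]
    grown = True
    while grown:
        grown = False
        order = list(range(len(seeds)))
        random.shuffle(order)              # avoid bias towards the first subgraph
        for i in order:
            new_frontier = []
            for u in frontiers[i]:         # attach unassigned direct neighbors
                for v in adj[u]:
                    if v not in assign:
                        assign[v] = i
                        new_frontier.append(v)
                        grown = True
            frontiers[i] = new_frontier
    # attach nodes of disconnected components to the smallest subgraph
    sizes = [list(assign.values()).count(i) for i in range(len(seeds))]
    smallest = sizes.index(min(sizes))
    for v in adj:
        if v not in assign:
            assign[v] = smallest
    return assign
```

On a path graph with seeds at both endpoints, this yields two balanced sub-chains.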
In practice, problems using graph-structured data sometimes (1) do not have observed features associated with every node; (2) have very high-dimensional sparse features per node. We develop two types of models for the initial node labels: embedding-input and feature-input. For embedding-input, we introduce learnable node embeddings into the model to solve challenge (1), inspired by other graph embedding methods. For nodes with observed features we initialize the embeddings to these observations, and all other nodes are initialized randomly. All embeddings are fed to the propagation model and are treated as learnable parameters. For feature-input, we apply a sparse fully-connected network to the input features to tackle challenge (2). The dimension-reduced feature is then fed to the propagation model, and the sparse network is learned jointly with the rest of the model.
We also empirically found that concatenating the input features with the final embedding produced by the propagation model is helpful in boosting the performance.
We test our model on a variety of semi-supervised tasks (our code is released at https://github.com/Microsoft/graph-partition-neural-network-samples): document classification on citation networks; entity classification in a bipartite graph extracted from a knowledge graph; and distantly-supervised entity extraction. We then compare different partition methods exploited by our model. We also compare the effectiveness of different propagation schedules. We follow the datasets and experimental setups in . The statistics are summarized in Tab. 1, revealing that the datasets vary widely in scale, label rate and feature dimension. We report the details of hyper-parameters for all experiments in the appendix.
|Dataset||#Nodes||#Edges||#Classes||Feature dim.||Label rates|
|NELL||65,755||266,144||210||5,414||0.1, 0.01, 0.001|
We first discuss experimental results on three citation networks: Citeseer, Cora and Pubmed . The datasets contain sparse bag-of-words feature vectors for each document and a list of citation links between documents. Documents and citation links are regarded as nodes and edges while constructing the graph.
For each class, a fixed number of instances are sampled as labeled data, 1,000 instances as test data, and the rest are used as unlabeled data. The goal is to classify each document into one of the predefined classes. We use the same data splits as in prior work, with an additional validation set of 500 labeled nodes for tuning hyperparameters.
The experimental results are shown in Tab. 2, with baseline results taken directly from the respective papers. We see that GPNN is on par with other state-of-the-art methods on these small graphs. We also conducted experiments with random splits; results are reported in the appendix. We found these datasets easy to overfit due to their small size, and therefore use feature-input rather than embedding-input, as the latter increases the model capacity and with it the risk of overfitting. We also show t-SNE visualizations of the node representations produced by the propagation models of GGNN and GPNN on the Cora dataset in Fig. 2(a) and (b) respectively. The visualizations show that the node representations of GPNN are better separated.
Next, we consider the entity classification task on the NELL dataset extracted from the knowledge graph first presented in . A knowledge graph consists of a set of entities and a set of directed, labeled edges (i.e., different types of relations). Following , each triplet (e1, r, e2) of two entities and a relation in the knowledge graph is split into two tuples: we assign separate relation nodes r1 and r2 to the relation and obtain (e1, r1) and (e2, r2). Entity nodes are associated with sparse feature vectors. We follow  to extend the features by assigning a unique one-hot representation to every relation node, resulting in a high-dimensional sparse feature vector per node. As in prior work, an additional validation set of labeled nodes is used for tuning hyperparameters; the chosen hyperparameters are then used for the other label rates. The semi-supervised task considers three different label rates per class in the training set: 0.1, 0.01 and 0.001. We run the released code of GCN with the reported hyperparameters in . Since we did not observe overfitting on this dataset, we choose the embedding-input variant as the input model. The results are shown in Tab. 2, where we see that our model outperforms competitors under the most challenging label rate (0.001) and obtains comparable results with the state of the art at the other label rates.
Finally, we consider the DIEL (Distant Information Extraction using coordinate-term Lists) dataset . This dataset constructs a bipartite graph where nodes are medical entities and texts (referred to as mentions and coordinate lists in the original paper). Texts contain some facts about the medical entities. Edges of the graph are links between entities and texts. Each entity is associated with a pre-extracted sparse feature vector. The goal is to extract medical entities from text given the sparse feature vectors and the graph. As shown in Tab. 1, this dataset is very challenging due to its extremely large scale and very high-dimensional sparse features. Note that we attempted to run the released code of the GCN model on this dataset, but ran out of memory. Thus, we adapted the public implementation of GCN to make it run successfully on this dataset, and also implemented GCN with our partition-based schedule.
We follow the exact experimental setup of [6, 47], including the data splits, preprocessing of entity mentions and coordinate lists, and evaluation. We randomly sample a portion of the training nodes as the validation set. Following [6, 47], we regard the top-k entities returned by a model as positive instances and compute recall as the evaluation metric. Average recall over multiple runs is reported in Tab. 3, and we see that GPNN outperforms all other models. Note that since Freebase is used as ground truth and some entities are not present in the texts, recall has an upper bound below 100%.
|Method||Average recall (%)|
|Planetoid (Transductive)||50.00|
|Planetoid (Inductive)||50.10|
|GCN + Partition||48.47|
We now compare the two partition methods we considered for our model: spectral partition and our modified multi-seed flood fill. We use the NELL dataset as a benchmark and report the average validation accuracy over multiple runs in Tab. 4, in which we also report the average runtime of the partition process. The accuracies of the trained models do not allow for a clear conclusion as to which method to use, and in our further experiments they seem to depend highly on the number of subgraphs, the connectivity of the input graphs, optimization and other factors. However, our multi-seed flood fill partition method is substantially faster and is efficiently applicable to very large graphs.
Besides the synchronous and our partition based propagation schedules, we also investigated two further schedules based on a sequential order and a series of minimum spanning trees (MST).
To generate a sequential schedule, we first perform a graph traversal via breadth-first search (BFS), which gives us a visiting order. We then split the edges into those that follow the visiting order and those that violate it. The edges in each class form a directed acyclic graph (DAG), and we construct a propagation schedule from each DAG following the principle that every node sends messages once it has received all messages from its parents and updated its own state. An example of the schedule is given in the appendix. Note that this sequential schedule reduces to a standard bidirectional recurrent neural network on a chain graph.
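The BFS-based edge split can be sketched as follows (our own illustrative code, not the released implementation; it assumes all nodes are reachable from the chosen root):

```python
from collections import deque

def sequential_schedule(adj, root):
    """Split directed edges by a BFS visiting order into a 'forward' DAG
    (edges that follow the order) and a 'backward' DAG (edges that violate it).

    adj : dict node -> list of neighbors (directed edges node -> neighbor)
    """
    order, seen, queue = [], {root}, deque([root])
    while queue:                           # standard BFS to fix a visiting order
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    rank = {v: i for i, v in enumerate(order)}
    forward = [(u, v) for u in adj for v in adj[u] if rank[u] < rank[v]]
    backward = [(u, v) for u in adj for v in adj[u] if rank[u] > rank[v]]
    return order, forward, backward
```

Propagating along `forward` and then `backward` in topological order recovers the bidirectional-RNN behavior on a chain.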
For the MST schedule, we find a sequence of minimum spanning trees as follows. We first assign random positive weights to every edge and then apply Kruskal's algorithm to find an MST. Next, we increase the weights of the edges which are present in the MSTs found so far, discouraging their reuse. This process is iterated until we have found T MSTs, where T is the total number of propagation steps.
We compare all four schedules by varying the number of propagation steps on the Cora dataset. The validation accuracies are shown in Fig. 2(c). To clarify, assuming the graph is singly connected, the number of edges used per propagation step of MST, Sequential, Synchronous and Partition in Fig. 2(c) are |V| − 1, |E|, |E| and |E| respectively, where V and E are the sets of nodes and edges. We also show the average results of multiple runs with different random seeds on Cora in Tab. 5.
|MST||59.94% ± 0.89||71.83% ± 0.96||77.1% ± 0.72|
|Sequential||73.04% ± 1.93||77.55% ± 0.65||74.89% ± 1.26|
|Synchronous||67.36% ± 1.44||80.15% ± 0.80||80.06% ± 0.98|
|Partition||68.1% ± 1.98||80.27% ± 0.78||80.12% ± 0.93|
In these results, the meaning of one propagation step varies. For the synchronous schedule, a propagation step means that every node sent and received messages once and updated its state. For the sequential schedule, it means that messages from all roots of the two DAGs were sent to all the leaves. For the MST-based schedule, it means sending messages from the root to all leaves on one minimum spanning tree. For our partition schedule, it means one outer loop of the algorithm. In this sense, messages are propagated furthest through the graph by the sequential schedule within one propagation step. This becomes visible in the results for a single propagation step, where the sequential schedule yields the highest accuracy. However, when increasing the number of propagation steps, the computational graph associated with the sequential schedule becomes extremely deep, making the learning problem very hard. Our proposed partition schedule performs similarly to the synchronous schedule (while requiring less computation), and better than the other asynchronous schedules when using more than a single propagation step.
We presented graph partition neural networks, which extend graph neural networks. Relying on graph partitions, our model alternates between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. Moreover, we propose a modified multi-seed flood fill for fast partitioning of large scale graphs. Empirical results show that our model performs better than or comparably to state-of-the-art methods on a wide variety of semi-supervised node classification tasks. Moreover, in contrast to many existing models, GPNNs are able to handle extremely large graphs well.
There are quite a few exciting directions to explore in the future. One is to learn the graph partitioning as well as the GNN weights, using a soft partition assignment. Other types of propagation schedules which have proven useful in probabilistic graphical models are also worthwhile to explore in the context of GNNs. To further improve the efficiency of propagating information, different nodes within the graph could share some memory, which mimics the shared memory model in the theory of distributed computing. Perhaps most importantly, this work makes it possible to run GNN models on very large graphs, which potentially opens the door to many new applications.
In this section, we revisit the broadcast problem on bidirectional chain graphs. We show that our propagation schedule has advantages over the synchronous one via the following proposition.
Let the graph be a bidirectional chain of N nodes. We have: (1) the synchronous propagation schedule requires 2(N − 1)² messages to solve the problem; (2) if we partition the chain evenly into K sub-chains, for 1 ≤ K ≤ N, the GPNN propagation schedule can solve the problem with 2(N − K)² + 2(K − 1)² messages.
We first analyze the synchronous propagation schedule. At each round, it needs 2(N − 1) messages (one per directed edge) to propagate information one step. Since it requires at least N − 1 steps for a message from one endpoint of the chain to reach the other, the number of messages to solve broadcast is 2(N − 1)².
We now turn to our schedule. Since the chain is evenly partitioned, each sub-chain has N/K nodes, and we need N/K − 1 propagation steps to traverse a sub-chain, so we use N/K − 1 intra-subgraph steps per phase. The number of messages required by a single sub-chain during the intra-subgraph propagation phase is 2(N/K − 1)², so all sub-chains collectively require 2K(N/K − 1)² messages per outer loop. Between intra-subgraph phases, we perform one step of inter-subgraph propagation to transfer messages over the cut edges between sub-chains. Each inter-subgraph step requires 2 messages per cut edge, i.e., 2(K − 1) messages in total. We need K outer loops to ensure that a message from any node can reach every other node, and, strictly speaking, the last inter-subgraph propagation step is unnecessary. So in total, we require K · 2K(N/K − 1)² + (K − 1) · 2(K − 1) = 2(N − K)² + 2(K − 1)² messages, which proves the proposition. ∎
One can see from the above proposition that if we take K = 1 or K = N, the number of messages of our schedule matches the synchronous one. We can also derive the optimal value of K = (N + 1)/2, resulting in (N − 1)² messages in total, i.e., a factor of 2 reduction compared to the synchronous schedule.
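The chain analysis can be checked numerically with a small self-contained simulation of the message counts (the function name is ours, and we assume the number of sub-chains divides the chain length evenly):

```python
def chain_broadcast_messages(n, k):
    """Messages to solve broadcast on a bidirectional chain of n nodes,
    partitioned evenly into k sub-chains (k = 1 recovers the synchronous count).
    Counts the schedule's messages directly rather than using a closed form.
    """
    m = n // k                                 # nodes per sub-chain (assumes k | n)
    total = 0
    for outer in range(k):                     # k outer loops cross all sub-chains
        # intra-subgraph phase: m - 1 synchronous steps inside each sub-chain,
        # each step costing 2(m - 1) messages per sub-chain
        total += k * 2 * (m - 1) * (m - 1)
        # inter-subgraph phase: 2 messages per cut edge (skipped after the last loop)
        if outer < k - 1:
            total += 2 * (k - 1)
    return total
```

For example, a 12-node chain costs 242 messages synchronously (k = 1), while k = 3 brings this down to 170.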
We train all models using Adam with early stopping, and clip the gradient norm to ensure that it does not exceed a fixed threshold. The maximum number of training epochs is the same for all experiments except NELL, which uses its own value. The learning rate, weight decay and the dimensions of the state vectors of GPNN are set per dataset (Cora, Citeseer, Pubmed, NELL and DIEL). The output model for Cora, Citeseer and NELL is just a softmax layer; for Pubmed and DIEL, we add one hidden layer with a nonlinear activation function before the softmax.
We include the results on citation networks with random splits in Table 6. From the table, we can see that our results are comparable with the state-of-the-art on these small scale datasets.
We also experimented with schedules determined by random partitions of the graph. In particular, for T-step propagation, we randomly sample a fixed proportion of edges from the whole edge set without replacement at each step and use them for the update. We summarize the results (multiple runs) on the Cora dataset in Table 7.
From the results, we can see that the best average accuracy of the random schedules is still lower than both the synchronous and our partition-based schedule, and roughly matches the spanning-tree schedule. The reason might be that random schedules typically need more propagation steps to spread information throughout the graph, and more propagation steps in GNNs may lead to issues in learning with BPTT.
The released code of GGNN is implemented in Torch. We implemented both our own version of GGNN and our model in TensorFlow. To ensure correctness, we first reproduced the experimental results of the original paper on the bAbI artificial intelligence (AI) tasks with our implementation of GGNN. Our code will be released soon. One challenging part is the implementation of synchronous propagation within subgraphs. We implicitly implement the parallelism by building one separate branch of the computational graph for each subgraph (i.e., using a Python for loop rather than tf.while_loop). This relies on TensorFlow optimizing the execution of the computational graph such that independent branches are executed in parallel, as described in . However, since we have no control over this optimization, this part could be improved by explicitly placing each branch on a separate computation device, similar to the multi-tower setup for training convolutional neural networks (CNNs) on multiple GPUs.