
On Graph Classification Networks, Datasets and Baselines

by Enxhell Luzhnica, et al.

Graph classification receives a great deal of attention from the non-Euclidean machine learning community. Recent advances in graph coarsening have enabled the training of deeper networks and produced new state-of-the-art results in many benchmark tasks. We examine how these architectures train and find that performance is highly-sensitive to initialisation and depends strongly on jumping-knowledge structures. We then show that, despite the great complexity of these models, competitive performance is achieved by the simplest of models -- structure-blind MLP, single-layer GCN and fixed-weight GCN -- and propose these be included as baselines in future.




1 Introduction

Deep learning has produced remarkable results across the full breadth of machine learning research. For the most part this has been achieved through the reapplication of the two main architectures, the cnn and rnn, adapted to two Euclidean cases – omnidirectional (image-like) and unidirectional (series) – respectively. As such there is great interest in extending the general techniques to non-Euclidean cases and graph-structured data problems in particular.

These efforts are mostly inspired by the cnn, attempting to find suitable analogues to its core components, the convolutional and pooling operators. Early work set out to develop convolution-like graph operators. The focus has now turned to developing pooling operations, often referred to as coarsening in the context of graphs. Besides static methods (Luzhnica et al., 2019), differentiable pooling frameworks have been developed. DiffPool achieved state-of-the-art (sota) performance across many benchmark tasks (Ying et al., 2018); however, it requires a dense representation that is quadratic in memory. The Graph U-Net introduces a sparse method based on pruning nodes (Gao & Ji, 2019). Cangea et al. (2018) apply the method to graph classification by incorporating pools into a gcn model, achieving performance competitive with the sota with scalable memory requirements.

In this work we show that, under standard initialisation (Glorot & Bengio, 2010; He et al., 2015), using the gcn and pooling operator together results in vanishing gradients beyond the first layers. In addition, we show that it is possible to attain good performance on smaller benchmark tasks simply using a global pool (a simple mean or sum over the features of all nodes) followed by an mlp. Furthermore, to achieve results on a par with the Graph U-Net in all benchmarks, a single-layer gcn with a jumping-knowledge (jk) connection (Xu et al., 2018) from the input graph followed by an mlp is sufficient, whether the weights of the gcn are trained or not.

Considering the implications of these results, we primarily argue for the importance of including strong, simple baselines in evaluation. We also define an initialisation scheme that remedies the vanishing gradient issue by design though we find that this does not consistently improve performance.


This work was motivated by studies of network activations and gradient flow in deeper gnns with jk structures and pooling. We found that, at initialisation, activations into the network rapidly vanish and that throughout training the gradients flowed mostly into earlier layers. These findings prompt two questions: firstly, are deeper networks only trainable thanks to jk structures bypassing later layers? and secondly, how important are the later layers to performance anyway?

2 Preliminaries

We use the standard notation: a graph of $N$ nodes with $F$ features per node is represented by the pair $(\mathbf{A}, \mathbf{X})$, with adjacency matrix $\mathbf{A} \in \{0,1\}^{N \times N}$ and node feature matrix $\mathbf{X} \in \mathbb{R}^{N \times F}$.

Graph Convolution

ReLU activations and the improved gcn (Gao & Ji, 2019) are used throughout. This differs from the standard gcn in that $\hat{\mathbf{A}} = \mathbf{A} + 2\mathbf{I}$ is used in place of $\hat{\mathbf{A}} = \mathbf{A} + \mathbf{I}$, i.e. self-loops have a weight of 2.
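As an illustration, a minimal NumPy sketch of one such layer; the symmetric degree normalisation is an assumption carried over from the standard gcn, and all names are ours:

```python
import numpy as np

def improved_gcn_layer(A, X, W):
    """One GCN layer with double-weighted self-loops (A_hat = A + 2I),
    symmetric normalisation and a ReLU activation. A sketch of the
    'improved' gcn described above; normalisation details are assumed."""
    N = A.shape[0]
    A_hat = A + 2.0 * np.eye(N)              # self-loops with weight 2
    d = A_hat.sum(axis=1)                    # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)                # ReLU
```

With identity features and weights on a two-node graph, the layer simply returns the normalised adjacency, which makes the self-loop weighting easy to inspect.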


Top-k Pooling

Top-k pooling is used (Gao & Ji, 2019). The pooling operator retains a fraction $k$ of the nodes, where $k \in (0, 1]$ is a fixed hyperparameter held at the same value in all experiments. Nodes are dropped based on the ranked projection of their features onto a learnable vector, $\mathbf{p}$, as

$\mathbf{y} = \mathbf{X}\mathbf{p} / \lVert\mathbf{p}\rVert, \qquad \mathbf{i} = \mathrm{top}_k(\mathbf{y}), \qquad \mathbf{X}' = (\mathbf{X} \odot \tanh(\mathbf{y}))_{\mathbf{i}}, \qquad \mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}$

where $\mathbf{y}$ are the scores for each node (rows in $\mathbf{X}$) and $\mathbf{i}$ are the indices of the top-$k$ nodes based on their scores.
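A minimal sketch of this pooling step, assuming the formulation of Gao & Ji (2019): nodes are scored by projection onto a learnable vector, the best-scoring fraction is kept, and the kept features are gated with tanh of their score so the vector receives gradients. Names and tie-breaking are ours:

```python
import numpy as np

def topk_pool(A, X, p, k):
    """Top-k pooling sketch: keep the ceil(k*N) highest-scoring nodes
    and the subgraph they induce. Assumes the Gao & Ji (2019) scoring."""
    y = X @ p / np.linalg.norm(p)                  # per-node scores
    n_keep = int(np.ceil(k * X.shape[0]))
    idx = np.argsort(-y)[:n_keep]                  # indices of top nodes
    X_pooled = X[idx] * np.tanh(y[idx])[:, None]   # gated kept features
    A_pooled = A[np.ix_(idx, idx)]                 # induced subgraph
    return A_pooled, X_pooled, idx
```

Note the adjacency of the coarsened graph is just the rows and columns of the kept nodes, which is what keeps this method sparse and memory-scalable.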

Jumping Knowledge Networks

In node aggregating schemes, the range of nodes (analogous to the receptive field in cnns) that a node's representation draws from is strongly dependent on the neighbourhood structure (Xu et al., 2018). Jk-structures were introduced to allow some flexibility over the degree of aggregation, and thus even out the "range", by introducing layer-skipping connections. For a node, $v$, this takes the form

$\mathbf{x}_v^{\mathrm{final}} = \mathrm{AGG}\big(\mathbf{x}_v^{(1)}, \mathbf{x}_v^{(2)}, \ldots, \mathbf{x}_v^{(L)}\big)$

where the aggregation function $\mathrm{AGG}$ is typically concatenation, summation or an elementwise max, the result being passed to a classifier.
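The three aggregators can be sketched in a few lines; this is a generic illustration of the jk idea, not the authors' exact implementation:

```python
import numpy as np

def jk_aggregate(layer_reprs, mode="concat"):
    """Jumping-knowledge aggregation sketch: combine a node's
    representations from every layer by concatenation, summation
    or an elementwise max (Xu et al., 2018)."""
    if mode == "concat":
        return np.concatenate(layer_reprs, axis=-1)
    if mode == "sum":
        return np.sum(layer_reprs, axis=0)
    if mode == "max":
        return np.max(layer_reprs, axis=0)
    raise ValueError(f"unknown mode: {mode}")
```

Concatenation preserves which layer each feature came from at the cost of a wider classifier input; sum and max keep the dimensionality fixed.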

3 Removing JK & Initialisation

Whilst jk-connections were introduced to tackle the problem of node-specific range, in deeper networks they act as bypasses of the later layers, so a hierarchy of representations is not actually being produced. Clearly it runs counter to the core concept of allowing the range to vary over nodes if the higher ranges are not used. To test this we expose the gradient flow and activations in a net of four blocks of gcn+pool, with the final representation aggregated by a global mean and passed into an mlp. ReLU activations are used in the gcn. The gcn weights are initialised using Kaiming (He et al., 2015), while the pools are initialised using Glorot (Glorot & Bengio, 2010) (the authors note the mixed naming conventions here, but this seems to be what the community has settled on). We refer to this combination as the 'standard initialisation'. Under standard initialisation, layer activations decay into the network, gradients are vanishingly small and the latter part of the network is effectively static under backpropagation.

3.1 ReInit

To remedy this problem we propose a data-driven approach, similar to lsuv-initialisation (Mishkin & Matas, 2015), to maintain variance across layers. The idea is simply to initialise under some scheme and then pass an entire batch through each block in turn, scaling that block's weights by the inverse of the empirical standard deviation of its output so as to maintain variance, a process we refer to as ReInit. This is implemented as scaling factors that are set progressively, block by block, with the result that the activations entering every layer have approximately unit variance. We deviate from lsuv in not ortho-normalising, as there is no analogue that could be applied to the pool layers, so simply rescaling has a more consistent meaning over the network. We have also found that deriving a semi-analytic solution, in the footsteps of Glorot & Bengio (2010), is not possible for the gcn due to the structural asymmetries in neighbourhood aggregation: the expected variance is sensitive to the number and similarity of a node's neighbours to such a degree that properly accounting for these variations would require specific node-level information. This also allows ReInit to be applied on top of any initial scheme, so the 'shape' of the initial distribution is not fixed in that sense.
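The procedure can be sketched as follows, assuming each block is positive-homogeneous in its weights (true for linear and ReLU layers) and taking unit output variance as the target; `forward_fn` and all names are ours:

```python
import numpy as np

def reinit_scale(forward_fn, weights, X_batch, target_std=1.0):
    """ReInit sketch: after any base initialisation, pass a batch
    through each block in turn and rescale that block's weights by
    target_std / (empirical std of its output), so activations keep
    roughly unit variance deep into the network."""
    H = X_batch
    for i, W in enumerate(weights):
        out = forward_fn(W, H)
        sigma = out.std()
        weights[i] = W * (target_std / sigma)  # rescale this block
        H = forward_fn(weights[i], H)          # propagate with scaled W
    return weights
```

Because the scaling is purely empirical (per batch, per block), it sits on top of Kaiming, Glorot or any other base scheme without altering the shape of the weight distribution.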

4 Shallower, Simpler Networks

To see how much later gcn layers contribute to performance we tested three shallower networks on standard benchmarks. The models could be thought of as extreme ablations.

Structure-blind MLP

A three-layer mlp. The adjacency matrix is discarded; the features are globally pooled and passed as input. Three weight matrices with biases; ReLU activations. This model cannot see even the number of nodes, let alone their individual features or structural relationships.
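A minimal sketch of this baseline, with layer sizes and names left as assumptions:

```python
import numpy as np

def mlp_baseline(X, weights, biases):
    """Structure-blind baseline sketch: global mean-pool the node
    features (the adjacency matrix never enters), then a three-layer
    ReLU mlp producing class logits."""
    h = X.mean(axis=0)                      # global pool: one vector per graph
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)          # ReLU on hidden layers only
    return h
```

Mean pooling in particular hides the node count, which is what makes this baseline fully structure-blind.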

Single-layer JK GCN+MLP

A single-layer gcn with a jk-skip preceding the mlp described above. We test this setup both with the weights of the gcn fixed at their random initialisation values, denoted (r), and free to update. The fixed variant is intended to provide a minimal structural addition to the plain mlp.

5 Experiments & Results

Figure 1: Outputs of each layer during training with the standard initialization (top) and ours (bottom). Note the scale difference. With the standard initialization the outputs of all layers quickly converge to zero, while with ReInit they vary widely.
Figure 2: Gradients flowing into the weights of all layers with regular initialization (top) and ReInit (bottom). The gradients of all layers apart from the last mlp layer are almost 0 under the regular initialization. The reinitialized network manages to train the other layers, although noticeably smaller gradients flow into the latter layers, possibly by choice rather than a pathology of the network.

Figure 3: Training loss for the standard initialization and ReInit. The loss does not change for the standard initialization while with ReInit the network is successfully trained.

Figure 4: Output values under different training and initialisation routines when training for 300 epochs on the dd dataset. The first plot shows pre-activations vanishing in a simple jk-net under standard initialization, trained with Adam with weight decay. The second shows the same network trained without weight decay. The third has no weight decay and is initialized with ReInit. The last plot shows the performance of the three setups on the dd dataset (over 10 folds) as we vary the number of epochs.


We first present the comparison of activations, gradient flow and training dynamics for a 4-block gnn (as described in Section 3) in Figures 1, 2 & 3, respectively. Detailed analysis of these plots is presented in the captions, though the overall picture is that under ReInit training is able to occur whilst under standard initialisation it is not.

5.1 Shallow baselines

We conduct several experiments with the networks described in section 4: a simple mlp; a randomly initialized gcn, which is not updated during the training process, denoted gcn(r)-mlp; and a gcn that is free to update (gcn-mlp).

We find that these models surpass most of the previous methods, in some cases even the recent differentiable pooling methods. We note that the performance of the random gcn should not come as a surprise given the gcn's connection to the Weisfeiler-Lehman (WL) test (Kipf & Welling, 2016). This is most relevant for the random gcn, which has very little power in the featural domain but adds structural information comparable to 1-WL.

These initial results (presented in Table 1) show that there is room for advancement in graph classification and that these simple models should be considered strong baselines. These networks, particularly the mlp, are simple and appear as subnetworks in many methods. As such, it is of paramount importance to undertake thorough ablation studies to show the benefit of added complexity: a method may add components that improve upon other approaches while relying heavily on these simpler subnetworks. We explore this idea below.

5.2 Bloated networks

We use the following architecture in the next few experiments: gcn-pool-gcn-pool-gcn-pool-mlp, with the global max and sum of each layer passed to the mlp through jk-structures. Due to the initialization problem, if weight decay is used (smaller decay values achieve similar results) the network is unable to recover from a bad initialization and as such cannot learn in the deeper layers (see Figure 4). This method (jk-sum-decay) is competitive with most results, performing closely to the simple sub-network it contains: gcn-mlp.

Table 1: Classification accuracy percentages. The results of other networks are taken from Cangea et al. 2018 with which we share 10-fold splits for benchmarking our methods. Bold indicates top-performance, blue indicates weaker performance than the mlp.
Model Reddit (Reddit-Multi-12K) DD Collab Prot.
PatchySAN 41.32 76.27 72.60 75.00
GraphSAGE 42.24 75.42 68.25 70.48
ECC 41.73 74.10 67.79 72.65
Set2Set 43.49 78.12 71.75 74.29
SortPool 41.82 79.37 73.76 75.54
DiffPool-Det 46.18 75.47 82.13 75.62
DiffPool-NoLP 46.65 79.98 75.63 77.42
DiffPool 47.08 81.15 75.50 78.10
GU-Net/SHGC - 78.59 74.54 75.46
MLP 40.96 80.22 74.00 75.74
GCN(R)-MLP 36.15 78.61 75.38 76.28
GCN-MLP 45.01 79.29 76.50 75.64
JK-Sum 47.16 79.02 77.00 75.82
JK-Sum-Decay 43.87 79.11 74.14 75.82
JK-Sum-ReInit 46.77 75.97 77.20 75.46

Next, even if we do not use any weight decay, the network only recovers the deeper layers after a significant number of epochs; Figure 4 shows this delayed recovery for DD. Moreover, to fully recover the layers (similarly to the network with ReInit) we found that the network needs to be trained considerably longer still, and if early stopping ends training at an earlier epoch we would still be using only the first two layers (gcn+pool). The optimal number of training epochs found for this network is what we report in Table 1 (jk-sum). However, the network behaves very differently when initialized using ReInit, as it does not need to recover the layers one by one, changing the dynamics and ultimately how and what the network learns. The same figure shows that with ReInit all the layers are trainable from the beginning. In that case, the performance rises sharply within the very first few epochs for DD (fewer than 10, see the last plot of Figure 4), then drops and converges to roughly the same accuracy as the recovered network with standard initialization (without weight decay). For the small datasets (DD, Proteins), unleashing the full depth of the network from the beginning is not beneficial since it can cause over-fitting (a single-layer gcn already performs well), whereas for Collab this differs. In fact, for these small datasets the method with ReInit achieved its highest accuracy in fewer than 50 epochs, while for Collab it was 300. The same network without ReInit had its best performance when trained for 100 epochs, but resulted in a lower-quality model. This hints that for the bigger dataset all three layers are needed, while for smaller problems the network is likely over-parameterised, and this is exposed by ReInit.

6 Closing Remarks

We have demonstrated that some very simple models are competitive with the sota and that jk-structures may permit models to perform well through these subnetworks. We hope that these baselines and a greater interest in ablation studies will be adopted by the community.