Deep learning has produced remarkable results across the full breadth of machine learning research. For the most part this has been achieved by reapplying the two main architectures, the CNN and the RNN, adapted to the two principal Euclidean cases, grid-like (image) and sequential (series) data, respectively. As such there is great interest in extending these general techniques to non-Euclidean cases, and to graph-structured data problems in particular.
These efforts are mostly inspired by the CNN and attempt to find suitable analogues of its core components, the convolutional and pooling operators. Early work set out to develop convolution-like graph operators. The focus has now turned to developing pooling operations, often referred to as coarsening in the context of graphs. Besides static methods (Luzhnica et al., 2019), differentiable pooling frameworks have been developed. DiffPool achieved state-of-the-art (SOTA) performance across many benchmark tasks (Ying et al., 2018); however, it requires a dense representation that is quadratic in memory. The Graph U-Net introduces a sparse method based on pruning nodes (Gao & Ji, 2019). Cangea et al. (2018) apply this method to graph classification by incorporating pools in a GCN model, achieving performance competitive with the SOTA with scalable memory requirements.
In this work we show that, under standard initialisation (Glorot & Bengio, 2010; He et al., 2015), using the GCN and pooling operator together results in vanishing gradients beyond the first layers. In addition, we show that it is possible to attain good performance on smaller benchmark tasks simply using a global pool (a simple mean or sum over the features of all nodes) followed by an MLP. Furthermore, to achieve results on a par with the Graph U-Net on all benchmarks, a single-layer GCN with a jumping-knowledge (JK) connection (Xu et al., 2018) from the input graph, followed by an MLP, is sufficient, whether the weights of the GCN are trained or not.
Considering the implications of these results, we primarily argue for the importance of including strong, simple baselines in evaluation. We also define an initialisation scheme that remedies the vanishing gradient issue by design, though we find that this does not consistently improve performance.
This work was motivated by studies of network activations and gradient flow in deeper GNNs with JK structures and pooling. We found that, at initialisation, activations rapidly vanish deeper into the network and that, throughout training, the gradients flowed mostly into earlier layers. These findings prompt two questions: firstly, are deeper networks only trainable thanks to JK structures bypassing later layers? And secondly, how important are the later layers to performance anyway?
We use the standard notation: a graph of $N$ nodes with $F$ features per node is represented by the pair $(A, X)$ with adjacency matrix $A \in \mathbb{R}^{N \times N}$ and node feature matrix $X \in \mathbb{R}^{N \times F}$.
ReLU activations and the improved GCN (Gao & Ji, 2019) are used throughout. This differs from the standard GCN in that $\hat{A} = A + 2I$ is used, i.e. self-loops have a weight of 2.
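As an illustration, this propagation rule can be sketched in NumPy. This is a minimal dense sketch for exposition only; practical implementations (e.g. PyTorch Geometric) use sparse operations and learned weights.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One 'improved' GCN layer: self-loops weighted 2, symmetric normalisation."""
    A_hat = A + 2 * np.eye(A.shape[0])           # self-loops with weight 2
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # D^{-1/2} (A + 2I) D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)       # ReLU activation
```

Note that because the normalised adjacency has rows summing to one, constant node features are preserved by the propagation step.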
Top-$k$ pooling is used (Gao & Ji, 2019). The pooling operator drops nodes, keeping only a fraction $k \in (0, 1]$ of them, where $k$ is a fixed hyperparameter set to the same value in all experiments. Nodes are dropped based on the ranked projection of the features onto a learnable vector, $\mathbf{p}$, as
$$\mathbf{y} = \frac{X\mathbf{p}}{\lVert \mathbf{p} \rVert}, \qquad \mathbf{i} = \mathrm{top}\text{-}k(\mathbf{y}), \qquad X' = \left(X \odot \tanh(\mathbf{y})\right)_{\mathbf{i}}, \qquad A' = A_{\mathbf{i},\mathbf{i}},$$
where $\mathbf{y}$ are the scores for each node (rows in $X$) and $\mathbf{i}$ are the indices of the top-$k$ nodes based on their scores.
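A minimal NumPy sketch of this pooling step, assuming dense matrices and a simple argsort-based top-$k$; the gating of the kept features by the tanh of their scores follows Gao & Ji (2019).

```python
import numpy as np

def topk_pool(A, X, p, k=0.5):
    """Top-k (gPool) sketch: score nodes by projection onto p, keep top ceil(k*N)."""
    N = X.shape[0]
    y = X @ p / np.linalg.norm(p)                  # projection score per node
    idx = np.argsort(-y)[: int(np.ceil(k * N))]    # indices of the top-k nodes
    X_pooled = X[idx] * np.tanh(y[idx])[:, None]   # gate kept features by tanh(score)
    A_pooled = A[np.ix_(idx, idx)]                 # induced subgraph adjacency
    return A_pooled, X_pooled, idx
```

The tanh gate keeps the scoring vector $\mathbf{p}$ on the gradient path, which is what makes the operator trainable despite the hard top-$k$ selection.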
Jumping Knowledge Networks
In node aggregating schemes, the range of nodes (analogous to the receptive field in CNNs) that a node's representation draws from is strongly dependent on the neighbourhood structure (Xu et al., 2018). JK-structures were introduced to allow some flexibility over the degree of aggregation, and thus even out the "range", by introducing layer-skipping connections. For a node, $v$, this takes the form
$$h_v^{\mathrm{JK}} = \mathrm{AGG}\left(h_v^{(1)}, h_v^{(2)}, \ldots, h_v^{(L)}\right),$$
where the aggregation function $\mathrm{AGG}$ is typically concatenation, summation or an elementwise max, the result being passed to a classifier.
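The JK readout can be sketched as follows; this is a toy NumPy version whose aggregation modes mirror those listed above, operating on a list of per-layer node feature matrices of equal shape.

```python
import numpy as np

def jk_aggregate(layer_feats, mode="concat"):
    """Jumping-knowledge readout: combine per-layer node representations."""
    if mode == "concat":
        return np.concatenate(layer_feats, axis=1)  # stack layer features side by side
    if mode == "sum":
        return np.sum(layer_feats, axis=0)          # elementwise sum over layers
    if mode == "max":
        return np.max(layer_feats, axis=0)          # elementwise max over layers
    raise ValueError(f"unknown mode: {mode}")
```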
3 Removing JK & Initialisation
Whilst JK-connections were introduced to tackle the problem of node-specific range, in deeper networks they act as bypasses of later layers, so a hierarchy of representations is not actually being produced. Clearly it runs counter to the core concept of allowing the range to vary over nodes if the higher ranges are not used. To test this we expose the gradient flow and activations in a net of four blocks of GCN+pool, with the final representation aggregated with a global mean and passed to an MLP. ReLU activations are used in the GCN. The GCN weights are initialised using Kaiming (He et al., 2015), while the pools are initialised using Glorot (Glorot & Bengio, 2010) (the authors note the mixed naming conventions here, but this seems to be what the community has settled on). We refer to this combination as the 'standard initialisation'. Under standard initialisation, layer activations decay into the network, gradients are vanishingly small, and the latter part of the network is effectively static under backpropagation.
To address this problem we propose a data-driven approach, similar to LSUV-initialisation (Mishkin & Matas, 2015), to maintain variance across layers. The idea is simply to initialise under some scheme and then pass the entire batch through each block, scaling the layer weights in turn by the reciprocal of the standard deviation of that block's output so as to maintain variance, a process we refer to as ReInit. This is implemented as scaling factors that are set progressively, with the result that the variance of the activations is approximately preserved from block to block. We deviate from LSUV in not ortho-normalising, as there is no analogue that could be applied to the pooling layers, so simply rescaling has a more consistent meaning over the network. We have also found that attempting to derive a semi-analytic solution, in the footsteps of Glorot & Bengio (2010), is not possible for the GCN due to the structural asymmetries in neighbourhood aggregation. In essence, the expected variance is sensitive to the number and similarity of neighbours to such a degree that properly accounting for these variations would require specific node-level information. This also allows ReInit to be applied on top of any initial scheme, so the 'shape' is not fixed in that sense.
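The ReInit procedure can be sketched as below. This is a simplified NumPy version that assumes each block's forward pass is positively homogeneous in its weights (true for linear-plus-ReLU blocks, so rescaling the weights rescales the output by the same factor); the function names are illustrative only.

```python
import numpy as np

def reinit(weights, X, forward):
    """ReInit sketch: rescale each block's weights so its output has unit std.

    `forward(W, H)` applies one block with weights W to activations H;
    blocks are processed in order, each scaled by 1/std of its output.
    """
    H = X
    scales = []
    for i, W in enumerate(weights):
        out = forward(W, H)
        s = 1.0 / out.std()           # scaling factor for this block
        weights[i] = W * s            # rescale the weights in place ...
        H = forward(weights[i], H)    # ... and recompute with the scaled weights
        scales.append(s)
    return weights, scales
```

Because each block is rescaled using the activations produced by the already-rescaled earlier blocks, the correction compounds correctly through the network, which is what lets it sit on top of any base initialisation scheme.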
4 Shallower, Simpler Networks
To see how much the later GCN layers contribute to performance, we tested three shallower networks on standard benchmarks. These models can be thought of as extreme ablations.
A three-layer MLP. The adjacency matrix is discarded; the features are globally pooled and passed as input. Three weight matrices with biases; ReLU activations. This model cannot see even the number of nodes, let alone their individual features or structural relationships.
Single-layer JK GCN+MLP
A single-layer GCN with a JK-skip preceding the MLP described above. We test this setup both with the weights of the GCN fixed at their random initialisation values, denoted (r), and free to update. The fixed variant is intended to provide a minimal structural addition to the plain MLP.
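This fixed-random baseline can be sketched as follows: a NumPy illustration combining the improved-GCN propagation, a JK-style concatenation with the raw input, and a global mean pool whose output would feed the MLP. The specific weight initialisation and feature sizes here are illustrative assumptions.

```python
import numpy as np

def gcnr_mlp_features(A, X, rng):
    """GCN(r) sketch: one random, never-trained GCN layer, JK-concatenated with
    the raw input features, then globally mean-pooled as input to an MLP."""
    F = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 / F), size=(F, F))  # Kaiming-style, fixed
    A_hat = A + 2 * np.eye(A.shape[0])                  # improved GCN self-loops
    d = A_hat.sum(axis=1)
    A_norm = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)
    H = np.maximum(A_norm @ X @ W, 0.0)                 # random GCN layer
    jk = np.concatenate([X, H], axis=1)                 # JK skip from the input graph
    return jk.mean(axis=0)                              # global mean pool
```

Even with random weights, the propagation step injects neighbourhood-structure information into the pooled representation, which the plain MLP baseline cannot see.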
5 Experiments & Results
We first present the comparison of activations, gradient flow and training dynamics for a 4-block GNN (as described in Section 3) in Figures 1, 2 & 3, respectively. Detailed analysis of these plots is presented in the captions, though the overall picture is that training is able to occur under ReInit whilst under standard initialisation it is not.
5.1 Shallow baselines
We conduct several experiments with the networks described in Section 4: a simple MLP; a randomly initialised GCN that is not updated during training, denoted GCN(r)-MLP; and a GCN that is free to update (GCN-MLP).
We find that these models surpass most of the previous methods, in some cases surpassing even the recent differentiable pooling methods. We note that the performance of the random GCN should not come as a surprise given its connection to the Weisfeiler-Lehman (WL) test (Kipf & Welling, 2016). This is most relevant in the case of the random GCN, which has very little power in the featural domain but adds structural information comparable to 1-WL.
These initial results (presented in Table 1) show that there is room for advancement in graph classification and that these simple models should be considered strong baselines. These networks, particularly the MLP, are simple and appear as subnetworks in many methods. As such, it is of paramount importance to undertake thorough ablation studies to show the benefit of adding complexity to networks: additional components may appear to improve upon other approaches while in fact relying heavily on these simpler subnetworks. We explore this idea below.
5.2 Bloated networks
We use the following architecture in the next few experiments: GCN-pool-GCN-pool-GCN-pool-MLP, with the global max and sum of each layer passed to the MLP through JK-structures. Due to the initialisation problem, if weight decay is used (smaller weight-decay values achieve similar results) the network is unable to recover from a bad initialisation and so cannot learn in the deeper layers (see Figure 4). This method (JK-sum-decay) is competitive with most results, performing closely to the simple subnetwork it contains: GCN-MLP.
Next, even without weight decay, the network only recovers the deeper layers after a significant number of epochs; for DD, recovery of the deeper layers only begins late in training, as shown in Figure 4. To fully recover the layers (similarly to the network with ReInit), we found that the network needs to be trained well beyond this point, and if early stopping ends training sooner, only the first two layers (GCN+pool) are effectively used. We report results at the optimal number of training epochs in Table 1 (JK-sum). However, the network behaves very differently when initialised with ReInit: it does not need to recover the layers one by one, which changes the dynamics and ultimately how and what the network learns. The same figure shows that, with ReInit, all layers are trainable from the beginning. In this case, performance rises sharply within the very first few epochs for DD (fewer than 10; see the last plot of Figure 4), then drops and converges to roughly that of the recovered network with standard initialisation (without weight decay). While for the small datasets (DD, Proteins) unleashing the power of the deeper network from the beginning is not beneficial, since it can cause over-fitting (a single-layer GCN already performs well), for Collab we see that this differs. In fact, on these small datasets the method with ReInit achieved its highest accuracy in fewer than 50 epochs, while for Collab it took 300. The same network without ReInit performed best when trained for 100 epochs, but produced a lower-quality model. This hints that the bigger dataset needs all three layers, while for the smaller problems the network is likely over-parameterised, and this is exposed by ReInit.
We have demonstrated that some very simple models are competitive with the SOTA, and that JK-structures may permit models to perform well through these subnetworks. We hope that these baselines, and a greater interest in ablation studies, will be adopted by the community.
- Bronstein et al. (2017) Bronstein, M. M., LeCun, Y., Szlam, A., Vandergheynst, P., and Bruna, J. Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 11 2017. ISSN 1053-5888. doi: 10.1109/msp.2017.2693418. URL http://arxiv.org/abs/1611.08097.
- Cangea et al. (2018) Cangea, C., Veličković, P., Jovanović, N., Kipf, T., and Liò, P. Towards Sparse Hierarchical Graph Classifiers. 11 2018. URL http://arxiv.org/abs/1811.01287.
- Fey & Lenssen (2019) Fey, M. and Lenssen, J. E. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
- Gao & Ji (2019) Gao, H. and Ji, S. Graph U-Net, 2019. URL https://openreview.net/forum?id=HJePRoAct7.
- Gilmer et al. (2017) Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. JMLR. org, 2017.
- Glorot & Bengio (2010) Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks, 3 2010. ISSN 1938-7228. URL http://proceedings.mlr.press/v9/glorot10a.html.
- Glorot et al. (2011) Glorot, X., Bordes, A., and Bengio, Y. Deep sparse rectifier neural networks. In Gordon, G., Dunson, D., and Dudík, M. (eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pp. 315–323, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR. URL http://proceedings.mlr.press/v15/glorot11a.html.
- Hamilton et al. (2017) Hamilton, W., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024–1034, 2017.
- He et al. (2015) He, K., Zhang, X., Ren, S., and Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. 2 2015. URL http://arxiv.org/abs/1502.01852.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. doi: 10.1109/cvpr.2016.90. URL http://dx.doi.org/10.1109/CVPR.2016.90.
- Kipf & Welling (2016) Kipf, T. N. and Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. 9 2016. URL http://arxiv.org/abs/1609.02907.
- Luzhnica et al. (2019) Luzhnica, E., Day, B., and Lio’, P. Clique pooling for graph classification. 3 2019. URL http://arxiv.org/abs/1904.00374.
- Mishkin & Matas (2015) Mishkin, D. and Matas, J. All you need is a good init. 11 2015. URL http://arxiv.org/abs/1511.06422.
- Niepert et al. (2016) Niepert, M., Ahmed, M., and Kutzkov, K. Learning convolutional neural networks for graphs, 2016.
- Simonovsky & Komodakis (2017) Simonovsky, M. and Komodakis, N. Dynamic edge-conditioned filters in convolutional neural networks on graphs. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. doi: 10.1109/cvpr.2017.11. URL http://dx.doi.org/10.1109/CVPR.2017.11.
- Xu et al. (2018) Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K.-i., and Jegelka, S. Representation Learning on Graphs with Jumping Knowledge Networks. 6 2018. URL http://arxiv.org/abs/1806.03536.
- Ying et al. (2018) Ying, R., You, J., Morris, C., Ren, X., Hamilton, W. L., and Leskovec, J. Hierarchical Graph Representation Learning with Differentiable Pooling. 2018. URL http://arxiv.org/abs/1806.08804.
- Zhang et al. (2018) Zhang, M., Cui, Z., Neumann, M., and Chen, Y. An end-to-end deep learning architecture for graph classification. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.