The Truly Deep Graph Convolutional Networks for Node Classification

07/25/2019, by Yu Rong et al.

Existing Graph Convolutional Networks (GCNs) are shallow: the number of layers is usually no larger than 2. Deeper variants built by simply stacking more layers unfortunately perform worse, even with well-known tricks such as weight penalization, dropout, and residual connections. This paper reveals that developing deep GCNs faces two main obstacles: over-fitting and over-smoothing. Over-fitting weakens generalization on small graphs, while over-smoothing impedes training by isolating output representations from the input features as network depth increases. Hence, we propose DropEdge, a novel technique that alleviates both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph, acting as both a data augmenter and a message passing reducer. More importantly, DropEdge enables us to recast a wider range of Convolutional Neural Networks (CNNs) from the image field into the graph domain; in particular, we study DenseNet and InceptionNet in this paper. Extensive experiments on several benchmarks demonstrate that our method allows deep GCNs to achieve promising performance, even when the number of layers exceeds 30; to our knowledge, this is the deepest GCN that has ever been proposed.


1 Introduction

Graph Convolutional Networks (GCNs), which exploit the concept of message passing, or equivalently a neighborhood aggregation function, to extract high-level features from a node and its neighborhood, have advanced the state of the art for a variety of tasks on graphs, including node classification Bhagat2011 ; Zhang2018 , social recommendation Freeman2000 ; perozzi2014deepwalk , link prediction Liben-Nowell2007 and many others. In short, GCNs have become one of the most important tools for graph representation learning.

Yet, when we revisit typical successful GCNs, such as the architecture developed in Kipf2017 , one conspicuous observation is that they are all “shallow”: the number of layers is never larger than 2. Deeper variants built by simply stacking more layers can, in principle, access more information, but they perform worse Li2018 ; Xu2018 . Even with residual connections, which have proved powerful in very deep Convolutional Neural Networks (CNNs), there is still no evidence that GCNs with more than 2 layers perform as well as the 2-layer one on popular benchmarks (e.g. Cora sen2008collective ). Two questions therefore remain: what are the factors that prevent deeper GCNs from performing well, and how can we eliminate those factors with techniques specific to graphs? Both questions motivate the study in this paper.

Figure 1: Comparison of the training loss (dashed lines) and validation loss (solid lines) of various architectures on Cora. We implement a 2-layer PlainGCN, 6-layer PlainGCN / DeepGCN, and 32-layer PlainGCN / DeepGCN. For the DeepGCNs, we use the inception backbone with DropEdge. Among the PlainGCNs, the 6-layer network gets stuck in over-fitting, attaining lower training loss but higher validation loss than the 2-layer one; the 32-layer network fails to converge, probably due to over-smoothing. By contrast, our DeepGCNs work well for both training and validation.

We begin by investigating two contrasting factors: over-fitting and over-smoothing. Over-fitting arises when an over-parameterized model is used to fit a distribution with limited training data; the learned model fits the training data very well but generalizes poorly to the testing data. This issue does occur when we apply a deep GCN to small graphs (see the empirical comparison between the 2-layer and 6-layer GCNs on Cora in Fig. 1). Moreover, the over-fitting issue is hard to resolve satisfactorily even with well-known tricks like weight penalization and dropout Hinton2012 ; a more effective method is needed. By contrast, over-smoothing, at the other extreme, makes training a very deep GCN difficult. As first introduced by Li2018 and further explained in Wu2019 ; Xu2018 ; Klicpera2019 , graph convolutions essentially mix the representations of adjacent nodes, so that, in the extreme of infinitely many layers, all nodes’ representations converge to a stationary point that is unrelated to the input features. We refer to this phenomenon as over-smoothing of node features. To illustrate its effect, we conduct an example experiment with a 32-layer GCN in Fig. 1, where the training of such a very deep GCN is observed to fail to converge.

Both of the above issues can be addressed using DropEdge. The term “DropEdge” refers to randomly dropping out a certain rate of edges of the input graph at each training iteration. In its particular form, each edge is independently dropped with a fixed probability $p$, where $p$ is a hyper-parameter determined by validation. Applying DropEdge to GCN training has several benefits. First, DropEdge can be considered a data augmentation technique. With DropEdge, we actually generate different random deformed copies of the original graph; as such, we increase the randomness and diversity of the input data and thus better prevent over-fitting. It is analogous to performing random rotation, cropping, or flipping for robust CNN training on images. Second, DropEdge can also be treated as a message passing reducer. In GCNs, message passing between adjacent nodes is conducted along edge paths. Removing certain edges makes node connections sparser, and hence avoids over-smoothing to some extent when the GCN goes very deep. Indeed, as we will show in this paper, DropEdge theoretically slows down the smoothing of the hidden node features by a certain ratio. Finally, DropEdge is related to but distinct from other concepts, such as dropout Hinton2012 , which randomly drops the activation units of the network. Since activation dropout does not perform any data augmentation, its effect on alleviating over-fitting is not as strong as that of DropEdge, and it does not help prevent over-smoothing either. We defer further discussion of how DropEdge relates to other methods to § 4.1.

We also explore what kinds of architectures can facilitate the training of GCNs and are compatible with DropEdge. To do so, we first review several successful CNN architectures for images and then recast them in the graph domain. We study ResNet he2016deep , DenseNet Huang2017 and InceptionNet Szegedy2016 in this paper. The method in Kipf2017 has already adapted the residual connections of ResNet to GCNs, but the performance is unsatisfactory. DenseNet, which further generalizes the idea of skip connections, connects each layer to every other layer in a feed-forward fashion. Here, for efficiency, we instantiate the dense version of GCNs by retaining all short paths from intermediate layers to the output layer while removing those between intermediate layers. From a graphical perspective, the outputs along the shortcut from a layer k steps away are actually messages from k-hop neighborhoods; in other words, the dense design allows us to obtain multi-hop messages with one single network. Also, the short connections enable direct back-propagation from the loss to the lower layers, which alleviates the vanishing-gradient effect observed in deep neural networks.

InceptionNet is another typical CNN structure. In its original design, it performs convolutions in each layer with kernels/receptive fields of multiple sizes, so as to model variations in object size. This property is also crucial for graph data, since the local structures within an input graph are diverse and deserve to be captured. We take inspiration from InceptionNet by adopting multiple atomic GCNs with different numbers of layers to represent receptive fields of different sizes, and concatenating all their outputs to form an inception block. Stacking these blocks one by one leads to our inception variant of GCNs. Figure 2 illustrates an example deep GCN with the inception backbone. We defer more details of the different GCN variants to § 4.2.

In experiments on four public benchmarks (Cora, Citeseer, Pubmed sen2008collective , and Reddit Hamilton2017 ), we demonstrate that residual connections, dense connections and inception blocks are all compatible with DropEdge, and they obtain promising testing error even when the number of layers is large (e.g. larger than 30, as in Fig. 1). To the best of our knowledge, this is the deepest GCN that has ever been developed, and more importantly, it performs promisingly. Moreover, when equipped with DropEdge, both dense GCNs and inception GCNs consistently improve training and hence yield better performance than plain GCNs.

Figure 2: An example DeepGCN model with three inception blocks.

2 Related Work

Inspired by the huge success of CNNs in computer vision, a large number of methods have redefined the notion of convolution on graphs under the umbrella of GCNs. The first prominent research on GCNs is presented in bruna2013spectral , which develops graph convolution based on spectral graph theory. Later, Kipf2017 ; defferrard2016convolutional ; henaff2015deep ; Li2018a ; Levie2017 propose improvements, extensions, and approximations of spectral-based GCNs. To contend with the scalability issue of spectral-based GCNs on large graphs, spatial-based GCNs have been rapidly developed hamilton2017inductive ; Monti2017 ; niepert2016learning ; Gao2018 . These methods directly perform convolution in the graph domain by aggregating information from neighboring nodes. More recently, several sampling-based methods have been proposed for fast graph representation learning, including node-wise sampling hamilton2017inductive , the layer-wise approach chen2018fastgcn and its layer-dependent variant Huang2018 .

Despite this fruitful progress, most previous works focus only on shallow GCNs, and deeper extensions are seldom discussed. The work by Li2018 first introduces the concept of over-smoothing in GCNs, but it does not propose a deep GCN that addresses the issue. The follow-up study Klicpera2019 tackles over-smoothing by using personalized PageRank, which additionally involves the rooted node in the message passing loop; however, the accuracy is still observed to decrease when the depth of the GCN increases beyond 2. JKNet Xu2018 employs skip connections for multi-hop message passing, enabling different neighborhood ranges for better structure-aware representation learning. Unexpectedly, as shown in its experiments, the JKNets that obtain the best accuracy have depth less than 3 on all datasets, except on Cora where the best result is given by the 6-layer network. In this paper, we propose DropEdge to overcome both the over-fitting and over-smoothing issues simultaneously, and combine it with various backbone architectures to conduct an in-depth analysis of deep GCNs.

3 Notations and Preliminaries

Notations. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ represent the input graph, with nodes $\mathcal{V}$, edges $\mathcal{E}$, and $N = |\mathcal{V}|$ denoting the number of nodes. The node features are denoted as $X \in \mathbb{R}^{N \times C}$, and the adjacency matrix $A \in \mathbb{R}^{N \times N}$ associates each edge $(i, j)$ with its element $A_{ij}$. The degrees of all nodes are given by $\{d_1, \dots, d_N\}$, where $d_i = \sum_j A_{ij}$ computes the sum of edge weights connected to node $i$. For simplicity, we define $D$ as the degree matrix whose diagonal elements are given by $d_i$.

PlainGCN. We refer to the original GCN developed by Kipf and Welling Kipf2017 as PlainGCN. Let $H^{(l)}$ collect the hidden features of all nodes in the $l$-th layer, with $H^{(0)} = X$. The feed-forward propagation is

$$H^{(l+1)} = \sigma\big(\hat{A} H^{(l)} W^{(l)}\big), \qquad (1)$$

where $\hat{A} = \tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}$ is the re-normalization of the adjacency matrix (with $\tilde{D}$ the degree matrix of $A + I$); $\sigma(\cdot)$ is a nonlinear function, i.e. the ReLU function; and $W^{(l)}$ is the filter matrix of the $l$-th layer. In what follows, we refer to one layer of GCN as computed by Eq. (1) as a Graph Convolutional Layer (GCL).
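To make Eq. (1) concrete, below is a minimal PyTorch sketch of one GCL together with the re-normalization trick. The class and function names are our own illustrative choices under the notation above, not the authors' released implementation.

```python
import torch
import torch.nn as nn


def renormalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Re-normalization trick: A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    a_tilde = adj + torch.eye(adj.size(0), device=adj.device)
    deg = a_tilde.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_tilde @ d_inv_sqrt


class GraphConvLayer(nn.Module):
    """One Graph Convolutional Layer (GCL), i.e. Eq. (1): H' = sigma(A_hat H W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)  # the filter matrix W
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        return self.act(a_hat @ self.linear(h))
```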

4 DeepGCN

In this section, we first introduce the formulation of DropEdge, and then follow it up by presenting several backbone architectures to extend PlainGCNs.

4.1 DropEdge

To introduce randomness into the training data, the DropEdge technique drops out edges of the input graph at each training iteration. Formally, it randomly forces $\lfloor p\,|\mathcal{E}| \rfloor$ non-zero elements of the adjacency matrix $A$ to be zeros, where $|\mathcal{E}|$ is the total number of edges and $p$ is the dropping rate. If we denote the resulting adjacency matrix as $A_{\text{drop}}$, then its relation to $A$ becomes

$$A_{\text{drop}} = A - A', \qquad (2)$$

where $A'$ is a sparse matrix expanded by a random subset of size $\lfloor p\,|\mathcal{E}| \rfloor$ of the original edges $\mathcal{E}$. Following Kipf2017 , we also perform the re-normalization trick on $A_{\text{drop}}$, leading to $\hat{A}_{\text{drop}}$.
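A minimal sketch of this procedure, assuming a dense symmetric adjacency matrix and dropping each undirected edge as a single unit; the function name is illustrative and this is not the authors' released code.

```python
import torch


def drop_edge(adj: torch.Tensor, p: float) -> torch.Tensor:
    """Randomly zero out a fraction p of the edges of a symmetric adjacency matrix (Eq. (2))."""
    # Upper-triangular indices so each undirected edge is considered once.
    src, dst = torch.triu(adj, diagonal=1).nonzero(as_tuple=True)
    num_drop = int(p * src.numel())
    perm = torch.randperm(src.numel())[:num_drop]
    adj_drop = adj.clone()
    adj_drop[src[perm], dst[perm]] = 0.0
    adj_drop[dst[perm], src[perm]] = 0.0  # keep the matrix symmetric
    return adj_drop


# At each training iteration: a_hat_drop = renormalize_adj(drop_edge(adj, p))
# At test time the full graph is used: a_hat = renormalize_adj(adj)
```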

It is clear that DropEdge can prevent over-fitting, since the model is fed with a different $A_{\text{drop}}$ at each training iteration. Despite the randomness, the inputs at different training iterations still share the same nodes and input features; hence all inputs can be considered to be drawn from a similar underlying distribution, and in this sense the training remains meaningful.

Now we focus on the over-smoothing issue. Specifically, over-smoothing means that all nodes’ features degenerate to a stationary point and become isolated from the input features if we employ a GCN with an infinite number of layers. This impedes model training, since the discriminative information of the input features is eliminated. To reveal what incurs over-smoothing and understand how it acts, we consider the random-walk Lovasz1993 version of the update in Eq. (1):

$$H^{(l+1)} = P H^{(l)}, \quad \text{with } P = D^{-1} A, \qquad (3)$$

where we have omitted the non-linear function and the parameter matrix for simplicity. Here, $P$ can be viewed as the transition probability matrix of a random walk on the graph. When the number of layers goes to infinity, we arrive at

$$H^{(\infty)} = \lim_{l \to \infty} P^{l} H^{(0)}, \qquad (4)$$

where the limit $\lim_{l \to \infty} P^{l}$ has been proved to be governed by the stationary distribution of the random walk, regardless of the initial state Lovasz1993 . This implies independence from the initial point, i.e. the input features $H^{(0)} = X$; in other words, the information carried by the input features has vanished. In practice, we also observe the same trend in standard GCNs (see Fig. 6).
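The collapse in Eq. (4) is easy to observe numerically: repeatedly applying the random-walk matrix makes all rows (node representations) shrink onto essentially the same vector. The snippet below is a toy illustration on a random graph of our own, not an experiment from the paper.

```python
import torch

torch.manual_seed(0)
n = 50
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()          # random symmetric graph
adj.fill_diagonal_(1.0)                      # self-loops keep the walk aperiodic
p_mat = adj / adj.sum(dim=1, keepdim=True)   # random-walk transition matrix D^{-1} A

h = torch.randn(n, 16)                       # random input features
for depth in [1, 2, 4, 8, 16, 32, 64]:
    h_l = torch.linalg.matrix_power(p_mat, depth) @ h
    spread = (h_l - h_l.mean(dim=0)).norm()  # how far node features are from their mean
    print(f"layers={depth:3d}  spread of node features: {spread:.4f}")
# The spread shrinks towards zero: node representations become indistinguishable.
```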

By virtue of DropEdge, we replace $A$ with $A_{\text{drop}}$ as defined in Eq. (2). Although the representations still approach the stationary point in the infinite-depth case, we are able to slow down the speed of convergence by using $A_{\text{drop}}$ instead. This can be validated via the concept of mixing time studied in random walk theory Lovasz1993 . As its name implies, the mixing time measures how fast the random walk converges to its limiting distribution.
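The precise formula is elided in this copy of the text; for reference, a standard total-variation definition of mixing time from the random-walk literature reads as follows (the authors' exact choice of distance or constants may differ).

```latex
% Mixing time of a random walk with transition matrix P and stationary distribution \pi,
% for a tolerance \epsilon > 0:
\tau(\epsilon) \;=\; \min\Big\{ t \ge 0 \;:\; \max_{v \in \mathcal{V}}
    \big\lVert P^{t}(v,\cdot) - \pi \big\rVert_{\mathrm{TV}} \le \epsilon \Big\}
```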

Theorem 1

If a graph $\mathcal{G}'$ is obtained from a graph $\mathcal{G}$ by removing an edge, then the mixing time can only increase, i.e.,

$$\tau(\mathcal{G}') \;\ge\; \tau(\mathcal{G}). \qquad (5)$$

Please refer to the supplemental materials for the proof of Theorem 1.

Corollary 1

By increasing the mixing time, the deeper layers of a deep GCN converge more slowly towards the limiting distribution Lovasz1993 . Therefore, DropEdge alleviates the effect of over-smoothing and becomes more friendly to deeper models.

DropEdge is related to the dropout trick Hinton2012 and to node sampling methods chen2018fastgcn ; Huang2018 . Dropout perturbs the feature matrix by randomly setting feature entries to zero, which may reduce over-fitting but does not help against over-smoothing. Node sampling methods drop nodes, either at random or based on the adjacency connections between layers, to reduce computational complexity; however, node sampling only delivers a sub-graph and discards too much node feature information. In contrast, DropEdge only drops edges without losing the features of any nodes, so more input information is retained.

4.2 Network Architecture Design

Figure 3: Two basic building blocks in DeepGCN.

Even though we can alleviate over-smoothing and train an $L$-layer PlainGCN with DropEdge, we argue that PlainGCN has an inherent shortcoming: it does not consider graph locality and treats all nodes within $L$ hops equally. Graph locality Linial1992 is essential for obtaining better node representations, since it pays more attention to the nodes that are close to the target node.

The authors of Kipf2017 have applied residual connections between hidden layers to facilitate the training of deeper models. A residual connection carries over information from the previous layer, and it can be implemented by adding the identity mapping on the right-hand side of Eq. (1). While it does enable efficient back-propagation via shortcuts, the residual connection is still insufficient for capturing multi-hop neighbor messages, since it only connects the input and output of the same layer. In this section, we generalize the idea of the residual connection and introduce two more powerful building blocks for GCNs: the Dense Block and the Inception Block, inspired by the success of DenseNet Huang2017 and InceptionNet Szegedy2016 , respectively.

Dense Block.

The Dense Block consists of a fixed number of GCLs. Besides the feed-forward connections, we add shortcuts linking the output layer to each GCL and to the input layer (see Figure 3(b)). All messages from the top GCL and the skip connections are concatenated together to form the eventual output. In this way, we encode multi-hop neighbor information into the output representation, which allows us to capture diverse local graph structures with different neighborhood ranges. For example, if a node’s representation is sufficiently characterized by its 2-hop neighbor features but not those beyond, the model will be trained to attend more closely to the 2-hop outputs and to overlook the 3-hop ones. From a machine learning point of view, concatenating the outputs of sub-networks of different depths acts like an ensemble of different models, and training effectively performs model selection, leading to better performance.
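A minimal sketch of the Dense Block as we read the description above, reusing the illustrative GraphConvLayer from § 3; this is our own interpretation under those assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Stack of GCLs whose outputs, plus the block input, are concatenated at the end."""

    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        # GraphConvLayer is the illustrative GCL sketched in Section 3.
        self.layers = nn.ModuleList(GraphConvLayer(dim, dim) for _ in range(num_layers))
        self.out_dim = dim * (num_layers + 1)  # input + every intermediate output

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        outputs = [h]                  # shortcut from the input layer
        for layer in self.layers:
            h = layer(h, a_hat)        # feed-forward path
            outputs.append(h)          # shortcut from this GCL to the block output
        return torch.cat(outputs, dim=-1)
```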

Inception Block.

The basic idea of the inception operation Szegedy2016 is to capture multi-scale patterns by using convolution kernels of different sizes. In the graph domain, the size of the receptive field is interpreted as the distance (in hops) from the target node to its neighborhood. Similarly to the design of the Dense Block, we adopt a sub-network of depth $k$ to define a convolution kernel of size $k$, and concatenate sub-networks of multiple depths to build an inception block. As shown in Figure 3(a), we build an inception block containing 3 sub-networks with depths ranging from 1 to 3. Unlike the Dense Block, the sub-networks in an Inception Block do not share any GCL, which gives the architecture more capacity to capture various local graph structures. This makes it more prone to over-fitting since more parameters are involved, but it still works promisingly when combined with DropEdge. Furthermore, the independence of each sub-network yields a more flexible model and brings more generalization ability for modeling complex graphs. We provide more discussion in the experimental section.
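Correspondingly, a hedged sketch of the Inception Block: independent GCL chains of depths 1 to k run in parallel on the same input and their outputs are concatenated, with no GCL shared between chains. It again reuses the illustrative GraphConvLayer from § 3 and is not the reference implementation.

```python
import torch
import torch.nn as nn


class InceptionBlock(nn.Module):
    """Parallel GCL chains of depths 1..num_branches, concatenated at the output."""

    def __init__(self, dim: int, num_branches: int = 3):
        super().__init__()
        # Each branch models a different receptive-field size (number of hops).
        self.branches = nn.ModuleList(
            nn.ModuleList(GraphConvLayer(dim, dim) for _ in range(depth))
            for depth in range(1, num_branches + 1)
        )
        self.out_dim = dim * num_branches

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        outputs = []
        for branch in self.branches:
            h_b = h                    # every branch starts from the block input
            for layer in branch:
                h_b = layer(h_b, a_hat)
            outputs.append(h_b)
        return torch.cat(outputs, dim=-1)
```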

5 Experiments

Datasets.

Following the practice of previous works, we focus on four benchmark datasets varying in graph size and feature type: (1) classifying the research topic of papers in three citation datasets, Cora, Citeseer and Pubmed sen2008collective ; (2) predicting which community different posts belong to in the Reddit social network hamilton2017inductive . Note that the tasks on Cora, Citeseer and Pubmed are transductive, meaning all node features are accessible during training, while the task on Reddit is inductive, meaning the testing nodes are unseen during training. We apply the fully-supervised training setting used in Huang2018 ; chen2018fastgcn on all datasets.

Baselines. We compare our models against four baselines: the original GCN model Kipf2017 (denoted PlainGCN), GraphSAGE hamilton2017inductive , FastGCN chen2018fastgcn and AsGCN Huang2018 . All baselines contain two layers, and we use their publicly released code for re-implementation. As for DeepGCNs, we compare variants that use different blocks: DeepGCN-I and DeepGCN-D denote the use of the inception block and the dense block, respectively. DropEdge is applied to our models by default; if not, we add the suffix “(ND)”: for example, DeepGCN-D (ND) means no DropEdge is involved. We also apply DropEdge to PlainGCN, denoted PlainGCN+DropEdge, to evaluate how DropEdge affects the performance of PlainGCN.

Implementation.

We implement our models in PyTorch paszke2017automatic and use the Adam optimizer for network training. To ensure reproducibility of the results, the random seed of all experiments is fixed. We fix the number of training epochs (one setting for Cora, Citeseer and Pubmed, and another for Reddit), and the early stopping criterion is applied during training on all datasets. We use the same set of hyper-parameters for the model with and without DropEdge to avoid unintentional “hyper-parameter hacking”. For testing, we use the whole graph as input without DropEdge. We defer more implementation details to the supplementary material.
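To make the training protocol explicit, here is a hedged sketch of the per-epoch loop as we understand the description: DropEdge re-samples the adjacency matrix at every training iteration, while validation and testing always use the full, re-normalized graph. It reuses the illustrative renormalize_adj and drop_edge helpers from § 3 and § 4.1; model, adj, features, labels, train_mask and val_mask are assumed to exist, and the hyper-parameter values are placeholders.

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
a_hat_full = renormalize_adj(adj)                    # full graph, used for evaluation

for epoch in range(200):
    model.train()
    a_hat = renormalize_adj(drop_edge(adj, p=0.8))   # a fresh random deformed copy
    logits = model(features, a_hat)
    loss = F.cross_entropy(logits[train_mask], labels[train_mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():                            # no DropEdge at evaluation time
        val_logits = model(features, a_hat_full)
        val_loss = F.cross_entropy(val_logits[val_mask], labels[val_mask])
```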

5.1 Comparison with State-of-the-art Methods

Table 1 summarizes the classification errors of our method and the baselines on the four datasets. For DeepGCN, the number in parentheses indicates the number of layers; we report only the best results among architectures with depth ranging from 2 to 15. Our methods outperform all baselines on all datasets significantly. When DropEdge is removed, both DeepGCN-I and DeepGCN-D exhibit a remarkable performance drop, which shows the necessity of DropEdge in deep GCNs. We also find that PlainGCN with DropEdge delivers much better performance than PlainGCN without it (over 10% improvement on Cora and Pubmed), justifying the importance of DropEdge once again. Another interesting observation is that DeepGCN-I performs better than DeepGCN-D with DropEdge, but worse without it. This is consistent with our statement in § 4.2: DeepGCN-I contains more parameters and is thus more prone to over-fitting. Even so, DeepGCN-I works promisingly when combined with DropEdge. Overall, the results confirm the superiority of our proposed methods.

                                 Transductive                        Inductive
Method                           Cora        Citeseer    Pubmed      Reddit
FastGCN                          15.00       22.40       12.00       6.30
GraphSAGE                        17.80       28.60       12.90       5.68
AsGCN                            12.56       20.34       9.40        3.73
PlainGCN                         14.00       22.70       10.20       3.52
DeepGCN-I (No DropEdge)          13.90       22.00       10.60       3.25
DeepGCN-D (No DropEdge)          13.30       20.90       9.70        3.52
PlainGCN+DropEdge                12.80       20.90       9.10        3.46
DeepGCN-I                        11.70 (6)   19.50 (6)   8.60 (14)   3.13 (10)
DeepGCN-D                        11.90 (11)  19.70 (4)   8.60 (10)   3.22 (14)
Avg. % error reduction from DropEdge  11.6%  8.3%        13.7%       4.6%
Table 1: Error rates (%) compared with state-of-the-art methods. Numbers in parentheses are the layer counts of the best DeepGCN variants.

5.2 Ablation Studies

We now conduct a more detailed analysis of our models. Due to space limits, we only provide results on Cora and defer evaluations on other datasets to the supplementary material. Note that this section mainly focuses on assessing each component of DeepGCN rather than pushing state-of-the-art results, so we do not perform delicate model selection in what follows. We construct a DeepGCN architecture consisting of one input GCL, one inception/dense block and one output GCL. The number of layers in the inception/dense block is set to 4, so the complete network depth is 6. As a comparison, we also implement another popular architecture, ResGCN, which adds a shortcut connecting the input and output of each intermediate GCL. The hidden dimension, learning rate and weight decay are fixed to 256, 0.005 and 0.0005, respectively. The random seed is fixed and no early stopping is adopted. We train all models for 200 epochs.

The Comparison with Different Architectures. We investigate the convergence behavior of different architectures without DropEdge. Figure 4 displays the training (dashed lines) and validation (solid lines) losses of all architectures of depth 6 on Cora (more experimental results are provided in the supplementary material). We also plot the results of the 2-layer PlainGCN as a reference.

We make two major observations from Figure 4. First, both DeepGCN-I and DeepGCN-D exhibit consistently lower training error and faster convergence than PlainGCN, which verifies the benefit of the dense and inception block designs. Second, compared with the 2-layer PlainGCN, all DeepGCN variants suffer from over-fitting once training exceeds roughly 30 epochs. In the following experiment, we demonstrate that applying DropEdge helps prevent this over-fitting.

Figure 4: Comparison of different architectures; the left sub-figure shows training loss and the right shows validation loss. PlainGCN-$k$ denotes PlainGCN of depth $k$; similar notation applies to the other methods.

How important is DropEdge? To justify the importance of DropEdge, we contrast the validation loss of models with and without DropEdge, using a fixed dropping rate for all cases. We first check the results of PlainGCNs. Figure 5(a) summarizes the validation loss at different depths. It shows that DropEdge generally helps all PlainGCNs obtain lower validation loss, except for the 2-layer one; since the shallow PlainGCN is largely free of over-fitting, applying DropEdge there is unnecessary. Regarding DeepGCNs, as depicted in Figure 5(b), the architectures with DropEdge significantly outperform those without it across different numbers of layers (4 and 6) and different types of building block (inception, dense and also residual). We also observe that the loss curves of DeepGCN-I-6 and DeepGCN-D-6 are close to each other but clearly lower than that of ResGCN-6, which demonstrates the superiority of the dense and inception blocks over the residual design.

The Justification of Over-Smoothing. We now examine how DropEdge helps alleviate the over-smoothing issue. As discussed in § 4.1, over-smoothing occurs when the top-layer output of a GCN converges to the stationary point and becomes unrelated to the input features as depth increases. In other words, the closer the output is to the stationary point, the more serious the over-smoothing. Since we are unable to derive the stationary point explicitly, we instead compute the difference between the outputs of adjacent layers to measure the degree of over-smoothing, using the Euclidean distance. A lower distance means more serious over-smoothing.
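A hedged sketch of how such a layer-wise measure can be computed: record the hidden features after each GCL and take the Euclidean distance between the outputs of adjacent layers. The function and variable names are our own.

```python
import torch


def adjacent_layer_distances(gcn_layers, h, a_hat):
    """Euclidean distances between outputs of adjacent GCLs; small values suggest over-smoothing."""
    distances = []
    prev = None
    for layer in gcn_layers:              # e.g. the GCLs of an 8-layer PlainGCN
        h = layer(h, a_hat)
        if prev is not None:
            distances.append((h - prev).norm().item())
        prev = h
    return distances
```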

Experiments are conducted on an 8-layer PlainGCN, with all parameters initialized from the same uniform distribution. Figure 6(a) shows the distances of different intermediate layers (from 2 to 6) under different edge dropping rates (0 and 0.8). Clearly, over-smoothing becomes more serious as the layer index grows, which is consistent with our conjecture. Moreover, the model with DropEdge (p = 0.8) exhibits larger distances and slower convergence than the one without DropEdge (p = 0), implying the crucial importance of DropEdge for alleviating over-smoothing.

We are also interested in how over-smoothing behaves after model training. For this purpose, we display the results after 150 epochs of training in Figure 6(b). For PlainGCN (p = 0), the difference between the outputs of the 5-th and 6-th layers is equal to 0, indicating that the hidden features have converged to a certain stationary point. Consistent with this observation, Figure 6(c) shows that the training loss fails to decrease for PlainGCN (p = 0). By contrast, PlainGCN (p = 0.8) exhibits more promising behavior, as the distance increases when the number of layers grows. This indicates that PlainGCN (p = 0.8) has successfully learned meaningful node representations after training, which is also validated by the training loss in Figure 6(c).

Results of very deep GCNs. We test whether our methods still help for very deep GCNs, for example when the number of layers exceeds 30. To answer this question, we implement a 32-layer PlainGCN and our 32-layer DeepGCN-I on Cora, with DropEdge applied to the latter. We report the training and validation loss in Figure 1. We observe that the training of the 32-layer PlainGCN fails to converge, probably due to over-smoothing, while our 32-layer DeepGCN works quite well for both training and validation. This is exciting, as our proposed techniques enable much broader choices for network design with far deeper layers.

(a) PlainGCN with and without DropEdge.
(b) Performance comparisons with and without DropEdge.
Figure 5: The validation loss on different architectures.
Figure 6: Analysis of the over-smoothing issue. A smaller distance means more serious over-smoothing.

6 Conclusion

We have presented DropEdge, a novel and efficient technique to facilitate the development of deep Graph Convolutional Networks (GCNs). By dropping out a certain rate of edges at random, DropEdge introduces more diversity into the input data to prevent over-fitting, and reduces message passing in graph convolution to alleviate over-smoothing. Benefiting from these properties, DropEdge enables us to consider various kinds of building blocks for deep GCNs, including the dense and inception blocks. Extensive experiments on Cora, Citeseer, Pubmed and Reddit have verified the effectiveness of deep GCNs when the proposed techniques are embedded. We expect this research to open up a new avenue for more in-depth exploration of deep GCNs and their broader potential applications.

References

  • [1] Smriti Bhagat, Graham Cormode, and S Muthukrishnan. Node classification in social networks. In Social network data analytics, pages 115–148. Springer, 2011.
  • [2] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [3] Linton C Freeman. Visualizing social networks. Journal of social structure, 1(1):4, 2000.
  • [4] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
  • [5] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019–1031, 2007.
  • [6] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, 2017.
  • [7] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [8] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In Proceedings of the 35th International Conference on Machine Learning, 2018.
  • [9] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93, 2008.
  • [10] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
  • [11] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
  • [12] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In Proceedings of the 7th International Conference on Learning Representations, 2019.
  • [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [14] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
  • [15] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
  • [16] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
  • [17] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In Proceedings of International Conference on Learning Representations, 2013.
  • [18] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852, 2016.
  • [19] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
  • [20] Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang. Adaptive graph convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [21] Ron Levie, Federico Monti, Xavier Bresson, and Michael M Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Transactions on Signal Processing, 67(1):97–109, 2017.
  • [22] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1025–1035, 2017.
  • [23] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115–5124, 2017.
  • [24] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014–2023, 2016.
  • [25] Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1416–1424. ACM, 2018.
  • [26] Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: Fast learning with graph convolutional networks via importance sampling. In Proceedings of the 6th International Conference on Learning Representations, 2018.
  • [27] Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph representation learning. In Advances in Neural Information Processing Systems, pages 4558–4567, 2018.
  • [28] László Lovász et al. Random walks on graphs: A survey. Combinatorics, Paul Erdős is Eighty, 2(1):1–46, 1993.
  • [29] Nathan Linial. Locality in distributed graph algorithms. SIAM Journal on Computing, 21(1):193–201, 1992.
  • [30] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
  • [31] Arpita Ghosh, Stephen Boyd, and Amin Saberi. Minimizing effective resistance of a graph. SIAM review, 50(1):37–66, 2008.
  • [32] Alex Fout, Jonathon Byrd, Basir Shariat, and Asa Ben-Hur. Protein interface prediction using graph convolutional networks. In Advances in Neural Information Processing Systems, pages 6530–6539, 2017.

7 Proof of Theorem 1

To explain why the conductance of a graph can only decrease after removing edges, we adopt some concepts from electrical networks. Consider the graph as an electrical network in which each edge represents a unit resistance. The effective resistance $R_{uv}$ between nodes $u$ and $v$ is then defined as the total resistance between $u$ and $v$. Moreover, the conductance of the graph is defined as follows.

Definition 1

Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ be a graph and $S \subset \mathcal{V}$, $\bar{S} = \mathcal{V} \setminus S$. The conductance of the cut $(S, \bar{S})$ is defined as

$$\Phi(S) = \frac{|E(S, \bar{S})|}{\min\big(\mathrm{vol}(S), \mathrm{vol}(\bar{S})\big)},$$

where $E(S, \bar{S})$ is the set of edges crossing the cut and $\mathrm{vol}(S) = \sum_{v \in S} d_v$, and the conductance of the graph is defined as

$$\Phi(\mathcal{G}) = \min_{S \subset \mathcal{V}} \Phi(S).$$

By the graph theory in [28], the mixing time of the graph is bounded in terms of the conductance of the graph:

(6)

The conductance of the graph can in turn be bounded by the gap between the first and second eigenvalues of the re-normalized adjacency matrix; note that the first eigenvalue of the re-normalized adjacency matrix is always equal to 1. Moreover, this eigenvalue gap is bounded in terms of the effective resistances of the graph. Therefore,

(7)

By the properties of effective resistance, the effective resistance between $u$ and $v$ can only increase if an edge not incident to either $u$ or $v$ is removed from the circuit [31]. According to Inequality (7), this implies that the conductance of the graph can only decrease if an edge is removed from $\mathcal{G}$. Consequently, by Inequality (6), DropEdge slows down the mixing of the random walk.

8 More Details in Experiments

Datasets

The statistics of all datasets are summarized in Table 2.

Datasets Nodes Edges Classes Features Training/Validation/Testing Type
Cora 2,708 5,429 7 1,433 1,208/500/1,000 Transductive
Citeseer 3,327 4,732 6 3,703 1,812/500/1,000 Transductive
Pubmed 19,717 44,338 3 500 18,217/500/1,000 Transductive
Reddit 232,965 11,606,919 41 602 152,410/23,699/55,334 Inductive
Table 2: Dataset Statistics

Self Feature Modeling. To emphasize the importance of self-features, we also implement a variant of GCN with self feature modeling [32]:

$$H^{(l+1)} = \sigma\big(\hat{A} H^{(l)} W^{(l)} + H^{(l)} W_{\mathrm{self}}^{(l)}\big), \qquad (8)$$

where $W_{\mathrm{self}}^{(l)}$ is an additional filter matrix applied to each node’s own features.

Network Architecture and Hyperparameters

We conduct a random search to optimize the hyperparameters for each dataset in Section 5.1. The hyperparameters are described in Table 3, and Table 4 reports the network architectures and hyperparameter values. In the “Architecture” column of Table 4, GCL indicates a graph convolution layer, D$n$ is a dense block with $n$ layers, and I$n$ is an inception block with $n$ layers.

Hyperparameter Description
hidden the hidden dimension of intermediate layers
lr learning rate
weight-decay L2 regularization weight
p the edge dropping rate of DropEdge
dropout dropout rate
withloop using self feature modeling
withbn using batch normalization

Table 3: Hyperparameter Description
Dataset Model Architecture Hyperparameters
Cora DeepGCN-I GCL - I1 - I1 - I1 - I1 - GCL hidden:128, lr:0.007, weight_decay:5e-3, p:0.8, dropout:0.9, withbn
DeepGCN-D GCL - D3 - D3 - D3 - GCL hidden:512, lr:0.0005, weight_decay:5e-4, p:0.6, dropout:0.5
Citeseer DeepGCN-I GCL - I2 - I2 - GCL hidden:256, lr:0.009, weight_decay:5e-3, p:0.85, dropout:0.9, withloop, withbn
DeepGCN-D GCL - D2 - GCL hidden:128, lr:0.012, weight_decay:5e-4, p:0.85, dropout:0.7, withloop
Pubmed DeepGCN-I GCL - I3 - I3 - I3 - I3 - GCL hidden:64, lr:0.0005, weight_decay:1e-4, p:0.6, dropout:0.2, withloop, withbn
DeepGCN-D GCL - D4 - D4 - GCL hidden:64,lr:0.001, weight_decay:1e-3, p:0.2, dropout:0.6, withloop, withbn
Reddit DeepGCN-I GCL - I2 - I2 - I2 - I2 - GCL hidden:64,lr:0.002, weight_decay:1e-5, sampling_percent:0.6, dropout:0.4, withloop, withbn
DeepGCN-D GCL - D4 - D4 - D4 - GCL hidden:64,lr:0.003, weight_decay:1e-4, p:0.05, dropout:0.5, withloop
Table 4: The Architecture and Hyperparameters.

8.1 More Results in Ablation Studies

The Comparison with Different Architectures.

Fig. 7 reports the training and validation loss on Cora and Citeseer.

Figure 7: Comparison of different architectures; the left sub-figure shows training loss and the right shows validation loss. PlainGCN-$k$ denotes PlainGCN of depth $k$; similar notation applies to the other methods.

How important is the DropEdge?

Figs. 8 and 9 demonstrate the validation loss of different architectures with and without DropEdge. Fig. 9(a) summarizes the validation loss at different depths with and without DropEdge on Citeseer, and Fig. 9(b) summarizes the validation loss of the architectures described in Section 5.2 with and without DropEdge on Citeseer.

Figure 8: The validation loss of different architectures with and without DropEdge on Cora.
(a) Comparison of PlainGCN with and without DropEdge.
(b) Performance comparisons with and without DropEdge.
Figure 9: The validation loss of different architectures.

The Justification of Over-smoothing.

Fig. 10 demonstrates further results on the distances between different intermediate layers under different edge dropping rates (0, 0.2, 0.4, 0.6, 0.8).

Figure 10: More justification of the over-smoothing issue under different edge dropping rates.