Link Prediction Based on Graph Neural Networks

02/27/2018 · by Muhan Zhang et al. · Washington University in St. Louis

Traditional methods for link prediction can be categorized into three main types: graph structure feature-based, latent feature-based, and explicit feature-based. Graph structure feature methods leverage some handcrafted node proximity scores, e.g., common neighbors, to estimate the likelihood of links. Latent feature methods rely on factorizing networks' matrix representations to learn an embedding for each node. Explicit feature methods train a machine learning model on two nodes' explicit attributes. Each of the three types of methods has its unique merits. In this paper, we propose SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction), a new framework for link prediction which combines the power of all three types into a single graph neural network (GNN). GNN is a new type of neural network which directly accepts graphs as input and outputs their labels. In SEAL, the input to the GNN is a local subgraph around each target link. We prove theoretically that our local subgraphs also preserve a great deal of high-order graph structure features related to link existence. Another key feature is that our GNN can naturally incorporate latent features and explicit features. This is achieved by concatenating node embeddings (latent features) and node attributes (explicit features) in the node information matrix for each subgraph, thus combining the three types of features to enhance GNN learning. Through extensive experiments, SEAL shows unprecedentedly strong performance against a wide range of baseline methods, including various link prediction heuristics and network embedding methods.

1 Introduction

Link prediction is to predict whether two nodes in a network are likely to have a link [1]. Given the ubiquitous existence of networks, it has many applications such as friend recommendation [2], movie recommendation [3], knowledge graph completion [4], and metabolic network reconstruction [5].

One class of simple yet effective approaches for link prediction is called heuristic methods. Heuristic methods compute some heuristic node similarity scores as the likelihood of links [1, 6]. Existing heuristics can be categorized based on the maximum hop of neighbors needed to calculate the score. For example, common neighbors (CN) and preferential attachment (PA) [7] are first-order heuristics, since they only involve the one-hop neighbors of two target nodes. Adamic-Adar (AA) and resource allocation (RA) [8] are second-order heuristics, as they are calculated from up to two-hop neighborhoods of the target nodes. We define $h$-order heuristics to be those heuristics which require knowing up to the $h$-hop neighborhood of the target nodes. There are also some high-order heuristics which require knowing the entire network. Examples include Katz, rooted PageRank (PR) [9], and SimRank (SR) [10]. Table 3 in Appendix A summarizes eight popular heuristics.
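To make these heuristic families concrete, the following is a minimal sketch (not the authors' code; networkx is assumed, and the function name is illustrative) that computes a few first- and second-order heuristics for a candidate link:

```python
# Illustrative first- and second-order link prediction heuristics.
import math
import networkx as nx

def heuristics(G: nx.Graph, x, y):
    """Classic node-similarity scores for a candidate link (x, y)."""
    gx, gy = set(G[x]), set(G[y])          # 1-hop neighbor sets
    common = gx & gy
    return {
        "common_neighbors": len(common),                               # first-order
        "jaccard": len(common) / len(gx | gy) if gx | gy else 0.0,     # first-order
        "preferential_attachment": len(gx) * len(gy),                  # first-order
        "adamic_adar": sum(1.0 / math.log(G.degree(z))                 # second-order
                           for z in common if G.degree(z) > 1),
        "resource_allocation": sum(1.0 / G.degree(z) for z in common)  # second-order
    }

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(heuristics(G, 0, 33))
```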

Although working well in practice, heuristic methods have strong assumptions on when links may exist. For example, the common neighbor heuristic assumes that two nodes are more likely to connect if they have many common neighbors. This assumption may be correct in social networks, but is shown to fail in protein-protein interaction (PPI) networks – two proteins sharing many common neighbors are actually less likely to interact [11].

In fact, the heuristics belong to a more generic class, namely graph structure features. Graph structure features are those features located inside the observed node and edge structures of the network, which can be calculated directly from the graph. Since heuristics can be viewed as predefined graph structure features, a natural idea is to automatically learn such features from the network. Zhang and Chen [12] first studied this problem. They extract local enclosing subgraphs around links as the training data, and use a fully-connected neural network to learn which enclosing subgraphs correspond to link existence. Their method, called Weisfeiler-Lehman Neural Machine (WLNM), has achieved state-of-the-art link prediction performance. The enclosing subgraph for a node pair $(x, y)$ is the subgraph induced from the network by the union of $x$ and $y$'s neighbors up to $h$ hops. Figure 1 illustrates the 1-hop enclosing subgraphs for two node pairs. These enclosing subgraphs are very informative for link prediction – all first-order heuristics such as common neighbors can be directly calculated from the 1-hop enclosing subgraphs.

However, it is shown that high-order heuristics such as rooted PageRank and Katz often have much better performance than first- and second-order ones [6]. To effectively learn good high-order features, it seems that we need a very large hop number $h$ so that the enclosing subgraph becomes the entire network. This results in unaffordable time and memory consumption for most practical networks. But do we really need such a large $h$ to learn high-order heuristics?

Fortunately, as our first contribution, we show that we do not necessarily need a very large $h$ to learn high-order graph structure features. We dive into the inherent mechanisms of link prediction heuristics, and find that most high-order heuristics can be unified by a $\gamma$-decaying theory. We prove that, under mild conditions, any $\gamma$-decaying heuristic can be effectively approximated from an $h$-hop enclosing subgraph, where the approximation error decreases at least exponentially with $h$. This means that we can safely use even a small $h$ to learn good high-order features. It also implies that the "effective order" of these high-order heuristics is not that high.

Based on our theoretical results, we propose a novel link prediction framework, SEAL, to learn general graph structure features from local enclosing subgraphs. SEAL fixes multiple drawbacks of WLNM. First, a graph neural network (GNN) [13, 14, 15, 16, 17] is used to replace the fully-connected neural network in WLNM, which enables better graph feature learning ability. Second, SEAL permits learning from not only subgraph structures, but also latent and explicit node features, thus absorbing multiple types of information. We empirically verified its much improved performance.

Our contributions are summarized as follows. 1) We present a new theory for learning link prediction heuristics, justifying learning from local subgraphs instead of entire networks. 2) We propose SEAL, a novel link prediction framework based on GNN (illustrated in Figure 1). SEAL outperforms all heuristic methods, latent feature methods, and recent network embedding methods by large margins. SEAL also outperforms the previous state-of-the-art method, WLNM.

Figure 1: The SEAL framework. For each target link, SEAL extracts a local enclosing subgraph around it, and uses a GNN to learn general graph structure features for link prediction. Note that the heuristics listed inside the box are just for illustration – the learned features may be completely different from existing heuristics.

2 Preliminaries

Notations   Let $G = (V, E)$ be an undirected graph, where $V$ is the set of vertices and $E$ is the set of observed links. Its adjacency matrix is $A$, where $A_{x,y} = 1$ if $(x, y) \in E$ and $A_{x,y} = 0$ otherwise. For any nodes $x, y \in V$, let $\Gamma(x)$ be the 1-hop neighbors of $x$, and $d(x, y)$ be the shortest path distance between $x$ and $y$. A walk $w = \langle v_0, \ldots, v_k \rangle$ is a sequence of nodes with $(v_i, v_{i+1}) \in E$. We use $|\langle v_0, \ldots, v_k \rangle|$ to denote the length of the walk $w$, which is $k$ here.

Latent features and explicit features   Besides graph structure features, latent features and explicit features are also studied for link prediction. Latent feature methods [3, 18, 19, 20] factorize some matrix representations of the network to learn a low-dimensional latent representation/embedding for each node. Examples include matrix factorization [3] and stochastic block model [18] etc. Recently, a number of network embedding techniques have been proposed, such as DeepWalk [19], LINE [21] and node2vec [20], which are also latent feature methods since they implicitly factorize some matrices too [22]. Explicit features are often available in the form of node attributes, describing all kinds of side information about individual nodes. It is shown that combining graph structure features with latent features and explicit features can improve the performance [23, 24].

Graph neural networks   Graph neural network (GNN) is a new type of neural network for learning over graphs [13, 14, 15, 16, 25, 26]. Here, we only briefly introduce the components of a GNN, since this paper is not about GNN innovations but is a novel application of GNN. A GNN usually consists of 1) graph convolution layers, which extract local substructure features for individual nodes, and 2) a graph aggregation layer, which aggregates node-level features into a graph-level feature vector. Many graph convolution layers can be unified into a message passing framework [27].

Supervised heuristic learning   There are some previous attempts to learn supervised heuristics for link prediction. The closest work to ours is the Weisfeiler-Lehman Neural Machine (WLNM) [12], which also learns from local subgraphs. However, WLNM has several drawbacks. Firstly, WLNM trains a fully-connected neural network on the subgraphs' adjacency matrices. Since fully-connected neural networks only accept fixed-size tensors as input, WLNM requires truncating different subgraphs to the same size, which may lose much structural information. Secondly, due to the limitation of adjacency matrix representations, WLNM cannot learn from latent or explicit features. Thirdly, theoretical justifications are also missing. We include more discussion on WLNM in Appendix D. Another related line of research is to train a supervised learning model on a combination of different heuristics. For example, the path ranking algorithm [28] trains logistic regression on different path types' probabilities to predict relations in knowledge graphs.

Nickel et al. [23] propose to incorporate heuristic features into tensor factorization models. However, these models still rely on predefined heuristics – they cannot learn general graph structure features.

3 A theory for unifying link prediction heuristics

In this section, we aim to gain a deeper understanding of the mechanisms behind various link prediction heuristics, thus motivating the idea of learning heuristics from local subgraphs. Due to the large number of graph learning techniques, note that we are not concerned with the generalization error of a particular method, but focus on the information preserved in the subgraphs for calculating existing heuristics.

Definition 1.

(Enclosing subgraph) For a graph $G = (V, E)$, given two nodes $x, y \in V$, the $h$-hop enclosing subgraph for $(x, y)$ is the subgraph $G^h_{x,y}$ induced from $G$ by the set of nodes $\{\, i \mid d(i, x) \leq h \text{ or } d(i, y) \leq h \,\}$.

The enclosing subgraph $G^h_{x,y}$ describes the "$h$-hop surrounding environment" of $(x, y)$. Since $G^h_{x,y}$ contains all $h$-hop neighbors of $x$ and $y$, we naturally have the following theorem.

Theorem 1.

Any $h$-order heuristic for $(x, y)$ can be accurately calculated from $G^h_{x,y}$.

For example, a 2-hop enclosing subgraph will contain all the information needed to calculate any first- and second-order heuristics. However, although first- and second-order heuristics are well covered by local enclosing subgraphs, an extremely large $h$ seems to be still needed for learning high-order heuristics. Surprisingly, our following analysis shows that learning high-order heuristics is also feasible with a small $h$. We support this first by defining the $\gamma$-decaying heuristic. We will show that under certain conditions, a $\gamma$-decaying heuristic can be very well approximated from the $h$-hop enclosing subgraph. Moreover, we will show that almost all well-known high-order heuristics can be unified into this $\gamma$-decaying heuristic framework.

Definition 2.

($\gamma$-decaying heuristic) A $\gamma$-decaying heuristic for $(x, y)$ has the following form:

$$\mathcal{H}(x, y) = \eta \sum_{l=1}^{\infty} \gamma^l f(x, y, l), \qquad (1)$$

where $\gamma$ is a decaying factor between 0 and 1, $\eta$ is a positive constant or a positive function of $\gamma$ that is upper bounded by a constant, and $f$ is a nonnegative function of $x, y, l$ under the given network.

Next, we will show that under certain conditions, a $\gamma$-decaying heuristic can be approximated from an $h$-hop enclosing subgraph, and the approximation error decreases at least exponentially with $h$.

Theorem 2.

Given a $\gamma$-decaying heuristic $\mathcal{H}(x, y) = \eta \sum_{l=1}^{\infty} \gamma^l f(x, y, l)$, if $f(x, y, l)$ satisfies:

  • (property 1) $f(x, y, l) \leq \lambda^l$ where $\lambda < \frac{1}{\gamma}$; and

  • (property 2) $f(x, y, l)$ is calculable from $G^h_{x,y}$ for $l = 1, 2, \ldots, g(h)$, where $g(h) = ah + b$ with $a, b \in \mathbb{N}$ and $a > 0$,

then $\mathcal{H}(x, y)$ can be approximated from $G^h_{x,y}$ and the approximation error decreases at least exponentially with $h$.

We can approximate such a $\gamma$-decaying heuristic by summing over its first $g(h)$ terms:

$$\widetilde{\mathcal{H}}(x, y) := \eta \sum_{l=1}^{g(h)} \gamma^l f(x, y, l). \qquad (2)$$

The approximation error can be bounded as follows:

$$\big|\mathcal{H}(x, y) - \widetilde{\mathcal{H}}(x, y)\big| = \eta \sum_{l=g(h)+1}^{\infty} \gamma^l f(x, y, l) \leq \eta \sum_{l=ah+b+1}^{\infty} \gamma^l \lambda^l = \eta \,(\gamma\lambda)^{ah+b+1} (1 - \gamma\lambda)^{-1}. \quad \square$$

In practice, a smaller $\gamma\lambda$ and a larger $a$ lead to a faster decreasing speed. Next we will prove that three popular high-order heuristics: Katz, rooted PageRank and SimRank, are all $\gamma$-decaying heuristics which satisfy the properties in Theorem 2. First, we need the following lemma.

Lemma 1.

Any walk between $x$ and $y$ with length $l \leq 2h + 1$ is included in $G^h_{x,y}$.

Given any walk $w = \langle x, v_1, \ldots, v_{l-1}, y \rangle$ with length $l \leq 2h + 1$, we will show that every node $v_i$ is included in $G^h_{x,y}$. Consider any $v_i$. Assume $d(v_i, x) \geq h + 1$ and $d(v_i, y) \geq h + 1$. Then, $2h + 1 \geq l \geq d(v_i, x) + d(v_i, y) \geq 2h + 2$, a contradiction. Thus, $d(v_i, x) \leq h$ or $d(v_i, y) \leq h$. By the definition of $G^h_{x,y}$, $v_i$ must be included in $G^h_{x,y}$. ∎

Next we will analyze Katz, rooted PageRank and SimRank one by one.

3.1 Katz index

The Katz index [29] for $(x, y)$ is defined as

$$\mathrm{Katz}_{x,y} = \sum_{l=1}^{\infty} \beta^l \,\big|\mathrm{walks}^{\langle l \rangle}(x, y)\big| = \sum_{l=1}^{\infty} \beta^l \,[A^l]_{x,y}, \qquad (3)$$

where $\mathrm{walks}^{\langle l \rangle}(x, y)$ is the set of length-$l$ walks between $x$ and $y$, and $A^l$ is the $l^{\text{th}}$ power of the adjacency matrix of the network. The Katz index sums over the collection of all walks between $x$ and $y$, where a walk of length $l$ is damped by $\beta^l$ ($0 < \beta < 1$), giving more weight to shorter walks.

The Katz index is directly defined in the form of a $\gamma$-decaying heuristic with $\eta = 1$, $\gamma = \beta$, and $f(x, y, l) = [A^l]_{x,y}$. According to Lemma 1, $[A^l]_{x,y}$ is calculable from $G^h_{x,y}$ for $l \leq 2h + 1$, thus property 2 in Theorem 2 is satisfied. Now we show when property 1 is satisfied.

Proposition 1.

For any nodes $i, j$, $[A^l]_{i,j}$ is bounded by $d^l$, where $d$ is the maximum node degree of the network.

We prove it by induction. When $l = 1$, $A_{i,j} \leq d$ for any $(i, j)$. Thus the base case is correct. Now, assume by induction that $[A^l]_{i,j} \leq d^l$ for any $(i, j)$; we have

$$[A^{l+1}]_{i,j} = \sum_{k=1}^{|V|} [A^l]_{i,k} A_{k,j} \leq d^l \sum_{k=1}^{|V|} A_{k,j} \leq d^l \cdot d = d^{l+1}. \quad \square$$

Taking $\lambda = d$, we can see that whenever $d < \frac{1}{\beta}$, the Katz index will satisfy property 1 in Theorem 2. In practice, the damping factor $\beta$ is often set to very small values like 5E-4 [1], which implies that Katz can be very well approximated from the $h$-hop enclosing subgraph.
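As a quick numeric sanity check of this exponential error decay (illustrative only; numpy on a toy graph, with the truncation over walk lengths $l \leq 2h + 1$ mimicking what an $h$-hop subgraph can supply):

```python
# Compare the full Katz index with its truncated walk sum for increasing h.
import numpy as np
import networkx as nx

beta = 0.005                      # small damping factor, as in practice
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
n = A.shape[0]

# Full Katz matrix: sum_{l>=1} beta^l A^l = (I - beta A)^{-1} - I
katz_full = np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

def katz_truncated(A, beta, L):
    """Sum the first L terms beta^l A^l (walks of length <= L)."""
    S, Al = np.zeros_like(A), np.eye(A.shape[0])
    for l in range(1, L + 1):
        Al = Al @ A
        S += beta ** l * Al
    return S

for h in (1, 2, 3):
    err = np.abs(katz_full - katz_truncated(A, beta, 2 * h + 1)).max()
    print(f"h = {h}: max truncation error = {err:.2e}")  # drops exponentially in h
```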

3.2 PageRank

The rooted PageRank for node $x$ calculates the stationary distribution of a random walker starting at $x$, who iteratively moves to a random neighbor of its current position with probability $\alpha$ or returns to $x$ with probability $1 - \alpha$. Let $\pi_x$ denote the stationary distribution vector, and let $[\pi_x]_i$ denote the probability that the random walker is at node $i$ under the stationary distribution.

Let $P$ be the transition matrix with $P_{i,j} = \frac{1}{|\Gamma(v_j)|}$ if $(i, j) \in E$ and $P_{i,j} = 0$ otherwise. Let $e_x$ be a vector whose $x^{\text{th}}$ element is 1 and whose other elements are 0. The stationary distribution satisfies

$$\pi_x = \alpha P \pi_x + (1 - \alpha) e_x. \qquad (4)$$

When used for link prediction, the score for $(x, y)$ is given by $[\pi_x]_y$ (or $[\pi_x]_y + [\pi_y]_x$ for symmetry). To show that rooted PageRank is a $\gamma$-decaying heuristic, we introduce the inverse P-distance theory [30], which states that $[\pi_x]_y$ can be equivalently written as follows:

$$[\pi_x]_y = (1 - \alpha) \sum_{w : x \rightsquigarrow y} P[w] \,\alpha^{\mathrm{len}(w)}, \qquad (5)$$

where the summation is taken over all walks $w$ starting at $x$ and ending at $y$ (possibly touching $x$ and $y$ multiple times). For a walk $w = \langle v_0, v_1, \ldots, v_k \rangle$, $\mathrm{len}(w) := |\langle v_0, v_1, \ldots, v_k \rangle|$ is the length of the walk. The term $P[w]$ is defined as $\prod_{i=0}^{k-1} \frac{1}{|\Gamma(v_i)|}$, which can be interpreted as the probability of traveling $w$. Now we have the following theorem.

Theorem 3.

The rooted PageRank heuristic is a $\gamma$-decaying heuristic which satisfies the properties in Theorem 2.

We first write $[\pi_x]_y$ in the following form:

$$[\pi_x]_y = (1 - \alpha) \sum_{l=1}^{\infty} \alpha^l \sum_{\substack{w : x \rightsquigarrow y \\ \mathrm{len}(w) = l}} P[w]. \qquad (6)$$

Defining $f(x, y, l) := \sum_{w : x \rightsquigarrow y,\, \mathrm{len}(w) = l} P[w]$ leads to the form of a $\gamma$-decaying heuristic with $\eta = 1 - \alpha$ and $\gamma = \alpha$. Note that $f(x, y, l)$ is the probability that a random walker starting at $x$ stops at $y$ after exactly $l$ steps, which satisfies $\sum_{z \in V} f(x, z, l) \leq 1$. Thus, $f(x, y, l) \leq 1 < \frac{1}{\alpha}$ (property 1). According to Lemma 1, $f(x, y, l)$ is also calculable from $G^h_{x,y}$ for $l \leq 2h + 1$ (property 2). ∎
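For concreteness, a minimal power-iteration sketch of rooted PageRank as defined in (4) (numpy/networkx; not the paper's code):

```python
# Rooted PageRank via fixed-point iteration: pi = alpha * P * pi + (1 - alpha) * e_x.
import numpy as np
import networkx as nx

def rooted_pagerank(G: nx.Graph, x, alpha: float = 0.85, tol: float = 1e-10):
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes)
    P = A / A.sum(axis=0, keepdims=True)      # column-stochastic transition matrix
    e = np.zeros(len(nodes)); e[idx[x]] = 1.0
    pi = e.copy()
    while True:
        new_pi = alpha * P @ pi + (1 - alpha) * e
        if np.abs(new_pi - pi).sum() < tol:
            return dict(zip(nodes, new_pi))
        pi = new_pi

if __name__ == "__main__":
    G = nx.karate_club_graph()
    # link prediction score for (x, y): [pi_x]_y + [pi_y]_x
    print(rooted_pagerank(G, 0)[33] + rooted_pagerank(G, 33)[0])
```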

3.3 SimRank

The SimRank score [10] is motivated by the intuition that two nodes are similar if their neighbors are also similar. It is defined in the following recursive way: if $x = y$, then $s(x, y) := 1$; otherwise,

$$s(x, y) := \gamma \,\frac{\sum_{a \in \Gamma(x)} \sum_{b \in \Gamma(y)} s(a, b)}{|\Gamma(x)| \cdot |\Gamma(y)|}, \qquad (7)$$

where $\gamma$ is a constant between 0 and 1. According to [10], SimRank has an equivalent definition:

$$s(x, y) = \sum_{w : (x, y) \rightsquigarrow (z, z)} P[w] \,\gamma^{\mathrm{len}(w)}, \qquad (8)$$

where $w : (x, y) \rightsquigarrow (z, z)$ denotes all simultaneous walks such that one walk starts at $x$, the other walk starts at $y$, and they first meet at any vertex $z$. For a simultaneous walk $w = \langle (v_0, u_0), \ldots, (v_k, u_k) \rangle$, $\mathrm{len}(w) = k$ is the length of the walk. The term $P[w]$ is similarly defined as $\prod_{i=0}^{k-1} \frac{1}{|\Gamma(v_i)|\,|\Gamma(u_i)|}$, describing the probability of this walk. Now we have the following theorem.

Theorem 4.

SimRank is a $\gamma$-decaying heuristic which satisfies the properties in Theorem 2.

We write $s(x, y)$ as follows:

$$s(x, y) = \sum_{l=1}^{\infty} \gamma^l \sum_{\substack{w : (x, y) \rightsquigarrow (z, z) \\ \mathrm{len}(w) = l}} P[w]. \qquad (9)$$

Defining $f(x, y, l) := \sum_{w : (x, y) \rightsquigarrow (z, z),\, \mathrm{len}(w) = l} P[w]$ reveals that SimRank is a $\gamma$-decaying heuristic. Note that $f(x, y, l) \leq 1 < \frac{1}{\gamma}$. It is easy to see that $f(x, y, l)$ is also calculable from $G^h_{x,y}$ for $l \leq h$. ∎
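A naive fixed-point iteration for the recursive definition (7), shown here for illustration on small graphs only (O(n^2) node pairs per iteration):

```python
# SimRank by fixed-point iteration of its recursive definition.
import numpy as np
import networkx as nx

def simrank(G: nx.Graph, gamma: float = 0.8, iters: int = 20):
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    S = np.eye(n)                                   # s(x, x) = 1
    neighbors = [[idx[u] for u in G[v]] for v in nodes]
    for _ in range(iters):
        new_S = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                Ni, Nj = neighbors[i], neighbors[j]
                if Ni and Nj:
                    total = sum(S[a, b] for a in Ni for b in Nj)
                    new_S[i, j] = new_S[j, i] = gamma * total / (len(Ni) * len(Nj))
        S = new_S
    return S, idx

if __name__ == "__main__":
    G = nx.karate_club_graph()
    S, idx = simrank(G)
    print(S[idx[0], idx[33]])
```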

Discussion   There exist several other high-order heuristics based on path counting or random walk [6] which can also be incorporated into the $\gamma$-decaying heuristic framework. We omit the analysis here. Our results reveal that most high-order heuristics inherently share the same $\gamma$-decaying heuristic form, and thus can be effectively approximated from an $h$-hop enclosing subgraph with exponentially smaller approximation error. We believe the ubiquity of $\gamma$-decaying heuristics is not by accident – it implies that a successful link prediction heuristic should place exponentially smaller weight on structures far away from the target, as remote parts of the network intuitively make little contribution to link existence. Our results build the foundation for learning heuristics from local subgraphs, as they imply that local enclosing subgraphs already contain enough information to learn good graph structure features for link prediction, which is much desired considering that learning from the entire network is often infeasible. To summarize, from the small enclosing subgraphs extracted around links, we are able to accurately calculate first- and second-order heuristics, and approximate a wide range of high-order heuristics with small errors. Therefore, given adequate feature learning ability of the model used, learning from such enclosing subgraphs is expected to achieve performance at least as good as a wide range of heuristics. There is some related work which empirically verifies that local methods can often estimate PageRank and SimRank well [31, 32]. Another related theoretical work [33] establishes a condition on $h$ to achieve some fixed approximation error for ordinary PageRank.

4 SEAL: An implementation of the theory using GNN

In this section, we describe our SEAL framework for link prediction. SEAL does not restrict the learned features to be in some particular forms such as $\gamma$-decaying heuristics, but instead learns general graph structure features for link prediction. It contains three steps: 1) enclosing subgraph extraction, 2) node information matrix construction, and 3) GNN learning. Given a network, we aim to automatically learn a "heuristic" that best explains the link formations. Motivated by the theoretical results, this function takes local enclosing subgraphs around links as input, and outputs how likely the links exist. To learn such a function, we train a graph neural network (GNN) over the enclosing subgraphs. Thus, the first step in SEAL is to extract enclosing subgraphs for a set of sampled positive links (observed) and a set of sampled negative links (unobserved) to construct the training data.
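As an illustration of this first step, a minimal sketch (networkx assumed; not the authors' implementation) of extracting the $h$-hop enclosing subgraph around a target pair could look as follows:

```python
# Step 1 of SEAL: extract the h-hop enclosing subgraph around a node pair (x, y).
import networkx as nx

def enclosing_subgraph(G: nx.Graph, x, y, h: int = 1) -> nx.Graph:
    """Return the subgraph induced by all nodes within h hops of x or y."""
    nodes = set()
    for root in (x, y):
        # nodes within distance h of the root (includes the root itself)
        nodes |= set(nx.single_source_shortest_path_length(G, root, cutoff=h))
    sub = G.subgraph(nodes).copy()
    # For a positive training link, the observed edge (x, y) must be removed
    # so that the model cannot simply read off the label (see Appendix E).
    if sub.has_edge(x, y):
        sub.remove_edge(x, y)
    return sub

if __name__ == "__main__":
    G = nx.karate_club_graph()
    g1 = enclosing_subgraph(G, 0, 33, h=1)
    print(g1.number_of_nodes(), g1.number_of_edges())
```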

A GNN typically takes $(A, X)$ as input, where $A$ (with a slight abuse of notation) is the adjacency matrix of the input enclosing subgraph, and $X$ is the node information matrix, each row of which corresponds to a node's feature vector. The second step in SEAL is to construct the node information matrix $X$ for each enclosing subgraph. This step is crucial for training a successful GNN link prediction model. In the following, we discuss this key step. The node information matrix $X$ in SEAL has three components: structural node labels, node embeddings and node attributes.

4.1 Node labeling

The first component in $X$ is each node's structural label. A node labeling is a function $f_l : V \rightarrow \mathbb{N}$ which assigns an integer label $f_l(i)$ to every node $i$ in the enclosing subgraph. The purpose is to use different labels to mark nodes' different roles in an enclosing subgraph: 1) The center nodes $x$ and $y$ are the target nodes between which the link is located. 2) Nodes with different relative positions to the center have different structural importance to the link. A proper node labeling should mark such differences. If we do not mark such differences, GNNs will not be able to tell where the target nodes are, between which a link existence should be predicted, and will lose structural information.

Our node labeling method is derived from the following criteria: 1) The two target nodes $x$ and $y$ always have the distinctive label "1". 2) Nodes $i$ and $j$ have the same label if $d(i, x) = d(j, x)$ and $d(i, y) = d(j, y)$. The second criterion is because, intuitively, a node $i$'s topological position within an enclosing subgraph can be described by its radius with respect to the two center nodes, namely $(d(i, x), d(i, y))$. Thus, we let nodes on the same orbit have the same label, so that the node labels can reflect nodes' relative positions and structural importance within subgraphs.

Based on the above criteria, we propose a Double-Radius Node Labeling (DRNL) as follows. First, assign label 1 to $x$ and $y$. Then, for any node $i$ with $(d(i, x), d(i, y)) = (1, 1)$, assign label 2. Nodes with radius $(1, 2)$ or $(2, 1)$ get label 3. Nodes with radius $(1, 3)$ or $(3, 1)$ get 4. Nodes with $(2, 2)$ get 5. Nodes with $(1, 4)$ or $(4, 1)$ get 6. Nodes with $(2, 3)$ or $(3, 2)$ get 7. So on and so forth. In other words, we iteratively assign larger labels to nodes with a larger radius w.r.t. both center nodes, where the label $f_l(i)$ and the double-radius $(d(i, x), d(i, y))$ satisfy

1) if $d(i, x) + d(i, y) \neq d(j, x) + d(j, y)$, then $d(i, x) + d(i, y) < d(j, x) + d(j, y) \Leftrightarrow f_l(i) < f_l(j)$;

2) if $d(i, x) + d(i, y) = d(j, x) + d(j, y)$, then $d(i, x) \cdot d(i, y) < d(j, x) \cdot d(j, y) \Leftrightarrow f_l(i) < f_l(j)$.

One advantage of DRNL is that it has a perfect hashing function

$$f_l(i) = 1 + \min(d_x, d_y) + (d/2)\big[(d/2) + (d\%2) - 1\big], \qquad (10)$$

where $d_x := d(i, x)$, $d_y := d(i, y)$, $d := d_x + d_y$, and $(d/2)$ and $(d\%2)$ are the integer quotient and remainder of $d$ divided by 2, respectively. This perfect hashing allows fast closed-form computations.

For nodes with $d(i, x) = \infty$ or $d(i, y) = \infty$, we give them a null label 0. Note that DRNL is not the only possible way of node labeling, but we empirically verified its better performance than no labeling and other naive labelings. We discuss more about node labeling in Appendix B. After getting the labels, we use their one-hot encoding vectors to construct $X$.
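A minimal sketch of DRNL using the perfect hashing (10), including the null-label and temporary-removal details mentioned here and in Appendix B (networkx assumed; not the authors' implementation, and the helper names are illustrative):

```python
# Double-Radius Node Labeling for an enclosing subgraph containing targets x and y.
import networkx as nx
import numpy as np

def drnl_labels(sub: nx.Graph, x, y):
    labels = {x: 1, y: 1}
    # distances computed with the other target temporarily hidden (Appendix B)
    dist_x = nx.single_source_shortest_path_length(nx.restricted_view(sub, [y], []), x)
    dist_y = nx.single_source_shortest_path_length(nx.restricted_view(sub, [x], []), y)
    for i in sub.nodes():
        if i in (x, y):
            continue
        if i not in dist_x or i not in dist_y:
            labels[i] = 0                            # null label for unreachable nodes
            continue
        dx, dy = dist_x[i], dist_y[i]
        d = dx + dy
        labels[i] = 1 + min(dx, dy) + (d // 2) * ((d // 2) + (d % 2) - 1)  # Eq. (10)
    return labels

def one_hot(labels, num_classes=None):
    """Build the node information matrix X from one-hot encoded labels."""
    order = list(labels)
    num_classes = num_classes or max(labels.values()) + 1
    X = np.zeros((len(order), num_classes))
    for row, node in enumerate(order):
        X[row, labels[node]] = 1.0
    return order, X
```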

4.2 Incorporating latent and explicit features

Other than the structural node labels, the node information matrix $X$ also provides an opportunity to include latent and explicit features. By concatenating each node's embedding/attribute vector to its corresponding row in $X$, we can make SEAL simultaneously learn from all three types of features.

Generating the node embeddings for SEAL is nontrivial. Suppose we are given the observed network $G = (V, E)$, a set of sampled positive training links $E_p \subseteq E$, and a set of sampled negative training links $E_n$ with $E_n \cap E = \emptyset$. If we directly generate node embeddings on $G$, the node embeddings will record the link existence information of the positive training links (since $E_p \subseteq E$). We observed that GNNs can quickly find out such link existence information and optimize by only fitting this part of information. This results in bad generalization performance in our experiments. Our trick is to temporarily add $E_n$ into $E$, and generate the embeddings on $G' = (V, E \cup E_n)$. This way, the positive and negative training links will have the same link existence information recorded in the embeddings, so that the GNN cannot classify links by only fitting this part of information. We empirically verified the much improved performance brought by this trick. We name this trick negative injection.
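A hedged sketch of negative injection is shown below; `train_node_embeddings` is a hypothetical placeholder for whatever embedding routine is used (e.g., node2vec), not a real API:

```python
# Negative injection: add the negative training links to the graph before
# generating embeddings, so positive and negative training links carry the
# same link-existence information in the embeddings.
import networkx as nx

def embeddings_with_negative_injection(G: nx.Graph, neg_train_links, train_node_embeddings):
    G_injected = G.copy()
    G_injected.add_edges_from(neg_train_links)      # temporarily add E_n into E
    embeddings = train_node_embeddings(G_injected)  # e.g., 128-dim node2vec vectors
    return embeddings                               # the observed G itself is unchanged
```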

We name our proposed framework SEAL (learning from Subgraphs, Embeddings and Attributes for Link prediction), emphasizing its ability to jointly learn from three types of features.

5 Experimental results

We conduct extensive experiments to evaluate SEAL. Our results show that SEAL is a superb and robust framework for link prediction, achieving unprecedentedly strong performance on various networks. We use AUC and average precision (AP) as evaluation metrics. We run all experiments 10 times and report the average AUC results and standard deviations. We leave the AP and time results to Appendix F. SEAL is flexible with which GNN or node embeddings to use. Thus, we choose a recent architecture, DGCNN [17], as the default GNN, and node2vec [20] as the default embeddings. The code and data are available at https://github.com/muhanzhang/SEAL.

Datasets   The eight datasets used are: USAir, NS, PB, Yeast, C.ele, Power, Router, and E.coli (please see Appendix C for details). We randomly remove 10% existing links from each dataset as positive testing data. Following a standard manner of learning-based link prediction, we randomly sample the same number of nonexistent links (unconnected node pairs) as negative testing data. We use the remaining 90% existing links as well as the same number of additionally sampled nonexistent links to construct the training data.
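An illustrative version of this split (networkx and the standard library only; validation splitting and other details are omitted, and the function names are not from the paper's code):

```python
# Hold out 10% of observed links as positive test data and sample an equal
# number of unconnected node pairs as negatives; the rest is training data.
import random
import networkx as nx

def split_links(G: nx.Graph, test_ratio: float = 0.1, seed: int = 0):
    rng = random.Random(seed)
    edges = list(G.edges())
    rng.shuffle(edges)
    n_test = int(len(edges) * test_ratio)
    test_pos, train_pos = edges[:n_test], edges[n_test:]

    def sample_negatives(k):
        nodes, negs = list(G.nodes()), set()
        while len(negs) < k:
            u, v = rng.sample(nodes, 2)
            if not G.has_edge(u, v):
                negs.add((u, v))
        return list(negs)

    test_neg = sample_negatives(len(test_pos))
    train_neg = sample_negatives(len(train_pos))
    return train_pos, train_neg, test_pos, test_neg
```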

Comparison to heuristic methods   We first compare SEAL with methods that only use graph structure features. We include eight popular heuristics (shown in Appendix A, Table 3): common neighbors (CN), Jaccard, preferential attachment (PA), Adamic-Adar (AA), resource allocation (RA), Katz, PageRank (PR), and SimRank (SR). We additionally include Ensemble (ENS), which trains a logistic regression classifier on the eight heuristic scores. We also include two heuristic learning methods: the Weisfeiler-Lehman graph kernel (WLK) [34] and WLNM [12], which also learn from (truncated) enclosing subgraphs. We omit path ranking methods [28] as well as other recent methods which are specifically designed for knowledge graphs or recommender systems [23, 35]. As all the baselines only use graph structure features, we restrict SEAL to not include any latent or explicit features. In SEAL, the hop number $h$ is an important hyperparameter. Here, we select $h$ only from $\{1, 2\}$, since on one hand we empirically verified that the performance typically does not increase after $h \geq 3$, which validates our theoretical results that the most useful information is within local structures. On the other hand, even $h = 2$ sometimes results in very large subgraphs if a hub node is included. This raises the idea of sampling nodes in subgraphs, which we leave to future work. The selection principle is very simple: if the second-order heuristic AA outperforms the first-order heuristic CN on 10% validation data, then we choose $h = 2$; otherwise we choose $h = 1$. For datasets PB and E.coli, we consistently use $h = 1$ to fit into the memory. We include more details about the baselines and hyperparameters in Appendix D. A sketch of the selection rule follows.
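A literal rendering of this selection rule, assuming the `heuristics` helper sketched in the Introduction and scikit-learn's AUC routine (illustrative only):

```python
# Choose the hop number h by comparing AA and CN on held-out validation links.
from sklearn.metrics import roc_auc_score

def choose_h(G, val_pos, val_neg):
    """Pick h = 2 if Adamic-Adar beats Common Neighbors on validation AUC, else h = 1."""
    links = val_pos + val_neg
    y_true = [1] * len(val_pos) + [0] * len(val_neg)
    cn = [heuristics(G, u, v)["common_neighbors"] for u, v in links]
    aa = [heuristics(G, u, v)["adamic_adar"] for u, v in links]
    return 2 if roc_auc_score(y_true, aa) > roc_auc_score(y_true, cn) else 1
```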

Data CN Jaccard PA AA RA Katz PR SR ENS WLK WLNM SEAL
USAir 93.80±1.22 89.79±1.61 88.84±1.45 95.06±1.03 95.77±0.92 92.88±1.42 94.67±1.08 78.89±2.31 88.96±1.44 96.63±0.73 95.95±1.10 96.62±0.72
NS 94.42±0.95 94.43±0.93 68.65±2.03 94.45±0.93 94.45±0.93 94.85±1.10 94.89±1.08 94.79±1.08 97.64±0.25 98.57±0.51 98.61±0.49 98.85±0.47
PB 92.04±0.35 87.41±0.39 90.14±0.45 92.36±0.34 92.46±0.37 92.92±0.35 93.54±0.41 77.08±0.80 90.15±0.45 93.83±0.59 93.49±0.47 94.72±0.46
Yeast 89.37±0.61 89.32±0.60 82.20±1.02 89.43±0.62 89.45±0.62 92.24±0.61 92.76±0.55 91.49±0.57 82.36±1.02 95.86±0.54 95.62±0.52 97.91±0.52
C.ele 85.13±1.61 80.19±1.64 74.79±2.04 86.95±1.40 87.49±1.41 86.34±1.89 90.32±1.49 77.07±2.00 74.94±2.04 89.72±1.67 86.18±1.72 90.30±1.35
Power 58.80±0.88 58.79±0.88 44.33±1.02 58.79±0.88 58.79±0.88 65.39±1.59 66.00±1.59 76.15±1.06 79.52±1.78 82.41±3.43 84.76±0.98 87.61±1.57
Router 56.43±0.52 56.40±0.52 47.58±1.47 56.43±0.51 56.43±0.51 38.62±1.35 38.76±1.39 37.40±1.27 47.58±1.48 87.42±2.08 94.41±0.88 96.38±1.45
E.coli 93.71±0.39 81.31±0.61 91.82±0.58 95.36±0.34 95.95±0.35 93.50±0.44 95.57±0.44 62.49±1.43 91.89±0.58 96.94±0.29 97.21±0.27 97.64±0.22
Table 1: Comparison with heuristic methods (AUC).

Table 1 shows the results. Firstly, we observe that methods which learn from enclosing subgraphs (WLK, WLNM and SEAL) generally perform much better than predefined heuristics. This indicates that the learned “heuristics” are better at capturing the network properties than manually designed ones. Among learning-based methods, SEAL has the best performance, demonstrating GNN’s superior graph feature learning ability over graph kernels and fully-connected neural networks. From the results on Power and Router, we can see that although existing heuristics perform similarly to random guess, learning-based methods still maintain high performance. This suggests that we can even discover new “heuristics” for networks where no existing heuristics work.

Data MF SBM N2V LINE SPC VGAE SEAL
USAir 94.08±0.80 94.85±1.14 91.44±1.78 81.47±10.71 74.22±3.11 89.28±1.99 97.09±0.70
NS 74.55±4.34 92.30±2.26 91.52±1.28 80.63±1.90 89.94±2.39 94.04±1.64 97.71±0.93
PB 94.30±0.53 93.90±0.42 85.79±0.78 76.95±2.76 83.96±0.86 90.70±0.53 95.01±0.34
Yeast 90.28±0.69 91.41±0.60 93.67±0.46 87.45±3.33 93.25±0.40 93.88±0.21 97.20±0.64
C.ele 85.90±1.74 86.48±2.60 84.11±1.27 69.21±3.14 51.90±2.57 81.80±2.18 89.54±2.04
Power 50.63±1.10 66.57±2.05 76.22±0.92 55.63±1.47 91.78±0.61 71.20±1.65 84.18±1.82
Router 78.03±1.63 85.65±1.93 65.46±0.86 67.15±2.10 68.79±2.42 61.51±1.22 95.68±1.22
E.coli 93.76±0.56 93.82±0.41 90.82±1.49 82.38±2.19 94.92±0.32 90.81±0.63 97.22±0.28
Table 2: Comparison with latent feature methods (AUC).

Comparison to latent feature methods   Next we compare SEAL with six state-of-the-art latent feature methods: matrix factorization (MF), stochastic block model (SBM) [18], node2vec (N2V) [20], LINE [21], spectral clustering (SPC), and variational graph auto-encoder (VGAE) [36]. Among them, VGAE uses a GNN too. Please note the difference between VGAE and SEAL: VGAE uses a node-level GNN to learn node embeddings that best reconstruct the network, while SEAL uses a graph-level GNN to classify enclosing subgraphs. Therefore, VGAE still belongs to latent feature methods. For SEAL, we additionally include the 128-dimensional node2vec embeddings in the node information matrix $X$. Since the datasets do not have node attributes, explicit features are not included.

Table 2 shows the results. As we can see, SEAL shows significant improvement over latent feature methods. One reason is that SEAL learns from both graph structures and latent features simultaneously, thus augmenting those methods that only use latent features. We observe that SEAL with node2vec embeddings outperforms pure node2vec by large margins. This implies that network embeddings alone may not be able to capture the most useful link prediction information located in the local structures. It is also interesting that compared to SEAL without node2vec embeddings (Table 1), joint learning does not always improve the performance. More experiments and discussion are included in Appendix F.

6 Conclusions

Learning link prediction heuristics automatically is a new field. In this paper, we presented theoretical justifications for learning from local enclosing subgraphs. In particular, we proposed a $\gamma$-decaying theory to unify a wide range of high-order heuristics and proved their approximability from local subgraphs. Motivated by the theory, we proposed a novel link prediction framework, SEAL, to simultaneously learn from local enclosing subgraphs, embeddings and attributes based on graph neural networks. Experimentally we showed that SEAL achieved unprecedentedly strong performance in comparison to various heuristics, latent feature methods, and network embedding algorithms. We hope SEAL can not only inspire link prediction research, but also open up new directions for other relational machine learning problems such as knowledge graph completion and recommender systems.

Acknowledgments

The work is supported in part by the III-1526012 and SCH-1622678 grants from the National Science Foundation and grant 1R21HS024581 from the National Institute of Health.

References

  • Liben-Nowell and Kleinberg [2007] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019–1031, 2007.
  • Adamic and Adar [2003] Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social networks, 25(3):211–230, 2003.
  • Koren et al. [2009] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, (8):30–37, 2009.
  • Nickel et al. [2016] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
  • Oyetunde et al. [2016] Tolutola Oyetunde, Muhan Zhang, Yixin Chen, Yinjie Tang, and Cynthia Lo. Boostgapfill: Improving the fidelity of metabolic network reconstructions through integrated constraint and pattern-based methods. Bioinformatics, 2016.
  • Lü and Zhou [2011] Linyuan Lü and Tao Zhou. Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications, 390(6):1150–1170, 2011.
  • Barabási and Albert [1999] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
  • Zhou et al. [2009] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623–630, 2009.
  • Brin and Page [2012] Sergey Brin and Lawrence Page. Reprint of: The anatomy of a large-scale hypertextual web search engine. Computer networks, 56(18):3825–3833, 2012.
  • Jeh and Widom [2002] Glen Jeh and Jennifer Widom. Simrank: a measure of structural-context similarity. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 538–543. ACM, 2002.
  • Kovács et al. [2018] István A Kovács, Katja Luck, Kerstin Spirohn, Yang Wang, Carl Pollis, Sadie Schlabach, Wenting Bian, Dae-Kyum Kim, Nishka Kishore, Tong Hao, et al. Network-based prediction of protein interactions. bioRxiv, page 275529, 2018.
  • Zhang and Chen [2017] Muhan Zhang and Yixin Chen. Weisfeiler-lehman neural machine for link prediction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 575–583. ACM, 2017.
  • Bruna et al. [2013] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
  • Duvenaud et al. [2015] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, pages 2224–2232, 2015.
  • Kipf and Welling [2016a] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016a.
  • Niepert et al. [2016] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014–2023, 2016.
  • Zhang et al. [2018a] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In AAAI, pages 4438–4445, 2018a.
  • Airoldi et al. [2008] Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9(Sep):1981–2014, 2008.
  • Perozzi et al. [2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
  • Grover and Leskovec [2016] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
  • Tang et al. [2015] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077. International World Wide Web Conferences Steering Committee, 2015.
  • Qiu et al. [2017] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. arXiv preprint arXiv:1710.02971, 2017.
  • Nickel et al. [2014] Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Reducing the rank in relational factorization models by including observable patterns. In Advances in Neural Information Processing Systems, pages 1179–1187, 2014.
  • Zhao et al. [2017] He Zhao, Lan Du, and Wray Buntine. Leveraging node attributes for incomplete relational data. In International Conference on Machine Learning, pages 4072–4081, 2017.
  • Li et al. [2015] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
  • Dai et al. [2016] Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In Proceedings of The 33rd International Conference on Machine Learning, pages 2702–2711, 2016.
  • Gilmer et al. [2017] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
  • Lao and Cohen [2010] Ni Lao and William W Cohen. Relational retrieval using a combination of path-constrained random walks. Machine learning, 81(1):53–67, 2010.
  • Katz [1953] Leo Katz. A new status index derived from sociometric analysis. Psychometrika, 18(1):39–43, 1953.
  • Jeh and Widom [2003] Glen Jeh and Jennifer Widom. Scaling personalized web search. In Proceedings of the 12th international conference on World Wide Web, pages 271–279. Acm, 2003.
  • Chen et al. [2004] Yen-Yu Chen, Qingqing Gan, and Torsten Suel. Local methods for estimating pagerank values. In Proceedings of the thirteenth ACM international conference on Information and knowledge management, pages 381–389. ACM, 2004.
  • Jia et al. [2010] Xu Jia, Hongyan Liu, Li Zou, Jun He, Xiaoyong Du, and Yuanzhe Cai. Local methods for estimating simrank score. In Web Conference (APWEB), 2010 12th International Asia-Pacific, pages 157–163. IEEE, 2010.
  • Bar-Yossef and Mashiach [2008] Ziv Bar-Yossef and Li-Tal Mashiach. Local approximation of pagerank and reverse pagerank. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 279–288. ACM, 2008.
  • Shervashidze et al. [2011] Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(Sep):2539–2561, 2011.
  • Monti et al. [2017] Federico Monti, Michael Bronstein, and Xavier Bresson. Geometric matrix completion with recurrent multi-graph neural networks. In Advances in Neural Information Processing Systems, pages 3700–3710, 2017.
  • Kipf and Welling [2016b] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016b.
  • Luxburg et al. [2010] Ulrike V Luxburg, Agnes Radl, and Matthias Hein. Getting lost in space: Large sample analysis of the resistance distance. In Advances in Neural Information Processing Systems, pages 2622–2630, 2010.
  • Ribeiro et al. [2017] Leonardo FR Ribeiro, Pedro HP Saverese, and Daniel R Figueiredo. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 385–394. ACM, 2017.
  • Hamilton et al. [2017] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1025–1035, 2017.
  • Lai et al. [2017] Yi-An Lai, Chin-Chi Hsu, Wen Hao Chen, Mi-Yen Yeh, and Shou-De Lin. Prune: Preserving proximity and global ranking for network embedding. In Advances in Neural Information Processing Systems, pages 5263–5272, 2017.
  • Duran and Niepert [2017] Alberto Garcia Duran and Mathias Niepert. Learning graph representations with embedding propagation. In Advances in Neural Information Processing Systems, pages 5125–5136, 2017.
  • Koren [2008] Yehuda Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 426–434. ACM, 2008.
  • Rendle [2010] Steffen Rendle. Factorization machines. In 10th IEEE International Conference on Data Mining (ICDM), pages 995–1000. IEEE, 2010.
  • Batagelj and Mrvar [2006] Vladimir Batagelj and Andrej Mrvar. http://vlado.fmf.uni-lj.si/pub/networks/data/, 2006.
  • Newman [2006] Mark EJ Newman. Finding community structure in networks using the eigenvectors of matrices. Physical review E, 74(3):036104, 2006.
  • Ackland et al. [2005] Robert Ackland et al. Mapping the us political blogosphere: Are conservative bloggers more prominent? In BlogTalk Downunder 2005 Conference, Sydney. BlogTalk Downunder 2005 Conference, Sydney, 2005.
  • Von Mering et al. [2002] Christian Von Mering, Roland Krause, Berend Snel, Michael Cornell, Stephen G Oliver, Stanley Fields, and Peer Bork. Comparative assessment of large-scale data sets of protein–protein interactions. Nature, 417(6887):399–403, 2002.
  • Watts and Strogatz [1998] Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440–442, 1998.
  • Spring et al. [2004] Neil Spring, Ratul Mahajan, David Wetherall, and Thomas Anderson. Measuring isp topologies with rocketfuel. IEEE/ACM Transactions on networking, 12(1):2–16, 2004.
  • Zhang et al. [2018b] Muhan Zhang, Zhicheng Cui, Shali Jiang, and Yixin Chen. Beyond link prediction: Predicting hyperlinks in adjacency space. In AAAI, pages 4430–4437, 2018b.
  • Aicher et al. [2015] Christopher Aicher, Abigail Z Jacobs, and Aaron Clauset. Learning latent block structure in weighted networks. Journal of Complex Networks, 3(2):221–248, 2015.
  • Rendle [2012] Steffen Rendle. Factorization machines with libfm. ACM Transactions on Intelligent Systems and Technology (TIST), 3(3):57, 2012.
  • Fan et al. [2008] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. Liblinear: A library for large linear classification. Journal of machine learning research, 9(Aug):1871–1874, 2008.
  • Vishwanathan et al. [2010] S Vichy N Vishwanathan, Nicol N Schraudolph, Risi Kondor, and Karsten M Borgwardt. Graph kernels. Journal of Machine Learning Research, 11(Apr):1201–1242, 2010.
  • Sugiyama and Borgwardt [2015] Mahito Sugiyama and Karsten Borgwardt. Halting in random walk kernels. In Advances in neural information processing systems, pages 1639–1647, 2015.
  • Costa and De Grave [2010] Fabrizio Costa and Kurt De Grave. Fast neighborhood subgraph pairwise distance kernel. In Proceedings of the 26th International Conference on Machine Learning, pages 255–262. Omnipress, 2010.
  • Kriege and Mutzel [2012] Nils Kriege and Petra Mutzel. Subgraph matching kernels for attributed graphs. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1015–1022, 2012.
  • Borgwardt and Kriegel [2005] Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In Data Mining, Fifth IEEE International Conference on, pages 8–pp. IEEE, 2005.
  • Neumann et al. [2016] Marion Neumann, Roman Garnett, Christian Bauckhage, and Kristian Kersting. Propagation kernels: efficient graph kernels from propagated information. Machine Learning, 102(2):209–245, 2016.
  • Leskovec and Krevl [2015] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. 2015.
  • Zafarani and Liu [2009] Reza Zafarani and Huan Liu. Social computing data repository at ASU, 2009. URL http://socialcomputing.asu.edu.
  • Mahoney [2011] Matt Mahoney. Large text compression benchmark, 2011.
  • Stark et al. [2006] Chris Stark, Bobby-Joe Breitkreutz, Teresa Reguly, Lorrie Boucher, Ashton Breitkreutz, and Mike Tyers. Biogrid: a general repository for interaction datasets. Nucleic acids research, 34(suppl_1):D535–D539, 2006.

Appendix A More about the three types of features for link prediction

In this section, we discuss more about the differences among the three types of commonly used features for link prediction: graph structure features, latent features, and explicit features.

Graph structure features are located inside the observed node and edge structures of the network, and can be directly observed and computed. Link prediction heuristics belong to graph structure features. We show eight popular heuristics in Table 3. In addition to link prediction heuristics, node centrality scores (degree, closeness, betweenness, PageRank, eigenvector, hubs, etc.), graphlets, network motifs, etc., all belong to graph structure features. Although effective in many domains, these predefined graph structure features are handcrafted – they only capture a small set of structure patterns, lacking the ability to express general structure patterns underlying different networks. Considering deep neural networks' success in feature learning, a natural question to ask is whether we can automatically learn such features, no longer relying on predefined ones.

Graph structure features are inductive, meaning that these features are not associated with a particular node or network. For example, the common neighbor heuristic between any pair of nodes $x$ and $y$ is consistently calculated by counting the number of their common one-hop neighbors, invariant to where $x$ and $y$ are located. Thus, graph structure features are transferrable to new nodes and new networks. This is in contrast to latent features, which are often transductive – a change of the network structure will require a complete retraining to get the latent features again.

Name Formula Order
common neighbors $|\Gamma(x) \cap \Gamma(y)|$ first
Jaccard $\frac{|\Gamma(x) \cap \Gamma(y)|}{|\Gamma(x) \cup \Gamma(y)|}$ first
preferential attachment $|\Gamma(x)| \cdot |\Gamma(y)|$ first
Adamic-Adar $\sum_{z \in \Gamma(x) \cap \Gamma(y)} \frac{1}{\log |\Gamma(z)|}$ second
resource allocation $\sum_{z \in \Gamma(x) \cap \Gamma(y)} \frac{1}{|\Gamma(z)|}$ second
Katz $\sum_{l=1}^{\infty} \beta^l |\mathrm{walks}^{\langle l \rangle}(x, y)|$ high
PageRank $[\pi_x]_y + [\pi_y]_x$ high
SimRank $\gamma \frac{\sum_{a \in \Gamma(x)} \sum_{b \in \Gamma(y)} s(a, b)}{|\Gamma(x)| \cdot |\Gamma(y)|}$ high
Table 3: Popular heuristics for link prediction, see [1] for details.

Latent features are latent properties or representations of nodes, often obtained by factorizing a specific matrix derived from a network, such as the adjacency matrix or the Laplacian matrix. Through factorization, a low-dimensional embedding is learned for each node. Latent features focus more on global properties and long range effects, because the network’s matrix is treated as a whole during factorization. Latent features cannot capture structural similarities between nodes [38], and usually need an extremely large dimension to express some simple heuristics [23]. Latent features are also transductive. They cannot be transferred to new nodes or new networks. They are also less interpretable than graph structure features.

Network embedding methods [19, 21, 20, 39, 40, 41] have gained great popularity recently. They learn low-dimensional representations for nodes too. Recently, it was shown that network embedding methods (including DeepWalk [19], LINE [21], and node2vec [20]) implicitly factorize some matrix representation of a network [22]. For example, DeepWalk approximately factorizes $\log\!\big(\mathrm{vol}(G)\,\big(\tfrac{1}{T}\sum_{r=1}^{T}(D^{-1}A)^r\big)\,D^{-1}\big) - \log b$, where $A$ is the adjacency matrix of the network $G$, $D$ is the diagonal degree matrix, $T$ is skip-gram's window size, and $b$ is the number of negative samples. For LINE and node2vec, such matrices also exist. Since network embedding methods also factorize matrix representations of networks, we may regard them as learning more expressive latent features through factorizing some more informative matrices.

Explicit features are often given by continuous or discrete node attribute vectors. In principle, any side information about the network other than its structure can be seen as explicit features. For example, in citation networks, word distributions are explicit features of document nodes. In social networks, a user's profile information is also an explicit feature (however, their friendship information belongs to graph structure features).

These three types of features are largely orthogonal to each other. Many papers have considered using them together for link prediction [42, 43, 23, 24] to improve the performance of single-feature-based methods.

Appendix B More discussion about node labeling

The necessity of structural node labels for enclosing subgraphs comes from the fact that, unlike ordinary graphs, enclosing subgraphs intrinsically have a directionality. The center of an enclosing subgraph consists of the two nodes $x$ and $y$ between which the target link is located. Outward from the center, other nodes have larger and larger distances to $x$ and $y$. Node labeling is to mark such structural differences, thus providing additional structural information to facilitate GNN training.

When designing a node labeling for enclosing subgraphs, we always want to ensure that the target nodes $x$ and $y$ have a distinct label so that the GNN can distinguish the target link to predict from other edges. Secondly, we want the node labels to reflect nodes' relative positions in their enclosing subgraph. This relative position can be intuitively described by a node $i$'s double-radius with respect to $x$ and $y$, i.e., $(d(i, x), d(i, y))$.

We restate our Double-Radius Node Labeling (DRNL) algorithm here. First, assign label 1 to $x$ and $y$. Then, for any node $i$ with $(d(i, x), d(i, y)) = (1, 1)$, assign label 2. Nodes with double-radius $(1, 2)$ or $(2, 1)$ get label 3. Nodes with double-radius $(1, 3)$ or $(3, 1)$ get 4. Nodes with $(2, 2)$ get 5. Nodes with $(1, 4)$ or $(4, 1)$ get 6. Nodes with $(2, 3)$ or $(3, 2)$ get 7. So on and so forth. Our DRNL not only satisfies the above criteria, but also attains the additional benefits that for nodes $i$ and $j$:

1) if $d(i, x) + d(i, y) \neq d(j, x) + d(j, y)$, then $d(i, x) + d(i, y) < d(j, x) + d(j, y) \Leftrightarrow f_l(i) < f_l(j)$;

2) if $d(i, x) + d(i, y) = d(j, x) + d(j, y)$, then $d(i, x) \cdot d(i, y) < d(j, x) \cdot d(j, y) \Leftrightarrow f_l(i) < f_l(j)$.

That is, the magnitude of node labels also reflects their distance to the center. Nodes with a smaller arithmetic mean distance to the target nodes get smaller labels. If two nodes have the same arithmetic mean distance, the node with a smaller geometric mean distance to the target nodes gets a smaller label. Note that these additional benefits will not be available under one-hot encoding of node labels, since the magnitude information is lost after one-hot encoding. However, such a labeling is potentially useful when node labels are directly used for training, or used to rank the nodes. Furthermore, our node labeling has a perfect hashing (10) which allows closed-form computation.

We present a lookup table for DRNL and an example labeled subgraph in Figure 2. Note that when calculating $d(i, x)$, we temporarily remove $y$ from the subgraph, and vice versa. This is because we aim to use the pure distance between $i$ and $x$ without the influence of $y$. If we do not remove $y$, $d(i, x)$ will be upper bounded by $d(i, y) + d(y, x)$, obscuring the "true distance" between $i$ and $x$.

Figure 2: Double-Radius Node Labeling.

Our node labeling algorithm is different from the Weisfeiler-Lehman algorithm used in WLNM [12]. In WLNM, node labeling is for defining a node order in adjacency matrices – the labels are not really input to machine learning models. To rank nodes with least ties, the node labels should be as fine as possible in WLNM. In comparison, the node labels in SEAL need not be very fine, as their purpose is for indicating nodes’ different roles within the enclosing subgraph, not for ranking nodes. In addition, node labels in SEAL are encoded into node information matrices and input to machine learning models.

Appendix C Dataset details

USAir [44] is a network of US Air lines with 332 nodes and 2,126 edges. The average node degree is 12.81. NS [45] is a collaboration network of researchers in network science with 1,589 nodes and 2,742 edges. The average node degree is 3.45. PB [46] is a network of US political blogs with 1,222 nodes and 16,714 edges. The average node degree is 27.36. Yeast [47] is a protein-protein interaction network in yeast with 2,375 nodes and 11,693 edges. The average node degree is 9.85. C.ele [48] is a neural network of C. elegans with 297 nodes and 2,148 edges. The average node degree is 14.46. Power [48] is an electrical grid of western US with 4,941 nodes and 6,594 edges. The average node degree is 2.67. Router [49] is a router-level Internet with 5,022 nodes and 6,258 edges. The average node degree is 2.49. E.coli [50] is a pairwise reaction network of metabolites in E. coli with 1,805 nodes and 14,660 edges. The average node degree is 12.55.

Appendix D Additional details about baselines

Hyperparameters of heuristic and latent feature methods   Most hyperparameters are inherited from the original paper of each method. For Katz, we set the damping factor $\beta$ to 0.001. For PageRank, we set the damping factor $\alpha$ to 0.85. For SimRank, we set $\gamma$ to 0.8. For the stochastic block model (SBM), we use the implementation of [51] with a latent group number of 12. For matrix factorization (MF), we use the libFM [52] software with the default parameters. For node2vec, LINE, and spectral clustering, we first generate 128-dimensional embeddings from the observed networks with the default parameters of each software. Then, we use the Hadamard product of two nodes' embeddings as a link's embedding as suggested in [20], and train a logistic regression model with Liblinear [53] using automatic hyperparameter selection. For VGAE, we use its default setting.

WLNM   Weisfeiler-Lehman Neural Machine (WLNM) [12] is a recent link prediction method that learns general graph structure features. It achieves state-of-the-art performance on various networks, outperforming all handcrafted heuristics. WLNM has three steps: enclosing subgraph extraction, subgraph pattern encoding, and neural network training. In the enclosing subgraph extraction step, for each node pair $(x, y)$, WLNM iteratively extracts $x$ and $y$'s one-hop neighbors, two-hop neighbors, and so on, until the enclosing subgraph has more than $K$ vertices, where $K$ is a user-defined integer. In the subgraph pattern encoding step, WLNM uses the Weisfeiler-Lehman algorithm to define an order for nodes within each enclosing subgraph, so that the neural network can read different subgraphs' nodes in a consistent order and learn meaningful patterns. To unify the sizes of the enclosing subgraphs, after getting the vertex order, the last few vertices are deleted so that all the truncated enclosing subgraphs have the same size $K$. These truncated enclosing subgraphs are reordered and their fixed-size adjacency matrices are fed into the fully-connected neural network to train a link prediction model. Due to the truncation, WLNM cannot consistently learn from each link's full $h$-hop neighborhood. The loss of structural information limits WLNM's performance and restricts it from learning complete $h$-order graph structure features. Following [12], we use $K = 10$ (the best performing $K$) in our experiments.

Heuristics Latent features WLK WLNM SEAL
Graph structure features Yes No Yes Yes Yes
Learn from full $h$-hop No n/a Yes No Yes
Latent/explicit features No Yes No No Yes
Model n/a LR/inner product SVM NN GNN
Table 4: Comparison of different link prediction methods

WLK   The Weisfeiler-Lehman graph kernel (WLK) [34] is a state-of-the-art graph kernel. Graph kernels make kernel machines feasible for graph classification by defining some positive semidefinite graph similarity scores. Most graph kernels measure graph similarity by decomposing graphs into small substructures and adding up the pair-wise similarities between these components. Common types of substructures include walks [54, 55], subgraphs [56, 57], paths [58], and subtrees [34, 59]. WLK is based on counting common rooted subtrees between two graphs. In our experiments, we train an SVM on the WL kernel matrix. We feed the same enclosing subgraphs as in SEAL to WLK. We search the subtree depth on 10% validation links. WLK does not support continuous node information, but supports integer node labels. Thus, we feed the same structural node labels from (10) to WLK too.

We compare the characteristics of different link prediction methods in Table 4.

Appendix E Configuration details of SEAL

In the experiments, we use the Deep Graph Convolutional Neural Network (DGCNN) [17] as the default GNN engine of SEAL. DGCNN is a recent GNN architecture for graph classification. It has consistently good performance on various benchmark datasets with a single network architecture (avoiding hyperparameter tweaking). DGCNN is equipped with propagation-based graph convolution layers and a novel graph aggregation layer, called SortPooling. We illustrate the overall architecture of DGCNN in Figure 3. Given the adjacency matrix $A$ and the node information matrix $X$ of an enclosing subgraph, DGCNN uses the following graph convolution layer:

$$Z = \sigma\big(\tilde{D}^{-1}\tilde{A}XW\big), \qquad (11)$$

where $\tilde{A} = A + I$, $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $W$ is a matrix of trainable graph convolution parameters, $\sigma(\cdot)$ is an element-wise nonlinear activation function, and $Z$ is the matrix of new node states. The mechanism behind (11) is that the initial node states $X$ are first linearly transformed by multiplying $W$, and then propagated to neighboring nodes through the propagation matrix $\tilde{D}^{-1}\tilde{A}$. After graph convolution, the $i$-th row of $Z$ becomes

$$Z_{i,:} = \sigma\Big(\tfrac{1}{\tilde{D}_{ii}} \textstyle\sum_{j \in \Gamma(i) \cup \{i\}} X_{j,:}\,W\Big), \qquad (12)$$

which summarizes the node information as well as the first-order structure pattern from node $i$’s neighbors. DGCNN stacks multiple graph convolution layers (11) and concatenates each layer’s node states as the final node states, in order to extract multi-hop node features.
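A minimal PyTorch sketch of the propagation rule in (11) is shown below (dense tensors, tanh nonlinearity). It illustrates the formula rather than the DGCNN reference implementation.

```python
# One graph convolution layer implementing Z = sigma(D^{-1} (A + I) X W).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.W = nn.Linear(in_channels, out_channels, bias=False)

    def forward(self, A, X):
        A_tilde = A + torch.eye(A.size(0), device=A.device)  # add self-loops
        d = A_tilde.sum(dim=1)                               # degrees of A_tilde
        Z = self.W(X)                                        # linear transform X W
        Z = A_tilde @ Z / d.unsqueeze(1)                     # propagation D^{-1} A_tilde (X W)
        return torch.tanh(Z)
```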

Figure 3: The DGCNN architecture.

A graph aggregation layer constructs a graph-level feature vector from the individual nodes’ final states, which is then used for graph classification. The most widely used aggregation operation is summing, i.e., the nodes’ final states after graph convolution are summed up as the graph’s representation. However, the averaging effect of summing may lose much of the individual nodes’ information as well as the topological information of the graph. DGCNN instead uses a novel SortPooling layer, which sorts the final node states according to the last graph convolution layer’s output to achieve an isomorphism-invariant node ordering [17]. A max-k pooling operation is then used to unify the sizes of the sorted representations of different graphs, which enables training a traditional 1-D CNN on the node sequence.
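The SortPooling step can be pictured with the following sketch, which sorts node states by the last channel and then truncates or zero-pads to a fixed size k. The details (e.g., tie-breaking across channels) follow our reading of [17] rather than the exact implementation.

```python
# Sketch of SortPooling: order nodes by the last graph convolution channel,
# then keep exactly k rows (truncate large graphs, zero-pad small ones).
import torch

def sort_pooling(Z, k):
    """Z: (n, c) final node states; returns a fixed-size (k, c) representation."""
    order = torch.argsort(Z[:, -1], descending=True)  # sort by the last channel
    Z = Z[order]
    n, c = Z.shape
    if n >= k:
        return Z[:k]                                   # max-k pooling: keep the top-k nodes
    pad = torch.zeros(k - n, c, device=Z.device)       # pad smaller graphs with zeros
    return torch.cat([Z, pad], dim=0)
```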

We use the default setting of DGCNN, i.e., four graph convolution layers as in (11) with 32, 32, 32, and 1 channels, a SortPooling layer (with k set such that 60% of graphs have fewer than k nodes), two 1-D convolution layers (16 and 32 output channels), and a dense layer with 128 neurons; see [17] for details. We train DGCNN on the enclosing subgraphs for 50 epochs, and select the model with the smallest loss on the 10% validation data to predict the testing links.

Note that, in any positive training link’s enclosing subgraph, we should always remove the edge between the two target nodes before feeding it into the graph classification model, because this edge carries the link existence information, which is not available in any testing link’s enclosing subgraph.
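A small sketch of this step (hypothetical helper name, networkx-based):

```python
# Hide the target edge (x, y) before feeding a positive training subgraph to the GNN.
def prepare_training_subgraph(subgraph, x, y):
    g = subgraph.copy()
    if g.has_edge(x, y):
        g.remove_edge(x, y)   # remove the link being predicted so it cannot leak the label
    return g
```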

Appendix F Additional results

In this section, we present additional experimental results. We first use 90% of the observed links as training links and 10% as testing links, following the main paper’s experiments. The average precision (AP) comparison with heuristic methods is shown in Table 5, and the AP comparison with latent feature methods is shown in Table 6. Consistent with the AUC results in the main paper, SEAL shows large improvements over the baselines in AP as well.

Data CN Jaccard PA AA RA Katz PR SR ENS WLK WLNM SEAL
USAir 93.45±1.19 87.54±2.07 91.22±1.28 95.36±1.00 96.27±0.79 94.07±1.18 95.08±1.16 69.24±2.61 91.33±1.27 96.82±0.84 95.95±1.13 96.80±0.55
NS 94.39±0.96 94.44±0.93 72.85±1.88 94.46±0.93 94.46±0.93 95.05±1.08 95.11±1.04 94.98±1.02 97.68±0.36 98.79±0.40 98.81±0.49 99.06±0.37
PB 91.47±0.45 84.78±0.71 89.33±0.72 92.36±0.46 92.37±0.57 93.07±0.46 92.97±0.77 64.33±0.95 89.35±0.71 93.34±0.89 92.69±0.64 94.31±0.56
Yeast 89.34±0.62 89.15±0.67 85.36±0.85 89.53±0.63 89.55±0.63 95.23±0.39 95.47±0.43 93.42±0.64 85.54±0.85 96.82±0.35 96.40±0.38 98.33±0.37
C.ele 82.62±1.51 77.06±2.55 75.49±1.86 86.46±1.43 87.10±1.53 85.93±1.69 89.56±1.57 68.61±2.31 75.69±1.86 88.96±2.06 85.08±2.05 89.48±1.85
Power 58.77±0.88 58.77±0.89 51.93±1.16 58.76±0.89 58.76±0.90 79.82±0.91 80.56±0.91 77.02±0.93 83.63±1.37 83.02±3.19 87.16±0.77 89.55±1.29
Router 56.39±0.53 55.84±0.80 69.03±0.95 56.50±0.51 56.51±0.50 64.52±0.81 64.91±0.85 58.82±1.12 69.25±0.96 86.59±2.23 93.53±1.09 96.23±1.71
E.coli 93.49±0.38 82.42±0.59 94.04±0.33 96.05±0.25 96.72±0.25 94.83±0.30 96.41±0.33 55.01±0.86 94.11±0.33 97.25±0.42 97.50±0.23 98.03±0.20
Table 5: Comparison with heuristic methods (AP), 90% training links.
Data MF SBM N2V LINE SPC VGAE SEAL
USAir 94.36±0.79 95.08±1.10 89.71±2.97 79.70±11.76 78.07±2.92 89.27±1.29 97.13±0.80
NS 78.41±3.85 92.13±2.36 94.28±0.91 85.17±1.65 90.83±2.16 95.83±1.04 98.12±0.77
PB 93.56±0.71 93.35±0.52 84.79±1.03 78.82±2.71 86.57±0.61 90.38±0.72 94.55±0.43
Yeast 92.01±0.47 92.73±0.44 94.90±0.38 90.55±2.39 94.63±0.56 95.19±0.36 97.95±0.35
C.ele 83.63±2.09 84.66±2.95 83.12±1.90 67.51±2.72 62.07±2.40 78.32±3.49 88.81±2.32
Power 53.50±1.22 65.48±1.85 81.49±0.86 56.66±1.43 91.00±0.58 75.91±1.56 86.69±1.50
Router 82.59±1.38 84.67±1.89 68.66±1.49 71.92±1.53 73.53±1.47 70.36±0.85 95.66±1.23
E.coli 95.59±0.31 95.30±0.27 90.87±1.48 86.45±1.82 96.08±0.37 92.77±0.65 97.83±0.20
Table 6: Comparison with latent feature methods (AP), 90% training links.

To evaluate SEAL’s scalability, we show its single-GPU inference time in Table 7. As we can see, SEAL scales well: for networks with over 1E7 potential links, SEAL took less than an hour to make all the predictions. One possible way to further scale SEAL to social networks with millions of users is to first use a simple heuristic such as common neighbors to filter out the most unlikely links, and then use SEAL to make the final recommendations. Another way is to restrict the candidate friend recommendations to those users who are at most 2 or 3 hops away from the target user, which vastly reduces the number of candidate links to infer for each user and thus further increases the scalability.
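As one possible realization of this two-stage strategy (our own assumption about deployment, not an experiment from the paper), the sketch below restricts candidates to a 2-hop neighborhood, filters them by a common-neighbor threshold, and only then scores the survivors with a hypothetical seal_score function.

```python
# Two-stage recommendation sketch: cheap heuristic filtering, then SEAL scoring.
import networkx as nx

def recommend(G, user, seal_score, min_cn=2, top=10):
    # Restrict candidates to nodes within 2 hops of the target user.
    candidates = set(nx.single_source_shortest_path_length(G, user, cutoff=2)) - {user}
    candidates -= set(G.neighbors(user))
    # Keep only candidates sharing at least `min_cn` common neighbors with the user.
    candidates = [v for v in candidates
                  if len(list(nx.common_neighbors(G, user, v))) >= min_cn]
    # Score the surviving candidates with the (hypothetical) trained SEAL model.
    return sorted(candidates, key=lambda v: seal_score(G, user, v), reverse=True)[:top]
```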

USAir NS PB Yeast C.ele Power Router E.coli
Number of potential links 5.49E+04 1.26E+06 7.46E+05 2.82E+06 4.40E+04 1.22E+07 1.26E+07 1.39E+06
Inference time per link (s) 6.05E-04 2.55E-04 2.04E-04 3.96E-04 4.13E-04 1.35E-04 2.13E-04 2.40E-04
Inference time for all potential links (s) 31 321 146 1106 16 1640 2681 328
Table 7: Inference time of SEAL.
Data CN Jaccard PA AA RA Katz PR SR ENS WLK WLNM SEAL
USAir 87.93±0.43 84.82±0.52 87.59±0.50 88.61±0.40 88.73±0.39 88.91±0.51 90.57±0.62 81.09±0.59 87.71±0.50 91.93±0.71 91.42±0.95 93.23±1.46
NS 77.13±0.75 77.12±0.75 65.87±0.83 77.13±0.75 77.13±0.75 82.30±0.93 82.32±0.94 81.60±0.87 87.19±1.04 87.27±1.71 87.61±1.63 90.88±1.18
PB 86.74±0.17 83.40±0.24 89.52±0.19 87.06±0.17 87.01±0.18 91.25±0.22 92.23±0.21 81.82±0.43 89.54±0.19 92.54±0.33 90.93±0.23 93.75±0.18
Yeast 82.59±0.28 82.52±0.28 81.61±0.39 82.63±0.27 82.62±0.27 88.87±0.28 89.35±0.29 88.50±0.26 81.84±0.38 91.15±0.35 92.22±0.32 93.90±0.54
C.ele 72.29±0.82 69.75±0.86 73.81±0.97 73.37±0.80 73.42±0.82 79.99±0.59 84.95±0.58 76.05±0.80 74.11±0.96 83.29±0.89 75.72±1.33 81.16±1.52
Power 53.38±0.22 53.38±0.22 46.79±0.69 53.38±0.22 53.38±0.22 57.34±0.51 57.34±0.52 56.16±0.45 62.70±0.95 63.44±1.29 64.09±0.76 65.84±1.10
Router 52.93±0.28 52.93±0.28 55.06±0.44 52.94±0.28 52.94±0.28 54.39±0.38 54.44±0.38 54.38±0.42 55.06±0.44 71.25±4.37 86.10±0.52 86.64±1.58
E.coli 86.55±0.57 81.70±0.42 90.80±0.40 87.66±0.56 87.81±0.56 89.81±0.46 92.96±0.43 73.70±0.53 90.88±0.40 92.38±0.46 92.81±0.30 94.18±0.41
Table 8: Comparison with heuristic methods (AUC), 50% training links.
Data MF SBM N2V LINE SPC VGAE SEAL
USAir 91.28±0.71 91.68±0.66 84.63±1.58 72.51±12.19 65.42±3.41 90.09±0.94 93.36±0.67
NS 62.95±1.03 81.91±1.55 80.29±1.20 65.96±1.60 79.63±1.34 93.38±1.07 87.73±1.08
PB 93.27±0.16 92.96±0.20 79.29±0.67 75.53±1.78 78.06±1.00 90.57±0.69 93.79±0.25
Yeast 84.99±0.49 88.32±0.38 90.18±0.17 79.44±7.90 89.73±0.28 93.51±0.41 93.30±0.51
C.ele 78.49±1.73 81.83±1.44 75.53±1.23 59.46±7.08 47.30±0.91 81.51±1.69 82.33±2.31
Power 50.53±0.60 57.53±0.76 55.40±0.84 53.44±1.83 56.51±0.94 70.34±0.84 61.88±1.31
Router 77.49±0.64 74.66±1.52 62.45±0.81 62.43±3.10 53.87±1.33 62.91±0.95 85.08±1.53
E.coli 91.75±0.33 90.60±0.58 84.73±0.81 74.50±11.10 92.00±0.50 91.27±0.42 94.17±0.36
Table 9: Comparison with latent feature methods (AUC), 50% training links.
Data CN Jaccard PA AA RA Katz PR SR ENS WLK WLNM SEAL
USAir 87.60±0.45 80.35±1.26 90.29±0.45 89.39±0.39 89.54±0.36 91.29±0.36 91.93±0.50 73.04±0.84 90.47±0.45 93.34±0.51 92.54±0.81 94.11±1.08
NS 77.11±0.74 77.10±0.75 68.56±0.71 77.14±0.74 77.14±0.75 82.69±0.88 82.73±0.90 81.86±0.88 86.77±0.88 89.97±1.02 90.10±1.11 92.21±0.97
PB 85.90±0.16 78.59±0.43 88.83±0.25 87.24±0.18 87.05±0.21 91.54±0.16 91.92±0.25 70.78±0.69 88.87±0.25 92.34±0.34 91.01±0.20 93.42±0.19
Yeast 82.55±0.27 82.16±0.39 84.45±0.34 82.68±0.27 82.66±0.27 92.22±0.21 92.54±0.23 90.98±0.30 84.77±0.34 93.55±0.46 93.93±0.20 95.32±0.38
C.ele 69.82±0.74 64.04±1.02 74.20±0.65 73.40±0.77 73.33±0.96 79.94±0.79 84.15±0.86 68.45±1.17 74.62±0.64 83.20±0.90 76.12±1.08 81.01±1.51
Power 53.37±0.22 53.35±0.24 51.44±0.59 53.37±0.23 53.37±0.23 57.63±0.52 57.61±0.56 56.19±0.49 61.81±0.71 63.97±1.81 66.43±0.85 68.14±1.02
Router 52.91±0.27 52.71±0.23 65.20±0.42 52.94±0.27 52.93±0.27 60.87±0.26 61.01±0.30 58.27±0.51 65.38±0.42 75.49±3.43 86.12±0.68 87.79±1.71
E.coli 86.42±0.54 78.71±0.40 93.25±0.26 89.01±0.49 89.21±0.48 91.93±0.35 94.68±0.28 63.05±0.48 93.35±0.27 94.51±0.32 94.47±0.21 95.58±0.28
Table 10: Comparison with heuristic methods (AP), 50% training links.
Data MF SBM N2V LINE SPC VGAE SEAL
USAir 92.33±0.90 92.79±0.44 82.51±2.08 71.75±11.85 70.18±2.16 89.86±1.23 94.15±0.54
NS 66.62±0.89 84.14±1.18 86.01±0.87 71.53±0.97 81.16±1.26 95.31±0.80 90.42±0.79
PB 92.53±0.33 92.64±0.17 77.21±0.97 78.72±1.24 81.30±0.84 90.57±0.79 93.40±0.33
Yeast 87.28±0.57 90.65±0.24 92.45±0.23 83.06±9.70 92.07±0.27 94.71±0.25 94.83±0.38
C.ele 77.82±1.59 80.52±0.92 72.91±1.74 60.71±6.26 55.31±0.93 79.54±1.60 81.99±2.18
Power 52.45±0.63 57.23±0.85 60.83±0.68 55.11±3.49 59.10±1.06 74.86±0.43 65.28±1.25
Router 81.25±0.56 77.77±1.13 66.77±0.57 64.87±6.76 59.13±3.22 71.25±0.66 86.70±1.59
E.coli 94.04±0.36 93.17±0.35 85.41±0.94 75.98±14.45 94.14±0.29 93.41±0.32 95.67±0.24
Table 11: Comparison with latent feature methods (AP), 50% training links.

Next, we redo the comparisons under a 50%–50% train/test split: we randomly remove 50% of the existing links as positive testing links and use the remaining 50% as positive training links. The same numbers of negative training and testing links are sampled from the nonexistent links. The AUC results are shown in Tables 8 and 9, and the AP results are shown in Tables 10 and 11.
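A minimal sketch of such a split is given below (illustrative only; the actual experiments use the SEAL code’s splitting routine, and this sketch does not enforce that the training graph stays connected).

```python
# Sketch of a 50%-50% link split with negative sampling from nonexistent links.
import random

def split_links(G, test_ratio=0.5):
    """G: networkx graph. Returns train/test positive links and matching negatives."""
    edges = list(G.edges())
    random.shuffle(edges)
    n_test = int(len(edges) * test_ratio)
    test_pos, train_pos = edges[:n_test], edges[n_test:]

    # Sample as many nonexistent links as there are positives.
    nodes = list(G.nodes())
    need = len(train_pos) + len(test_pos)
    neg = set()
    while len(neg) < need:
        u, v = random.sample(nodes, 2)      # two distinct nodes
        if not G.has_edge(u, v):
            neg.add((u, v))
    neg = list(neg)
    train_neg, test_neg = neg[:len(train_pos)], neg[len(train_pos):]
    return train_pos, train_neg, test_pos, test_neg
```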

The results are consistent with the 90%–10% split setting: SEAL remains the best method overall. The performance gains over heuristic methods are even larger than under the 90%–10% split, indicating that SEAL is able to learn good heuristics even when the network is very incomplete. SEAL also shows a clearer advantage over WLNM. On the other hand, VGAE becomes a strong baseline when the network is sparser, achieving the best AUC on 3 out of 8 datasets. It is thus interesting to study whether replacing the node2vec embeddings in SEAL with VGAE embeddings can further improve the performance; we leave this to future work.

We further conduct experiments with the setting of the node2vec paper [20] on five networks: arXiv (18,722 nodes and 198,110 edges) [60], Facebook (4,039 nodes and 88,234 edges) [60], BlogCatalog (10,312 nodes, 333,983 edges, and 39 attributes) [61], Wikipedia (4,777 nodes, 184,812 edges, and 40 attributes) [62], and Protein-Protein Interactions (PPI) (3,890 nodes, 76,584 edges, and 50 attributes) [63]. For each network, 50% of the links are randomly removed and used as testing data, while keeping the remaining network connected. For Facebook and arXiv, all remaining links are used as positive training data. For PPI, BlogCatalog, and Wikipedia, we sample 10,000 of the remaining links as positive training data. We compare SEAL (h = 1, 10 training epochs) with node2vec, LINE, SPC, VGAE, and WLNM (K = 10). For node2vec, we use the parameters provided in [20] where available. For SEAL and VGAE, the node attributes are used, since only these two methods support explicit features.

Table 12 shows the results. As we can see, SEAL consistently outperforms all the embedding methods. On the last three networks in particular, SEAL (which itself uses node2vec embeddings) outperforms pure node2vec by large margins. These results indicate that in many cases embedding methods alone cannot capture the most useful link prediction information, while effectively combining the power of different types of features leads to much better performance. SEAL also consistently outperforms WLNM.

N2V LINE SPC VGAE WLNM SEAL
arXiv 96.18±0.40 84.64±0.03 87.00±0.14 OOM 99.19±0.03 99.40±0.14
Facebook 99.05±0.07 89.63±0.06 98.59±0.11 98.21±0.22 99.24±0.03 99.40±0.08
BlogCatalog 85.97±1.56 90.92±2.05 96.74±0.31 OOM 96.55±0.08 98.10±0.60
Wikipedia 76.59±2.06 74.44±0.66 99.54±0.04 89.74±0.18 99.05±0.03 99.63±0.05
PPI 70.31±0.79 72.82±1.53 92.27±0.22 85.86±0.43 88.79±0.38 93.52±0.37
Table 12: Comparison with network embedding methods (AUC and standard deviation, OOM: out of memory).