I. Introduction
Link prediction is a long-studied problem that attempts to predict either missing links in an incomplete graph, or links that are likely to form in the future. It has applications in discovering unknown protein interactions to speed up the discovery of new drugs, friend recommendation in social networks, knowledge graph completion, and more [adamic2003friends, liben2007link, martinez2016survey, safavi2020evaluating]. Techniques range from heuristics, such as predicting links based on the number of common neighbors between a pair of nodes, to machine learning techniques, which formulate link prediction as a binary classification problem over node pairs [hamilton2017representation, zhang2018link].

Link prediction is often evaluated via a ranking, where pairs of nodes that are not currently linked are sorted based on the "likelihood" score given by the method being evaluated [martinez2016survey]. To construct the ranking, a ground-truth test set of node pairs is built by either (1) removing a certain percentage of links from a graph at random, or (2) removing the newest links that formed in the graph, if edges have timestamps. These removed edges form the test positives, and the same number of unlinked pairs are generated at random as test negatives. The methods are then evaluated on how well they rank the test positives higher than the test negatives.
However, when link prediction is applied in practice, these ground-truth labels are not known, since that is the very question that link prediction is attempting to answer. Instead, any pair of nodes that are not currently linked could link in the future. Thus, to identify likely missing or future links, a link prediction method would need to consider $O(n^2)$ node pairs for a graph with $n$ nodes, most of which in sparse, real-world networks would turn out not to link. Proximity, on its own, is only a weak signal: sufficient to rank pairs in a balanced test set, but likely to turn up many false positives in an asymptotically skewed space, leaving the discovery of the relatively small number of missing or future links a challenging problem.
Proximity-based link prediction heuristics [liben2007link], such as Common Neighbors, could ignore some of the search space, such as nodes that are farther than two hops from each other, but this would not extend to other notions of proximity, like proximity-preserving embeddings. Duan et al. studied the problem of pruning the search space [duan2017ensemble], but formulated it as top-$k$ link prediction, which attempts to predict a small number of links, and in the process misses a large number of missing links, suffering from low recall.
The goal of this work is to develop a principled approach to choose, from the quadratic and skewed space of possible links, a set of candidate pairs for a link prediction method to make decisions about. We envision that this will allow current and future developments to be realized for link prediction in practice, where no ground-truth set is available.
Problem 1.
Given a graph $G$ and a proximity function between nodes, we seek to return a candidate set of node pairs for a link predictor to make decisions about, such that the set is significantly smaller than the quadratic search space, but contains many of the missing and future links.
Our insight to handle the vast number of negatives is to consider not just the proximity of nodes, but also their structural resemblance to observed links. We measure resemblance as the fraction of observed links that fall in inferred, graph-structural equivalence classes of node pairs. For example, Fig. 1 shows one possible grouping of nodes based on their degrees, where the resulting structural equivalence classes (the cells in the "roadmap") capture what fraction of observed links form between nodes of different degrees. Based on the roadmap, equivalence classes with a high fraction of observed edges are expected to contain more unlinked pairs than those with lower resemblance. We then employ node proximity within equivalence classes, rather than globally, which decreases false positives that are in close proximity but do not resemble observed links, and decreases false negatives that are farther away in the graph but resemble many observed edges. Moreover, to avoid computing proximities for all pairs of nodes within each equivalence class, we extend self-tuning locality sensitive hashing (LSH). Our main contributions are:

Formulation & Theoretical Connections. Going beyond the heuristic of proximity between nodes, we model the plausibility of a node pair being linked as both their proximity and their structural resemblance to observed links. Based on this insight, we propose Future Link Location Models (FLLM), which combine Proximity Models and Stochastic Block Models; and we prove that Proximity Models are a naive special case. § III

Empirical Analysis. We evaluate LinkWaldo on 13 diverse datasets from different domains, where it returns on average 22-33% more missing links than embedding-based models and 7-30% more than strong heuristics. § V
Our code is at https://github.com/GemsLab/LinkWaldo.
II. Related Work
In this paper, we focus on the understudied problem of choosing candidate pairs from the quadratic space of possible links, for link prediction methods to make predictions about. Link prediction techniques range from heuristic definitions of similarity, such as Common Neighbors [liben2007link], Jaccard Similarity [liben2007link], and Adamic/Adar [adamic2003friends], to machine learning approaches, such as latent methods, which learn low-dimensional node representations that preserve graph-structural proximity in latent space [hamilton2017representation], and GNN methods, which learn heuristics specific to each graph [zhang2018link] or attempt to reconstruct the observed adjacency matrix [kipf2016variational]. For detailed discussion of link prediction techniques, we refer readers to [liben2007link] and [martinez2016survey].
Selecting Candidate Pairs. The closest problem to ours is top-$k$ link prediction [duan2017ensemble], which attempts to take a particular link prediction method and prune its search space to directly return the highest-scoring pairs. One method [duan2017ensemble] samples multiple subgraphs to form a bagging ensemble, performs NMF on each subgraph, and returns the nodes with the largest latent factor products from each, while leveraging early stopping. The authors view their method's output as predictions rather than candidates, and thus focus on high precision at small values of $k$ relative to our setting. Another approach, the Approximate Resistance Distance Link Predictor [pachev2018fast], generates spectral node embeddings by constructing a low-rank approximation of the graph's effective resistance matrix, and applies a closest-pairs algorithm on the embeddings, predicting these as links. However, this approach does not scale to moderate embedding dimensions (e.g., the often-used dimensionality of 128 in embedding methods), and is often outperformed by the simple Common Neighbors heuristic.
A related problem is link recommendation, which seeks to identify the most relevant nodes to a query node. It has been studied in social networks for friend recommendation [song2015top], and in knowledge graphs [joshi2020searching] to pick subgraphs that are likely to contain links to a given query entity. In contrast, we focus on candidate pairs globally, not specific to a query node.
III. Theory
Let $G = (V, E)$ be a graph or network with $n = |V|$ nodes and $m = |E|$ edges, where $m \ll n^2$. The adjacency matrix of $G$ is an $n \times n$ binary matrix $A$ with element $a_{uv} = 1$ if nodes $u$ and $v$ are linked, and 0 otherwise. The set of node $u$'s neighbors is $N(u)$. We summarize the key symbols used in this paper and their descriptions in Table I.
We now formalize the problem that we seek to solve:
Problem 2.
Given a graph $G$, a proximity function $s(u, v)$ between nodes, and a budget $k$, return a set of $k$ plausible candidate node pairs for a link predictor to make decisions about.
We describe next how to define resemblance in a principled way inspired by Stochastic Block Models, introduce a unified model for link prediction methods that use the proximity of nodes to rank pairs, and describe our model, which combines resemblance and proximity to solve Problem 2.
III-A. Stochastic Block Models
Stochastic Block Models (SBMs) are generative models of networks. They model the connectivity of graphs as emerging from the community or group membership of nodes [nowicki2001estimation].
Node Grouping. A node grouping $\mathcal{G} = \{V_1, \dots, V_K\}$ is a set of groups, or subsets of the nodes, that satisfies $\bigcup_{i=1}^{K} V_i = V$. It is called a partition if it additionally satisfies $V_i \cap V_j = \emptyset$ for all $i \neq j$. Each node $u$ has a $K$-dimensional binary membership vector $g_u$, with element $g_{u,i} = 1$ if $u$ belongs to group $V_i$.
A node grouping can capture community structure, but it can also capture other graph-structural properties, like the degrees of nodes, in which case the SBM captures the compatibility of nodes w.r.t. degree, viz. degree assortativity.
Membership Indices. The membership indices of nodes $u$ and $v$ are the set of group-id pairs $\mathcal{M}(u, v) = \{(i, j) : g_{u,i} = 1 \text{ and } g_{v,j} = 1\}$.
Membership equivalence relation & classes. The membership indices form the equivalence relation $\equiv$: $(u, v) \equiv (w, x) \iff \mathcal{M}(u, v) = \mathcal{M}(w, x)$. This induces a partition $\mathcal{P}$ over all pairs of nodes (both linked and unlinked), where an equivalence class contains all node pairs with the same membership indices. We denote the equivalence class of pair $(u, v)$ as $[u, v]$.
Example 1.
If nodes are grouped by their degrees to form $\mathcal{G}$, then the membership indices of a node pair $(u, v)$ are determined by $u$ and $v$'s respective degrees. For example, in Fig. 1, the degrees of the upper circled node pair determine their equivalence class, i.e., the corresponding cell in the roadmap. Each cell of the roadmap corresponds to an equivalence class $[u, v]$.
We can now formally define an SBM:
Definition 1 (Stochastic Block Model - SBM).
Given a node grouping $\mathcal{G}$ and a weight matrix $W \in \mathbb{R}^{K \times K}$ specifying the propensity for links to form across groups, the probability that two nodes link given their group memberships is $P(a_{uv} = 1 \mid g_u, g_v) = \sigma(g_u^\top W g_v)$, where function $\sigma(\cdot)$ converts the dot product to a probability (e.g., sigmoid) [miller2009nonparametric].

The vanilla SBM [nowicki2001estimation] assigns each node to one group (i.e., the grouping is a partition and membership vectors are one-hot), in which case $g_u^\top W g_v = w_{ij}$ for $u \in V_i$ and $v \in V_j$. The overlapping SBM [miller2009nonparametric, latouche2011overlapping] is a generalization that allows nodes to belong to multiple groups, in which case membership vectors may have multiple elements set to 1, and $g_u^\top W g_v$ sums the propensities over all pairs of groups that $u$ and $v$ belong to.
Resemblance. Given an SBM with grouping $\mathcal{G}$, we define the resemblance $\rho(u, v)$ of node pair $(u, v)$ under the SBM as the percentage of the observed (training) edges that have the same group membership as $(u, v)$:
$$\rho(u, v) = \frac{|\{(w, x) \in E : (w, x) \in [u, v]\}|}{|E|} \qquad (1)$$
Example 2.
In Figure 1, the resemblance of a node pair corresponds to the density of the cell that it maps to. The high density in the border cells indicates that many low-degree nodes connect to high-degree nodes. The dense central cells indicate that mid-degree nodes connect to each other.
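As a concrete sketch, resemblance can be computed by counting observed edges per equivalence class. The helper names and the one-group-per-node simplification below are illustrative assumptions, not the paper's implementation; overlapping groupings would key on sets of group-id pairs instead.

```python
from collections import Counter

def resemblance_table(edges, group):
    """Fraction of observed edges per equivalence class (cf. Eq. (1)).

    `group(u)` maps a node to its single group id; an equivalence class
    is the unordered pair of group ids of a node pair (undirected graph).
    """
    counts = Counter(frozenset((group(u), group(v))) for u, v in edges)
    m = len(edges)
    return {cls: c / m for cls, c in counts.items()}

def resemblance(u, v, table, group):
    """Resemblance of pair (u, v): density of the cell it maps to."""
    return table.get(frozenset((group(u), group(v))), 0.0)
```

A pair in a cell holding 3 of 4 observed edges gets resemblance 0.75, regardless of its own proximity.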
III-B. Proximity Models
Proximity-based link prediction models (PM) model the connectivity of graphs based on the proximity of nodes. Some methods define the proximity of nodes with a heuristic, such as Common Neighbors (CN), Jaccard Similarity (JS), and Adamic/Adar (AA). More recent approaches learn latent similarities between nodes, capturing the proximity in latent embeddings such that nodes that are in close proximity in the graph have similar latent embeddings (e.g., high dot product) [hamilton2017representation].
Node Embedding. A node embedding $x_u \in \mathbb{R}^d$ is a real-valued, $d$-dimensional vector representation of a node $u$. We denote all the node embeddings as the matrix $X \in \mathbb{R}^{n \times d}$.
Definition 2 (Proximity Model - PM).
Given a similarity or proximity function $s(u, v)$ between nodes, the probability that nodes $u$ and $v$ link is an increasing function of their proximity: $P(a_{uv} = 1) = f(s(u, v))$, where $f$ is increasing.
Instances of the PM include the Latent Proximity Model:
$$s_{\mathrm{LaPM}}(u, v) = x_u^\top x_v \qquad (2)$$
where $x_u, x_v$ are the nodes' latent embeddings; and the Common Neighbors, Jaccard Similarity, and Adamic/Adar models:
$$s_{\mathrm{CN}}(u, v) = |N(u) \cap N(v)| \qquad (3)$$
$$s_{\mathrm{JS}}(u, v) = \frac{|N(u) \cap N(v)|}{|N(u) \cup N(v)|} \qquad (4)$$
$$s_{\mathrm{AA}}(u, v) = \sum_{w \in N(u) \cap N(v)} \frac{1}{\log |N(w)|} \qquad (5)$$
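The three heuristic scores can be sketched in a few lines, assuming `adj` maps each node to its neighbor set (names are illustrative):

```python
import math

def heuristic_scores(adj, u, v):
    """Common Neighbors, Jaccard Similarity, and Adamic/Adar (Eqs. (3)-(5))."""
    common = adj[u] & adj[v]
    union = adj[u] | adj[v]
    cn = len(common)
    js = cn / len(union) if union else 0.0
    # skip degree-1 common neighbors to avoid division by log(1) = 0
    aa = sum(1.0 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1)
    return cn, js, aa
```

Note that all three scores are zero unless $u$ and $v$ share at least one neighbor, a property exploited later to prune the search space.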
III-C. Proposed: Future Link Location Model
Unlike SBM and PM, our model, which we call the Future Link Location Model (FLLM), does not just model the probability of links, but rather where in the search space future links are likely to fall. To do so, FLLM uses a partition of the search space, and a corresponding SBM, as a roadmap that gives the number of new edges expected to fall in each equivalence class. To formalize this idea, we first define two distributions:
New and Observed Distributions. The new link distribution $P'$ and the observed link distribution $P$ capture the fraction of new and observed edges, respectively, that fall in equivalence class $[u, v]$.
Definition 3 (Future Link Location Model - FLLM).
Given an overlapping SBM with grouping $\mathcal{G}$, the expected number of new links in equivalence class $[u, v]$ is proportional to the number of observed links in $[u, v]$, and the probability of node pair $(u, v)$ linking is equal to the pair's resemblance times their proximity relative to the other pairs in $[u, v]$: $P(a_{uv} = 1) = \rho(u, v) \cdot s(u, v) / \sum_{(w, x) \in [u, v]} s(w, x)$.
FLLM employs the following theorem, which states that if a $\rho$ fraction of the observed links fall in equivalence class $[u, v]$, then in expectation, a $\rho$ fraction of the unobserved links will fall in $[u, v]$. We initially assume that the unobserved future links follow the same distribution as the observed links, as generally assumed in machine learning, i.e., the relative fraction of links in each equivalence class will be the same for future links as for observed links: $P' = P$. In the next subsection, we show that for a fixed number of new links $m'$, the error in this assumption is determined by the total variation distance between $P'$ and $P$, and hence is upper-bounded by a constant.
Theorem 1.
Given an overlapping SBM with grouping $\mathcal{G}$ inducing the partition $\mathcal{P}$ of $V \times V$ for a graph $G$, out of $m'$ new (unobserved) links $E'$, the expected number $m'_{[u,v]}$ that will fall in equivalence class $[u, v]$ and its variance are:
$$\mathbb{E}[m'_{[u,v]}] = \frac{m'}{m} \, |E \cap [u, v]| \qquad (6)$$
$$\mathrm{Var}(m'_{[u,v]}) = \frac{m'}{m} \, |E \cap [u, v]| \left( 1 - \frac{|E \cap [u, v]|}{m} \right) \qquad (7)$$
Proof.
Observe that the number of new edges that fall in equivalence class $[u, v]$, i.e., $m'_{[u,v]} = |E' \cap [u, v]|$, is a binomial random variable over $m'$ trials with success probability $P'([u, v])$. Thus, the random variable's expected value is
$$\mathbb{E}[m'_{[u,v]}] = m' \cdot P'([u, v]) \qquad (8)$$
and its variance is
$$\mathrm{Var}(m'_{[u,v]}) = m' \cdot P'([u, v]) \, (1 - P'([u, v])) \qquad (9)$$
We can derive $P'([u, v])$ via the assumption $P' = P$ and Bayes' rule:
$$P'([u, v]) = P([u, v]) = \frac{|E \cap [u, v]|}{m}.$$
Combining the last equation with Eq. (8) results directly in Eq. (6), and by substituting into Eq. (9) we obtain Eq. (7), where we used the fact that $P([u, v]) = |E \cap [u, v]| / m$. ∎
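A small sketch of this estimate, with illustrative names: given the observed edge count per equivalence class, the binomial mean and variance of Eqs. (6)-(7) follow directly.

```python
def expected_new_links(observed_per_class, m_new):
    """Expected count and variance of new links per equivalence class.

    `observed_per_class` maps a class id to its observed edge count;
    `m_new` is the number of new (unobserved) links m'.
    """
    m = sum(observed_per_class.values())
    stats = {}
    for cls, m_cls in observed_per_class.items():
        p = m_cls / m                 # P([u,v]), estimated from observed edges
        mu = m_new * p                # Eq. (6): binomial mean
        var = m_new * p * (1 - p)     # Eq. (7): binomial variance
        stats[cls] = (mu, var)
    return stats
```

For instance, a class holding 80% of observed edges is expected to receive 80% of new links, with a variance that shrinks as the class dominates or vanishes.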
III-D. Guarantees on Error
While this derivation assumed that the future link distribution $P'$ is the same as the observed link distribution $P$, we now show that for a fixed $m'$, the amount of error incurred when this assumption does not hold is entirely dependent on the total variation distance between $P'$ and $P$, and hence is upper-bounded by $2m' \, \delta(P', P)$.
Total Variation Distance. The total variation distance [tsybakov2008introduction] between $P'$ and $P$, which is a metric, is defined as
$$\delta(P', P) = \frac{1}{2} \sum_{[u,v] \in \mathcal{P}} \left| P'([u, v]) - P([u, v]) \right| \qquad (10)$$
Total Error. The total error made in the approximation of $\mathbb{E}[m'_{[u,v]}]$ using Eq. (6) is defined as
$$\mathrm{err} = \sum_{[u,v] \in \mathcal{P}} \left| \mathbb{E}^*[m'_{[u,v]}] - m' \cdot P([u, v]) \right| \qquad (11)$$
where $\mathbb{E}^*[m'_{[u,v]}] = m' \cdot P'([u, v])$ is the true expected value regardless of whether or not $P' = P$ holds.
Theorem 2.
The total error incurred over $\mathcal{P}$ in the computation of the expected number of new edges that fall in each $[u, v]$ is an increasing function of the number of new pairs $m'$ and the total variation distance between $P'$ and $P$. Furthermore, it has the following upper bound:
$$\mathrm{err} = 2m' \, \delta(P', P) \le m' \sqrt{2 \, \mathrm{KL}(P' \,\|\, P)} \qquad (12)$$
Proof.
From the definition of total error in Eq. (11), the first equality of Eq. (12) holds from [levin2017markov]. The inequality holds based on the fact that $\delta(P', P)$ ranges in $[0, 1]$, and Pinsker's inequality [tsybakov2008introduction], which upper-bounds $\delta(P', P)$ via the KL-divergence: $\delta(P', P) \le \sqrt{\frac{1}{2} \mathrm{KL}(P' \,\|\, P)}$. ∎
III-E. Proximity Model as a Special Case of FLLM
The PM, defined in § III-B, is a special case of FLLM, where FLLM's grouping contains just one group, $\mathcal{G} = \{V\}$. That is, if the nodes are not grouped, then the models give the same result. Thus, FLLM's improvement over LaPM is a result of using structurally-meaningful groupings over the graph. The following theorem states this result formally.
Theorem 3.
For the single node grouping $\mathcal{G} = \{V\}$, PM and FLLM give the same ranking of pairs: $s(u, v) \ge s(w, x) \iff P_{\mathrm{FLLM}}(a_{uv} = 1) \ge P_{\mathrm{FLLM}}(a_{wx} = 1)$.
Proof.
Since $\mathcal{G} = \{V\}$, all pairs have the same membership indices, and since $E \subseteq V \times V$, all observed edges fall in the lone equivalence class $[u, v] = V \times V$. Thus $\rho(u, v) = 1$ for every pair. Since there is only one equivalence class, the denominator in Dfn. 3 is equal to a constant $c = \sum_{(w, x) \in V \times V} s(w, x)$. Therefore, $P_{\mathrm{FLLM}}(a_{uv} = 1) = s(u, v) / c$, and both models are increasing functions of $s(u, v)$. ∎
IV. Method
We solve Problem 2 by using our FLLM model in a new method, LinkWaldo, shown in Fig. 1, which has four steps:

S1: Generate node groupings and equivalence classes.

S2: Map the search space, deciding how many candidate pairs to return from each equivalence class.

S3: Search each equivalence class, returning directly the highest-proximity pairs, and stashing some slightly lower-proximity pairs in a global pool.

S4: Choose the best pairs from the global pool to augment those returned from each equivalence class.
We discuss these steps next, give pseudocode in Alg. 1, and discuss time complexity in the appendix.
IV-A. Generating Node Groupings (S1)
In theory, we would like to infer the groupings that directly maximize the likelihood of the observed adjacency matrix. However, the techniques for inferring these groupings (and the corresponding node membership vectors) are computationally intensive, relying on Markov chain Monte Carlo (MCMC) methods [Mehta19sbmgnn]. Indeed, these methods are generally applied in networks with only up to a few hundred nodes [miller2009nonparametric]. In cases where $n$ is large enough that considering all node pairs would be computationally infeasible, so would be MCMC. Instead, LinkWaldo uses a fixed grouping, though it is agnostic to how the nodes are grouped. We discuss a number of sensible groupings below, and discuss how to set the number of groups in § V-D. Any other grouping can be readily used within our framework, but should be carefully chosen to lead to strong results.

Log-binned Node Degree (DG). This grouping captures degree assortativity [newman2003mixing], i.e., the extent to which low-degree nodes link with other low-degree nodes vs. high-degree nodes, by creating uniform bins in log-space (Fig. 1 shows linear bins).
Structural Embedding Clusters (SG). This grouping extends DG by clustering latent node embeddings that capture structural roles of nodes [Rossi2019FromCT].
Communities (CG). This grouping captures community structure by clustering proximity preserving latent embeddings or using community detection methods.
Multiple Groupings (MG). Any subset of these groupings, or any other groupings, can be combined into a new grouping by setting element(s) of $g_u$ to 1 for $u$'s membership in each grouping, since nodes can have overlapping group memberships.
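A minimal sketch of the DG grouping follows; the bin count and the $+1$ shift inside the logarithm are illustrative assumptions, not the paper's exact binning.

```python
import math

def log_binned_degree_groups(adj, num_bins=8):
    """Assign each node a group id by log-binning its degree (DG grouping).

    Bins are uniform in log-space over [0, log(max_degree + 1)].
    """
    max_deg = max(len(nbrs) for nbrs in adj.values())
    width = math.log(max_deg + 1) / num_bins   # uniform bin width in log-space
    return {u: min(int(math.log(len(nbrs) + 1) / width), num_bins - 1)
            for u, nbrs in adj.items()}
```

Hubs land in the top bin while the many degree-1 nodes share a low bin, which is exactly the degree-assortativity roadmap of Fig. 1.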
IV-B. Mapping the Search Space (S2)
LinkWaldo's approach to mapping the search space (i.e., identifying how many pairs to return per class $[u, v]$) follows directly from Thm. 1. LinkWaldo computes the expected number of pairs in each equivalence class based on Eq. (6), and its variance based on Eq. (7) as a measure of the uncertainty. When LinkWaldo searches each equivalence class $[u, v]$, it returns the expected number of pairs minus a standard deviation, $\mathbb{E}[m'_{[u,v]}] - \sqrt{\mathrm{Var}(m'_{[u,v]})}$, directly, and adds more pairs, up to a standard deviation past the mean, to a global pool $\mathcal{Q}$. Thus, LinkWaldo adds into the returned set $\mathcal{C}$ the $\mathbb{E}[m'_{[u,v]}] - \sqrt{\mathrm{Var}(m'_{[u,v]})}$ pairs in closest proximity in equivalence class $[u, v]$, and the next $2\sqrt{\mathrm{Var}(m'_{[u,v]})}$ closest pairs into the global pool (both expressions are rounded to the nearest integer). Node pairs that are already linked are skipped.

IV-C. Discovering Closest Pairs per Equivalence Class (S3)
We now discuss how LinkWaldo discovers the $k_{[u,v]}$ closest unlinked pairs within each equivalence class (Fig. 1), where $k_{[u,v]}$ is determined in step S2 based on the expected number of pairs in the equivalence class and its variance (uncertainty).
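The step-S2 allocation that produces these per-class targets can be sketched as follows (illustrative names; the mean-minus-one-standard-deviation split and the rounding follow step S2):

```python
import math

def allocate_budget(stats):
    """Split each class's allocation from its (mean, variance) estimate:
    mean - stdev pairs are returned directly; the next 2*stdev closest
    pairs are stashed for the global pool."""
    plan = {}
    for cls, (mu, var) in stats.items():
        sd = math.sqrt(var)
        direct = max(round(mu - sd), 0)      # returned directly from the class
        pool = round(2 * sd)                 # candidates for the global pool
        plan[cls] = (direct, pool)
    return plan
```

Classes with higher variance thus contribute relatively more of their pairs to the pool, where they compete globally on proximity.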
Problem 3.
Given an equivalence class $[u, v]$, return the $k_{[u,v]}$ unlinked pairs in $[u, v]$ in closest proximity $s(\cdot, \cdot)$, where $k_{[u,v]}$ is determined based on S2.
For equivalence classes smaller than some tolerance $\tau$, i.e., $|[u, v]| \le \tau$, it is feasible to search all pairs of nodes exhaustively. However, for $|[u, v]| > \tau$, this should be avoided to make the search practical. We first discuss this case when using the dot product similarity in Eq. (2), and then discuss it for the other similarity models (CN, JS, and AA) given by Eqs. (3)-(5). Finally, we introduce a refinement that improves the robustness of LinkWaldo against errors in proximity.
IV-C1. Avoiding Exhaustive Search for Dot Product
In the case of dot product, we use Locality Sensitive Hashing (LSH) [wang2014hashing] to avoid searching all pairs. LSH functions have the property that the probability of two items colliding is a function of their similarity. We use the following fact:
Fact 1.
The equivalence class $[u, v]$ can be decomposed into the Cartesian product of two node sets, $[u, v] = S_u \times S_v$, where $S_u$ contains the nodes with the same group memberships as $u$, and $S_v$ the nodes with the same group memberships as $v$.
At a high level, to solve Prob. 3, we hash the embedding of each node in $S_u$ and $S_v$ using a locality sensitive hash function. We design the hash function, described next, such that the number of pairs that map to the same bucket is greater than $k_{[u,v]}$, but as small as possible, to maximally prune pairs. Once the embeddings are hashed, we search the pairs in each hash bucket for the $k_{[u,v]}$ closest. We normalize the embeddings so that dot product is equivalent to cosine similarity, and use the Random Hyperplane LSH family [charikar2002similarity].

Definition 4 (Random Hyperplane Hash Family).
The random hyperplane hash family is the set of hash functions $\mathcal{H} = \{h_a : h_a(x) = \mathrm{sign}(a^\top x)\}$, where $a$ is a random $d$-dimensional Gaussian unit vector and $x$ is a node embedding.
This hash family is well-known to provide the property that the probability of two vectors colliding is a function of the angle between them [bawa2005lsh]:
$$P(h_a(x_u) = h_a(x_v)) = 1 - \frac{\theta(x_u, x_v)}{\pi} = 1 - \frac{\arccos(x_u^\top x_v)}{\pi},$$
where the last equality holds due to normalized embeddings.
To lower the false positive rate, it is conventional to form a new hash function by sampling $b$ hash functions from $\mathcal{H}$ and concatenating the hash codes: $h(x) = (h_1(x), \dots, h_b(x))$. The new hash function is from another LSH family:
Definition 5 (AND-Random Hyperplane Hash Family).
The AND-random hyperplane hash family $\mathcal{H}_b$ is the set of hash functions $h(x) = (h_1(x), \dots, h_b(x))$, where $h$ is formed by concatenating $b$ randomly sampled hash functions $h_i \in \mathcal{H}$, for some $b \in \mathbb{N}$.
Since the hash functions are sampled randomly from $\mathcal{H}$,
$$P(h(x_u) = h(x_v)) = \left( 1 - \frac{\arccos(x_u^\top x_v)}{\pi} \right)^b.$$
Only vectors that are not split by all $b$ random hyperplanes end up with the same hash codes, so this process lowers the false positive rate. However, it also increases the false negative rate for the same reason. The conventional LSH scheme then repeats the process $t$ times, computing the dot product exactly over all pairs that match in at least one $b$-dimensional hash code, in order to lower the false negative rate. The challenge of this approach is determining how to set $b$. To do so, we first define the hash buckets of a hash function, and their volume.
Definition 6 (Hash Buckets and Volume).
Given an equivalence class $[u, v] = S_u \times S_v$ and a hash function $h(\cdot)$, after applying $h$ to all nodes in $S_u \cup S_v$, a hash bucket $B_c = (S_u^c, S_v^c)$ consists of the subsets of nodes that mapped to hash code $c$. The set of hash buckets $\mathcal{B}$ consists of all non-empty buckets. We define the volume of the buckets as the number of pairs $(w, x)$ with $w \in S_u$ and $x \in S_v$ that landed in the same bucket:
$$\mathrm{vol}(\mathcal{B}) = \sum_{B_c \in \mathcal{B}} |S_u^c| \cdot |S_v^c|$$
Since we are after the $k_{[u,v]}$ closest pairs, we want to find a hash function such that $\mathrm{vol}(\mathcal{B}) \ge k_{[u,v]}$. But since we want to search as few pairs as possible, we seek the value of $b$ that minimizes $\mathrm{vol}(\mathcal{B})$ subject to the constraint that $\mathrm{vol}(\mathcal{B}) \ge k_{[u,v]}$.
Any hash function $h \in \mathcal{H}_b$ corresponds to a binary prefix tree, like Fig. 2. Each level of the tree corresponds to one $h_i \in \mathcal{H}$, and the leaves correspond to the buckets $\mathcal{B}$. Thus, to automatically identify the best value of $b$, we can recursively grow the tree, branching each leaf with a new random hyperplane hash function, until $\mathrm{vol}(\mathcal{B}) < k_{[u,v]}$, then undo the last branch. At that point, the depth of the tree equals $b$, and $b$ is the largest value such that $\mathrm{vol}(\mathcal{B}) \ge k_{[u,v]}$. To prevent this process from repeating indefinitely in edge cases, we halt the branching at a maximum depth $b_{\max}$. This approach is closely related to LSH Forests [bawa2005lsh], but with some key differences, which we discuss below.
Theorem 4.
Given a hash function $h \in \mathcal{H}_b$, the closest pairs in $[u, v]$ are the most likely pairs to be in the same bucket: $s(u, v) \ge s(w, x) \implies P(h(x_u) = h(x_v)) \ge P(h(x_w) = h(x_x))$.
Proof.
Since $\arccos(\cdot)$ is a decreasing function of its argument, the collision probability in § IV-C1 is an increasing function of the dot product $x_u^\top x_v$. The result follows from this. ∎
While Thm. 4 implies that closer pairs are more likely to be in the same bucket, it does not guarantee that this outcome will always happen. Thus, we repeat the process $t$ times, creating $t$ binary prefix trees, and search the pairs that fall in the same bucket in any tree for the top $k_{[u,v]}$. Setting the parameter $t$ is considered of minor importance, as long as it is sufficiently large (e.g., 10) [bawa2005lsh].
Differences from LSH Forests [bawa2005lsh]. LSH Forests are designed for nearest-neighbor search, which seeks to return the nearest neighbors to a query vector. In contrast, our approach is designed for closest-pairs search, which seeks to return the closest pairs in a set. LSH Forests grow each tree until each vector is in its own leaf; we grow each tree until we reach the target bucket volume. LSH Forests allow variable-length hash codes, since the nearest neighbors of different query vectors may be at different relative distances; all our leaves are at the same depth, so that the probability of two vectors surviving together to a leaf is an increasing function of their dot product.
IV-C2. Avoiding Exhaustive Search for Heuristics
For the heuristic definitions of proximity in Eqs. (3)-(5), there are two approaches to solving Prob. 3. The first is to construct embeddings from the CN and AA scores (this does not apply to JS). For CN, if we let the node embeddings be the corresponding rows of the adjacency matrix, i.e., $x_u = A_u$, then $x_u^\top x_v = s_{\mathrm{CN}}(u, v)$. Similarly, $x_u = A_u (\log D)^{-1/2}$, where $D$ is a diagonal matrix recording the degree of each node, yields $x_u^\top x_v = s_{\mathrm{AA}}(u, v)$. Thus, the LSH solution just described can be applied. The second approach uses the fact that all three heuristics are defined over the 1-hop neighborhoods of the nodes. Thus, to have nonzero proximity, nodes must be within 2 hops of each other, and any pairs not within 2 hops can implicitly be ignored.
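The second approach amounts to enumerating only unlinked pairs within two hops, e.g. (an illustrative sketch; `adj` maps nodes to neighbor sets):

```python
def two_hop_pairs(adj):
    """Unlinked pairs within two hops: the only pairs with nonzero
    CN/JS/AA scores."""
    pairs = set()
    for u, nbrs in adj.items():
        for w in nbrs:
            for v in adj[w]:                 # v is 2 hops from u via w
                if v != u and v not in nbrs:
                    pairs.add((min(u, v), max(u, v)))
    return pairs
```

On sparse graphs this set is far smaller than the quadratic space of all pairs.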
IV-C3. Bail-Out Refinement
To this point we have assumed that the proximity model used in LinkWaldo is highly informative and accurate. However, in reality, heuristics may not be informative for all equivalence classes, and even learned, latent proximity models can fail to encode adequate information. For instance, it is challenging to learn high-quality representations for low-degree nodes. Thus, we introduce a refinement to LinkWaldo that automatically identifies when a proximity model is uninformative in an equivalence class, and allows it to bail out of searching that equivalence class.
Proximity Model Error. The error that a proximity model makes is the probability that it gives a higher proximity to some unlinked pair $(u, v) \notin E$ than to some linked pair $(w, x) \in E$.
By this definition of error, we expect strong proximity models to mostly assign higher proximity to observed edges than to future or missing edges: $s(w, x) > s(u, v)$ for most $(w, x) \in E$ and $(u, v) \notin E$. Thus, on our way to finding the top $k_{[u,v]}$ most similar (unlinked) pairs in an equivalence class (Problem 3), we expect to encounter a majority of the observed edges (linked pairs) that fall in that class. For a user-specified error tolerance $\epsilon$, LinkWaldo will bail out and return no pairs from any equivalence class where less than a $1 - \epsilon$ fraction of its observed edges are encountered on the way to finding the $k_{[u,v]}$ most similar unlinked pairs. LinkWaldo keeps track of how many pairs were skipped by bailing out, and replaces them (after step S4) by adding the top-ranked pairs of a heuristic (e.g., AA).
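A sketch of the bail-out check (illustrative; the exact tolerance semantics, here "bail if fewer than a $1-\epsilon$ fraction of the class's observed edges rank above the $k$-th unlinked pair", are an assumption):

```python
def bail_out(scored_pairs, k, observed, eps=0.5):
    """Return the top-k unlinked pairs, or None (bail out) if too few of
    the class's observed edges rank above the k-th unlinked pair.

    `scored_pairs` maps pairs (linked and unlinked) to proximity scores;
    `observed` is the set of linked pairs in the class.
    """
    seen_observed, unlinked = 0, []
    for pair in sorted(scored_pairs, key=scored_pairs.get, reverse=True):
        if pair in observed:
            seen_observed += 1
        else:
            unlinked.append(pair)
            if len(unlinked) == k:
                break
    if observed and seen_observed / len(observed) < 1 - eps:
        return None                 # proximity model uninformative here
    return unlinked
```

If the model ranks the class's known edges below the unlinked pairs, its scores carry little signal there, and a heuristic fills the skipped slots instead.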
IV-D. Augmenting Pairs from Global Pool (S4)
Since LinkWaldo returns a standard deviation below the expected number of new pairs in each equivalence class, it chooses the remaining pairs, up to the budget $k$, from the global pool $\mathcal{Q}$. To do so, it considers pairs in descending order of the input similarity function $s(\cdot, \cdot)$, and greedily adds them to the returned set $\mathcal{C}$ until $|\mathcal{C}| = k$.
V. Evaluation
We evaluate LinkWaldo on three research questions: (RQ1) Does the set returned by LinkWaldo have high recall and precision? (RQ2) Is LinkWaldo scalable? (RQ3) How do parameters affect performance?
V-A. Data & Setup
We evaluate LinkWaldo on a large, diverse set of networks: metabolic, social, communication, and information networks. Moreover, we include datasets to evaluate in both LP scenarios: (1) returning possible missing links in static graphs and (2) returning possible future links in temporal graphs. We treat all graphs as undirected.
Metabolic. Yeast [zhang2018link], HSProtein [kunegis2013konect], and ProteinSoy [snapnets] are metabolic protein networks, where edges denote known associations between proteins in different species. Yeast contains proteins in a species of yeast, HSProtein in human beings, and ProteinSoy in Glycine max (soybeans).
Social. Facebook1 [snapnets] and Facebook2 [kunegis2013konect] capture friendships on Facebook, Reddit [snapnets] encodes links between subreddits (topical discussion boards), edges in Epinions [kunegis2013konect] connect users who trust each other's opinions, MathOverflow [snapnets] captures comments and answers on math-related questions (e.g., user $u$ answered user $v$'s question), and Digg [rossi2015network] captures friendships among users.
Communication. Enron [kunegis2013konect] is an email network, capturing emails sent during the collapse of the Enron energy company.
Information. DBLP [kunegis2013konect] is a citation network, and arXiv [snapnets] is a co-authorship network of astrophysicists. MovieLens [kunegis2013konect] is a bipartite graph of users rating movies for the research project MovieLens; edges connect users and the movies that they rated.
Training Graph and Ground Truth. While using LinkWaldo in practice does not require a test set, in order to know how effective it is, we must evaluate it on ground-truth missing links. As ground truth, we remove 20% of the edges. In the static graphs, we remove 20% at random. In the temporal graphs, we remove the 20% of edges with the most recent timestamps. If either of the nodes in a removed edge is not present in the training graph, we discard the edge from the ground truth. The graph with these edges removed is the training graph, which LinkWaldo and the baselines observe when choosing the set of unlinked pairs to return.
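The split can be sketched as follows (illustrative names; passing `timestamps` triggers the temporal variant, otherwise edges are removed at random):

```python
import random

def split_edges(edges, frac=0.2, timestamps=None, seed=0):
    """Hold out a fraction of edges as ground truth: the most recent ones
    if timestamps are given (temporal), random otherwise (static)."""
    edges = list(edges)
    if timestamps is not None:
        edges.sort(key=lambda e: timestamps[e])     # oldest first
    else:
        random.Random(seed).shuffle(edges)
    cut = int(len(edges) * (1 - frac))
    train, test = edges[:cut], edges[cut:]
    train_nodes = {u for e in train for u in e}
    # discard test edges whose endpoints are absent from the training graph
    test = [(u, v) for u, v in test if u in train_nodes and v in train_nodes]
    return train, test
```

Only the training edges are visible to LinkWaldo and the baselines; the held-out edges serve as the ground-truth positives.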
Setup. We discuss in § V-D how we choose which groupings to use, and how many groups in each. Whenever used, we implement SG and CG by clustering embeddings with KMeans: xNetMF [heimann2018regal] and NetMF [qiu2018network] (window size 1), respectively. In LSH, we set the maximum tree depth $b_{\max}$ dynamically based on the size of an equivalence class, and the number of trees $t$ based on the fraction of the equivalence class that we seek to return.
V-B. Recall and Precision (RQ1)
Task Setup. We evaluate how effectively LinkWaldo returns ground-truth missing links, at values of $k$ much smaller than $n^2$. We report $k$, chosen based on dataset size, in Tab. III, and discuss effects of the choice in the appendix. We compare the set LinkWaldo returns to those of five baselines, and evaluate both LinkWaldoD, which uses grouping DG, and LinkWaldoM, which uses DG, SG, and CG together. In both LinkWaldo variants, we consider the following proximities (cf. § III-B) as input, and report the results that are best: LaPM using NetMF [qiu2018network] embeddings (window sizes 1 and 2), and AA, the best heuristic proximity. For the bipartite MovieLens, we use BiNE [gao2018bine], an embedding method designed for bipartite graphs. We report the input proximity model for each dataset in Tab. V in the appendix. We set the exact-search and bail-out tolerances $\tau$ and $\epsilon$ based on a parameter study in § V-D. Results are averages over five random seeds (§ V-A): for static graphs, the randomly-removed edges are different for each seed; for temporal graphs, the latest edges are always removed, so the LSH hash functions are the main source of randomness.
Metrics. We use Recall (R@$k$), the fraction of known missing/future links that are in the size-$k$ set returned by the method, and Precision (P@$k$), the fraction of the $k$ returned pairs that are known missing/future links. Recall is the more important metric, since (1) the returned set of pairs does not contain final predictions, but rather pairs for an LP method to make final decisions about, and (2) our real-world graphs are inherently incomplete, and thus returned pairs that are not known to be missing links could nonetheless be missing from the original dataset prior to ground-truth removal (i.e., the open-world assumption [safavi2020evaluating]). We report both in Table III.
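The two metrics, sketched with illustrative names:

```python
def recall_precision_at_k(returned, ground_truth):
    """R@k and P@k for a size-k candidate set against held-out links."""
    hits = len(set(returned) & set(ground_truth))
    recall = hits / len(ground_truth) if ground_truth else 0.0
    precision = hits / len(returned) if returned else 0.0
    return recall, precision
```

Under the open-world assumption, a low P@$k$ may still overstate the true false-positive rate, since some "misses" may be genuinely missing links.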
Baselines. We use five baselines. NMF+Bag [duan2017ensemble] uses nonnegative matrix factorization (NMF) and a bagging ensemble to return k pairs while pruning the search space. We use their reported strongest version: the Biased Edge Bagging version with the Node Uptake and Edge Filter optimizations (Biased(NMF+)). We use the authors' recommended parameters, including the number of latent factors and the ensemble size, when possible. In some cases, these suggested parameters led to fewer than k pairs being returned, in which case we adjusted them until k pairs were returned. We report these deviations in Tab. V in the appendix. We use our own implementation.
We also use four proximity models, which we showed to be special cases of FLLM in § III-E. LaPM ranks pairs globally based on the dot product of their embeddings, and returns the top k. To avoid searching all pairs, we use the same LSH scheme that we introduce in § IV-C for LinkWaldo. As with LinkWaldo, we use NetMF with a window size of 1 or 2, except for MovieLens, where we use BiNE.
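For intuition, the LaPM baseline's ranking criterion can be sketched as a brute-force scan over unlinked pairs, scored by embedding dot product. This is only an illustration for small graphs; the LSH scheme of § IV-C exists precisely to avoid this quadratic scan.

```python
import heapq
import numpy as np

def lapm_top_k(Z: np.ndarray, existing_edges, k: int):
    """Rank all unlinked pairs by embedding dot product; return the top k.

    Brute force over O(n^2) pairs for illustration only; on large graphs
    an approximate scheme such as LSH replaces the exhaustive scan."""
    n = Z.shape[0]
    linked = {tuple(sorted(e)) for e in existing_edges}
    scored = ((float(Z[u] @ Z[v]), (u, v))
              for u in range(n) for v in range(u + 1, n)
              if (u, v) not in linked)
    return [pair for score, pair in heapq.nlargest(k, scored)]
```

Using `heapq.nlargest` over a generator keeps memory at O(k) rather than materializing all pair scores.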
JS, CN, and AA are defined in § III-B. We exploit the property described in § IV-C2, i.e., that all these scores are zero for pairs of nodes more than two hops apart. We compute the scores for all pairs of nodes within two hops, and return the top-k unlinked pairs.
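The two-hop property means these heuristics only need to score neighbors-of-neighbors. A sketch for AA (Adamic-Adar), which sums 1/log(deg(w)) over common neighbors w; the adjacency representation is an assumed input, and CN/JS would differ only in the scoring line:

```python
import math
from collections import defaultdict

def adamic_adar_top_k(adj: dict, k: int):
    """AA scores for unlinked pairs within two hops of each other.

    adj: node -> set of neighbors. Scores are zero beyond two hops,
    so we iterate over each node w and score the pairs among its neighbors,
    each of which shares w as a common neighbor."""
    scores = defaultdict(float)
    for w, nbrs in adj.items():
        if len(nbrs) < 2:
            continue                       # w cannot be a common neighbor
        weight = 1.0 / math.log(len(nbrs))
        ordered = sorted(nbrs)
        for i, u in enumerate(ordered):
            for v in ordered[i + 1:]:
                if v not in adj[u]:        # skip already-linked pairs
                    scores[(u, v)] += weight
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

This touches only pairs with at least one common neighbor, rather than all O(n^2) pairs.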
Results. Across the 13 datasets, LinkWaldo is the best-performing method on 10, in both recall and precision. The LinkWaldo-M variant is slightly stronger than LinkWaldo-D, but the small gap between the two demonstrates that even simple node groupings can lead to strong improvements over the baselines. LinkWaldo generalizes well across the diverse types of networks. In contrast, the heuristics perform well on social networks, but less well on, e.g., metabolic networks (Yeast, HS-Protein, and Protein-Soy). Furthermore, the heuristic baselines cannot extend to bipartite graphs like MovieLens because, fundamentally, all links form between nodes more than one hop away. These observations demonstrate the value of learning from the observed links, which LinkWaldo does via resemblance. We also observe that heuristic definitions of similarity, such as AA, outperform latent embeddings (LaPM) that capture proximity. We conjecture that the embedding methods are more sensitive to the massive skew of the data, because even random vectors in high-dimensional space can end up with some level of proximity, due to the curse of dimensionality. This suggests that the standard approach of evaluating on a balanced test set may artificially inflate results.
In the three datasets where LinkWaldo does not outperform AA, it is only outperformed by a small margin. Furthermore, the four datasets with the largest total variation distances between the observed and future roadmaps are MovieLens, MathOverflow, Enron, and Digg. Theorem 2 suggests that LinkWaldo may incur the most error on these datasets. Indeed, these four are the only datasets where LinkWaldo fails to outperform all other methods (with MovieLens being bipartite, as discussed above). While the performance on temporal networks is strong, the higher total variation distance suggests that the assumption that the observed roadmap approximates the future one may sometimes be violated due to concept drift [belth2020mining]. Thus, a promising future research direction is to use the timestamps of observed edges to predict roadmap drift over time, in order to more accurately estimate the future roadmap.
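For reference, the total variation distance invoked above is the standard one for discrete distributions, half the L1 distance over the joint support. A minimal sketch, treating a roadmap as a dict mapping cells to probability mass:

```python
def total_variation(p: dict, q: dict) -> float:
    """Total variation distance between two discrete distributions,
    given as dicts mapping outcomes (e.g., roadmap cells) to probabilities:
    TV(p, q) = (1/2) * sum_x |p(x) - q(x)|, ranging from 0 to 1."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)
```

A larger distance between the observed and future roadmaps thus directly measures how much the link-formation pattern has shifted.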
V-C Scalability (RQ2)
Task Setup. We evaluate how LinkWaldo scales with the number of edges and the number of nodes in a graph by running LinkWaldo with fixed parameters on all datasets. We fix k, use NetMF (window size 1) as the input proximity, and do not perform bailout. All other parameters are identical to RQ1. We use our Python implementation on an Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz with 1TB RAM.
Results. The results in Fig. 3 demonstrate that in practice, LinkWaldo scales linearly in the number of edges and subquadratically in the number of nodes.
V-D Parameters (RQ3)
Setup. We evaluate the quality of the different groupings (§ IV-A), and how the number of groups in each affects performance. On four graphs, Yeast, arXiv, Reddit, and Epinions, we run LinkWaldo with groupings DG, SG, and CG, varying the number of groups. We also investigate pairs of groupings, and the combination of all three groupings, via grid search over the number of groups in each. We also evaluate the exact-search tolerance, i.e., the threshold for searching equivalence classes exactly vs. approximately with LSH, and the bailout tolerance, i.e., the fraction of training pairs we allow the proximity function to miss before we bail out of an equivalence class.
Results. The results for the individual groupings are shown in Fig. 4. Grouping nodes by log-binning their degrees (i.e., DG) is in general the strongest grouping. Across all three groupings, we find that a modest number of groups suffices. We found that using all three groupings together was the best combination, with 25 log-bins, 5 structural clusters, and 5 communities (we omit the figures for brevity). For individual groupings, we observe diminishing returns as the number of groups grows, and with multiple groupings, slightly diminished performance when the number of groups in each grows large. We omit the figures for the exact-search and bailout tolerances, and use the best values found in this study in our other experiments.
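The DG grouping described above, log-binning nodes by degree, can be sketched as follows. This is an illustrative sketch of logarithmic binning, not the authors' exact binning scheme; the bin-index formula is an assumption.

```python
import math
from collections import defaultdict

def degree_log_bins(degrees: dict, num_bins: int) -> dict:
    """DG grouping sketch: assign each node to a logarithmic degree bin.

    degrees: node -> degree. Bin edges grow geometrically, so low-degree
    nodes are separated finely while the heavy tail shares a few bins."""
    max_deg = max(degrees.values())
    groups = defaultdict(list)
    for node, d in degrees.items():
        # rescale log(d) to a bin index in [0, num_bins)
        b = 0 if d <= 1 else min(num_bins - 1,
                                 int(num_bins * math.log(d) / math.log(max_deg + 1)))
        groups[b].append(node)
    return groups
```

Logarithmic bins match the heavy-tailed degree distributions of real graphs: linear bins would leave most bins empty and lump nearly all nodes together.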
VI Conclusion
In this paper, we focus on the understudied and challenging problem of identifying a moderately-sized set of node pairs for a link prediction method to make decisions about. We mitigate the vastness of the search space, filled mostly with non-links, by considering not just proximity, but also how much a pair of nodes resembles observed links. We formalize this idea in the Future Link Location Model, show its theoretical connections to stochastic block models and proximity models, and introduce an algorithm, LinkWaldo, that leverages it to return high-recall candidate sets containing only a tiny fraction of all pairs. Via our resemblance insight, LinkWaldo's strong performance generalizes from social networks to protein networks. Future directions include investigating the directionality of links, since the roadmap can incorporate this information, and extending to heterogeneous graphs with many edge and node types, like knowledge graphs.
Acknowledgements
This work is supported by an NSF GRF, NSF Grant No. IIS-1845491, Army Young Investigator Award No. W911NF-18-1-0397, and Adobe, Amazon, and Google faculty awards.