1 Introduction
Graph matching is an increasingly important problem in inferential graph statistics, with applications across a broad spectrum of fields including computer vision ([38], [10]), shape matching and object recognition ([4], [7]), and biology and neuroscience ([22], [34], [37]), to name a few. The graph matching problem (GMP) seeks to find an alignment between the vertex sets of two graphs that best preserves common structure across graphs. Unfortunately, the GMP is inherently combinatorial, and no efficient exact graph matching algorithms are known. Indeed, even the simpler problem of determining whether two graphs are isomorphic is famously of unknown complexity ([19], [30]), and if the graphs are allowed to be loopy, weighted and directed, then the simplest version of the GMP is equivalent to the NP-hard quadratic assignment problem. Due to its wide applicability, there exists a vast number of approximation algorithms for the GMP; see the paper “30 Years of Graph Matching in Pattern Recognition” ([11]) for an excellent survey of the existing literature.
When matching across graphs, we often have access to a partial matching of the vertices in the form of a seeding. In practice, the assumption of seeds is quite natural in many applications. For example, in aligning social networks, actors’ user names may often allow for a partial alignment to be known a priori. When matching across brain graphs (connectomes), we have geometric information provided by the brain atlas which provides a soft seeding for the vertices. In many time series of graphs, it is common to have a group of invariant vertices across time which act as seeds.
In the Seeded Graph Matching Problem (SGMP), we leverage the information contained in an available partial matching to match the remaining vertices across graphs. Though the literature on seeded graph matching is comparatively small, recent results point to significant performance improvements in GM algorithms by incorporating even a modest number of seeds ([17], [27]).
Though a myriad of approximate graph matching algorithms exist, the very large graphs arising in the burgeoning realm of “big data” demand highly scalable algorithms. Roughly speaking, existing state of the art algorithms for approximate graph matching can be divided into two classes: those that seek to bijectively match vertices of graphs of the same order, and those that seek matchings between the vertex sets that are allowed to be many–to–many and many–to–one. The current cuttingedge bijective graph matching algorithms achieve excellent performance in approximately matching graphs with thousands of vertices and with computational complexity — the number of vertices being matched; see for example [34], [36] and [15]. These algorithms often operate directly on the adjacency matrices of the graphs to be matched, utilizing the tools of nonlinear optimization to approximtely solve GMP directly. However, owing to their complexity, these algorithms are practically unusable, without significant computation resources, for matching very large graphs (.
Scalability is often achieved by relaxing the bijection requirement and allowing many–to–many and many–to–one matchings. These graph matching procedures can efficiently match very large graphs, often with tens of thousands of vertices; see for example [26], [1]. A common feature of these scalable inexact algorithms is that they first match smaller, lower dimensional representative objects (prototype graphs in [1], eigenvectors in [26]) and use these to build the overall matching.

Herein, we propose a new divide-and-conquer approach to scalable bijective seeded graph matching. Our algorithm, the Large Seeded Graph Matching algorithm (LSGM, see Algorithm 1), merges the approaches of bijective and non-bijective graph matching and leverages the information in seeded vertices in order to match very large graphs. The algorithm proceeds in two steps: We first spectrally embed the graphs—yielding a low dimensional Euclidean representation of each graph—and then use the information provided by seeded vertices to jointly cluster the vertices of the two embedded graphs. This embedding procedure allows us to employ the powerful theory of adjacency spectral embedding (see for example [31] and [16]) to prove asymptotically perfect performance in jointly clustering stochastic block model random graphs; see Theorem 4.1 for detail.
Once the vertices are jointly clustered, we then match the graphs within the clusters. This matching step is fully parallelizable and flexible in that we can employ any one of a number of matching procedures depending on the properties of the resulting clusters. The flexibility afforded by our procedure in the clustering and matching subroutines can have a dramatic impact on algorithmic scalability. For example, on a 1600 vertex simulated graph our parallelization procedure was able to achieve a factor of 8 improvement in speed with minimal accuracy degradation by increasing the number of clusters and hence the number of cores that were used; see Section 5.2.
Though we are not the first to employ a divide-and-conquer approach to graph matching (see for example [9], [38], [1]), our focus on the efficient utilization of a priori observed seeded vertices and the theoretical framework for our approach provided by Theorem 4.1 set our algorithm apart from the existing literature.
Note: All graphs considered herein will be simple; in particular there are no multiple edges between two vertices nor are there edges with a single vertex as both endpoints. Modifications for the directed case are quite simple [31, 16]
but we do not consider them in this manuscript. All vectors considered will be column vectors, and $\vec{1}_n$ is the length-$n$ vector of all ones. When appropriate we drop the subscript and just write $\vec{1}$. Throughout the paper we employ the standard notation $[n] := \{1, 2, \ldots, n\}$ for any $n \in \mathbb{N}$, and to simplify future notation, if $A \in \mathbb{R}^{n \times m}$, $I \subseteq [n]$, and $J \subseteq [m]$, then $A[I, J]$ will denote the submatrix of $A$ with row indices $I$ and column indices $J$. For a matrix $A$, $A[\cdot, i]$ will denote the $i$th column of $A$ and $A[i, \cdot]$ the $i$th row of $A$. Also, for two matrices $A$ and $B$ with an equal number of rows, $[A \,|\, B]$ will denote the column concatenation of $A$ and $B$.

2 Background
There are numerous formulations of the graph matching problem, though they all share the same objective heuristic: given two graphs, $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$, GMP seeks an alignment between the vertex sets $V_1$ and $V_2$ that best preserves structure across the graphs. In bijective graph matching, we further assume $|V_1| = |V_2| = n$, and the alignment sought by GMP is a bijection between $V_1$ and $V_2$. In non-bijective graph matching, we allow for $|V_1| \neq |V_2|$ and for alignments that are not one–to–one.

In the bijective matching setting, GMP is commonly formulated as follows: find a bijection $\phi: V_1 \to V_2$ minimizing the quantity
(2.1)  $\sum_{u, v \in V_1} \big|\, \mathbb{1}\{\{u, v\} \in E_1\} - \mathbb{1}\{\{\phi(u), \phi(v)\} \in E_2\} \,\big|$
i.e. the problem seeks to minimize the number of edge disagreements between $G_1$ and $G_2$ (see [34], [36], [15]). Equivalently stated, if $A$ and $B$ are the respective adjacency matrices of $G_1$ and $G_2$, then this problem seeks to minimize $\|AP - PB\|_F$ over all $n \times n$ permutation matrices $P$, with $\|\cdot\|_F$ the matrix Frobenius norm. In the non-bijective matching setting, $V_1$ and $V_2$ need not have equal cardinality. This requires an alternative formulation of GMP, as (2.1) is no longer necessarily well-defined. See [7], [12], [26], [38] for a variety of generalizations of (2.1).
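The edge-disagreement objective of (2.1) and its Frobenius-norm reformulation can be illustrated with a short sketch (Python/NumPy; the function names are ours, not from the paper):

```python
import numpy as np

def edge_disagreements(A, B, perm):
    """Count edge disagreements between G1 and G2 under the bijection
    vertex i of G1 <-> vertex perm[i] of G2, as in eq. (2.1)."""
    perm = np.asarray(perm)
    # Relabel B by the bijection and count disagreeing unordered pairs.
    B_relab = B[np.ix_(perm, perm)]
    return int(np.abs(A - B_relab).sum()) // 2

def gmp_objective_frobenius(A, B, perm):
    """Equivalent Frobenius-norm form ||AP - PB||_F^2, with P the
    permutation matrix encoding perm; for simple graphs this equals
    twice the number of edge disagreements."""
    n = len(perm)
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1.0
    return np.linalg.norm(A @ P - P @ B, 'fro') ** 2
```

Since $P$ is orthogonal, $\|AP - PB\|_F = \|A - PBP^\top\|_F$, which is why the two functions agree up to the factor of two.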
In the seeded graph matching problem (SGMP), we further assume the presence of a latent alignment between the vertex sets of $G_1$ and $G_2$. Our task is then to efficiently leverage the information in a partial observation of the latent alignment, i.e. a seeding, to estimate the remaining latent alignment. In bijective SGMP, we are given subsets of the vertices $W_1 \subseteq V_1$ and $W_2 \subseteq V_2$ called seeds, with $|W_1| = |W_2| = s$, and a bijective seeding function $\phi_s: W_1 \to W_2$. Without loss of generality we may reorder the vertices so that $W_1 = W_2 = \{1, 2, \ldots, s\}$ and $\phi_s = \mathrm{id}$ (the identity function on $W_1$). The task then is to use $\phi_s$ to estimate the latent alignment by finding the bijection extending $\phi_s$ which minimizes (2.1). In the non-bijective setting, to accommodate the fact that the latent alignment need not be one–to–one, we define the seeding to be a subset of the latent alignment, and we are tasked with using this partial observation to estimate the remaining latent alignment.

3 Divide-and-conquer seeded graph matching
We present the details of the LSGM algorithm, Algorithm 1. In Section 3.1, we describe Steps 1-3 of this algorithm, which constitute the divide steps. In Section 3.2, we describe the final step of the algorithm, which constitutes the conquer step.
3.1 Jointly embedding and clustering the graphs
We begin by describing the embedding and clustering subroutine. The inputs are the symmetric adjacency matrices $A$ and $B$ of the two graphs to be matched ($G_1$ and $G_2$ respectively), the number of seeds $s$, the seeding function $\phi_s$, the number of clusters $k$, and the embedding dimension $d$. Note that the procedure can easily be modified to handle directed graphs as well.
Step 1: Compute the first $d$ eigenpairs of $A$ and $B$. Letting the orthonormal eigendecompositions of $A$ and $B$ be $A = U_A S_A U_A^\top$ and $B = U_B S_B U_B^\top$, with $U_A$, $U_B$ orthogonal and the diagonals of $S_A$ and $S_B$ nonincreasing, we compute only the first $d$ columns of $U_A$ and $U_B$ and the corresponding leading $d \times d$ blocks of $S_A$ and $S_B$.
Step 2: Initially embed the vertices of $G_1$ and $G_2$ into $\mathbb{R}^d$ by scaling the retained eigenvectors by the square roots of the corresponding eigenvalue magnitudes, yielding one $d$-dimensional row per vertex in each graph.
Step 3: Let $X_s$ and $Y_s$ be the initial embeddings of the seeded vertices in the two graphs. Align the embedded seeded vertices via the orthogonal Procrustes fit problem: with $\mathcal{O}_d$ the set of $d \times d$ orthogonal matrices, we set $Q = \operatorname{argmin}_{W \in \mathcal{O}_d} \|X_s W - Y_s\|_F$.
Step 4: Align the two embedded adjacency matrices; i.e., we apply the orthogonal transformation from the Procrustes fit in Step 3 to the embedding of the first graph, obtaining the transformed embedding.
Step 5: Jointly cluster the embedded vertices of the two graphs into $k$ clusters with the $k$-means algorithm ([23]). Let the corresponding cluster centroids be labeled $c_1, \ldots, c_k$.
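Steps 1-5 can be sketched as follows (a minimal Python/NumPy illustration; the scaled-eigenvector embedding convention and the tiny fixed-initialization Lloyd's iteration are our simplifications, and a library eigensolver and $k$-means would be used in practice):

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenpairs by |eigenvalue|,
    eigenvectors scaled by sqrt(|eigenvalue|) (a common convention)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def procrustes_rotation(Xs, Ys):
    """Orthogonal Q minimizing ||Xs Q - Ys||_F, via SVD of Xs^T Ys."""
    U, _, Vt = np.linalg.svd(Xs.T @ Ys)
    return U @ Vt

def joint_embed(A1, A2, d, s):
    """Embed both graphs, then rotate graph 1's embedding onto graph 2's
    using the first s rows (the seeds, after the WLOG reordering)."""
    X, Y = ase(A1, d), ase(A2, d)
    Q = procrustes_rotation(X[:s], Y[:s])
    return X @ Q, Y

def kmeans(Z, k, iters=50):
    """Plain Lloyd's algorithm with a fixed initialization on the first
    k rows, for reproducibility of this sketch."""
    C = Z[:k].astype(float).copy()
    for _ in range(iters):
        lab = np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (lab == j).any():
                C[j] = Z[lab == j].mean(0)
    return lab, C
```

Step 5 would then run `kmeans` on the row concatenation of the two aligned embeddings, so that vertices from both graphs are clustered jointly.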
Remark 3.1.1. The above procedure can be implemented on very large graphs using efficient SVD algorithms (see for example [6]). Indeed, as we are only interested in the first $d$ eigenpairs, these can be computed without a full eigendecomposition. In the sparse regime, fast partial singular value decompositions (e.g. IRLBA in [2]) can be effectively implemented on arbitrarily large graphs. Paired with fast clustering procedures (here, each iteration of $k$-means has complexity $O(nkd)$, and in practice excellent performance can often be achieved with relatively few iterations), the above procedure can be effectively run on extremely large sparse graphs.

We do not implement parallelized versions of the SVD procedure or clustering procedure in our algorithm; indeed, even for the large graphs we considered, the partial SVD and direct $k$-means were directly and efficiently computable. Note that there is an extensive literature devoted to parallel SVD and clustering implementations; see [5] and [3] for more detail. Empirically, we see that the matching step is the most computationally intensive step of our procedure, and the runtime gains possible by parallelizing the SVD and clustering procedures are relatively small compared to the gains achieved by matching in parallel. See Section 5.4 for detail.
Additionally, the orthogonal Procrustes problem in Step 3 can be solved in $O(sd^2 + d^3)$ time, as it involves computing the singular value decomposition $U \Sigma V^\top$ of the $d \times d$ matrix $X_s^\top Y_s$ (with $X_s$ and $Y_s$ the embedded seed coordinates in the two graphs) and setting $Q = U V^\top$.
Remark 3.1.2. Model selection, more specifically choosing $k$ and $d$, is a difficult hurdle to overcome in spectral clustering (see [32] and [29] for instance). One way to estimate $d$ is via automated profile likelihood procedures such as [39]. Unfortunately, the procedure in [39] requires computation of the full spectrum, which is computationally intensive. In our simulation examples we assume $d$ is known, and in the real data examples, we use the ideas of [8] to estimate the embedding dimension from a partial scree plot. We expect our procedure to work well as long as the embedding dimension is sufficiently large (see Lemma 4.2 for detail), which we see is the case in our simulated and real data examples.
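A rough sketch of the elbow-detection idea behind [8] follows (our simplified variant pools a common Gaussian variance across the two groups of singular values; the actual procedure in [8] is more refined):

```python
import numpy as np

def profile_likelihood_elbow(sing_vals):
    """Simplified profile-likelihood elbow detector in the spirit of [8]:
    for each candidate dimension d, model the d largest and the remaining
    singular values as two Gaussian groups sharing one variance, and
    return the d maximizing the profile log-likelihood."""
    s = np.sort(np.asarray(sing_vals, dtype=float))[::-1]
    n = s.size
    best_d, best_ll = 1, -np.inf
    for d in range(1, n):
        # Residuals about each group's mean; pooled MLE variance.
        resid = np.concatenate([s[:d] - s[:d].mean(), s[d:] - s[d:].mean()])
        var = resid.var() + 1e-12
        ll = -0.5 * n * (np.log(2 * np.pi * var) + 1)
        if ll > best_ll:
            best_d, best_ll = d, ll
    return best_d
```

Maximizing this profile likelihood amounts to choosing the split that minimizes the pooled within-group variance, which is what an "elbow" in the scree plot looks like numerically.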
Our procedure is insensitive to our choice of $k$ provided that:

(i) The procedure consistently clusters across the graphs—if the optimal matching of $G_1$ and $G_2$ is given by $\phi$ (in the bijective case), then for all vertices $v$, $v$ and $\phi(v)$ are in the same cluster. This is essential for ensuring the accuracy of the subsequent matching step.

(ii) The clusters are modestly sized (for implementing the subsequent matching procedure).
Note that in practice it is impossible to ensure that the clustering is consistent, and we explore the impact of different values of $k$ (and of misclustered vertices) in Section 5.2. Indeed, the accuracy of the algorithm is limited by the initial clustering, and we are presently working to understand the consistency of different clustering procedures in different model settings.
Remark 3.1.3. Practically, the particular choice of clustering procedure utilized in Step 5 of Algorithm 2 is of secondary importance. Indeed, we choose the $k$-means clustering procedure (using Matlab’s built-in kmeans solver) because of its ease of implementation and theoretical tractability. The particular clustering procedure can be chosen to optimize speed and accuracy given the properties of the underlying data. See [13] for a review of clustering procedures. Also note that although in many applications a natural $k$ is dictated by the data, we do not need to find $k$ exactly. For our graph matching exploitation task we do not need to finely cluster the vertices of our graphs; a gross but consistent clustering would still achieve excellent performance.
Remark 3.1.4. While our algorithm is presented for undirected unweighted graphs, we could adapt our approach to directed graphs (we would embed the vertices as in [31]), or weighted graphs (the SVD can easily be run on weighted graphs). We plan to theoretically explore this further in future work.
3.2 Matching within clusters
When the desired matching is bijective, we first must resolve disagreements in cluster sizes and adjust the clusters accordingly. More specifically, we need to address the fact that within each cluster, we may have an unequal number of vertices from each of the two graphs. We do this as follows:

Suppose that for each $i \in [k]$, cluster $i$ has $t_i$ total vertices (from both graphs combined), with $\sum_{i=1}^{k} t_i = 2(n - s)$. Within cluster $i$, suppose there are $a_i$ vertices from $G_1$ and $b_i$ vertices from $G_2$, so that $t_i = a_i + b_i$.

Resize cluster $i$ to be of size

(3.1)  $n_i = 2 \lceil t_i / 2 \rceil$

To parse out Eq. (3.1), note that ideally we would resize the clusters to be of size $t_i$ (the combined number of vertices from both graphs in cluster $i$), but $2\lceil t_i/2 \rceil$ may be greater than $t_i$ (note that it is never greater than $t_i + 1$). To account for this, we sequentially (starting from the smallest cluster and working up) remove 2 vertices from each cluster until $\sum_i n_i = 2(n - s)$.

Designating all vertices as unassigned, sequentially for $i = 1, \ldots, k$, assign the unassigned vertices from each graph closest (in the $L_2$ sense) to the cluster’s centroid to be in cluster $i$, taking an equal number from each graph, until the resized cluster size is reached.
Note that if the desired output is a nonbijective matching, the above procedure for ameliorating cluster sizes need not be implemented.
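The exact form of Eq. (3.1) did not survive in our copy of the text, so the sketch below encodes one reading consistent with the surrounding description (round each combined cluster size up to the nearest even number, then trim pairs of vertices from the smallest clusters until the total is restored); the function name and this interpretation are ours:

```python
import numpy as np

def resize_clusters(cluster_sizes):
    """Hypothetical reconstruction of the resizing step: round each
    combined cluster size up to the nearest even number, then, starting
    from the smallest cluster and working up, remove 2 vertices at a
    time until the even sizes again sum to the original total."""
    t = list(cluster_sizes)
    target = [2 * ((ti + 1) // 2) for ti in t]   # round up to even
    order = list(np.argsort(target))             # smallest cluster first
    j = 0
    while sum(target) > sum(t):
        i = order[j % len(target)]
        if target[i] >= 2:
            target[i] -= 2
        j += 1
    return target
```

Every resized cluster has an even size, so each can hold equally many vertices from the two graphs, which is what the bijective within-cluster matching requires.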
Once the cluster sizes are resolved, we can match the two graphs within a cluster using any number of bijective matching algorithms. See Section 5 for performance comparisons of various matching procedures. These matching subroutines can be run fully in parallel, and if the matching within cluster $i$ is denoted $\phi_i$, then the final output of our algorithm is the full matching $\phi = \bigcup_{i=1}^{k} \phi_i$, an approximate solution to the SGMP. To further parallelize our approach, one could implement a multithreaded graph matching procedure as in [25]. However, to run their procedure one needs a machine with a NUMA architecture and OpenMP installed, whereas we focus on a scalable procedure able to be run on a typical computer cluster, without any specialized hardware/software.
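The conquer step's dispatch pattern can be sketched as follows (the matcher is a trivial stand-in so the sketch is runnable; in practice SGM, FAQ, or PATH would be plugged in, and CPU-bound matchers would run in separate processes or on separate cores rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def match_cluster(pair):
    """Stand-in for any bijective matching subroutine (SGM, FAQ, PATH,
    ...); here a trivial identity matching keeps the sketch runnable."""
    A_sub, B_sub = pair
    return list(range(len(A_sub)))

def match_all_clusters(subproblems, workers=4):
    """The per-cluster matchings are independent, so they can be
    dispatched concurrently and the results concatenated in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(match_cluster, subproblems))
```

Because `map` preserves input order, the per-cluster matchings come back aligned with their clusters and can be merged directly into the full matching.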
Remark 3.2.1. First, note that the distances needed to resize the clusters have already been computed by the $k$-means clustering procedure, so the cost incurred by reassigning the vertices is computationally minimal (see Section 5 for empirical evidence of this). Second, we do not focus on modifying existing $k$-means procedures to automatically make the clusters be of commensurate sizes. We view our resizing as a refinement of the original $k$-means procedure, and not as providing a new clustering of the vertices. In practice, our reassigned clusters are very similar to the original $k$-means clusters, often differing in only a few vertices.
Remark 3.2.2. In the event that one of the $k$-means clusters is composed of a large majority of vertices from a single graph, bijective graph matching might not be sensible. In this case, we can non-bijectively match within each cluster by padding the adjacency matrices with empty vertices to make the graphs of commensurate size (as suggested in [36]), and match the resulting graphs. Vertices matched to isolates could be treated as unmatched, or we could iteratively remove the matched vertices in the larger graph and rematch the graphs, yielding a many–to–many matching.

Remark 3.2.3. In these matching procedures, it is not surprising that we obtain the best results if we use the seeded vertices to not only cluster but also match the graphs (via the SGM algorithm of [17] and [27]). We recognize that the other bijective matching procedures ([36] and [15]) have not been modified in the literature to accommodate seeded vertices, and we do not pursue the modification here. Our results point to the need for modifying these algorithms to handle seedings, and we expect them to achieve excellent performance when thus modified.
3.3 Computational cost of LSGM
The many executions of the bijective matching subroutine can be run in parallel, and if $m$ is the size of the largest cluster, then this step has computational complexity $O((m+s)^3)$ per core (assuming that we use all $s$ seeds in the matching procedure). If the executions are run in sequence then this step would have complexity $O(k(m+s)^3)$. If $m = \Theta(n)$ then the computational cost of this step is $O(n^3)$, and we have the same computational bound as the algorithms of [34], [36], [15]. To deal with this issue of load balancing, we recluster any overly large clusters by rerunning our embedding and clustering procedure, with the same seeding function, on the subgraph induced by the seeds together with the unseeded vertices of each overly large cluster. If we are unable to reduce these cluster sizes further, then our algorithm cannot improve upon the existing computational complexity, though we achieve a significantly better lead constant. In this case, we might overcome this hurdle by non-bijectively matching any overly large clusters, as these procedures are often highly scalable.
Remark 3.3.1. If the $k$ clusters are each of size roughly $n/k$, then the computational cost of the matching step of the LSGM algorithm is $O((n/k + s)^3)$ when the matching subroutines are fully parallelized. Hence, a modest number of modestly sized clusters yields a markedly reduced running time for the LSGM algorithm.
3.4 Active seed selection
If the number of seeds is large and the seeds are all used in the matching procedures (i.e. we use SGM to match the clusters), the LSGM algorithm may be computationally unwieldy. To remedy this, we formulate a procedure for active seed selection that aims to optimally choose a computationally tractable number of seeds to match across each cluster. If we are matching cluster $i$ across $G_1$ and $G_2$, and computationally we can only handle an additional $s'$ seeds in the SGM subroutine—so that we are matching the cluster’s vertices plus $s'$ seeds in total—then ideally we would want to pick the “best” $s'$ seeds to use. Luckily, the results of [27] provide a useful heuristic for what defines “best” in this setting.

Ideally, the columns of the seed-to-nonseed adjacency matrices in $G_1$ and $G_2$ would be enough to uniquely identify the unseeded vertices in each graph, and this can be achieved with a logarithmic number of randomly chosen seeds [27]. Though this is a limiting result, the result (and its proof) offers insight into how to select the “best” seeds in a finite resource setting. Specifically, we seek to have the columns of the seed-to-nonseed adjacency matrix maximally distinguish the unseeded vertices. Mathematically, this translates to choosing seeds that have the maximum entropy in their collection of seed-to-nonseed adjacency vectors. To this end, we formulate the following seed selection algorithm for selecting the seeds to use when matching across cluster $i$ (for fixed $i$).
Suppose that the desired number of seeds for matching cluster $i$ is $s'$. To have the columns of the seed-to-nonseed adjacency matrix maximally distinguish the unseeded vertices, we seek seeds that have maximum entropy contained in their collection of seed-to-nonseed adjacency vectors. We propose to accomplish this greedily by repeatedly maximizing the (average across the two graphs) entropy increase possible by adding a single inactive seeded vertex to our active seed set. Abusing notation, define
(3.2)  $H_j(S) := -\sum_{b \in \{0,1\}^{|S|}} \hat{p}_{j,S}(b) \log_2 \hat{p}_{j,S}(b)$

to be the Shannon entropy of the collection of binary column vectors of the seed-to-nonseed adjacency matrix in graph $j$ with seed set $S$ and unseeded vertices $U$, where $\hat{p}_{j,S}(b)$ denotes the proportion of vertices in $U$ whose adjacency vector to the seeds in $S$ equals $b$. Initialize the active seed set $S_0 = \emptyset$, and for $\ell = 1, 2, \ldots, s'$, we set $S_\ell = S_{\ell-1} \cup \{u_\ell\}$ where

(3.3)  $u_\ell \in \operatorname{argmax}_{u \notin S_{\ell-1}} \tfrac{1}{2}\big( H_1(S_{\ell-1} \cup \{u\}) + H_2(S_{\ell-1} \cup \{u\}) \big)$

Finally, set the active seed set used for matching cluster $i$ to be $S_{s'}$.
For example, suppose that we have 4 seeded vertices and 4 unseeded vertices and seed-to-nonseed adjacency given by:
If we were choosing 3 seeds for subsequent matching, we would choose (in this order): , then (seed 3 could also have been chosen as there are two maximizers of the entropy), then
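The greedy selection can be sketched as follows; since the 4x4 adjacency matrix of the worked example did not survive in our copy, the matrix used below is our own hypothetical one, and the function names are ours:

```python
import numpy as np
from collections import Counter
from math import log2

def profile_entropy(M, seeds):
    """Shannon entropy of the empirical distribution of the unseeded
    vertices' adjacency profiles restricted to `seeds` (cf. eq. (3.2)).
    M[i, j] = 1 iff seed i is adjacent to unseeded vertex j."""
    if not seeds:
        return 0.0
    n = M.shape[1]
    counts = Counter(tuple(M[seeds, j]) for j in range(n))
    return -sum((c / n) * log2(c / n) for c in counts.values())

def greedy_seed_selection(M1, M2, budget):
    """Greedily grow the active seed set, at each step adding the
    inactive seed maximizing the average (across the two graphs)
    profile entropy (cf. eq. (3.3)); ties go to the smallest index."""
    active, inactive = [], list(range(M1.shape[0]))
    for _ in range(budget):
        best = max(inactive,
                   key=lambda u: (profile_entropy(M1, active + [u])
                                  + profile_entropy(M2, active + [u])) / 2)
        active.append(best)
        inactive.remove(best)
    return active
```

A seed adjacent to every unseeded vertex (or to none) contributes zero entropy and is never chosen early; seeds whose adjacency columns split the unseeded vertices most evenly are picked first.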
4 LSGM and the Stochastic Block Model
Inasmuch as we can partition the vertices of $G_1$ and $G_2$ into consistent clusters, it is natural to model $G_1$ and $G_2$ using the stochastic block model (SBM) of [24] and [35] (details of the model are presented shortly). We then define the criterion for clustering the rows of the concatenated embeddings $Z$ into $k$ clusters via

(4.1)  $\min_{C,\, \tau}\ \sum_{v} \big\| Z[v, \cdot] - C[\tau(v), \cdot] \big\|_2^2$

where the rows of $C$ are the centroids of the $k$ clusters and $\tau$ is the cluster assignment function. Note that $k$-means attempts to solve (4.1). In Theorem 4.1 we show that, under some mild conditions on the underlying SBM, the optimal cluster assignment almost surely perfectly clusters the vertices of both $G_1$ and $G_2$. We present the necessary background below.
A $d$-dimensional stochastic block model random graph has the following parameters: an integer $k$ (the number of blocks), a vector of nonnegative integers $\vec{n} = (n_1, \ldots, n_k)$, and a latent-position matrix $X \in \mathbb{R}^{k \times d}$ with distinct rows. The random graph’s vertex set is the union of the blocks $V_1, V_2, \ldots, V_k$, which are disjoint sets with respective cardinalities $n_1, n_2, \ldots, n_k$. For each vertex $v$, let $b(v)$ denote the block of $v$, i.e. $v \in V_{b(v)}$. Lastly, for each pair of vertices $\{u, v\}$, the adjacency of $u$ and $v$ is an independent Bernoulli trial with probability of success $\Lambda_{b(u), b(v)}$, where $\Lambda := X X^\top$.

Two independent SBM graphs may have no correlation structure between them, and there is no natural bijective alignment of their vertices. To induce this alignment, we introduce correlation between the graphs. We say that two (matched) random graphs $G_1$ and $G_2$ from this model have correlation $\varrho$ if the collection of adjacency indicator random variables of the two graphs are mutually independent, except that for each pair $\{u, v\}$, the indicator random variables $\mathbb{1}\{u \sim_{G_1} v\}$ and $\mathbb{1}\{u \sim_{G_2} v\}$ have Pearson product-moment correlation coefficient $\varrho$. Such correlated graphs can be easily constructed by realizing $G_1$ from the underlying SBM and then, for each pair $\{u, v\}$, letting the adjacency of $u$ and $v$ in $G_2$ be an independent Bernoulli trial with probability of success $\Lambda_{b(u),b(v)} + \varrho\,(1 - \Lambda_{b(u),b(v)})$ if $u$ and $v$ are adjacent in $G_1$, and probability of success $\Lambda_{b(u),b(v)}\,(1 - \varrho)$ if $u$ and $v$ are not adjacent in $G_1$. If $G_1$ and $G_2$ are thus correlated, then there is a natural latent alignment between the vertices of the graphs, namely the identity function id.

Given $\vec{s} = (s_1, \ldots, s_k)$ such that $\vec{s} \leq \vec{n}$ coordinatewise and $\sum_i s_i = s$ (the number of seeds), the random graphs $G_1$ and $G_2$ from the $d$-dimensional stochastic block model parameterized by $k$, $\vec{n}$, and $X$, and having correlation $\varrho$, are seeded if, a priori, for each $i$, $s_i$ of the vertices from block $i$ function as seeds for LSGM, i.e. their across-graph correspondence is known.
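The construction above can be sketched as a sampler (Python/NumPy; the function name and interface are ours):

```python
import numpy as np

def sample_correlated_sbm(block_sizes, Lam, rho, seed=None):
    """Sample a rho-correlated SBM pair: draw G1 from the SBM with block
    edge-probability matrix Lam, then include each edge {u, v} in G2
    with probability p + rho*(1 - p) if it is present in G1 and
    p*(1 - rho) if it is absent, where p = Lam[b(u), b(v)]. This gives
    G2 the same marginal distribution and edgewise correlation rho."""
    rng = np.random.default_rng(seed)
    blocks = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = blocks.size
    P = np.asarray(Lam)[np.ix_(blocks, blocks)]

    def bernoulli_graph(probs):
        # Sample the upper triangle, then symmetrize; diagonal stays 0.
        upper = np.triu(rng.random((n, n)) < probs, 1)
        return (upper | upper.T).astype(int)

    A1 = bernoulli_graph(P)
    P2 = np.where(A1 == 1, P + rho * (1 - P), P * (1 - rho))
    A2 = bernoulli_graph(P2)
    return A1, A2, blocks
```

A quick check of the claimed correlation: the conditional probabilities give $\mathbb{E}[A_1 A_2] = p(p + \varrho(1-p))$, so $\operatorname{Cov} = \varrho\, p(1-p)$ and the Pearson coefficient is exactly $\varrho$.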
Let $G_1$ and $G_2$ be correlated, seeded (with seed vector $\vec{s}$), $d$-dimensional SBM’s parametrized as above. Let their respective adjacency matrices be $A$ and $B$, and let their respective block membership functions be $b_1$ and $b_2$. Without loss of generality, let the true alignment function be id. Consider the transformed (as in Step 4 of Algorithm 2) adjacency spectral embeddings of $A$ and $B$, and assume that we have clustered the rows via the optimal assignment of (4.1). Adopting the notation of Algorithm 1, define (where again $U_i$ is the set of unseeded indices corresponding to cluster $i$)
(4.2)  
(4.3) 
to be the respective optimal seeded and unseeded matchings of cluster $i$ across the two graphs. When appropriate, we will drop the subscript and refer simply to the matching of the cluster.
We shall henceforth be considering a sequence of growing models with $n$ vertices. In the next theorem, we prove that under modest assumptions, for all but finitely many $n$, all of the vertices are perfectly clustered across the two graphs. The results of [27] then immediately give that a.a.s. the within-cluster seeded matchings are correct for all clusters, and the above procedure (when perfectly implemented) correctly aligns the two SBM graphs.

Although this result is asymptotic in nature, it provides hope that our two-step procedure will be effective in approximating the true but unknown alignment across a broad spectrum of graphs.
Theorem 4.1.
With notation as above, let $G_1$ and $G_2$ be seeded (with seed vector $\vec{s}$), $d$-dimensional SBM’s parametrized by $k$, $\vec{n}$, and $X$. Although we assume $G_1$ and $G_2$ have the same block structure, we make no assumptions about the correlation structure. Let their respective adjacency matrices be $A$ and $B$, and without loss of generality let the true alignment function be id, so that the two block membership functions agree. Adopting the notation of Section 3, if the following assumptions hold:

There exist constants such that and ;

Defining
(4.4) and
(4.5) if are such that then ;

Without loss of generality, let be the latent positions corresponding to the seeded vertices, then we assume there exists an satisfying and such that
(4.6)
then for all but finitely many $n$, the optimal assignment of (4.1) satisfies
Regardless of the correlation structure, Theorem 4.1 implies that our joint clustering procedure yields a canonical nonbijective matching of the vertices (where the matching is given by the clustering).
Our proof of this theorem will proceed as follows. First we will state some key results proved elsewhere. Then we will bound the distance between the embedded adjacency matrices and a specified transformation of the latent positions (using a bound from [28]). Finally, we will use this to show that the clustering will perfectly cluster the vertices in the two graphs into the true blocks.
Let be the orthonormal eigendecomposition of with , ordered so that the diagonals of are nonincreasing. The next lemma collects some necessary results from [31] and [28] which will be needed in the sequel.
Lemma 4.2.
With notation as above,
let and
.
Under the assumptions above, it holds with probability one that for all but finitely many $n$,
(4.7) 
We are now ready to prove the following.
Lemma 4.3.
For all but finitely many $n$ it holds that
Proof: As in Section 3, let and let . It immediately follows from Eq. (4.7) that . Clearly
(4.8) 
and working in the other direction
(4.9) 
If we let the SVD of be then
(4.10) 
by the assumption (Eq. 4.6) that and Eq. (4). Combined with Eq. (4.8), we have . Hence, we have that
(4.11) 
since and . ∎
Lemma 4.4.
For all but finitely many $n$, it holds that
Proof: We have
(4.12) 
The first term in Eq. (4.12) is bounded by by Eq. (4.7). For the second term we have from Eq. (4) that ∎
Proof of Theorem 4.1: Let be the balls of radius around the distinct rows of . If , then by assumption
(4.13) 
and the are disjoint.
Let and let . Let be the optimal clustering of the rows of from (4.1). Suppose there is an index such that This would imply that (where is the matrix whose row is ). As for a constant , we would then have that
(4.14) 
Lemma 4.4 yields that
(4.15) 
(where the final equality follows from the assumptions). Combined with Eq. (4.14), this contradicts the minimality of and therefore .
From (4.7) we have If are such that , then and it follows that
(4.16) 
It follows that for all but finitely many , . Stated simply,
(4.17) 
Now [27, Theorem 1] immediately implies that for all but finitely many $n$, the within-cluster matchings are correct for all clusters, and the proof is complete. ∎
Remark 4.5. The implication of assumption iii. in Theorem 4.1 is that, in order for the scaled Procrustes fit of the embedded seeded vertices to align the entire embedding, it is sufficient that the latent positions corresponding to the seeded vertices do not concentrate too heavily in one direction. We note that analogous assumptions are made in the literature on sparse subspace clustering; see [14] for example and detail.
5 Empirical Results
We next explore the effectiveness of our divide-and-conquer approach on simulated and real data examples. When comparing across graph matching algorithms, we measure effectiveness via the matching accuracy (since we assume a true latent alignment, this amounts to the fraction of vertices which were correctly aligned) and the runtime of the algorithms. Across both runtime and accuracy, our algorithm achieves excellent performance: significantly better accuracy than existing scalable bijective matching algorithms (Umeyama’s spectral approach [33]), and significantly better accuracy and runtime than the existing state-of-the-art (in terms of accuracy) matching procedures (PATH [36], GLAG [15], FAQ [34]). Unless otherwise specified, all of our experiments are run on a 2 x Intel(R) Xeon(R) CPU E5-2660 @ 2.20GHz (with 32 virtual cores and 16 physical cores). We implement all of our code in the software package Matlab, limited to 12 parallel threads. Additionally, the code needed to run our algorithm (in Matlab) is publicly available for download at https://github.com/lichen11/LSGMcode.
5.1 Simulation Results
Once the vertices of the two graphs are clustered, we can run the matching procedures fully in parallel across the clusters. Our first experiment seeks to understand how available bijective matching algorithms perform (with respect to accuracy and speed), so that we can better understand how to appropriately set the maximum allowed cluster size. To this end, we run the following experiment. We consider two correlated SBM random graphs, with block probability matrix built from the identity matrix via the Kronecker product, over a range of graph sizes $n$. We cluster the graphs into 2 clusters and run a variety of bijective GM algorithms on these clusters. We record both the performance of the algorithms in recovering the true alignment and the corresponding running time of each algorithm. Note that we ran the matching procedures on the two clusters in parallel. The algorithms we ran include SGM [17], FAQ [34], the spectral matching algorithm of Umeyama [33], the PATH algorithm and the associated convex relaxation (PATH CR, which is solved exactly using Frank-Wolfe methodology [18]) [36], and the GLAG algorithm [15]. See Figure 2 for the results.


To run LSGM, we used seeds chosen uniformly at random from the two blocks. The seeds are always used in the embedding and clustering procedure, but SGM is the only algorithm to use seeded vertices when matching the clusters. It is not surprising that it achieves the best performance. We expect similarly excellent results from the other matching algorithms once they are seeded.
In this experiment, we note that, of the non-seeded matching algorithms, PATH and its associated convex relaxation achieve the best results. The PATH CR procedure scales very well in running time but performs progressively worse as $n$ increases. On the other hand, the PATH algorithm’s running time scales poorly (as does that of the GLAG algorithm), needing significantly longer running time than SGM or PATH CR across all values of $n$. While PATH and PATH CR achieve similar results to SGM for smaller $n$, the significantly longer run time for PATH and the sharply decreased performance of PATH CR at larger $n$ hinder these algorithms’ effectiveness as post-clustering matching procedures. Indeed, to employ these two procedures, we would need to severely restrict the maximum allowed size of our clusters to achieve a feasible running time and/or accurate matchings. We note that seeding GLAG, the PATH algorithm and PATH CR may yield significantly faster running times and less performance degradation as $n$ increases, as seeding FAQ yields both.
SGM is remarkably stable, achieving excellent matching performance across all $n$. This not only indicates that our clustering methodology is consistent across graphs, but also points to the importance of using the seeds in the subsequent matching. Here the correlation is very high, and for smaller $n$ PATH and PATH CR perform on par with SGM, suggesting that seeds are less important when matching very similar graphs. We next explore the effect of decreased correlation.
We explore this in the next experiment, and again we note that SGM significantly outperforms all the non-seeded matching algorithms across all $n$. This points to the consistency of our clustering procedure here. Note that we needed slightly more seeds to achieve this consistency with the lower correlation. Indeed, with three seeds from each cluster, the clustering was not consistent at the lower correlation, unlike in the higher correlation case.
5.2 Robustness to misspecified $k$
How sensitive is the performance of our algorithm to misspecifying $k$? We claim that as long as the clusters are consistently estimated, the procedure is relatively insensitive to misestimating $k$. Following this reasoning, if our clustering step allows clusters that are larger than the true blocks, then we would expect our clusters to be consistent and our performance not to degrade significantly. However, if our clustering step does not allow clusters larger than the true blocks, then we would not expect our clusters to be consistent, and our performance would degrade significantly.
To this end, we consider the following experiment. We consider correlated SBM’s with 10 blocks, each of equal size, with within-block edge probability and across-block edge probability . We run 20 MC replicates of divide-and-conquer graph matching with seeds and with the maximum allowed cluster size equal to 100, 200, 300, 400, 500. We summarize the results in Figure 3. Note that we have included the “Oracle” matcher, which gives the maximum number of vertices possibly matched correctly given the clustering.
From Figure 3, we see that the performance of SGM is again significantly better than all the other GM algorithms considered, and is also resilient to allowing larger clusters in the $k$-means procedure. This is echoed in the lower-correlation experiment, where we see that SGM nearly achieves oracle accuracy across all maximum cluster sizes. We also explore the sensitivity of LSGM’s runtime to the maximum allowed cluster size. Utilizing 12 cores, we record the average runtimes of the LSGM algorithm (using SGM for matching) at each maximum cluster size; indeed, SGM has cubic runtime and is the slowest step of our divide-and-conquer procedure, so we expect the runtime to increase when the matching subroutines are run between bigger graphs. Larger clusters may be more consistent and therefore may lead to better matching performance, but this is achieved at the expense of increased runtime.