Metrics for Graph Comparison: A Practitioner's Guide

04/16/2019
by Peter Wills, et al.
University of Colorado Boulder


Abstract

Comparison of graph structure is a ubiquitous task in data analysis and machine learning, with diverse applications in fields such as neuroscience [1], cyber security [2], social network analysis [3], and bioinformatics [4], among others. Discovery and comparison of structures such as modular communities, rich clubs, hubs, and trees in data in these fields yields insight into the generative mechanisms and functional properties of the graph.

Often, two graphs are compared via a pairwise distance measure, with a small distance indicating structural similarity and vice versa. Common choices include spectral distances (also known as λ distances) and distances based on node affinities (such as DeltaCon [5]). However, there has not yet been a comparative study of the efficacy of these distance measures in discerning between common graph topologies and different structural scales.

In this work, we compare commonly used graph metrics and distance measures, and demonstrate their ability to discern between common topological features found in both random graph models and empirical datasets. We put forward a multi-scale picture of graph structure, in which the effect of global and local structure upon the distance measures is considered. We make recommendations on the applicability of different distance measures to empirical graph data problems based on this multi-scale view. Finally, we introduce the Python library NetComp which implements the graph distances used in this work.

Introduction

In the era of big data, comparison and matching are ubiquitous tasks. A graph is a particular type of data structure which records the interactions between some collection of agents. (These objects are sometimes referred to as "complex networks"; we will use the mathematician's term "graph" throughout the paper.) This type of data structure relates connections between objects, rather than directly relating the properties of those objects. The interconnectedness of the objects in graph data disallows many common statistical techniques used to analyze tabular datasets. The need for new analytical techniques for visualizing, comparing, and understanding graph data has given rise to a rich field of study [6].

In this work, we focus on tools for pairwise comparison of graphs. Such comparison often takes place within the context of anomaly detection and graph matching. In the former, one has a sequence of graphs (often a time series) and hopes to establish at which time steps the graphs change "significantly". In the latter, one has a collection of graphs, and wants to establish whether a sample is likely to have been drawn from that collection. Both problems require the ability to effectively compare two graphs. However, the utility of any given comparison method varies with the type of information the user is looking for; one may care primarily about large scale graph features such as community structure or the existence of highly connected "hubs"; or, one may be focused on smaller scale structure such as local connectivity (i.e. the degree of a vertex) or the ubiquity of substructures such as triangles.

Existing surveys of graph distances are limited to observational datasets [7]. While the authors of such surveys try to choose datasets that are exemplars of certain classes of networks (e.g. social, biological, or computer networks), it is difficult to generalize these studies to other datasets.

In this paper, we take a different approach. We consider existing ensembles of random graphs as prototypical examples of certain graph structures, which are the building blocks of existing empirical network datasets. We propose therefore to study the ability of various distances to compare two samples randomly drawn from distinct ensembles of graphs. Our investigation is concerned with the relationship between the families of graph ensembles, the structural features characteristic of these ensembles, and the sensitivity of the distances to these characteristic structural features.

The myriad proposed techniques for graph comparison [8] are severely reduced in number when one imposes the practical restriction that the algorithm run in a reasonable amount of time on large graphs. Graph data frequently consists of millions or even billions of vertices, and so algorithms whose complexity scales quadratically with the size of the graph quickly become infeasible. In this work, we restrict our attention to approaches where the calculation time scales linearly or near-linearly with the number of vertices in the graph for sparse graphs. (Sparsity is, roughly, the requirement that the number of edges in a graph of size n be much lower than the maximum possible number n(n − 1)/2; a technical definition is provided below.)

In the past 40 years, many random graph models have been developed which emulate certain features found in real-world graphs [9, 10]. A rigorous probabilistic study of the application of graph distances to these random models is difficult because the models are often defined in terms of a generative process rather than a distribution over the space of possible graphs. As such, researchers often restrict their attention to very small, deterministic graphs (see e.g. [11]) or to very simple random models, such as that proposed by Erdős and Rényi [12]. Even in these simple cases, rigorous probabilistic analysis can be prohibitively difficult. We adopt a numerical approach, in which we sample from random graph distributions and observe the empirical performance of various distance measures.

Throughout the work, we understand the observed results through a lens of global versus local graph structure. Examples of global structure include community structure and the existence of well-connected vertices (often referred to as “hubs”). Examples of local structure include the median degree in the graph, or the density of substructures such as triangles. Our results demonstrate that some distances are particularly tuned towards observing global structure, while some naturally observe both scales. In both our empirical and numerical experiments, we use this multi-scale interpretation to understand why the distances perform the way they do on a given model, or on given empirical graph data.

The paper is structured as follows: in Section Graph Distance Measures, we introduce the distances used, and establish the state of knowledge regarding each. In Section Random Graph Models, we similarly introduce the random graph models of study and discuss their important features. In Section Evaluation of Distances on Random Graph Ensembles we numerically examine the ability of the distances to distinguish between the various random graph models. The reader who is already familiar with the graph models and distances discussed can skip to Section Discussion for a discussion of the results of our evaluation of the distances on the various random graph models, referencing the results in Section Experimental Results as necessary. In Section Applications to Empirical Data, we apply the distances to empirical graph data and discuss the results. Finally, Section Conclusion summarizes the work and our recommendations. In the appendix, we introduce and discuss NetComp, the Python package which implements the distances used to compare the graphs throughout the paper.

Graph Distance Measures

Let us begin by introducing the distances we will use in this study, and discussing the state of the knowledge for each. We have chosen both standard and cutting-edge distances, with the requirement that the algorithms be computable in a reasonable amount of time on large, sparse graphs. In practice, this means that the distances must scale linearly or near-linearly in the size of the graph.

We refer to these tools as "distance measures," as many of them do not satisfy the technical requirements of a metric. Although all are symmetric, they may fail one or more of the other requirements of a mathematical metric. This can be very problematic if one hopes to perform rigorous analysis on these distances, but in practice it is generally not significant. Consider the requirement of identity of indiscernibles, in which d(G, G′) = 0 if and only if G ≅ G′. We rarely encounter two graphs where d(G, G′) = 0 exactly; we are more frequently concerned with an approximate form of this statement, in which we wish to deduce that G is similar to G′ from the fact that d(G, G′) is small. Similarly, although the triangle inequality is foundational in approximation and proof methods in analysis, it is rarely employed in our process of applying these distances for anomaly detection.

Notation

We must first introduce the notation used throughout the paper. It is standard wherever possible.

We denote by G = (V, E) a graph with vertex set V = {1, …, n} and edge set E ⊆ V × V. The weight function w assigns to each edge (i, j) in E a positive number, which we denote w_{ij}. We call n = |V| the size of the graph, and denote by m = |E| the number of edges. For u and v in V, we say u ∼ v if (u, v) ∈ E. The matrix A is called the adjacency matrix, and is defined as

A_{ij} = w_{ij} if i ∼ j, and A_{ij} = 0 otherwise.

The degree d_i of a vertex i is defined as d_i = Σ_j A_{ij}. The degree matrix D is the diagonal matrix of degrees, so D_{ii} = d_i and D_{ij} = 0 for i ≠ j. The Laplacian matrix (or just Laplacian) of G is given by L = D − A. The normalized Laplacian is defined as ℒ = D^{−1/2} L D^{−1/2}, where the diagonal matrix D^{−1/2} is given by

(D^{−1/2})_{ii} = 1/√d_i if d_i > 0, and 0 otherwise.

We refer to A, L, and ℒ as matrix representations of G. These are not the only useful matrix representations of a graph, although they are some of the most common. For a more diverse catalog of representations, see [13]. Note that other normalizations of the Laplacian matrix are possible; a popular choice is normalizing the rows so that they sum to one, which results in the transition matrix for a random walk on the graph. Our choice maintains symmetry, and thus strictly real eigenvalues and eigenvectors, a property which the row-normalized Laplacian lacks. A real spectrum simplifies computations, for example, when one wishes to form a basis of eigenvectors in order to decompose real-valued functions on the graph. However, the interpretability of the spectrum comes at the cost of interpretability of the matrix itself; while the row-stochastic normalization has an easy-to-understand function in terms of random walks, the interpretation of our normalized Laplacian ℒ = D^{−1/2} L D^{−1/2} is not so straightforward.
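To make the notation concrete, the sketch below builds A, D, L, and ℒ for an arbitrary example graph using NumPy and NetworkX (the library used for sampling throughout this paper); a production implementation such as NetComp would presumably use sparse matrices rather than dense arrays.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                # any small undirected example graph
A = nx.to_numpy_array(G)                  # adjacency matrix A
degrees = A.sum(axis=1)                   # d_i = sum_j A_ij
D = np.diag(degrees)                      # degree matrix D
L = D - A                                 # combinatorial Laplacian L = D - A
# D^{-1/2}, with the convention that vertices of degree zero map to zero
with np.errstate(divide="ignore"):
    inv_sqrt = np.where(degrees > 0, 1.0 / np.sqrt(degrees), 0.0)
L_norm = np.diag(inv_sqrt) @ L @ np.diag(inv_sqrt)   # normalized Laplacian
```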

The spectrum of a matrix is the sorted sequence of its eigenvalues. Whether the sequence is ascending or descending depends on the matrix in question. We denote the i-th eigenvalue of the adjacency matrix by λ_i^A, with the eigenvalues sorted in descending order, so that λ_1^A ≥ λ_2^A ≥ … ≥ λ_n^A. The eigenvalues of the Laplacian matrix are denoted by λ_i^L, with the eigenvalues sorted in ascending order, so that λ_1^L ≤ λ_2^L ≤ … ≤ λ_n^L. We similarly denote the i-th eigenvalue of the normalized Laplacian by λ_i^ℒ, with λ_1^ℒ ≤ λ_2^ℒ ≤ … ≤ λ_n^ℒ.

Two graphs are isomorphic if and only if there exists a map between their vertex sets under which the two edge sets are equal. Since our vertex sets are integers, we can simplify this definition. In particular, let us say that G ≅ G′ if and only if there exists a permutation matrix P such that A′ = P A Pᵀ.

We say that a distance d requires node correspondence when there exist graphs G, G′, and G″ such that G′ ≅ G″ but d(G, G′) ≠ d(G, G″). Intuitively, a distance requires node correspondence when one must know some meaningful mapping between the vertex sets of the graphs under comparison.

Graph Distance Taxonomy

The distance functions we study divide naturally into two categories, which we will now describe. These categories are not exhaustive; many distance functions (including one we employ in our experiments) do not fit neatly into either category. Akoglu et al. [8] provide an alternative taxonomy; our taxonomy refines a particular group of methods they refer to as "feature-based". (Note that the authors in [8] are classifying anomaly detection methods in particular, rather than graph comparison methods in general.)

Spectral Distances

Let us first discuss spectral distances, also known as λ distances. We will briefly review the necessary background; for a good introduction to spectral methods of graph comparison, see [13].

We will first define the adjacency spectral distance; the Laplacian and normalized Laplacian spectral distances are defined similarly. Let G and G′ be graphs of size n, with adjacency spectra λ^A and λ^{A′}, respectively. The adjacency spectral distance between the two graphs is defined as

d_A(G, G′) = ( Σ_{i=1}^{n} ( λ_i^A − λ_i^{A′} )² )^{1/2},

which is just the distance between the two spectra in the ℓ² metric. We could use any ℓ^p metric here, for 0 < p ≤ ∞. The choice of p is informed by how much one wishes to emphasize outliers; in the limiting case p → 0, the metric returns the measure of the set over which the two vectors are different, and when p = ∞ only the largest element-wise difference between the two vectors is returned. Note that for p < 1 the ℓ^p distances are not true metrics (in particular, they fail the triangle inequality), but they still may provide valuable information. For a more detailed discussion on ℓ^p norms, see [14].

The Laplacian and normalized Laplacian spectral distances d_L and d_ℒ are defined in the exact same way. In general, one can define a spectral distance for any matrix representation of a graph; for results on more than just the three we analyze here, see [13]. Spectral distances are invariant under permutations of the vertex labels; that is to say, if P is a permutation matrix, then the spectrum of P A Pᵀ is equal to the spectrum of A. This allows us to directly compare the topological similarity of two graphs without having to discover any mapping between the vertex sets.

In practice, it is often the case that only the first k eigenvalues are compared, where k ≪ n. We refer to such truncated distances as λ_k distances. When using λ_k distances, it is important to keep in mind that the adjacency spectral distance compares the k largest eigenvalues, whereas the Laplacian spectral distances compare the k smallest eigenvalues. Comparison using the first k eigenvalues for small k allows one to focus on the community structure of the graph, while ignoring the local structure of the graph [15]. Inclusion of the higher eigenvalues allows one to discern local features as well as global. As we will see, this flexibility allows the user to target the particular scale at which they wish to examine the graph, and is a significant advantage of the spectral distances.
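The sketch below illustrates the adjacency λ distance and its truncated λ_k variant; the function name is our own, the two graphs are assumed to be of equal size, and NetComp's actual interface may differ.

```python
import numpy as np
import networkx as nx
import scipy.sparse.linalg as spla

def adjacency_spectral_distance(G1, G2, k=None):
    """lambda distance on adjacency spectra; pass k (with k < n) for the
    truncated lambda_k variant."""
    spectra = []
    for G in (G1, G2):
        A = nx.adjacency_matrix(G).astype(float)
        if k is None:
            lam = np.linalg.eigvalsh(A.toarray())   # full spectrum (dense)
        else:
            # k largest algebraic eigenvalues of the sparse matrix
            lam = spla.eigsh(A, k=k, which="LA", return_eigenvectors=False)
        spectra.append(np.sort(lam)[::-1])          # sort descending
    return float(np.linalg.norm(spectra[0] - spectra[1]))   # l2 comparison
```

For the Laplacian variants, one would substitute nx.laplacian_matrix (or the normalized form) and compare the k smallest eigenvalues (which="SA") instead.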

The three spectral distances used here are not true metrics. This is because there exist graphs G and G′ that are co-spectral but not isomorphic. That is to say, adjacency cospectrality occurs when λ_i^A = λ_i^{A′} for all 1 ≤ i ≤ n, so d_A(G, G′) = 0, but G ≇ G′. Similar notions of cospectrality exist for all matrix representations; graphs that are co-spectral with respect to one matrix representation are not necessarily co-spectral with respect to other representations.

Little is known about cospectrality, save for some computational results on small graphs [16] and trees [13]. Schwenk proved that a sufficiently large tree nearly always has a co-spectral counterpart [17]. This result was extended recently to include a wide variety of random trees [18]. However, results such as these are not of great import to us; the graphs examined are large enough that we do not encounter cospectrality in our numerical experiments. A more troubling failure mode of the spectral distances would be when the distance between two graphs is very small, but the two graphs have important topological distinctions. In Section Discussion, we will provide further insight into the effect of topological changes on the spectra of some of the random graph models we study.

The consideration above addresses the question of how local changes affect the overall spectral properties of a graph. Some limited computational studies have been done in this direction. For example, Farkas et al. [19] study the transition of the adjacency spectrum of a small world graph as the disorder parameter increases. As one might expect, they observe the spectral density transition from a highly discontinuous density (which occurs when the disorder is zero, and so the graph is a ring-like lattice) to Wigner's famous semi-circular shape [20] (which occurs when the disorder is maximized, so that the graph is roughly equivalent to an uncorrelated random graph).

From an analytical standpoint, certain results in random matrix theory inform our understanding of fluctuations of eigenvalues of the uncorrelated random graph (see Section Random Graph Models for a definition). These results hold asymptotically as we consider the i-th eigenvalue of a graph of size n, where i grows proportionally with n. In this case, O'Rourke [21] has shown that the eigenvalue is asymptotically normal, and provides an expression for the asymptotic variance; see Remark 8 in [21] for the detailed statement of the theorem. This result can provide a heuristic for spectral fluctuations in some random graphs, but when the structure of these graphs diverges significantly from that of the uncorrelated random graph, then results such as these become less informative.

Another common question is that of interpretation of the spectrum of a given matrix representation of a graph. ("Spectral structure" might refer to the overall shape of the spectral density, or the value of individual eigenvalues separated from the bulk.) How are we to understand the shape of the empirical distribution of eigenvalues? Can we interpret the eigenvalues which separate from this bulk in a meaningful way? The answer to this question depends, of course, on the matrix representation in question. Let us focus first on the Laplacian matrix L, the interpretation of which is the clearest.

The first eigenvalue of L is always λ_1^L = 0, with the corresponding eigenvector being the vector of all ones. It is a well-known result that the multiplicity of the zero eigenvalue is the number of connected components of the graph, i.e. if 0 = λ_1^L = … = λ_k^L < λ_{k+1}^L, then there are precisely k connected components of the graph [22]. Furthermore, in such a case, the first k eigenvectors will indicate the components. In [15], an approximate version of this statement is made rigorous, in which the first k eigenvalues being small is an indicator of a graph being strongly partitioned into k clusters. This result justifies the use of the Laplacian in spectral clustering algorithms.

The eigenvalues of the Laplacian also have an interpretation analogous to the vibrational frequencies that arise as the eigenvalues of the continuous Laplace operator. To understand this analogy, consider the graph as embedded in a plane, with each vertex representing an oscillator of mass one and each edge a spring with elasticity one. Then, for small oscillations perpendicular to the plane, the Laplacian matrix is precisely the coupling matrix for this system, and so its eigenvalues give the squares of the normal mode frequencies, λ_i^L = ω_i². For a more thorough exposition of this interpretation of the Laplacian, see [23].

Maas [24] suggests a similar interpretation of the spectrum of the adjacency matrix A. Consider the graph as a network of oscillators, embedded in a plane as previously. Additionally suppose that each vertex is connected to enough external non-moving points (by edges with elasticity one) that the graph becomes regular with degree d_max. The frequencies of the normal modes of this structure then connect to the eigenvalues of A via ω_i² = d_max − λ_i^A. (If the graph is already regular with degree d, then this interpretation is consistent with the previous, since the eigenvalues of L are just λ_i^L = d − λ_i^A.)

Matrix Distances

The second class of distances we will discuss are called matrix distances, and consist of direct comparison of the structure of pairwise affinities between vertices in a graph. These affinities are frequently organized into matrices, and the matrices can then be compared, often via an entry-wise norm.

We have discussed spectral methods for measuring distances between two graphs; to introduce the matrix distances, we will begin by focusing on methods for measuring distances on graphs, that is to say, the distance between two vertices u and v. Just a few examples of such distances include the shortest path distance [25], the effective graph resistance [26], and variations on random-walk distances [27]. Of those listed above, the shortest path distance is the oldest and the most thoroughly studied; in fact, it is so ubiquitous that "graph distance" is frequently used synonymously with shortest path distance [28].

There are important differences between the distances that we might choose. The shortest path distance considers only a single path between two vertices. In comparison, the effective graph resistance takes into account all possible paths between the vertices, and so measures not only the length, but the robustness of the communication between the vertices. This distinction is important when, for example, considering travel between two locations on a road network subject to high traffic.

How do these distances on a graph help us compute distances between graphs? Let us denote by d(u, v) a generic distance on a graph. We need assume very little about this function, besides it being real-valued; in particular, it need not be symmetric, and we can even allow d(u, v) = ∞. (When we say "distance" we implicitly assume that smaller values imply greater similarity; however, we can also carry out this approach with a similarity score, in which larger values imply greater similarity.) Recalling that our vertices are labelled with natural numbers, we can then construct a matrix M of pairwise distances via M_{ij} = d(i, j).

The idea behind what we refer to as matrix distances is that this matrix carries important structural information about the graph. Suppose that, for our given distance d, graphs G and G′ have corresponding matrices M and M′. We can then compare G and G′ via

d(G, G′) = ‖M − M′‖,    (1)

where ‖·‖ is a norm we are free to choose. (We could use metrics, or even similarity functions here, although that may cause the resulting function to lose some desirable properties.)

Let us elucidate a specific example of such a distance; in particular, we will show how the edit distance conforms to this description. Let d(i, j) be defined as

d(i, j) = w_{ij} if i ∼ j, and d(i, j) = 0 otherwise.    (2)

Then the matrix M is just the adjacency matrix A. If we use the entry-wise ℓ¹ norm

‖M − M′‖ = Σ_{i,j} | M_{ij} − M′_{ij} |,    (3)

then we call the resulting distance the edit distance.
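Under Equations (2) and (3), the edit distance reduces to an entry-wise comparison of adjacency matrices. A minimal sketch follows, assuming the two graphs share a common vertex set (this distance requires node correspondence); the function name is ours.

```python
import numpy as np
import networkx as nx

def edit_distance(G1, G2):
    """Entry-wise l1 norm of A - A', per Equations (2) and (3)."""
    nodes = sorted(G1.nodes())          # assumes G2 has the same vertex set
    A1 = nx.to_numpy_array(G1, nodelist=nodes)
    A2 = nx.to_numpy_array(G2, nodelist=nodes)
    return np.abs(A1 - A2).sum()        # each undirected edge counted twice
```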

Of course, the usefulness of such a distance is directly dependent on how well the matrix M reflects the topological structure of the graph. The edit distance focuses by definition on local structure; it can only see changes at the level of edge perturbations. If significant volume changes are happening in the graph, then the edit distance will detect this, as will our other matrix distances. However, in our numerical experiments, we match the expected volume of the models under comparison, and so the edit distance is generally unable to discern between the models seen in Section Evaluation of Distances on Random Graph Ensembles.

We also implement the resistance-perturbation distance, first discussed in [11]. This distance takes the effective graph resistance R_{ij}, defined in [26], as the measure of vertex affinity. This results in a (symmetric) matrix R of pairwise resistances. The resistance-perturbation distance (or just resistance distance) is based on comparing the two graphs' resistance matrices in the entry-wise ℓ¹ norm given in Equation (3).

The nice theoretical properties of the effective graph resistance [26] motivate our computational exploration of how well it reflects structure in realistic scenarios. Unlike the edit distance, the resistance distance is designed to detect global structural differences between graphs. A recent work [29] discusses the efficacy of the resistance distance in detecting community changes.
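A direct (dense, cubic-time) way to compute the resistance matrix is through the Moore-Penrose pseudoinverse of the Laplacian; the sketch below assumes connected graphs on a shared vertex set, and stands in for the faster approximate algorithm of [11].

```python
import numpy as np
import networkx as nx

def resistance_matrix(G):
    """Pairwise effective resistances via the Laplacian pseudoinverse:
    R_ij = Q_ii + Q_jj - 2 Q_ij, where Q = pinv(L)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    Q = np.linalg.pinv(L)
    q = np.diag(Q)
    return q[:, None] + q[None, :] - 2 * Q

def resistance_distance(G1, G2):
    """Entry-wise l1 comparison of the two resistance matrices (Eq. (3))."""
    return np.abs(resistance_matrix(G1) - resistance_matrix(G2)).sum()
```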

Finally, we look at DeltaCon, a distance based on the fast belief-propagation method of measuring node affinities [5]. To compare graphs, this method uses the fast belief-propagation matrix

S = [ I + ε²D − εA ]⁻¹    (4)

and compares the two representations S and S′ via the Matusita difference:

d(G, G′) = ( Σ_{i,j} ( √S_{ij} − √S′_{ij} )² )^{1/2}.    (5)

Note that the matrix S can be rewritten in a matrix power series as

S = I + εA + ε²(A² − D) + ⋯,    (6)

and so takes into account the influence of neighboring vertices in a weighted manner, where neighbors separated by paths of length k have weight ε^k. Fast belief-propagation is designed to model the diffusion of information throughout a graph [30], and so should in theory be able to perceive both global and local structures. Although empirical tests are performed in [5], no direct comparison to other modern methods is presented.
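A compact rendering of Equations (4) and (5) follows; the choice ε = 1/(1 + d_max) is one common convergence-guaranteeing choice from the fast belief-propagation literature, not necessarily the value used in this paper's experiments.

```python
import numpy as np
import networkx as nx

def fabp_matrix(G, eps=None):
    """Fast belief-propagation matrix S = [I + eps^2 D - eps A]^{-1} (Eq. (4))."""
    A = nx.to_numpy_array(G)
    d = A.sum(axis=1)
    if eps is None:
        eps = 1.0 / (1.0 + d.max())     # a common convergence-safe choice
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) + eps**2 * np.diag(d) - eps * A)

def deltacon_distance(G1, G2):
    """Matusita difference between the two FaBP matrices (Eq. (5))."""
    S1, S2 = fabp_matrix(G1), fabp_matrix(G2)
    # clip tiny negative entries introduced by floating-point error
    S1, S2 = np.clip(S1, 0, None), np.clip(S2, 0, None)
    return float(np.sqrt(((np.sqrt(S1) - np.sqrt(S2)) ** 2).sum()))
```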

Other Graph Distances

These two categories do not cover all possible methods of graph comparison. The computer science literature explores various other methods (see [8], Section 3.2 for a nice review), and other disciplines that apply graph-based techniques often have their own idiosyncratic methods for comparing graphs extracted from data.

One possible method for comparing graphs is to look at specific "features" of the graph, such as the degree distribution, betweenness centrality distribution, diameter, number of triangles, number of k-cliques, etc. For graph features that are vector-valued (such as the degree distribution) one might also consider the vector as an empirical distribution and take as graph features the sample moments (or quantiles, or other statistical properties). A feature-based distance is a distance that uses comparison of such features to compare graphs.

Of course, in a general sense, all methods discussed so far are feature based; however, in the special case that the features occur as values over the space of possible node pairings, we choose to refer to them more specifically as matrix distances. Similarly, if the feature in question is the spectrum of a particular matrix realization of the graph, we will call the method a spectral distance.

In [31], a feature-based distance called NetSimile is proposed, which focuses on local and egonet-based features (e.g. degree, volume of egonet as fraction of maximum possible volume, etc.). If f features are used, the method aggregates a feature-vertex matrix of size f × n. This feature matrix is then reduced to a "signature vector" (a process they call "aggregation") which consists of the mean, median, standard deviation, skewness, and kurtosis of each feature. These signature vectors are then compared in order to obtain a measure of distance between graphs.
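The sketch below mimics NetSimile's pipeline on a reduced feature set (three node features rather than the original seven); the feature choice here is illustrative, while the Canberra distance on signature vectors follows the original proposal in [31].

```python
import numpy as np
import networkx as nx
from scipy import stats
from scipy.spatial.distance import canberra

def netsimile_signature(G):
    """f x n feature matrix aggregated into a signature vector
    (mean, median, std, skewness, kurtosis per feature)."""
    feats = np.array([
        [d for _, d in G.degree()],                    # degree
        list(nx.clustering(G).values()),               # clustering coefficient
        list(nx.average_neighbor_degree(G).values()),  # avg. neighbor degree
    ])
    return np.hstack([feats.mean(axis=1), np.median(feats, axis=1),
                      feats.std(axis=1), stats.skew(feats, axis=1),
                      stats.kurtosis(feats, axis=1)])

def netsimile_distance(G1, G2):
    """Compare the two signature vectors with the Canberra distance."""
    return canberra(netsimile_signature(G1), netsimile_signature(G2))
```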

In the neuroscience literature in particular, feature-based methods for comparing graphs are popular. In [32], the authors use graph features such as modularity, shortest path distance, clustering coefficient, and global efficiency to compare functional connectivity networks of patients with and without schizophrenia. Statistics of these features for the control and experiment groups are aggregated and compared using standard statistical techniques.

We implement NetSimile in our numerical tests as a prototypical feature-based method. It is worth noting that the general approach could be extended in almost any direction; any number of features could be used (which could take on scalar, vector, or matrix values) and the aggregation step can include or omit any number of summary statistics on the features, or can be omitted entirely. We implement the method as it is originally proposed, with the caveat that calculation of many of these features is not appropriate for large graphs, as they cannot be computed in linear or near-linear time. A scalable modification of NetSimile would utilize features that can be calculated (at least approximately) in linear or near-linear time.

Scaling and Complexity of Algorithms

In many interesting graph analysis scenarios, the sizes of the graphs to be analyzed are on the order of millions or even billions of vertices. For example, the social network defined by Facebook users has over 2 billion vertices as of 2017. In scenarios such as these, any algorithm of quadratic or higher complexity will become infeasible; although in principle it is possible that the constant would be so small as to make up for the quadratic term in the complexity, in practice this is not the case. This motivates our requirement that our algorithms be of near-linear complexity. Indeed, even for graphs with vertex counts in the millions, quadratic algorithms quickly become infeasible.

This challenge motivates the previously stated requirement that all algorithms be of linear or near-linear complexity. We say an algorithm is linear if its running time is O(n); it is near-linear if its running time is O(n g(n)), where g(n) is asymptotically bounded by a polynomial in log n. We use this notation in the standard way; for a more thorough discussion of algorithmic complexity, including definitions of the notation, see [33].

We focus our attention on sparse graphs. We define sparsity as an asymptotic property, and so it is only defined on a sequence of graphs. However, one can reasonably apply this to empirical graph data which changes over time and thus generates a natural time series which can be tested (roughly, since we are always at finite time) against this definition. In particular, let {G_n} be a sequence of graphs, where the size of G_n is n and the number of edges in G_n is denoted by m_n. We say such a sequence of graphs is sparse when the sequence m_n is near-linear in n, in the sense given above.

Table 1 indicates the algorithmic complexity of each distance measure we compare. For DeltaCon and the resistance distance, there are approximate algorithms as well as exact algorithms; we list the complexity of both. Although we use the exact versions in our experiments, in practice the approximate version would likely be used if the graphs to be compared are large.

Of particular interest are the highly parallelizable randomized algorithms which can allow for extremely efficient matrix decomposition. In [34], the authors review many such algorithms, and discuss in particular their applicability to determining principal eigenvalues. The computational complexity in Table 1 for the spectral distances is based on their simplified analysis of the Krylov subspace methods, which states that the approach costs O(k T_mult + k²n), where T_mult is the cost of matrix-vector multiplication for the input matrix and k is the number of eigenvalues sought. Since our input matrices are sparse, T_mult = O(m), and so the cost is O(km + k²n). Although we use the implicitly restarted Arnoldi method in our eigenvalue calculations, if implementing such a decomposition on large matrices the use of a randomized algorithm could lead to a significant increase in efficiency.

Distance Measure                     Complexity                 Ref.
Edit Distance                        O(m)                       (see note)
Resistance Distance (Exact)          O(n³)                      [11]
Resistance Distance (Approximate)    Õ(m)                       [11]
DeltaCon (Exact)                     O(n³)                      [5]
DeltaCon (Approximate)               O(m)                       [5]
NetSimile                            super-linear (see text)    [31]
Spectral Distance                    O(km + k²n)                [34]

Table 1: Distance measures and complexity. Here n indicates the maximum size of the two graphs being compared, and m indicates the maximum number of edges. For the spectral decomposition, k denotes the number of principal eigenvalues we wish to find. We assume that factors such as graph weights and quality of approximation are held constant, leading to simpler expressions here than appear in the cited references. The three spectral distances have equivalent complexity, since they all amount to performing an eigendecomposition on a symmetric real matrix. (Note: the edit distance, as we define it, consists of subtracting sparse matrices, and thus an efficient implementation scales with the number of entries in the matrices in question.)

Random Graph Models

Random graph models have long been used as a method for understanding topological properties of graph data that occurs in the world. The uncorrelated random graph model of Erdős and Rényi [12] is the simplest model, and provides a null model akin to white noise. The tractability of this model has led to some beautiful probabilistic analysis [35], but the uniform topology of the model does not accurately model empirical graph data. The stochastic blockmodel is an extension of the uncorrelated random graph, but with explicit community structure reflected in the distribution over edges.

Models such as preferential attachment [9] and the Watts-Strogatz model [10] have been designed to mimic properties of observed graphs. Very little can be said about these models analytically, and thus much of what is understood about them is computational. The two-dimensional square lattice is a quintessential example of a highly structured and regular graph.

We will now introduce the models that are used. We study only undirected graphs, with no self-loops. Although directed graphs are of great practical importance [36], tools such as the graph resistance only apply to undirected graphs. In particular, the electrical analogy needed to render the effective graph resistance meaningful is lost in a directed graph. Random-walk concepts are still perfectly meaningful on directed graphs, and motivate many popular algorithms used on such graphs (see e.g. [37]).

Most of the models in this work are sampled via the Python package NetworkX [38]; details of implementation can be found in the source code of the same. Some of the models we use are most clearly defined via their associated probability distribution, while others are best described by a generative mechanism. We will introduce the models roughly in order of complexity.

The Uncorrelated Random Graph

The uncorrelated random graph (also known as the Erdős-Rényi random graph) is a random graph in which each edge exists with probability p, independent of the existence of all others. We denote this distribution of graphs by G(n, p) (recall that n denotes the size of the graph). As previously mentioned, this is by far the most thoroughly studied of random graph models; the simplicity of its definition allows for analytic tractability not found in many other models of interest. For example, the spectrum of the uncorrelated random graph is well understood. In particular, the spectral density forms a semi-circular shape, first described by Wigner [20], of radius 2√(np(1 − p)), albeit with a single eigenvalue separate from the semicircular bulk [19].

We will employ the uncorrelated random graph as our null model in many of our experiments. It is, in some sense, a "structureless" model; more specifically, the statistical properties of each edge and vertex in the graph are exactly the same. This model fails to produce many of the properties observed in empirical networks, which motivates the use of alternative graph models. For a more detailed definition of the model, and a thorough study of the properties of the uncorrelated random graph, see [35].

The Stochastic Blockmodel

One important property of empirical graphs is community structure. Vertices often form (relatively) densely connected communities, with the connections between communities being (relatively) sparse, or non-existent. This motivates the use of the stochastic blockmodel. In this model, each of the n vertices is placed in one of two non-overlapping sets C₁ and C₂, referred to as "communities". Each edge (u, v) exists (independently) with probability p if u and v are in the same community, and with probability q if u and v are in separate communities. In this work, we will use "balanced" communities, so that the difference in community sizes is less than 1 in magnitude.

The stochastic blockmodel is a prime example of a model which exhibits global structure without any meaningful local structure. In this case, the global structure is the partitioned nature of the graph as a whole. On a fine scale, the graph looks like an uncorrelated random graph. We will use the model to determine which distances are most effective at discerning global (and in particular, community) structure.

The stochastic blockmodel is at the cutting edge of rigorous probabilistic analysis of random graphs. In particular, Abbe et al. [39] have recently proven a strict bound on community recovery, showing in exactly what regimes of and it is possible to discern the communities.

Generalizations of this model exist in which there are k communities of arbitrary size. Furthermore, each community need not have the same within-community parameter, and each community pair need not have the same cross-community parameter. One can, in general, construct a (symmetric) matrix of parameters with the within-community probabilities p_i on the diagonal, and the cross-community probabilities q_{ij} elsewhere. However, since our model has only two communities of nearly equal size, we only need a pair of parameters (p, q).
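NetworkX can sample this two-community model directly; the parameter values below are illustrative, not those used in our experiments.

```python
import networkx as nx

n, p, q = 1000, 0.10, 0.02        # illustrative within- and cross-community
sizes = [n // 2, n - n // 2]      # two balanced communities
probs = [[p, q],
         [q, p]]                  # the (symmetric) matrix of parameters
G = nx.stochastic_block_model(sizes, probs, seed=1)
```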

Preferential Attachment Models

Another often-studied feature of empirical graphs is their degree distribution. This is generally visualized as a histogram of degree frequencies, and studied under the assumption that it reflects some underlying distribution that informs us about the generative mechanism of the graph.

The degree distribution of an uncorrelated random graph is binomial, and so it has tails that decrease exponentially; for large d, the probability that a randomly chosen vertex has degree d decays exponentially in d. However, in observed graphs such as computer networks, human neural nets, and social networks, the observed degree distribution has a power-law tail [9]. In particular, one observes P(d) ∝ d^(−γ), where generally 2 ≤ γ ≤ 3. Such distributions are often also referred to as "scale-free".

The preferential attachment model is a scale-free random graph model. It is best described via its generative process rather than by a particular distribution over edges or possible graphs. (This feature of the preferential attachment model is what makes it particularly difficult to work with analytically.) Although first described by Yule in 1925 [40], the model did not achieve its current popularity until the work of Barabási and Albert in 1999 [9].

The model has two parameters, m and n. The latter is the size of the graph, and the former controls the density of the graph. We require that m < n. The generative procedure for sampling from this distribution proceeds as follows. Begin by initializing a star graph with m + 1 vertices, with one vertex having degree m and all others having degree 1. Then, at each step, add a vertex, and randomly attach it to m vertices already present in the graph, where the probability of attaching to a given vertex is proportional to the degree of that vertex. Stop once the graph contains n vertices.
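NetworkX implements this generative procedure; the parameters below are illustrative.

```python
import networkx as nx

# Each new vertex attaches to m = 4 existing vertices with probability
# proportional to their current degree.
G = nx.barabasi_albert_graph(n=1000, m=4, seed=1)
top_degrees = sorted((d for _, d in G.degree()), reverse=True)[:5]
print(top_degrees)   # a handful of high-degree "hubs" dominate
```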

The constructive description of the algorithm does not lend itself to simple analysis, and so less is known analytically about the preferential attachment model than the uncorrelated random graph or the stochastic blockmodel. In [41], the authors prove that the largest eigenvalues of the Laplacian scale with the largest degrees in the graph. (These results are proven on a model with a slightly different generative procedure; we do not find that they yield a particularly good approximation in the parameter regime of our experiments.) In [19], the authors demonstrate numerically that the adjacency spectrum exhibits a triangular peak with power-law tails.

Having a high degree makes a vertex more likely to attract more connections, so the graph quickly develops strongly connected “hubs,” or vertices of high degree. This impacts both the global and local structure of the graph. Hubs are by definition global structures, as they touch a significant portion of the rest of the graph, making path lengths shorter and increasing connectivity throughout the graph. On the local scale, vertices in the graph tend to connect exclusively to the highest-degree vertices in the graph, rather than to one another, generating a tree-like topology. This particular topology yields a signature in the tail of the spectrum, the importance of which will be discussed below.

The Watts-Strogatz Model

Many real-world graphs exhibit the so-called "small world phenomenon," where the expected shortest path length between two vertices chosen uniformly at random grows logarithmically with the size of the graph. Watts and Strogatz [10] constructed a random graph model that exhibits this behavior, along with a high clustering coefficient not seen in an uncorrelated random graph. The clustering coefficient is defined as the ratio of the number of triangles to the number of connected triplets of vertices in the graph. The Watts-Strogatz model [10] is designed to be the simplest random graph that has high local clustering and small average (shortest path) distance between vertices.

Like preferential attachment, this graph is most easily described via a generative mechanism. The algorithm proceeds as follows. Let n be the size of the desired graph, let 0 ≤ β ≤ 1 be the rewiring probability, and let k be an even integer with k < n. We begin with a ring lattice, which is a graph where each vertex is attached to its k nearest neighbors, k/2 on each side. Then, for each edge (u, v) in the graph, with probability β rewire the edge to a random target vertex w, so that (u, v) is replaced with (u, w). The target w is chosen so that w ≠ u and (u, w) ∉ E at the time of rewiring. Stop once all edges have been iterated through. In our implementations, we add an additional stipulation that the graph must be connected. If the algorithm terminates with a disconnected graph, then we restart the algorithm and generate a new graph. This process is repeated until the resulting graph is connected.
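NetworkX provides this restart-until-connected variant directly; the parameters below are illustrative.

```python
import networkx as nx

# Ring lattice on n vertices (each joined to its k nearest neighbours),
# each edge rewired with probability p; resampled until connected.
G = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1, tries=100, seed=1)
assert nx.is_connected(G)
```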

As mentioned before, the topological features that are significant in this graph are the high local clustering and short expected distance between vertices. Of course, these quantities are dependent on the rewiring parameter β; as β → 1, the Watts-Strogatz model approaches an uncorrelated random graph. Similarly, as β increases the adjacency spectral density transitions from the tangle of sharp maxima typical of a ring-lattice graph to the smooth semi-circle of the uncorrelated random graph [19]. Unlike the models above, this model exhibits primarily local structure. Indeed, we will see that the most significant differences lie in the tail of the adjacency spectrum, which can be directly linked to the number of triangles in the graph [19]. On the large scale, however, this graph looks much like the uncorrelated random graph, in that it exhibits no communities or high-degree vertices.

This model fails to produce the scale-free behavior observed in many empirical graph data sets. Although the preferential attachment model reproduces this scale-free behavior, it fails to reproduce the high local clustering that is frequently observed, and so we should think of neither model as fully replicating the properties of observed graphs.

Random Degree-Distribution Graphs

The above three models are designed to mimic certain properties of empirical graphs. In some cases, however, we may wish to fully replicate a given degree sequence, while allowing other aspects of the graph to remain random. That is to say, we seek a distribution that assigns equal probability to each graph, conditioned upon the graph having a given degree sequence. The simplest model that attains this result is the configuration model [42]. Recently, Zhang et al. [43] have derived an asymptotic expression for the adjacency spectrum of a configuration model, which is exact in the limit of large graph size and large mean degree.

Inconveniently, this model is not guaranteed to generate a simple graph; the resulting graph can have self-edges, or multiple edges between two vertices. In 2010, Bayati et al. [44] described an algorithm which uniformly samples from the space of simple graphs with a given degree distribution. We will refer to graphs sampled in this way as random degree-sequence graphs. Their utility lies in the fact that we can use them to control for the degree sequence when comparing graphs; they are used as a null model, similar to the uncorrelated random graph, but they can be tuned to share some structure (notably, the power-law degree distribution of preferential attachment) with the graphs to which they are compared.

The generative algorithm for this model is designed to sample from a uniform distribution over all possible graphs of a given size, conditioned upon the provided degree distribution. Their algorithm is fast, but not perfectly uniform; in [44] the authors prove that the distribution is asymptotically uniform, but do not prove results for finite graph size. We use this algorithm despite the fact that it does not sample the desired distribution in a truly uniform manner; the fact that the resulting graph is simple overcomes this drawback.
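NetworkX exposes an implementation of the Bayati et al. sampler, which can be used to build a degree-matched null model for, say, a preferential attachment sample; the parameters are illustrative, and the call may raise an exception if it fails to realize the sequence within the allotted tries.

```python
import networkx as nx

G_pa = nx.barabasi_albert_graph(n=500, m=4, seed=1)    # graph to be matched
seq = [d for _, d in G_pa.degree()]
# Approximately uniform sample over simple graphs with this degree sequence
G_null = nx.random_degree_sequence_graph(seq, seed=1, tries=100)
assert sorted(d for _, d in G_null.degree()) == sorted(seq)
```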

Lattice Graphs

In some of our experiments, we utilize lattice graphs. In particular, we use a two-dimensional rectangular lattice (a 10 by 10 lattice in our experiments; see Figure 6). Using such a predictable structure allows us to test our understanding of our distances; in particular, we can see if our distances respond as we expect to structural features that are present in the lattice. Empirical realizations of planar graphs, such as road networks, often exhibit lattice-like structure. The planar structure of the lattice allows for an intuitive understanding of the spectral features of the graph, as they approximate the normal vibrational frequencies of a two-dimensional surface.

Lattice graphs are highly regular, in the sense that the connectivity pattern of each (interior) vertex is identical. This regularity is reflected by the discrete nature of the lattice's spectrum, which can be seen in Figure 10. This is a particularly strong flavor of local structure, as it is not subject to the noise present in random graph models. This aspect allows us to probe the functioning of our distances when they are exposed to graphs with a high amount of inherent structure and very low noise.

Exponential Random Graph Models

A popular random graph model is the exponential random graph model, or ERGM for short. Although they are popular and enjoy simple interpretability, we do not use ERGMs in our experiments. Unlike some of our other models which are described by their generative mechanisms, these are described directly via the probability of observing a given graph .

In particular, let s_1(G), …, s_r(G) be some scalar graph properties (e.g. size, volume, or number of triangles) and let θ_1, …, θ_r be corresponding coefficients. Then, the ERGM assigns to each graph G a probability [45]

P(G) ∝ exp( Σ_{i=1}^{r} θ_i s_i(G) ).

This distribution can be sampled via a Gibbs sampling technique, a process which is outlined in detail in [45]. ERGMs show great promise in terms of flexibility and interpretability; one can seemingly tune the distribution towards or away from any given graph metric, including mean clustering, average path length, or even decay of the degree distribution.

However, our experience attempting to utilize ERGMs led us away from this approach. When sampling from ERGMs, we were unable to control properties individually to our satisfaction. In particular, we found that attempts to increase the number of triangles in a graph increased the graph volume; when we subsequently used the ERGM parameters to de-emphasize graph volume, the sampled graphs had an empirical distribution very similar to an uncorrelated random graph.

Evaluation of Distances on Random Graph Ensembles

We will now present the results of our numerical tests, which compare the effectiveness of the various distances in discerning between pairs of random graph ensembles. The discussion in this section will be brief; we reserve our interpretation of the results until the next section. The experiments are organized via the models being compared, with the performance of each distance shown in plots in each section. When appropriate, we also show the performance of the truncated λ_k distances for various k. Table 2 summarizes each comparison performed.

Description of Experiments

The experiments are designed to mimic a scenario in which a practitioner is attempting to determine whether a given graph belongs to a population or is an outlier relative to that population. In this vein, we perform experiments that determine how well each distance distinguishes populations drawn from a random graph model. In particular, we define two graph populations, which we will refer to as the null and alternative populations, respectively. For each distance measure d, let D₀ be the distribution of distances d(G, G′) where G and G′ are both drawn from the null population. Similarly, let D₁ be the distribution of distances d(G, G′), where G is drawn from the null population and G′ is drawn from the alternative population.

The distances in D₀ give a characteristic distance between members of the null population. The intuition here is that if the distribution of D₁ is well separated from that of D₀, then that distance is effective at separating the null population from the alternative population; if members of the alternative population are much further from members of the null population than this characteristic in-population distance, we can easily distinguish the two.

To that end, we normalize the statistics of D₁ by those of D₀ in order to compare. In particular, let μ_i be the sample mean of D_i, and let σ_i be the sample standard deviation, for i = 0, 1. Then, we examine the statistics of the scaled distances D̃₁, whose samples are calculated via

d̃ = (d − μ₀) / σ₀, for each sample d from D₁.    (7)

If our empirical distribution of D̃₁ is well separated from zero (i.e. the mean is significantly greater than the standard deviation), then the distance is effectively separating the null and alternative populations.

We generate 500 samples each of D₀ and D₁, where each sample compares two graphs of equal size, unless otherwise specified. The graphs are always connected; our sampler will discard a draw from a random graph distribution if the resulting graph is disconnected. Said another way, we draw from the distribution defined by the model, conditioning upon the fact that the graph is connected.

The small size of our graphs allows us to use larger sample sizes; although all of the matrix distances used have fast approximate algorithms available, we use the slower, often cubic-time, exact algorithms for our experiments, and so larger graphs would be prohibitively slow to work with. In all the experiments below, we choose our parameters so that the expected volume of the two models under comparison is equal.
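Schematically, each experiment can be organized as below; the function names are ours, and any of the distance sketches above can be passed in as dist.

```python
import numpy as np
import networkx as nx

def connected_sample(sampler):
    """Draw from a graph distribution, conditioning on connectedness."""
    while True:
        G = sampler()
        if nx.is_connected(G):
            return G

def scaled_distances(null_sampler, alt_sampler, dist, n_samples=500):
    """Samples of Equation (7): null-vs-alternative distances, centered and
    scaled by the mean and standard deviation of null-vs-null distances."""
    d0 = [dist(connected_sample(null_sampler), connected_sample(null_sampler))
          for _ in range(n_samples)]
    d1 = [dist(connected_sample(null_sampler), connected_sample(alt_sampler))
          for _ in range(n_samples)]
    return (np.array(d1) - np.mean(d0)) / np.std(d0)
```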

Sections Stochastic Blockmodel through Lattice Graph are separated by the models being compared. A very brief discussion of the results occurs alongside the presentation of the results, while a more thorough discussion is reserved for Section Discussion. The reader who wishes to primarily understand our observations and interpretation of the results can skip to Discussion and reference the above sections as necessary.

Experimental Results

Stochastic Blockmodel

In Figure 1, we see the results of a comparison between an uncorrelated random graph model and a stochastic blockmodel. For the uncorrelated random graph, the probability p of an edge existing is chosen to be a value for which the graph is almost always connected. (With these parameters, the empirically observed probability of generating a disconnected uncorrelated random graph is very small.) The preferential attachment section describes in more detail why this exact value is chosen. For the stochastic blockmodel, we have two communities, each of size n/2, with parameters p and q chosen so that the in-community connectivity is denser than the cross-community connectivity by a fixed factor.

Since we have volume-matched the graphs, the edit distance fails to distinguish the two models. Among the matrix distances, DeltaCon separates the two models most reliably. The adjacency and normalized Laplacian distances perform well, but the non-normalized Laplacian distance fails to distinguish the two models. The performance of the adjacency distance comes primarily from the second eigenvalue, and including further eigenvalues adds no benefit; the normalized Laplacian also shows most of its benefit in the second eigenvalue, but unlike the adjacency distance, including more eigenvalues decreases the performance of the metric.

Fig 1: Comparison of distance performance, with uncorrelated random graph as null model and stochastic blockmodel as alternative. Boxes extend from lower to upper quartile, with center line at median. Whiskers extend from 5th to 95th percentile.

Preferential Attachment vs Uncorrelated

Figure 2 shows the results of comparing a preferential attachment graph to an uncorrelated random graph. The preferential attachment graph is quite dense. Since the number of edges in this model is determined by the parameters m and n, we calculate the edge probability p for the uncorrelated graph by equating the expected volumes of the two models.

Again, due to matching the volumes of the graphs, the edit distance fails to distinguish the two models. The resistance distance shows mediocre performance, although 0 is outside the 95% confidence interval.

DeltaCon exhibits extremely high variability, although it has the highest median of the matrix distances.

The Laplacian distance outperforms all others, while the normalized Laplacian does not separate the two models at all. Figure 4 shows that most of the spectral information for the Laplacian is contained in the last few eigenvalues, counter to what one often expects from λ distances. For the adjacency distance, most of the information is held in the first eigenvalue, as the scaled distance stays more or less constant as one increases the number of eigenvalues compared (plot not shown).

Fig 2: Comparison of distance performance, with uncorrelated random graph as null model and preferential attachment as alternative. See Figure 1 for boxplot details.
Fig 3: Comparison of distance performance, with degree-sequence random graph as null model and preferential attachment as alternative. The degree sequence for each null matches that of the alternative. See Figure 1 for boxplot details.
Fig 4: Comparison of distance performance, with uncorrelated random graph as null model and preferential attachment as alternative. See Figure 1 for boxplot details.

Preferential Attachment vs Random Degree Distribution Graph

In addition to the comparison of preferential attachment and uncorrelated random graphs in Section Preferential Attachment vs Uncorrelated, we now compare preferential attachment to random degree-distribution graphs. Recall that for a given degree distribution, the random degree-distribution graph probability density is the uniform density over all simple graphs with the given degree distribution. We employ the algorithm of Bayati et al. [44] to sample from this distribution.

This experiment allows us to search for structure in the preferential attachment model that is not prescribed by the degree distribution. The discrepancy in effectiveness of the normalized versus non-normalized Laplacian distances in Section Preferential Attachment vs Uncorrelated suggests that much of the structural information that the Laplacian distance is using to discern the two models is contained in the degree distribution. None of the metrics have scaled distances well-separated from zero, which suggests that all significant structural features of the preferential attachment model are prescribed by the degree distribution.

Fig 5: Comparison of distance performance, with uncorrelated random graph as the null model, and a small-world graph as the alternative. See Figure 1 for boxplot details.
Fig 6: Comparison of distance performance, with a 10 by 10 2-dimensional lattice graph as the alternative model, and a random degree-distribution graph (with the same degree distribution as the lattice) as the null. See Figure 1 for boxplot details.

Watts-Strogatz

In this section, we compare a Watts-Strogatz random graph and an uncorrelated random graph. The Watts-Strogatz model is interesting in that it contains primarily local structure, in the form of a high local clustering coefficient (i.e. density of triangles).

The Watts-Strogatz model is sparse, and so our volume-matched null model has a low value of p and is thus very likely disconnected. This is only a significant problem for the resistance distance, which is undefined for disconnected graphs. To remedy this, we use an extension of the resistance distance called the renormalized resistance distance, which is developed and analyzed in [29]. This is the only experiment in which the use of this particular variant of the resistance distance is required.
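For reference, the following sketch computes plain effective resistances via the Moore-Penrose pseudoinverse of the Laplacian and compares two connected graphs entrywise; it does not reproduce the renormalized variant of [29], which modifies this computation to handle disconnected graphs.

    import numpy as np
    import networkx as nx

    def resistance_matrix(G):
        # Effective resistance via the pseudoinverse of the Laplacian;
        # valid only for connected graphs.
        Li = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))
        d = np.diag(Li)
        return d[:, None] + d[None, :] - 2 * Li   # R_uv = Li_uu + Li_vv - 2 Li_uv

    def resistance_distance(G1, G2):
        # Entrywise comparison of the two resistance matrices
        # (cf. the resistance perturbation metric of [11]).
        return np.abs(resistance_matrix(G1) - resistance_matrix(G2)).sum()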

In Figure 5 we see that the adjacency and normalized Laplacian spectral distances are the strongest performers. Amongst the matrix distances, DeltaCon strongly outperforms the resistance distance. The resistance distance here shows a negative median, which indicates smaller distances between populations than within the null population. This is likely due to the existence of many (randomly partitioned) disconnected components within this particular null model, which inflates the distances generated by the renormalized resistance distance. It is notable that, contrary to the comparison in Section Preferential Attachment vs Uncorrelated, the normalized Laplacian outperforms the non-normalized version of the same.

In Figure 7 we see the results for λ distances, for a wide variety of k. These results indicate that much of the information that the λ distances use to discern between the two models is contained in the higher eigenvalues, particularly for the adjacency and normalized Laplacian distances.

Fig 7: Comparison of λ distance performance, with uncorrelated random graph as the null model, and a small-world graph as the alternative. See Figure 1 for boxplot details.

Lattice Graph

For our final experiment we compare a lattice graph to a random degree-distribution graph with the same degree distribution. The lattice here is highly structured, while the random degree-distribution graph is quite similar to an uncorrelated random graph; both the deterministic degree distribution of the lattice and the binomial distribution of the uncorrelated random graph are highly concentrated around their mean.

We see that the scaled distances in this experiment are about an order of magnitude higher than in other experiments for some of the distances; because the lattice is such an extreme example of regularity, it is quite easy for many of the distances to discern between these two models. The resistance distance has the highest performance, while the spectral distances all perform equally well. Note that for a d-regular graph, L = dI − A and the normalized Laplacian equals I − A/d, so the eigenvalues of the adjacency, Laplacian, and normalized Laplacian matrices are all equivalent up to an overall scaling and shift; we would thus expect near-identical performance for graphs that are nearly regular.

Similarly to the Watts-Strogatz comparison in Section Watts-Strogatz, much of the information that the λ distances use to discern between the models is contained in the higher eigenvalues. This points to the importance of local structure in the lattice.

Fig 8: Comparison of λ distance performance, with a 10 by 10 2-dimensional lattice graph as the alternative model, and a random degree-distribution graph (with the same degree distribution as the lattice) as the null. See Figure 1 for boxplot details.
Section | Null | Alternative | Primary Structural Difference
Stochastic Blockmodel | G(n, p) | SBM | Community structure
Preferential Attachment vs Uncorrelated | G(n, p) | PA | High-degree vertices
Preferential Attachment vs Random Degree Distribution Graph | RDDG | PA | Structure not in degree distr.
Watts-Strogatz | G(n, p) | WS | Local structure
Lattice Graph | RDDG | Lattice | Extreme local structure
Table 2: Table of comparisons performed, and the important structural features therein. G(n, p) indicates the uncorrelated random graph, SBM is the stochastic blockmodel, PA is the preferential attachment model, RDDG is the random degree distribution graph, and WS is the Watts-Strogatz model.

Discussion

In this discussion, as we have done throughout the paper, we will emphasize a distinction between local and global graph structure. Global structures include community separation as seen in the stochastic blockmodel, while local structures include the high density of triangles in the Watts-Strogatz model.

In general, we find that when examining global structure, the adjacency spectral distance and DeltaCon both provide good performance. When examining community structure in particular, one need not employ the full spectrum when using a spectral distance. The fact that the spectrum of the graph provides a natural partitioning [15] aligns with our result that the first few eigenvalues provide sufficient differentiation when the number of communities is low.

When one is interested in both global and local structure, we recommend use of the adjacency spectral distance. When the full spectrum is employed, the adjacency spectral distance is effective at differentiating between models even if the primary structural differences occur on the local level (e.g. the Watts-Strogatz graph). The use of the entire spectrum here is essential; much of the most important information is contained in the tail of the distribution, and the utility of the adjacency spectral distance decreases significantly when only the dominant eigenvalues are compared.
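A minimal sketch of the adjacency spectral distance follows; the function name and the choice of the k largest algebraic eigenvalues as the "principal" ones are our own conventions, not a prescribed interface.

    import numpy as np
    import networkx as nx
    from scipy.sparse.linalg import eigsh

    def adjacency_lambda_dist(G1, G2, k):
        # Compare the k principal adjacency eigenvalues in the Euclidean norm.
        spectra = []
        for G in (G1, G2):
            A = nx.adjacency_matrix(G).astype(float)
            lam = eigsh(A, k=k, which='LA', return_eigenvectors=False)
            spectra.append(np.sort(lam)[::-1])
        return float(np.linalg.norm(spectra[0] - spectra[1]))

Setting k small focuses the comparison on global structure, while taking k near n (computed with a dense eigensolver such as numpy.linalg.eigvalsh) captures the local structure discussed below.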

It is important to remember that these experiments represent only one way that pairwise graph comparison might be used. In particular, we are here comparing a sample to a known population. Alternatively, one might also be interested in comparing a dynamic graph at adjacent time steps [46]; this scenario is treated empirically in Section Primary School Social Contact Data.

Discerning Global Structure

Across our models, we observe two significant and quite distinct types of global structure, illustrated in Figure 9. The first of these is the grouping of the graph into distinct communities. The stochastic blockmodel is of course the model which most clearly possesses this type of global structure. At the local level, the stochastic blockmodel is nearly identical to the uncorrelated random graph, and so we can use the results of Section Stochastic Blockmodel to understand how distances respond to this specific feature.

Fig 9: Two significant global structures observed in our experiments. On the left is the community structure typical of the stochastic blockmodel. On the right is the heavy-tailed degree distribution typical of the preferential attachment model.

The particular configuration of the stochastic blockmodel that we use has two partitions of equal size. We would thus expect the second eigenvalue λ_2 to be the primary distinguishing spectral feature of the graph (in any of the three matrix representations used). Indeed, we see in Figure 1 that this is the case, and that the use of additional eigenvalues beyond λ_2 only serves to decrease performance by including noise in the comparison. In Figure 10 we can directly observe the similarity in spectra between the two models, as well as the presence in the stochastic blockmodel of a second eigenvalue which separates from the bulk of the spectrum.
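The following quick numerical check, with illustrative parameters, shows the effect: the second adjacency eigenvalue of a two-block stochastic blockmodel separates from the bulk, while that of a volume-matched uncorrelated graph does not.

    import numpy as np
    import networkx as nx
    from scipy.sparse.linalg import eigsh

    n, p_in, p_out = 500, 0.10, 0.01   # illustrative parameters
    G_sbm = nx.stochastic_block_model([n // 2, n // 2],
                                      [[p_in, p_out], [p_out, p_in]])
    p_match = 2 * G_sbm.number_of_edges() / (n * (n - 1))   # volume-matched G(n, p)
    G_unc = nx.erdos_renyi_graph(n, p_match)

    for name, G in (("SBM", G_sbm), ("G(n,p)", G_unc)):
        A = nx.adjacency_matrix(G).astype(float)
        top3 = np.sort(eigsh(A, k=3, which='LA', return_eigenvectors=False))[::-1]
        print(name, top3)   # SBM: lambda_2 near (n/2)*(p_in - p_out), above the bulk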

The separation of the first eigenvalues from the semi-circular bulk spectrum of the stochastic blockmodel is studied analytically in [43]. The authors show that a graph spectrum can be thought of as having two distinct components: a continuous bulk, and discrete outliers, with the latter indicating community structure. This separation is what allows our distances to function effectively in detecting community structure. In general, the use of the spectrum for community partitioning in graphs has a long history [47]. Recently, Lee et al. [15] proved a performance bound on the effectiveness of using the first k eigenvectors to partition the graph into k clusters.

In [29], the authors study the performance of the resistance perturbation metric in the setting of a dynamic variant of the stochastic blockmodel. Although theirs is not the same scenario as ours, their result is closely connected, and conforms well with the data observed here. In particular, their result indicates that for the resistance metric to be effective in detecting topological changes in a stochastic blockmodel, the number of cross-community edges must be asymptotically dominated by the mean degree.

This is a highly restrictive condition. In the results shown in Section Stochastic Blockmodel, we see that the resistance metric performs poorly; auxiliary results (not shown) indicate that its performance increases significantly when the graph has very high in-community degree and very low cross-community connectivity. This unrealistic density requirement puts severe practical restrictions on the applicability of the resistance metric for detecting topological changes in community graphs. Furthermore, in these extreme cases, other measures (such as the spectral distances) can also easily distinguish between the two models.

The link between graph resistance and degree has been established in [48], where the authors show that the resistance between vertices u and v can be well approximated by

    r(u, v) ≈ 1/d_u + 1/d_v,
an approximation which suggests that fluctuations in degree distribution would result in significant fluctuations in the graph resistance. This is corroborated by the poor performance of the resistance distance in Section Stochastic Blockmodel. These results indicate that the resistance distance cannot “see” changes in global structure over local noise, unless the global structure is unrealistically stark (as in the asymptotic condition given in [29]).
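A small numerical check of this approximation on a dense uncorrelated random graph (illustrative parameters):

    import numpy as np
    import networkx as nx

    G = nx.erdos_renyi_graph(300, 0.2, seed=1)
    Li = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))
    deg = dict(G.degree())

    u, v = 0, 1
    r_exact = Li[u, u] + Li[v, v] - 2 * Li[u, v]   # effective resistance
    r_approx = 1 / deg[u] + 1 / deg[v]
    print(r_exact, r_approx)   # the two agree closely on a dense G(n, p)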

The second significant global structure seen in our models is the particular topology in which there are a small number of highly connected vertices which dominate the connectivity patterns of the graph. A small graph exhibiting this structure can be seen in Figure 9. This results in a heavy-tailed degree distribution. The random graph model which features this type of structure is the preferential attachment model, whose degree distribution exhibits polynomial decay in the tails [9].

The best tool for detecting this structure is the Laplacian spectral distance. The presence of the degree matrix in the Laplacian means that comparison of Laplacians is very effective for discerning between models with radically different degree distributions. Since the significant differences between the degree distributions of the preferential attachment and uncorrelated random graphs occur in the tail (i.e. high-degree vertices), the inclusion of the final few eigenvalues is essential if one wishes to use the Laplacian spectrum to perform this comparison.

Figure 10 exhibits the influence of the degree distribution on the Laplacian spectrum. We observe qualitatively, as demonstrated in [19], that the tail of the Laplacian spectrum of a preferential attachment graph exhibits polynomial decay similar to the tail of the degree distribution. This is a prime example of the way in which the spectrum of the Laplacian can be heavily influenced by the degree distribution.

The particular topology of the preferential attachment model differentiates itself from that of the uncorrelated random graph at both a global and a local level. Even though the Laplacian spectral distance is best at observing the significant effect of high-degree vertices on the model, it is not, all in all, the most efficient tool for differentiating the two. To understand this, let us now turn to further discussion of the local structure present in the preferential attachment model, as well as in the other models studied.

Fig 10: Spectral densities for various graph comparisons. Parameters used match those in Sections Stochastic Blockmodel through Lattice Graph. Densities are built from an ensemble of 1000 graphs. The Laplacian spectrum is shown for preferential attachment, while the adjacency spectrum is shown for all others. The uncorrelated random graph model in the lower left has a lower p than those in the upper row, resulting in a sharp peak at zero.

Impact of Local Structure

Local structure consists of features existing at the level of a single vertex, or of subgraphs on a small number of vertices. These local structures can provide important information about the topology of the graph, or they can amount to noise which obfuscates our ability to examine global structures of interest. Our experiments provide examples of both cases, which we now examine.

Consider first the results of Section Preferential Attachment vs Uncorrelated. In Figure 4, we see that the adjacency spectral distance differentiates between the two models based primarily on the first dominant eigenvalue. Recalling the interpretation of the adjacency spectrum provided by Maas [24] and reiterated in Section Spectral Distances, we realize that this is due to the high density of low-degree vertices in the preferential attachment model, compared to the uncorrelated random graph. This local structure is in some sense necessitated by the presence of a few very high degree vertices, since we demand that the graphs being compared are equal in expected volume. Indeed, the degree distribution of this model is so structurally significant that it almost entirely determines the structure of the resulting graph. We see in Figure 3 that no distance can effectively discern between a preferential attachment graph and a randomized graph with the same degree distribution.

The Watts-Strogatz graph is another example of a model whose signature lies primarily in local structure. Farkas et al. [19] argue that the presence of a high number of triangles is the distinguishing feature of a Watts-Strogatz graph, one which persists at rewiring probabilities p for which other structural aspects of the ring lattice (e.g. regularity and periodicity) have already vanished. The third moment of the spectral density of A gives the expected number of triangles in a graph (this is not hard to show; see e.g. [19], Sec. III A 1; more generally, the k-th moment of the density gives the expected number of closed paths of length k in the graph), and so one would expect inclusion of the full spectrum to be important in detecting the topological signature of this model. On a global scale, the model does not significantly differ from the uncorrelated random graph; highly connected vertices are extremely unlikely, and the generative rewiring mechanism does not produce communities in the graph.
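This spectral accounting of triangles is easy to verify numerically; the check below confirms that the triangle count equals tr(A^3)/6, the (unnormalized) third moment of the adjacency spectrum.

    import numpy as np
    import networkx as nx

    G = nx.watts_strogatz_graph(200, 6, 0.1)
    lam = np.linalg.eigvalsh(nx.adjacency_matrix(G).toarray().astype(float))

    n_tri_spectral = (lam ** 3).sum() / 6              # tr(A^3)/6
    n_tri_direct = sum(nx.triangles(G).values()) / 3   # each triangle counted at 3 vertices
    print(round(n_tri_spectral), round(n_tri_direct))  # identical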

We see in Figure 7 that inclusion of the large-index (high-frequency) eigenvalues is essential to differentiating between the models. In Figure 10 we see that the spectral density of the Watts-Strogatz model exhibits high skewness, which reflects the high expected number of triangles in the graph and is only captured by inclusion of the full spectral bulk.

The lattice graph is an extreme example of this kind of local structure. Similarly to the Watts-Strogatz model, there is a ubiquity of a certain type of local structure in the graph, namely the presence of four-edge loops. We see in Figure 8 that including a large number of eigenvalues in spectral comparison greatly increases the efficacy of the spectral distances. However, since the lattice is so remarkably regular (unlike the Watts-Strogatz model, which is a perturbed ring lattice), even comparing only a few principal eigenvalues is sufficient to differentiate it from a randomized graph with the same degree distribution. The spectra of the two models are shown in Figure 10.

Local structure is sometimes important when understanding graph structure, but can also frequently serve as a source of uninformative noise when comparing graphs. The results of Section Stochastic Blockmodel illustrate this fact. Looking to Figure 1, we see that the Laplacian spectrum is unable to distinguish between the stochastic blockmodel and the uncorrelated random graph, while the normalized Laplacian distinguishes them well. The difference between these two matrix representations is that normalization removes degree information, which is not informative in this particular model.

We see a similar problem arise when we apply the resistance distance to the stochastic blockmodel; as discussed in the previous section, the resistance distance is disproportionately influenced by local structure, and is unable to discern the global structure of the graph over local fluctuations. Interestingly, DeltaCon does not appear to suffer from local fluctuations as much as the resistance distance. This could be due to the structure of the matrix that DeltaCon uses to represent the graph, or to the use of the Matusita distance rather than the ℓ1 or ℓ2 norm to compare the resulting matrices (for more discussion of this, see Sections 2.2 and 3.1 in [5]).

It is essential to determine whether local topological features are of interest in the comparison problem at hand; inclusion of locally targeted distance measures can hinder performance in cases where local structure is noisy and uninformative. However, if local structure is ignored entirely, one may omit essential structural information about the graphs under comparison.

Recommendations

Throughout our experiments, the most consistent observation is that the adjacency spectral distance shows high effectiveness in discerning between a variety of models. We see in Section Stochastic Blockmodel that it is able to perceive global community structure and not be overwhelmed by local fluctuations in degree, but Sections Preferential Attachment vs Uncorrelated and Watts-Strogatz show that it is by no means ignorant of local structure present in a graph. That is to say, the adjacency spectral distance is multiscale; the scale of interest can be chosen by tuning the number of principal eigenvalues included in the comparison.

Spectral distances exhibit practical advantages over matrix distances, as they can inherently compare graphs of different sizes and can compare graphs without known vertex correspondence. The adjacency spectrum in particular is well understood, and is perhaps the most frequently studied graph spectrum; see e.g. [19, 41]. Finally, fast, stable eigensolvers for symmetric matrices are ubiquitous in modern computing packages such as ARPACK, NumPy, and Matlab, allowing for rapid deployment of models based on spectral graph comparison. (The Python library NetComp further simplifies the application of these tools to practical problems; see the appendix for more details.) Furthermore, randomized algorithms for matrix decomposition allow for highly parallelizable calculation of the spectra of large graphs [34].

However, the utility of the adjacency spectral distance is not general enough to simply apply it to any given graph matching or anomaly detection problem in a naive manner. A prudent practitioner would combine exploratory structural analysis of the graphs in question with an ensemble approach in which multiple distance measures are considered simultaneously, and the resulting information is combined to form a consensus. Such systems are commonplace in problems of classification in machine learning, where they are sometimes known as “voting classifiers” (see e.g. [49]).

As we have said before, we have been comparing graphs of equal volume (in expectation). In situations where the graph volume varies drastically, the process of choosing a graph comparison tool may differ significantly. We will address this in Section Primary School Social Contact Data, where we deal with graphs that exhibit significant volume fluctuations.

Applications to Empirical Data

Random graph models are often designed to simulate a single important feature of empirical networks, such as clustering in the Watts-Strogatz model or the high-degree vertices of the preferential attachment model. In empirical graphs, these factors coexist in an often unpredictable configuration, along with significant amounts of noise. Although the above analysis of the efficacy of various distances on random graph scenarios can help inform and guide our intuition, to truly understand their utility we must also look at how they perform when applied to empirical graph data.

In this section, we will examine the performance of our distances in two scenarios. First, we will look at an anomaly detection scenario for a dynamic social-contact graph, collected via RFID tags in a French primary school [50]. Second, we will look at a graph matching problem in neuroscience, comparing correlation graphs of brain activity in subjects with and without autism spectrum disorder [51].

The first experiment suggests that the tools that perform the most consistently in the graph matching applications (the spectral distances) are unreliable in our anomaly detection experiment. It is also interesting insofar as the graphs exhibit significant volume fluctuations, which was a factor not present in our numerical studies.

In the second experiment, we see that none of our graph distances fully distinguish between the two populations. Signal-to-noise is a ubiquitous problem in analyzing actual graph data, and is particularly notable when building connectivity networks of human brain activity (see e.g. [52]). Accordingly, the results of our data experiments show that in the presence of real-world noise levels, many of these distances fail to distinguish subtle structural differences. In the face of this, we examine more targeted analysis techniques which may be applied in such a situation.

Primary School Social Contact Data

Some of the most well-known empirical network datasets reflect social connective structure between individuals, often in online social network platforms such as Facebook and Twitter. These networks exhibit structural features such as communities and highly connected vertices, and can undergo significant structural changes as they evolve in time. Examples of such structural changes include the merging of communities, or the emergence of a single user as a connective hub between disparate regions of the graph.

In this section, we investigate a social contact network, which is based on measurements of face-to-face contact using RFID tags. We use our distances to compare the graph at subsequent time steps. This is quite a different scenario from that presented in Section Evaluation of Distances on Random Graph Ensembles; the most important difference is that there is a natural sense of vertex correspondence, because the students' labels persist over time. This change has significant implications for the performance of our various distances, which we explore in the discussion below.

Description of Experiment

The data are part of a study of face-to-face contact between primary school students [50]. Briefly, RFID tags were used to record face-to-face contact between students in a primary school in Lyon, France in October 2009. Events punctuate the school day of the children (see Table 3) and lead to fundamental topological changes in the contact network (see Fig. 11). The school is composed of ten classes: each of the five grades (1 to 5) is divided into two classes (see Fig. 11).

Time Event
10:30 a.m. – 11:00 a.m. Morning Recess
12:00 p.m. – 1:00 p.m. First Lunch Period
1:00 p.m. – 2:00 p.m. Second Lunch Period
3:30 p.m. – 4:00 p.m. Afternoon Recess
Table 3: Events that punctuate the school day.

The construction of a dynamic graph proceeds as follows: time series of edges that correspond to face-to-face contact describe the dynamics of the pairwise interactions between students. We divide the school day into time intervals of equal length Δt, and denote by t_1, …, t_T the corresponding temporal grid. For each t_k we construct an undirected, unweighted graph G(t_k), where the nodes correspond to the students in the 10 classes, and an edge is present between two students i and j if they were in contact (according to the RFID tags) during the time interval [t_k, t_k + Δt).
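A sketch of this construction, assuming the contact data arrives as (timestamp, student, student) triples; the record format and names here are hypothetical:

    import networkx as nx

    def contact_graphs(records, students, t0, dt, n_steps):
        # records: iterable of (t, i, j) contact events (hypothetical format).
        graphs = [nx.empty_graph(students) for _ in range(n_steps)]  # fixed vertex set
        for t, i, j in records:
            k = int((t - t0) // dt)
            if 0 <= k < n_steps:
                graphs[k].add_edge(i, j)   # unweighted: repeated contacts add nothing
        return graphs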

Fig 11: Snapshots of the face-to-face contact network at times surrounding significant topological changes (shown below each graph): 9:00 a.m., 10:20 a.m., 10:50 a.m., 10:57 a.m., 11:57 a.m., 12:13 p.m., 12:54 p.m., 1:46 p.m., 2:00 p.m., and 2:03 p.m.

Changes in the graph topology during the school day are quantified by computing, for each distance measure d, the time series

    d_k = d(G(t_k), G(t_{k+1})),   k = 1, …, T − 1.

To help compare these distances with one another, we normalize each by its sample mean d̄, defining

    d̃_k = d_k / d̄.
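In code, the normalized series might be computed as follows (a sketch; the distance function is any of the measures above):

    import numpy as np

    def normalized_distance_series(graphs, dist):
        # dist is any pairwise graph distance; d_k compares subsequent time steps.
        d = np.array([dist(graphs[k], graphs[k + 1]) for k in range(len(graphs) - 1)])
        return d / d.mean()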

For the purpose of this work, we think of each class as a community of connected students; classes are weakly connected to one another (e.g., see Fig. 11 at times 9:00 a.m. and 2:03 p.m.). During the school day, events such as lunchtime and recess trigger significant increases in the number of links between the communities, and disrupt the community structure; see Fig. 11 at times 11:57 a.m. and 1:46 p.m.

Fig 12: Comparison of distance performance on primary school data set. All distances are normalized by their sample mean.

Discussion

In Figure 12, we see the normalized time series for each of the distance measures studied. Interestingly, the matrix distances all achieve passable performance, while NetSimile and the spectral distances are far too noisy to be of any practical use. As we see in Figure 11, the main structural changes that the graph undergoes are transitions into and out of a strong ten-community structure that reflects the classrooms of the school. For example, the adjacency matrix begins as (mostly) block-diagonal at 9:00 a.m., but has significant off-diagonal elements by morning recess at 10:20 a.m., and is no longer (block-)diagonally dominant come the lunch period at 12:00 p.m.

These structural changes are of a global nature. In Section Stochastic Blockmodel we saw that the spectral distances were more effective than the matrix distances at detecting differences in community structure between graphs. Why is this not the case here? The graphs are persistent, in the sense that the vertices show a natural correspondence, which the matrix distances exploit. For example, we know that certain edges (those between classes) are “cross-community,” and so the presence of these edges suggests some anomalous topology. This is why even the edit distance is quite effective at detecting topological changes in the graph.

Now let us highlight certain interesting features of the comparison shown in the top plot in Figure 12. Amongst the matrix distances, the resistance distance shows the largest distance at 10:20 a.m., followed by the smallest distance at the subsequent time step. During recess, local changes occur in the graph, but the global structure remains mostly constant: the graph consists of strongly connected communities, with some connections between them. The fact that the resistance distance shows the lowest distance between time steps within recess suggests that, of the matrix distances, it is least affected by this local variation in the graph.

The lunch periods are marked by a stark change in graph topology. The graph undergoes significant global structural transformation, becoming almost entirely unordered with respect to class communities. Again the resistance distance shows a more significant signal at the beginning of this change, and a smaller signal as the anomalous topology persists.

The next significant transformation the graph undergoes is the transition out of lunch periods. The resistance distance stands out as marking this transition most pronouncedly, although all distances show significantly smaller distances between the time steps during the afternoon class period compared to during the lunch period.

Unlike our numerical experiments above, the graphs being compared here show significant fluctuations in volume. However, these fluctuations do not align with our event markers to the extent that our matrix distances do. This indicates that our matrix distances are picking up more than simply changes in volume.

The most remarkable conclusion of this particular experiment is that although the spectral distances are very efficient and stable for the purposes of graph matching, they show very poor performance in anomaly detection on dynamic graphs. This is due to the inherent vertex correspondence that is automatically provided when comparing a dynamic graph at subsequent time steps. Although we reflect on the subtle distinctions in performance between the three matrix distances, they all show very similar overall performance, and any one of them would be sufficient for application in this scenario.

Brain Connectomics of Autism Spectrum Disorder

Graph theoretical analysis of the connective structure of the human brain is a popular research topic, and has benefited from our growing ability to analyze network topology [53, 1]. In these graph representations of the brain, the vertices are physical regions of the brain, and the edges indicate the connectivity between two regions. The connective structure of the brain is examined either at the “structural” level, in which edges represent anatomical connection between two regions, or at the “functional” level, in which an edge connects regions whose activation patterns are in some sense similar.

Psychological conditions such as Alzheimer's disease [54], autism spectrum disorder [55], and schizophrenia [56] have been shown to have structural correlates in the graph representations of the brains of those affected. In this section, we will focus on autism spectrum disorder, or ASD. The availability of high-quality, open-access preprocessed data [57] makes ASD a particularly attractive choice for researchers with little experience implementing the nuanced preprocessing pipelines seen in neuroimaging. We examine which, if any, of our graph distances are able to effectively discern between subjects with ASD and subjects who are typically developing (TD). This is a problem in graph matching, very similar in structure to the experiments done on random graph models above.

We will see that our distances are ineffective at discerning between the graphs arising from ASD subjects and TD subjects. This result agrees with the conclusion of a recent review, which finds that classification methods do not generalize well to novel data [58]. The negative outcome of this experiment both informs an understanding of the limitations of generalized tools such as our graph distances, and points to possible refinements of these tools.

Description of the Data

The Autism Brain Imaging Data Exchange, or ABIDE, is an aggregation of brain-imaging data sets from laboratories around the world which study the neurophysiology of ASD [51]. The data that we focus on are measurements of the activity level in various regions of the brain, measured via functional magnetic resonance imaging (fMRI). The fMRI method actually measures blood oxygen levels in the brain, which are then used as a proxy for activation levels. Measurements are taken over myriad small volumes within the brain, preprocessed, and then aggregated into a much smaller collection of time series, each representing a distinct region of the brain.

These time series then pass through an extensive preprocessing pipeline, which includes myriad steps such as nuisance signal (e.g. heartbeat) removal, detrending, smoothing via band-pass filtration, and so on. A detailed assessment of the preprocessing steps can be found in [57]. After preprocessing, the data is analyzed for quality. Of the original 1114 subjects (521 ASD and 593 TD), only 871 pass this quality-assurance step. These subjects are then spatially aggregated via the Automated Anatomical Labelling (AAL) atlas, which aggregates the spatial data into 116 time series.

To construct a graph from these time series, the pairwise Pearson correlation is calculated to measure similarity. If we let i and j denote two regions in the AAL atlas and let ρ_ij denote the Pearson correlation between the corresponding time series, the simplest way to build a graph is to assign weights w_ij = |ρ_ij|, so the weight between vertices i and j is the modulus of the correlation. One may wish to exclude particularly low correlations, as these are often spurious and not informative as to the structure of the underlying network. In this case, one chooses a threshold τ and assigns weights via

    w_ij = |ρ_ij| if |ρ_ij| ≥ τ, and w_ij = 0 otherwise.

Finally, one may wish to binarize the graph, so that w_ij ∈ {0, 1}, and thus uses the formula

    w_ij = 1 if |ρ_ij| ≥ τ, and w_ij = 0 otherwise.

We will compare both binary and weighted connectomes, generated at multiple thresholding levels. This will allow us to be confident that our results are not artifacts of poorly chosen parameters in our definition of the connectome graph.
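A sketch of these three constructions, starting from a correlation matrix ρ (the function and argument names here are our own):

    import numpy as np

    def connectome(rho, tau=None, binarize=False):
        # rho: e.g. a 116 x 116 matrix of Pearson correlations between AAL regions.
        W = np.abs(rho)
        np.fill_diagonal(W, 0)            # no self-loops
        if tau is not None:
            W = np.where(W >= tau, W, 0)  # drop weak, likely spurious correlations
        if binarize:
            W = (W > 0).astype(float)     # keep only the connectivity pattern
        return W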

Discussion

Figure 13 shows the results of our experiment comparing connectomes of TD and ASD subjects. The TD subjects play the role of the null population, the ASD subjects constitute the alternative population, and the scaled distances shown in the plot are calculated in the manner outlined in Section Evaluation of Distances on Random Graph Ensembles.

Fig 13: Distance comparison for ABIDE autism data set, for various thresholding configurations.

We observe that no distance effectively separates the two communities, regardless of the level of thresholding or the presence or absence of binarization. Indeed, the negative median of the scaled distance indicates that distances from ASD to TD connectomes are lower than distances between two TD connectomes, which suggests higher structural variability within the TD population. As we will see below, the structural differences between the two communities are localized within subgraphs, and do not persist throughout the full graph. Furthermore, the signal produced by these differences is not easily differentiated from the local variations (i.e. noise) present in the communities. For these two reasons, global comparison using graph metrics is ineffective for this problem.

Figure 14 shows a region-by-region comparison of connectomes of ASD and TD subjects. Similarly to our previous scaling, we take the mean and variance (region-wise) of the correlations in TD subjects, and then use these to normalize the correlations of ASD subjects. Thus, a value of 0.25 in Figure 14 indicates that ASD subjects show, on average, correlations 0.25 standard deviations above the mean, relative to TD subjects.
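A sketch of this normalization, assuming the absolute correlations for each population are stacked into three-dimensional arrays (names hypothetical):

    import numpy as np

    def region_zscores(asd, td):
        # asd, td: (n_subjects, n_regions, n_regions) arrays of absolute correlations.
        mu, sigma = td.mean(axis=0), td.std(axis=0)
        # 0.25 here means a quarter of a TD standard deviation above the TD mean.
        return (asd.mean(axis=0) - mu) / sigma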


Fig 14: Absolute correlations between regions in ASD subjects. Data are normalized by mean and variance of absolute correlations between TD subjects.

We see that certain regions show significant differences. A closer examination shows that ASD subjects are generally underconnected in regions 73 through 77, and overconnected in regions 79 and 84 through 89. Table 4 shows the specific anatomical regions to which these labels correspond.

Label Region Connection
73 L. Putamen Underconnected
74 R. Putamen Underconnected
75 L. Globus Pallidus Underconnected
76 R. Globus Pallidus Underconnected
77 L. Thalamus Underconnected
79 R. Transverse Temporal Gyrus Overconnected
84 R. Superior Temporal Lobe Overconnected
85 L. Middle Temporal Gyrus Overconnected
86 R. Middle Temporal Gyrus Overconnected
87 L. Middle Temporal Pole Overconnected
88 R. Middle Temporal Pole Overconnected
89 L. Inferior Temporal Gyrus Overconnected
Table 4: Regions that show notably anomalous connectivity patterns. Correspondence between labels and regions is established via the Automated Anatomical Labelling atlas [59].

Figure 14 indicates that there are in fact significant structural differences between the connectomes of TD and ASD subjects. However, the differences barely stand out above the noise in the graph; all of the edge differences in Figure 14 are less than half a standard deviation away from the mean. Furthermore, the differences occur in isolated regions of the graph, and the majority of the edge weights do not show statistically significant differences between the two populations. Both the low amplitude and the small extent of the signal contribute to the difficulty graph distances have in discerning between TD and ASD subjects.

In [60], the authors find very little that distinguishes the connectomes of ASD subjects from those of TD subjects, save for lower betweenness centrality in the right lateral parietal region. Similarly, the authors of [58] assert that although classification algorithms show "modest to conservatively good" accuracy rates, they perform poorly when tested on novel data sets. Looking in particular at [61], we see that the authors are able to achieve just over 75% accuracy in their classification, but they do so both by preprocessing the connectomes via particular regions of interest and by testing a smorgasbord of classifiers (9 different models are used) to find the one which shows the highest performance. Such a process shows that reasonable accuracy can be achieved via careful algorithmic tuning; conversely, our result shows that naive application of a graph-theoretic method does not provide accurate classification.

A complete explanation of the failure of connectomes to unambiguously characterize ASD is well beyond the scope of this work; however, we will highlight a few interesting possibilities. In [61], the authors suggest that the poor performance of many classifiers may be due to the inclusion of uninformative features. Said another way, regions of the brain that have no bearing on the presence or absence of ASD are included in the connectome, which reduces the signal-to-noise ratio of the data. In [58], the authors raise the issue of the mental state in which the scans are performed. This state is often referred to as a "resting state," but variations in instructions (i.e. eyes open vs. closed) can have strong effects on the resulting data [62]. In [63], the authors find that ASD connectomes show lower connectivity than TD connectomes when the subject is engaged in exteroceptive attentional tasks, but higher connectivity during introspective attentional tasks. This contrast highlights the need for careful control of the subjects' mental state if meaningful comparisons between connectomes are to be made.

It should also be noted that the choice of Pearson correlation to compare the fMRI time series is not obviously the correct one. It has recently been shown that the time series of brain activity exhibit nontrivial lag structure [64], indicating the need for a more general method of time series comparison. Popular methods such as Granger causality [65] and mutual information can be applied to this problem; indeed, mutual information analysis has already shown promise as a tool in the pre-processing and feature selection stages of connectome analysis [66].

Conclusion

We have studied the efficacy of various graph distance measures when they are used to differentiate between popular random graph models, as well as in empirical anomaly detection and graph classification scenarios. These measures are understood through a multi-scale lens, in which the impacts of global and local structures are considered separately. Although recent work [67] has called into question the previously assumed ubiquity of some of these models, studying their properties builds qualitative understanding and guides intuition when examining empirical network datasets.

Throughout our graph matching experiments in Section Evaluation of Distances on Random Graph Ensembles, we find that the adjacency spectral distance is the most stable, in the sense that it exhibits good performance across a variety of scenarios. It exhibits an ability to perceive both global and local structure, while avoiding becoming overwhelmed by local fluctuations in graph topology. Although the various matrix distances we examine allow for elegant analysis [5, 11], we find their performance on random graph model comparison underwhelming.

The situation reverses when we look at dynamic anomaly detection in Section Primary School Social Contact Data. In this scenario, the matrix distances proved most effective, and showed clear indications of the ground-truth anomalies present in the data. The spectral distances, on the other hand, were so noisy as to be useless. When doing anomaly detection on a dynamic graph, the two graphs in comparison tend to share more edges than in graph matching scenarios, which may contribute to the good performance of our matrix distances. Although the graphs in this section fluctuate in volume, we do not find that these fluctuations are helpful in detecting anomalous time steps.

Finally, we explore a collection of human connectomes of subjects with and without autism spectrum disorder. We observe that although differences between the two populations are observable via statistical comparison of edge weights, no graph distance effectively separates the two populations. This experiment helps us understand the limitations of such generalized tools. We conclude that either more targeted tools are necessary, or more careful data collection and preprocessing is needed to establish a dataset that is separable via classification algorithms.

Based on the results of our numerical and empirical data experiments, we provide a suggested decision process in Figure 15. If the graphs to be compared exhibit differences in volume or size, then these should be examined first to see if they hold predictive power, as they are simple and efficient to compute. If they prove ineffective, then one must consider the setting. In a dynamic setting, in which a dynamic graph is compared at subsequent time steps, we recommend the matrix distances, based on the results of Section Primary School Social Contact Data. If one is comparing graphs to determine whether a sample belongs to a given population, then the adjacency spectral distance is the most reliable, as Section Evaluation of Distances on Random Graph Ensembles demonstrates. Finally, if none of these approaches gives adequate performance, then a more targeted analysis must be performed, such as the edge-wise statistical comparison of weights in Figure 14. The particular design of such an analysis is domain-specific and highly dependent upon the nature of the data.

Fig 15: Flow chart summarizing the suggested decision process for applying distance measures in empirical data.

Notation

For reference, in Table 5 we provide a table of notation used throughout the paper.

G | Graph
V | Vertex set, taken to be {1, …, n}
E | Edge set, a subset of V × V
w | Weight function on the edge set
n | Size of the graph, n = |V|
m | Number of edges, m = |E|
d_v | Degree of vertex v
D | Degree matrix (diagonal)
d(·, ·) | Distance function
A | Adjacency matrix
L | Laplacian matrix, L = D − A
𝓛 | Normalized Laplacian matrix (symmetric)
λ_i(A) | i-th eigenvalue of the adjacency matrix
λ_i(L) | i-th eigenvalue of the Laplacian matrix
λ_i(𝓛) | i-th eigenvalue of the normalized Laplacian matrix
𝒢_0, 𝒢_1 | The null and alternative populations of graphs
G | Sample graph from a population
𝒟_0 | Distribution of distances between graphs in the null population
d_0 | Sample from 𝒟_0
𝒟_1 | Distribution of distances between graphs from the null and alternative populations
d_1 | Sample from 𝒟_1
𝒟̃_1 | Distribution 𝒟_1 normalized via (7)
d̃_1 | Sample from 𝒟̃_1
G(n, p) | Uncorrelated random graph with parameters n and p
(n, p, q) | Parameters for the stochastic blockmodel
(n, ℓ) | Parameters for the preferential attachment model, with ℓ < n
(n, k, p) | Parameters for the Watts-Strogatz graph, with k even
Table 5: Table of commonly used notation.

NetComp: Network Comparison in Python

NetComp is a Python library which implements the graph distances studied in this work. Although many useful tools for network construction and analysis are available in the well-known NetworkX [38], advanced algorithms such as spectral comparisons and DeltaCon are not present. NetComp is designed to bridge this gap.

Design Considerations

The guiding principles behind the library are

  1. Speed. The library implements algorithms that run in linear or near-linear time, and are thus applicable to large graph data problems. (See below regarding the implementation of exact and approximate forms of DeltaCon and the resistance distance.)

  2. Flexibility. The library uses the adjacency matrix as its fundamental object. This matrix can be represented in either a dense (NumPy matrix) or sparse (SciPy sparse matrix) format. Using such a ubiquitous format as the fundamental object allows easy input of graph data from a wide variety of sources.

  3. Extensibility. The library is written so as to be easily extended by anyone wishing to do so. The included graph distances will hopefully be only the beginning of a full library of efficient modern graph comparison tools that will be implemented within NetComp.

NetComp is available via the Python Package Index, which is most frequently accessed via the command-line tool pip. The user can install it locally via the shell command

    pip install netcomp
As of writing, the library is in alpha. The approximate (near-linear) forms of DeltaCon and the resistance distance are not yet included in the package; both algorithms have a quadratic-time exact form, which is implemented. Those interested can view the source code and contribute at https://www.github.com/peterewills/netcomp.
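A typical session might look like the following sketch; the function names follow the project README at the time of writing, and the repository should be consulted for the current interface.

    import networkx as nx
    import netcomp as nc

    G0, G1 = [nx.erdos_renyi_graph(100, 0.1) for _ in range(2)]
    A0, A1 = [nx.adjacency_matrix(G) for G in (G0, G1)]

    print(nc.lambda_dist(A0, A1, k=10, kind='adjacency'))  # spectral (lambda) distance
    print(nc.deltacon0(A0, A1))                            # exact DeltaCon
    print(nc.resistance_distance(A0, A1))                  # exact resistance distance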

Acknowledgments

F.G.M. was supported by the National Science Foundation (CCF/CIF 1815971), and by a Jean d'Alembert Fellowship.

Author contributions

Writing – Original Draft: P.W.; Writing – Review & Editing: P.W. and F.G.M.; Conceptualization: P.W. and F.G.M.; Investigation: P.W. and F.G.M.; Methodology: P.W. and F.G.M; Formal Analysis: P.W. and F.G.M.; Software: P.W.

References

  •  1. Bullmore E, Sporns O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience. 2009;10:186–198.
  •  2. Pasqualetti F, Dörfler F, Bullo F. Attack Detection and Identification in Cyber-Physical Systems. IEEE Transactions on Automatic Control. 2013;58(11):2715–2729.
  •  3. Myers SA, Sharma A, Gupta P, Lin J. Information Network or Social Network?: The Structure of the Twitter Follow Graph. In: Proceedings of the 23rd International Conference on World Wide Web. ACM; 2014. p. 493–498. Available from: http://doi.acm.org/10.1145/2567948.2576939.
  •  4. Garroway CJ, Bowman J, Carr D, Wilson PJ. Applications of graph theory to landscape genetics. Evolutionary Applications. 2008;1(4):620–630.
  •  5. Koutra D, Shah N, Vogelstein JT, Gallagher B, Faloutsos C. DeltaCon: Principled Massive-Graph Similarity Function with Attribution. ACM Transactions on Knowledge Discovery from Data (TKDD). 2016;10(3):28.
  •  6. Cook DJ, Holder LB, editors. Mining Graph Data. Wiley; 2006.
  •  7. Donnat C, Holmes S, et al. Tracking network dynamics: A survey using graph distances. The Annals of Applied Statistics. 2018;12(2):971–1012.
  •  8. Akoglu L, Tong H, Koutra D. Graph based anomaly detection and description: a survey. Data Mining and Knowledge Discovery. 2015;29(3):626–688.
  •  9. Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–512. doi:10.1126/science.286.5439.509.
  •  10. Watts DJ, Strogatz SH. Collective dynamics of ’small-world’ networks. Nature. 1998;393(6684):440–442. doi:10.1038/30918.
  •  11. Monnig ND, Meyer FG. The resistance perturbation distance: A metric for the analysis of dynamic networks. Discrete Applied Mathematics. 2018;236:347 – 386. doi:https://doi.org/10.1016/j.dam.2017.10.007.
  •  12. Erdős P, Rényi A. On Random Graphs I. Publicationes Mathematicae. 1959;6:290–297.
  •  13. Wilson RC, Zhu P. A study of graph spectra for comparing graphs and trees. Pattern Recognition. 2008;41(9):2833 – 2841. doi:https://doi.org/10.1016/j.patcog.2008.03.011.
  •  14. Rudin W. Functional Analysis. International series in pure and applied mathematics. McGraw-Hill; 1991. Available from: https://books.google.com/books?id=Sh_vAAAAMAAJ.
  •  15. Lee JR, Gharan SO, Trevisan L. Multiway Spectral Partitioning and Higher-Order Cheeger Inequalities. J ACM. 2014;61(6):37:1–37:30.
  •  16. Haemers WH, Spence E. Enumeration of cospectral graphs. European Journal of Combinatorics. 2004;25(2):199 – 211.
  •  17. Schwenk AJ. Almost all trees are cospectral. New directions in the theory of graphs. 1973; p. 275–307.
  •  18. Bhamidi S, Evans SN, Sen A. Spectra of Large Random Trees. Journal of Theoretical Probability. 2012;25(3):613–654. doi:10.1007/s10959-011-0360-9.
  •  19. Farkas IJ, Derényi I, Barabási AL, Vicsek T. Spectra of “real-world” graphs: Beyond the semicircle law. Physical Review E. 2001;64(2). doi:10.1103/PhysRevE.64.026704.
  •  20. Wigner EP. On the Distribution of the Roots of Certain Symmetric Matrices. Annals of Mathematics. 1958;67(2):325–327.
  •  21. O’Rourke S. Gaussian Fluctuations of Eigenvalues in Wigner Random Matrices. Journal of Statistical Physics. 2010;138(6):1045–1066.
  •  22. Chung FRK. Spectral Graph Theory. American Mathematical Society; 1997.
  •  23. Friedman J, Tillich JP. Wave equations for graphs and the edge-based Laplacian. Pacific Journal of Mathematics. 2004;216(2):229–266.
  •  24. Maas C. Computing and interpreting the adjacency spectrum of traffic networks. Journal of Computational and Applied Mathematics. 1985;12-13(Supplement C):459 – 466. doi:https://doi.org/10.1016/0377-0427(85)90039-1.
  •  25. Moore EF. The shortest path through a maze. Proceedings of an International Symposium on the Theory of Switching. 1959; p. 285–292.
  •  26. Ellens W, Spieksma FM, Mieghem PV, Jamakovic A, Kooij RE. Effective graph resistance. Linear Algebra and its Applications. 2011;435(10):2491 – 2506. doi:https://doi.org/10.1016/j.laa.2011.02.024.
  •  27. Haveliwala TH. Topic-sensitive PageRank: a context-sensitive ranking algorithm for Web search. IEEE Transactions on Knowledge and Data Engineering. 2003;15(4):784–796.
  •  28. Goddard W, Oellermann OR. In: Distance in Graphs. Birkhäuser Boston; 2011. p. 49–72. Available from: https://doi.org/10.1007/978-0-8176-4789-6_3.
  •  29. Wills P, Meyer FG. Detecting Topological Changes in Dynamic Community Networks. CoRR. 2017;abs/1707.07362.
  •  30. Koutra D, Ke TY, Kang U, Chau DHP, Pao HKK, Faloutsos C. Unifying Guilt-by-Association Approaches: Theorems and Fast Algorithms. In: Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg; 2011. p. 245–260.
  •  31. Berlingerio M, Koutra D, Eliassi-Rad T, Faloutsos C. NetSimile: A Scalable Approach to Size-Independent Network Similarity. CoRR. 2012;abs/1209.2684.
  •  32. van den Heuvel MP, Sporns O, Collin G, et al. Abnormal rich club organization and functional brain dynamics in schizophrenia. JAMA Psychiatry. 2013;70(8):783–792. doi:10.1001/jamapsychiatry.2013.1328.
  •  33. Papadimitriou CH. Computational Complexity. In: Encyclopedia of Computer Science. John Wiley and Sons Ltd.; 2003. p. 260–265. Available from: http://dl.acm.org/citation.cfm?id=1074100.1074233.
  •  34. Halko N, Martinsson P, Tropp J. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. SIAM Review. 2011;53(2):217–288.
  •  35. Bollobás B. Random Graphs. Cambridge University Press; 2001.
  •  36. Zhou D, Huang J, Schölkopf B. Learning from Labeled and Unlabeled Data on a Directed Graph. In: Proceedings of the 22nd International Conference on Machine Learning. New York, NY, USA: ACM; 2005. p. 1036–1043.
  •  37. Chung F. Laplacians and the Cheeger Inequality for Directed Graphs. Annals of Combinatorics. 2005;9(1):1–19. doi:10.1007/s00026-005-0237-z.
  •  38. Hagberg AA, Schult DA, Swart PJ. Exploring network structure, dynamics, and function using NetworkX. In: Proceedings of the 7th Python in Science Conference (SciPy2008). Pasadena, CA USA; 2008. p. 11–15.
  •  39. Abbe E, Bandeira AS, Hall G. Exact recovery in the stochastic block model. IEEE Transactions on Information Theory. 2016;62(1):471–487.
  •  40. Yule GU. A Mathematical Theory of Evolution, Based on the Conclusions of Dr. J. C. Willis, F.R.S. Philosophical Transactions of the Royal Society B. 1925;213:402–410.
  •  41. Flaxman A, Frieze A, Fenner T. In: High Degree Vertices and Eigenvalues in the Preferential Attachment Graph. Springer Berlin Heidelberg; 2003. p. 264–274. Available from: https://doi.org/10.1007/978-3-540-45198-3_23.
  •  42. Bender EA, Canfield ER. The asymptotic number of labeled graphs with given degree sequences. Journal of Combinatorial Theory, Series A. 1978; p. 296–307.
  •  43. Zhang X, Nadakuditi RR, Newman MEJ. Spectra of random graphs with community structure and arbitrary degrees. Phys Rev E. 2014;89:042816.
  •  44. Bayati M, Kim JH, Saberi A. A Sequential Algorithm for Generating Random Graphs. Algorithmica. 2010;58(4):860–910. doi:10.1007/s00453-009-9340-1.
  •  45. Lusher D, Koskinen J, Robins G, editors. Exponential Random Graph Models for Social Networks: Theory, Methods, and Applications. Cambridge University Press; 2012.
  •  46. Holme P, Saramäki J. Temporal networks. Physics Reports. 2012;519(3):97 – 125. doi:https://doi.org/10.1016/j.physrep.2012.03.001.
  •  47. Von Luxburg U. A tutorial on spectral clustering. Statistics and computing. 2007;17(4):395–416.
  •  48. Von Luxburg U, Radl A, Hein M. Hitting and commute times in large random neighborhood graphs. The Journal of Machine Learning Research. 2014;15(1):1751–1798.
  •  49. Roli F, Giacinto G, Vernazza G. In: Methods for Designing Multiple Classifier Systems. Springer Berlin Heidelberg; 2001. p. 78–87. Available from: https://doi.org/10.1007/3-540-48219-9_8.
  •  50. Stehlé J, Voirin N, Barrat A, Cattuto C, Isella L, Pinton JF, et al. High-resolution measurements of face-to-face contact patterns in a primary school. PloS one. 2011;6(8):e23176.
  •  51. Di Martino A, Yan CG, Li Q, Denio E, Castellanos FX, Alaerts K, et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry. 2013;19:659–667.
  •  52. Burgess GC, Kandala S, Nolan D, Laumann TO, Power JD, Adeyemo B, et al. Evaluation of Denoising Strategies to Address Motion-Correlated Artifacts in Resting-State Functional Magnetic Resonance Imaging Data from the Human Connectome Project. Brain Connectivity. 2016;6(9):669–680.
  •  53. Sporns O, Chialvo DR, Kaiser M, Hilgetag CC. Organization, development and function of complex brain networks. Trends in Cognitive Sciences. 2004;8(9):418 – 425. doi:https://doi.org/10.1016/j.tics.2004.07.008.
  •  54. Supekar K, Menon V, Rubin D, Musen M, Greicius MD. Network Analysis of Intrinsic Functional Brain Connectivity in Alzheimer’s Disease. PLOS Computational Biology. 2008;4(6):e1000100–.
  •  55. Subbaraju V, Suresh MB, Sundaram S, Narasimhan S. Identifying differences in brain activities and an accurate detection of autism spectrum disorder using resting state functional-magnetic resonance imaging : A spatial filtering approach. Medical Image Analysis. 2017;35:375–389.
  •  56. Fornito A, Zalesky A, Pantelis C, Bullmore ET. Schizophrenia, neuroimaging and connectomics. NeuroImage. 2012;62(4):2296–2314. doi:https://doi.org/10.1016/j.neuroimage.2011.12.090.
  •  57. Craddock C, Benhajali Y, Chu C, Chouinard F, Evans A, Jakab A, et al. The Neuro Bureau Preprocessing Initiative: open sharing of preprocessed neuroimaging data and derivatives. Frontiers in Neuroinformatics. 2013;(41).
  •  58. Hull JV, Jacokes ZJ, Torgerson CM, Irimia A, Van Horn JD. Resting-State Functional Connectivity in Autism Spectrum Disorders: A Review. Frontiers in Psychiatry. 2016;7:205.
  •  59. Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, et al. Automated Anatomical Labeling of Activations in SPM Using a Macroscopic Anatomical Parcellation of the MNI MRI Single-Subject Brain. NeuroImage. 2002;15(1):273–289. doi:https://doi.org/10.1006/nimg.2001.0978.
  •  60. Redcay E, Moran J, Mavros P, Tager-Flusberg H, Gabrieli J, Whitfield-Gabrieli S. Intrinsic functional network organization in high-functioning adolescents with autism spectrum disorder. Frontiers in Human Neuroscience. 2013;7:573.
  •  61. Plitt M, Barnes KA, Martin A. Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards. NeuroImage: Clinical. 2015;7:359–366. doi:https://doi.org/10.1016/j.nicl.2014.12.013.
  •  62. Tang YY, Rothbart MK, Posner MI. Neural correlates of establishing, maintaining, and switching brain states. Trends in Cognitive Sciences. 2012;16(6):330–337.
  •  63. Barttfeld P, Wicker B, Cukier S, Navarta S, Lew S, Leiguarda R, et al. State-dependent changes of connectivity patterns and functional brain network topology in autism spectrum disorder. Neuropsychologia. 2012;50(14):3653–3662. doi:https://doi.org/10.1016/j.neuropsychologia.2012.09.047.
  •  64. Mitra A, Snyder AZ, Constantino JN, Raichle ME. The Lag Structure of Intrinsic Activity is Focally Altered in High Functioning Adults with Autism. Cerebral Cortex. 2017;27(2):1083–1093.
  •  65. Marinazzo D, Pellicoro M, Stramaglia S. Kernel Method for Nonlinear Granger Causality. Phys Rev Lett. 2008;100:144103. doi:10.1103/PhysRevLett.100.144103.
  •  66. Michel V, Damon C, Thirion B. Mutual information-based feature selection enhances fMRI brain activity classification. In: 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro; 2008. p. 592–595.
  •  67. Broido AD, Clauset A. Scale-free networks are rare. Nature communications. 2019;10(1):1017.