Identifying networks with common organizational principles

Many complex systems can be represented as networks, and the problem of network comparison is becoming increasingly relevant. There are many techniques for network comparison, from simply comparing network summary statistics to sophisticated but computationally costly alignment-based approaches. Yet it remains challenging to accurately cluster networks that are of a different size and density, but hypothesized to be structurally similar. In this paper, we address this problem by introducing a new network comparison methodology that is aimed at identifying common organizational principles in networks. The methodology is simple, intuitive and applicable in a wide variety of settings ranging from the functional classification of proteins to tracking the evolution of a world trade network.

1 Introduction

Many complex systems can be represented as networks, including friendships, the World Wide Web, global trade flows and protein-protein interactions [newmanbook]. The study of networks has been a very active area of research in recent years, and in particular, network comparison has become increasingly relevant, see e.g. [wilson2008study, netal, 2014waqar, 2014yaveroglu]. Network comparison itself has many wide-ranging applications, for example, comparing protein-protein interaction networks could lead to increased understanding of underlying biological processes [2014waqar]. Network comparison can also be used to study the evolution of networks over time and to identify sudden changes and shocks.

Network comparison methods have attracted increasing attention in the field of machine learning, where they are mostly referred to as graph kernels, and have numerous applications in personalized medicine, e.g. [borgdisease], computer vision and drug discovery, e.g. [NCI]. In the machine learning setting, the problem of interest is to obtain classifiers that can accurately predict the class membership of graphs.

Methods for comparing networks range from comparison of summary statistics to sophisticated but computationally expensive alignment-based approaches [migraal, netal, sana]. Real-world networks can be very large and are often inhomogeneous, which makes the problem of network comparison challenging, especially when networks differ significantly in terms of size and density. In this paper, we address this problem by introducing a new network comparison methodology that is aimed at comparing networks according to their common organizational principles.

The observation that the degree distribution of many real world networks is highly right skewed and in many cases approximately follows a power law has been very influential in the development of network science [barabasi1999emergence]. Consequently, it has become widely accepted that the shape of the degree distribution (for example, binomial vs power law) is indicative of the generating mechanism underlying the network. In this paper, we formalize this idea by introducing a measure that captures the shape of distributions. The measure emerges from the requirement that a metric between forms of distributions should be invariant under rescalings and translations of the observables. Based on this measure, we then introduce a new network comparison methodology, which we call NetEmd.

Although our methodology is applicable to almost any type of feature that can be associated to nodes or edges of a graph, we focus mainly on distributions of small connected subgraphs, also known as graphlets. Graphlets form the basis of many of the state of the art network comparison methods [2007gdda, 2014waqar, 2014yaveroglu] and hence using graphlet based features allows for a comparative assessment of the presented methodology. Moreover, certain graphlets, called network motifs [2002milo], occur much more frequently in many real world networks than is expected on the basis of pure chance. Network motifs are considered to be basic building blocks of networks that contribute to the function of the network by performing modular tasks and have therefore been conjectured to be favoured by natural selection. This is supported by the observation that network motifs are largely conserved within classes of networks [milo2004superfamilies, wegner].

Our methodology provides an effective tool for comparing networks even when they differ significantly in size and density, which is the case in most applications. The methodology performs well on a wide variety of networks, ranging from chemical compounds with as few as 10 nodes to internet networks with tens of thousands of nodes. The method achieves state of the art performance even when it is based on rather restricted sets of inputs that can be computed efficiently, and hence scales favourably to networks with millions and even billions of nodes. The method also behaves well under network sub-sampling [SS], as described in Appendix D. The methodology further meets the needs of researchers from a variety of fields, from the social sciences to the biological and life sciences, by being computationally efficient and simple to implement.

We test the presented methodology in a large number of settings, starting with clustering synthetic and real world networks, where we find that it outperforms state of the art graphlet-based network comparison methods in clustering networks of different sizes and densities. We then test the more fine grained properties of NetEmd using data sets that represent evolving networks at different points in time. Finally, we test whether NetEmd can predict functional categories of networks by exploring machine learning applications, and find that classifiers based on NetEmd outperform state-of-the-art graph classifiers on several benchmark data sets.

2 A measure for comparing shapes of distributions

Here we build on the idea that the information encapsulated in the shape of the degree distribution and other network properties reflects the topological organization of the network. From an abstract point of view we think of the shape of a distribution as a property that is invariant under linear deformations, i.e. translations and re-scalings of the axis. For example, a Gaussian distribution always has its characteristic bell curve shape regardless of its mean and standard deviation. Consequently, we postulate that any metric that aims to capture the similarity of shapes should be invariant under linear transformations of its inputs.

Based on these ideas we define the following measure between distributions p and q that are supported on the real line and have non-zero, finite variances:

EMD*(p, q) = inf_c EMD( p̃(· + c), q̃(·) ),     (1)

where EMD denotes the earth mover's distance, and p̃ and q̃ are the distributions obtained by rescaling p and q to have variance 1. More precisely, p̃ is the distribution obtained from p by the transformation x → x/σ(p), where σ(p) is the standard deviation of p. Intuitively, EMD (also known as the first Wasserstein metric [emd1998]) can be thought of as the minimal work, i.e. mass times distance, needed to “transport” the mass of one distribution onto the other. For probability distributions p and q with support in the real line and bounded absolute first moment, the EMD between p and q is given by EMD(p, q) = ∫ |F_p(x) − F_q(x)| dx, where F_p and F_q are the cumulative distribution functions of p and q, respectively.
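This closed form makes the one-dimensional EMD inexpensive to evaluate. A minimal sketch of the computation (in Python; the helper name is illustrative and not part of the released package):

```python
from scipy.stats import wasserstein_distance


def emd_1d(u_values, v_values, u_weights=None, v_weights=None):
    """1-D earth mover's distance (first Wasserstein metric), i.e. the
    integral of |F_u(x) - F_v(x)| over the real line."""
    return wasserstein_distance(u_values, v_values, u_weights, v_weights)


# Toy check: two unit point masses a distance 3 apart have EMD equal to 3.
print(emd_1d([0.0], [3.0]))  # -> 3.0
```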

In principle, EMD in Equation (1) can be replaced by almost any other probability metric to obtain a corresponding measure of shape similarity. Here we choose EMD because it is well suited to comparing shapes, as shown by its many applications in the area of pattern recognition and image retrieval [emd1998]. Moreover, we found that EMD produces superior results to classical alternatives such as the Kolmogorov distance, especially for the highly irregular distributions that one frequently encounters in real world networks.

For two networks G and G' and a given network feature t, we define the corresponding measure by:

NetEmd_t(G, G') = EMD*( p_t(G), p_t(G') ),     (2)

where p_t(G) and p_t(G') are the distributions of t on G and G' respectively. NetEmd_t can be shown to be a pseudometric between graphs for any feature t (see Sec B), that is, it is non-negative, symmetric and satisfies the triangle inequality. Figure 1 gives examples where t is taken to be the degree, so that p_t(G) is the degree distribution of G.

Figure 1: Plots of rescaled and translated degree distributions for Barabási-Albert (BA) and Erdős-Rényi (ER) models with varying numbers of nodes and average degrees: a) BA vs BA, b) ER vs ER, c) BA vs ER. The distances between the degree distributions of two BA or two ER models with quite different numbers of nodes and average degrees are smaller than the distance between the degree distributions of a BA and an ER model with equal numbers of nodes and average degrees.
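The following is a minimal sketch of such a single-feature comparison with the degree distribution as the feature, working directly on degree samples rather than on the histogram-based implementation described in Appendix A; the helper names and model parameters are illustrative.

```python
import networkx as nx
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import wasserstein_distance


def emd_star(x, y):
    """EMD* of Eq. (1): rescale both samples to unit variance, then minimise
    the 1-D EMD over translations of one of them (a sketch; distributions
    with zero variance are not handled here, see Appendix C)."""
    x = np.asarray(x, dtype=float) / np.std(x)
    y = np.asarray(y, dtype=float) / np.std(y)
    return minimize_scalar(lambda c: wasserstein_distance(x + c, y),
                           method="brent").fun


def netemd_degree(G1, G2):
    """NetEmd_t of Eq. (2) with t = degree distribution."""
    return emd_star([d for _, d in G1.degree()], [d for _, d in G2.degree()])


# BA vs BA with different sizes/densities is closer than BA vs ER (cf. Figure 1).
ba_small, ba_large = nx.barabasi_albert_graph(2000, 5), nx.barabasi_albert_graph(4000, 10)
er = nx.gnm_random_graph(2000, 5 * 2000)
print(netemd_degree(ba_small, ba_large), netemd_degree(ba_small, er))
```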

Measures that are based on the comparison of multiple features can be expected to be more effective at identifying structural differences between networks than measures that are based on a single feature, because for two networks to be considered similar they must show similarity across multiple features. Hence, for a given set T of network features, we define the measure corresponding to T simply as:

NetEmd_T(G, G') = (1/|T|) Σ_{t ∈ T} NetEmd_t(G, G').     (3)

Although NetEmd can in principle be based on any set of network features to which one can associate distributions, we initially consider only features that are based on distributions of small connected subgraphs, also known as graphlets. Graphlets form the basis of many state of the art network comparison methods and hence allow for a comparative assessment of the proposed methodology.

First, we consider graphlet degree distributions (GDDs) [gdda] as our set of features. For a given graphlet H, the graphlet degree of a node is the number of induced subgraphs isomorphic to H that are attached to the node. One can distinguish between the different positions the node can have in H, which correspond to the automorphism orbits of H, see Figure 2. For graphlets up to size 5 there are 73 such orbits. We initially take the set of 73 GDDs corresponding to graphlets up to size 5 to be the default set of inputs, for which we denote the metric as NetEmd-G5.
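In practice the 73 graphlet degrees are computed with a dedicated enumeration algorithm such as ORCA (see Appendix A). Purely as an illustration, the orbit degrees for the graphlets on two and three nodes (orbits 0-3 in Figure 2) can be obtained directly, e.g. with networkx; the function name is illustrative.

```python
import networkx as nx


def orbit_degrees_up_to_size_3(G):
    """Graphlet degrees for orbits 0-3: degree (orbit 0), end of a 2-path
    (orbit 1), centre of a 2-path (orbit 2) and triangle (orbit 3)."""
    tri = nx.triangles(G)  # number of triangles each node participates in
    orbits = {}
    for v in G:
        deg = G.degree(v)
        orbit0 = deg
        # Orbit 2: v is the centre of an induced path on 3 nodes, i.e. a pair
        # of neighbours of v that are not connected to each other.
        orbit2 = deg * (deg - 1) // 2 - tri[v]
        # Orbit 1: v is an end point of an induced path on 3 nodes.
        orbit1 = sum(G.degree(u) - 1 for u in G[v]) - 2 * tri[v]
        orbit3 = tri[v]
        orbits[v] = (orbit0, orbit1, orbit2, orbit3)
    return orbits


print(orbit_degrees_up_to_size_3(nx.karate_club_graph())[0])
```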

Later we also explore alternative definitions of subgraph distributions based on ego networks, as well as the effect of varying the size of subgraphs considered in the input. Finally, we consider the eigenvalue spectra of the graph Laplacian and the normalized graph Laplacian as inputs.

Figure 2: Graphlets on two to four nodes. The different shades in each graphlet represent different automorphism orbits, numbered from 0 to 14.

3 Results

In order to give a comparative assessment of NetEmd, we compare it to other graphlet based network comparison methods, namely GDDA [gdda], GCD [2014yaveroglu] and Netdis [2014waqar]. These represent the most effective alignment-free network comparison methodologies in the existing literature. While GDDA directly compares distributions of graphlet degrees for graphlets up to size 5 in a pairwise fashion, GCD is based on comparing rank correlations between graphlet degrees. Here we consider both default settings of GCD [2014yaveroglu], namely GCD-11, which is based on a non-redundant subset of 11 graphlet orbits for graphlets up to size 4, and GCD-73, which uses all orbits of graphlets up to size 5. Netdis differs from GDDA and GCD in that it is based on subgraph counts in ego-networks of nodes. Another important distinction is that Netdis first centers these raw counts by comparing them to the counts that could be expected under a particular null model before computing the final statistics. In our analysis, we consider two null models: an Erdős-Rényi random graph and a duplication divergence [DD1] graph, which has a scale-free degree distribution as well as a high clustering coefficient. We denote these two variants as Netdis-ER and Netdis-DD, respectively.

3.1 Clustering synthetic and real world networks

We start with the classical setting of network comparison where the task is to identify groups of structurally similar networks. The main challenge in this setting is to identify structurally similar networks even though they might differ substantially in terms of size and density.

Given a set of networks consisting of disjoint classes, one would like a network comparison measure to position networks from the same class closer to each other than networks from other classes. Given a network G, this can be measured in terms of the empirical probability P(G) that d(G, G1) < d(G, G2), where G1 is a randomly selected network from the same class as G (excluding G itself), G2 is a randomly selected network from outside the class of G, and d is the network comparison statistic. Consequently, the performance over the whole data set is measured in terms of the average of P(G) over all networks in the data set, which we denote by P̄. It can be shown that P̄ is equivalent to the average area under the receiver operating characteristic curve of a classifier that, for a given network G, classifies the nearest neighbours of G with respect to d as being similar to G. Hence, a measure that positions networks randomly has an expected P̄ of 0.5, whereas P̄ = 1 corresponds to perfect separation between classes. Other measures are discussed in the Appendix. Conclusions reached in this paper hold regardless of which performance measure one uses.
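A sketch of this performance measure, computed from a precomputed matrix of pairwise distances and class labels (ties are counted as 1/2, in line with the AUC interpretation; the function name is illustrative):

```python
import numpy as np


def average_p(D, labels):
    """Average over networks G of the empirical probability that a same-class
    network is closer to G than an other-class network (sketch)."""
    labels = np.asarray(labels)
    n = len(labels)
    scores = []
    for i in range(n):
        same = D[i, (labels == labels[i]) & (np.arange(n) != i)]
        diff = D[i, labels != labels[i]]
        if len(same) == 0 or len(diff) == 0:
            continue  # skip networks without same- or other-class partners
        wins = (same[:, None] < diff[None, :]).sum()
        ties = (same[:, None] == diff[None, :]).sum()
        scores.append((wins + 0.5 * ties) / (len(same) * len(diff)))
    return float(np.mean(scores))
```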

We first test NetEmd on synthetic networks corresponding to realizations of eight random graph models, namely Erdős-Rényi random graphs [1960er], the Barabási-Albert preferential attachment model [barabasi1999emergence], two duplication divergence models [DD1, DD2], the geometric gene duplication model [2008higham], 3D geometric random graphs [2003penrose], the configuration model [1995molloy], and Watts-Strogatz small world networks [1998watts] (see Sec F.1 in the Appendix for details).

For synthetic networks we consider three experimental settings of increasing difficulty, starting with the task of clustering networks that have the same size and average degree according to generating mechanism - a task that is relevant in a model selection setting. For this we generate 16 data sets, corresponding to different combinations of network size and average degree, each containing 10 realizations per model, i.e. 80 networks. This is an easier problem than clustering networks of different sizes and densities, and in this setting we find that the scores (see Figure 3(c)) of the top performing measures tend to be within one standard deviation of each other. We find that NetEmd is among the measures that achieve the highest scores.

Having established that NetEmd is able to differentiate networks according to generating mechanism, we move on to the task of clustering networks of different sizes and densities. For this we generate two data sets: one in which the size and average degree are increased independently in linear steps to twice their initial value, and one in which the size and average degree are increased independently in multiples of 2 up to 8 times their initial value. In the latter, the number of nodes and the average degrees of the networks both vary by one order of magnitude, and therefore clustering according to model type is challenging. Both data sets contain 10 realizations per model-parameter combination. Finally, we consider a data set consisting of networks from 10 different classes of real world networks (RWN), as well as a data set from [2014waqar] that consists of real world and synthetic networks from the larger collection compiled by Onnela et al. [onnela].

(a) Heatmap of NetEmd for the data set with sizes and densities varying up to 2x.
(b) Heatmap of the second best performing measure for the same data set.

Dataset
Synthetic (16 fixed-size sub-data sets) 0.997±0.003 0.981±0.013 0.986±0.011 0.992±0.012 0.996±0.005 0.952±0.056
Synthetic (sizes and densities up to 2x) 0.988 0.897 0.919 0.976 0.976 0.956
Synthetic (sizes and densities up to 8x) 0.925 0.790 0.800 0.872 0.861 0.812
RWN 0.942 0.898 0.866 0.898 0.906 0.745
Onnela et al. 0.890 0.832 0.809 0.789 0.819 0.783

(c) P̄ values for different network measures on data sets of synthetic and real world networks.
Figure 3: a) and b) show the heatmaps of pairwise distances on the data set with sizes and densities varying up to 2x according to NetEmd and the second best performing measure, respectively. In the heat maps, networks are ordered from top to bottom by model, average degree and node count. The heatmap of NetEmd shows eight clearly identifiable blocks on the diagonal corresponding to the different generative models, while the heatmap of the second best measure shows signs of off-diagonal mixing. c) P̄ values for various comparison measures on data sets of synthetic and real world networks. For the first synthetic data set we calculated the value of P̄ for each of the 16 sub-data sets; the table shows the average and standard deviation of the values obtained over these 16 sub-data sets.

We find that NetEmd outperforms the other methods at clustering networks of different sizes and densities on all data sets. The difference can also be seen in the heatmaps of NetEmd and the second best performing method, given in Figures 3(a) and 3(b). While the heatmap of NetEmd shows eight clearly identifiable blocks on the diagonal corresponding to the different generative models, the heatmap of the second best performing method shows signs of off-diagonal mixing. The difference in performance becomes even more pronounced on the more challenging data sets, i.e. the data set in which sizes and densities vary by an order of magnitude (see Fig. 6 in the Appendix) and the Onnela et al. data set.

3.2 Time ordered networks

A network comparison measure should ideally not only be able to identify groups of similar networks but should also capture structural similarity at a finer, local scale. To study the behavior of NetEmd at this more local level, we consider data sets that represent a system measured at different points in time. Since such networks can be assumed to evolve gradually over time, they offer an ideal setting for testing the local properties of network comparison methodologies.

We consider two data sets, named AS-caida and AS-733 [as], that represent the topology of the internet at the level of autonomous systems and a third data set that consists of bilateral trade flows between countries for the years 1962–2014 [feenstra2005world, comtrade]. Both edges and nodes are added and deleted over time in all three data sets. As was noted in [as] the time ranking in evolving networks is reflected to a certain degree in simple summary statistics. Hence, recovering the time ranking of evolving networks should be regarded as a test of consistency rather than an evaluation of performance.

In order to minimize the dependence of our results on the algorithm that is used to rank networks, we consider four different ways of ranking networks based on their pairwise distances, as follows. We assume that either the first or the last network in the time series is given. Rankings are then constructed in a step-wise fashion: at each step one either adds the network that is closest to the last added network (Algorithm 1), or adds the network that has the smallest average distance to all the networks in the ranking constructed so far (Algorithm 2). The performance of a measure in ranking networks is then measured in terms of Kendall's rank correlation coefficient between the true time ranking and the best ranking obtained by any of the four methods.
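A sketch of Algorithm 1 and of the Kendall's tau evaluation, assuming a precomputed distance matrix whose row/column order is the true time order (Algorithm 2 differs only in how the next network is selected):

```python
import numpy as np
from scipy.stats import kendalltau


def rank_from_anchor(D, start):
    """Algorithm 1 (sketch): repeatedly append the unranked network that is
    closest to the most recently added one."""
    ranking, remaining = [start], set(range(D.shape[0])) - {start}
    while remaining:
        nxt = min(remaining, key=lambda j: D[ranking[-1], j])
        ranking.append(nxt)
        remaining.remove(nxt)
    return ranking


def best_kendall_tau(D):
    """Kendall's tau between the true time order 0..n-1 and the better of the
    rankings anchored at the first and at the last network."""
    n = D.shape[0]
    taus = []
    for start in (0, n - 1):
        r = rank_from_anchor(D, start)
        if start == n - 1:
            r = r[::-1]  # anchor at the end, then read the ranking backwards
        taus.append(kendalltau(np.arange(n), np.argsort(r)).correlation)
    return max(taus)
```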

(a) Heatmap of NetEmd for AS-733.
(b) Heatmap of NetEmd for AS-caida.
(c) Heatmap of NetEmd for the world trade networks.

Dataset
AS-733 0.874 0.867 0.933 0.763 0.770 0.740
AS-caida 0.890 0.844 0.849 0.897 0.878 0.870
World Trade 0.821 0.666 0.388 0.380 0.567 0.649

(d) Kendall's rank correlation between the true time ranking and rankings inferred from network comparison methodologies.
Figure 4: (a), (b) & (c) Heatmaps of NetEmd for networks representing the internet at the level of autonomous systems and for world trade networks. The date of measurement increases from left to right / top to bottom. NetEmd accurately captures the evolution over time in all three data sets by positioning networks that are close in time closer to each other, resulting in a clear signal along the diagonal. (d) Kendall's rank correlation coefficient between the true time ranking and rankings inferred from different network comparison measures.

We find that NetEmd successfully recovers the time ordering for all three data sets, as can be seen in the time ordered heatmaps given in Figure 4, which all show clear groupings along the diagonal. The red regions in the two internet data sets correspond to outliers, which can also be identified as sudden jumps in summary statistics, e.g. the number of nodes. The two large clusters in the heatmap of the world trade networks (Figure 4(c)) coincide with a change in the data gathering methodology in 1984 [feenstra2005world]. Although NetEmd comes second to other measures on AS-733 and on AS-caida, it has the highest overall score and is the only measure that achieves consistently high scores on all three data sets.

3.3 NetEmd based on different sets of inputs

We examine the effect of reducing the size of the graphlets considered in the input of NetEmd, which is also relevant from a computational point of view, since enumerating graphlets up to size 5 can be challenging for very large networks. We consider variants based on the graphlet degree distributions of graphlets up to size 3 and 4, which we denote as NetEmd-G3 and NetEmd-G4. We also consider NetEmd-DD, which is based only on the degree distribution, as a baseline. Results are given in Table 1.

We find that reducing the size of graphlets from 5 to 4 does not significantly decrease the performance of NetEmd and actually produces better results on three data sets (including RWN and Onnela et al.). Even when based only on graphlets up to size 3, i.e. just edges, 2-paths and triangles, NetEmd outperforms all of the non-NetEmd methods that we tested on at least 6 out of 8 data sets.

Given that the complexity of enumerating graphlets up to size k in a network on n nodes having maximum degree d is O(n d^(k-1)), NetEmd-G4 offers an optimal combination of performance and computational efficiency in most cases. The even less computationally costly NetEmd-G3 scales favourably even to networks with billions of edges, for which enumerating graphlets of size 4 can be computationally prohibitive. This opens the door to comparing very large networks which are outside the reach of current methods, while still retaining state of the art performance. Furthermore, the NetEmd measures perform well under sub-sampling of nodes [SS] (see Appendix D), which can be leveraged to further improve computational efficiency.

We find that in some cases restricting the set of inputs actually leads to an increase in the performance of NetEmd. This indicates that not all graphlet distributions are equally informative in all settings [2017_maugis]. Consequently, identifying (learning) which graphlet distributions contain the most pertinent information for a given task might lead to significant improvements in performance. Such generalizations can be incorporated into NetEmd in a straightforward manner, for instance by modifying the sum in Equation (3) to incorporate weights. NetEmd is ideally suited for such metric learning [metriclearning] type generalizations since it constructs an individual distance for each graphlet distribution. Moreover, such single feature measures are in many cases highly informative even on their own. For instance NetEmd-DD, which only uses the degree distribution, individually outperforms the non-NetEmd measures we tested on more than half the data sets we considered.

We also considered counts of graphlets up to size 4 in 1-step ego networks of nodes [2014waqar] as an alternative way of capturing subgraph distributions, for which we denote the measure as NetEmd-E4. Although we find that NetEmd-E4 achieves consistently high scores, the variants based on graphlet degree distributions tend to perform better on most data sets.

Finally, we consider spectral distributions of graphs as a possible alternative to graphlet based features. The spectra of various graph operators are closely related to topological properties of graphs [chung1997spectral, mohar1991laplacian, banerjee2008spectrum] and have been widely used to characterize and compare graphs [gu2016spectral, wilson2008study]. We used the spectra of the graph Laplacian and the normalized graph Laplacian as inputs for NetEmd, for which we denote the measure as NetEmd-S. For a given graph the Laplacian is defined as L = D − A, where A is the adjacency matrix of the graph and D is the diagonal matrix whose diagonal entries are the node degrees. The normalized Laplacian is defined as L' = D^(−1/2) L D^(−1/2). Given the eigenvalue distributions of L and L', we define NetEmd-S to be the measure of Equation (3) based on these two spectral distributions.
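A sketch of the spectral variant (dense eigendecompositions, so only practical for moderately sized networks; EMD* is computed as in the Section 2 sketch):

```python
import networkx as nx
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import wasserstein_distance


def emd_star(x, y):
    """EMD* of Eq. (1) on raw samples (as in the Section 2 sketch)."""
    x = np.asarray(x, dtype=float) / np.std(x)
    y = np.asarray(y, dtype=float) / np.std(y)
    return minimize_scalar(lambda c: wasserstein_distance(x + c, y),
                           method="brent").fun


def laplacian_spectra(G):
    """Eigenvalues of the Laplacian L = D - A and of the normalised Laplacian,
    the two feature distributions of the spectral NetEmd variant."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    L_norm = nx.normalized_laplacian_matrix(G).toarray().astype(float)
    return np.linalg.eigvalsh(L), np.linalg.eigvalsh(L_norm)


def netemd_spectral(G1, G2):
    """Average of the EMD* distances between the two spectral distributions."""
    s1, t1 = laplacian_spectra(G1)
    s2, t2 = laplacian_spectra(G2)
    return 0.5 * (emd_star(s1, s2) + emd_star(t1, t2))
```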

We find that NetEmd-S in general performs better at clustering random graphs of different sizes and densities than graphlet based network comparison measures. However, on the RWN and Onnela et al. data sets graphlet based measures tend to perform better than the spectral variant, which can be attributed to the prevalence of network motifs in real world networks, giving graphlet based measures an advantage. The spectral variant is also outperformed at time ordering the evolving network data sets, which in turn might be a result of the sensitivity of graph spectra to small changes in the underlying graph [wilson2008study].

Data set NetEmd-G3 NetEmd-G4 NetEmd-E4 NetEmd-S NetEmd-DD
Synthetic (16 fixed-size sub-data sets) 0.989±0.008 0.995±0.005 0.993±0.004 0.992±0.007 0.957±0.024
Synthetic (sizes and densities up to 2x) 0.982 0.987 0.983 0.992 0.944
Synthetic (sizes and densities up to 8x) 0.940 0.941 0.947 0.972 0.902
RWN 0.952 0.950 0.933 0.933 0.907
Onnela et al. 0.892 0.898 0.892 0.858 0.867
AS-733 0.808 0.874 0.922 0.855 0.928
AS-caida 0.898 0.892 0.820 0.780 0.821
World Trade 0.697 0.785 0.665 0.430 0.358
Table 1: Results for different variants of NetEmd based on distributions of graphlets up to size 3 and 4 (NetEmd-G3 and NetEmd-G4), counts of graphlets up to size 4 in 1-step ego networks of nodes (NetEmd-E4), eigenvalue spectra of Laplacian operators (NetEmd-S) and the degree distribution (NetEmd-DD). Values in bold indicate that a measure achieves the highest score among all measures considered in the manuscript. For the first synthetic data set we calculate the score for each of the 16 sub-data sets; the table shows the average and standard deviation of the values obtained over these 16 sub-data sets.

3.4 Functional classification of networks

One of the primary motivations in studying the structure of networks is to identify topological features that can be related to the function of a network. In the context of network comparison this translates into the problem of finding metrics that can identify functionally similar networks based on their topological structure.

In order to test whether NetEmd can be used to identify functionally similar networks, we use several benchmarks from the machine learning literature, where graph similarity measures, called graph kernels, have been intensively studied over the past decade. In the context of machine learning the goal is to construct classifiers that can accurately predict the class membership of unknown graphs.

We test NetEmd on benchmark data sets representing social networks [yanardag2015deep] consisting of Reddit posts, scientific collaborations and ego networks in the Internet Movie Database (IMDB). The Reddit data sets Reddit-Binary, Reddit-Multi-5k and Reddit-Multi-12k consist of networks representing Reddit threads, where nodes correspond to users and two users are connected whenever one responded to the other's comments. While for the Reddit-Binary data set the task is to classify networks into discussion based and question/answer based communities, in the data sets Reddit-Multi-5k and Reddit-Multi-12k the task is to classify networks according to their subreddit categories. COLLAB is a data set consisting of ego-networks of scientists from the fields of High Energy Physics, Condensed Matter Physics and Astrophysics, and the task is to determine which of these fields a given researcher belongs to. Similarly, the data sets IMDB-Binary and IMDB-Multi represent collaborations between film actors derived from the IMDB, and the task is to classify ego-networks into different genres, i.e. action and romance in the case of IMDB-Binary, and the comedy, action and Sci-Fi genres in the case of IMDB-Multi.

We use C-support vector machine (C-SVM) [csvm] classifiers with a Gaussian kernel based on the NetEmd distance, where the kernel width is a free parameter to be learned during training. Performance evaluation is carried out by 10-fold cross validation, where at each step of the validation 9 folds are used for training and 1 fold for evaluation. Free parameters of the classifiers are learned via 10-fold cross validation on the training data only. Finally, every experiment is repeated 10 times, and average prediction accuracy and standard deviation are reported.
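A sketch of this pipeline, assuming a precomputed NetEmd distance matrix D and class labels y; writing the Gaussian kernel as exp(-D^2 / (2 sigma^2)) is an assumption about the exact parametrisation, and such kernels are not guaranteed to be positive semidefinite (see Appendix E.1).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def gaussian_kernel(D, sigma):
    """Turn a distance matrix into a Gaussian kernel matrix (assumed form)."""
    return np.exp(-D ** 2 / (2.0 * sigma ** 2))


def cv_accuracy(D, y, sigma=1.0, C=10.0):
    """10-fold cross-validated accuracy of a C-SVM on the precomputed kernel
    (sketch; the paper tunes C and the kernel width on the training folds)."""
    clf = SVC(C=C, kernel="precomputed")
    return cross_val_score(clf, gaussian_kernel(D, sigma), y, cv=10).mean()
```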

Kernel Reddit-Binary Reddit-Multi-5k Reddit-Multi-12k COLLAB IMDB-Binary IMDB-Multi
NetEmd-G5 92.67±0.30 54.61±0.18 48.09±0.21 79.32±0.27 66.99±1.19 41.45±0.70
NetEmd-S 88.59±0.35 53.05±0.34 44.45±0.18 79.05±0.20 71.68±0.88 46.06±0.50
DGK 78.04±0.39 41.27±0.18 32.22±0.10 73.09±0.25 66.96±0.56 44.55±0.52
GK 77.34±0.18 41.01±0.17 31.82±0.08 72.84±0.28 65.87±0.98 43.89±0.38
RF 88.7±1.99 50.9±2.07 42.7±1.28 76.5±1.68 72.4±4.68 47.8±3.55
PCSN 86.30±1.58 49.10±0.70 41.32±0.42 72.60±2.15 71.00±2.29 45.23±2.84
Table 2: 10-fold cross validation accuracies of Gaussian kernels based on NetEmd measures using the distributions of graphlets up to size 5 (NetEmd-G5) and Laplacian spectra (NetEmd-S), and of other graph kernels, namely the deep graphlet kernel (DGK) [yanardag2015deep] and the graphlet kernel (GK) [GK]. We also consider alternatives to support vector machine classifiers, namely the random forest classifier (RF) introduced in [featurebased] and convolutional neural networks (PCSN) [PCSN]. Values in bold correspond to significantly higher scores, i.e. scores with t-test p-values of less than 0.05 when compared to the highest score among the other methods.

Table 2 gives the classification accuracies obtained using NetEmd measures based on graphlets up to size five (NetEmd-G5) and on spectra of Laplacian operators (NetEmd-S) for the data sets representing social networks. We compare NetEmd based kernels to graphlet kernels [GK] and deep graphlet kernels [yanardag2015deep], as well as to two non-SVM classifiers, namely the random forest classifier introduced in [featurebased] and the convolutional neural network based classifier introduced in [PCSN].

On the Reddit data sets and the COLLAB data set, NetEmd-G5 significantly outperforms the other state-of-the-art graph classifiers. On the other hand, we find that NetEmd-G5 performs poorly on the IMDB data sets. This can be traced back to the large number of complete graphs present in the IMDB data sets: 139 out of the 1000 graphs in IMDB-Binary and 789 out of the 1500 graphs in IMDB-Multi are complete graphs, which correspond to ego-networks of actors having acted only in a single film. By definition, graphlet based NetEmd cannot distinguish between complete graphs of different sizes, since all graphlet degree distributions are concentrated on a single value in complete graphs. The spectral variant is not affected by this, and we find that NetEmd-S is either on par with or outperforms the other non-NetEmd graph classifiers on all six data sets.

We also tested NetEmd on benchmark data sets representing chemical compounds and protein structures. Unlike the social network data sets, in these data sets nodes and edges are labeled to reflect domain specific knowledge such as atomic number, amino acid type and bond type. Although NetEmd, in contrast to the other graph kernels, does not rely on domain specific knowledge in the form of node or edge labels, we found that it outperforms many of the considered graph kernels, coming only second to the Weisfeiler-Lehman [WL] type kernels in terms of overall performance (see Appendix E).

4 Discussion

Starting from basic principles, we have introduced a general network comparison methodology, NetEmd, that is aimed at capturing common generating processes in networks. We tested NetEmd in a large variety of experimental settings and found that it successfully identifies similar networks at multiple scales even when networks differ significantly in terms of size and density, generally outperforming other graphlet based network comparison measures. Even when based only on graphlets up to size 3 (i.e. edges, 2-paths and triangles), NetEmd has performance comparable to the state of the art, making it feasible even for networks containing billions of edges and nodes.

By exploring machine learning applications we showed that NetEmd captures topological similarity in a way that relates to the function of networks, and outperforms state-of-the-art graph classifiers on several graph classification benchmarks.

Although in this paper we only considered variants of NetEmd that are based on distributions of graphlets and on spectra of Laplacian operators, NetEmd can also be applied to other graph features in a straightforward manner. For instance, distributions of paths and centrality measures might capture larger scale properties of networks, and their inclusion into NetEmd might lead to a more refined measure.

Data availability

The source code for NetEmd is freely available at: www.opig.ox.ac.uk/resources

Acknowledgements

This work was in part supported by EPSRC grant EP/K032402/1 (A.W., G.R., C.D. and R.G.) and EPSRC grants EP/G037280/1 and EP/L016044/1 (C.D.). L.O. acknowledges the support of Colciencias through grant 568. R.G. acknowledges support from the COST Action CA15109 and is currently supported by a Dame Kathleen Ollerenshaw Research Fellowship. C.D. and G.R. acknowledge the support of the Alan Turing Institute (grant EP/NS10129/1).

We thank Xiaochuan Xu and Martin O’Reilly for useful discussions.

Appendix A Implementation

A.1 Graphlet distributions

In the main paper, both graphlet degree distributions and graphlet counts in 1-step ego networks were used as inputs for NetEmd.

Graphlet degree distributions

The graphlet degree [gdda] of a node specifies the number of graphlets (small induced subgraphs) of a certain type the node appears in, while distinguishing between different positions the node can have in a graphlet. Different positions within a graphlet correspond to the orbits of the automorphism group of the graphlet. Among graphs on two to four nodes, there are 9 possible graphs and 15 possible orbits. Among graphs on two to five nodes there are 30 possible graphs and 73 possible orbits.

Graphlet distributions based on ego-networks.

Another way of obtaining graphlet distributions is to consider graphlet counts in ego-networks [2014waqar]. The k-step ego-network of a node v is defined as the subgraph induced on all nodes that can be reached from v (including v itself) in at most k steps. For a given k, the distribution of a graphlet H in a network is then simply obtained by counting the occurrences of H as an induced subgraph in the k-step ego-networks of each individual node.

A.2 Step-wise implementation

In this paper, for integer valued network features such as graphlet based distributions, we base our implementation on the probability distribution that corresponds to the histogram of the feature with bin width 1. NetEmd can also be defined on the basis of discrete empirical distributions, i.e. distributions consisting of point masses (see Section C).

Here we summarise the calculation of the NetEmd distance between networks G and G' (with n and n' nodes respectively), based on the comparison of the graphlet degree distributions corresponding to graphlets up to a given size.

1. First one computes the graphlet degree sequences corresponding to graphlets up to the given size for networks G and G'. This can be done efficiently using the algorithm ORCA [hovcevar2014combinatorial]. For each graphlet degree t, compute a histogram across all nodes of G, with bins of width 1 whose centers are at the observed values. This histogram is then normalized to have total mass 1. We then interpret the histogram as the (piecewise continuous) probability density function of a random variable, denoted by p_t(G). The standard deviation of p_t(G) is then computed and used to rescale the distribution so that it has variance 1. This rescaled distribution is denoted by p̃_t(G).

2. Repeat the above step for network G', and denote the resulting distribution by p̃_t(G'). Now compute NetEmd_t(G, G') as the infimum over translations c of EMD( p̃_t(G)(· + c), p̃_t(G')(·) ). In practice, this minimisation over c is computed using a suitable optimization algorithm. In our implementation we use the Brent-Dekker algorithm [brent1971algorithm] with an error tolerance of 0.00001 and with the number of iterations upper bounded by 150.

3. Repeat the above two steps for all network features t in the chosen set T, and compute NetEmd_T(G, G') as the average of the NetEmd_t(G, G') values (Equation (3)).
A.3 Example: EMD* for Gaussian distributions

Suppose that p and q are N(μ1, σ1²) and N(μ2, σ2²) distributions, respectively. Then EMD*(p, q) = 0. Here we used that if X ~ N(μ, σ²), then X/σ ~ N(μ/σ, 1) and X + b ~ N(μ + b, σ²), so that after rescaling to unit variance both distributions are normal with variance 1, and these two distributions are equal once one of them is translated to match the mean of the other.
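A quick numerical check of this statement: for large samples from two different Gaussians, rescaling to unit variance and minimising the EMD over translations gives a value close to zero, up to sampling noise.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)   # sample from N(5, 4)
y = rng.normal(loc=-1.0, scale=0.5, size=100_000)  # sample from N(-1, 0.25)
x, y = x / x.std(), y / y.std()                    # rescale to unit variance
emd_star = minimize_scalar(lambda c: wasserstein_distance(x + c, y), method="brent").fun
print(emd_star)  # approximately 0: the translated unit-variance Gaussians coincide
```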

A.4 Spectral NetEmd

When using spectra of graph operators, which take real values instead of the integer values one has in the case of graphlet distributions, we use the empirical distribution consisting of point masses for computing NetEmd. For more details see Section C of this appendix.

A.5 Computational complexity

The computational complexity of graphlet based comparison methods is dominated by the complexity of enumerating graphlets. For a network of size n and maximum degree d, enumerating all connected graphlets up to size k has complexity O(n d^(k-1)), while counting all graphlets up to size k in all 1-step ego-networks has complexity O(n d^k). Because most real world networks are sparse, graphlet enumeration algorithms tend to scale more favourably in practice than the worst case upper bounds given above.

In the case of spectral measures, the most commonly used algorithms for computing the eigenvalue spectrum have complexity O(n³). Recent results show that the spectra of graph operators can be approximated efficiently [thune2013eigenvalues].

Given the distributions of a feature t, the cost of computing NetEmd_t(G, G') is proportional to m(s + s'), where s and s' are the numbers of different values t takes in G and G' respectively, and m is the maximum number of function calls of the optimization algorithm used to align the distributions. For node based features such as motif distributions, the number of different values t can take is bounded by the number of nodes, which bounds the worst case complexity.

Appendix B Proof that NetEmd is a distance measure

We begin by stating a definition. A pseudometric on a set X is a non-negative real-valued function d: X × X → R such that, for all x, y, z in X,

  1. d(x, x) = 0;

  2. d(x, y) = d(y, x) (symmetry);

  3. d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

If Condition 1 is replaced by the condition that d(x, y) = 0 if and only if x = y, then d defines a metric. Note that this stronger requirement can only be satisfied by a network comparison measure that is based on a complete set of graph invariants, and hence network comparison measures in general will not satisfy it.

Proposition

Let M denote the space of all real-valued probability measures supported on R with finite, non-zero variance. Then the distance between probability measures p and q in M defined by

EMD*(p, q) = inf_c EMD( p̃(· + c), q̃(·) )

defines a pseudometric on the space of probability measures M.

Proof

We first note that if p is in M then so is p̃(· + c) for any c in R. Let us now verify that EMD* satisfies all properties of a pseudometric. Clearly, for any p in M, we have EMD( p̃(·), p̃(·) ) = 0, and so EMD*(p, p) = 0. Symmetry holds, since, for any p and q in M,

EMD*(p, q) = inf_c EMD( p̃(· + c), q̃(·) ) = inf_c EMD( q̃(· − c), p̃(·) ) = EMD*(q, p).

Finally, we verify that EMD* satisfies the triangle inequality. Suppose p, q and r are probability measures from the space M; then so are p̃(· + c1) and r̃(· + c2) for any c1 and c2. Since EMD satisfies the triangle inequality, we have, for any c1 and c2,

EMD( p̃(· + c1), r̃(· + c2) ) ≤ EMD( p̃(· + c1), q̃(·) ) + EMD( q̃(·), r̃(· + c2) ).

Since the above inequality holds for all c1 and c2, and the left hand side is bounded below by EMD*(p, r), we have that

EMD*(p, r) ≤ EMD*(p, q) + EMD*(q, r),

as required. We have thus verified that EMD* satisfies all properties of a pseudometric.

Appendix C Generalization of EMD* to point masses

Although in the case of graphlet based features we base our implementation of NetEmd on probability distribution functions that correspond to normalized histograms having bin width 1, NetEmd can also be based on empirical distributions consisting of collections of point masses located at the observed values.

The definition of EMD* can be generalized to include distributions of zero variance, i.e. unit point masses. Mathematically, the distribution of a point mass at x is given by the Dirac measure δ_x. Such distributions are frequently encountered in practice, since some graphlets do not occur at all in certain networks.

First, we note that unit point masses are always mapped onto unit point masses under rescaling operations. Moreover, for unit point masses we have that EMD( δ_x(· + c), δ_y(·) ) = |x − c − y| for all x, y and c. Consequently, EMD* can be generalized to include unit point masses in a consistent fashion by always rescaling them by 1:

EMD*(p, q) = inf_c EMD( p̂(· + c), q̂(·) ),

where p̂ = p̃ (as in Eq. 1) if p has non-zero variance, and p̂ = p if p has variance zero.

Appendix D Sub-sampling

NetEmd is well suited to the sub-sampling procedure of [SS]. Following this procedure, we base the graphlet distributions used as input to NetEmd on a sample of nodes rather than on the whole network.

Figure 5 shows the scores for variants of NetEmd under sub-sampling on a set of synthetic networks and on the Onnela et al. data set. We find that the performance of NetEmd is stable under sub-sampling and that in general using only a small sample of the nodes produces results comparable to the case where all nodes are used.
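A sketch of the sub-sampling step, shown here for the degree distribution (the sampled proportion and the feature are illustrative):

```python
import numpy as np


def sampled_feature_values(G, proportion=0.1, seed=0):
    """Evaluate a node feature (here the degree) on a random sample of nodes
    rather than on the whole network (sketch)."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    size = max(1, int(proportion * len(nodes)))
    sample = rng.choice(nodes, size=size, replace=False)
    return [G.degree(v) for v in sample]
```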

(a) Synthetic networks
(b) Onnela et al.
Figure 5: The performance values for different variants of NetEmd under sub-sampling for a) a set of 80 synthetic networks coming from eight different random graph models with 2500 nodes and average degree 20, and b) the Onnela et al. data set, showing the average and standard deviation over 50 experiments for each sampled proportion. Note that the performance of NetEmd under sub-sampling is remarkably stable and is close to optimal even when only a small proportion of the nodes is sampled. For synthetic networks we find that the stability of NetEmd increases as the size of the graphlets used in the input is increased.
(a) Heatmap of NetEmd for the data set with sizes and densities varying up to 8x.
(b) Heatmap of the next best performing measure for the same data set.
Figure 6: a) and b) show the heatmaps of pairwise distances on the data set in which sizes and densities vary by an order of magnitude, according to NetEmd and the next best performing measure, respectively. In the heat maps, networks are ordered from top to bottom by model, average degree and node count. Although we observe some degree of off-diagonal mixing, the heatmap of NetEmd still shows 8 diagonal blocks corresponding to the different generative models, in contrast to the heat map of the next best performing measure.

Appendix E Results for data sets of chemical compounds and proteins

We also tested NetEmd on benchmark data sets representing chemical compounds (MUTAG, NCI1 and NCI109) and protein structures (ENZYMES and D&D). MUTAG [mutag] is a data set of 188 chemical compounds that are labelled according to their mutagenic effect on Salmonella typhimurium. NCI1 and NCI109 represent sets of chemical compounds which are labelled for their activity against non-small cell lung cancer and ovarian cancer cell lines, respectively [NCI]. Nodes and edges in MUTAG, NCI1 and NCI109 are labeled by atomic number and bond type, respectively. ENZYMES and D&D [borgpro] consist of networks representing protein structures at the level of tertiary structure and amino acids, respectively. While networks in ENZYMES are classified into six different enzyme classes, networks in D&D are classified according to whether or not they correspond to an enzyme. Nodes in ENZYMES are labelled according to structural element type, and according to amino acid type in D&D.

Classification accuracies obtained using NetEmd on the data sets of chemical compounds and protein structures are given in Table 3, along with results for other graph kernels reported in [WL]. For a detailed description of these kernels we refer to [WL] and the references therein. Note that, in contrast to all other kernels in Table 3, NetEmd does not use any domain specific knowledge in the form of node or edge labels. Node and edge labels are highly informative for all five classification tasks, as shown in [halting].

Kernel MUTAG NCI1 NCI109 ENZYMES D&D
NetEmd-G5 83.71±1.16 78.59±0.28 76.71±0.34 46.55±1.25 78.01±0.38
NetEmd-S 83.30±1.20 77.36±0.38 76.14±0.27 42.75±0.78 76.74±0.43
WL subtree 82.05±0.36 82.19±0.18 82.46±0.24 52.22±1.26 79.78±0.36
WL edge 81.06±1.95 84.37±0.30 84.49±0.20 53.17±2.04 77.95±0.70
WL shortest path 83.78±1.46 84.55±0.36 83.53±0.30 59.05±1.05 79.43±0.55
Ramon & Gärtner 85.72±0.49 61.86±0.27 61.67±0.21 13.35±0.87 57.27±0.07
p-random walk 79.19±1.09 58.66±0.28 58.36±0.94 27.67±0.95 66.64±0.83
Random walk 80.72±0.38 64.34±0.27 63.51±0.18 21.68±0.94 71.70±0.47
Graphlet count 75.61±0.49 66.00±0.07 66.59±0.08 32.70±1.20 78.59±0.12
Shortest path 87.28±0.55 73.47±0.11 73.07±0.11 41.68±1.79 78.45±0.26
Table 3: 10-fold cross validation accuracies of Gaussian kernels based on NetEmd-G5 and NetEmd-S, and of other kernels reported in [WL].

On MUTAG, NetEmd achieves an accuracy that is comparable to the Weisfeiler-Lehman (WL) shortest path kernel, but is outperformed by the shortest path kernel and the kernel by Ramon & Gärtner. While on NCI1, NCI109 and ENZYMES NetEmd is outperformed only by WL kernels, on D&D it achieves a classification accuracy that is comparable to the best performing kernels. Notably, on D&D NetEmd also outperforms the vector model by Dobson and Doig [DD] (classification accuracy: 76.86±1.23), which is based on 52 physical and chemical features, without using domain specific knowledge, i.e. solely on the basis of graph topology.

E.1 Implementation of C-SVMs

Following the procedure in [WL], we use 10-fold cross validation with a C-SVM [csvm] to test classification performance. We use the Python package scikit-learn [pedregosa2011scikit], which is built on the libsvm implementation [chang2011libsvm]. The regularization parameter of the C-SVM and the width of the Gaussian kernel are tuned independently for each fold using training data from that fold only. Each experiment is repeated 10 times, and average prediction accuracies and their standard deviations are reported.
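A sketch of the per-fold parameter search under the same assumptions as in Section 3.4 (a precomputed NetEmd distance matrix, a Gaussian kernel of the distance, and illustrative grid values):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC


def tune_on_training_fold(D_train, y_train,
                          sigmas=(0.1, 1.0, 10.0), Cs=(0.1, 1.0, 10.0, 100.0)):
    """For each kernel width, grid-search C with an inner 10-fold CV on the
    training data only; returns the best (sigma, C, score) triple (sketch)."""
    best = (None, None, -np.inf)
    for sigma in sigmas:
        K = np.exp(-D_train ** 2 / (2.0 * sigma ** 2))
        search = GridSearchCV(SVC(kernel="precomputed"), {"C": list(Cs)}, cv=10)
        search.fit(K, y_train)
        if search.best_score_ > best[2]:
            best = (sigma, search.best_params_["C"], search.best_score_)
    return best
```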

We also note that not for all values of the kernel width is the Gaussian NetEmd kernel positive semidefinite (psd) [jayasumana2015kernel]. The implication is that the C-SVM converges to a stationary point that is not always guaranteed to be a global optimum. Although there exist alternative algorithms [luss2008support] for training C-SVMs with indefinite kernels which might result in better classification accuracy, here we chose to use the standard libsvm algorithm in order to ensure a fair comparison between kernels. For a discussion of support vector machines with indefinite kernels see [haasdonk2005feature].

Appendix F Detailed description of data sets and models

F.1 Synthetic networks and random graph models

The first synthetic data set consists of 16 sub-data sets corresponding to combinations of network size and average degree, each containing 10 realizations for each model, i.e. 80 networks each.

In the second data set, the size and average degree are increased independently in linear steps to twice their initial value, with 10 realizations per model-parameter combination.

In the third data set, the size and average degree are increased independently in multiples of 2 up to 8 times their initial value, again with 10 realizations per model-parameter combination. The models are as follows.

F.1.1 The Erdős-Rényi model

We consider the Erdős-Rényi (ER) model G(n, m) [1960er], where n is the number of nodes and m is the number of edges. The edges are chosen uniformly at random without replacement from the set of all possible edges.

F.1.2 The configuration model

Given a graphical degree sequence, the configuration model creates a random graph that is drawn uniformly at random from the space of all graphs with the given degree sequence. The degree sequence of the configuration models used in the paper is taken to be the degree sequence of a duplication divergence model that has the desired average degree.

F.1.3 The Barabási-Albert preferential attachment model

In the Barabási-Albert model [barabasi1999emergence] a network is generated starting from a small initial network, to which nodes of degree m are added iteratively, and the probability of connecting the new node to an existing node is proportional to the degree of the existing node.

F.1.4 Geometric random graphs

Geometric random graphs [1961gilbert] are constructed under the assumption that the nodes of the network are embedded into a d-dimensional space, and the presence of an edge depends only on the distance between the nodes and a given threshold r. The model is constructed by placing n nodes uniformly at random in a d-dimensional cube. Edges are then placed between any pair of nodes for which the distance between them is less than or equal to the threshold r. We use d = 3 and set r to be the threshold that results in a network with the desired average degree, with distances measured in the Euclidean metric.
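A sketch of the construction, using networkx and a simple bisection on the threshold to reach a target average degree (the tolerance and iteration count are illustrative):

```python
import networkx as nx


def geometric_graph_with_target_degree(n, target_avg_degree, dim=3, tol=0.1):
    """3D geometric random graph whose radius is tuned by bisection so that
    the realised average degree approximately matches the target (sketch)."""
    lo, hi = 0.0, dim ** 0.5  # pairwise distances in the unit cube lie in [0, sqrt(dim)]
    G = None
    for _ in range(30):
        r = 0.5 * (lo + hi)
        G = nx.random_geometric_graph(n, r, dim=dim)
        avg_degree = 2.0 * G.number_of_edges() / n
        if abs(avg_degree - target_avg_degree) < tol:
            break
        lo, hi = (r, hi) if avg_degree < target_avg_degree else (lo, r)
    return G
```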

F.1.5 The geometric gene duplication model

The geometric gene duplication model is a geometric model [2008higham] in which the nodes are distributed in 3-dimensional Euclidean space according to the following rule. Starting from a small initial set of nodes in three dimensions, at each step a randomly chosen node is selected and a new node is placed uniformly at random within a given Euclidean distance of this node. The process is repeated until the desired number of nodes is reached. Nodes within a certain distance r of each other are then connected. We fix r to obtain the desired average degree.

F.1.6 The duplication divergence model of Vázquez et al.

The duplication divergence model of Vázquez et al. [DD1] is defined by the following growing rules: (1) Duplication: a node v is randomly selected and duplicated (v'), along with all of its interactions. An edge between v and v' is placed with probability p. (2) Divergence: for each pair of duplicated edges, one of the two edges is selected uniformly at random and then deleted with probability q. This process is repeated until the desired number of nodes is reached. In our case we fix p to be 0.05 and adjust q through a grid search to obtain networks that on average have the desired average degree.
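A sketch of these growing rules; p is the fixed 0.05 of the text, while the divergence probability q used here is purely illustrative.

```python
import random

import networkx as nx


def duplication_divergence_vazquez(n, p=0.05, q=0.4, seed=0):
    """Grow a graph by node duplication (edge to the parent with prob. p) and
    divergence (delete one edge of each duplicated pair with prob. q)."""
    random.seed(seed)
    G = nx.Graph([(0, 1)])  # small seed network
    while G.number_of_nodes() < n:
        parent = random.choice(list(G.nodes()))
        child = max(G.nodes()) + 1
        neighbours = list(G[parent])
        G.add_node(child)
        G.add_edges_from((child, u) for u in neighbours)  # duplication
        if random.random() < p:
            G.add_edge(parent, child)
        for u in neighbours:  # divergence
            if random.random() < q:
                G.remove_edge(random.choice([parent, child]), u)
    return G
```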

F.1.7 The duplication divergence model of Ispolatov et al.

The duplication divergence model of Ispolatov et al. [DD2] starts with an initial network consisting of a single edge; at each step a random node is chosen for duplication, and the duplicate is connected to each of the neighbours of its parent with probability p. We adjust p to obtain networks that on average have the desired average degree.

F.1.8 The Watts-Strogatz model

The Watts-Strogatz model [1998watts] creates graphs that interpolate between regular graphs and ER graphs. The model starts with a ring of nodes in which each node is connected to its k nearest neighbours in both directions of the ring. Each edge is then rewired with probability p to a node which is selected uniformly at random. While k is adjusted to obtain networks having the desired average degree, we take p to be 0.05.

F.2 Real world data sets

Summary statistics of the data sets are given in Table 4.

Data set #Networks min(n) median(n) max(n) min(m) median(m) max(m) min(ρ) median(ρ) max(ρ)
RWN 167 24 351 62586 76 2595 824617 7.55e-05 0.0163 0.625
Onnela et al. 151 30 918 11586 62 2436 232794 4.26e-5 0.0147 0.499
AS-caida 122 8020 22883 26475 18203 46290 53601 1.48e-4 1.78e-4 5.66e-4
AS-733 732 493 4180.5 6474 1234 8380.5 13895 6.63e-4 9.71e-4 1.01e-2
World Trade Networks 53 156 195 242 5132 7675 18083 0.333 0.515 0.625
Reddit-Binary 2000 6 304.5 3782 4 379 4071 5.69e-4 8.25e-3 0.286
Reddit-Multi-5k 4999 22 374 3648 21 422 4783 6.55e-4 6.03e-3 0.091
Reddit-Multi-12k 11929 2 280 3782 1 323 5171 5.69e-4 8.27e-3 1.0
COLLAB 5000 32 52 492 60 654.5 40120 0.029 0.424 1.0
IMDB-Binary 1000 12 17 136 26 65 1249 0.095 0.462 1.0
IMDB-Multi 1500 7 10 89 12 36 1467 0.127 1.0 1.0
MUTAG 188 10 17.5 28 10 19 33 0.082 0.132 0.222
NCI1 4110 3 27 111 2 29 119 0.0192 0.0855 0.667
NCI109 4127 4 26 111 3 29 119 0.0192 0.0862 0.5
ENZYMES 600 2 32 125 1 60 149 0.0182 0.130 1.0
D&D 1178 30 241 5748 63 610.5 14267 8.64e-4 0.0207 0.2
Table 4: Summary statistics of the data sets; n, m and ρ stand for the number of nodes, the number of edges and the edge density, respectively.

F.2.1 Real world networks from different classes (RWN)

We compiled a data set consisting of 10 different classes of real world networks: social networks, metabolic networks, protein interaction networks, protein structure networks, food webs, autonomous systems networks of the internet, world trade networks, airline networks, peer to peer file sharing networks and scientific collaboration networks. Although in some instances larger versions of these data sets are available, we restrict the maximum number of networks in a certain class to 20 by taking random samples of larger data sets in order to avoid scores being dominated by larger network classes.

The class of social networks consists of 10 social networks from the Pajek data set which can be found at http://vlado.fmf.uni-lj.si/pub/networks/data/default.htm (June 12th 2015) (Networks: 'bkfrat', 'bkham', 'bkoff', 'bktec', 'dolphins', 'kaptailS1', 'kaptailS2', 'kaptailT1', 'kaptailT2', 'karate', 'lesmis', 'prison') and a sample of 10 Facebook networks from [traud2012social] (Networks: 'Auburn71', 'Bucknell39', 'Caltech36', 'Duke14', 'Harvard1', 'JMU79', 'MU78', 'Maine59', 'Maryland58', 'Rice31', 'Rutgers89', 'Santa74', 'UC61', 'UC64', 'UCLA26', 'UPenn7', 'UVA16', 'Vassar85', 'WashU32', 'Yale4'). The class of metabolic networks consists of 20 networks taken from [jeong2000large] (Networks: 'AB', 'AG', 'AP', 'AT', 'BS', 'CE', 'CT', 'EF', 'HI', 'MG', 'MJ', 'ML', 'NG', 'OS', 'PA', 'PN', 'RP', 'TH', 'TM', 'YP'). The class of protein interaction networks consists of 6 networks from BIOGRID [stark2006biogrid] (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Homo sapiens, Mus musculus and Saccharomyces cerevisiae; downloaded October 2015), 5 networks from HINT [das2012hint] (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Homo sapiens and Mus musculus; version June 1 2014) and the protein interaction network of Escherichia coli by Rajagopala et al. [rajagopala2014binary]. The class of protein structure networks consists of a sample of 20 networks from the data set D&D (Networks: 20, 119, 231, 279, 335, 354, 355, 369, 386, 462, 523, 529, 597, 748, 833, 866, 990, 1043, 1113, 1157). The class of food webs consists of 20 food webs from the Pajek data set: http://vlado.fmf.uni-lj.si/pub/networks/data/default.htm (June 10th 2015) (Networks: 'ChesLower', 'ChesMiddle', 'ChesUpper', 'Chesapeake', 'CrystalC', 'CrystalD', 'Everglades', 'Florida', 'Michigan', 'Mondego', 'Narragan', 'StMarks', 'baydry', 'baywet', 'cypdry', 'cypwet', 'gramdry', 'gramwet', 'mangdry', 'mangwet'). The class of internet networks consists of 10 randomly chosen networks from AS-733 [as] (Networks: '1997/11/12', '1997/12/28', '1998/01/01', '1998/06/06', '1998/08/13', '1998/12/04', '1999/03/30', '1999/04/17', '1999/06/18', '1999/08/30') and 10 randomly chosen networks from AS-caida [as] (Networks: '2004/10/04', '2006/01/23', '2006/03/27', '2006/07/10', '2006/09/25', '2006/11/27', '2007/01/15', '2007/04/30', '2007/05/28', '2007/09/24'). Both data sets are from SNAP [snapnets] (June 1 2016). The class of world trade networks is a sample of 20 networks from the larger data set considered in [feenstra2005world, comtrade] (Networks: 1968, 1971, 1974, 1975, 1976, 1978, 1980, 1984, 1989, 1992, 1993, 1996, 1998, 2001, 2003, 2005, 2007, 2010, 2011, 2012). The airline networks were derived from the data available at http://openflights.org/ (June 12 2015). For this we considered the 50 largest airlines from the database in terms of the number of destinations that the airline serves. For each airline a network is obtained by considering all airports that are serviced by the airline, where two airports are connected whenever there is a direct flight between them. We then took a sample of 20 networks from this larger data set (Airline codes of the networks: 'AD', 'AF', 'AM', 'BA', 'DY', 'FL', 'FR', 'JJ', 'JL', 'MH', 'MU', 'NH', 'QF', 'SU', 'SV', 'U2', 'UA', 'US', 'VY', 'ZH'). The class of peer to peer networks consists of 9 networks of the Gnutella file sharing platform measured at different dates, which are available at [snapnets].
The scientific collaboration networks consist of 5 networks representing different scientific disciplines, which were obtained from [snapnets] (June 1 2015).

F.2.2 Onnela et al. data set

The Onnela et al. data set consists of all undirected and unweighted networks from the larger collection analysed in [onnela]. A complete list of networks and class membership can be found in the supplementary information of [2014waqar].

F.2.3 Time ordered data sets

The data sets AS-caida and AS-733 each represent the internet measured at the level of autonomous systems at various points in time. Both data sets were downloaded from [snapnets](June 1 2015).

The World Trade Networks data set is based on the data set of [feenstra2005world] for the years 1962-2000 and on UN COMTRADE [comtrade] for the years 2001-2015. Two countries are connected in the network whenever they import or export a commodity from each other within the given calendar year. The complete data set was downloaded from http://atlas.media.mit.edu/en/resources/data/ on July 12 2015.

F.2.4 Machine learning benchmarks

A short description of the social networks datasets was given in the main text. A more detailed description can be found in [yanardag2015deep]. The social network data sets were downloaded from https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets on September 2 2016.

A short description of the chemical compound and protein structure data sets was given in Section E. A more detailed description of these data sets can be found in [WL]. They were downloaded from https://www.bsse.ethz.ch/mlcb/research/machine-learning/graph-kernels.html on June 12 2016.

Appendix G Performance evaluation via the area under the precision recall curve?

The area under the precision recall curve (AUPRC) was used as a performance metric for network comparison measures by Yaveroglu et al. [2014yaveroglu]. The AUPRC is based on a classifier that, for a given distance threshold, classifies pairs of networks as similar whenever their distance is below the threshold.

A pair classified as similar is taken to be a true positive whenever the two networks are from the same class. The AUPRC is then defined to be the area under the precision recall curve obtained by varying the threshold in small increments. However, the AUPRC is problematic, especially in settings where one has more than two classes and when classes are separated at different scales.

Figure 7 gives three examples of metrics for a problem that has three classes: a) shows a metric (AUPRC=0.847) that clearly separates the three classes but nevertheless has a lower AUPRC than the metrics given in b) (AUPRC=0.902), which confuses half of Class-1 with Class-2, and c) (AUPRC=0.896), which shows 2 rather than 3 classes. The colour scale in the figure represents the magnitude of a comparison between a pair of individuals according to the corresponding metric.

Some of the problems of AUPRC are the following. First, AUPRC is based on a classifier that identifies pairs of similar networks and hence is only indirectly related to the problem of separating classes. Moreover, the classifier uses a single global threshold for all networks and classes, and hence implicitly assumes that all classes are separated on the same scale. The AUPRC further lacks a clear statistical interpretation, which complicates its use especially when one has multiple classes and when precision recall curves of different measures intersect.
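For completeness, a sketch of the pair classifier underlying the AUPRC, using scikit-learn's average precision as the area estimate (the function name is illustrative):

```python
import numpy as np
from sklearn.metrics import average_precision_score


def auprc(D, labels):
    """AUPRC of the classifier that calls a pair of networks similar when their
    distance is below a threshold: pairs are scored by minus their distance and
    labelled positive when the two networks share a class (sketch)."""
    labels = np.asarray(labels)
    i, j = np.triu_indices(len(labels), k=1)  # all unordered pairs
    y_true = (labels[i] == labels[j]).astype(int)
    y_score = -D[i, j]  # smaller distance -> higher similarity score
    return average_precision_score(y_true, y_score)
```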

Despite its problems, we give AUPRC values for all measures we considered in the main text in Table 5 for the sake of completeness. Note that NetEmd measures achieve the highest AUPRC on all data sets.

Figure 7: Heat maps of three measures in an example of 3 equally sized classes. a) The first metric shows clear separation between the 3 classes. b) The second shows 3 classes, with half of Class-1 positioned closer to Class-2. c) The third identifies 2 rather than 3 classes. Note that the first metric has a lower AUPRC than the other two despite being best at identifying the 3 classes, whereas the corresponding P̄ values for the three metrics are 1.0, 0.887 and 0.869, respectively.
Synthetic (16 sub-data sets) Synthetic (up to 2x) Synthetic (up to 8x) RWN Onnela et al.
0.917±0.039 0.869 0.702 0.800 0.756
0.959±0.030 0.930 0.759 0.774 0.786
0.981±0.018 0.957 0.766 0.722 0.757
0.967±0.015 0.958 0.833 0.702 0.672
0.966±0.030 0.945 0.801 0.777 0.739
0.756±0.044 0.708 0.516 0.655 0.612
0.867±0.044 0.579 0.396 0.607 0.621
0.852±0.028 0.657 0.437 0.522 0.592
0.888±0.084 0.709 0.478 0.713 0.693
0.966±0.052 0.858 0.571 0.736 0.743
0.815±0.176 0.740 0.481 0.500 0.625
Table 5: AUPRC scores for the measures and data sets considered in the main text. NetEmd measures have the highest AUPRC score (given in bold) on all data sets. For the first synthetic data set we calculated the AUPRC score for each of the 16 sub-data sets; the table shows the average and standard deviation of the AUPRC values obtained over these 16 sub-data sets.

References