1 Introduction
Time-varying graphs are ubiquitous across many disciplines, yet difficult to analyze, making them a natural target for visualization: a good visual representation of a time-varying graph will present its structure and structural changes quickly and clearly, to enable further analysis and exploration. A major development in graph drawing has been the observation that using derived information can retain structure in static graph visualizations. For example, the dot layout uses node ranks to perform hierarchical drawings [31]; the neato algorithm employs graph distances within statistical multidimensional scaling [30]; Noack's energy model utilizes approximated clustering [46].

In this paper, we take the first steps towards using topological features, captured by persistent homology, with the design goal of detecting potentially important structural changes in time-varying graph data. By topological features, we do not mean the configuration of nodes and edges alone, but instead the 0- and 1-dimensional homology groups of a metric space, which describe its connected components and tunnels, respectively. This definition allows us to quantify structural elements within time-varying graphs to identify behavior patterns in the data. Persistent homology quantifies individual topological features (events) in the graph according to their significance (or persistence). The set of all features, encoded by the persistence diagram, can be seen as a fingerprint for the graph. Using this fingerprint, the most topologically important structures of two graphs can be compared in a manner that is robust to small perturbations in the data.

Well-understood techniques in topological data analysis typically focus on the qualitative study of point cloud data in a metric space setting. In order to study graph data, our approach is to embed the graph in a metric space, where topological techniques can be applied.
In other words, the notion of metric space acts as an organizational principle [9] in interpreting the graph data.
Our approach, as seen in Figure 1, can be summarized as follows. The input of our pipeline is a time-varying graph, which is an ordered sequence of graph instances. First, each instance is embedded into a metric space. Second, topological features of each instance are extracted using persistent homology and encoded within persistence diagrams. Third, instances are compared by calculating the distance between persistence diagrams and projecting them using classical multidimensional scaling (MDS) [6]. The data is then visualized using an interactive timeline and node-link diagrams, as shown in the teaser figure. In this figure, the horizontal axis represents time, while the vertical location is the first component of MDS; in other words, it captures the dissimilarities among instances. Graph instances from selected timeframes are drawn using a force-directed layout to demonstrate how the approach highlights different structures in the graph. The contributions of our paper are:

A novel pipeline for detecting structural changes in time-varying graphs that uses persistent homology to summarize important structures, as opposed to directly comparing nodes and edges.

An interface that uses conventional visualization approaches adapted to the design goal of highlighting structural changes.

Two case studies of time-varying graphs showing how our approach can find cyclic patterns, deviations from those patterns, and unique one-time events in the graphs.

A study of the suitability of using a persistence-based similarity measure for detecting structural changes in time-varying graphs.
2 Related Work
Static Graph Analysis and Visualization.
We provide a brief overview here. See von Landesberger et al.'s survey [53] for a full treatment. The first automated technique for node-link diagrams is Tutte's barycentric coordinate embedding [49], followed by linear programming techniques [31], force-directed/mass-spring embeddings [28, 36], embeddings of the graph metric [30], and linear-algebraic properties of the connectivity structures (especially the graph Laplacian and associated eigenspaces) [39, 40]. Most graph visualization systems, including Gephi [3], NodeXL [34], and Graphviz [25], use variations on node-link visualizations to display graphs. For dense graphs, edge bundling can reduce visual clutter by routing graph edges to the same portion of the screen [35]. In terms of quality, divided edge bundling [48] produces high-quality results, while hierarchical edge bundling [29] scales to millions of edges with slightly lower quality. Because these quality and runtime tradeoffs are so characteristic of node-link diagram visualizations, whether or not this class of diagrams can effectively unlock the insights hidden inside the structure of large networks remains an open research question. Other visual metaphors have been proposed to reduce clutter, ranging from relatively conservative proposals [20, 21] to variants of matrix diagrams [18] and abstract displays of graph statistics [38].

Time-Varying Graph Analysis.
The problem we address is closely related to the problem of measuring similarity or dissimilarity between graphs without knowing node correspondences. Comparing graphs up to isomorphism is hard [1]. For this reason, many notions of graph similarity have been proposed [4, 47]. These methods rely on mapping the graphs into a feature space and then defining distances on that space. Other approaches use kernel functions to build similarity measures on graphs [43, 51]. While large portions of the literature on graph similarity focus on graph comparison with known node correspondences, there are attempts to tackle the problem where node correspondence is unknown [51, 52]. Distance functions on the space of graphs have also been studied [12].
TimeVarying Graph Visualization.
Beck et al. [5] provide a detailed survey of dynamic graph visualization. They divide the techniques into two major categories: animation and timelines. Our approach falls into the latter category. Animation approaches, such as the work of Misue et al. [44], vary the graph representation over time, while making the graph as legible as possible at any given instance. Timeline approaches, such as the work of Greilich et al. [33], use a non-animated, often spatially-oriented, visual channel to show the changes in the graph over time. Timeline approaches seem to provide a better overview of the data, as they try to capture the entire graph sequence in a single image. These approaches include multiple techniques, such as node-link-based methods [37], matrix-based approaches [8], and feature vector-based methods [50]. For more references and background, see also von Landesberger et al.'s survey [53].

Topological Data Analysis of Networks.
3 Approach
Our approach uses persistent homology to identify and compare features in a time-varying graph. Our visual design goal is to identify high-level structural changes in the graph. To do this, consider a time-varying graph $\mathcal{G}$, which contains an ordered sequence of static graph instances $G_1, G_2, \dots, G_n$. We are interested in quantifying and visualizing the structural changes of $\mathcal{G}$. Our analysis pipeline (see Figure 1) is described below, and we provide a detailed description of each step in the subsequent sections.

Associate each instance $G_i$ with a metric space representation. This yields a symmetric distance matrix $D_i$, where $D_i(u, v)$ measures the (shortest-path or commute-time) distance between vertices $u$ and $v$ in $G_i$ (Section 3.1).

Extract topological features of $G_i$ by constructing a filtration from its distance matrix and computing its corresponding $p$-dimensional persistence diagrams for $p = 0, 1$ (Section 3.2).

Capture the structural differences between two instances $G_i$ and $G_j$ by computing the bottleneck or Wasserstein distance between their corresponding persistence diagrams (Section 3.3).

Visualize the structural differences among the instances of $\mathcal{G}$ (Section 3.4).
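As a minimal, self-contained sketch of step 1 of the pipeline, the shortest-path distance matrix of a graph instance can be computed with Dijkstra's algorithm. The adjacency-dictionary representation and function names below are ours, not the paper's:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a weighted adjacency dict."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def distance_matrix(adj):
    """Symmetric shortest-path distance matrix of one graph instance."""
    verts = sorted(adj)
    return [[dijkstra(adj, u)[v] for v in verts] for u in verts]

# A toy triangle graph: the direct edge 0-2 (length 4) is longer than
# the two-hop path through vertex 1 (length 2).
adj = {0: {1: 1.0, 2: 4.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 4.0, 1: 1.0}}
D = distance_matrix(adj)
```

Here `D[0][2]` is 2.0 rather than the raw edge length 4.0, illustrating that the metric reflects the whole graph, not individual edges.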
3.1 Graphs and Metric Space Representations
Suppose an instance is represented as a weighted, undirected graph $G = (V, E)$ with a vertex set $V$ and an edge set $E$, equipped with a positive edge weight $w: E \to \mathbb{R}^+$. We associate each graph instance with a metric space representation, which yields a symmetric distance matrix $D$. Considering the positive edge weight as the length of an edge, a natural metric is obtained on $V$: for every pair of vertices $u$ and $v$ in $V$, the distance $d(u, v)$ is the length of the shortest path between them. This is the classic shortest-path distance, which is typically computed with Dijkstra's algorithm [17] and its variations. Alternatively, other distance metrics based on the graph Laplacian [14], such as the commute-time distance, discrete biharmonic distance, and diffusion distance, can be considered. For instance, the commute-time distance is defined as [27],
(1)  $d_{CT}(u, v)^2 = \sum_{i=2}^{|V|} \frac{1}{\lambda_i} \left( \phi_i(u) - \phi_i(v) \right)^2$
Here $\lambda_i$ and $\phi_i$ are the generalized eigenvalues and eigenvectors of the graph Laplacian of $G$, respectively [13]. In practice, we approximate the summation of Equation (1) by considering only the first few eigenvectors, since the higher eigenvectors do not contribute significantly. These distance metrics are illustrated in Figure 2, which shows the distance from a point source to all other locations on a surface. From this we see that the commute-time distance produces a smoother gradient than the shortest-path distance.

3.2 Extracting Topological Features
To extract topological features from each graph instance, we apply persistent homology to its metric space representation. To describe our process, we first briefly review persistent homology. We then describe persistence diagrams, which encode the topological features of a given graph instance. For more details and background on persistent homology, see [23] and the references therein.
Topological features.
Homology deals with topological features of a space. Given a topological space $X$, the 0-, 1- and 2-dimensional homology groups, denoted as $H_0(X)$, $H_1(X)$ and $H_2(X)$ respectively, correspond to the (connected) components, tunnels, and voids of $X$. In our context, we care about the 0- and 1-dimensional topological features of a graph instance, which correspond to $H_0$ and $H_1$ of its metric space representation. These 0- and 1-dimensional topological features, roughly speaking, capture connected components and tunnels formed by vertices in the instances.
Persistent homology.
In practice, there might not exist a unique scale that captures the topological structures of the data. Instead, we adopt a multiscale notion of homology, called persistent homology, a main tool in topological data analysis, to describe the topological features of a space at different spatial resolutions. Persistent homology typically starts with a finite set of points in a metric space. In our setting, each graph instance is associated with a metric space, where its vertices form a finite set of points $S$, and the distance matrix $D$ encodes the pairwise distances among points in $S$. We then apply a geometric construction, such as a Rips complex, on the point set $S$ that describes the combinatorial structure among the points. For a real number $\delta \geq 0$, a Rips complex, denoted as $R_\delta(S)$, is formed by considering a set of balls of radius $\delta/2$ centered at the points in $S$. A 1-simplex (an edge) is formed between two points in $S$ if and only if their balls intersect (see Figure 3 left). A 2-simplex (a triangular face) is formed among three points if the balls intersect between every pair of points (see Figure 3 right).
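The Rips construction just described can be sketched directly from a distance matrix; below, `delta` is the diameter parameter (two balls of radius `delta/2` intersect exactly when their centers are within `delta` of each other), and the function name is ours:

```python
from itertools import combinations

def rips_complex(D, delta):
    """1- and 2-simplices of the Rips complex at scale delta.

    D is a symmetric distance matrix over n points. An edge (i, j)
    enters when D[i][j] <= delta; a triangle (i, j, k) enters when
    all three of its edges are present.
    """
    n = len(D)
    edges = [(i, j) for i, j in combinations(range(n), 2)
             if D[i][j] <= delta]
    present = set(edges)
    triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                 if (i, j) in present and (i, k) in present
                 and (j, k) in present]
    return edges, triangles
```

For three points at pairwise distances 1, 1, and 2, the scale `delta = 1` yields a path of two edges, while `delta = 2` fills in the missing edge and the triangle.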
Given a finite point set $S$, continuously increasing the diameter $\delta$ forms a 1-parameter family of nested unions of balls; correspondingly, we obtain a 1-parameter family of nested Rips complexes, referred to as a Rips filtration. Let $\delta_1 \leq \delta_2 \leq \dots \leq \delta_m$ denote a finite sequence of increasing diameters. The Rips filtration (of $S$) is a sequence of Rips complexes connected by inclusions, $R_{\delta_1}(S) \subseteq R_{\delta_2}(S) \subseteq \dots \subseteq R_{\delta_m}(S)$. Figure 4 shows a Rips filtration defined on an example graph equipped with a shortest-path metric.
Applying homology to a Rips filtration, the homology groups are connected from left to right by homomorphisms induced by the inclusions. Topological features appear and disappear as the diameter increases: when a topological feature appears, that is, a cluster or a tunnel forms, this is called a birth event; when a topological feature disappears, that is, two clusters merge into one or a tunnel is filled, it is called a death event. The persistence of a topological feature is the difference between its death and birth times.
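For 0-dimensional features, the birth and death events above can be computed with a union-find sweep over edges sorted by length: in a Rips filtration over a point set, every component is born at scale 0, and a component dies at the scale where its merging edge appears. This is a simplified sketch of that idea, not the paper's implementation (which uses the Dionysus library):

```python
def persistence_0d(D):
    """0-dimensional persistence pairs from a distance matrix.

    Processes candidate edges in increasing order of length; each
    union of two components records a death at that scale. All
    components are born at 0, and exactly one survives forever.
    """
    n = len(D)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted((D[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    pairs = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            pairs.append((0.0, w))       # a component dies at scale w
    pairs.append((0.0, float("inf")))    # the surviving component
    return pairs
```

On three points with pairwise distances 1, 2, and 3, this yields deaths at scales 1 and 2 plus one essential (never-dying) component.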
Persistence Diagrams.
Topological features of a graph instance and their persistence are recorded by pairing their birth and death events as a multiset of points in the plane, called the persistence diagram (see [24]). Each topological feature is represented as a point $(b, d)$, where $b$ is the birth time and $d$ is the death time of the feature. Certain features may "live" forever; in that case, they are assigned a death time of $\infty$. Therefore, a persistence diagram contains a multiset of points in the extended plane. For technical reasons, we add the points on the diagonal to the diagram, each with infinite multiplicity. The persistence of the pair $(b, d)$ is simply $d - b$. Features with higher persistence carry more significant topological information; features with low persistence are typically considered to be noise. A persistence diagram can be visualized as a persistence barcode [32] (see Figures 1 and 4), where each bar starts at time $b$ and ends at time $d$. We are interested in 0- and 1-dimensional topological features, so we consider the 0- and 1-dimensional persistence diagrams, denoted as $PD_0$ and $PD_1$, respectively.
3.3 Comparing Sets of Topological Features
A persistence diagram can be thought of as a summary of the topological features of a graph instance. To quantify the structural difference between two instances $G_i$ and $G_j$, we compute the bottleneck and Wasserstein distances between their persistence diagrams. Given two persistence diagrams $X$ and $Y$, let $\eta: X \to Y$ be a bijection between points in the diagrams. The bottleneck distance [24] is defined as,

(2)  $W_\infty(X, Y) = \inf_{\eta: X \to Y} \sup_{x \in X} \| x - \eta(x) \|_\infty$

The Wasserstein distance is,

(3)  $W_p(X, Y) = \left[ \inf_{\eta: X \to Y} \sum_{x \in X} \| x - \eta(x) \|_\infty^p \right]^{1/p}$

for any positive real number $p$. The set of points in the persistence diagram can be considered as a feature vector, where the feature space consists of all persistence diagrams for the time-varying graph. Given all pairwise distances between persistence diagrams, classical multidimensional scaling (MDS) is then used to reduce the dimensionality of the feature vectors for visualization, and to identify the instances where topologically interesting events occur.
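For intuition, both distances can be evaluated by brute force on tiny diagrams: pad each diagram with the diagonal projections of the other's points (so unmatched features may be sent to the diagonal) and minimize over all bijections. This exponential-time sketch is for illustration only; the paper computes these distances with Dionysus:

```python
from itertools import permutations

def _linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def _diag(p):
    m = (p[0] + p[1]) / 2.0
    return (m, m)  # closest diagonal point under the L-infinity norm

def _cost(a, b):
    # Two diagonal points match each other for free.
    if a[0] == a[1] and b[0] == b[1]:
        return 0.0
    return _linf(a, b)

def diagram_distances(X, Y, p=2):
    """Brute-force bottleneck and p-Wasserstein distances between two
    small persistence diagrams (lists of (birth, death) pairs)."""
    A = list(X) + [_diag(q) for q in Y]
    B = list(Y) + [_diag(q) for q in X]
    bottleneck = float("inf")
    wasserstein = float("inf")
    for perm in permutations(range(len(B))):
        costs = [_cost(A[i], B[j]) for i, j in enumerate(perm)]
        bottleneck = min(bottleneck, max(costs))
        wasserstein = min(wasserstein,
                          sum(c ** p for c in costs) ** (1.0 / p))
    return bottleneck, wasserstein
```

For example, comparing the single-feature diagrams {(0, 2)} and {(0, 4)} gives distance 2 under both measures, while comparing {(0, 2)} against an empty diagram costs the feature's distance to the diagonal, 1.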
3.4 Visualization
The design goal of our interactive visualization tool is to provide insights about variation in the structural properties of time-varying graphs. In this way, we hope to identify time periods of uniform behavior (low variation) and outlier behavior (instances of high variation). Our visualization tool provides a number of capabilities to support this form of investigation.
Timeline.
The timeline view uses the horizontal axis to represent time and the vertical axis to represent the first dimension returned by applying classical MDS to the space of persistence diagrams. This, in essence, highlights the dissimilarity between graph instances. Each point on the timeline represents a single instance of the time-varying graph. The points are colored using cyclic colormaps, such as the time-of-day colormap of the teaser figure or the day-of-the-week colormap of Figure 10.
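The first MDS dimension used by the timeline can be sketched as follows: double-center the squared distance matrix ($B = -\frac{1}{2} J D^{(2)} J$) and extract the dominant eigenvector by power iteration. This is a minimal stand-in for a full classical-MDS routine, with our own function name and a deterministic starting vector:

```python
def mds_first_coordinate(D, iters=200):
    """First classical-MDS coordinate from a pairwise distance matrix.

    Double-centers the squared distances, then runs power iteration;
    the coordinate is the dominant eigenvector scaled by the square
    root of its eigenvalue."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in D2]
    total = sum(row) / n
    B = [[-0.5 * (D2[i][j] - row[i] - row[j] + total)
          for j in range(n)] for i in range(n)]
    v = [1.0 + 0.001 * i for i in range(n)]  # deterministic start
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            return [0.0] * n  # all points coincide
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(B[i][j] * v[j] for j in range(n))
              for i in range(n))
    return [vi * max(lam, 0.0) ** 0.5 for vi in v]
```

For three evenly spaced collinear "instances" (pairwise distances 1, 1, 2), this recovers coordinates equivalent to -1, 0, 1 up to sign.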
Cyclic Patterns.
Two techniques are available for showing repetitive patterns in the data, both being variations of the timeline. The first technique simply splits the data based upon a user-specified period length. Each period is colored uniquely; Figure 6 shows an example of this. For the second technique, the time periods are clustered based upon their norms using $k$-means clustering with a user-specified $k$. Figure 11 shows an example of this, where the points are colored by day-of-the-week.
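A minimal version of the clustering step, assuming each time period has already been summarized as an equal-length vector, is plain Lloyd's $k$-means. For reproducibility this sketch seeds the centers with the first $k$ points rather than the random initialization a production routine would use:

```python
def kmeans(points, k, iters=50):
    """Lloyd's k-means on equal-length tuples.

    Deterministic seeding: the first k points are the initial
    centers. Returns (cluster assignment per point, final centers)."""
    centers = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for idx, p in enumerate(points):
            assign[idx] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2
                                  for a, b in zip(p, centers[c])))
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points))
                       if assign[i] == c]
            if members:
                centers[c] = [sum(xs) / len(members)
                              for xs in zip(*members)]
    return assign, centers
```

On two well-separated pairs of points, the two clusters are recovered regardless of the crude seeding.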
Graph Visualization.
For investigating the behavior of specific graph instances, the instances are displayed using two visualization mechanisms. The first is a node-link diagram created using a force-directed layout. If categorical information is available (such as in the teaser figure), the nodes are colored by those categories. For 1-dimensional topological features, nodes can be parameterized around the tunnel using a 1-dimensional cyclic parameterization [16, 54] and colored accordingly. An example of this is seen in Figure 8. In other cases, nodes receive a fixed color. The second mechanism visualizes the persistence diagram for a given graph instance using its barcode (see fourth row of Figure 5). The barcode is a variation on a bar chart that represents the birth and death of all topological features in the graph.
3.5 Example
We provide an illustrative example of our pipeline in Figure 5. In step 1 (1st row), a time-varying graph is given as a sequence of graph instances, where each instance is a connected, weighted graph. In step 2 (2nd row), each graph instance is embedded into a metric space by calculating a distance matrix using the shortest-path metric. In step 3 (3rd row), the distance matrix is used to compute a series of filtrations. In reality, additional filtrations are created, but we only show those that produce topological events; in this example, only 0-dimensional features appear. In step 4 (4th row), the 0-dimensional persistence diagrams of the filtrations are extracted and shown as barcodes. The final step (5th row) consists of computing the distances between these diagrams using the bottleneck and Wasserstein distances. The bottleneck or Wasserstein distance, as a persistence-based similarity measure, helps to quantify the topological similarity between a pair of instances. For example, under both distances, some pairs of instances are much closer to one another than others.
4 Case Studies
To validate our approach, we look at case studies of two publicly available datasets. Both are communication networks: one involves interpersonal communication of high school students, and the other contains email communications between researchers. These case studies help demonstrate how our approach can identify cyclic patterns in data, deviations from patterns, and one-time events in time-varying graphs. Our pipeline requires a number of tools for processing. Graph processing and metric space embedding are coded using Python. Persistent homology calculations and the bottleneck and Wasserstein distances are computed using Dionysus (http://www.mrzv.org/software/dionysus/). Finally, visualizations are implemented using Processing (https://processing.org/).
4.1 High School Communications
The High School Communications dataset [26] is a time-varying graph that tracks the contact between high school students. The data was collected for 180 students in 5 classes over 7 school days in November 2012 in Marseilles, France. The graph tracks Monday through Friday of the first week and Monday and Tuesday of the following week. We compute both shortest-path and commute-time distances, and both 0- and 1-dimensional persistence diagrams. Then, both the bottleneck and Wasserstein distances are used to compare persistence diagrams. We present a small set of configurations and draw a few conclusions from them. Many similar conclusions have been identified in other configurations that are not shown.
4.1.1 An Average Day
First, to examine an average day of communication, we look at the 0-dimensional features of the first Monday of the dataset in the teaser figure. In this figure, commute-time distance is used to generate persistence diagrams, and the bottleneck distance is used to compare diagrams. A number of phases can be seen. In the early and late hours, no interactions occur (e.g., time A). As the school day begins at time B, light, loosely-connected communications begin. By mid-morning (time C), classes MP*1, PC, PC*, and PSI* are all interacting heavily within and between groups. Midday (times D & E) shows classes heavily interacting once again. Early afternoon (time F) shows mostly within-class communications for classes PC, PC*, and PSI*, and within- and between-class communications for MP*1 and MP*2. Finally, the end of the day (time G) shows much sparser group communications.
4.1.2 Comparison with Other Days
While observing patterns within a single day is interesting, comparing Monday with other days can help to better identify regular and irregular daily behavior. Figure 6 shows just such a comparison; it uses commute-time distance to generate 0-dimensional persistence diagrams, and the Wasserstein distance to compare diagrams. The top chart of Figure 6 compares the first Monday and the first Tuesday. Ignoring outlier graph instances, two main differences can be observed. First, the early morning of Tuesday shows different levels of activity than Monday. This can be confirmed by looking at examples from those days: Figure 6 (top left) shows example graphs from Monday and Tuesday morning. Second, at both the beginning and end of midday, Tuesday shows higher activity than Monday. The middle chart of Figure 6 compares Wednesday, Thursday, and Friday. In this chart, Wednesday and Friday show more early morning activity than Monday, but Thursday shows activity levels similar to Tuesday. Individual graph instances of the time-varying graph from this timeframe can be seen in Figure 6 (middle left). Late morning shows that Wednesday is extremely active, while Thursday and Friday are mostly inactive. Midday across all three days remains similar. Finally, the afternoons of all three days are similarly inactive. Sample graphs for this timeframe can be seen in Figure 6 (middle right). The bottom of Figure 6 shows the second Monday and Tuesday. These days show almost no morning activity (also see Figure 6 (bottom left)) and normal midday activity. Early afternoon shows mid-range and high activity for Monday and Tuesday, respectively. Graphs associated with these activity levels can be seen in Figure 6 (bottom right). As a means to compare results to a more traditional analytic, Figure 7 bottom is a timeline that captures the number of interaction events for a given graph instance in the time-varying graph (i.e., the sum of the edge weights). Comparing this chart to the persistence-based timeline in Figure 7 top, it is clear that our approach captures a different type of behavior than edge counting alone.

4.1.3 1-Dimensional Topological Features
The High School Communications dataset ultimately contains very few 1-dimensional topological features, the majority of which have low persistence. The one-time exception, which appears on the first Monday, can be seen in Figure 8. Between 11:48 am and 12:48 pm, a high-persistence 1-dimensional pattern appears in the graph. The nodes of the graph are parameterized using that feature and visualized using a cyclic rainbow colormap. The graph shows a large tunnel (loop) towards the upper left.
4.2 EU Research Institution E-Mail
The EU Research Institution Email dataset [42] (http://snap.stanford.edu/data/emailEucore.html) is an anonymized time-varying graph tracking emails between members of "a large European research institution". We have used the smaller of the available networks, containing 986 nodes and 332,334 temporal edges. The graph tracks the activity for 803 days. A period of about 200 days is missing towards the end of the dataset, so we have analyzed the first 500 days. A single graph instance is created per day and shares overlap with neighboring days. Once again, edge weight is chosen by counting the number of communications during the graph instance.
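Constructing per-day weighted instances from timestamped communication events can be sketched as follows; for simplicity this sketch omits the overlap shared with neighboring days, and the `(day, sender, receiver)` event format is our assumption, not the dataset's exact schema:

```python
from collections import Counter

def daily_instances(events):
    """Group timestamped contact events (day, u, v) into one weighted
    graph instance per day; the weight of an edge is the number of
    communications between its endpoints on that day."""
    instances = {}
    for day, u, v in events:
        key = (min(u, v), max(u, v))  # canonical undirected edge
        instances.setdefault(day, Counter())[key] += 1
    return instances
```

For example, two messages between `a` and `b` on the same day (in either direction) yield a single edge of weight 2 in that day's instance.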
4.2.1 Bottleneck vs. Wasserstein Distance
The bottleneck and Wasserstein distances both capture important but distinct differences among sets of topological features. Intuitively, the bottleneck distance captures the most perturbed topological feature (the extreme behavior), while the Wasserstein distance captures the perturbation across all features (the average behavior). Figure 9 shows how this impacts the analysis of the EU Email dataset. For the 0-dimensional (Figure 9(a)) and 1-dimensional (Figure 9(b)) bottleneck distances, the result is noisy, as the value captured has the most variation. For the 0-dimensional (Figure 9(c)) and 1-dimensional (Figure 9(d)) Wasserstein distances, the result is smoother, since it encodes the perturbations across all features. For our analysis of the EU Email data, this property is more desirable.
4.2.2 Revealing Cyclic Patterns
Upon investigating the data, cyclic patterns were immediately apparent with all configurations of the Wasserstein distance (0- & 1-dimensional features, and shortest-path & commute-time). Figure 10 A & B show the 1-dimensional shortest-path version, where the cyclic patterns are most prominent (also see supplemental material for the complete 1-dimensional feature timeline). It is notable that this pattern is related to the natural cycle of the week. To identify the pattern of the "standard" week, we divided the data into 7-day segments and used k-means clustering to group similar weeks. Figure 11 shows the result with 5 clusters. Each of the 5 clusters shows a version of the typical week for this institution.

4.2.3 One-time Events
When looking at the entire timeline (see supplemental material), a number of one-time events are easily discovered. Figure 10 C & F are two such events. During these time periods, very little activity is present in the graphs. These times happen to be the last week of December and the first few days of January, during the Christmas and New Year's holidays. Figure 10 D is a one-time event that shows an extreme increase in activity for a 1-2 day period. After entering the date, June 13, 2004, into Google, we discovered that this day corresponds to the release day of the results for the EU Parliamentary Election. Finally, Figure 10 E shows a 3-4 week period of significantly decreased activity. Despite our best efforts, we could not identify a major external event that would have caused such a reduction, and since the data is anonymized, we could not identify the institution to investigate a local or internal cause.
5 Discussion
In the previous sections, we constructed a similarity measure between two graph instances of a time-varying graph by utilizing the bottleneck or Wasserstein distance between their persistence diagrams, which encode the topological features associated with each instance. However, one might ask: why persistent homology? We argue that using topological data analysis, and in particular persistent homology, to study graphs has complementary benefits and offers new insights. In this section, we conduct several experiments to justify our approach. In addition, we describe some intuition behind the information encoded by the persistence diagram of a graph, and the distance functions defined on persistence diagrams.
5.1 Persistence Diagram as a Graph Fingerprint
Conventional graph-theoretical approaches typically utilize the statistical properties of the vertices and edges, for instance degrees, connectivity, and path lengths, to describe the short-range and pairwise interactions in the system. On the other hand, topological summaries, such as persistence diagrams, are compressed feature representations of the underlying data that can capture long-range and higher-order interactions. We test our persistence-based similarity measure against a set of desirable properties for a similarity measure on a graph (the first four conditions are introduced in [41]):

Edge importance: An edge whose insertion or deletion changes the number of connected components is more important than an edge that does not.

Weight awareness: In weighted graphs, the bigger the weight of the removed edge is, the greater the impact on the similarity measure should be.

Edge-submodularity: Changing an edge in a dense graph is less important than changing an edge in an equally-sized, sparse graph.

Focus awareness: Random changes in graphs are less important than targeted changes of the same extent.

Node awareness: We add an extra condition in this paper, i.e., deleting a large number of nodes in a graph has a larger impact than deleting a small number of nodes from the same graph.
We conduct several experiments on synthetic and real-world datasets to test the above conditions.
For node awareness (property 5), we consider the graphs shown in Figure 12 (c) top left. Each of the graphs is obtained from the original graph by deleting a number of nodes (in blue). The bottleneck and Wasserstein distance matrices between these graphs are shown in the top of Figure 12 (a)-(b). The 1-dimensional distance matrices are omitted since their entries are all zeros. From the matrices in Figure 12 (a)-(b) top, we observe that the persistence-based similarity measure is sensitive to node deletion, that is, it satisfies node awareness; in particular, the Wasserstein distance is more node-aware than the bottleneck distance in these examples.
Similarly, to test edge importance (property 1) against our similarity measure, we delete a set of edges from a graph, shown in Figure 12 (c) top right. Each modified graph is obtained from the original by deleting edges (in blue). The bottleneck and Wasserstein distance matrices among these graphs are shown in Figure 12 (a)-(b) bottom. We observe that our persistence-based similarity measure is sensitive to edge deletions that change the connectivity of the graph, that is, it satisfies edge importance. Notice how the Wasserstein distance is more aware of the level of (dis)connectedness between the graphs.

To test weight awareness (property 2), we run our test on three randomly generated, weighted graphs. Each is generated from the $G(n, m)$ random graph model, where a graph is chosen uniformly at random from the set of all graphs with $n$ nodes and $m$ edges. The weights on the edges are drawn uniformly at random. For each graph $G$, we obtain a set of modified graphs by only modifying the weight of a single edge $e$ (repeating this for every edge), where the weight perturbation is drawn uniformly at random; similarly, we obtain a second set of modified graphs from $G$ by only modifying the weight of edge $e$ with a larger perturbation, again drawn uniformly at random. Property 2 holds when the graph with the larger weight change is farther from $G$ than the graph with the smaller weight change, for every edge $e$. In Figure 13 we represent the difference by plotting, for each edge, the point whose coordinates are the two distances; hence, property 2 holds for a point on and above the diagonal. Note that our similarity measure satisfies weight awareness for dimension 0 but violates the condition for dimension 1. This is due to the fact that, for some edges, the weight change captures the creation or destruction of a cycle.

To test edge-submodularity (property 3), we consider sets of four graphs sharing the same number of nodes: a denser graph, a sparser graph, and the two graphs obtained from them respectively by deleting an edge.
We test property 3 against four sets of small synthetic graphs in Figure 12 (c) bottom; the results are shown in Table 1. We see that both the Wasserstein and bottleneck distances on the 0-dimensional diagrams better capture the changes that occur in a sparser graph than in an equally-sized denser graph, i.e., they satisfy edge-submodularity in dimension 0. However, these distances behave differently on the 1-dimensional diagrams. Table 1 shows some negative entries; this is due to the fact that, between a graph and its modification, a cycle is either created or destroyed, while no cycle appears or disappears in the corresponding sparser pair.
Graphs
0     0.25   0     0.5
0.25  0.25   0.5   0.5
0     1      0     1
0.25  1      0.5   1
0.25  0      0.5   0
For focus awareness (property 4), we generate three random weighted graphs following the same $G(n, m)$ model as before, with varying numbers of vertices and edges, and with all edge weights chosen uniformly at random. We generate a collection of so-called corrupted (i.e., modified) graphs from each original graph with two types of corruption: (1) deleting an increasing percentage of edges of the original graph uniformly at random; and (2) deleting the same number of edges from the original graph in a targeted way, specifically among the edges with the largest weights. For each graph, we plot the difference between the distance under targeted corruption and the distance under random corruption against the percentage of deleted edges.
We obtain observations in Figure 14 similar to those of the property 3 test: our persistence-based similarity measure satisfies the focus awareness property in dimension 0 but not in dimension 1. This is due to the fact that the deletion of an edge might create a cycle in the corrupted graph (see the negative values in Figure 14, bottom).
5.2 Stability Under Perturbation
The persistence diagram computation depends on the distance matrix we impose on a graph. A natural question is: what are the advantages of using the persistence diagram of a graph, rather than the distance matrix itself, as a topological fingerprint? In this section, we give experimental evidence to justify our choice of a persistence-based similarity measure. To simplify the analysis, we perturb a small percentage of edges in a simple example, the "map of science" graph [7], and we focus only on edge deletion. One set of results is omitted here because it is similar to the one shown. The map of science graph serves as our baseline graph.
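As a concrete illustration of how a persistence-based fingerprint is derived from a distance matrix: in dimension 0, the persistence diagram of the Rips filtration can be read off the minimum spanning tree of the metric space, since every point starts as its own component at scale 0 and components die at the scales where MST edges merge them. The following is a minimal pure-Python sketch of this fact (our own illustration, not the implementation used for the experiments):

```python
def zero_dim_persistence(dist):
    # 0-dimensional Rips persistence of a finite metric space:
    # every component is born at 0; components merge (die) at the
    # weights of the minimum spanning tree edges (Kruskal's algorithm).
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    diagram = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, w))  # a component dies at scale w
    return diagram  # the one essential class (born 0, never dies) is implicit
```

Two such diagrams can then be compared with the bottleneck or Wasserstein distance.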
Edge Deletion Model.
Our edge deletion model is designed as follows. At each perturbation step, a fixed percentage of edges is deleted from the baseline uniformly at random, with the percentage incremented at every step; each perturbation is repeated multiple times to obtain (almost) unbiased results. We perform perturbation steps until a maximum percentage of edges has been deleted from the baseline.
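The deletion model can be sketched as a simple loop; the step count, trial count, and one-percent increment below are illustrative placeholders of our own choosing:

```python
import random

def deletion_trials(edges, steps=30, trials=50, seed=0):
    # At step i, delete i% of the edges uniformly at random; repeat
    # each step `trials` times and collect the deleted-edge sets.
    rng = random.Random(seed)
    results = []
    for i in range(1, steps + 1):
        k = int(len(edges) * i / 100)
        results.append([set(rng.sample(edges, k)) for _ in range(trials)])
    return results
```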
Similarity Measures.
We compare several similarity measures. Let the baseline graph have distance matrix D0 under its metric space representation, and let Di denote the distance matrix of the metric space representation of a perturbed graph instance at the i-th perturbation step. The first set of similarity measures is based on the bottleneck and Wasserstein distances: we examine the bottleneck distance and the Wasserstein distance between the 0- and 1-dimensional persistence diagrams associated with D0 and Di, respectively. The second set of similarity measures is based on matrix norms of the difference of the distance matrices. We measure the matrix max norm, ‖D0 − Di‖max, where ‖A‖max is the maximum of |ajk| over all entries of a matrix A = (ajk); and the matrix Frobenius norm, ‖D0 − Di‖F, where ‖A‖F is the square root of the sum of the squared entries of A.
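The matrix-norm measures are straightforward to compute with NumPy; a minimal sketch, where D0 and Di are the two distance matrices as arrays (function names are ours):

```python
import numpy as np

def max_norm_similarity(D0, Di):
    # Matrix max norm of the difference: the largest |entry| of D0 - Di.
    return np.abs(D0 - Di).max()

def frobenius_similarity(D0, Di):
    # Frobenius norm: square root of the sum of squared entries of D0 - Di.
    return np.linalg.norm(D0 - Di, "fro")
```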
Experimental Results.
Figure 15 shows our experimental results. Figure 15(a) uses the shortest-path distance metric in the computation of the various similarity measures, while Figure 15(b) uses the commute-time distance metric. Each subfigure is a box plot whose y-axis corresponds to a particular similarity measure. Since these similarity measures are not directly comparable, the range of the y-axis of each plot has been normalized according to the maximum value of that measure across all experimental instances. In Figure 15(a), under the shortest-path distance metric, there appears to be a linear relationship between the amount of perturbation and the bottleneck distance (and the Wasserstein distance). Furthermore, the Wasserstein distance has a smaller variance than the bottleneck distance, making it suitable for studying global perturbation in the data. On the other hand, similarity measures based on matrix norms are relatively unstable: both the max norm and the Frobenius norm show large fluctuations and variance, making them less suitable for analysis. Moreover, these measures fail completely when the perturbed graph becomes disconnected, which is not an issue for our approach. In Figure 15(b), under the commute-time distance, we observe that the persistence-based measures appear to be less noisy and more stable than under the shortest-path distance metric.

6 Conclusion
Time-varying graphs are becoming increasingly important in data analysis and visualization. In this paper, we address the problem of capturing and visualizing structural changes in time-varying graphs using techniques originating from topological data analysis, in particular, persistent homology. We provide a simple and intuitive visual interface for investigating structural changes in the graph using persistence-based similarity measures. There are many ongoing and future research avenues based upon our approach. For example, in our work, we restrict topological feature extraction to Rips filtrations. Other types of filtrations, such as the clique filtration [55], may be natural for analyzing and understanding time-varying graphs. One interesting question that arises in our approach is how best to convert edge weights into distances. The conventional wisdom is that the stronger the communication between nodes (i.e., the higher the edge weight), the closer together they should be. However, we have some evidence that such a conversion may not always capture the underlying structural changes, and that an inverse weighting scheme may sometimes be more effective. It would also be interesting to perform a systematic comparison against a wide range of similarity measures for time-varying graphs [45], in particular, to see how these different measures can complement one another in enriching our current visual analytic framework. Finally, we hope the work described here inspires more graph visualization research to move beyond graph-theoretical measures and venture into techniques from topological data analysis.

References
 [1] L. Babai. Graph isomorphism in quasipolynomial time. arxiv.org/abs/1512.03547, 2016.
 [2] M. Bampasidou and T. Gentimis. Modeling collaborations with persistent homology. CoRR, abs/1403.5346, 2014.
 [3] M. Bastian, S. Heymann, and M. Jacomy. Gephi: an open source software for exploring and manipulating networks. In ICWSM, pages 361–362, 2009.
 [4] M. Baur and M. Benkert. Network comparison. In Network analysis, pages 318–340. Springer, 2005.
 [5] F. Beck, M. Burch, S. Diehl, and D. Weiskopf. The state of the art in visualizing dynamic graphs. EuroVis STAR, 2, 2014.
 [6] I. Borg and P. J. Groenen. Modern multidimensional scaling: Theory and applications. Springer Science & Business Media, 2005.
 [7] K. Börner, R. Klavans, M. Patek, A. M. Zoss, J. R. Biberstine, R. P. Light, V. Larivière, and K. W. Boyack. Design and update of a classification system: The ucsd map of science. PloS one, 7(7):e39464, 2012.
 [8] M. Burch, B. Schmidt, and D. Weiskopf. A matrixbased visualization for exploring dynamic compound digraphs. In Information Visualisation International Conference, pages 66–73. IEEE, 2013.

[9] G. Carlsson. Topological pattern recognition for point cloud data. Acta Numerica, 23:289–368, 2014.
 [10] C. J. Carstens and K. J. Horadam. Persistent homology of collaboration networks. Mathematical Problems in Engineering, 2013, 2013.
 [11] B. Cassidy, C. Rae, and V. Solo. Brain activity: Conditional dissimilarity and persistent homology. IEEE 12th International Symposium on Biomedical Imaging (ISBI), pages 1356 – 1359, 2015.
 [12] G. Chartrand, F. Saba, and H. B. Zou. Edge rotations and distance between graphs. Časopis pro pěstování matematiky, 110(1):87–91, 1985.
 [13] F. R. Chung. Spectral graph theory, volume 92. American Mathematical Soc., 1997.
 [14] D. M. Cvetković, M. Doob, and H. Sachs. Spectra of graphs: theory and application, volume 87. Academic Press, 1980.
 [15] Y. Dabaghian, F. Mémoli, L. Frank, and G. Carlsson. A topological paradigm for hippocampal spatial map formation using persistent homology. PLoS Computational Biology, 8(8):e1002581, 2012.
 [16] V. de Silva, D. Morozov, and M. VejdemoJohansson. Persistent cohomology and circular coordinates. Proceedings 25th Annual Symposium on Computational Geometry, pages 227–236, 2009.
 [17] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.
 [18] K. Dinkla, M. A. Westenberg, and J. J. van Wijk. Compressed adjacency matrices: untangling gene regulatory networks. IEEE Transactions on Visualization and Computer Graphics, 18(12):2457–2466, 2012.
 [19] I. Donato, G. Petri, M. Scolamiero, L. Rondoni, and F. Vaccarino. Decimation of fast states and weak nodes. Proceedings of the European Conference on Complex Systems, pages 295–301, 2012.
 [20] C. Dunne and B. Shneiderman. Motif simplification. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2013.
 [21] T. Dwyer, N. H. Riche, K. Marriott, and C. Mears. Edge compression techniques for visualization of dense directed graphs. IEEE Transactions on Visualization and Computer Graphics, 19(12):2596–2605, 2013.
 [22] W. E, J. Lu, and Y. Yao. The landscape of complex networks. CoRR, abs/1204.6376, 2012.
 [23] H. Edelsbrunner and J. Harer. Persistent homology  a survey. Contemporary Mathematics, 453:257–282, 2008.
 [24] H. Edelsbrunner and J. Harer. Computational Topology: An Introduction. American Mathematical Society, Providence, RI, USA, 2010.
 [25] J. Ellson, E. Gansner, L. Koutsofios, S. C. North, and G. Woodhull. Graphviz: Open source graph drawing tools. In Graph Drawing, pages 483–484. Springer, 2002.
 [26] J. Fournet and A. Barrat. Contact patterns among high school students. PloS one, 9(9):e107878, 2014.
 [27] F. Fouss, A. Pirotte, J.M. Renders, and M. Saerens. Randomwalk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE transactions on Knowledge and data engineering, 19(3):355–369, 2007.
 [28] T. M. Fruchterman and E. M. Reingold. Graph drawing by forcedirected placement. Software: Practice and experience, 21(11):1129–1164, 1991.
 [29] E. R. Gansner, Y. Hu, S. North, and C. Scheidegger. Multilevel agglomerative edge bundling for visualizing large graphs. In IEEE Pacific Visualization Symposium, pages 187–194, 2011.
 [30] E. R. Gansner, Y. Koren, and S. North. Graph drawing by stress majorization. In Graph Drawing, pages 239–250. Springer, 2005.
 [31] E. R. Gansner, E. Koutsofios, S. C. North, and K.P. Vo. A technique for drawing directed graphs. IEEE Transactions on Software Engineering, 19(3):214–230, 1993.
 [32] R. Ghrist. Barcodes: The persistent topology of data. Bulletin of the American Mathematical Society, 45:61–75, 2008.
 [33] M. Greilich, M. Burch, and S. Diehl. Visualizing the evolution of compound digraphs with timearctrees. In Computer Graphics Forum, volume 28, pages 975–982. Wiley Online Library, 2009.
 [34] D. Hansen, B. Shneiderman, and M. A. Smith. Analyzing social media networks with NodeXL: Insights from a connected world. Morgan Kaufmann, 2010.
 [35] D. Holten and J. J. Van Wijk. Forcedirected edge bundling for graph visualization. In Computer Graphics Forum, volume 28, pages 983–990. Wiley Online Library, 2009.
 [36] Y. Hu. Efficient, highquality forcedirected graph drawing. Mathematica Journal, 10(1):37–71, 2005.
 [37] W. Javed and N. Elmqvist. Exploring the design space of composite visualization. In PacificVis 2012, pages 1–8. IEEE, 2012.
 [38] S. Kairam, D. MacLean, M. Savva, and J. Heer. Graphprism: Compact visualization of network structure. In Advanced Visual Interfaces, 2012.
 [39] M. Khoury, Y. Hu, S. Krishnan, and C. Scheidegger. Drawing large graphs by lowrank stress majorization. In Computer Graphics Forum, volume 31, pages 975–984. Wiley Online Library, 2012.
 [40] Y. Koren, L. Carmel, and D. Harel. Ace: A fast multiscale eigenvectors computation for drawing huge graphs. In IEEE Symposium on Information Visualization, pages 137–144, 2002.
 [41] D. Koutra, J. T. Vogelstein, and C. Faloutsos. Deltacon: A principled massivegraph similarity function. In Proceedings of the 2013 SIAM International Conference on Data Mining, pages 162–170. SIAM, 2013.
 [42] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data (TKDD), 1(1):2, 2007.
 [43] G. Li, M. Semerci, B. Yener, and M. J. Zaki. Graph classification via topological and label attributes. In Proceedings of the 9th international workshop on MLG, volume 2, 2011.
 [44] K. Misue, P. Eades, W. Lai, and K. Sugiyama. Layout adjustment and the mental map. Journal of Visual Languages & Computing, 6(2):183–210, 1995.
 [45] N. D. Monnig and F. G. Meyer. The resistance perturbation distance. arXiv preprint arXiv:1605.01091, 2016.
 [46] A. Noack. Energy models for graph clustering. Journal of Graph Algorithms and Applications, 11(2):453–480, 2007.

[47] P. Papadimitriou, A. Dasdan, and H. Garcia-Molina. Web graph similarity for anomaly detection. Journal of Internet Services and Applications, 1(1):19–30, 2010.
 [48] D. Selassie, B. Heller, and J. Heer. Divided edge bundling for directional network data. IEEE Transactions on Visualization and Computer Graphics, 17(12):2354–2363, 2011.
 [49] W. T. Tutte. How to draw a graph. Proceedings of the London Mathematical Society, s3-13(1):743–767, Jan 1963.
 [50] S. van den Elzen, D. Holten, J. Blaas, and J. J. van Wijk. Reducing snapshots to points: A visual analytics approach to dynamic network exploration. IEEE transactions on visualization and computer graphics, 22(1):1–10, 2016.

[51]
S. V. N. Vishwanathan, N. N. Schraudolph, R. Kondor, and K. M. Borgwardt.
Graph kernels.
Journal of Machine Learning Research
, 11(Apr):1201–1242, 2010.  [52] J. T. Vogelstein and C. E. Priebe. Shuffled graph classification: Theory and connectome applications. arXiv preprint arXiv:1112.5506, 2011.
 [53] T. Von Landesberger, A. Kuijper, T. Schreck, J. Kohlhammer, J. J. van Wijk, J.D. Fekete, and D. W. Fellner. Visual analysis of large graphs: Stateoftheart and future research challenges. In Computer graphics forum, volume 30, pages 1719–1749. Wiley Online Library, 2011.
 [54] B. Wang, B. Summa, V. Pascucci, and M. VejdemoJohansson. Branching and circular features in high dimensional data. IEEE Transactions on Visualization and Computer Graphics, 17:1902–1911, 2011.
 [55] A. Zomorodian. The tidy set: A minimal simplicial set for computing homology of clique complexes. Proceedings 26th ACM Symposium on Computational Geometry, pages 257–266, 2010.