
VCExplorer: An Interactive Graph Exploration Framework Based on Hub Vertices with Graph Consolidation

09/20/2017
by   Huiju Wang, et al.

Graphs have been widely used to model different information networks, such as the Web, biological networks and social networks (e.g. Twitter). Due to the size and complexity of these graphs, how to explore and utilize them has become a very challenging problem. In this paper, we propose VCExplorer, a new interactive graph exploration framework that integrates the strengths of graph visualization and graph summarization. Unlike existing graph visualization tools, where vertices of a graph may be clustered into a smaller collection of super/virtual vertices, VCExplorer displays a small number of actual source graph vertices (called hubs) together with summaries of the information between these vertices. We refer to such a graph as an HA-graph (Hub-based Aggregation Graph). This allows users to appreciate the relationships between the hubs, rather than between super/virtual vertices. Users can navigate through the HA-graph by "drilling down" into the summaries between hubs to display more hubs. We illustrate how graph aggregation techniques can be integrated into the exploration framework to present consolidated information to users. In addition, we propose efficient graph aggregation algorithms over multiple subgraphs via computation sharing. Extensive experimental evaluations have been conducted using both real and synthetic datasets, and the results indicate the effectiveness and efficiency of VCExplorer for exploration.


I Introduction

Graphs are powerful tools to model a variety of information networks, such as the Web, biological networks and social networks (e.g. Twitter). In a graph, each vertex usually represents one real world object and each edge indicates the link between two objects. Normally, both vertices and edges may be annotated with attributes or labels.

These graphs contain a wealth of valuable information to support a wide variety of queries for information discovery and decision making. To better understand the information encoded in the underlying graphs, different approaches have been used to explore these data.

On one hand, we have summarization-based methods that aim to simplify or summarize a graph into a coarser, higher-level graph that is normally referred to as a view. These approaches include graph summarization [1], graph aggregation in graph OLAP [2], graph clustering and so on. The common methodology of these approaches is to aggregate multiple vertices (resp. edges) into one super vertex (resp. edge) based on certain rules (e.g. by clustering or aggregating the vertices with the same attributes), yielding a view with far fewer vertices and edges. This makes it easier to visualize a large and complex graph. On the other hand, we have visualization-based methods (e.g. [3]) that convey the content of a graph by displaying the whole graph, including all the individual vertices and links, on a screen via a graph layout. The mainstream approach among these mechanisms is graph visualization, which presents the individual vertices and the links among them in the visualization space.

From the users’ point of view, graph summarization/aggregation methods show a summarized view but hide the original individual vertices; conversely, graph visualization schemes show all individual vertices but hide the summarized view. Each approach has its own strengths and limitations for exploring a graph. As the size of the graph increases, what to show and what to hide plays an important role in the effectiveness of graph exploration.

I-A A Running Example over a Social Network

Typically, a social network is modeled as a graph. Vertices of the graph represent persons, whereas edges represent relationships between the vertices. Both vertices and edges may have attributes. Figure 2(a) shows such a social network. Each vertex is affiliated with an attribute name, and each edge is affiliated with a relationship type (e.g., friend, relative) between two vertices. Given such a social network, an analyst may be interested in finding out how user bingfish is connected with user kristy. Each path between bingfish and kristy represents one type of connection between them, and there are potentially an exponential number of such paths. Under visualization-based methods, it is not feasible to show the entire graph (or the subgraph containing all paths between them) to users, as the display becomes too cluttered (as shown in Figure 2(a)). With summarization-based methods, the resultant view incurs information “loss”: the vertices bingfish and kristy are not shown at certain levels. Therefore, for the aforementioned query, neither approach can effectively facilitate exploration.

In this work, we advocate an alternative approach that displays a subgraph (called an HA-graph) containing a subset of the actual vertices (called hubs) between bingfish and kristy (note that both vertices bingfish and kristy are themselves hubs), as well as summaries of the relationships and information between these vertices.

Fig. 1: A running example of VCExplorer. (a) A derived Twitter network dataset (the network consists of the bi-directional edges of the input Twitter network; for clarity, we draw bi-directional edges as undirected ones in Figure 2) with 5k vertices and 18k edges, visualized by Cytoscape [4]. (b) The output HA-graph of SQ1. (c) The HA-graph after zooming in on an edge in (b). In (b) and (c), the width of an edge represents the relationship strength of the induced subgraph represented by the edge; each edge is labeled with its representative relationship type as well as a count of the number of vertices in the associated induced subgraph.

Such an approach allows users to engage with the original/source vertices (rather than virtual vertices) and with consolidated summary information about the hidden vertices (i.e., vertices that are not hubs in the current graph). Our approach may be viewed as a generalization of the above two approaches: if all vertices are chosen as hubs, it becomes a graph visualization approach; if no hub is selected, it becomes a graph summarization approach. We have developed VCExplorer (Vertex and Consolidation Based Explorer), a novel graph exploration framework that does precisely what we advocate. VCExplorer starts by accepting a new type of graph exploration query (denoted as a GE-query) that is formally defined in Section II. The following is an example GE-query, denoted by SQ1, on the social network graph in Figure 2:

SELECT TopMaxDegreeVertices(G’, 2)

FROM Subgraph(G, kristy, bingfish, 4) G’

GROUP BY betweenness()

SUMMARIZE BY relationshipStrength(),

relationshipType(),

vertexCount()

Given a GE-query, VCExplorer first derives the target subgraph to be explored. For social network applications, we expect users to explore relationships among people close to each other. In SQ1, the FROM clause specifies the subgraph of interest by using a user-defined function Subgraph, which extracts the subgraph of G that consists of all vertices/edges along paths (with a path length of at most 4 hops) between the pair of vertices kristy and bingfish (if the two vertices are more than 4 hops apart, that distance should instead be used to bound the search space). The SELECT clause identifies a set of hubs using a user-defined function, TopMaxDegreeVertices(G', 2), which returns the two vertices in G' with the maximum vertex degree; these hubs represent the two most influential people connecting kristy and bingfish. For SQ1, suppose two such vertices are selected. Unlike graph visualization methods, only the hubs will be displayed in the resultant graph (as shown in Figure 2(b)). In this way, the result is visually more appealing, since fewer but more important vertices are displayed.

Given the hubs (including vertices kristy and bingfish), the GROUP BY clause then induces a subgraph of G' between every pair of hubs using a user-defined function which determines, for each induced subgraph G'(u, v) (w.r.t. a pair of hubs u and v) and for each vertex w in G', whether w is contained in G'(u, v). For SQ1, the betweenness function in the GROUP BY clause includes a vertex w in an induced subgraph if w lies along some path between u and v in G'. An edge belongs to G'(u, v) if both its endpoints are in G'(u, v). Note that a vertex/edge can be contained in multiple induced subgraphs.

The SUMMARIZE BY clause specifies a list of user-defined aggregation functions to compute summary information for each of the induced subgraphs. In SQ1, the user is interested in the following three pieces of summary information for each induced subgraph G'(u, v). The first is the closeness of the two hubs u and v based on trust propagation among the users in G'(u, v) [5], computed by the relationshipStrength function. The second is the most representative relationship between the two hubs, such as a “friend’s friend” relationship; the relationshipType function returns the concatenation of the relationship types along the shortest path between u and v. The third is a count of the number of vertices in the induced subgraph, computed by the vertexCount function.
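As a concrete illustration, the last two aggregation functions could be sketched as follows. This is a minimal sketch, not VCExplorer's implementation: the adjacency-list representation with edge labels and the BFS-based shortest path are our assumptions.

```python
from collections import deque

def vertex_count(subgraph):
    """Count vertices in an induced subgraph given as {vertex: [(neighbor, rel_type), ...]}."""
    return len(subgraph)

def relationship_type(subgraph, u, v):
    """Concatenate relationship types along one shortest path from u to v (BFS)."""
    parent = {u: None}                      # vertex -> (predecessor, edge label)
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            break
        for y, rel in subgraph.get(x, []):
            if y not in parent:
                parent[y] = (x, rel)
                q.append(y)
    if v not in parent:
        return None                         # v unreachable from u
    labels = []
    while parent[v] is not None:            # walk back from v to u
        p, rel = parent[v]
        labels.append(rel)
        v = p
    return "'s ".join(reversed(labels))     # e.g. "friend's friend"
```

For a two-hop path of friend edges, the function returns the "friend's friend" label mentioned above.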

In general, all the information discovered can be visualized as a graph, referred to as a Hub-based Aggregation Graph (HA-graph) in this paper. In the resultant HA-graph, the vertices are the hubs and the edges are the connections among them, each associated with the summarized information. For instance, the resultant HA-graph of SQ1 is shown in Figure 2(b). The HA-graph is much clearer than visualizing all the vertices in the underlying graph. In addition, the HA-graph allows users to navigate and explore by zooming to the next level. To analyze why a pair of hubs is weakly connected, the analyst may zoom in to the subgraph between them by issuing another GE-query. The resultant graph is shown in Figure 2(c).

I-B Contributions

Our contributions may be summarized as follows:

  • We present VCExplorer, a novel graph exploration framework. VCExplorer combines the innovative ideas of graph visualization and graph summarization: on one hand, it shows a subset of vertices at a time without cluttering the display; on the other hand, it summarizes the information of the “hidden” vertices. Compared to traditional graph visualization approaches, VCExplorer is able to provide much clearer and more useful information. It also offers an effective mechanism to navigate through the graph.

  • We illustrate how the VCExplorer framework can be designed by incorporating existing technologies. Each component of VCExplorer covers many research problems, most of which have been studied for a long time. We further study how emerging graph aggregation techniques can be integrated with VCExplorer as one approach to summarizing the relationship between two hub vertices. We propose and study efficient algorithms that share computations to salvage partial work done.

  • We conduct extensive experimental evaluation based on both real and synthetic data. The experimental results demonstrate that VCExplorer is effective and efficient.

II VCExplorer: The Big Picture

It is both interesting and challenging to develop techniques that support graph exploration in real time. In this section, we introduce VCExplorer by giving an overview of its features and components.

II-A Graph Exploration Query

The exploration starts by accepting a user’s query defined as follows.

Definition 1

A graph exploration query (GE-Query) is used to explore a data graph G by identifying a subgraph of interest G', a subset of vertices of interest (i.e., hubs) H in G', and computing summarized information for each subgraph induced by every pair of hubs in H. A GE-Query is characterized by five components, which can be expressed using the following syntax:

SELECT <hub-selection function>

FROM <subgraph function> G'

GROUP BY <grouping function>

SUMMARIZE BY <aggregation function 1>, ..., <aggregation function m>

where

  • G is the input data graph, from which a subgraph of interest G' is extracted by the user-defined subgraph function in the FROM clause.

  • The hub-selection function in the SELECT clause is a user-defined function that returns a set of hubs H from the subgraph of interest G'. Possible selection criteria include “selecting vertices with a specific attribute value”, “selecting the top-k vertices with maximum closeness centrality value” and so on. For each selection criterion, the system may build an index to accelerate the computation of the selection.

  • The grouping function in the GROUP BY clause is a user-defined function that computes an induced subgraph of G', denoted by G'(u, v), for each pair of hubs (u, v) from H. An example is the betweenness function illustrated in SQ1, whose computation can be accelerated using a reachability index.

  • Each aggregation function in the SUMMARIZE BY clause computes some summarized information for each of the induced subgraphs G'(u, v). The summarized information could be path-related information (e.g., shortest path length) or aggregation information (e.g., an aggregate graph based on different attributes, as in graph OLAP [2]). In [6], we developed aggregation sharing algorithms that utilize the overlaps between subgraphs to share computations.
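Putting the five components together, GE-query evaluation can be sketched as the following pipeline. This is our own illustrative sketch; the function signatures and graph representation are assumptions, not VCExplorer's actual API.

```python
def ge_query(G, subgraph_fn, hub_fn, group_fn, agg_fns):
    """Evaluate a GE-query: FROM extracts G', SELECT picks hubs,
    GROUP BY induces a subgraph per hub pair, SUMMARIZE BY aggregates."""
    G_prime = subgraph_fn(G)                  # FROM clause
    hubs = hub_fn(G_prime)                    # SELECT clause
    ha_edges = {}
    for i, u in enumerate(hubs):
        for v in hubs[i + 1:]:
            sub = group_fn(G_prime, u, v)     # GROUP BY clause: induced subgraph
            if sub:                           # HA-edge exists only for a non-empty subgraph
                ha_edges[(u, v)] = [f(sub) for f in agg_fns]  # SUMMARIZE BY clause
    return hubs, ha_edges
```

The returned pair (hubs, labeled edges) corresponds to the HA-graph defined in Section II-B.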

II-B Hub-based Aggregation Graph

The output of a GE-query is formally defined as an HA-graph.

Definition 2

Hub-based Aggregation Graph (HA-graph): Given a GE-query, the result is a graph called the HA-graph Gh = (Vh, Eh), where Vh is the set of hubs extracted from the subgraph of interest G'; note that the set of hubs also includes any vertex argument of the function used to compute the subgraph of interest. An edge (u, v) is in Eh only if the induced subgraph G'(u, v) is non-empty. Each vertex in Vh is associated with a set of attribute values inherited from the corresponding vertex in G'. Each edge (u, v) in Eh is associated with a set of summarized values, where each value is an aggregate computed by an aggregation function on the induced subgraph G'(u, v) for the pair of hubs (u, v).

Figure 2(b) shows the resultant HA-graph for SQ1, which consists of the two most influential users between kristy and bingfish. The labeled edges between a pair of vertices indicate the summarized information for the induced subgraph between the vertices. For instance, one edge in Figure 2(b) indicates that the number of vertices in its induced subgraph is 19, that the shortest path between its two hubs in the induced subgraph has length 3 (with three edge labels along this shortest path), and that the two hubs have a weaker relationship strength compared with other pairs of hubs.

II-C Navigation

It is essential to provide navigation capabilities in graph exploration, to allow users to interact with and explore large graphs. In general, zooming operations are indispensable and useful. Given an HA-graph, users can zoom in on an induced subgraph by clicking on its corresponding edge. Another way for users to zoom in is to select a subset of the vertices in the HA-graph; the collection of induced subgraphs among the selected vertices then forms a new subgraph of interest to be further explored.

III Framework Design

Having defined the VCExplorer framework, we now turn to its design. Specifically, we discuss how to utilize existing techniques to design efficient algorithms for each component. Due to space limitations, we do not go deeply into the technical details.

III-A Hub Vertex Generation

Hub vertices are selected using a function that is based on some measures, such as vertex attributes, importance values, etc. According to the variability of the measure values, we classify these functions into two categories:

Static function: one whose measure values do not change during subgraph navigation. Such measures include vertex attributes and derived attributes. Take the Twitter network as an example. The function “users whose age is above 80” takes age as its measure, which is an attribute native to the vertex and remains static during navigation. The function “top-10 American users ranked by closeness centrality in ascending order” is built on closeness centrality. If the closeness centrality measure is defined in the context of the whole graph, then during navigation the closeness centrality values do not change in the context of new subgraphs. In this case, closeness centrality is in fact a derived attribute of the vertex.

Since these measures are static, they can be precomputed (for derived attributes) and indexed. For example, we can precompute the closeness centrality of every vertex in the Twitter network and index the values using a tree index. When processing such a function, we can directly consult the index for a candidate list and thus speed up the computation.
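For instance, the precomputed centrality values can be kept in a sorted structure that plays the role of the tree index. A sketch: the centrality values below are made up for illustration, and a Python sorted list stands in for a tree-structured index.

```python
# Assume the closeness centrality of every vertex was precomputed offline
# (the values here are made up for illustration).
precomputed = {"alice": 0.41, "bob": 0.73, "carol": 0.55, "dave": 0.12}

# The index: (value, vertex) pairs kept sorted, standing in for a tree index.
index = sorted((c, v) for v, c in precomputed.items())

def top_k_ascending(k):
    """'Top-k vertices ranked by closeness centrality in ascending order',
    answered directly from the index without touching the graph."""
    return [v for _, v in index[:k]]
```

Since the measure is static, the index is built once and reused across all navigation steps.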

Dynamic function: one whose measure values change during subgraph navigation. For example, given a function that computes “top-10 American users ranked by closeness centrality in ascending order”, the closeness centrality measure here implicitly refers to the current subgraph, which consists of American users and the following relationships between them. During navigation, since the subgraph changes, the closeness centrality values change as well.

Dynamic measures are often not easy to index; a commonly used technique is to compute the measure at run time. When online computation is time-consuming, we generally have two alternatives: 1) use an approximate measure to speed things up (for example, in the case of closeness centrality, we can adopt the approximation schemes in [7, 8]); 2) precompute some intermediate results. In the case of closeness centrality, we can compute all-pair shortest distances first. Since, during navigation, subgraphs are extracted based on the reachability property, all shortest distances remain valid locally. With the shortest distances known, closeness centrality can be computed efficiently.
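With all-pair shortest distances precomputed, closeness centrality restricted to the current subgraph reduces to lookups. A sketch under the paper's assumption that the precomputed distances remain valid within extracted subgraphs; the distance-table representation is ours.

```python
def closeness(v, vertices, dist):
    """Closeness centrality of v within the current subgraph's vertex set,
    read off precomputed pairwise shortest distances dist[(a, b)]."""
    total = sum(dist[(v, u)] for u in vertices if u != v)
    return (len(vertices) - 1) / total if total else 0.0
```

Each evaluation costs one pass over the subgraph's vertex set instead of a fresh shortest-path computation.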

III-B Subgraph Extraction

Before consolidation, the subgraph between each pair of hub vertices is extracted. By default, a betweenness function is used: for a vertex w and two hub vertices u and v, w is between u and v iff u can reach w and w can reach v. Given an exploring graph and a set of hub vertices H, subgraph extraction can thus be translated as follows: for each vertex w, compute two sets S(w) and R(w), the hubs that can reach w and the hubs reachable from w, respectively. The Cartesian product of S(w) and R(w) denotes the set of subgraphs that w belongs to. A reachability index can be used to speed up the extraction process. If such an index is built and extraction is based on the betweenness measure, we can use the following approach:

Index-based extraction: Given a reachability index, the extraction proceeds as follows: for each vertex w and each hub h, conduct two reachability tests (whether h reaches w and whether w reaches h) and update w's S and R lists accordingly. Many reachability indices have been developed in the literature, such as transitive closure, 2-hop [9], highway [10], dual-labeling [11], etc. Due to the betweenness measure, it is easy to see that the reachability relationships hold in any subgraph; therefore, the index can be reused in further navigation. If a reachability index is unavailable, we can adopt a graph-traversal-based approach instead.

Non-index-based extraction: We first preprocess the graph to assign each vertex a topological order number. Cycles are condensed, and vertices in the same cycle share the same order number. We then process the vertices in topological order. Every vertex pushes its S list to all its immediate children; each child unions the lists received from its parents to form its own S list. The procedure to compute the R lists is similar but runs in the reverse direction. In this way the subgraphs are extracted, though more slowly than with the index-based approach.
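The S-list propagation over the condensed acyclic graph can be sketched as follows. This is our own sketch, assuming a DAG in adjacency-list form; the R lists would be computed symmetrically on the reversed graph.

```python
from collections import defaultdict, deque

def compute_S(dag, hubs):
    """Propagate S(w) = hubs that can reach w, over a DAG in topological order
    (Kahn-style): each vertex pushes its S list, plus itself if it is a hub,
    to its children; each child unions what it receives from all parents."""
    indeg = defaultdict(int)
    for u in dag:
        for w in dag[u]:
            indeg[w] += 1
    S = {u: set() for u in set(dag) | {w for vs in dag.values() for w in vs}}
    q = deque(u for u in S if indeg[u] == 0)
    while q:
        u = q.popleft()
        push = S[u] | ({u} if u in hubs else set())
        for w in dag.get(u, ()):
            S[w] |= push                    # child unions lists from all parents
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    return S
```

On the chain 1 → 2 → 3 with hubs {1, 3}, the propagation gives S(2) = S(3) = {1}.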

III-C Consolidation

After subgraph extraction, consolidation is performed on each subgraph. According to the type of object being consolidated, graph consolidation can be classified into the following categories:

Attribute-based consolidation: consolidation that operates only on vertex (edge) attributes or derived attributes. Typical operators are SUM, COUNT, AVG, etc. Since all the vertices (edges) are known at this stage, we can retrieve the related attributes from the vertex (edge) attribute table. If an index on vertex (edge) ID is present, we can retrieve the target attributes directly; otherwise a scan of the vertex (edge) attribute table is required.

Structure-based consolidation: consolidation that is related only to the graph structure. Typical operators include shortest distance (path), minimum cut, etc. These problems are well studied in the literature. Taking shortest distance as an example, we have several algorithms to choose from: 1) in an unweighted graph, a BFS from a hub vertex is sufficient to compute the shortest distances to all other hub vertices; 2) in a weighted graph, Dijkstra's algorithm is applicable; 3) if shortest distance indices [12, 13] are available and subgraphs are extracted based on the betweenness function, the distances can be derived efficiently.
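For case 1), a single BFS per hub suffices. A minimal sketch, with adjacency lists as an assumed representation:

```python
from collections import deque

def bfs_distances(adj, hub, hubs):
    """In an unweighted subgraph, one BFS from `hub` yields the shortest
    distances to all other hub vertices reachable from it."""
    dist = {hub: 0}
    q = deque([hub])
    while q:
        u = q.popleft()
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    # Keep only the distances to the other hubs.
    return {h: dist[h] for h in hubs if h in dist and h != hub}
```

Running it once from each hub covers all hub pairs of the subgraph.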

Attribute- and structure-based consolidation: consolidation that is related to both the graph structure and the attributes on vertices (edges). A prominent example in this category is graph aggregation, for which several algorithms have been developed recently [14, 2]. Since these schemes focus on single-graph computation, one naive solution is to run them on each subgraph. Unlike the above two categories, where proper indices can speed up the consolidation, graph aggregation is more complex and no indexing scheme is available. In the next section, we give an efficient algorithm to perform graph aggregation on multiple graphs.

III-D Visualization

An HA-graph usually has at most tens of vertices and hundreds of edges, so most layout algorithms [15] are able to handle it. In addition to displaying the HA-graph structure, we also display the consolidated information for each edge in the HA-graph. The consolidated information can be a single value (e.g., COUNT), a list (e.g., a shortest path or a set of group-value pairs) or an attributed graph (e.g., an aggregate graph). Given the diversity of the information, we provide two modes for displaying the information on edges.

Data mode: we display results in raw data format. A single value is shown as a label on the edge; a path is displayed as a list of vertices attached to the edge; a graph is displayed as 2-D tables with each row representing an edge or a vertex in the aggregate graph.

Graph mode: we display results in graph format. A single value is still a label; a path is displayed as a chain; and a graph is displayed in vertex-edge format.

Users can freely toggle between the two modes for each edge. We also adopt several other user-friendly interaction designs. In the HA-graph, users can enable a hover feature so that the results on edges are hidden and shown only on mouse-over. In data mode, users can freely perform selection, projection and sorting on the 2-D tables.

IV Sharing-based Online Aggregation

Efficient online aggregation over multiple subgraphs is the key to providing users a better exploration experience. In this section, we introduce how to conduct graph aggregation over multiple subgraphs online. We first introduce some preliminaries, followed by a naive aggregation algorithm, the SN (Shared Nothing) algorithm. We then introduce the proposed AS (Aggregation Sharing) algorithm, which aggregates efficiently by sharing computation.

Graph aggregation offers a high-level view of an attributed graph [14]. Integrating graph aggregation with VCExplorer helps provide users with summarized information about subgraphs that cannot be displayed. In this work, we focus on distributive and algebraic functions (e.g. SUM, COUNT, MAX, MIN, etc.) that can be applied to subsets of the edges or vertices of a graph; for such functions, the final result can be calculated from the results on the subsets. For illustration, we take the directed graph in Figure 2 as the input graph and use the betweenness function to determine the target subgraphs each vertex belongs to. Other types of aggregation functions and graphs can be handled similarly.

IV-A Preprocessing

IV-A1 Handling SCCs

For a directed graph, when betweenness is chosen as the means to extract the influential subgraph between two hub vertices, once one of the vertices of an SCC (strongly connected component) is in the subgraph, the entire SCC is in the subgraph. Therefore, one optimization that can be adopted here is to preprocess each SCC in advance.

In graph aggregation, each SCC can be pre-aggregated and condensed into a super vertex. The super vertex is associated with the pre-aggregated value of the SCC. In so doing, the original graph becomes an acyclic graph, which is what we focus on in the remainder of the discussion. Note that many existing algorithms can be adopted to detect SCCs, such as Tarjan's strongly connected component algorithm, which runs in linear time.

IV-A2 Tag Generation

For illustration, we first define the vertex and edge tags that will be used later. In G', every vertex is associated with a conceptual tag indicating which influential subgraphs it belongs to in the HA-graph.

Definition 3

Vertex Tag: the tag of a vertex v in G' is t(v) = (S(v), R(v)), where S(v) is the set of hub vertices that can reach v in G' and R(v) is the set of hub vertices that v can reach in G'.

Intuitively, S(v) denotes the hub vertices that can reach v in G' and R(v) denotes the hub vertices that can be reached from v in G'; t(v) is formed by concatenating the two lists. For instance, Figure 2 shows a simple graph where vertices 1, 2, 3, 4 and 5 are selected as the hub vertices; each vertex's tag records which of these hubs can reach it and which of them it can reach. We refer to the first list of the tag as the S list and the second as the R list.

Fig. 2: Example Graph.
Fig. 3: SN-Agg Plan Example

On the basis of the tag definition, given the tag of a vertex v, the Cartesian product of S(v) and R(v) represents the influential subgraphs that v belongs to. In addition, we define the size of this Cartesian product as the cardinality of the tag. For instance, in Figure 2, one vertex is tagged such that it belongs to 4 subgraphs, so the cardinality of its tag is 4.

Similarly, we assign tags to edges. In G', the tag of an edge is defined analogously from the tags of its endpoints. For instance, an edge in Figure 2 is tagged with (⟨1,2,3⟩, ⟨4,5⟩); its cardinality is 6.

To speed up tag generation, a reachability index such as transitive closure or 2-hop can be adopted. For each vertex v and each hub h, we test whether h can reach v and whether v can reach h. The total complexity is O(k · |V| · c), where k stands for the number of hub vertices, |V| for the number of vertices, and c for the cost of a reachability test between two vertices. After generating the tags for each vertex, the edge tags can easily be calculated from the vertex tags.
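Tag generation against an arbitrary reachability oracle can be sketched as follows; the oracle interface is our assumption, and any of the cited indices could back it.

```python
def gen_tags(vertices, hubs, reaches):
    """For every vertex v, build its tag (S(v), R(v)) by testing reachability
    against each hub. `reaches(a, b)` is any reachability oracle, e.g. a
    transitive-closure or 2-hop index lookup."""
    tags = {}
    for v in vertices:
        S = tuple(sorted(h for h in hubs if h != v and reaches(h, v)))
        R = tuple(sorted(h for h in hubs if h != v and reaches(v, h)))
        tags[v] = (S, R)          # cardinality = len(S) * len(R)
    return tags
```

With k hubs, each vertex costs 2k oracle calls, matching the complexity stated above.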

IV-B Shared-Nothing Aggregation Algorithm

Recall that there are multiple subgraphs to be aggregated, each of which corresponds to one edge in the HA-graph. One naive approach is to aggregate each subgraph individually. Intuitively, this approach aggregates the subgraphs independently without any sharing. We therefore refer to it as the SN algorithm, where SN stands for shared nothing.

In the SN algorithm, each subgraph extracts its own vertices and edges and calculates its own aggregate graph independently. Take vertex aggregation as an example. Figure 3 shows how the vertices are processed for different subgraphs: the bottom lists all the vertices and the top lists all the subgraphs, and each link between a vertex and a subgraph indicates one aggregate operation in which the vertex is aggregated into the corresponding subgraph. Thus, in SN, each subgraph (denoted by its tag) receives and aggregates its vertices independently. Given a graph with n vertices and m subgraphs, the complexity of vertex aggregation is O(n · m).

For edge aggregation, if the graph is stored in the format shown in Figure 2, the vertex IDs of the two endpoints of each edge must be converted to the vertex aggregate attributes. This can be done by performing a join between the edge attribute table and the vertex attribute table. After the conversion, edge aggregation can be conducted in a similar way to vertex aggregation. Given a graph with e edges and m subgraphs, the complexity of edge aggregation is O(e · m).
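The SN vertex aggregation then amounts to one pass over every (vertex, subgraph) link in Figure 3. A sketch with SUM as the assumed aggregate:

```python
def sn_aggregate(tags, values):
    """Shared-nothing vertex aggregation (SUM): each vertex is pushed to every
    subgraph (hub pair) in the Cartesian product of its tag, independently."""
    totals = {}
    for v, (S, R) in tags.items():
        for s in S:
            for r in R:               # one aggregate op per (vertex, subgraph) link
                totals[(s, r)] = totals.get((s, r), 0) + values[v]
    return totals
```

A vertex with tag cardinality c is touched c times, which is exactly the redundancy the AS algorithm below removes.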

IV-C Aggregation Sharing Algorithm

SN is a straightforward approach, as it computes the graph aggregation for each subgraph independently. However, it may incur high computation overhead, since it may involve many redundant computations.

One observation is that some vertices and edges are involved in the same set of subgraphs. This provides an opportunity to share computation among different subgraphs.

For instance, suppose three vertices in Figure 3 have a common tag, meaning they are involved in the same 6 subgraphs. The aggregation computation can then be shared among these subgraphs: the three vertices can be aggregated once and the result supplied to the 6 subgraphs directly, instead of aggregating them 6 times. Similarly, another set of vertices with a common tag can be aggregated together and the result supplied to their shared subgraphs. Figure 4(a) illustrates this procedure, where B and C are the aggregate results of two such groups of vertices.

Another observation is that even when two tags are not exactly the same, their vertices may still share computation as long as the tags have subgraphs in common. A simple example is between B and C above, whose tags are similar but not identical: they share 3 subgraphs. We can pre-aggregate B and C and supply the result directly to the 3 shared subgraphs, which reduces the computation overhead. Figure 4(b) illustrates this idea.

Based on these observations, we propose a new algorithm, AS (Aggregation Sharing), built on the principle of sharing aggregation when vertices or edges are involved in a common set of subgraphs. We refer to such a common set of subgraphs as a shared component (SC). Given two tags t1 and t2, their SC is calculated as t1.S ∩ t2.S concatenated with t1.R ∩ t2.R, where ∩ denotes set intersection. Note that, to speed up SC calculation, the vertex ID lists in a tag can be stored as BitSets, so the SC can be computed simply via the AND operation between two BitSets.
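In Python, an integer can play the role of the BitSet. A sketch; the hub-id-to-bit mapping is our assumption:

```python
def bitset(ids, bit_of):
    """Encode a hub-id list as an integer bitmask (`bit_of` maps hub id -> bit)."""
    b = 0
    for i in ids:
        b |= 1 << bit_of[i]
    return b

def shared_component(t1, t2, bit_of):
    """SC of two tags: intersect the S lists and the R lists via bitwise AND."""
    s = bitset(t1[0], bit_of) & bitset(t2[0], bit_of)
    r = bitset(t1[1], bit_of) & bitset(t2[1], bit_of)
    return s, r

def cardinality(s, r):
    """Number of subgraphs in the SC = |S ∩ S'| * |R ∩ R'|."""
    return bin(s).count("1") * bin(r).count("1")
```

Two AND operations replace two list intersections, regardless of list length.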

AS Algorithm: Discovering all possible SCs among the tags incurs a high computation complexity in n, the number of different tags. For real-time exploration, finding the optimal set of SCs may not be practical. Therefore, in this paper we propose a heuristic algorithm that discovers SCs by tag clustering. Since vertices and edges are aggregated by similar procedures, we focus on vertex aggregation here; a similar algorithm can easily be adapted for edge aggregation and is omitted. The pseudo code of the proposed AS algorithm is provided in Algorithm 1.

Fig. 4: Sharing Plan Example
1:  INPUT:
2:  :=null;
3:  :=null;
4:  := genTags(vertices)
5:  := sort()
6:  while ! do
7:     := .pop()
8:        if nt is the same tag as the previous one then
9:        Combine nt into current group
10:     else
11:        if  then
12:           :=FindBestCluster(, )
13:           :=
14:           if the saving of combining with the cluster is positive then
15:              .add()
16:              :=
17:              :=
18:               :=
19:              :=
20:              .add()
21:              .add(
22:           else
23:              .newCluster(, )
24:           end if
25:        else
26:           .add(, )
27:        end if
28:     end if
29:  end while
30:  aggregate()
Algorithm 1 Aggregation Sharing Algorithm

Given a set of vertices, we first generate tags for each vertex (Line 4), then sort all tags and put them into a queue based on their size and values (Line 5). The benefit of this sorting is two-fold. First, after sorting, it is easier and faster to combine and pre-aggregate all vertices with the same tag. Second, sorting by size guarantees that larger tags are clustered first. This design is based on the fact that the longer a tag is, the more likely it is to provide beneficial sharing.

Since vertices with the same tag can always share their computation, for each popped tag in the queue, vertices with the same tag are first combined into groups (Line 9). This same-tag combining continues until a different tag is reached. Note that this combining is also a pre-aggregation procedure in which the corresponding vertex information is pre-aggregated.
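This first step can be sketched as follows. The representation is our assumption: a vertex is a (tag, value) pair, with the tag a frozenset of subgraph IDs. Sorting by descending tag size makes identical tags adjacent so they can be combined and pre-aggregated (here: summed) in a single pass.

```python
from itertools import groupby

def combine_same_tags(vertices):
    """vertices: list of (tag, value) pairs; tag is a frozenset of subgraph IDs.
    Returns one (tag, pre-aggregated value) group per distinct tag,
    larger tags first."""
    # Sort by descending tag size, then by tag contents, so identical
    # tags become adjacent (groupby only groups consecutive equal keys).
    ordered = sorted(vertices, key=lambda v: (-len(v[0]), sorted(v[0])))
    groups = []
    for tag, members in groupby(ordered, key=lambda v: v[0]):
        groups.append((tag, sum(m[1] for m in members)))  # pre-aggregate
    return groups
```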

After this first step of combining vertices with the same tags, we obtain a list of distinct tags, each associated with one group and the pre-aggregated value of that group. For instance, in Figure 3 (B), after the first combining step, B1 and B2 are combined into one group B with tag , and C1, C2 and C3 into another group C with tag .

In the second step, we discover more sharing opportunities among these distinct tags by clustering them according to their similarity. The general idea of the clustering procedure is as follows. Given a new tag, we compare it against all existing clusters to find the best cluster, i.e., the one yielding the biggest saving value after the new tag is added, according to a saving function (given in Equation 1). If the biggest saving value is negative, meaning that adding the new tag to any cluster does not increase the sharing opportunity, the new tag becomes a new cluster by itself. This heuristic guarantees that the cluster that most increases computation sharing is chosen at each clustering step. Since the tags are processed in sorted order, clustering can stop once the new tag's size drops below a threshold value, such as 3: when the tag size is small, the sharing opportunity is usually small as well, so there is no need to cluster further.

We now give the saving function used during clustering. For each cluster C, let ct denote the common tag, i.e., the intersection of all tags in C, and let n be the number of tags already in the cluster. The saving of adding a new tag t to C is then the difference between the total saving of the cluster after and before the addition:

saving(t, C) = save(C ∪ {t}) − save(C)   (1)

where the common tag of C ∪ {t} is ct ∩ t, save(C ∪ {t}) is the total saving of the new cluster after adding t, and save(C) is the aggregation saving of C before adding t. The difference between these two quantities is thus the benefit of adding the new tag to the cluster.
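The saving-driven clustering can be sketched as below. The concrete saving model is our assumption (the original equation is not fully legible here): factoring the common tag out of a k-member cluster avoids (k − 1) · |common tag| repeated aggregations, and the saving of adding a tag is the cluster's total saving after minus before the addition, as described above. The function names are ours.

```python
def common_tag(tags):
    """Intersection of all tags (sets of subgraph IDs) in a cluster."""
    it = iter(tags)
    ct = set(next(it))
    for t in it:
        ct &= t
    return ct

def save(cluster):
    # Assumed model: factoring the common tag out of a k-member cluster
    # avoids (k - 1) * |common tag| repeated aggregations.
    if len(cluster) < 2:
        return 0
    return (len(cluster) - 1) * len(common_tag(cluster))

def saving_of_adding(tag, cluster):
    # Equation (1): total saving after adding minus saving before.
    return save(cluster + [tag]) - save(cluster)

def cluster_tags(tags, min_size=3):
    """Greedy clustering; tags are assumed pre-sorted by size, descending."""
    clusters = []
    for tag in tags:
        if len(tag) < min_size:      # small tags: stop clustering early
            clusters.append([tag])
            continue
        best, best_gain = None, 0
        for c in clusters:           # pick the cluster with the biggest saving
            gain = saving_of_adding(tag, c)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is not None:
            best.append(tag)
        else:                        # no positive saving: start a new cluster
            clusters.append([tag])
    return clusters
```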

After clustering, the aggregation is conducted for each cluster. Each tag t in a cluster is effectively split into two parts: the common tag ct and a differential tag dt. The dt of t is the part of the tag not covered by the common tag of the cluster, obtained as dt = t − ct. For instance, if t is  and ct is , dt is . For the common tag of each cluster, one further aggregation over all the groups in the cluster is conducted, and the result can be used directly for all subgraphs indicated by the common tag. This avoids repeating the aggregation of these groups for each subgraph. For each member of the cluster, its pre-aggregated value from the first step needs to be sent only to the subgraphs in its differential tag.
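The splitting and routing can be sketched as follows (representation and names are our assumptions): dt = t − ct, the cluster-level aggregate is delivered once to every subgraph in ct, and each member's pre-aggregate goes only to its differential subgraphs.

```python
def split_tag(tag, common):
    """Split a member tag into (common tag, differential tag)."""
    return common, tag - common

def route(cluster_members, common, cluster_aggregate):
    """cluster_members: list of (tag, pre-aggregated value) pairs.
    Returns subgraph_id -> list of values delivered to that subgraph."""
    # The shared cluster-level aggregate goes to every common-tag subgraph.
    deliveries = {sg: [cluster_aggregate] for sg in common}
    # Each member's own pre-aggregate goes only to its differential subgraphs.
    for tag, pre_agg in cluster_members:
        for sg in tag - common:
            deliveries.setdefault(sg, []).append(pre_agg)
    return deliveries
```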

| C \ SV | 5 (SN / AS) | 10 (SN / AS) | 20 (SN / AS) | 30 (SN / AS) | 40 (SN / AS) |
|---:|---|---|---|---|---|
| 10 | 1877 / 1383 | 2302 / 1233 | 3154 / 1009 | 4894 / 972 | 6175 / 990 |
| 100 | 1888 / 1563 | 2433 / 1234 | 3432 / 969 | 5375 / 961 | 6862 / 1001 |
| 1000 | 2110 / 1904 | 2823 / 1308 | 3828 / 1071 | 7281 / 1099 | 9264 / 1274 |
| 10000 | 2656 / 2148 | 3084 / 1506 | 4454 / 1364 | 7696 / 1522 | 10367 / 1877 |
| 100000 | 2927 / 2382 | 3192 / 1869 | 4370 / 2588 | 8403 / 2977 | 11452 / 3141 |
| 200000 | 3028 / 2620 | 3215 / 2115 | 4608 / 2469 | 9174 / 3116 | 13100 / 4049 |
| 400000 | *3039 / 3070* | 3294 / 3404 | 4664 / 3296 | 8587 / 6146 | 16380 / 5678 |
| 600000 | *2933 / 3232* | 3353 / 3525 | 4782 / 3665 | 10018 / 5694 | 19798 / 7377 |
| 800000 | *3003 / 3182* | 3338 / 3532 | 4949 / 5214 | 10227 / 6741 | 22852 / 8257 |
| 1000000 | *3113 / 3361* | 3497 / 3691 | 5372 / 6082 | 10299 / 8000 | 24475 / 10151 |

TABLE I: Aggregate performance over dense graph (ms). Columns give the number of hub vertices (SV); rows give the cardinality C; each cell lists SN / AS execution time.
| C \ SV | 5 (SN / AS) | 10 (SN / AS) | 20 (SN / AS) | 30 (SN / AS) | 40 (SN / AS) |
|---:|---|---|---|---|---|
| 10 | 502 / 418 | 625 / 375 | 664 / 229 | 963 / 227 | 1136 / 226 |
| 100 | 513 / 428 | 665 / 377 | 708 / 231 | 1019 / 234 | 1357 / 244 |
| 1000 | 548 / 464 | 682 / 401 | 776 / 276 | 1126 / 385 | 1544 / 345 |
| 10000 | 573 / 516 | 713 / 446 | 843 / 360 | 1240 / 558 | 1604 / 540 |
| 50000 | 587 / 584 | 537 / 389 | 923 / 581 | 1328 / 721 | 1677 / 1012 |
| 100000 | 593 / 633 | 760 / 681 | 898 / 841 | 1284 / 1013 | 1775 / 1219 |
| 150000 | 602 / 650 | 584 / 568 | 884 / 863 | 1340 / 1155 | 2221 / 1581 |
| 200000 | 645 / 674 | 763 / 795 | 951 / 996 | 1305 / 1286 | 2508 / 1699 |

TABLE II: Aggregate performance over sparse graph (ms). Columns give the number of hub vertices (SV); rows give the cardinality C; each cell lists SN / AS execution time.

V Experimental Evaluation

Environment. We conduct all experimental evaluations on a platform with an Intel Xeon E5607 4-core CPU (2.33GHz) and 32GB of memory, running a 64-bit Linux 2.6.32 OS.

Implementation. All algorithms are implemented in Java. A transitive closure is used as the reachability index to support the extraction of subgraphs.
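The paper does not detail the index construction; as a rough illustration of a transitive-closure reachability index (our own sketch, not the authors' code), a Warshall-style precomputation turns every reachability test into an O(1) table lookup:

```python
def transitive_closure(n, edges):
    """Precompute reach[i][j] for a directed graph on vertices 0..n-1,
    so reachability queries during subgraph extraction are O(1)."""
    reach = [[False] * n for _ in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k in range(n):            # Warshall: allow vertex k as intermediate
        rk = reach[k]
        for i in range(n):
            if reach[i][k]:
                ri = reach[i]
                for j in range(n):
                    if rk[j]:
                        ri[j] = True
    return reach
```

The O(n^2) memory cost is why such an index suits moderate graph sizes; the paper's experiments use graphs of up to 40K vertices.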

Datasets. We perform our experimental studies on two kinds of datasets: one real Twitter dataset (provided by UIUC [4]) and a set of synthetic datasets. The Twitter dataset contains 284 million following relationships, 3 million user profiles and 50 million tweets. Each user profile has information about account age, location, etc., and re-tweets contain information about origin, time, content, etc.

The synthetic datasets are generated using the GRAIL graph data generator. Each generated synthetic dataset is a directed attributed graph. Each vertex in the graph is associated with three attributes (, , ) where and are the group and measure information with integer data type. Each edge is associated with four attributes (, , , ), where and are edge group and measure information with integer data type as well.

V-A Effectiveness

We first show the effectiveness of VCExplorer as a powerful tool to explore the Twitter graph. Given the Twitter graph, we are interested in discovering who the most active users are and what the distributions of contact frequency look like among users in the influence subnetwork between them. We use the count of tweets between two users to compute their contact frequency: the bigger the count, the stronger the relationship. Further, we classify contact frequency into three categories: High, Middle, and Low.
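The categorization can be as simple as fixed thresholds on the tweet count; the cut-off values below are purely illustrative assumptions, as the paper does not give the exact boundaries.

```python
def closeness_category(tweet_count, high=100, low=10):
    """Map a pairwise tweet count to a closeness category.
    The thresholds (high=100, low=10) are illustrative, not the paper's."""
    if tweet_count >= high:
        return "High"
    if tweet_count >= low:
        return "Middle"
    return "Low"
```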

Fig. 5: HA-Graphs over Twitter Network.

The GE-query may be expressed as follows:

SELECT TopMaxDegreeVertices(twitter, 3)
FROM twitter
GROUP BY betweeness()
SUMMARIZE BY COUNT(.) e. Closeness()

The resulting graph is shown in Figure 5 (a). The distribution of the different closeness categories of each subgraph is annotated on the edges. From Figure 5 (a) we can see that there is a cycle between and , which causes the other edges (, ) and (, ) to have the same distributions. We therefore switch to another betweenness function to eliminate the effect of the cycle: we replace the function with one that checks whether one vertex can reach another within hops. Figure 5 (c) shows the resulting HA-graph with . One remarkable change is that the number of high-closeness relationships between and has been reduced from 7 to 2. This leads us to analyze the subnetwork between and more deeply. We issue a zoom query over the subgraph between and with and ; the zoom operation outputs a new HA-graph, shown in Figure 5 (b). From the aggregate values on the edges, it is easy to see that most strong relationships between and are between and , which indicates that the middle users between and have stronger relationships.

V-B Performance Evaluation

In this section, we evaluate the performance of our proposed graph aggregation algorithm. Two algorithms are implemented and compared: the baseline shared-nothing algorithm SN discussed in Section IV-B, and the proposed Aggregation Sharing algorithm AS from Section IV-C. Note that all the following experiments are run three times and the average performance is reported.

The GE-query used is as follows:

SELECT TopMaxDegreeVertices(k)
FROM G
GROUP BY betweeness()
SUMMARIZE BY SumVMrByVGrpEGrp(), SumEMrByVGrpEGrp()

For simplicity, the GE-query used in the following experiments identifies the top hub vertices with the maximum degree and summarizes the relationship between every two hub vertices by computing the aggregate graph over the dimensions v_grp and e_grp, using the SumVMrByVGrpEGrp() and SumEMrByVGrpEGrp() functions, which summarize the v_mr and e_mr measures respectively by v_grp and e_grp.
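A sketch of what such a summarize function computes (semantics assumed from the description; the attribute names v_grp, v_mr, e_grp, e_mr come from the synthetic datasets): sum each measure grouped by its group dimension.

```python
from collections import defaultdict

def sum_measure_by_group(items, grp_key, mr_key):
    """Sum the measure mr_key of each item, grouped by grp_key.
    Used the same way for vertices (v_grp/v_mr) and edges (e_grp/e_mr)."""
    totals = defaultdict(int)
    for item in items:
        totals[item[grp_key]] += item[mr_key]
    return dict(totals)

# SumVMrByVGrpEGrp-style vertex summary over a subgraph:
vertices = [{"v_grp": 1, "v_mr": 10}, {"v_grp": 1, "v_mr": 5},
            {"v_grp": 2, "v_mr": 7}]
print(sum_measure_by_group(vertices, "v_grp", "v_mr"))  # {1: 15, 2: 7}
```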

Towards a comprehensive study, we examine the impact of the number of hub vertices, graph dimension cardinality, graph degree and graph size in turn. It is worth noting that aggregation performance is affected jointly by the cardinalities of the vertex and edge group-by dimensions, which together determine the total number of distinct group-by values. Therefore, for simplicity, in the following experiments we refer to the total number of distinct group-by values of both vertices and edges as the cardinality.

Impact of the number of hub vertices. In this experiment, we first study the benefit of graph aggregation sharing over multiple subgraphs when varying the number of hub vertices (SV) from 5 to 40. We conduct the experiments on two types of graphs: one with graph degree 8, representing a relatively sparse graph, and another with degree 40, representing a relatively dense graph. All graphs used in this set of experiments consist of 30K vertices.

Table I and Table II show the detailed results for the graphs with degree 40 (dense) and degree 8 (sparse), respectively. Note that each row shows the execution time of the different algorithms for different numbers of hub vertices on the same graph, with the specific cardinality given in the leftmost column.

From the results, we have the following findings. First, SN and AS react differently when SV changes. As SV increases, the execution time of the shared-nothing algorithm SN also increases. The reason is that more hub vertices generate more influential subgraphs, which causes more vertices and edges to be involved in recomputation. In contrast, AS does not follow this pattern: as SV increases, its execution time does not increase as much, and in some cases it even decreases. For instance, in Table I, the execution time of AS with SV=10 is always smaller than that with SV=5 for the smaller cardinalities. This is reasonable, as more hub vertices and a smaller cardinality mean more sharing opportunities.

Second, as SV increases, AS outperforms SN by a growing margin. As shown in Tables I and II, the execution time of SN increases dramatically as SV becomes larger, whereas AS remains more stable.

Impact of cardinality. Tables I and II also show how performance changes when we vary the cardinality from 10 to 1,000,000. As expected, SN slightly outperforms AS only when the cardinality is large enough and SV is small. For instance, in Table I, when SV=5, SN becomes faster than AS once the cardinality reaches 400,000 (italic numbers). This is because a larger cardinality reduces the opportunity for sharing.

(a) Sparse graph.
(b) Dense graph.
Fig. 6: Scalability vs graph degree.
(a) Sparse graph.
(b) Dense graph.
Fig. 7: Scalability vs graph size.
(a) SV=5.
(b) SV=20.
(c) SV=40.
Fig. 8: Time distribution for AS.

Impact of graph degree. In this set of experiments, we compare the performance of the algorithms while varying the graph degree from 2 to 80. These experiments are conducted with SV=20 and C=10K on graphs with 10,000 vertices.

Figure 6 (a) and (b) show the execution times (solid lines) for the relatively sparse and dense graphs, respectively. The results show that as the degree increases, the query execution time of all algorithms increases as well, and that AS is more stable than SN.

To better understand how many add operations the sharing algorithm saves, we collect the total number of add operations and show them as dashed lines in Figure 6. The figure shows that AS outperforms SN dramatically because it saves many add operations through sharing. On average, AS saves 74% and 60% of the add operations on the dense and sparse graphs respectively, compared to SN.

Impact of the number of vertices. In this set of experiments, we study how the performance changes while we fix the graph degree but vary the number of vertices from 10K to 40K.

Figure 7 (a) and (b) give the execution times (solid lines) on graphs with degree 8 and 40, representing relatively sparse and dense graphs respectively. The results show that the execution time of SN increases faster than that of AS as the number of vertices grows. We further count the add operations incurred in each experiment (dashed lines in Figure 7); the number of add operations in SN becomes much larger than in AS. These experiments also show that both SN and AS scale linearly as the number of vertices increases.

Time Distribution Analysis. To better understand the proposed AS algorithm, we run a set of experiments measuring the running time of each part: tag generation (Tag), subgraph extraction (SGExt), planning time (Plan) and aggregation time (Agg). The experiments are conducted on a set of graphs with C=10,000, running the query with different numbers of hub vertices (5, 20 and 40).

Figure 8 (a), (b) and (c) present the query execution time distribution for SV=5, 20 and 40, respectively. The x axis indicates the graph size: for instance, 10K-80K means the graph consists of 10K vertices and 80K edges, 40K-1600K means 40K vertices and 1600K edges, and so on. The results show that planning is very fast compared with the other operations. Tagging and subgraph extraction take about 13% and 38% of the total query time on average, respectively. In all cases, aggregation takes the largest portion of the total execution time.

VI Related Work

A great challenge in graph analytics is dealing with large attributed graphs. Related work can be summarized as follows:

Graph Layout Drawing aims to display a whole graph in a user-friendly way. Classic graph drawing algorithms are surveyed in [15]. These algorithms can structurally display small graphs on screen. To help users discern interesting vertices and edges, several discriminating methods have been proposed in the literature. Position-discriminating methods [16, 17] place vertices with high centrality [18, 19, 20] near the center of the graph. Other works use size discrimination, displaying vertices with high importance values as larger circles [21] or in prominent colors [22]. All these algorithms suffer from the volume of graphs: when a graph grows to tens of thousands of vertices and edges, the screen fills up with dots and the link information among vertices is barely visible. In contrast, our VCExplorer displays a sketch graph that contains fewer vertices together with consolidated information between them, so users are not overwhelmed by points in the display.

Graph Simplification aims to reduce graph size prior to applying the above layout algorithms. Several approaches have been developed for this purpose: [23, 24] group strongly connected vertices and edges into metanodes; [25] merges edges in the same simple path or route; [26, 27] condense non-planar graphs into planar graphs; and [28, 29, 30] form edge bundles according to some metrics. [31, 32] use clustering-based approaches to form a hierarchical view of the graph that supports navigation. [33] reduces graph size by displaying only the nodes and neighborhoods that are most subjectively interesting to users. However, none of these methods handles attributed graphs as in our case. First, since the vertices and edges to be retained are selected automatically by the simplification algorithm, users cannot choose particular vertices and view the relationships among them. Second, most of these methods consider only the structure of graphs; the attributes of vertices and edges are not preserved. In contrast, VCExplorer lets users arbitrarily pick interesting vertices and further provides consolidated information among them.

Graph Summarization aims to provide a succinct high-level graph by consolidating vertex attributes and edge information. Vertices and edges belonging to the same metric are viewed as metanodes and metaedges, to which information aggregated from the detailed vertices and edges is attached. [1] develops the k-SNAP method to cluster a graph into k groups. [2, 14] propose graph aggregation methods that group the graph based on vertex and edge attributes. These methods offer a good overview of graph attributes in a succinct way, but they do not position important vertices and their relationships. Although [34] summarizes a graph according to the importance and relatedness of vertices, it focuses mainly on detailed vertices. In contrast, our VCExplorer displays important vertices as hub vertices and reveals the relationships between them using consolidation techniques.

References

  • [1] Y. Tian, R. A. Hankins, and J. M. Patel, “Efficient aggregation for graph summarization,” in SIGMOD Conference, 2008, pp. 567–580.
  • [2] P. Zhao, X. Li, D. Xin, and J. Han, “Graph cube: on warehousing and olap multidimensional networks,” in SIGMOD Conference, 2011, pp. 853–864.
  • [3] I. Herman, G. Melancon, and M. Marshall, “Graph visualization and navigation in information visualization: A survey,” Visualization and Computer Graphics, IEEE Transactions on, vol. 6, no. 1, pp. 24–43, Jan 2000.
  • [4] R. Li, S. Wang, and K. C.-C. Chang, “UIUC Twitter dataset,” 2012. [Online]. Available: https://wiki.engr.illinois.edu/display/forward/Dataset-UDI-TwitterCrawl-Aug2012
  • [5] X. Lin, T. Shang, and J. Liu, “An estimation method for relationship strength in weighted social network graphs,” Journal of Computer and Communications, vol. 02, no. 04, pp. 82–89, 2014. [Online]. Available: http://dx.doi.org/10.4236/jcc.2014.24012
  • [6] H. Wang, Z. Wang, Q. Fan, K.-L. Tan, and C. yong Chan, “Computation sharing for graph aggregates,” National University of Singapore, Tech. Rep., 2014.
  • [7] D. A. Bader, S. Kintali, K. Madduri, and M. Mihail, “Approximating betweenness centrality,” in WAW, 2007, pp. 124–137.
  • [8] D. Eppstein and J. Wang, “Fast approximation of centrality,” in Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms, ser. SODA ’01.   Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2001, pp. 228–229. [Online]. Available: http://dl.acm.org/citation.cfm?id=365411.365449
  • [9] E. Cohen, E. Halperin, H. Kaplan, and U. Zwick, “Reachability and distance queries via 2-hop labels,” in SODA, 2002, pp. 937–946.
  • [10] R. Jin, N. Ruan, Y. Xiang, and V. E. Lee, “A highway-centric labeling approach for answering distance queries on large sparse graphs,” in SIGMOD Conference, 2012, pp. 445–456.
  • [11] H. Wang, H. He, J. Yang, P. S. Yu, and J. X. Yu, “Dual labeling: Answering graph reachability queries in constant time,” in ICDE, 2006, p. 75.
  • [12] F. Wei, “Tedi: efficient shortest path query answering on graphs,” in Proceedings of the 2010 ACM SIGMOD International Conference on Management of data.   ACM, 2010, pp. 99–110.
  • [13] Y. Xiao, W. Wu, J. Pei, W. Wang, and Z. He, “Efficiently indexing shortest paths by exploiting symmetry in graphs,” in Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology.   ACM, 2009, pp. 493–504.
  • [14] Z. Wang, Q. Fan, H. Wang, K.-L. Tan, D. Agrawal, and A. El Abbadi, “Pagrol: Parallel graph olap over large-scale attributed graphs,” in ICDE, 2014, pp. 496–507.
  • [15] I. Herman, G. Melançon, and M. S. Marshall, “Graph visualization and navigation in information visualization: A survey,” Visualization and Computer Graphics, IEEE Transactions on, vol. 6, no. 1, pp. 24–43, 2000.
  • [16] U. Brandes, P. Kenis, and D. Wagner, “Communicating centrality in policy network drawings,” IEEE Trans. Vis. Comput. Graph., vol. 9, no. 2, pp. 241–253, 2003.
  • [17] D. Merrick and J. Gudmundsson, “Increasing the readability of graph drawings with centrality-based scaling,” in APVIS, 2006, pp. 67–76.
  • [18] G. Sabidussi, “The centrality index of a graph,” Psychometrika, vol. 31, no. 4, pp. 581–603, 1966.
  • [19] L. C. Freeman, “A set of measures of centrality based on betweenness,” Sociometry, pp. 35–41, 1977.
  • [20] E. Noah, “Theoretical foundations for centrality measures,” American journal of Sociology, vol. 96, pp. 1478–1504, 1991.
  • [21] M. Bastian, S. Heymann, and M. Jacomy, “Gephi: an open source software for exploring and manipulating networks.” 2009.
  • [22] W. S. Cleveland and R. McGill, “Graphical perception: Theory, experimentation, and application to the development of graphical methods,” Journal of the American Statistical Association, vol. 79, no. 387, pp. 531–554, 1984.
  • [23] J. Abello, F. Van Ham, and N. Krishnan, “Ask-graphview: A large scale graph visualization system,” Visualization and Computer Graphics, IEEE Transactions on, vol. 12, no. 5, pp. 669–676, 2006.
  • [24] D. Archambault, T. Munzner, and D. Auber, “Grouse: Feature-based, steerable graph hierarchy exploration,” in Proceedings of the 9th Joint Eurographics/IEEE VGTC conference on Visualization.   Eurographics Association, 2007, pp. 67–74.
  • [25] G. Ellis and A. Dix, “A taxonomy of clutter reduction for information visualisation,” Visualization and Computer Graphics, IEEE Transactions on, vol. 13, no. 6, pp. 1216–1223, 2007.
  • [26] M. Dickerson, D. Eppstein, M. T. Goodrich, and J. Y. Meng, “Confluent drawings: visualizing non-planar diagrams in a planar way,” in Graph Drawing.   Springer, 2004, pp. 1–12.
  • [27] D. Holten, “Hierarchical edge bundles: Visualization of adjacency relations in hierarchical data,” Visualization and Computer Graphics, IEEE Transactions on, vol. 12, no. 5, pp. 741–748, 2006.
  • [28] E. R. Gansner and Y. Koren, “Improved circular layouts,” in Graph Drawing.   Springer, 2007, pp. 386–398.
  • [29] T. Dwyer, K. Marriott, and M. Wybrow, “Integrating edge routing into force-directed layout,” in Graph Drawing.   Springer, 2007, pp. 8–19.
  • [30] O. Ersoy, C. Hurter, F. V. Paulovich, G. Cantareiro, and A. Telea, “Skeleton-based edge bundling for graph visualization,” Visualization and Computer Graphics, IEEE Transactions on, vol. 17, no. 12, pp. 2364–2373, 2011.
  • [31] K. Higbee, “Mathematical classification and clustering,” Technometrics, vol. 40, no. 1, pp. 80–80, 1998.
  • [32] D. Schaffer, Z. Zuo, S. Greenberg, L. Bartram, J. Dill, S. Dubs, and M. Roseman, “Navigating hierarchically clustered networks through fisheye and full-zoom methods,” ACM Transactions on Computer-Human Interaction (TOCHI), vol. 3, no. 2, pp. 162–188, 1996.
  • [33] R. Pienta, M. Kahng, Z. Lin, J. Vreeken, P. P. Talukdar, J. Abello, G. Parameswaran, and D. H. Chau, “FACETS: adaptive local exploration of large graphs,” in Proceedings of the 2017 SIAM International Conference on Data Mining, 2017, pp. 597–605.
  • [34] Y. Miao, J. Qin, and W. Wang, “Graph summarization for entity relatedness visualization,” ser. SIGIR ’17, 2017, pp. 1161–1164.