A Streaming Algorithm for Graph Clustering

12/09/2017 · by Alexandre Hollocou, et al. · École Normale Supérieure, Télécom ParisTech, Inria, École Polytechnique

We introduce a novel algorithm to perform graph clustering in the edge streaming setting. In this model, the graph is presented as a sequence of edges that can be processed strictly once. Our streaming algorithm has an extremely low memory footprint as it stores only three integers per node and does not keep any edge in memory. We provide a theoretical justification of the design of the algorithm based on the modularity function, a standard metric for evaluating the quality of a graph partition. We perform experiments on massive real-life graphs ranging from one million to more than one billion edges and we show that this new algorithm runs more than ten times faster than existing algorithms and leads to similar or better detection scores on the largest graphs.


1 Introduction

1.1 Motivations

Graphs arise in a wide range of fields, from biology (palla2005uncovering) to social media (mislove2007measurement) and web analysis (flake2000efficient; pastor2007evolution). In most of these graphs, we observe groups of nodes that are densely connected to each other and sparsely connected to the rest of the graph. One of the most fundamental problems in the study of such graphs is to identify these dense clusters of nodes. This problem is commonly referred to as graph clustering or community detection.

A major challenge for community detection algorithms is their ability to process the very large graphs that are commonly observed in numerous fields. For instance, social networks typically have millions of nodes and billions of edges (e.g., Friendster (mislove2007measurement)). Many algorithms have been proposed during the last ten years, using various techniques ranging from combinatorial optimization to spectral analysis (lancichinetti2009community). Most of them fail to scale to such large real-life graphs (prat2014high) and require the whole graph to be stored in memory, which often represents a heavy constraint in practice. Streaming the edges is a natural way to handle such massive graphs. In this setting, the graph is not stored in its entirety but processed edge by edge (mcgregor2014graph). Note that the streaming approach is particularly relevant in most real-life applications, where graphs are fundamentally dynamic and edges naturally arrive in a streaming fashion.

1.2 Contributions

In this paper, we introduce a novel approach based on edge streams to detect communities in graphs. The algorithm processes each edge strictly once. When the graph is a multi-graph, in the sense that two nodes may be connected by more than one edge, these edges are streamed independently. The algorithm only stores three integers for each node: its current community index, its current degree (i.e., the number of adjacent edges that have already been processed), and the volume of its current community (i.e., the sum of the degrees of all nodes in that community). Hence, the time complexity of the algorithm is linear in the number of edges and its space complexity is linear in the number of nodes. In the experimental evaluation of the algorithm, we show that this streaming algorithm is able to handle massive graphs (yang2015defining) with low execution time and memory consumption.

The algorithm takes only one integer parameter and, for each arriving edge of the stream, it uses a simple decision strategy based on this parameter and the volumes of the communities of the two endpoints of the edge. We provide a theoretical analysis that justifies the form of this decision strategy using the so-called modularity of the clustering. Modularity, introduced by the physics community (newman2006modularity), is one of the most widely used quality functions for graph clustering. It measures the quality of a given partition by comparing the number of edges observed in each cluster with the number of edges that would be observed if the edges were randomly distributed. In our analysis, we show that, under certain assumptions, the processing of each new edge by our algorithm leads to an increase in modularity.

1.3 Related work

A number of algorithms have been developed for detecting communities in graphs (fortunato2010community). Many rely on the optimization of an objective function that measures the quality of the detected communities. Modularity and other metrics, such as conductance, out-degree fraction and the clustering coefficient (yang2015defining), have been used with success. Other popular methods include spectral clustering (spielman2007spectral; von2007tutorial), clique percolation (palla2005uncovering), statistical inference (lancichinetti2010statistical), random walks (pons2005computing; whang2013overlapping) and matrix factorization (yang2013overlapping). These techniques have proved to be efficient but are often time-consuming and fail to scale to large graphs (prat2014high).

The streaming approach has drawn considerable interest in network analysis over the last decade. Within the data stream model, massive graphs with potentially billions of edges can be processed without being stored in memory (mcgregor2014graph). Many algorithms have been proposed for different problems that arise in large graphs, such as counting subgraphs (bar2002reductions; buriol2006counting), computing matchings (goel2012communication; feigenbaum2005graph), finding the minimum spanning tree (elkin2006efficient; tarjan1983data) or graph sparsification (benczur1996approximating). Different types of data streams can be considered: insert-only streams, where the stream is the unordered sequence of the network edges, and dynamic graph streams, where edges can be both added and deleted. Many streaming algorithms rely on graph sketches, which store the input in a memory-efficient way and are updated at each step (ahn2012graph).

In this paper, we use the streaming setting to define a novel community detection algorithm. We use an insert-only edge stream and define a minimal sketch, storing only three integers per node.

1.4 Paper outline

The rest of the paper is organized as follows. We first describe our streaming algorithm in Section 2. A theoretical analysis of this algorithm is presented in Section 3. In Section 4, we evaluate experimentally the performance of our approach on real-life graphs and compare it to state-of-the-art algorithms. Section 5 concludes the paper.

2 A streaming algorithm for community detection

In this section, we define a novel streaming algorithm for community detection in graphs.

2.1 Streaming setting

We are given an undirected and unweighted multi-graph G = (V, E), where V is the set of vertices and E is a multi-set of edges (i.e., an edge can appear multiple times in E). We use n to denote the number of nodes and m the number of edges. We use A(u, v) to denote the number of edges between nodes u and v (with A(u, v) = 0 if there is no edge between u and v). We assume that there is no self-loop, that is, A(u, u) = 0 for all u. We use deg(u) to denote the degree of node u, and w = Σ_u deg(u) to denote the weight of the graph, corresponding to the total degree. Given a set of nodes S ⊆ V, we use Vol(S) = Σ_{u ∈ S} deg(u) to denote the volume of S.

We consider the following streaming framework: we are given a stream S = (e_1, ..., e_m), which is an ordered sequence of the edges of the multi-set E. Note that each edge {u, v} appears exactly A(u, v) times in S.

2.2 Intuition

Although there is no universal definition of what a community is, most existing algorithms rely on the principle that nodes tend to be more connected within a community than across communities. Hence, if we pick an edge of E uniformly at random, this edge is more likely to link nodes of the same community (i.e., to be an intra-community edge) than nodes from distinct communities (i.e., an inter-community edge). Equivalently, if we assume that the edges arrive in a random order, we expect many intra-community edges to arrive before the inter-community edges.

This observation is used to design a streaming algorithm. For each arriving edge (u, v), the algorithm places u and v in the same community if the edge arrives early (intra-community edge) and leaves the nodes in distinct communities otherwise (inter-community edge). In this formulation, the notion of an early edge is of course critical. In the proposed algorithm, we consider that an edge arrives early if the current volumes of the communities of nodes u and v, accounting for previously arrived edges only, are low.

More formally, the algorithm considers each edge of the stream S successively. Each node is initially in its own community. At time t, a new edge (u, v) arrives and the algorithm performs one of the following actions: (a) u joins the community of v; (b) v joins the community of u; (c) no action.

The choice of the action depends on the updated volumes of the communities of u and v, i.e., the volumes computed using the edges e_1, ..., e_t. If either of these volumes is greater than a given threshold v_max, then we do nothing; otherwise, the node belonging to the community with the smaller volume joins the community of the other node and the volumes are updated.

2.3 Algorithm

We define our streaming algorithm in Algorithm 1. It takes as input the list of edges of the graph and one integer parameter v_max. The algorithm uses three dictionaries d, c and vol, initialized with default value 0. At the end of the algorithm, d[u] is the degree of node u, c[u] the community of node u, and vol[k] the volume of community k. When an edge with an unknown node arrives, say u, we give this node a new community index and increment the index variable (which is initialized to 1). For each new edge (u, v), the degrees of u and v and the volumes of their communities are updated. Then, if these volumes are both lower than the threshold parameter v_max, the node in the community with the lower volume joins the community of the other node. Otherwise, the communities remain unchanged.

Input: stream S of edges, integer parameter v_max
d, c, vol: dictionaries initialized with default value 0
k ← 1 (new community index)
for (u, v) ∈ S do
    if c[u] = 0 then c[u] ← k and k ← k + 1
    end if
    if c[v] = 0 then c[v] ← k and k ← k + 1
    end if
    d[u] ← d[u] + 1 and d[v] ← d[v] + 1 (update degrees)
    vol[c[u]] ← vol[c[u]] + 1 and vol[c[v]] ← vol[c[v]] + 1 (update community volumes)
    if vol[c[u]] ≤ v_max and vol[c[v]] ≤ v_max then
        if vol[c[u]] ≤ vol[c[v]] then (u joins the community of v)
            vol[c[v]] ← vol[c[v]] + d[u]
            vol[c[u]] ← vol[c[u]] − d[u]
            c[u] ← c[v]
        else (v joins the community of u)
            vol[c[u]] ← vol[c[u]] + d[v]
            vol[c[v]] ← vol[c[v]] − d[v]
            c[v] ← c[u]
        end if
    end if
end for
return c
Algorithm 1: Streaming algorithm for clustering graph nodes

Observe that, in case of equality between the two community volumes, u joins the community of v. Of course, this choice is arbitrary and can be made random (e.g., by letting one of the two nodes, chosen at random, join the community of the other).

2.4 Complexity

The main loop is linear in the number of edges in the stream. Thus, the time complexity of the algorithm is linear in the number of edges, i.e., O(m).

Concerning the space complexity, we only use three dictionaries of integers, d, c and vol, each of size at most n. Hence, the space complexity of the algorithm is O(n). Note that the algorithm does not need to store the list of edges in memory, which is the main benefit of the streaming approach. To implement dictionaries with a default value in practice, we can use classic dictionaries or maps and, when an unknown key is requested, set the value associated with this key to 0. Note that, in Python, the defaultdict structure already implements dictionaries with 0 as a default value.
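To make the pseudocode above concrete, here is a minimal Python sketch of Algorithm 1 based on defaultdict. It is only an illustration, not the reference implementation (which is written in C++); the names streaming_clustering, edge_stream and v_max follow the notation above and are our own.

from collections import defaultdict

def streaming_clustering(edge_stream, v_max):
    # Minimal sketch of Algorithm 1: one pass over the edge stream.
    d = defaultdict(int)    # d[u]: current degree of node u
    c = defaultdict(int)    # c[u]: current community of node u (0 means unseen)
    vol = defaultdict(int)  # vol[k]: current volume of community k
    k = 0
    for u, v in edge_stream:
        # Assign a fresh community index to nodes seen for the first time.
        if c[u] == 0:
            k += 1
            c[u] = k
        if c[v] == 0:
            k += 1
            c[v] = k
        # Update degrees and community volumes with the new edge.
        d[u] += 1
        d[v] += 1
        vol[c[u]] += 1
        vol[c[v]] += 1
        # If both communities are still small, the node in the smaller
        # community (in volume) joins the community of the other node.
        if vol[c[u]] <= v_max and vol[c[v]] <= v_max:
            if vol[c[u]] <= vol[c[v]]:
                vol[c[v]] += d[u]
                vol[c[u]] -= d[u]
                c[u] = c[v]
            else:
                vol[c[u]] += d[v]
                vol[c[v]] -= d[v]
                c[v] = c[u]
    return c

The function returns the dictionary c mapping each node to its community index; d and vol can also be returned if they are needed for the selection of v_max discussed below.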

2.5 Parameter setting

Note that the algorithm can be run once with multiple values of the parameter v_max. In this case, only the dictionaries c and vol need to be duplicated for each value of v_max (the degrees d do not depend on it). In this multi-parameter setting, we obtain one clustering per tested value of v_max at the end of the algorithm. The best value of v_max can then be selected by computing quality metrics that only use the dictionaries c and vol. In particular, we do not want to use metrics that require knowledge of the input graph. For instance, common metrics (yang2015defining) such as the entropy of the community sizes or the average community density can easily be computed from each pair of dictionaries (c, vol) and used to select the best result for v_max. Note that modularity cannot be used here, as its computation requires knowledge of the whole graph.
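As an example of such a selection criterion, the following sketch computes the entropy of the community-size distribution from the dictionary c alone; the exact metric and the rule used to compare the candidate values of v_max are not specified here, so this is only an assumed illustration.

import math
from collections import Counter

def community_size_entropy(c):
    # Entropy of the community-size distribution, computed from c only.
    sizes = Counter(c.values())   # community index -> number of nodes
    n = sum(sizes.values())       # total number of nodes seen
    return -sum((s / n) * math.log(s / n) for s in sizes.values())

# Hypothetical usage: one clustering per candidate value of v_max.
# scores = {v: community_size_entropy(c_v) for v, c_v in results.items()}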

3 Theoretical analysis

In this section, we analyze the modularity optimization problem in the edge-streaming setting and qualitatively justify the condition on community volumes used in Algorithm 1. See Appendices A, B and C for complete proofs.

3.1 Modularity optimization in the streaming setting

Modularity is a quality metric that is widely used in graph clustering (newman2006modularity). Given a partition of the nodes into communities, where c(u) denotes the community of node u, modularity is defined as

Q = (1 / (2m)) Σ_{u,v ∈ V} [ A(u, v) − deg(u) deg(v) / (2m) ] δ(c(u), c(v)),

where δ(c(u), c(v)) = 1 if u and v belong to the same community and 0 otherwise. Modularity can be seen as the difference between two probabilities: the probability of picking, uniformly at random, an edge of E whose endpoints lie in the same community, and the probability of picking such an intra-community edge in the so-called null model, where an edge (u, v) is chosen with a probability proportional to the product of the degrees of u and v. A classic approach to graph clustering consists in finding a partition that maximizes Q. Many algorithms have been proposed to perform this task (blondel2008fast; newman2004fast) but, to the best of our knowledge, none can be applied to our streaming setting.
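For reference, a direct (non-streaming) computation of Q from an edge list and a partition can be sketched as follows; it uses the equivalent sum-over-communities form of the definition above and, unlike Algorithm 1, requires the full edge list.

from collections import defaultdict

def modularity(edges, c):
    # Q = sum over communities k of [ m_k / m - (vol_k / (2m))^2 ],
    # where m_k is the number of intra-community edges of community k
    # and vol_k is its volume (sum of the degrees of its nodes).
    m = len(edges)
    deg = defaultdict(int)
    intra = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if c[u] == c[v]:
            intra[c[u]] += 1
    vol = defaultdict(int)
    for u, deg_u in deg.items():
        vol[c[u]] += deg_u
    return sum(intra[k] / m - (vol[k] / (2 * m)) ** 2 for k in vol)

This is the quantity that methods such as Louvain optimize directly; the analysis below studies how it varies when a single new edge is processed.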

The modularity can be rewritten as a sum over communities,

Q = Σ_k [ m_k / m − (Vol(C_k) / (2m))² ],

where C_k is the set of nodes of community k and m_k is the number of edges whose two endpoints belong to C_k.

In our streaming setting, we are given a stream S of edges such that each edge {u, v} appears A(u, v) times in S. We consider the situation where the first t edges of the stream have already arrived and where we have computed a partition C_t of the graph. We define Q_t as the corresponding quantity at time t, computed from the degrees and community volumes observed so far. Note that there is no normalization factor in the definition of Q_t, as it has no impact on the optimization problem.

We do not store the edges of S, but we assume that we have kept the node degrees and community volumes up to date in a streaming fashion, as in Algorithm 1. We consider the situation where a new edge (u, v) arrives. We want to make a decision that maximizes Q_{t+1}.

3.2 Streaming decision

Lemma 1 expresses Q_{t+1} as a function of Q_t.

Lemma 1.

When a new edge (u, v) arrives and the partition is left unchanged, Q_{t+1} can be expressed as a function of Q_t, of the degrees and community volumes at time t, and of an indicator equal to 1 if u and v belong to the same community of C_t and 0 otherwise (the exact expression is derived in Appendix A).

We want to update the community membership of u or v with one of the following actions: (a) u joins the community of v; (b) v joins the community of u; (c) u and v stay in their respective communities. We consider the case where nodes u and v belong to distinct communities of C_t, since all three actions are identical if u and v already belong to the same community. We want to choose the action that maximizes Q_{t+1}, but we face a typical streaming problem: we cannot evaluate the impact of action (a) or (b) on the whole of Q_{t+1}, only on the term that comes from the new edge (u, v).

Let us consider action (a), where u joins the community of v (action (b) is symmetric, obtained by swapping u and v). We are interested in the variation of Q_{t+1} between the state where u and v remain in their respective communities and the state where u has joined the community of v. Lemma 2 gives an expression for this variation.

Lemma 2.

The variation of Q_{t+1} induced by action (a) can be written in terms of the degree of u, the volumes of the communities of u and v, and the number of edges already observed between u and these communities (the exact expression is derived in Appendix B).

For a node u and a community, we define the degree of attachment of u to this community as the difference between the number of edges connecting u to the community in the edge stream e_1, ..., e_t and the number of such edges that we would observe if the edges followed the null model presented above; in particular, it is zero in expectation under the null model. It can be interpreted as measuring how strongly node u is attached to the community, and a normalized version of this quantity appears in the condition of Theorem 1 below.

Lemmas 1 and 2 give us a sufficient condition, presented in Theorem 1, for the modularity to increase when u joins the community of v.

Theorem 1.

If the volumes of the communities of u and v at time t are small enough, then the variation of the modularity induced by action (a) is positive; the precise condition, which involves the degrees of attachment of u to the two communities and the degree of u, is derived in Appendix C.

Thus we see that, if the community volumes satisfy this condition, then the strategy used by Algorithm 1 leads to an increase in modularity. In the general case, we cannot control the degrees of attachment that appear in the condition, but, in most cases, we expect the degree of attachment of u to its current community to be upper-bounded by some constant and the degree of attachment of u to the community of v to be bounded away from zero. Indeed, the fact that we observe an edge between node u and the community of v is likely to indicate that the attachment of u to this community is greater than what we would have in the null model, and that the attachment of u to its current community is below its maximum. Moreover, since in real-world graphs the degree of most nodes is small compared to the total number of edges, we expect the remaining term to be bounded by a constant. The condition of Theorem 1 then essentially reduces to a threshold on the volumes of the communities of u and v.

This justifies the design of the algorithm, with the decision of joining one community or the other based on the community volumes.

4 Experimental results

4.1 Datasets

We use real-life graphs provided by the Stanford Network Analysis Project (SNAP) (yang2015defining) for the experimental evaluation of our new algorithm. These datasets include ground-truth community memberships that we use to measure the quality of the detection. We consider datasets of different natures.

Social networks: The YouTube, LiveJournal, Orkut and Friendster datasets correspond to social networks (backstrom2006group; mislove2007measurement) where nodes represent users and edges connect users who have a friendship relation. In all these networks, users can create groups that are used as ground-truth communities in the dataset definitions.

Co-purchasing network: The Amazon dataset corresponds to a product co-purchasing network (leskovec2007dynamics). The nodes of the graph represent Amazon products and the edges correspond to frequently co-purchased products. The ground-truth communities are defined as the product categories.

Co-authorship network: The DBLP dataset corresponds to a scientific collaboration network (backstrom2006group). The nodes of the graph represent the authors and the edges the co-authorship relations. The scientific conferences are used as ground-truth communities.

The size of these graphs ranges from approximately one million edges to more than one billion edges. It enables us to test the ability of our algorithm to scale to very large graphs. The characteristics of these datasets can be found in Table 1.

4.2 Benchmark algorithms

For assessing the performance of our streaming algorithm, we use a wide range of state-of-the-art, non-streaming algorithms based on various approaches. SCD (S) partitions the graph by maximizing WCC, a community quality metric based on triangle counting (prat2014high). Louvain (L) is based on the optimization of the well-known modularity metric (blondel2008fast). Infomap (I) splits the network into modules by compressing the information flow generated by random walks (rosvall2008maps). Walktrap (W) uses random walks to estimate the similarity between nodes, which is then used to cluster the network (pons2005computing). OSLOM (O) partitions the network by locally optimizing a fitness function which measures the statistical significance of a community (lancichinetti2011finding). In the data tables, we use STR to refer to our streaming algorithm.

4.3 Performance metrics and benchmark setup

We use two metrics for the performance evaluation of the selected algorithms. The first is the average F1-score (yang2013overlapping; prat2014high), which is based on the harmonic mean of the precision and recall between detected and ground-truth communities. The second metric is the Normalized Mutual Information (NMI), which is based on the mutual entropy between indicator functions for the communities (lancichinetti2009detecting).
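For concreteness, here is a simple sketch of one common way to compute the average F1-score between two sets of communities. The experiments reported below use the implementations cited in the next paragraph, so this version is only indicative and its exact symmetrization is an assumption.

def f1(set_a, set_b):
    # Harmonic mean of precision and recall between two node sets.
    inter = len(set_a & set_b)
    if inter == 0:
        return 0.0
    precision = inter / len(set_a)
    recall = inter / len(set_b)
    return 2 * precision * recall / (precision + recall)

def average_f1(detected, ground_truth):
    # Symmetric average of the best-match F1 in both directions.
    # detected and ground_truth are non-empty lists of sets of node identifiers.
    def one_side(src, dst):
        return sum(max(f1(a, b) for b in dst) for a in src) / len(src)
    return 0.5 * (one_side(detected, ground_truth) + one_side(ground_truth, detected))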

The experiments were performed on Amazon Web Services EC2 instances of type m4.4xlarge, with 64 GB of RAM, 100 GB of disk space and 16 virtual CPUs (Intel Xeon Broadwell or Haswell), running Ubuntu Linux 14.04 LTS.

Our algorithm is implemented in C++ and the source code can be found on GitHub (https://github.com/ahollocou/graph-streaming). For the other algorithms, we used the C++ implementations provided by the authors, which can be found on their respective websites. Finally, all the scoring functions were implemented in C++. We used the implementation provided by the authors of (lancichinetti2009detecting) for the NMI and the implementation provided by the authors of SCD (prat2014high) for the F1-score.

4.4 Benchmark results

Execution time

We compare the execution times of the different algorithms on the SNAP graphs in Table 1. The entries that are not reported in the table correspond to algorithms that returned execution errors or whose execution time exceeded our time limit. In our experiments, apart from our algorithm, only SCD was able to run on all datasets. The fastest of the benchmark algorithms are SCD and Louvain, and even they run more than ten times slower than our streaming algorithm. More precisely, our streaming algorithm runs in less than 50 ms on the Amazon and DBLP graphs, which contain millions of edges, and in less than 5 minutes on the largest network, Friendster, which has more than one billion edges. In comparison, it takes seconds for SCD and Louvain to detect communities on the smallest graphs, and several hours for SCD to run on Friendster. Table 1 shows the execution times of all the algorithms with respect to the number of edges in the network. We observe that there is more than one order of magnitude between the execution time of our algorithm and those of the other algorithms.

In order to compare the execution time of our algorithm with that of a minimal algorithm that only reads the list of edges without doing any additional operation, we measured the run time of the Unix command cat on the largest dataset, Friendster. cat reads the edge file sequentially and writes each line, corresponding to an edge, to standard output. In our experiments, the command cat takes 152 seconds to read the list of edges of the Friendster dataset, whereas our algorithm processes this network in 241 seconds. That is to say, merely reading the edge stream is only about twice as fast as running our streaming algorithm.

Dataset       Nodes        Edges           S       L      I      W      O      STR
Amazon        334,863      925,872         1.84    2.85   31.8   261    1038   0.05
DBLP          317,080      1,049,866       1.48    5.52   27.6   1785   1717   0.05
YouTube       1,134,890    2,987,624       9.96    11.5   150    -      -      0.14
LiveJournal   3,997,962    34,681,189      85.7    206    -      -      -      2.50
Orkut         3,072,441    117,185,083     466     348    -      -      -      8.67
Friendster    65,608,366   1,806,067,135   13464   -      -      -      -      241
Table 1: SNAP dataset sizes and execution times in seconds

Memory consumption

We measured the memory consumption of our streaming algorithm and compared it to the memory needed to store the list of edges of each network, which is a lower bound on the memory consumption of the other algorithms. We use 64-bit integers to store the node indices. The memory needed to represent the list of edges is 14.8 MB for the smallest network, Amazon, and 28.9 GB for the largest one, Friendster. In comparison, our algorithm consumes 8.1 MB on Amazon and only 1.6 GB on Friendster.
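These orders of magnitude can be checked with a back-of-the-envelope estimate, assuming two 64-bit integers per edge for the edge list and three 64-bit integers per node for the streaming sketch (dictionary overhead is ignored, so this is only a rough sketch):

def memory_estimate(n_nodes, n_edges):
    # Rough estimate with 8-byte integers: two integers per edge for the
    # edge list, three integers per node for the streaming sketch.
    edge_list_bytes = 2 * 8 * n_edges
    sketch_bytes = 3 * 8 * n_nodes
    return edge_list_bytes, sketch_bytes

# Friendster: 65,608,366 nodes and 1,806,067,135 edges.
edge_list_bytes, sketch_bytes = memory_estimate(65_608_366, 1_806_067_135)
print(edge_list_bytes / 1e9, sketch_bytes / 1e9)  # approximately 28.9 GB and 1.6 GB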

Detection scores

Table 2 shows the average F1-score and NMI of the algorithms on the SNAP datasets. Note that the NMI on the Friendster dataset is not reported in the table because the scoring program used for its computation (lancichinetti2009detecting) cannot handle the size of the output on this dataset. While Louvain and OSLOM clearly outperform our algorithm on Amazon and DBLP (at the expense of longer execution times), our streaming algorithm shows performance similar to SCD on YouTube and much better performance than SCD and Louvain on LiveJournal, Orkut and Friendster (the other algorithms do not run on these datasets). Thus, our algorithm does not only run much faster than the existing algorithms: the quality of the detected communities is also better than that of the state-of-the-art algorithms on very large graphs.

              F1-Score                                   NMI
Dataset       S      L      I      W      O      STR    S      L      I      W      O      STR
Amazon        0.39   0.47   0.30   0.39   0.47   0.38   0.16   0.24   0.16   0.26   0.23   0.12
DBLP          0.30   0.32   0.10   0.22   0.35   0.28   0.15   0.14   0.01   0.10   0.15   0.10
YouTube       0.23   0.11   0.02   -      -      0.26   0.10   0.04   0.00   -      -      0.13
LiveJournal   0.19   0.08   -      -      -      0.28   0.05   0.02   -      -      -      0.09
Orkut         0.22   0.19   -      -      -      0.44   0.22   0.19   -      -      -      0.24
Friendster    0.10   -      -      -      -      0.19   -      -      -      -      -      -
Table 2: Average F1-scores and NMI

5 Conclusion and future work

We introduced a new algorithm for the problem of graph clustering in the edge streaming setting. In this setting, the input data is presented to the algorithm as a sequence of edges that can be examined only once. Our algorithm only stores three integers per node and requires only one integer parameter v_max. It runs more than ten times faster than state-of-the-art algorithms such as Louvain and SCD and shows better detection scores on the largest graphs. Such an algorithm is extremely useful in many applications where massive graphs arise. For instance, the web graph has orders of magnitude more nodes than the Friendster dataset.

We analyzed the adaptation of the popular modularity problem to the streaming setting. Theorem 1 justifies the nature of the condition on the volumes of the communities of nodes u and v for each new edge (u, v), which is the core of Algorithm 1.

It would be interesting for future work to perform further experiments. In particular, the ability of the algorithm to handle evolving graphs could be evaluated on dynamic datasets (panzarasa2009patterns) and compared to existing approaches (gauvin2014detecting; epasto2015efficient). Note that, in the dynamic network setting, modifications to the algorithm design could be made to handle events such as edge deletions.

Finally, our algorithm only returns disjoint communities, whereas, in many real graphs, overlaps between communities can be observed (lancichinetti2009detecting). An important research direction would be to adapt our approach to overlapping community detection and to compare it to existing approaches (xie2013overlapping; yang2013overlapping).

References

  • (1) K. J. Ahn, S. Guha, and A. McGregor. Graph sketches: sparsification, spanners, and subgraphs. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems, pages 5–14. ACM, 2012.
  • (2) L. Backstrom, D. Huttenlocher, J. Kleinberg, and X. Lan. Group formation in large social networks: membership, growth, and evolution. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 44–54. ACM, 2006.
  • (3) Z. Bar-Yossef, R. Kumar, and D. Sivakumar. Reductions in streaming algorithms, with an application to counting triangles in graphs. In Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, pages 623–632. Society for Industrial and Applied Mathematics, 2002.
  • (4) A. A. Benczúr and D. R. Karger. Approximating s-t minimum cuts in Õ(n²) time. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 47–55. ACM, 1996.
  • (5) V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008(10):P10008, 2008.
  • (6) L. S. Buriol, G. Frahling, S. Leonardi, A. Marchetti-Spaccamela, and C. Sohler. Counting triangles in data streams. In Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 253–262. ACM, 2006.
  • (7) M. Elkin and J. Zhang. Efficient algorithms for constructing (1+ε, β)-spanners in the distributed and streaming models. Distributed Computing, 18(5):375–385, 2006.
  • (8) A. Epasto, S. Lattanzi, and M. Sozio. Efficient densest subgraph computation in evolving graphs. In Proceedings of the 24th International Conference on World Wide Web, pages 300–310. ACM, 2015.
  • (9) J. Feigenbaum, S. Kannan, A. McGregor, S. Suri, and J. Zhang. On graph problems in a semi-streaming model. Theoretical Computer Science, 348(2-3):207–216, 2005.
  • (10) G. W. Flake, S. Lawrence, and C. L. Giles. Efficient identification of web communities. In Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 150–160. ACM, 2000.
  • (11) S. Fortunato. Community detection in graphs. Physics reports, 486(3):75–174, 2010.
  • (12) L. Gauvin, A. Panisson, and C. Cattuto. Detecting the community structure and activity patterns of temporal networks: a non-negative tensor factorization approach. PloS one, 9(1):e86028, 2014.
  • (13) A. Goel, M. Kapralov, and S. Khanna. On the communication and streaming complexity of maximum bipartite matching. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 468–485. SIAM, 2012.
  • (14) A. Lancichinetti and S. Fortunato. Community detection algorithms: a comparative analysis. Physical review E, 80(5):056117, 2009.
  • (15) A. Lancichinetti, S. Fortunato, and J. Kertész. Detecting the overlapping and hierarchical community structure in complex networks. New Journal of Physics, 11(3):033015, 2009.
  • (16) A. Lancichinetti, F. Radicchi, and J. J. Ramasco. Statistical significance of communities in networks. Physical Review E, 81(4):046110, 2010.
  • (17) A. Lancichinetti, F. Radicchi, J. J. Ramasco, and S. Fortunato. Finding statistically significant communities in networks. PloS one, 6(4):e18961, 2011.
  • (18) J. Leskovec, L. A. Adamic, and B. A. Huberman. The dynamics of viral marketing. ACM Transactions on the Web (TWEB), 1(1):5, 2007.
  • (19) A. McGregor. Graph stream algorithms: a survey. ACM SIGMOD Record, 43(1):9–20, 2014.
  • (20) A. Mislove, M. Marcon, K. P. Gummadi, P. Druschel, and B. Bhattacharjee. Measurement and analysis of online social networks. In Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, pages 29–42. ACM, 2007.
  • (21) M. E. Newman. Fast algorithm for detecting community structure in networks. Physical review E, 69(6):066133, 2004.
  • (22) M. E. Newman. Modularity and community structure in networks. Proceedings of the national academy of sciences, 103(23):8577–8582, 2006.
  • (23) G. Palla, I. Derényi, I. Farkas, and T. Vicsek. Uncovering the overlapping community structure of complex networks in nature and society. Nature, 435(7043):814–818, 2005.
  • (24) P. Panzarasa, T. Opsahl, and K. M. Carley. Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community. Journal of the American Society for Information Science and Technology, 60(5):911–932, 2009.
  • (25) R. Pastor-Satorras and A. Vespignani. Evolution and structure of the Internet: A statistical physics approach. Cambridge University Press, 2007.
  • (26) P. Pons and M. Latapy. Computing communities in large networks using random walks. In International Symposium on Computer and Information Sciences, pages 284–293. Springer, 2005.
  • (27) A. Prat-Pérez, D. Dominguez-Sal, and J.-L. Larriba-Pey. High quality, scalable and parallel community detection for large real graphs. In Proceedings of the 23rd international conference on World wide web, pages 225–236. ACM, 2014.
  • (28) M. Rosvall and C. T. Bergstrom. Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences, 105(4):1118–1123, 2008.
  • (29) D. A. Spielman and S.-H. Teng. Spectral partitioning works: Planar graphs and finite element meshes. Linear Algebra and its Applications, 421(2-3):284–305, 2007.
  • (30) R. E. Tarjan. Data structures and network algorithms. SIAM, 1983.
  • (31) U. Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17(4):395–416, 2007.
  • (32) J. J. Whang, D. F. Gleich, and I. S. Dhillon. Overlapping community detection using seed set expansion. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2099–2108. ACM, 2013.
  • (33) J. Xie, S. Kelley, and B. K. Szymanski. Overlapping community detection in networks: The state-of-the-art and comparative study. Acm computing surveys (csur), 45(4):43, 2013.
  • (34) J. Yang and J. Leskovec. Overlapping community detection at scale: a nonnegative matrix factorization approach. In Proceedings of the sixth ACM international conference on Web search and data mining, pages 587–596. ACM, 2013.
  • (35) J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. Knowledge and Information Systems, 42(1):181–213, 2015.

Appendix A: Proof of Lemma 1

Given a new edge (u, v), we first write the relation between the node degrees and community volumes at times t and t+1. This yields an expression for Q_{t+1} in the case where u and v belong to the same community of C_t, and another expression in the case where they belong to distinct communities. Finally, the definition of Q_{t+1} gives the desired result.

Appendix B: Proof of Lemma 2

Q_t is defined as a sum over all the communities of the partition. Only the terms corresponding to the communities of u and v are modified by action (a). Writing the difference between these terms before and after the action, and using the definition of the degree of attachment, we obtain the desired expression for the variation.

Appendix C: Proof of Theorem 1

From Lemma 1, we obtain an expression for Q_{t+1}, which we rewrite as Equation (1). Equation (1) and Lemma 2 then give an expression for the variation of the modularity induced by action (a). The positivity of this variation is equivalent to an inequality, denoted (2), involving the community volumes and the degrees of attachment. Bounding the left-hand side of this inequality under the assumptions of Theorem 1 yields a simpler condition that implies inequality (2), which proves the theorem.