Discovering Polarized Communities in Signed Networks

Signed networks contain edge annotations to indicate whether each interaction is friendly (positive edge) or antagonistic (negative edge). The model is simple yet powerful, and it can capture novel and interesting structural properties of real-world phenomena. The analysis of signed networks has many applications, ranging from modeling discussions in social media, to mining user reviews, to recommending products in e-commerce sites. In this paper we consider the problem of discovering polarized communities in signed networks. In particular, we search for two communities (subsets of the network vertices) such that within communities there are mostly positive edges while across communities there are mostly negative edges. We formulate this novel problem as a "discrete eigenvector" problem, which we show to be NP-hard. We then develop two intuitive spectral algorithms: one deterministic, and one randomized with quality guarantee √(n) (where n is the number of vertices in the graph), tight up to constant factors. We validate our algorithms against non-trivial baselines on real-world signed networks. Our experiments confirm that our algorithms produce higher quality solutions, are much faster and can scale to much larger networks than the baselines, and are able to detect ground-truth polarized communities.





1. Introduction

The increase of polarization around controversial issues is a growing concern with important societal fallouts. While controversy can be engaging, and can lead to users spending more time on social-media platforms, in disproportionate amounts it can generate a negative user experience, potentially leading to the abandonment of the platform. Excessive polarization, together with the emergence of bots and the spread of misinformation, has thus become an urgent technological problem that needs to be solved. It is not surprising that the last few years have witnessed an uptick in research on methods for the detection and suppression of these phenomena (garimella2017reducing; liao2014can; liao2014expert; vydiswaran2015overcoming; munson2013encouraging; graells2014people).

Figure 1. An example of two polarized communities in the Congress network (dataset details in Section 5). Solid edges are positive, while dashed edges are negative.

While polarization is a well studied phenomenon in political and social sciences (baldassarri2007dynamics, ; brundidge2010encountering, ; esteban1994measurement, ; feldman2014mutual, ; garrett2014partisan, ; wojcieszak2009online, ), modern social-media platforms brought it to a different scale, providing an unprecedented wealth of data. The necessity to analyze the available data and gain valuable insights brings new algorithmic challenges.

In order to study polarization in large-scale online data, one first step is to detect it. As a step in this direction, in this paper we study a fundamental problem abstraction for this task, i.e., the problem of discovering polarized communities in signed networks.

A signed network is a simple, yet general and powerful, representation: vertices represent entities and edges between vertices represent interactions, which can be friendly (positive) or antagonistic (negative) (harary1953notion). Signed-graph analysis has many applications, from modeling interactions in social media (kunegis2009slashdot), to mining user reviews (beigi2016signed), to studying information diffusion and epidemics (li2013influence), to recommending products in e-commerce sites (ma2009learning; victor2011trust), and to estimating the structural balance of a (physical) complex system (antal2006; marvel2009).

In this paper, we introduce the 2-Polarized-Communities problem (2PC), which requires finding two communities (subsets of the network vertices) such that within communities there are mostly positive edges while across communities there are mostly negative edges. Furthermore, we do not aim to partition the whole network, so the two polarized communities we search for can be concealed within a large body of other network vertices, which are neutral with respect to the polarized structure. Our hypothesis is that such a two-community polarized structure accurately captures controversial discussions in real-world social-media platforms.

Figure 1 shows an example of the two most polarized communities found in the Congress network (details in Section 5). The two communities involve and vertices (out of ), respectively, with more than 98% positive edges within communities and 78% negative edges across. The vertices in gray do not participate in either of the two polarized communities: either they have too few connections with any community, or the polarity of their relations is mixed and thus their position within the debate is unclear.

Our work is, to the best of our knowledge, the first to propose a spectral method for extracting polarized communities from signed networks. In addition, we present hardness results and approximation guarantees. Our problem formulation deviates from the bulk of the literature, where methods typically seek many communities while partitioning the whole network (anchuri2012communities; bansal2004correlation; chiang2012scalable; coleman2008local; giotis2006correlation; kunegis2010spectral). As discussed in more detail in Sections 2 and 3, the closest to our problem statement is the work by Coleman et al. (coleman2008local), who employ the correlation-clustering framework and search for exactly two communities. However, while in that work all vertices must be included in a cluster, in our setting we allow vertices not to be part of any cluster. This captures the fact that polarized communities are typically concealed within a large body of neutral vertices in a social network. An algorithm that attempts to partition the whole network would fail to reveal these communities. As an additional feature, our methods can be fine-tuned to increase or decrease the size of the discovered communities. Finally, while some spectral techniques promote balanced partitions, we hypothesize that two polarized communities might be of very different sizes, and thus our problem formulation does not enforce evenly sized subgraphs.

Our reliance on spectral methods carries several benefits. First, it is possible to leverage readily available, highly optimized, and parallelized software implementations. This makes it straightforward for the practitioner to analyze large networks in real settings using our approach. Second, even though in this paper we focus on the case of two communities, we can take inspiration from the existing literature on spectral graph partitioning to easily extend our algorithms to the case of an arbitrary number of subgraphs, e.g., by recursive two-way partitioning or the analysis of multiple eigenvectors (shi2000normalized, ).

In this paper we make the following contributions:

  • We formulate the 2-Polarized-Communities problem (2PC) as a “discrete eigenvector” problem (Section 3).

  • Exploiting a reduction from classic correlation clustering, we prove that 2PC is NP-hard (Theorem 3.3).

  • We devise two intuitive spectral algorithms (Section 4), one deterministic, and one randomized with a √n quality guarantee (Theorem 4.2), which is tight up to constant factors. We believe these to be the first purely combinatorial bounds for spectral methods. Our results apply to graphs of arbitrary weights. Our algorithms’ running time is essentially the time required to compute the first eigenvector of the adjacency matrix of the input graph.

  • Our experiments (Section 5) on a large collection of real-world signed networks show that the proposed algorithms discover higher quality solutions, are much faster than the baselines, and can scale to much larger networks. In addition, they are able to identify ground-truth planted polarized communities in synthetic datasets.

Related literature is discussed in the next section, while Section 6 discusses future work and concludes the paper.

2. Background and related work

Signed networks. Signed graphs first appeared in a work by Harary, who was interested in the notion of balance in graphs (harary1953notion). In 1956, Cartwright and Harary generalized Heider’s psychological theory of balance in triangles of sentiments to the theory of balance in signed graphs (cartwright1956structural).

A more recent line of work develops the spectral properties of signed graphs, still related to balance theory. Hou et al. (hou2003laplacian) prove that a connected signed graph is balanced if and only if the smallest eigenvalue of its Laplacian is 0. Hou (hou2005bounds) also investigates the relationship between the smallest eigenvalue of the Laplacian and the unbalancedness of a signed graph.

Signed graphs have also been studied in different contexts. Guha et al. (guha2004propagation, ) and Leskovec et al. (leskovec2010signed, ) study directed signed graphs and develop status theory, to reason about the importance of the vertices in such graphs. Other lines of research include edge and vertex classification (cesa2012correlation, ; tang2016node, ), link prediction (leskovec2010predicting, ; symeonidis2010transitive, ), community detection (ailon2008aggregating, ; anchuri2012communities, ; bansal2004correlation, ; swamy2004correlation, ), recommendation (tang2016recommendations, ), and more. A detailed survey on the topic is provided by Tang et al. (tang2016survey, ).

A few recent works explore the problem of finding antagonistic communities in signed networks, though with approaches fundamentally different from ours. Lo et al. consider directed graphs and search for strongly-connected positive subgraphs that are negative bi-cliques (lo2011mining), which severely limits the size of the resulting communities. A relaxed variant for undirected networks was described in subsequent work (gao2016detecting). Chu et al. propose a constrained-programming objective to find warring factions (chu2016finding), as well as an efficient algorithm to find local optima.

Correlation Clustering. In the standard correlation-clustering problem (bansal2004correlation, ), we ask to partition the vertices of a signed graph into clusters so as to maximize (minimize) the number of edges that “agree” (“disagree”) with the partitioning, i.e., the number of positive (negative) edges within clusters plus the number of negative (positive) edges across clusters. In the original problem formulations, such as the ones studied by Bansal et al. (bansal2004correlation, ), Swamy (swamy2004correlation, ), and Ailon et al. (ailon2008aggregating, ), the number of clusters is not given as input, instead it is part of the optimization. More recent works study the correlation-clustering problem with additional constraints, e.g., Giotis and Guruswami (giotis2006correlation, ) fix the number of clusters, Coleman et al. (coleman2008local, ) consider only two clusters, while Puleo and Milenkovic (puleo2015correlation, ) consider constraints on the cluster sizes.

The problem we study could be seen as a variant of correlation clustering where we search for two clusters, while we allow vertices not to be part of any cluster.

Detecting polarization in social media. A number of papers have studied the problem of detecting polarization in social media. Some approaches are based on text analysis (choi2010identifying; mejova2014controversy; popescu2010detecting), while other approaches consider a graph-theoretic setting (akoglu2014quantifying; conover2011predicting; garimella2018quantifying). However, our work differs significantly from these papers, as we consider signed networks, we provide a correlation-clustering problem formulation, and we obtain results with approximation guarantees.

3. Problem statement

Our setting is reminiscent of the correlation-clustering problem (bansal2004correlation), which we recall here. Given a signed network , where is the set of positive edges and the set of negative edges, the goal is to find a partition of the vertices into clusters, so as to maximize the number of positive edges within clusters plus the number of negative edges between clusters.

An interesting property of the correlation-clustering formulation is that one does not need to specify in advance the number of clusters , instead it is part of the optimization. In certain cases, however, the number of clusters is given as input. The general problem (given ) has been studied by Giotis and Guruswami (giotis2006correlation, ), while Coleman et al. (coleman2008local, ) studied the 2-Correlation-Clustering problem (). The problem arises, for instance, in the domain of social networks, where two well-separated clusters reveal a polarized structure. It can be defined as follows.

Problem 1 (2CC).

Given a signed network , find a partition of so as to maximize

where is the indicator function of the set .

A crucial limitation of the 2CC problem is that all vertices must be accounted for in one of the two clusters. From an application perspective, however, this may be a strong assumption. For example, in a social network, we may expect two polarized communities on a topic, but there may be many individuals who are neutral.

In order to find communities embedded within large networks, we need to exclude neutral vertices from the solution. Therefore, a first approach might be to consider maximizing agreements including a neutral cluster, that is, finding a partition of into , , and , so that and are the two polarized communities, and is the neutral community, and the 2CC objective is maximized. However, this modification does not change the problem significantly. It is easy to see that it is always no worse to switch a vertex from cluster to one of the other two clusters.

Proposition 3.1 ().

Let be any partition of , with . Then there is always a partition of (i.e., and ) with and so that

A further modification might be to subtract disagreements from the value of the solution, that is, to maximize agreements minus disagreements. In other words, we consider the following problem.

Problem 2 (2CC-Full).

Given a signed network , find a partition of so as to maximize

where is the indicator function of the set .

Unfortunately, problem 2CC-Full suffers from the same issue as problem 2CC: switching a vertex from the neutral cluster to whichever of the two polarized clusters is best leads to no worse a solution according to the objective. An appealing property of the 2CC-Full objective is that it can be written neatly in matrix notation. Let A be the adjacency matrix of the signed network, where positive edges are indicated by +1, negative edges are indicated by −1, and non-edges are indicated by 0. A partition of the vertices can be represented by a vector whose i-th coordinate is 1 if the vertex is in the first polarized cluster, −1 if it is in the second, and 0 if it is neutral. Then 2CC-Full can be reformulated as follows.

Problem 3 (2CC-Full).

Given a signed network with vertices and signed adjacency matrix , find a partition of represented by vector maximizing

Since our goal is to discover polarized communities and that are potentially concealed within other neutral vertices , we want to find minimal sets and . This can be achieved by normalizing with the size of and , which in vector form is . This consideration motivates our last problem formulation, which we dub 2-Polarized-Communities (2PC).


Problem 4 (2PC).

Given a signed network with vertices and signed adjacency matrix , find a vector that maximizes

In the rest of this paper we refer to the objective function of Problem 4 as polarity. As polarity is penalized by the size of the solution, vertices are only added to one of the two clusters if they contribute significantly to the objective. We show this problem to be NP-hard (proof in the Appendix) and propose algorithms with approximation guarantees.
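Under the natural reading of Problem 4 (the polarity objective x^T A x / x^T x over vectors x with entries in {−1, 0, 1}, consistent with the "discrete eigenvector" framing above), evaluating polarity can be sketched as follows; the toy graph and solution vector are illustrative, not from the paper's datasets:

```python
import numpy as np

# Toy signed network (illustrative): +1 = friendly edge, -1 = antagonistic
# edge, 0 = no edge. Vertex 3 is neutral (no edges at all).
A = np.array([[ 0,  1, -1,  0],
              [ 1,  0, -1,  0],
              [-1, -1,  0,  0],
              [ 0,  0,  0,  0]], dtype=float)

def polarity(A, x):
    """Polarity of x in {-1, 0, 1}^n: (x^T A x) / (x^T x).
    Vertices with x_i = 0 are excluded from both communities."""
    support = x @ x  # number of non-neutral vertices
    return (x @ A @ x) / support if support > 0 else 0.0

# Vertices 0 and 1 in one community, vertex 2 in the other, vertex 3 excluded.
x = np.array([1, 1, -1, 0], dtype=float)
print(polarity(A, x))  # 2.0: all 3 edges agree, each counted twice, over 3 vertices
```

Note how the denominator rewards small, coherent solutions: including the isolated vertex 3 would leave the numerator unchanged but lower the polarity to 1.5.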

Theorem 3.3 ().

2PC is NP-hard.

It should be noted that 2PC does not enforce balance between the communities. This can be beneficial if there exist polarized communities of significantly different size in the input network. In an extreme case, the solution could even be comprised of a single cluster if there is a large, dense community that overwhelms any other polarized formation.

4. Algorithms

The formulation of 2PC is suggestive of spectral theory, which we utilize to design our algorithms. We propose and analyze two spectral algorithms: one is deterministic, while the second is randomized and achieves a √n approximation guarantee. The running time of both algorithms is dominated by the computation of a spectral decomposition of the adjacency matrix. In practice, this can be done using readily available implementations that exploit sparsity and can run in parallel on multiple cores.

The first algorithm, Eigensign, works by simply discretizing the entries of the eigenvector of the adjacency matrix corresponding to the largest eigenvalue.

To illustrate the difficulty of approximating 2PC, we analyze the following simple algorithm, which we refer to as Pick-an-edge. Pick an arbitrary edge: if it is positive, put the endpoints in one cluster, leaving the other cluster empty; if it is negative, put the endpoints in separate clusters.
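The Pick-an-edge baseline is simple enough to sketch directly; the edge-list representation below is an assumption for illustration:

```python
import random

def pick_an_edge(pos_edges, neg_edges):
    """Sketch of the Pick-an-edge baseline described above: choose an arbitrary
    edge; a positive edge yields one two-vertex community (the other empty),
    a negative edge yields two singleton communities."""
    edges = [(u, v, +1) for u, v in pos_edges] + [(u, v, -1) for u, v in neg_edges]
    u, v, sign = random.choice(edges)
    if sign > 0:
        return {u, v}, set()  # both endpoints in the same community
    return {u}, {v}           # endpoints in opposite communities

c1, c2 = pick_an_edge([(0, 1)], [(1, 2)])
```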

Proposition 4.1 ().

The Pick-an-edge algorithm gives an -approximation of the optimum.


The described algorithm outputs a solution such that

The result now follows from the fact that , where is the largest eigenvalue of .∎

In the case of networks with arbitrary real weights, it can be shown that, despite the close relationship between the 2PC objective and the leading eigenvector of the adjacency matrix, Eigensign cannot do better than this up to constant factors. Consider a fully connected network with one edge of weight . The rest of the edges have weight close to zero. The leading eigenvector of the adjacency matrix has two entries, those corresponding to and , of the form for some small , while the rest are close to zero. We construct a solution vector as follows: the two entries corresponding to and are set to 1, and the rest to 0. We have . On the other hand, the Eigensign algorithm outputs a vector for which . It should be noted, however, that the focus of this paper is the analysis of the 2PC problem on signed networks. The approximation capabilities of the Eigensign algorithm on signed networks (where the adjacency matrix contains only the values −1, 0, and 1) are left open.

Eigensign generally outputs a solution comprised of all the vertices in the graph — unless some components of the eigenvector are exactly zero — which is, of course, counter to the motivation of our problem setting.

To overcome this issue we propose a randomized algorithm, Random-Eigensign, which also computes the first eigenvector of the adjacency matrix. Instead of simply discretizing its entries, it randomly sets each entry of the solution vector to 1 or −1 with probabilities determined by the corresponding entries of the eigenvector. Entries with large magnitude are more likely to turn into −1 or 1, while entries with small magnitude are more likely to turn into 0. For details see Algorithm 2. Note that if is the output of Random-Eigensign, then .

The next theorem shows approximation guarantees of Random-Eigensign for signed networks.

Theorem 4.2 ().

Algorithm Random-Eigensign gives a √n-approximation of the optimum in expectation.


First, observe that we can rewrite the expected value of the objective as follows:

If we define , where denotes the sign of , for all we have


We now invoke Bayes’ theorem and proceed.

Since is a convex function, by Jensen’s inequality it is

Furthermore, for any

To see this, observe that . So we have


That is,

In the appendix we show that this result is tight.

Input: adjacency matrix

1:  Compute , the eigenvector corresponding to the largest eigenvalue of .
2:  Construct as follows: for each , .
3:  Output .
Algorithm 1 Eigensign

Input: adjacency matrix

1:  Compute , the eigenvector corresponding to the largest eigenvalue of .
2:  Construct as follows: for each , run a Bernoulli experiment with success probability . If it succeeds, then , otherwise .
3:  Output .
Algorithm 2 Random-Eigensign
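Both algorithms can be sketched compactly using NumPy's dense symmetric eigensolver (for large sparse networks one would use a sparse solver instead, as noted above). The exact normalization of the eigenvector used for the inclusion probabilities in Random-Eigensign is our assumption (unit ℓ2 norm, so every |v_i| ≤ 1):

```python
import numpy as np

def leading_eigenvector(A):
    """Unit-norm eigenvector for the largest eigenvalue of symmetric A."""
    eigvals, eigvecs = np.linalg.eigh(A)
    return eigvecs[:, np.argmax(eigvals)]

def eigensign(A):
    """Algorithm 1 (Eigensign): discretize the leading eigenvector to {-1, +1}."""
    v = leading_eigenvector(A)
    return np.where(v >= 0, 1, -1)

def random_eigensign(A, rng=None):
    """Algorithm 2 (Random-Eigensign), sketched: vertex i enters the solution
    with probability |v_i| (assuming unit l2 norm), taking the sign of v_i;
    otherwise it stays neutral (0)."""
    rng = rng or np.random.default_rng(0)
    v = leading_eigenvector(A)
    keep = rng.random(len(v)) < np.abs(v)
    return np.where(keep, np.sign(v), 0).astype(int)

# Toy signed triangle: vertices 0 and 1 are allies, both antagonistic to 2.
A = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]], dtype=float)
x = eigensign(A)  # places 0 and 1 together, opposite to 2 (up to a global sign)
```

On this balanced toy graph the leading eigenvalue is 2 and the discretized eigenvector recovers the perfect polarized split, matching the intuition behind Eigensign.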

4.1. Enhancements for practical use

When using these algorithms to analyze real-world networks in practical applications, it might be beneficial to apply tweaks to enhance their flexibility and produce a wider variety of results. We propose the following simple enhancements.

Eigensign: As discussed above, Eigensign always outputs a solution involving all the vertices in the network. We can circumvent this shortcoming by including only those vertices such that the corresponding entry of the eigenvector is at least a user-defined threshold . That is, if , 0 otherwise.

Random-Eigensign: The -approximation guaranteed by Random-Eigensign is matched in the extreme case in which all entries of the eigenvector are of equal magnitude. Paradoxically, in this situation a solution comprised of all vertices would be optimal, but each vertex is included with a small probability of . We could of course fix this by modifying the probabilities to be for each . However, in the opposite extreme, where most of the magnitude of is concentrated in one entry, modifying the probabilities this way might disproportionately boost the likelihood of including undesirable vertices. An adequate multiplicative factor for both cases is , modifying the probabilities to be for each ; in the first case, all vertices are taken with probability 1, while in the second, the probabilities remain almost unchanged. We employed this factor in our experiments with satisfactory results.
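The rescaling just described can be sketched as dividing each inclusion probability by the largest eigenvector magnitude; the function name and the two toy eigenvectors are illustrative:

```python
import numpy as np

def scaled_probabilities(v):
    """Rescale inclusion probabilities as described above: dividing each |v_i|
    by the largest magnitude guarantees the most extreme vertex is always
    included, while a concentrated eigenvector is left almost unchanged."""
    p = np.abs(v) / np.max(np.abs(v))
    return np.clip(p, 0.0, 1.0)

# Extreme case 1: all entries equal -- every vertex is now kept with probability 1.
print(scaled_probabilities(np.full(4, 0.5)))  # [1. 1. 1. 1.]

# Extreme case 2: mass concentrated in one entry -- small entries stay unlikely.
print(scaled_probabilities(np.array([0.99, 0.01, 0.01, 0.01])))
```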

An obvious question that arises is whether the approximation guarantee of Random-Eigensign could be improved using the modification described above. This question is left for future investigation.

5. Experimental Assessment

This section presents the evaluation of the proposed algorithms: first (Section 5.1) we present a characterization of the polarized communities discovered by our methods; then (Section 5.2) we compare our methods against non-trivial baselines in terms of objective, efficiency and scalability, and ability to detect ground-truth planted polarized communities in synthetic datasets. Finally, we show a case study about political debates (Section 5.3).

Datasets. We select publicly-available real-world signed networks, whose main characteristics are summarized in Table 1. HighlandTribes represents the alliance structure of the Gahuku–Gama tribes of New Guinea. Cloister contains the esteem/disesteem relations of monks living in a cloister in New England (USA). Congress reports (un)favorable mentions of politicians speaking in the US Congress. Bitcoin and Epinions are who-trusts-whom networks of the users of Bitcoin OTC and Epinions, respectively. WikiElections includes the votes about admin elections of the users of the English Wikipedia. Referendum (lai2018stance) records Twitter data about the 2016 Italian Referendum: an interaction is negative if two users are classified with different stances, and positive otherwise. Slashdot contains friend/foe links between the users of Slashdot. The edges of WikiConflict represent positive and negative edit conflicts between the users of the English Wikipedia. WikiPolitics represents interpreted interactions between the users of the English Wikipedia that have edited pages about politics.

In order to study scalability, we artificially augment two of the largest datasets to produce networks with millions of vertices and tens of millions of edges (details in Section 5.2).

Real-world datasets HighlandTribes Cloister Congress Bitcoin  k  k WikiElections  k  k Referendum  k  k Slashdot  k  k WikiConflict  k  M Epinions  k  k WikiPolitics  k  k WikiConflict  M  M Epinions  M  M

Table 1. Signed networks used: number of vertices and edges; ratio of negative edges (); -norm of the eigenvector corresponding to the largest eigenvalue of (); and ratio of non-zero elements of ().

Implementation. All methods, with the exception of algorithm FOCG (details in Section 5.2), are implemented in Python (v. 2.7.15) and compiled with Cython. The experiments run on a machine equipped with an Intel Xeon CPU at 2.1GHz and 128GB RAM. Code and datasets are available at

Figure 2. Solutions produced by E as a function of .

Figure 3. Edge-agreement ratio of the solutions of RE.

5.1. Solutions characterization

We first characterize the solutions discovered by our methods Eigensign (for short E) and Random-Eigensign (RE), and we show how the tweaks described in Section 4.1 enhance their flexibility in producing a wider variety of results. In particular, algorithm E evaluates the threshold for each discretized at the third decimal digit. This operation is carried out efficiently, since is computed only once regardless of the number of evaluated values of . On the other hand, algorithm RE employs as multiplicative factor, therefore the probabilities are modified to be . In the following, we refer to the two communities included in the solutions as and , namely the subsets of vertices that are assigned with and , respectively, by the solution vector .

Figure 2 shows how the solutions returned by algorithm E are affected by parameter in terms of polarity, edge-agreement ratio (i.e., the portion of edges in the solution that comply with the polarized structure), and size on four datasets. In all of them, the three measures follow very similar trends. The highest polarity is achieved at about a fourth of the domain of , when most of the neutral vertices are discarded. The edge-agreement ratio, instead, is consistently close or equal to : the solutions have a coherent polarized structure regardless of the chosen . Finally, as expected, the number of vertices included in the solutions decreases as grows, and presents a substantial decay at the beginning of the domain. Therefore, parameter is a powerful enhancement that allows algorithm E to be tuned to return the most suitable solution for the domain under analysis.

For algorithm RE, due to the randomness, we report the best solution with respect to polarity out of runs. We do the same for the baseline LS, which we introduce in Section 5.2. Figure 3 shows the boxplots of the edge-agreement ratio over the larger datasets. It has significant values in all cases, above , and is stable across the different executions. Polarity and solution size for all datasets are reported in Figure 4. For such measures, we do not show boxplots as they are highly dependent on the specific dataset and very stable over different runs: their index of dispersion is lower than and , respectively, for all datasets. This confirms that algorithm RE is very stable and does not require multiple executions to identify high-quality solutions.

5.2. Comparative evaluation

We next compare algorithms E and RE against non-trivial baselines inspired by methods proposed in the literature for different yet related problems.

FOCG. The first method we compare to, whose objective is to find oppositive cohesive groups (i.e., -OCG) in signed networks, is taken from (chu2016finding). Algorithm FOCG detects different -OCG structures within the input signed network, among which we select the one with the highest polarity as the ultimate solution to our problem. We set up the algorithm with the default configuration (i.e., and ) and . The code is provided by the authors.

Greedy. Our second baseline is inspired by the 2-approximation algorithm for densest subgraph (charikar2000greedy). Algorithm Greedy (for short G) iteratively removes the vertex minimizing the difference between the number of positive adjacent edges and the number of negative adjacent edges, until the graph is empty. At the end, it returns the subgraph with the highest polarity among all subgraphs visited during its execution. The assignment of the vertices to the clusters is guided by the signs of the components of the eigenvector corresponding to the largest eigenvalue.

Bansal. A different approach, motivated by the strong similarity to our setting, is inspired by Bansal’s 3-approximation algorithm for 2CC on complete signed graphs (bansal2004correlation). For each vertex, this algorithm, which we refer to as Bansal (for short B), forms one cluster from the vertex together with its positive neighbors, and the other cluster from its negative neighbors. Of these possible solutions, it returns the one maximizing polarity.
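A hedged sketch of this baseline on a signed adjacency matrix follows; the polarity helper restates the objective of Problem 4 as we read it, and all variable names are ours:

```python
import numpy as np

def polarity(A, x):
    """Polarity objective, Problem 4 as we read it: (x^T A x) / (x^T x)."""
    s = x @ x
    return (x @ A @ x) / s if s > 0 else 0.0

def bansal_baseline(A):
    """Sketch of baseline B: for each vertex u, one candidate solution puts u
    and its positive neighbours on one side (+1) and its negative neighbours
    on the other (-1); return the candidate with the highest polarity."""
    n = A.shape[0]
    best_x, best_val = np.zeros(n, dtype=int), -np.inf
    for u in range(n):
        x = np.sign(A[u]).astype(int)  # +1 / -1 for positive / negative neighbours
        x[u] = 1                       # u joins the positive side
        val = polarity(A, x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy signed triangle: vertices 0 and 1 allied, both antagonistic to 2.
A = np.array([[0, 1, -1], [1, 0, -1], [-1, -1, 0]], dtype=float)
x, val = bansal_baseline(A)  # recovers the perfect split here, with polarity 2.0
```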

LocalSearch. Finally, we consider a local search approach (LocalSearch, for short LS), guided by our objective function. Algorithm LS starts from a set of vertices chosen at random; at each iteration, it adds to (or removes from) the current solution the vertex that maximizes the gain in terms of polarity, and terminates when the gain of moving any vertex is lower than . Also for this algorithm, the assignment of the vertices to the clusters is guided by the signs of .

Figure 4. Polarity and solution size (normalized) of the proposed algorithms and baselines.

Figure 4 reports the achieved values of polarity for all compared algorithms on all datasets, as well as the size (normalized by ) of the solutions returned. In most of the cases, algorithm E proves to be the most competitive method with respect to polarity; on the other hand, algorithm RE is able to return solutions of high polarity for the small-sized datasets. Algorithm FOCG is instead not competitive, since its solutions are of extremely small size (note that the numerator of our objective can be up to quadratic in the size of the denominator, so size matters for reaching high polarity). Algorithm G has, in general, polarity comparable to algorithm E, slightly higher in a few cases (with the exception of WikiConflict, in which algorithm E clearly outperforms algorithm G). However, it must be noted that algorithm G often returns a very dense subgraph as one of the two communities, leaving the second community totally empty, which is, of course, undesirable in our context. Algorithms B and LS, instead, exhibit weak performance in terms of polarity: their search spaces strongly depend on the neighborhood structure of the vertices (for B), or on the random starting sets (for LS). As for solution size, all methods, with the exception of algorithms FOCG and LS, return solutions of reasonable size with respect to the number of vertices of the networks. Excluding the small empirical datasets (i.e., HighlandTribes, Cloister, and Congress), the size of the solutions is below of the input.

Figure 5. -score as a function of the noise parameter (, ).

Figure 6. -score as a function of the number of noisy vertices (, ).

Planted polarized communities. In order to better assess the effectiveness of the various algorithms, we test their ability to detect a known planted solution, concealed within varying amounts of noise. For our purposes we create a collection of synthetic signed networks identified by three parameters: the size of each planted polarized community (for convenience, we consider communities having the same size); the number of noisy vertices external to the two polarized communities ; and, a noise parameter governing the edge density and agreement to the model. In detail:

  • edges inside either of the two communities exist and are positive with a probability that decreases with the noise parameter, exist and are negative with a small probability that increases with it, and do not exist otherwise;

  • edges between the two communities exist and are negative with a probability that decreases with the noise parameter, exist and are positive with a small probability that increases with it, and do not exist otherwise;

  • all other edges (outside the two polarized communities) exist with a probability that increases with the noise parameter and have equal probability of being positive or negative.

The higher the noise parameter, the less internally dense and polarized the two communities are, and the more connected the noisy vertices are, both among themselves and to the communities. Observe that the case with no noise corresponds to the “perfect” structure.
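The generative model above can be sketched as follows, under one plausible parameterization in which a single noise value η controls all three probabilities (the paper's exact probabilities are not preserved in this extraction, so treat this as illustrative):

```python
import numpy as np

def planted_polarized(k, r, eta, rng=None):
    """Signed adjacency matrix with two planted communities of k
    vertices each (indices 0..k-1 and k..2k-1) and r noisy vertices.
    Illustrative parameterization: eta = 0 yields the perfect
    structure; larger eta thins and de-polarizes the communities
    while densifying the noisy part."""
    rng = np.random.default_rng(rng)
    n = 2 * k + r
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            in_c1 = i < k and j < k
            in_c2 = k <= i < 2 * k and k <= j < 2 * k
            across = (i < k) != (j < k) and i < 2 * k and j < 2 * k
            u = rng.random()
            if in_c1 or in_c2:
                # inside a community: mostly positive, rarely negative
                s = 1 if u < 1 - eta else (-1 if u < 1 - eta / 2 else 0)
            elif across:
                # across communities: mostly negative, rarely positive
                s = -1 if u < 1 - eta else (1 if u < 1 - eta / 2 else 0)
            else:
                # noisy part: sparse edges with random signs
                s = rng.choice([-1, 1]) if u < eta else 0
            A[i, j] = A[j, i] = s
    return A

A = planted_polarized(k=20, r=10, eta=0.0)
# with no noise, the planted block structure is exact
assert (A[:20, :20][~np.eye(20, dtype=bool)] == 1).all()
assert (A[:20, 20:40] == -1).all()
assert (A[40:, :] == 0).all()
```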

For each configuration of the parameters, we create several networks and report the average F1-score in detecting which vertices belong to each of the two planted communities (for instance, recall is defined in terms of the overlap between the first (second) community returned by the algorithm and the corresponding ground-truth one).
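Since the two returned communities carry no intrinsic order, the score must be computed under the best matching with the planted ones; a minimal sketch (function names are ours):

```python
def f1(pred, truth):
    """F1 between a predicted and a ground-truth vertex set."""
    pred, truth = set(pred), set(truth)
    tp = len(pred & truth)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

def matched_f1(pred1, pred2, gt1, gt2):
    """Average F1 over the two communities, under the label
    assignment (which predicted set matches which planted one)
    that scores best."""
    direct = (f1(pred1, gt1) + f1(pred2, gt2)) / 2
    swapped = (f1(pred1, gt2) + f1(pred2, gt1)) / 2
    return max(direct, swapped)

# perfect recovery up to community order scores 1.0
assert matched_f1({3, 4}, {1, 2}, {1, 2}, {3, 4}) == 1.0
```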

In Figure 5 we fix the size of the synthetic network and vary the noise parameter. When there is no noise, all algorithms achieve, as expected, the maximum F1-score, with the exception of algorithm G, which is unable to exactly identify the planted structure even in the noiseless case. As expected, the F1-score decays for all methods as the noise increases; however, our algorithms E and RE clearly outperform the others. Figure 6 shows the F1-score as the number of vertices external to the polarized communities varies, with the community size and noise parameter fixed. Again algorithms E and RE stand out, especially E, whose F1-score is close to the maximum in all cases. Algorithm FOCG has the poorest performance: the small size of its solutions penalizes its recall, which remains low throughout.

Figure 7. Runtime of the proposed algorithms and baselines.

Figure 8. Scalability: runtime of the proposed algorithms and baselines as a function of the number of injected dummy vertices, for WikiConflict and Epinions.

Runtime and scalability. Figure 7 reports the runtime of all algorithms over all datasets. Algorithms E and RE, with the practical enhancements discussed in Section 4.1, always terminate within seconds, while the runtime of the baselines is more than an order of magnitude higher.

To assess the scalability of our methods, we augment two of the larger datasets (i.e., WikiConflict and Epinions) by artificially injecting dummy vertices, each having a number of randomly-connected edges equal to the average degree of the original network, while maintaining the ratio of negative edges. The largest datasets created in this way contain millions of vertices and edges (see Table 1 for details). Note that, as the number of dummy vertices increases, the density, i.e., the ratio of non-zero elements of the adjacency matrix, decreases. Nonetheless, the density differs from that of the original datasets by less than an order of magnitude in both cases, which makes the following scalability results meaningful.
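The augmentation procedure can be sketched as follows (an illustrative reconstruction; the paper's exact sampling details may differ):

```python
import random

def inject_dummies(edges, n, num_dummies, avg_degree, neg_ratio, seed=0):
    """Append dummy vertices, each with avg_degree edges to uniformly
    random earlier vertices; edge signs are drawn so as to preserve the
    original negative-edge ratio. edges: list of (u, v, sign) with
    vertices 0..n-1; duplicate edges are possible in this sketch."""
    rng = random.Random(seed)
    out = list(edges)
    for d in range(num_dummies):
        v = n + d
        for _ in range(avg_degree):
            u = rng.randrange(n + d)  # any original or earlier dummy vertex
            sign = -1 if rng.random() < neg_ratio else 1
            out.append((u, v, sign))
    return out

edges = [(0, 1, 1), (1, 2, -1)]
aug = inject_dummies(edges, n=3, num_dummies=100, avg_degree=2, neg_ratio=0.5)
assert len(aug) == len(edges) + 100 * 2  # original edges are preserved
```

Since each dummy vertex adds only an average-degree worth of edges while the vertex count grows, the overall density decreases, as noted above.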

Figure 8, which reports the number of dummy vertices added on the x-axis, shows that the runtime of both algorithms E and RE grows linearly with the number of vertices. Of the two, algorithm E is slightly slower than algorithm RE, due to the evaluation of multiple values of the rounding threshold. Even in the worst case, algorithm E terminates in a matter of minutes. The baselines, on the other hand, cannot complete within the timeout we apply: beyond a certain number of additional dummy vertices, no baseline terminates on either dataset. In particular, algorithm FOCG is able to handle in reasonable time only the original versions, with no dummy vertices. It should be noted that algorithm FOCG iteratively finds a polarized structure, removes the corresponding subgraph, and repeats the process on the remaining vertices. While each of these iterations runs efficiently, most of the structures found are too small to be of interest in our setting; the algorithm must therefore complete many such iterations before finding interesting solutions.

5.3. Case study: political debate

We finally analyze the solution extracted by algorithm RE from Referendum, to show the tangible benefits of our problem formulation and algorithms in identifying the two most polarized communities in a signed network modeling a political debate. The Referendum dataset includes Twitter data about the Italian Constitutional Referendum held on December 4, 2016. The original data seed consists of tweets posted between November 24 and December 7, 2016, extended by collecting retweets, quotes, and replies. The users are annotated with a stance reflecting their outlook towards the Referendum: favorable, against, or none (when the stance cannot be inferred). An interaction (edge) is considered negative if it occurred between two users (vertices) of different stances, and positive otherwise; i.e., we treat “none” users as neutral, in agreement with both favorable and against users.
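The sign-assignment rule just described is simple to state in code (a sketch; the function name and stance labels are illustrative):

```python
def edge_sign(stance_u, stance_v):
    """Stances in {'favorable', 'against', 'none'}. An interaction is
    negative only between users of opposite declared stances; 'none'
    users are neutral and agree with everyone."""
    opposed = frozenset({'favorable', 'against'})
    return -1 if frozenset({stance_u, stance_v}) == opposed else 1

assert edge_sign('favorable', 'against') == -1
assert edge_sign('favorable', 'none') == 1
assert edge_sign('against', 'against') == 1
```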

The solution output by algorithm RE consists of two communities accounting for 14% of the overall user set. Both communities have more than 99% positive edges within and 74% negative edges in-between, and are thus highly polarized. Interestingly, all users of the smaller community are classified as favorable to the Referendum, while the users in the larger community are classified as against (75%) or “none” (24%), with only a handful of favorable exceptions. Moreover, the vertices in the solution have, on average, considerably more adjacent edges than the vertices outside, meaning that the solution identifies the “core” of the controversy, i.e., a set of users intensely debating the Referendum. These results provide evidence of the practical value of our problem formulation and algorithms in identifying two communities that are polarized about a given topic.

6. Conclusions and Future Work

Detecting extremely polarized communities might enable fine-grained analysis of controversy in social networks, as well as open the door to interventions aimed at reducing it (21). As a step in this direction, in this paper we introduce the 2-Polarized-Communities problem, which requires finding two communities (subsets of the network vertices) such that within communities there are mostly positive edges while across communities there are mostly negative edges. We prove that the proposed problem is NP-hard and devise two efficient algorithms with provable approximation guarantees. Through an extensive set of experiments on a wide variety of real-world networks, we show how the proposed objective function can be optimized to reveal polarized communities. Our experiments confirm that our algorithms are more accurate, faster, and more scalable than non-trivial baselines.

This work opens several enticing avenues for further inquiry. Some questions follow immediately from our theoretical results. What are the approximation capabilities of Eigensign in signed networks? Can we improve the √n factor of Random-Eigensign, e.g., by appropriately scaling the probability vectors? Finally, it would be interesting to extend the problem to detect an arbitrary number of communities.

The application of the proposed algorithms to real-world networks with positive and negative relationships can have implications for problems in computational social science. For instance, opinion shifts in data streaming from social-media sources can be investigated in terms of polarized communities. Opinions shared among vertices (individuals) belonging to the same community are likely to be reinforced through repeated interactions, while discussions among individuals with antagonistic perspectives may result in both opinion shifts and controversy amplification. Identifying the subgraph induced by the two polarized communities may lead to novel ways of discovering the basic laws behind opinion-shift dynamics. It would therefore be interesting to study extensions of the 2-Polarized-Communities problem in the setting of temporal networks.


  • (1) N. Ailon, M. Charikar, and A. Newman. Aggregating inconsistent information: ranking and clustering. Journal of the ACM (JACM), 55(5):23, 2008.
  • (2) L. Akoglu. Quantifying political polarity based on bipartite opinion networks. In ICWSM, 2014.
  • (3) P. Anchuri and M. Magdon-Ismail. Communities and balance in signed networks: A spectral approach. In ASONAM, 2012.
  • (4) T. Antal, P. Krapivsky, and S. Redner. Social balance on networks: The dynamics of friendship and enmity. Physica D, 224(130), 2006.
  • (5) D. Baldassarri and P. Bearman. Dynamics of political polarization. American sociological review, 72(5):784–811, 2007.
  • (6) N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Machine learning, 56(1-3):89–113, 2004.
  • (7) G. Beigi, J. Tang, and H. Liu. Signed link analysis in social media networks. In ICWSM, 2016.
  • (8) J. Brundidge. Encountering ‘difference’ in the contemporary public sphere: The contribution of the internet to the heterogeneity of political discussion networks. Journal of Communication, 60(4):680–700, 2010.
  • (9) D. Cartwright and F. Harary. Structural balance: a generalization of Heider’s theory. Psychological review, 63(5):277, 1956.
  • (10) N. Cesa-Bianchi, C. Gentile, F. Vitale, and G. Zappella. A correlation clustering approach to link classification in signed networks. In COLT, 2012.
  • (11) M. Charikar. Greedy approximation algorithms for finding dense components in a graph. In International Workshop on Approximation Algorithms for Combinatorial Optimization, pages 84–95, 2000.
  • (12) K.-Y. Chiang, J. J. Whang, and I. S. Dhillon. Scalable clustering of signed networks using balance normalized cut. In CIKM, 2012.
  • (13) Y. Choi, Y. Jung, and S.-H. Myaeng. Identifying controversial issues and their sub-topics in news articles. In Pacific-Asia Workshop on Intelligence and Security Informatics, 2010.
  • (14) L. Chu, Z. Wang, J. Pei, J. Wang, Z. Zhao, and E. Chen. Finding gangs in war from signed networks. In KDD, 2016.
  • (15) T. Coleman, J. Saunderson, and A. Wirth. A local-search 2-approximation for 2-correlation-clustering. In ESA, 2008.
  • (16) M. D. Conover, B. Gonçalves, J. Ratkiewicz, A. Flammini, and F. Menczer. Predicting the political alignment of twitter users. In SocialCom/PASSAT, 2011.
  • (17) F. Cribari-Neto, N. L. Garcia, and K. Vasconcellos. A note on inverse moments of binomial variates. Brazilian Review of Econometrics, 20(2), 2000.
  • (18) J.-M. Esteban and D. Ray. On the measurement of polarization. Econometrica: Journal of the Econometric Society, pages 819–851, 1994.
  • (19) L. Feldman, T. A. Myers, J. D. Hmielowski, and A. Leiserowitz. The mutual reinforcement of media selectivity and effects: Testing the reinforcing spirals framework in the context of global warming. Journal of Communication, 64(4):590–611, 2014.
  • (20) M. Gao, E.-P. Lim, D. Lo, and P. K. Prasetyo. On detecting maximal quasi antagonistic communities in signed graphs. Data mining and knowledge discovery, 30(1):99–146, 2016.
  • (21) K. Garimella, G. De Francisci Morales, A. Gionis, and M. Mathioudakis. Reducing controversy by connecting opposing views. In WSDM, 2017.
  • (22) K. Garimella, G. D. F. Morales, A. Gionis, and M. Mathioudakis. Quantifying controversy on social media. ACM Transactions on Social Computing, 1(1):3, 2018.
  • (23) K. Garrett and N. J. Stroud. Partisan paths to exposure diversity: Differences in pro-and counter-attitudinal news consumption. Journal of Communication, 64(4):680–701, 2014.
  • (24) I. Giotis and V. Guruswami. Correlation clustering with a fixed number of clusters. In SODA, 2006.
  • (25) E. Graells-Garrido, M. Lalmas, and D. Quercia. People of opposing views can share common interests. In WWW, 2014.
  • (26) R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of trust and distrust. In WWW, 2004.
  • (27) F. Harary. On the notion of balance of a signed graph. The Michigan Mathematical Journal, 2(2):143–146, 1953.
  • (28) Y. Hou, J. Li, and Y. Pan. On the Laplacian eigenvalues of signed graphs. Linear and Multilinear Algebra, 51(1):21–30, 2003.
  • (29) Y. P. Hou. Bounds for the least Laplacian eigenvalue of a signed graph. Acta Mathematica Sinica, 21(4):955–960, 2005.
  • (30) J. Kunegis, A. Lommatzsch, and C. Bauckhage. The slashdot zoo: mining a social network with negative edges. In WWW, 2009.
  • (31) J. Kunegis, S. Schmidt, A. Lommatzsch, J. Lerner, E. W. De Luca, and S. Albayrak. Spectral analysis of signed graphs for clustering, prediction and visualization. In SDM, 2010.
  • (32) M. Lai, V. Patti, G. Ruffo, and P. Rosso. Stance evolution and twitter interactions in an italian political debate. In NLDB, 2018.
  • (33) J. Leskovec, D. Huttenlocher, and J. Kleinberg. Predicting positive and negative links in online social networks. In WWW, 2010.
  • (34) J. Leskovec, D. Huttenlocher, and J. Kleinberg. Signed networks in social media. In SIGCHI, 2010.
  • (35) Y. Li, W. Chen, Y. Wang, and Z.-L. Zhang. Influence diffusion dynamics and influence maximization in social networks with friend and foe relationships. In WSDM, 2013.
  • (36) Q. V. Liao and W.-T. Fu. Can you hear me now?: mitigating the echo chamber effect by source position indicators. In CSCW, 2014.
  • (37) Q. V. Liao and W.-T. Fu. Expert voices in echo chambers: effects of source expertise indicators on exposure to diverse opinions. In SIGCHI, 2014.
  • (38) D. Lo, D. Surian, K. Zhang, and E.-P. Lim. Mining direct antagonistic communities in explicit trust networks. In CIKM, 2011.
  • (39) H. Ma, M. R. Lyu, and I. King. Learning to recommend with trust and distrust relationships. In RecSys, 2009.
  • (40) A. W. Marshall, I. Olkin, and B. C. Arnold. Inequalities: theory of majorization and its applications, volume 143. Springer, 1979.
  • (41) S. A. Marvel, S. H. Strogatz, and J. M. Kleinberg. The energy landscape of social balance. Physical Review Letters, 103(19), 2009.
  • (42) Y. Mejova, A. X. Zhang, N. Diakopoulos, and C. Castillo. Controversy and sentiment in online news. CJ’14: Computation+Journalism Symposium, 2014.
  • (43) S. A. Munson, S. Y. Lee, and P. Resnick. Encouraging reading of diverse political viewpoints with a browser widget. In ICWSM, 2013.
  • (44) A.-M. Popescu and M. Pennacchiotti. Detecting controversial events from twitter. In CIKM, 2010.
  • (45) G. J. Puleo and O. Milenkovic. Correlation clustering with constrained cluster sizes and extended weights bounds. SIAM Journal on Optimization, 25(3), 2015.
  • (46) R. Shamir, R. Sharan, and D. Tsur. Cluster graph modification problems. Discrete Applied Mathematics, 144(1-2):173–182, 2004.
  • (47) J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888–905, 2000.
  • (48) C. Swamy. Correlation clustering: maximizing agreements via semidefinite programming. In SODA, 2004.
  • (49) P. Symeonidis, E. Tiakas, and Y. Manolopoulos. Transitive node similarity for link prediction in social networks with positive and negative links. In RecSys, 2010.
  • (50) J. Tang, C. Aggarwal, and H. Liu. Node classification in signed social networks. In SDM, 2016.
  • (51) J. Tang, C. Aggarwal, and H. Liu. Recommendations in signed social networks. In WWW, 2016.
  • (52) J. Tang, Y. Chang, C. Aggarwal, and H. Liu. A survey of signed network mining in social media. ACM Computing Surveys (CSUR), 49(3):42, 2016.
  • (53) P. Victor, C. Cornelis, M. De Cock, and A. M. Teredesai. Trust-and distrust-based recommendations for controversial reviews. IEEE Intelligent Systems, 26(1), 2011.
  • (54) V. Vydiswaran, C. Zhai, D. Roth, and P. Pirolli. Overcoming bias to learn about controversial topics. Journal of the Association for Information Science and Technology, 66(8):1655–1672, 2015.
  • (55) M. Wojcieszak and D. Mutz. Online groups and political discourse: Do online discussion spaces facilitate exposure to political disagreement? Journal of communication, 59(1):40–56, 2009.

Appendix A Appendix

Hardness (Theorem 3.3). In this section, we refer to a solution of 2PC as a pair of vertex subsets, denoting the sets of vertices assigned a +1 or a -1, respectively, in the solution vector. Given a vertex and a subset of vertices, we count the number of positive edges (respectively, negative edges) connecting the vertex to other vertices in the subset.

We exploit the following result in our proof. It can be easily verified by examining the behavior of the cost functions when moving one vertex from one set to the other, so we omit the proof.

Proposition A.1.

If we require , problem 2CC-Full is equivalent to 2CC, i.e., their optimal solutions are the same.

We now prove that 2PC is NP-hard by reduction from 2CC, which has been shown to be NP-hard by Shamir et al. (46).

Proof of Theorem 3.3.

Given a graph as an instance of 2CC, we construct a graph to be an instance of 2PC as follows. For every vertex of the 2CC instance we create a corresponding vertex, and for every edge we add an edge between the corresponding vertices, with the same sign. Furthermore, for every vertex we introduce a clique (of positive edges) and an edge between the vertex and every vertex in the clique. The strategy to prove hardness is the following. We first restrict ourselves to complete solutions of 2PC (i.e., solutions in which every vertex is assigned to one of the two communities), which can of course be mapped to solutions of 2CC. We prove that if one such complete solution optimizes 2PC, the corresponding solution of 2CC is also a maximizer. Second, we show that any optimal solution of the constructed instance of 2PC is complete.

We denote the objective of the problems 2CC and 2PC, on instances and , by and , respectively. We consider a solution of 2CC, and a solution of 2PC, such that and . Let us first restrict our attention to complete solutions of 2PC. Observe that

where the last term is the sum of disagreements in the resulting clustering. Note that this is exactly the objective of the 2CC-Full problem. In other words, the objective of 2PC on the constructed instance is proportional to the objective of 2CC-Full on the original instance, plus a constant. By Proposition A.1, the first part of the proof is complete.

We now consider a complete solution and show that removing vertices leads to no further improvement. Suppose we remove a set of vertices from the solution. We want to show

where the first quantity is the numerator of the 2PC objective, and the second is the net change after removing the vertices (i.e., the number of agreements minus disagreements that are removed). Equivalently, we want to show that this change cannot be positive. We first consider the case in which the removed vertices are in the first of the two sets. Observe that

This upper bound holds because the right-hand side simply counts all possible positive and negative edges, the edges between each original vertex and its clique, and the edges within cliques. It is therefore sufficient to show

After some manipulations and relaxing the condition to remove the dependence on , we arrive at the following sufficient condition:

which holds for a sufficiently large clique size. The case in which the removed vertices are not in the first set can be analyzed in the same manner. We have thus shown that an instance of 2CC can be reduced to a polynomially-sized instance of 2PC. ∎

Tight example for Random-Eigensign. We consider a complete graph where all edges are positive, except for one Hamiltonian cycle comprised of negative edges. Without loss of generality, we can order the vertices so that the adjacency matrix is

That is, the matrix consists entirely of ones, except for the subdiagonal and superdiagonal entries, which are -1, and the two corner entries, also -1, which close the negative Hamiltonian cycle. It is easy to see that a constant vector is an eigenvector of this matrix, and that its eigenvalue is the largest one whenever the graph is large enough.

Note that the constant vector is a feasible solution for 2PC. We now bound the value attained by Random-Eigensign.
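The structure of this matrix is easy to verify numerically. Taking the description literally (ones on the diagonal as well), the constant vector has eigenvalue n - 4; with a zero diagonal it would be n - 5, without affecting the argument. A quick check:

```python
import numpy as np

n = 10  # any n above 8 makes the constant eigenvector dominant here
A = np.ones((n, n))
for i in range(n):
    j = (i + 1) % n          # negative Hamiltonian-cycle edge (i, j)
    A[i, j] = A[j, i] = -1.0

ones = np.ones(n)
# each row has two -1 entries and n-2 ones, so A @ 1 = (n-4) * 1
assert np.allclose(A @ ones, (n - 4) * ones)
# the remaining (circulant) eigenvalues are at most 4 in this
# convention, so n-4 is the leading eigenvalue for n > 8
assert np.isclose(np.linalg.eigvalsh(A).max(), n - 4)
```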

We first rely on Equality (4) to obtain the following:

Now, observe that given , is constant for all . Thus, for arbitrary ,


Observe that, when all entries of the eigenvector are equal in absolute value, the number of non-zero entries of the rounded solution is a binomial variable. Thus, by Jensen’s inequality we have
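The Jensen step can be sanity-checked numerically: since t ↦ 1/(t+1) is convex, E[1/(X+1)] ≥ 1/(E[X]+1) for a binomial X (the +1 offset avoids division by zero here; the exact quantity bounded in the derivation may differ). A classical closed form for this inverse moment also exists:

```python
from math import comb

def inv_moment(n, p):
    """E[1/(X+1)] for X ~ Binomial(n, p), by direct enumeration."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) / (k + 1)
               for k in range(n + 1))

n, p = 20, 0.3
lhs = inv_moment(n, p)
# Jensen: E[1/(X+1)] >= 1/(E[X]+1)
assert lhs >= 1.0 / (n * p + 1)
# classical closed form: E[1/(X+1)] = (1 - (1-p)^(n+1)) / ((n+1) p)
assert abs(lhs - (1 - (1 - p)**(n + 1)) / ((n + 1) * p)) < 1e-12
```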

Furthermore, it is known (17) that

That is,

Combining this with Equality (A) we get

Acknowledgments. Francesco Bonchi acknowledges support from Intesa Sanpaolo Innovation Center. Aristides Gionis and Bruno Ordozgoiti were supported by three Academy of Finland projects (286211, 313927, and 317085), and the EC H2020 RIA project “SoBigData” (654024).