Information Consumption and Social Response in a Segregated Environment: the Case of Gab

Most information operations involve users who may foster polarization and distrust toward science and mainstream journalism without being conscious of their role. Gab is well known to be an extremist-friendly platform that performs little moderation of posted content. It thus represents an ideal benchmark for studying phenomena potentially related to polarization, such as misinformation spreading. The combination of these factors may lead to hate as well as to episodes of harm in the real world. In this work we provide a characterization of the interaction patterns on Gab around the COVID-19 topic. To assess the spreading of different content types, we analyze consumption patterns based on both interaction type and source reliability. Overall, we find no strong statistical differences in the social response to questionable and reliable content, both of which follow a power law distribution. However, questionable and reliable sources display structural and topical differences in the use of hashtags. The commenting behaviour of users, in terms of both lifetime and sentiment, reveals that questionable and reliable posts are perceived in the same manner. We can conclude that, despite evident differences between questionable and reliable posts, Gab users do not make such a differentiation, treating them as a whole. Our results provide insights toward the understanding of coordinated inauthentic behavior and the early warning of information operations.





1. Introduction

Social media platforms play a crucial role in the public sphere, shaping public discussion on a wide range of topics including politics, health, climate change, economics and migration (Bessi et al., 2015b; Chou et al., 2018; Bovet and Makse, 2019; Del Vicario et al., 2017a). Users online have shown a tendency to a) acquire information adhering to their system of beliefs (Bessi et al., 2015a), b) ignore dissenting information (Zollo et al., 2017), and c) form polarized groups around a shared narrative (Del Vicario et al., 2016b). One of the dominant traits of online social dynamics, indeed, is polarization (Vicario et al., 2019). Divided into echo chambers, users value coherence with their preferred narrative rather than the truth value of the information (Del Vicario et al., 2016a; Conti et al., 2017). This scenario creates the perfect incubator for information operations (Cinelli et al., 2019). Among the most pressing issues is the spread of fictitious and low-quality information (e.g., fake news, rumors, hoaxes). Questionable means are often used to push public opinion toward polarization or to sow distrust in governments and institutions (Del Vicario et al., 2016a). The spread of low-quality information is sometimes carried out by groups of coordinated or automated accounts that pollute and tamper with our social environments by injecting and sharing a large number of targeted messages (Ferrara et al., 2016), i.e., what Facebook calls "coordinated inauthentic behavior" (Gleicher, 2018). Although some studies have focused on the interplay between false and real information (Vosoughi et al., 2018), the main point to understand is how information fits into a larger disinformation campaign (Starbird, 2019; Cinelli et al., 2019).
Most information operations involve users who are not aware of their role but who may foster polarization and distrust toward science and mainstream journalism (Bail et al., 2018; Hagen et al., 2020; Schmidt et al., 2017; Del Vicario et al., 2017b).

In this paper we characterize user behavior on an extremist social media platform. All social media platforms present distinct features, such as the type of content and the interaction options available to users (Cinelli et al., 2020b, a). Along with mainstream platforms like Facebook, Twitter, YouTube and Instagram, niche ones such as Gab or Reddit have been created. These platforms differ strongly in various aspects, mainly related to the political leaning of the user base and to the content regulation policy implemented. The latter element is of crucial importance, being strongly linked to the risk of opinion polarization and to the development of hateful content (Cinelli et al., 2020b). Within such niche media, biased information proliferates (Cinelli et al., 2020c), and users, either interested in joining the community or banned from other social media, tend to develop a feeling of community belonging with like-minded individuals, i.e., an echo chamber (Del Vicario et al., 2016a). Hence, studying users' interaction patterns on platforms like Gab becomes of primary interest in order to shed light on the dynamics of content production and information spreading in such segregated environments.

In this study, we explore different aspects related to the spreading of both questionable and reliable content on Gab, namely: the users' reactions, the topics embedded in hashtag networks, and the users' sentiment. In more detail, we first focus on the differences between interaction types in terms of frequency and time. Then we build statistically validated hashtag co-occurrence networks, assessing the topological differences between questionable and reliable content. Finally, we analyze the interplay between the sentiment of comments and commenting behavior with respect to the questionability of the information source.

The paper is organized as follows: Section 2 describes the current state of the art on non-mainstream social media, focusing in particular on Gab. Section 3 describes the mathematical tools behind the analysis. Section 4 describes how the results are obtained, while Section 5 summarizes the results and discusses possible applications.

2. State of the art

A wide research effort has been devoted to characterizing online information operations (Cinelli et al., 2019; Shu et al., 2017; Gitari et al., 2015; Waltz, 1998; Heickerö, 2010), especially in the case of terrorism (Theohary, 2011; Ingram, 2015; Burns and Eltham, 2009). Several works have addressed the analysis of social behaviors to detect features that could anticipate and thus inhibit information disorders (Wardle and Derakhshan, 2017; Vicario et al., 2019; MacNulty and Ryan, 2016; Chew and Turnley, 2017). Most of the tactics tend to exploit the confirmation bias of users (Del Vicario et al., 2016a) to foment heated debates (Waldron, 2012; Alsaad et al., 2018). Niche social media performing little regulation of their content seem to be the ideal environment for triggering polarization dynamics that can eventually turn into acts of harm, even offline (CNN, [n.d.]).

Gab is an online social platform describing itself as "A social network that champions free speech, individual liberty and the free flow of information online. All are welcome" (gab, [n.d.]). Such a claim, together with the political leaning of its founders and developers, made Gab the "safe haven" of the alt-right movement. However, low moderation and regulation of content has resulted in widespread hate speech and fake news. For these reasons, Gab has been repeatedly suspended by its service provider, and its mobile app has been banned from both the App and Play stores (Zannettou et al., 2018). Gab has attracted the interest of researchers due to its permissive content regulation policy and the political leaning of its users. In (Lima et al., 2018) the authors analyze the content shared on Gab and the leaning of its users, finding a rather homogeneous environment prone to sharing right-biased content. The authors of (Zannettou et al., 2018) characterize Gab in terms of user leaning and shared content, suggesting that it is more similar to a safe place for right-wing extremists than to an environment where free speech is protected. Moreover, a topological analysis performed by the authors of (Cinelli et al., 2020b) reveals that Gab users appear as one quite homogeneous cluster biased to the right. Further, differently from other platforms such as Twitter and Facebook, Gab lacks users with a leaning opposite to the most popular one. Overall, these studies suggest that Gab can be considered a homogeneous environment where biased content and misinformation may easily proliferate.

3. Preliminaries and Definitions


The basis for the conceptualization of a network is a graph $G = (V, E)$, with $V$ being the set of nodes and $E$ the set of edges. The nodes are denoted as $v_i$ or, similarly, $i$, and the edge that formalizes the connection between $i$ and $j$ is denoted as $(i, j)$.

With $A$ we denote the adjacency matrix, an $N \times N$ binary matrix taking values 0 or 1, where the element $a_{ij} = 1$ if nodes $i$ and $j$ are connected and $a_{ij} = 0$ otherwise. The degree of node $i$ is $k_i = \sum_j a_{ij}$, and it quantifies the number of neighbours of node $i$; the number of links in the graph is $L = \frac{1}{2}\sum_i k_i$. A graph that respects the last formalized equality, called the Handshaking Lemma, is an undirected graph. Another instance that we take into account is the bipartite graph, i.e., a graph in which the vertex set $V$ is the union of two disjoint independent sets $V_1$ and $V_2$, called the partitions of $V$. The equivalent of the adjacency matrix for a bipartite graph with $|V_1| = m$ and $|V_2| = n$ is a rectangular $m \times n$ matrix called the incidence matrix $B$, taking values 0 or 1, where the element $b_{ij} = 1$ if nodes $i \in V_1$ and $j \in V_2$ are connected. A bipartite graph can be easily projected onto one of its partitions by performing an operation called one-mode projection, formalized as the product $P = B B^T$ when projecting onto the partition of size $m$, and $P = B^T B$ when projecting onto the partition of size $n$. $P$ is a symmetric matrix whose off-diagonal elements $p_{ij}$ are nonnegative numbers representing the number of common links of nodes $i$ and $j$ to the partition of size $n$ (or $m$). The diagonal elements of $P$ are also nonnegative numbers, representing the degree of node $i$ in the bipartite graph. Since the elements on the diagonal of $P$ have a different meaning with respect to the elements away from the diagonal, it is common practice to set the diagonal elements $p_{ii} = 0$. After such a treatment, $P$ can also be called a co-occurrence matrix, since two elements are interconnected if they are co-connected to at least one node of the partition they do not belong to. In addition, the number of co-connections between $i$ and $j$ is represented by the link weight, i.e., by the element $p_{ij}$ of the matrix $P$.
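As a concrete illustration of the one-mode projection just described, the following sketch builds $P = B^T B$ for a toy post-hashtag incidence matrix and zeroes the diagonal (the matrix and its size are ours, purely illustrative, not the paper's data):

```python
import numpy as np

# Toy incidence matrix B of a post-hashtag bipartite graph:
# rows = 3 posts, columns = 4 hashtags (purely illustrative).
B = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 0, 1]])

# One-mode projection onto the hashtag partition: P = B^T B.
# The off-diagonal entry P[i, j] counts the posts in which
# hashtags i and j co-occur, i.e., the link weight.
P = B.T @ B

# The diagonal holds the hashtags' degrees in the bipartite graph;
# since its meaning differs from the off-diagonal entries, set it to 0.
np.fill_diagonal(P, 0)

print(P)
```

Here hashtag 1 co-occurs once with each of the others (it appears in every post), while hashtags 0, 2 and 3 never co-occur with one another.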

Hashtag Co-occurrence Network

Starting from the set $P$ of all posts and the set $H$ of the hashtags found in those posts, we create the bipartite network $T$, whose nodes are posts and hashtags. A link between a hashtag $h \in H$ and a post $p \in P$ exists if the hashtag $h$ is used inside the post $p$. From the bipartite network we create the hashtag co-occurrence weighted network by projecting the bipartite network onto the partition $H$. Additionally, such a network is statistically validated using the methodology presented in (J et al., 2011; Bovet et al., 2018). The result is hence a co-occurrence hashtag network where two hashtags are connected if their co-occurrence in the set of posts is statistically significant.

Data Collection

Gab provides a search tool that returns a list of users, hashtags and groups related to the queried keyword. Starting from Google Trends results, we select a set of popular search keywords that we feed into the Gab search tool, and we download all hashtags linked to them. We then inspect the results and manually filter them based on the hashtags' meaning. Finally, we download all the posts, with their comments, linked to each selected hashtag. We end up with 116343 posts, associated with 16144 different hashtags, that received 96757 likes, 20001 comments and 60563 reblogs by 4293 users in the period 01/01/2020 – 31/03/2020.

Questionable and Reliable Sources

In order to evaluate the questionability of information circulating on Gab we employ a source-based approach. We build a dataset of news outlet website domains in which each domain is labeled either as "questionable" or "reliable" according to the classification of the respective news outlet provided by the fact-checking organization MediaBias/FactCheck (MBFC). MBFC has been frequently used for source classification (Bovet et al., 2018; Atanasov et al., 2019; Cinelli et al., 2020b). It provides a classification determined by ranking bias in four different categories: Biased Wording/Headlines, Factual/Sourcing, Story Choices and Political Affiliation. A score is assigned to each category per news outlet, and the average score determines the bias of the outlet, as explained in the Methodology section of the website. Each news outlet is associated with a label that refers either to a political bias, namely Right, Right-Center, Least-Biased, Left-Center and Left, or to its reliability, expressed in three labels: Conspiracy-Pseudoscience, Pro-Science or Questionable. Noticeably, the Questionable set includes a wide range of political biases, from Extreme Left to Extreme Right. For instance, the Right label is associated with Fox News, the Questionable label with Breitbart (the well-known extreme right outlet) and the Pro-Science label with Science. Using this classification, we divide the news outlets into Questionable outlets and Reliable outlets: all outlets already classified as Questionable or belonging to the Conspiracy-Pseudoscience category are labelled as Questionable; the rest are labelled as Reliable.
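The labelling rule above can be sketched as follows (the domains and ratings below are invented placeholders, not entries of the actual MBFC list):

```python
# Hypothetical excerpt of an MBFC-style rating table (placeholder domains).
MBFC_RATING = {
    "example-science.org": "Pro-Science",
    "example-news.com": "Left-Center",
    "example-tabloid.com": "Questionable",
    "example-conspiracy.net": "Conspiracy-Pseudoscience",
}

def label(domain):
    """Questionable if rated Questionable or Conspiracy-Pseudoscience,
    Reliable for any other rating, None if the domain is not listed."""
    rating = MBFC_RATING.get(domain)
    if rating is None:
        return None
    if rating in ("Questionable", "Conspiracy-Pseudoscience"):
        return "Questionable"
    return "Reliable"

print(label("example-conspiracy.net"))  # Questionable
print(label("example-news.com"))        # Reliable
```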

Considering all the 2637 news outlets that we retrieve from the list provided by MBFC, we end up with 800 outlets classified as Questionable and 1837 outlets classified as Reliable.

Hashtag Questionability Index

In order to measure the extent to which a hashtag is used in posts associated with either reliable or questionable content, we introduce a measure called questionability. The measure is defined in the range $[0, 1]$: it equals 0 when a hashtag is used exclusively in posts associated with reliable sources, and it equals 1 when a hashtag is used only in posts associated with questionable sources.

Formally, hashtag questionability can be defined as follows: let $P_l$ be the set of all posts with a URL matching a domain in our dataset and $H$ the set containing all the hashtags. To each element $p_j \in P_l$ a binary value $q_j$ is associated based on the domain of the link it contains: if the URL refers to a domain classified as questionable then $q_j = 1$, otherwise $q_j = 0$. Considering a hashtag $h_i \in H$ in the bipartite network $T$, the questionability index of hashtag $h_i$ can be defined as:

$$Q_i = \frac{1}{k_i} \sum_{j=1}^{k_i} q_j,$$

where $k_i$ is the degree of hashtag $h_i$ in the bipartite network and $q_j$ is the questionability score of the $j$-th neighbour of hashtag $h_i$.
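A minimal sketch of this index, assuming the bipartite network is stored as a hashtag-to-posts mapping with per-post binary labels (all names and toy data are ours, purely illustrative):

```python
# q_j per post: 1 = questionable source, 0 = reliable source (toy labels).
post_label = {"p1": 1, "p2": 0, "p3": 1, "p4": 0}

# Neighbourhood of each hashtag in the bipartite network (toy data).
hashtag_posts = {
    "plandemic": ["p1", "p3"],            # used only in questionable posts
    "outbreak": ["p2", "p4"],             # used only in reliable posts
    "covid": ["p1", "p2", "p3", "p4"],    # used in both
}

def questionability(hashtag):
    """Q_i: mean of q_j over the k_i posts in which the hashtag appears."""
    posts = hashtag_posts[hashtag]
    return sum(post_label[p] for p in posts) / len(posts)

print(questionability("plandemic"))  # 1.0: only questionable posts
print(questionability("outbreak"))   # 0.0: only reliable posts
print(questionability("covid"))      # 0.5: used equally in both
```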

4. Results and Discussion

In this section we analyze and compare how users perceive news in terms of reactions to posts, topics embedded in hashtag networks and users’ sentiment.

4.1. Consumption Patterns

We investigate the news consumption and the activity of Gab users by considering a dataset of posts related to the COVID-19 pandemic. As shown in Figure 1, users tend to prefer types of interaction that are more immediate and less cognitively demanding (Levens et al., 2020). Indeed, the left panel of Figure 1 shows that Likes are the most frequent way to engage, followed by Reblogs and Replies. The same behaviour is also confirmed by the cumulative number of interactions during the analyzed period. In this case, the difference between Likes and Reblogs is less accentuated until the beginning of February, with all distributions following an incremental trend that is comparable with consumption patterns from other social media (Cinelli et al., 2020c).

Figure 1. Frequency distribution of interactions with posts (left) and their cumulative engagement (right). A like is usually a positive feedback on a news item. A reblog indicates a desire to spread a news item to friends. A reply can have multiple features and meanings and can generate collective debate. The left panel displays that every kind of reaction follows a heavy-tailed distribution that allows room for large deviations, i.e., some posts go viral. The right panel displays the evolution of the cumulative number of interactions over time. The trend is always increasing, with a rapid increase at the beginning of February that is likely connected to the onset of the COVID-19 infodemic. Both plots show that likes are the preferred type of interaction and that interaction frequencies are inversely proportional to the amount of cognitive effort required.

The consumption pattern can also be analyzed considering the categorization of posts into questionable and reliable. Panel 1(a) of Figure 2 displays the distribution of like reactions to questionable and reliable content which, overall, show a rather similar behaviour. In fact, both types of posts can receive a disproportionate amount of likes.

Panel 1(b) of Figure 2 shows the probability distributions of the number of likes to posts by category, with their corresponding fits. To check that those fits follow a power law distribution (Clauset et al., 2009), we perform a bootstrap procedure (Gillespie, 2015) for both categories. After estimating the lower bound $x_{\min}$ and the exponent $\alpha$, we compute $n_1$, i.e., the number of values below $x_{\min}$ in the initial dataset, and $n_2$, i.e., the difference between the total cardinality and the previously calculated $n_1$. We then perform a number of randomized instances and, for each of them, a Kolmogorov-Smirnov test between the uniform distribution, containing $n_1$ values, and the power law distribution with parameter $\alpha$, containing $n_2$ values. The resulting p-values, for both the distribution of likes to questionable posts and that of likes to reliable posts, allow us to accept the hypothesis that the data are generated by a power law distribution. The estimated exponent is 3.36 and 3.34 for questionable and reliable sources respectively, implying the presence of very large deviations in the number of likes for both categories.
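A hedged sketch of a goodness-of-fit check in this spirit (a simplified variant, not the authors' exact bootstrap: here $x_{\min}$ is fixed rather than estimated, $\alpha$ is fit by maximum likelihood on a synthetic power-law sample, and the fit is scored by the Kolmogorov-Smirnov distance):

```python
import math
import random

def alpha_mle(data, x_min):
    """Maximum-likelihood exponent for a continuous power law on x >= x_min."""
    tail = [x for x in data if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def ks_distance(data, x_min, alpha):
    """KS distance between the empirical tail CDF and the fitted power law."""
    tail = sorted(x for x in data if x >= x_min)
    n = len(tail)
    d = 0.0
    for i, x in enumerate(tail):
        cdf = 1.0 - (x / x_min) ** (1.0 - alpha)  # power-law CDF F(x)
        d = max(d, abs(cdf - i / n), abs((i + 1) / n - cdf))
    return d

# Synthetic sample from a power law with exponent 3.3 (inverse-CDF sampling).
random.seed(0)
x_min, alpha_true = 1.0, 3.3
sample = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(5000)]

alpha_hat = alpha_mle(sample, x_min)
d = ks_distance(sample, x_min, alpha_hat)
print(round(alpha_hat, 2), round(d, 3))
```

A small KS distance, compared against the distances obtained from synthetic samples of the fitted model, is what supports accepting the power-law hypothesis.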

Panels 1(c) and 1(d) of Figure 2 show the temporal evolution of the cumulative and average number of likes to questionable and reliable content. The matching between the two curves observed in panel 1(c) is due to an increase in the number of reliable posts rather than to an increase in users' endorsement of such posts. Indeed, as confirmed by panel 1(d) of Figure 2, questionable posts receive on average a higher number of likes. Nonetheless, the surge in the number of reliable posts at the beginning of February is interesting: it could be related either to a growing concern about the global pandemic or to a growing debate around reliable news. This surge, however, is not reflected in a corresponding growth in the number of likes, showing a constant interest of Gab users in questionable sources.

Figure 2. Panel a: distribution of the frequency of likes obtained by posts related to questionable and reliable sources. Panel b: probability distribution of the number of likes to posts related to questionable and reliable sources. Panel c: cumulative number of likes over time for questionable and reliable sources. Panel d: average number of likes per post over time for questionable and reliable sources. The average is computed using a time window that contains all the posts since January 1st. Posts from both source types are similar in terms of likes' distribution, while their temporal evolution shows a differentiation starting at the beginning of February.

4.2. Comparing Questionable and Reliable Hashtags

Hashtags are a good proxy for the semantic and topical elements of posts. Therefore, investigating the interplay between the use of hashtags and the diversity of information sources may unveil the narratives related to questionable and reliable content. To achieve this goal, we consider the hashtags that appear in labelled posts. These hashtags can be divided into three categories: those appearing only in questionable posts, those appearing only in reliable posts, and those appearing in both types of posts. The hashtag questionability, described in Equation 1, follows a multimodal distribution with peaks located at the extreme values, as shown in Figure 3.

Figure 3. Distribution of questionability between hashtags. The two peaks at the extremes suggest there are recurrent hashtags for posts belonging to questionable or reliable sources.
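The three-way split described above reduces to simple set operations, sketched here on toy data (hashtag and post names are invented for illustration):

```python
# Hashtags per post, split by source label (toy data).
questionable_posts = {"p1": {"plandemic", "covid"}, "p2": {"hoax"}}
reliable_posts = {"p3": {"outbreak", "covid"}, "p4": {"school"}}

hq = set().union(*questionable_posts.values())  # hashtags in questionable posts
hr = set().union(*reliable_posts.values())      # hashtags in reliable posts

only_questionable = hq - hr   # questionability 1 by construction
only_reliable = hr - hq       # questionability 0 by construction
shared = hq & hr              # used in both kinds of posts

print(sorted(only_questionable), sorted(only_reliable), sorted(shared))
```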

We use these sets to build the corresponding co-occurrence networks, following the procedure described in Section 3; the networks are represented in Figure 4. The ten most prominent hashtags of the largest connected component, in terms of degree, are reported in Table 1. The networks related to purely questionable and purely reliable content display a strongly disconnected structure made up of multiple connected components. In the case of questionable sources we have a decentralized structure in which the largest connected component accounts for only a fraction of the total network. We also notice that most of the other connected components are organized as cliques. This highlights that questionable news has its own dialect in terms of hashtags. In the case of reliable sources we have a more centralized structure due to the contribution of its largest connected component, which accounts for a larger share of the reliable hashtags, revealing a different structure with respect to the purely Questionable ones.

Questionable Reliable Intersection
hashtag degree hashtag degree hashtag degree
brands 7 outbreak 763 pandemic 235
apology 7 tc 422 cdc 200
dominic 7 wwg1wga 236 who 195
raab 7 kag 173 news 183
thanks 7 donaldjohntrump 163 cia 166
dominance 7 Coronavirus 147 trump 164
key 7 boingboing 141 health 156
shitholecountry 7 startups 137 virus 155
- - walkaway 133 democrats 148
- - school 122 maga 147
Table 1. Top 10 hashtags, by degree, in the largest connected component of each network.

The investigation of these two networks is then extended by looking at the most central hashtags. For the purely questionable network, the hashtags with the highest degree are mostly associated with political facts and frustration about the current pandemic. In fact, hashtags referring to Dominic Raab (the First Secretary of State in the U.K.) were used, as well as hashtags like dominance or shitholecountry. For the purely reliable network the most central hashtags are mostly pandemic-related, e.g., outbreak and school. However, alt-right hashtags such as wwg1wga, which is generally associated with the QAnon movement, also play an important role.

The right panel of Figure 4 displays the co-occurrence network related to hashtags that are significantly used in both types of posts. In this case the network has a largest connected component of 2054 nodes, larger than in the previous cases, accounting for about 98% of the total number of hashtags in the subset. The set of hashtags used in this case is also more general and related either to COVID-19 (e.g., pandemic, WHO, health) or to politics (e.g., trump, maga, democrats).

Figure 4. Projections of Post-Hashtag bipartite networks. Top Left: representation of the projection that contains only questionable hashtags, i.e., they are only used in posts from questionable sources. Top Right: representation of the projection that contains only reliable hashtags, i.e., they are only used in posts from reliable sources. Bottom: representation of the projection containing questionable and reliable posts which have at least one hashtag in common.

4.2.1. Characterizing Commenting Behaviour for Questionable and Reliable posts

In order to understand how news is perceived, we investigate the commenting behaviour of users by means of the sentiment expressed in the comments on questionable and reliable posts. We first pre-process the text of the comments via lemmatization, and we use the Bing Lexicon (Hu and Liu, 2004), a list containing around 6800 terms related to opinions and sentiments divided by category (Positive or Negative), to obtain the sentiment of each comment. The sentiment can be simply computed considering the number of positive and negative terms, $P$ and $N$ respectively, by means of the following equation:

$$s = \frac{P - N}{P + N}.$$

Figure 5. Sentiment distribution for Questionable (red) and Reliable (blue) post’s comments.

Notice that $s \in [-1, 1]$ for every comment, where $s = -1$ means that the comment contains only negative terms, $s = 0$ that terms are equally distributed between positives and negatives, and $s = 1$ that the comment contains only positive terms.
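A minimal sketch of this lexicon-based score, using a tiny stand-in word list (the actual analysis uses the full Bing lexicon and lemmatized text):

```python
# Tiny illustrative stand-ins for the Positive/Negative lexicon categories.
POSITIVE = {"good", "great", "safe", "trust"}
NEGATIVE = {"bad", "hoax", "fear", "corrupt"}

def sentiment(comment):
    """s = (P - N) / (P + N); 0.0 if the comment matches no lexicon term."""
    words = comment.lower().split()
    p = sum(w in POSITIVE for w in words)
    n = sum(w in NEGATIVE for w in words)
    return (p - n) / (p + n) if p + n else 0.0

print(sentiment("great news trust the experts"))  # only positive terms -> 1.0
print(sentiment("bad hoax"))                      # only negative terms -> -1.0
print(sentiment("good but bad"))                  # balanced -> 0.0
```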

By computing the sentiment of comments on questionable and reliable posts, we obtain the distributions shown in Figure 5. Noticeably, there is no difference between the sentiment of comments under Questionable and Reliable sources. Furthermore, negative sentiment dominates user comments, with less pronounced peaks at positive and neutral values. To assess the difference between the two distributions, we perform a Kolmogorov-Smirnov test (Clauset et al., 2009) that reveals no significant difference between them (p = 0.73).

To provide further insights into the commenting behavior of users under questionable and reliable posts, we model the persistence of users commenting repeatedly under posts of the same category. The modeling is performed by means of Kaplan-Meier estimates of two survival functions: the first relies on the time span between a user's first and last comment, i.e., the lifetime of a user with respect to comments, while the second takes into account the number of comments of a user. Figure 6 shows the Kaplan-Meier estimates of the survival functions grouped by category for the two cases. Survival curves based on comment lifetime appear very similar (Figure 5(a)), while the curves computed from the number of comments (Figure 5(b)) seem to present a slightly lower survival probability for comments to questionable posts. In spite of the latter observation, a LogRank test (Peto and Peto, 1972) detects no significant difference between the two survival functions. Thus we can state that the two categories are not significantly different in terms of survival probabilities: questionable and reliable sources are perceived in the same way by users on Gab.
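For illustration, an uncensored Kaplan-Meier estimate over comment lifetimes can be sketched as follows (a simplified product-limit estimator on invented numbers; the paper's analysis additionally handles real data and the LogRank comparison):

```python
def kaplan_meier(durations):
    """Product-limit estimator: returns (t, S(t)) pairs, assuming no censoring."""
    at_risk, surv, curve = len(durations), 1.0, []
    for t in sorted(set(durations)):
        events = durations.count(t)      # users whose last comment falls at t
        surv *= 1.0 - events / at_risk   # product-limit update
        curve.append((t, surv))
        at_risk -= events
    return curve

# Toy lifetimes in days (time between a user's first and last comment).
lifetimes = [1, 1, 2, 3, 3, 3, 5, 8]
for t, s in kaplan_meier(lifetimes):
    print(t, round(s, 3))
```

Comparing two such curves (one per source category) is what the LogRank test formalizes.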

Figure 6. Panel a: Kaplan-Meier estimates of survival functions computed using user lifetime, i.e., time span between user’s first and last comment. Panel b: Kaplan-Meier estimates of survival functions computed using the number of comments per user. Distributions are statistically indistinguishable in both cases, revealing the independence of comments’ persistence from source questionability.

5. Conclusions

In this work we investigate the consumption patterns of users in a segregated environment, using the COVID-19 topic as a use case. We characterize users' engagement with posts in terms of interactions and how it evolves over time as the pandemic arises. Furthermore, we classify posts into Questionable and Reliable categories depending on the questionability of the information source. We investigate users' endorsement of both categories in terms of social response and its evolution over time, focusing on differences and similarities in user behavior. We also exploit a statistical approach to build several hashtag networks divided by source questionability. We analyze the hashtag networks from a topological perspective and discuss the differences related to source type. Finally, we consider comments from both categories and study whether the questionability of the information source influences the distribution of the comments' sentiment and users' persistence in commenting. Our analysis shows that users prefer less cognitively demanding interactions such as Likes, and that their attention to questionable and reliable sources changes over time: initially, Gab users tend to prefer questionable sources, but they switch to reliable ones as the pandemic advances. In terms of hashtag associations through posts, the topological analysis reveals significant differences between reliable and questionable sources in terms of both structure and semantic content.

However, the distribution of the sentiment derived from the analysis of the comments reveals a rather similar pattern between reliable and questionable sources. Indeed, both distributions peak at negative sentiment values, revealing that the perception of the news does not depend on the source type. Thus, our results show that the way in which users process information in a segregated environment such as Gab is homogeneous and does not depend on the source. The indifference of Gab users to the source, in terms of both endorsement and sentiment dynamics, seems to provide further evidence for a mechanism of reinforcement that tends to interpret every news item within a collective narrative, as typically found in echo chambers.


  • gab ([n.d.]) [n.d.]. Gab.
  • Alsaad et al. (2018) Abdallah Alsaad, Abdallah Taamneh, and Mohamad Noor Al-Jedaiah. 2018. Does social media increase racist behavior? An examination of confirmation bias theory. Technology in Society 55 (2018), 41–46.
  • Atanasov et al. (2019) Atanas Atanasov, Gianmarco De Francisci Morales, and Preslav Nakov. 2019. Predicting the Role of Political Trolls in Social Media. arXiv preprint arXiv:1910.02001 (2019).
  • Bail et al. (2018) Christopher A Bail, Lisa P Argyle, Taylor W Brown, John P Bumpus, Haohan Chen, MB Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences 115, 37 (2018), 9216–9221.
  • Bessi et al. (2015a) Alessandro Bessi, Mauro Coletto, George Alexandru Davidescu, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2015a. Science vs conspiracy: Collective narratives in the age of misinformation. PloS one 10, 2 (2015), e0118093.
  • Bessi et al. (2015b) Alessandro Bessi, Fabiana Zollo, Michela Del Vicario, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2015b. Trend of narratives in the age of misinformation. PloS one 10, 8 (2015).
  • Bovet and Makse (2019) Alexandre Bovet and Hernán A Makse. 2019. Influence of fake news in Twitter during the 2016 US presidential election. Nature communications 10, 1 (2019), 1–14.
  • Bovet et al. (2018) Alexandre Bovet, Flaviano Morone, and Hernán A Makse. 2018. Validation of Twitter opinion trends with national polling aggregates: Hillary Clinton vs Donald Trump. Scientific reports 8, 1 (2018), 1–16.
  • Burns and Eltham (2009) Alex Burns and Ben Eltham. 2009. Twitter free Iran: An evaluation of Twitter’s role in public diplomacy and information operations in Iran’s 2009 election crisis. (2009).
  • Chew and Turnley (2017) Peter A Chew and Jessica G Turnley. 2017. Understanding Russian information operations using unsupervised multilingual topic modeling. In International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation. Springer, 102–107.
  • Chou et al. (2018) Wen-Ying Sylvia Chou, April Oh, and William MP Klein. 2018. Addressing health-related misinformation on social media. Jama 320, 23 (2018), 2417–2418.
  • Cinelli et al. (2020a) Matteo Cinelli, Emanuele Brugnoli, Ana Lucia Schmidt, Fabiana Zollo, Walter Quattrociocchi, and Antonio Scala. 2020a. Selective exposure shapes the facebook news diet. PloS one 15, 3 (2020), e0229129.
  • Cinelli et al. (2019) Matteo Cinelli, Mauro Conti, Livio Finos, Francesco Grisolia, Petra Kralj Novak, Antonio Peruzzi, Maurizio Tesconi, Fabiana Zollo, and Walter Quattrociocchi. 2019. (Mis) Information Operations: An Integrated Perspective. Journal of Information Warfare (2019).
  • Cinelli et al. (2020b) Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. 2020b. Echo Chambers on Social Media: A comparative analysis. arXiv preprint arXiv:2004.09603 (2020).
  • Cinelli et al. (2020c) Matteo Cinelli, Walter Quattrociocchi, Alessandro Galeazzi, Carlo Michele Valensise, Emanuele Brugnoli, Ana Lucia Schmidt, Paola Zola, Fabiana Zollo, and Antonio Scala. 2020c. The covid-19 social media infodemic. arXiv preprint arXiv:2003.05004 (2020).
  • Clauset et al. (2009) Aaron Clauset, Cosma Rohilla Shalizi, and M. E. J. Newman. 2009. Power-Law Distributions in Empirical Data. SIAM Rev. 51, 4 (2009), 661–703. arXiv:
  • CNN ([n.d.]) CNN. [n.d.]. Gab, the social network used by the Pittsburgh suspect, has been taken offline. ([n. d.]).
  • Conti et al. (2017) Mauro Conti, Daniele Lain, Riccardo Lazzeretti, Giulio Lovisotto, and Walter Quattrociocchi. 2017. It’s always April fools’ day!: On the difficulty of social network misinformation classification via propagation features. In 2017 IEEE Workshop on Information Forensics and Security (WIFS). IEEE, 1–6.
  • Del Vicario et al. (2016a) Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi. 2016a. The spreading of misinformation online. Proceedings of the National Academy of Sciences 113, 3 (2016), 554–559. arXiv:
  • Del Vicario et al. (2017a) Michela Del Vicario, Sabrina Gaito, Walter Quattrociocchi, Matteo Zignani, and Fabiana Zollo. 2017a. News consumption during the Italian referendum: A cross-platform analysis on facebook and twitter. In

    2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA)

    . IEEE, 648–657.
  • Del Vicario et al. (2016b) Michela Del Vicario, Gianna Vivaldo, Alessandro Bessi, Fabiana Zollo, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2016b. Echo chambers: Emotional contagion and group polarization on facebook. Scientific reports 6 (2016), 37825.
  • Del Vicario et al. (2017b) Michela Del Vicario, Fabiana Zollo, Guido Caldarelli, Antonio Scala, and Walter Quattrociocchi. 2017b. Mapping social dynamics on Facebook: The Brexit debate. Social Networks 50 (2017), 6–16.
  • Ferrara et al. (2016) Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2016. The rise of social bots. Commun. ACM 59, 7 (2016), 96–104.
  • Gillespie (2015) Colin Gillespie. 2015. Fitting Heavy Tailed Distributions: The poweRlaw Package. Journal of Statistical Software, Articles 64, 2 (2015), 1–16.
  • Gitari et al. (2015) Njagi Dennis Gitari, Zhang Zuping, Hanyurwimfura Damien, and Jun Long. 2015. A lexicon-based approach for hate speech detection. International Journal of Multimedia and Ubiquitous Engineering 10, 4 (2015), 215–230.
  • Gleicher (2018) Nathaniel Gleicher. 2018. Taking Down More Coordinated Inauthentic Behavior: What We’ve Found So Far. Facebook Newsroom (2018).
  • Hagen et al. (2020) Loni Hagen, Stephen Neely, Thomas E Keller, Ryan Scharf, and Fatima Espinoza Vasquez. 2020. Rise of the Machines? Examining the Influence of Social Bots on a Political Discussion Network. Social Science Computer Review (2020), 0894439320908190.
  • Heickerö (2010) Roland Heickerö. 2010. Emerging cyber threats and Russian views on Information warfare and Information operations. Defence Analysis, Swedish Defence Research Agency (FOI) Stockholm.
  • Hu and Liu (2004) Minqing Hu and Bing Liu. 2004. Mining and Summarizing Customer Reviews. Proceedings of the ACM SIGKDD International Conference on Knowledge (2004).
  • Ingram (2015) Haroro J Ingram. 2015. The strategic logic of Islamic State information operations. Australian Journal of International Affairs 69, 6 (2015), 729–752.
  • J et al. (2011) Martinez-Romo J, Araujo L, Borge-Holthoefer J, Arenas A, Capitán Ja, and Cuesta Ja. 2011. Disentangling Categorical Relationships Through a Graph of Co-Occurrences. Physical review. E, Statistical, nonlinear, and soft matter physics (Oct. 2011).
  • Levens et al. (2020) Sara Levens, Omar ElTayeby, Tiffany Gallicano, Michael Brunswick, and Samira Shaikh. 2020. Using Information Processing Strategies to Predict Message Level Contagion in Social Media. In

    Advances in Artificial Intelligence, Software and Systems Engineering

    , Tareq Ahram (Ed.). Springer International Publishing, Cham, 3–13.
  • Lima et al. (2018) Lucas Lima, Julio CS Reis, Philipe Melo, Fabricio Murai, Leandro Araujo, Pantelis Vikatos, and Fabricio Benevenuto. 2018. Inside the right-leaning echo chambers: Characterizing gab, an unmoderated social system. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 515–522.
  • MacNulty and Ryan (2016) CAR MacNulty and JJCH Ryan. 2016. Using Values-Based Cultural Data to Shape Information Operations Strategies. Journal of Information Warfare 15, 3 (2016), 1–6.
  • Peto and Peto (1972) Richard Peto and Julian Peto. 1972. Asymptotically Efficient Rank Invariant Test Procedures. Journal of the Royal Statistical Society: Series A (General) 135, 2 (1972), 185–198. arXiv:
  • Schmidt et al. (2017) Ana Lucía Schmidt, Fabiana Zollo, Michela Del Vicario, Alessandro Bessi, Antonio Scala, Guido Caldarelli, H Eugene Stanley, and Walter Quattrociocchi. 2017. Anatomy of news consumption on Facebook. Proceedings of the National Academy of Sciences 114, 12 (2017), 3035–3039.
  • Shu et al. (2017) Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter 19, 1 (2017), 22–36.
  • Starbird (2019) Kate Starbird. 2019. Disinformation’s spread: bots, trolls and all of us. Nature 571, 7766 (2019), 449.
  • Theohary (2011) Catherine A Theohary. 2011. Terrorist use of the internet: Information operations in cyberspace. DIANE Publishing.
  • Vicario et al. (2019) Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, and Fabiana Zollo. 2019. Polarization and fake news: Early warning of potential misinformation targets. ACM Transactions on the Web (TWEB) 13, 2 (2019), 1–22.
  • Vosoughi et al. (2018) Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science 359, 6380 (2018), 1146–1151.
  • Waldron (2012) Jeremy Waldron. 2012. The harm in hate speech. Harvard University Press.
  • Waltz (1998) Edward Waltz. 1998. Information warfare: Principles and operations. Artech House Boston.
  • Wardle and Derakhshan (2017) Claire Wardle and Hossein Derakhshan. 2017. Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report 27 (2017).
  • Zannettou et al. (2018) Savvas Zannettou, Barry Bradlyn, Emiliano De Cristofaro, Haewoon Kwak, Michael Sirivianos, Gianluca Stringini, and Jeremy Blackburn. 2018. What is Gab. Companion of the The Web Conference 2018 on The Web Conference 2018 - WWW ’18 (2018).
  • Zollo et al. (2017) Fabiana Zollo, Alessandro Bessi, Michela Del Vicario, Antonio Scala, Guido Caldarelli, Louis Shekhtman, Shlomo Havlin, and Walter Quattrociocchi. 2017. Debunking in a world of tribes. PloS one 12, 7 (2017).