Soros, Child Sacrifices, and 5G: Understanding the Spread of Conspiracy Theories on Web Communities

11/03/2021
by   Pujan Paudel, et al.

This paper presents a multi-platform computational pipeline geared to identify social media posts discussing (known) conspiracy theories. We use 189 conspiracy claims collected by Snopes, and find 66k posts and 277k comments on Reddit, and 379k tweets discussing them. Then, we study how conspiracies are discussed on different Web communities and which ones are particularly influential in driving the discussion about them. Our analysis sheds light on how conspiracy theories are discussed and spread online, while highlighting multiple challenges in mitigating them.


1 Introduction

Conspiracy theories have been a constant presence throughout modern history, and the rise of online social networks has significantly broadened their reach. Users are increasingly exposed to them, from plots to control people through COVID vaccines [14] to claims of mass shootings being false flag incidents [35] or that child-eating cabals are controlling the United States [3]. These conspiracy theories can have serious adverse effects on society, from fostering polarization to hindering the adoption of public health measures.

As a result, automatically tracking the spread of conspiracy theories is key to understanding their reach and how their narratives evolve, and to devising effective mitigations. However, developing robust computational systems to track them on social media is challenging. Conspiratorial discussion changes over time, and different Web communities likely discuss the same theories in markedly different ways, so automated detection systems trained on one community may not generalize to another. To address this problem, previous work developed tailored techniques that either focus on a single conspiracy theory or a handful of them [32, 35], or work on a single online service [5, 9, 10, 36, 39]. However, an effective approach requires tracing any new conspiracy on multiple online services at once.

In this paper, we present a multi-platform computational pipeline to identify social media messages related to a set of known conspiracy claims. We start by looking at 189 conspiracy claims debunked by the Snopes fact-checking organization. To identify social media posts discussing a particular conspiracy theory, we develop an effective Learning-to-Rank technique that can approximate the performance of a human analyst when identifying conspiratorial discussion. We then collect relevant data from seven subreddits, including the “usual suspects” as well as non-conspiracy-oriented subreddits like /r/news, /r/Conservative, and /r/politics. We also gather data from Twitter.

We show that training our approach on Reddit data produces an accurate detector when tested on Twitter, demonstrating that our system can generalize to multiple online services. Our approach lets us understand how the same conspiracy theory is discussed on different social networks and how different communities influence each other regarding conspiratorial discussions.

Between 2016 and 2021, we identify 66,724 posts and 288,721 comments on Reddit, and 379,708 tweets on Twitter discussing the 189 conspiracy claims. We then conduct experiments to better understand how conspiracy theories are discussed on different communities, looking at how long they are discussed for, measuring toxicity, and comparing language using word embedding models. Finally, we use Hawkes Processes [12] to identify which Web communities are more influential in spreading conspiracy theories (i.e., conspiracy claims posted on them predict those posted elsewhere).

In summary, we make the following findings:

  • Conspiracy theories on both Twitter and Reddit are discussed for long periods; 80% of them for longer than a year.

  • Reddit comments discussing conspiracy theories show higher levels of toxicity than general discussion on the same communities and Twitter. On the other hand, submissions discussing conspiracy theories are the least toxic.

  • Conspiratorial discussion on /r/Conservative, /r/worldnews, /r/news, and /r/AskReddit is largely similar, while communities dedicated to conspiracy theories, as well as Twitter, use different language. Language in conspiracy-oriented subreddits digs into the details of the theories and invites further discussion, whereas the four communities above mostly stick to reporting and referring to the stories.

  • Different Web communities are influential in discussing different types of conspiracy theories. General-purpose subreddits are more influential than those dedicated to conspiracy theories, where users seem to be dissecting their details rather than aiming to make them go viral.

Overall, our work provides a solid foundation for computational studies that aim to study conspiracy theories from a large-scale, cross-platform perspective.

2 Related Work

In this section, we review previous research analyzing conspiracy theories on different social media platforms.

Conspiracy theories. Conti et al. [9] identified conspiracy theories on Facebook using the structural features of information cascades. However, their approach yields a relatively low F1 score (65%). Tangherlini et al. [36] developed an automated pipeline to discover conspiracy theories and the narrative frameworks surrounding them, presenting generative narrative frameworks on social media and focusing on two popular conspiracy theories: Pizzagate and Bridgegate.

Facebook-based measurements. Zollo et al. [40] collected posts from 280K Facebook users on conspiracy-theory and science-related pages, finding that discussions on the former are more negative than on the latter. In follow-up work, Zollo et al. [39] studied how 54M Facebook users interacted with science-related and conspiracy theory news, reporting distinct community structures with highly echo-chamber-like behavior between the two groups of users. These results were somewhat confirmed, from a different perspective, by Del Vicario et al. [10] through the lens of information cascade dynamics. Bessi et al. [5] studied the consumption patterns of news and conspiracy theories on Facebook, reporting that polarized users contribute more to the diffusion of conspiracy theories.

Event-based measurements. Another line of work studies conspiracy theories on Twitter around specific events. Starbird [35] performed a mixed-method analysis of the alternative media ecosystem and the spread of conspiracy theories about mass shooting events, showing how alternative media propagate and shape alternative narratives through a graph analysis of the URLs shared on Twitter. Samory et al. [32] studied four tragic events (the Boston Marathon bombing, the Sandy Hook shooting, the Aurora theater shooting, and the Malaysia Airlines Flight MH17 disaster) across 10 years of discussions on the primary conspiracy discussion subreddit, /r/conspiracy.

Generalized measurements. Samory et al. [33] developed a scalable method to examine the nature of conspiratorial discussions in online communities. They analyzed over ten years of discussions in /r/conspiracy by building agent-action-target triplets in conspiratorial statements, grouping them into clusters of conspiracies, and identifying themes of conspiracy discussions. Analyzing the linguistic aspects of conspiracy communities, Klein et al. [17] used topic modeling methods to reveal distinct interests of users within an online conspiracy forum.

Further, Klein et al. [16] studied the social and linguistic precursors of involvement in conspiracy discussions, employing a retrospective case-control study design. Their analysis reports consistent differences in language use between users who eventually join conspiracy communities and users who do not. Following a similar theme, Phadke et al. [27] analyzed the factors contributing to users joining conspiracy communities by studying longitudinal data from 56 conspiracy communities on Reddit, finding that mutual interactions with conspiracy communities and marginalization outside of them play the most critical role in Reddit users joining such communities. Finally, Phadke et al. [26] used a mixed-methods approach to characterize the social imaginaries in conspiracy communities and the various dimensions of their language.

Our study differs from previous work in two ways. First, we present a generalized end-to-end pipeline that starts from a list of conspiracy claims officially debunked by Snopes (and is easily extendable to any other source of claims) and proceeds to analyze the language of, and influence between, the communities discussing them. Second, our work studies the discussion of conspiracy theories both across different types of communities within a platform (conspiracy-oriented and non-conspiracy-oriented subreddits) and across social media platforms (Reddit and Twitter).

3 Methodology and Datasets

This section presents our dataset and the methodology used to identify and analyze the discussion of conspiracy theories across multiple Web communities. Our analysis pipeline consists of the following five components:

  1. Claim and social media data collection: Selecting conspiracy theory claims from Snopes and identifying Web communities to search for conspiratorial discussions.

  2. Keyword Identification: Extracting candidate keywords from the conspiracy claims and learning to rank them, so that keyword extraction can be automated over all conspiracy claims.

  3. Filtered Data Extraction: Using the extracted keywords to retrieve relevant posts and tweets for further analysis.

  4. Discussion of Conspiracies: Analyzing the language used to discuss conspiracy theories within the platforms using word embeddings.

  5. Influence Estimation: Assessing the influence Web communities have on each other with respect to spreading the conspiracy theories.

3.1 Claim collection and social media data

Conspiracy claims. We begin by extracting the conspiracy claims for the conspiracy theories listed by the fact-checking organization Snopes. Snopes does not present an operational definition of a conspiracy theory, nor does it distinguish conspiracy theories from the other claims it investigates. Following Zannettou et al. [38], we adopt the operational definition of conspiracy theories as "stories that try to explain a situation or an event by invoking a conspiracy without proof, relying on leaps of faith rather than an evidence-based approach to connect ambiguous actors and events." Conspiracy theories mostly concern the actions of governments or influential individuals, ranging from discussions of the "Illuminati" to false flag claims about mass crisis events.

Conspiracy theories on Snopes are posted under the Politics section; hence, we can expect a "bias" toward political conspiracies. Every entry on Snopes includes a claim title and a subtitle. Claim titles are mostly framed as questions, identifying the subject of the conspiracy, the actions that sparked the theory, and other actors who could be affected by it. Some examples of claim titles are: "Did AT&T have a contract to audit Dominion voting systems," and "No, China is not amassing troops in Canada to invade the US." From each article in the conspiracy section, we extract the claim title as the source text. We discard subtitles, as they often omit the primary actors of the conspiracy claim and mainly provide additional context. Overall, we collect a total of 189 conspiracy claims, spanning June 2016 to April 2021.

Reddit. Reddit is a social news website and forum made up of communities where content is curated and promoted by members through voting. We begin our data collection with the primary conspiracy discussion community on Reddit, /r/conspiracy. We then add the subreddits most similar to /r/conspiracy; to do so, we rely on the work of Phadke et al. [27], who identify ancillary communities related to /r/conspiracy by calibrating a conspiracy scale over Reddit communities. We select the top 10 subreddits from [27], which we collectively refer to as conspiracy-oriented communities in the rest of this paper. Naturally, conspiracy theories might be discussed on non-conspiracy-oriented subreddits as well. Hence, we also select six subreddits (/r/news, /r/worldnews, /r/democrats, /r/politics, /r/Conservative, and /r/The_Donald), which we define as non-conspiracy-oriented communities, to understand the nature and influence of conspiratorial discussion outside dedicated communities.

Overall, we use the monthly dumps available from Pushshift [4] to extract Reddit data from January 2016 to September 2021. Although the conspiracy claims gathered from Snopes only span June 2016–April 2021, we use a wider time window to capture conversation about the conspiracy theories that may occur before or after publication on Snopes.

Twitter. We collect tweets available through the 1% Public Streaming API between January 1, 2016, and August 31, 2021.

Ethical Considerations. We only use data published publicly on Web communities and do not interact with users in any way. As such, this research is not considered human subjects research by our institution's IRB. Nonetheless, we follow standard ethical guidelines; for example, we make no attempt to de-anonymize users.

3.2 Keyword Identification

As a first step, we need to identify Reddit posts and comments, as well as tweets, that discuss specific conspiracy claims. To this end, we apply the Learning to Rank (LTR) technique [8] to determine, given a document title (i.e., the Snopes claim) and a document store (i.e., social media posts), the optimal set of keywords that retrieves the best results from the document store.

To train the LTR model, we begin by building a ground truth dataset for a fraction of the conspiracy claims we collected, manually labeling relevant posts and tweets. We then develop features to train the LTR model and evaluate it on unseen claims to test the model’s effectiveness. We also compare our LTR model with existing keyword extraction approaches, showing that it outperforms them substantially. Finally, we use the LTR model to filter a dataset of posts that discuss the 189 conspiracy claims in our dataset, which we then use for further analysis. In the following, we describe these steps in detail.

Building ground truth annotations. We first randomly sample 50 conspiracy claims from our 189 Snopes claims to use as a training set for the ranking model; we refer to these as the initial train claims. We then manually annotate the ground truth keywords for each of these claims. For each claim, the ground truth is the set of keywords, made up of terms from the claim, that produces the most accurate results when queried against our Reddit dataset, optimizing for both relevance and result-set size. We only use post data to build the ground truth, reducing the human effort required to judge relevance: comments are typically longer than posts and are not self-contained (understanding a comment may require additional context from the surrounding discussion).

Our ground-truth selection process is iterative. We start by initializing base keywords: two words from each claim, consisting of the subject and the object of the conspiracy claim. We query the Reddit post data store with the base keywords and retrieve a set of results. Intuitively, the base keywords have a very broad context and return many irrelevant results (false positives). To evaluate the effectiveness of our keywords, we check 20 randomly sampled posts from the returned results and then adjust the base keywords, either adding new words from the claim or removing the words causing the false positives. We repeat this process until we find the most relevant set of keywords for the claim, and repeat the whole procedure for each of the initial train claims. At the end of the annotation process, we have ground truth keywords for each of the 50 conspiracy claims. This ground truth is then used to learn a ranking function that identifies the best set of keywords: the LTR algorithm selects the best keyword from a list of potential query terms, which are derived from the conspiracy claims and should not require any external context. The LTR model is optimized to rank the annotated ground truth as the best keyword.

Data pre-processing.

We remove stopwords from the conspiracy claims and apply basic pre-processing, e.g., lowercasing text and removing punctuation marks. We break the cleaned-up conspiracy claims into n-grams of length four, which form our set of potential query terms. We then query our Reddit data store, retrieving all posts that match each potential query term. The returned results are used to extract the features for our learning-to-rank model, as discussed next.
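The pre-processing step described above can be sketched as follows; the stopword list and helper names are illustrative, not the paper's actual implementation:

```python
import re

# Illustrative stopword list; the paper does not specify which list it uses.
STOPWORDS = {"a", "an", "the", "is", "are", "was", "were", "to", "of",
             "in", "on", "for", "did", "do", "does", "no", "not", "have", "has"}

def candidate_query_terms(claim, n=4):
    """Lowercase a claim, strip punctuation, drop stopwords, and emit
    the length-n n-grams used as potential query terms."""
    tokens = [t for t in re.findall(r"[a-z0-9]+", claim.lower())
              if t not in STOPWORDS]
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

terms = candidate_query_terms(
    "Did AT&T have a contract to audit Dominion voting systems")
# Each potential query term is a 4-gram of the cleaned claim,
# e.g., "audit dominion voting systems".
```

Each 4-gram is then issued as a query against the Reddit data store to gather the results the features are computed from.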

Feature Engineering. LTR requires a dataset consisting of a query set, relevance information, and feature values from which to learn the ranking. The most widely used benchmark for LTR models is the LETOR dataset [28], which contains query sets, learning features, and labeled rankings for the 25-million-page GOV2 Web collection, along with 46 features related to Web document retrieval. We cannot directly use the LETOR dataset, since our data store is composed of social media posts (from Reddit and Twitter), which are fundamentally different from Web pages. Instead, we take inspiration from LETOR and develop seven features for our LTR experiments. First, we use the term spanning subset to describe the oldest 20%, the newest 20%, and a 10% sample drawn from between the oldest and newest post results returned by a query, ordered by post timestamp.

From there, we derive seven features:

  1. Count of the total number of hits (posts and comments) produced by the query.

  2. The median pairwise similarity score between the entries of spanning subset.

  3. The median similarity score between the entries in spanning subset and the conspiracy claim.

  4. The mean pairwise similarity score between the entries of spanning subset.

  5. The mean similarity score between the entries in spanning subset and the conspiracy claim.

  6. The median of the TextRank scores [23] of the query terms.

  7. The median of the TF-IDF (Term Frequency-Inverse Document Frequency) scores [29] of the query terms.

The similarity scores between results, and between the query and the results, are computed using the Word Mover's Distance [18].
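A minimal sketch of the spanning subset construction and the four similarity features (2–5) follows; the similarity function is pluggable here, whereas the paper uses Word Mover's Distance over word embeddings, and the data-structure details (dicts with "timestamp" and "text" keys) are assumptions:

```python
import random
import statistics
from itertools import combinations

def spanning_subset(results, edge_frac=0.20, sample_frac=0.10, seed=0):
    """Oldest 20%, newest 20%, and a sample of 10% (of all results)
    drawn from the middle, ordered by post timestamp."""
    ordered = sorted(results, key=lambda post: post["timestamp"])
    k = max(1, int(edge_frac * len(ordered)))
    middle = ordered[k:-k]
    m = min(max(1, int(sample_frac * len(ordered))), len(middle))
    sampled = random.Random(seed).sample(middle, m) if middle else []
    return ordered[:k] + sampled + ordered[-k:]

def similarity_features(subset, claim, sim):
    """Features 2-5: median/mean pairwise similarity within the subset,
    and median/mean similarity between subset entries and the claim.
    `sim` is pluggable; the paper uses Word Mover's Distance."""
    pairwise = [sim(a["text"], b["text"]) for a, b in combinations(subset, 2)]
    to_claim = [sim(p["text"], claim) for p in subset]
    return {
        "median_pairwise": statistics.median(pairwise),
        "mean_pairwise": statistics.mean(pairwise),
        "median_to_claim": statistics.median(to_claim),
        "mean_to_claim": statistics.mean(to_claim),
    }
```

Sampling the temporal extremes plus a slice of the middle keeps the feature computation cheap while still reflecting how discussion of a claim looks across its whole lifespan.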

Training the LTR model. We use the RankLib project, part of the Lemur Toolkit [25], which includes a family of learning-to-rank algorithms. We use all eight algorithms implemented in RankLib (MART, RankNet, RankBoost, AdaRank, Coordinate Ascent, LambdaMART, ListNet, and Random Forests) in our experiments. To evaluate our models, we use Mean Average Precision (MAP), a rank-based metric commonly used in information retrieval; MAP is well suited to the binary nature of our rankings (a query keyword is either relevant to the conspiracy claim or not).
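Since MAP drives all the evaluations in this section, a compact reference implementation for binary relevance may help; the function names are ours:

```python
def average_precision(ranked_labels):
    """AP for one query: ranked_labels holds the relevance (1/0) of each
    candidate keyword set, in the order the model ranked them."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)  # precision at each relevant hit
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(queries):
    """MAP: the mean of per-query average precisions."""
    return sum(average_precision(q) for q in queries) / len(queries)
```

For example, a model that ranks the one correct keyword set first for half the claims and second for the other half scores a MAP of 0.75.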

Due to the inherent nature of the keyword extraction problem, our dataset contains many incorrect keyword combinations (majority class) for every one or two correct combinations (minority class) forming the optimal query. This imbalance caused the learning algorithms to perform very poorly, so we use undersampling, which removes majority-class examples from the training dataset to reduce the skew in the class distribution. We experiment with multiple undersampling techniques available in the Imbalanced-learn library [19] (Near Miss, Condensed Nearest Neighbor Rule, Tomek Links, and Edited Nearest Neighbors Rule), among which the NearMiss-3 algorithm produced the best results. After applying NearMiss-3 to select representative samples for the false query set, each conspiracy claim has one or two combinations of ground truth keywords and the same number of queries labeled False.
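As an illustration of the rebalancing step, here is a deliberately simplified random undersampler; it is a stand-in for, not an implementation of, Imbalanced-learn's NearMiss-3, which instead keeps the majority-class examples closest to the minority class:

```python
import random

def undersample_majority(samples, labels, seed=0):
    """Randomly keep only as many majority-class examples as there are
    minority-class examples. (NearMiss-3 selects majority examples by
    distance to the minority class; this sketch selects at random.)"""
    by_label = {}
    for s, y in zip(samples, labels):
        by_label.setdefault(y, []).append(s)
    minority_size = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    out = []
    for y, group in by_label.items():
        kept = group if len(group) == minority_size else rng.sample(group, minority_size)
        out.extend((s, y) for s in kept)
    return out
```

After rebalancing, each claim contributes roughly as many False-labeled queries as ground-truth ones, matching the balanced setup described above.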

To test our LTR model, we perform 5-fold cross-validation on our ground truth. Random Forests with LambdaMART [6] as the bagging ranker produces the best results. We further increase performance through a grid search over hyper-parameters (number of leaves, number of bags, number of trees, and minimum leaf support). With the tuned Random Forest model, 5-fold cross-validation on our ground truth achieves a MAP of 0.75. As we show later in this section, this convincingly outperforms other state-of-the-art keyword extraction approaches. We refer to this LTR model as the initial train model.

Validating the LTR model. The previous experiment showed that our LTR approach could train an accurate model on our training dataset. We now want to understand whether our model generalizes and can effectively identify posts related to claims that are not in the training set. To this end, we design and conduct two experiments.

In the first experiment, we select 40 additional Snopes claims not in the ground truth set, which we call the reddit validation claims. These claims were posted on Snopes after September 2020, while those in the ground-truth dataset were posted before September 2020. The keywords for these new claims can be inferred with the previously trained model, but evaluating them still requires ground truth labels; hence, we generate the ground truth for the reddit validation claims through the same iterative method used for the initial train claims. Similarly, we generate the potential query term sets for the reddit validation claims, query the Reddit data store, and compute the feature values for each claim's candidate keywords. Finally, we run inference on these new claims to see whether our trained model can identify the best set of keywords. Our model achieves a MAP of 0.782, indicating that our approach effectively identifies keywords and retrieves data for previously unseen claims. This experiment also demonstrates the efficacy of our model on claims with a short discussion window: the validation claims (post-September 2020) had comparatively less time to be discussed on social media than the claims the model was trained on (pre-September 2020), which suggests the model can help detect emerging conspiracy theory discussions early in their formation. Finally, since we now have ground truth for the 40 reddit validation claims, we expand our training set to a total of 90 claims, which we call the expanded claims, and retrain a new LTR model on them, which we refer to as the expanded model.

In the second experiment, we aim to verify that the learned model is not biased towards the social media platform it was trained on (i.e., Reddit) and can be applied to cross-platform detection of conspiracy theories. To this end, we draw 40 new claims that were not part of training the expanded model and call them the twitter validation claims. Following the same steps as for the initial train claims and reddit validation claims, we generate the potential query term sets for the twitter validation claims, query the Twitter dataset, and compute the feature values for each candidate keyword. Note that, while we previously built features from results queried on Reddit, for this experiment we build them from results queried on Twitter. We use the previously trained expanded model for inference on the twitter validation claims. This experiment achieves a MAP of 0.794, showing that our LTR model is portable to other platforms.

Figure 1: Validating the performance of various keyword identification methods.

Comparison with other keyword extraction methods. While we showed that our LTR model effectively identifies keywords that retrieve accurate posts for a conspiracy claim, other keyword extraction algorithms have been proposed for similar tasks. We therefore compare our approach with two state-of-the-art keyword extraction algorithms, YAKE [7] and KeyBERT [11].

We compare the keyword extraction methods on the expanded claims, which consist of 90 conspiracy claims. Figure 1 reports the percentage of valid results returned by the keywords produced by each algorithm, including our LTR model. YAKE and KeyBERT perform well only for a small number of claims, and the quality of their results drops dramatically as more claims are analyzed. Our approach, on the other hand, achieves performance similar to the manual process we followed to build the ground truth, demonstrating that our technique is well suited to identifying conspiracy theory posts in the wild.

Extracting conspiracy posts from our dataset. After training our LTR model and validating its performance against the other methods, we run it on the remaining 49 conspiracy claims to obtain the best set of keywords for the unannotated claims. After applying the keywords, we obtain 66,724 total posts (33,285 from conspiracy-oriented subreddits and 33,439 from non-conspiracy-oriented subreddits) and 288,721 total comments (38,186 from conspiracy-oriented subreddits and 250,535 from non-conspiracy-oriented subreddits) from Reddit, and 379,708 tweets from Twitter.

Looking at the discussion on Twitter, the average number of tweets per conspiracy theory is 994, the median is 124, and the standard deviation is 2,968. The conspiracy-oriented subreddits have an average of 149 posts per conspiracy theory, with a median of 60 and a standard deviation of 220, and an average of 149.25 comments per conspiracy discussion, with a median of 34 and a standard deviation of 290. In contrast, the non-conspiracy-oriented subreddits have an average of 830 comments per conspiracy claim, with a median of 123 and a standard deviation of 1,945.

3.3 Discussion of Conspiracies

Language of conspiracy discussion between communities. After extracting the dataset of relevant conspiracy posts, we aim to study how conspiracy theories are discussed across the different Web communities. To this end, we study the similarity of the language used by different communities to discuss the same conspiracy theories. We train a separate Word2Vec model [24] for each conspiracy claim and each Web community, using the skip-gram architecture, a shallow neural network that learns to predict the context of a given word. To train the embeddings, we use the full corpus (both posts and comments) about a conspiracy claim for each Reddit community, and all tweets for Twitter. That is, each conspiracy claim has eight different trained word2vec models.

We extract the vector embeddings of the identified query keywords and compare their cosine similarity. Since each model is trained independently, we align pairs of embedding spaces using the Procrustes matrix alignment method [31], which allows us to compare the vector of the same word across two different models. Since multiple keywords are associated with each conspiracy claim, we average the cosine similarities across all keywords identified by our LTR model. The results of this experiment are discussed in Section 5.
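The alignment step can be sketched with orthogonal Procrustes, one common formulation of Procrustes matrix alignment (the exact variant used in [31] may differ); the toy data below is ours:

```python
import numpy as np

def procrustes_align(source, target):
    """Orthogonal Procrustes: find the orthogonal matrix R minimizing
    ||source @ R - target||_F, making two independently trained
    embedding spaces comparable."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy check: `target` is a rotated copy of `source`, so after alignment
# the embedding of the same "word" (row 0) should match almost exactly.
rng = np.random.default_rng(0)
source = rng.normal(size=(50, 8))   # 50 "words", 8-dim embeddings
theta = 0.5
rotation = np.eye(8)                # rotate the first two dimensions
rotation[0, 0] = rotation[1, 1] = np.cos(theta)
rotation[0, 1], rotation[1, 0] = -np.sin(theta), np.sin(theta)
target = source @ rotation
aligned = source @ procrustes_align(source, target)
```

After alignment, cosine similarities between the two models' vectors for the same keyword become meaningful, which is what the per-claim comparison above relies on.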

3.4 Influence Estimation between Communities

Next, we study the temporal nature of conspiracy discussions on the Web communities and, in particular, the influence these communities have on each other in this context. First, we create a time series capturing the cascades of each conspiracy claim per Web community. Then, we model the influence between Web communities using a statistical framework known as Hawkes Processes [12], which quantifies how much each Web community contributes to the discussion of conspiracy theories on the others.

In this paper, we follow three steps to calculate influence between communities using Hawkes Processes:

  • For each claim, we extract the posts, comments, and tweets discussing it and build a time series for each community. We consider each community as a separate process.

  • We fit a Hawkes model for each conspiracy claim following the approach of [20, 21], which uses Gibbs sampling to infer the model parameters from the data. This approach automatically samples the background rates and the shape of the impulse responses between the processes.

  • After calculating the influence that each community has on the others for each claim, we aggregate it to study the normalized influence of each community. We focus on normalized influence because the total number of conspiracy discussions differs across communities; normalizing gives us an approximation of the "efficiency" with which a community influences the others, letting us understand which communities are particularly influential in spreading different types of conspiracy theories.
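Fitting Hawkes models is beyond a short sketch, but the final normalization step can be illustrated as follows; the influence matrix, the event counts, and the choice of normalizing by the number of events produced on the source community are all assumptions for illustration:

```python
import numpy as np

# Hypothetical raw influence matrix: entry [src, dst] is the expected number
# of events on community `dst` attributed to events on community `src`,
# aggregated over all claims (values are made up for illustration).
communities = ["conspiracy_subs", "news_subs", "political_subs", "Twitter"]
raw = np.array([
    [0.0, 12.0,  6.0, 20.0],
    [9.0,  0.0, 14.0, 25.0],
    [4.0, 10.0,  0.0, 11.0],
    [8.0, 18.0,  9.0,  0.0],
])
# Number of events produced by each source community (also illustrative).
events = np.array([400.0, 900.0, 500.0, 1500.0])

# Normalized influence: external influence per event created on the source,
# so busy communities do not dominate simply by posting more.
normalized = raw / events[:, None]
```

In this toy matrix, the Twitter row exerts more raw influence on news subreddits than the conspiracy subreddits row does (18 vs. 12 events), yet its normalized influence is lower (0.012 vs. 0.03), which is exactly the kind of distinction the normalization is meant to surface.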

4 General Characterization

Figure 2: Duration of discussion of conspiracy on platforms.
Figure 3: Distribution of Severe Toxicity score for conspiracy discussions.
Figure 4: Discussion similarity between Web communities.

In this section, we first analyze how long conspiracy theories are discussed on social media. We then look at the toxicity of language used to discuss them, comparing it to the general discussion on the same Web communities.

Lifespan. We define the lifespan of a conspiracy theory as the duration between the first and the latest appearance of a discussion of it in our Reddit and Twitter datasets. In Figure 2, we plot the Complementary Cumulative Distribution Function (CCDF) of the lifespan of conspiracy theories (in months) on both Reddit and Twitter. Conspiracy theories have a similar lifespan on both platforms. The vast majority are discussed for longer than a year (84.78% on Reddit and 82.85% on Twitter), with 29.79% on Reddit and 25.65% on Twitter being discussed for more than five years.
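For concreteness, the lifespan and its CCDF can be computed as in the following sketch (a 30-day month is an illustrative approximation; the names are ours):

```python
import numpy as np

SECONDS_PER_MONTH = 30 * 24 * 3600  # approximate month length

def lifespan_months(timestamps):
    # Lifespan of a claim: time (in months) between its first
    # and latest appearance, given Unix timestamps of its posts.
    return (max(timestamps) - min(timestamps)) / SECONDS_PER_MONTH

def ccdf(values):
    # Complementary CDF: fraction of claims whose lifespan exceeds each x.
    xs = np.sort(np.asarray(values, dtype=float))
    ys = 1.0 - np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys
```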

Toxicity of posts and comments. We use Google’s Perspective API [1] to measure the toxicity of the discussion surrounding the conspiracy theories. We use the Severe Toxicity model to characterize this aspect since previous work found it more reliable and less prone to “false positives” when measuring toxic speech [3, 30, 13].
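As an illustration, a Severe Toxicity score can be requested from Perspective's `comments:analyze` endpoint roughly as follows. This is a hedged sketch: the request and response shapes follow the public API documentation, and the key handling is illustrative.

```python
import json
import urllib.request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key={key}")

def build_request(text):
    # Request body asking Perspective for the SEVERE_TOXICITY attribute only.
    return {
        "comment": {"text": text},
        "requestedAttributes": {"SEVERE_TOXICITY": {}},
        "doNotStore": True,
    }

def severe_toxicity(text, api_key):
    # POST the comment and extract the summary score from the response.
    req = urllib.request.Request(
        API_URL.format(key=api_key),
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["SEVERE_TOXICITY"]["summaryScore"]["value"]
```

In practice, large datasets require batching these calls and respecting the API's rate limits.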

For Reddit, we separate the posts and the comments discussing conspiracy theories. We also retrieve the comments replying to Reddit submissions of conspiracy theories to understand the toxicity of the interactions in these discussions. We measure the Perspective score for these comments separately and refer to them as Conspiracy Submission Comments. (These comments may or may not include the keywords related to the conspiracy claim from the submission.)

For Twitter, we look at the Severe Toxicity score of the tweets discussing conspiracy theories, detected using the method described in Section 3.2. To compare the toxicity measures against a baseline, we sample the same number of random posts and comments from the non-conspiracy-oriented subreddits discussed in Section 3.1.

Figure 3 reports the cumulative distribution function (CDF) of the Severe Toxicity scores for our datasets. We observe that the comments about conspiracy theories are the most toxic, and by a large margin. Interestingly, while comments discussing conspiracy theories are more toxic than the general baseline, the same is not true for posts or tweets. The baseline posts and comments are very close to each other in their toxicity distributions; in contrast, there is a substantial gap between the toxicity scores of conspiracy comments and posts. The discussion on Twitter is also very close to that on the non-conspiracy-oriented subreddits, but happens to be more toxic than the conspiracy posts themselves. Finally, the Conspiracy Submission Comments are more toxic than the posts under which the discussion happens, but far less toxic than the comments discussing conspiracies.

We assess the differences between these distributions by running two-sample Kolmogorov-Smirnov (KS) tests [22]. We first compare the toxicity scores of conspiracy posts and comments to the baseline sets of posts and comments: both differences are statistically significant (conspiracy vs. baseline posts, D=0.207; conspiracy vs. baseline comments, D=0.137). The distributions of conspiracy posts and conspiracy comments also differ significantly (D=0.337), as do those of conspiracy comments and Conspiracy Submission Comments (D=0.151). Finally, the differences between conspiracy posts and tweets (D=0.221) and between conspiracy comments and tweets (D=0.160) are statistically significant as well.
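Such comparisons can be reproduced with SciPy's two-sample KS test; a small sketch (the significance level shown is illustrative):

```python
from scipy.stats import ks_2samp

def ks_compare(scores_a, scores_b, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test. D is the maximum distance
    between the two empirical CDFs; the difference is deemed
    statistically significant when p < alpha."""
    result = ks_2samp(scores_a, scores_b)
    return result.statistic, result.pvalue, result.pvalue < alpha
```

Note that with samples of this size (tens of thousands of scores), even small D values are typically significant, so the D statistic itself is the more informative quantity.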

Takeaways. In summary, we find that conspiracy theories on both Twitter and Reddit are discussed for long periods, with over 80% of conspiracy theories on both platforms being discussed for over a year. We find that Reddit comments discussing conspiracy theories are more toxic than general discussions. While Reddit posts and tweets show closer toxicity to general posts, they still present statistically significant differences compared to the baseline.

5 Language Analysis

As mentioned in Section 3.3, we want to understand whether different communities discuss conspiracy theories differently. For each claim, we take the word embedding model calculated for each community and compute the pairwise cosine similarity between all community pairs. We then average these values across all claims and build a heatmap of the average similarity of conspiracy discussion in Web communities in Figure 4.
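One way to realize this comparison, sketched below under the assumption that the per-community embedding spaces have already been aligned (e.g., via Procrustes [31]), is to average the cosine similarity over the shared vocabulary; the helper name is ours:

```python
import numpy as np

def community_similarity(emb_a, emb_b):
    """Average cosine similarity between two communities' word embeddings,
    computed over their shared vocabulary. Assumes the two embedding
    spaces have already been aligned into a common coordinate system."""
    shared = set(emb_a) & set(emb_b)
    sims = []
    for w in shared:
        va, vb = emb_a[w], emb_b[w]
        sims.append(np.dot(va, vb) /
                    (np.linalg.norm(va) * np.linalg.norm(vb)))
    return float(np.mean(sims))
```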

We observe that, on average, conspiracy-oriented subreddits are the least similar to the other communities with respect to language. Discussion on Twitter stays close to the discussions across the other non-conspiracy-oriented subreddits. Four subreddits (/r/AskReddit, /r/news, /r/worldnews, and /r/Conservative) stand out as the most similar to each other in average cosine similarity, with /r/The_Donald and /r/politics being the next most similar pair. We also note that /r/AskReddit, which advertises itself as a “place to ask and answer thought-provoking questions,” is much closer to /r/news and /r/worldnews than to the conspiracy-oriented communities. Upon further investigation, some representative examples of conspiracy discussion occurring on /r/AskReddit are:

  • “can someone explain the supposed murder of vince foster by hillary clinton?”

  • “trump’s birther place ? donald trump was born in the country of pakistan and not in the united states is it true ? how.”

  • “is the child rape lawsuit against trump real or fake news?”

This implies that the discussion on /r/AskReddit is relatively shallow, mostly bringing up the news snippets from those news-based communities and asking if the claim is true or false. This is in stark contrast to the more investigative and exploratory discussion of claims that happens in conspiracy-oriented subreddits.

Takeaways. On Reddit, the discussions on /r/news, /r/worldnews, and /r/Conservative are the closest to each other, while the discussion in conspiracy-focused subreddits is the most distinct. We revisit this in the selected case studies presented in the next section.

6 Influence Estimation

We start by looking at the normalized influence of each community for all the claims in our dataset; see Figure 5. (The total external influence is the sum of the normalized influence from a single source community to the rest of the platforms, and can amount to more than 100%.) Note that /r/Conservative is by far the community with the most influence on others, followed by /r/worldnews, /r/news, and /r/AskReddit. In particular, /r/Conservative has the biggest influence on Twitter, indicating that the members of this subreddit actively spread conspiracy theories on that platform. Conspiracy-oriented subreddits have a lower external influence, perhaps indicating that their users are mainly focused on dissecting conspiracy theories rather than spreading them to other communities. Interestingly, Twitter has a much lower external influence than the subreddits under study, which could be attributed to the much larger size of the platform compared to the other communities.

Figure 5: Normalized Influence estimation: All conspiracies.

Next, we are interested in investigating whether different communities are influential in spreading different types of conspiracy theories. Based on our dataset, we select conspiracy claims that belong to three topics: Hillary Clinton (42 claims), Donald Trump (22 claims), and Covid-19 (44 claims). Note that we do not include the /r/The_Donald community when computing the normalized influence for Covid-19 related claims, because the subreddit was banned on June 29, 2020, which makes studying Covid-19 related conspiracies on the community impossible.

Figure 6: Normalized Influence estimation: Clinton related conspiracies.
Figure 7: Normalized Influence estimation: Trump related conspiracies.
Figure 8: Normalized Influence estimation: Covid-19 related conspiracies.

Figure 6 shows the normalized influence of each community regarding conspiracies related to Hillary Clinton. /r/news has the most external influence for this type of conspiracy. This is interesting because this subreddit is not dedicated to conspiracy discussion but rather to discussing news in general; yet, it is highly influential in spreading conspiracy theories externally to other platforms. The second most influential Web community is /r/Conservative, which is perhaps expected since supporters of a political party have incentives to discredit the candidate of the other party.

Figure 7 reports the normalized influence of each Web community regarding Donald Trump-related conspiracies. Here we observe that the trend from the Clinton conspiracies does not hold: /r/Conservative is by far the most influential community in spreading Donald Trump-related conspiracies, with a particularly outsized influence on Twitter (209.9%). /r/Democrats, which could have an interest in discrediting Trump as a political opponent, actually has the second-lowest external influence after Twitter. We observe that /r/The_Donald is more influential in spreading Clinton-related conspiracies (44.81%) and much less influential for Trump-related ones (18.85%). This might indicate that the community was motivated to actively smear and spread false rumors about Donald Trump's detractors and political opponents during the presidential election, while generally avoiding conspiracies that paint him in a bad light.

Finally, Figure 8 illustrates the normalized influence of different Web communities with regards to Covid-19 related conspiracy theories; /r/worldnews is the most influential community, followed by /r/politics. Both communities have an outsized influence on Twitter (165.68% and 76.90%, respectively). This indicates that communities dedicated to general news discussion are fertile breeding grounds for the spread of conspiracies.

Case Studies: Qualitative Discourse Analysis. While we find that /r/Conservative, /r/worldnews, and /r/news are the most influential communities in spreading conspiracy theories, the actual discourse in these communities may differ from that in the rest of the communities we study, as the results in Section 5 suggest. To confirm this, we perform a qualitative analysis of two conspiracy theories for which /r/politics, /r/news, and /r/worldnews are the most influential. The first conspiracy theory states that Rep. Marjorie Taylor Greene said that “Jewish Lasers” caused California wildfires, while the second states that Cesar Sayoc (a person who sent pipe bombs to Democratic officials in 2018) was a lifelong Democrat who only recently covered his van in Trump stickers. Our qualitative analysis finds that /r/news, /r/politics, and /r/worldnews merely report on the conspiracy theories without delving into the details. On Twitter, users are mostly critical of the conspiracies and of the actors spreading them. Conversely, more politically polarized communities (e.g., /r/The_Donald) primarily discuss the conspiracy to defend the actors involved in it. Finally, the conspiracy-oriented subreddits discuss the conspiracies in greater detail, investigating the claims from a neutral, investigative angle. Examples of comments from the various communities we studied can be found in Appendix A.

Takeaways. Our first finding is that non-conspiracy-oriented subreddits like /r/Conservative, /r/news, and /r/worldnews emerge as the most influential communities in spreading conspiracy theories, while conspiracy-dedicated communities are not as influential. Taken together with the findings from Section 5, these results further reinforce that conspiracy-oriented subreddits mainly act as echo chambers when discussing conspiracy theories.

We also find that different communities are influential in spreading conspiracy theories on different topics. While /r/news is the most influential subreddit for conspiracies related to Hillary Clinton, /r/Conservative is the most influential for conspiracy theories related to Donald Trump. Interestingly, /r/Democrats is the least influential subreddit for these conspiracies. When looking at Covid-19 related conspiracies, /r/worldnews and /r/politics are the two most influential subreddits in spreading them. This is quite alarming, as it highlights that mainstream news communities play a major role in spreading health-related conspiracy theories.

Finally, a qualitative analysis of selected case studies shows that the detailed breakdown of the conspiracy theories happens away from the most influential communities, which are primarily limited to reporting related events.

7 Discussion and Conclusion

This paper presented a computational pipeline based on Learning-to-Rank to identify online posts discussing a set of known conspiracy theories. We used it to collect Twitter and Reddit discussions of 189 known conspiracy theories identified by Snopes, finding hundreds of thousands of Reddit submissions, comments, and tweets discussing them. We make two main findings on how these conspiracies are discussed and spread on Web communities.

First, we find that there are quantitative differences in how conspiracy-oriented and more mainstream communities discuss conspiracy theories. In short, conspiracy-oriented communities are clearly involved in investigative and exploratory discussion, i.e., they actively engage in developing the theories, while non-conspiracy-oriented communities primarily discuss conspiracy theories at a much higher level. Second, we find that conspiracy-oriented communities themselves have relatively little influence on the overall spread of conspiracy theories. Instead, the spread is mainly driven by more mainstream communities (e.g., /r/worldnews in the case of COVID-19 conspiracies).

In conjunction, these two findings have profound implications. Our findings contribute to the growing body of evidence that simple solutions like banning/deplatforming worrying communities may not be as effective as they would seem at first glance [2, 15, 13]. For conspiracy theories, in particular, our results show that most of them are not driven by the communities with an evident devotion to them. While conspiracy-oriented communities go deeper into the claims, they are relatively isolated with little influence outside their sphere. This implies that the net effect of removing them would be minimal, at least in terms of quashing the prevalence of their discussion elsewhere.

This leads to the obvious question: what is a good solution for this class of problems? While we provide a data-driven method to detect the discussion of conspiracy theories, alas, this does not say much about what to do about them. That said, we do have promising suggestions for future work. First, if hard moderation techniques are employed, our findings suggest that they should also target communities that use conspiratorial claims to push an agenda. Although these are not actively involved in the creation of conspiracy theories, a good case can be made that they are weaponizing them. Moreover, early research results indicate that soft moderation techniques might help mitigate the spread of disinformation [34, 37], and our method could be used to automate the use of these techniques. For instance, it could be integrated into Reddit's automod bot to post a warning under conspiracy theory-related discussions.

Acknowledgments. This work was supported by the National Science Foundation under grants CNS-1942610, IIS-2046590, CNS-2114407, and CNS-2114411, a grant from the Media Analysis Ecosystems Group, and the UK’s National Research Centre on Privacy, Harm Reduction, and Adversarial Influence Online (REPHRAIN, UKRI grant: EP/V011189/1).

References

  • [1] Perspective api. https://www.perspectiveapi.com/.
  • [2] S. Ali, M. H. Saeed, E. Aldreabi, J. Blackburn, E. De Cristofaro, S. Zannettou, and G. Stringhini. Understanding the effect of deplatforming on social networks. In ACM Web Science Conference, 2021.
  • [3] M. Aliapoulios, A. Papasavva, C. Ballard, E. De Cristofaro, G. Stringhini, S. Zannettou, and J. Blackburn. The gospel according to q: Understanding the qanon conspiracy from the perspective of canonical information. arXiv preprint arXiv:2101.08750, 2021.
  • [4] J. Baumgartner, S. Zannettou, B. Keegan, M. Squire, and J. Blackburn. The pushshift reddit dataset. In Proceedings of the international AAAI conference on web and social media, volume 14, pages 830–839, 2020.
  • [5] A. Bessi, M. Coletto, G. A. Davidescu, A. Scala, G. Caldarelli, and W. Quattrociocchi. Science vs conspiracy: Collective narratives in the age of misinformation. PloS one, 10(2):e0118093, 2015.
  • [6] C. J. Burges. From ranknet to lambdarank to lambdamart: An overview. Learning, 11(23-581):81, 2010.
  • [7] R. Campos, V. Mangaravite, A. Pasquali, A. M. Jorge, C. Nunes, and A. Jatowt. Yake! collection-independent automatic keyword extractor. In European Conference on Information Retrieval, pages 806–810. Springer, 2018.
  • [8] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning, pages 129–136, 2007.
  • [9] M. Conti, D. Lain, R. Lazzeretti, G. Lovisotto, and W. Quattrociocchi. It’s always april fools’ day!: On the difficulty of social network misinformation classification via propagation features. In 2017 IEEE Workshop on Information Forensics and Security (WIFS), pages 1–6. IEEE, 2017.
  • [10] M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H. E. Stanley, and W. Quattrociocchi. The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3):554–559, 2016.
  • [11] M. Grootendorst. Keybert: Minimal keyword extraction with bert, 2020.
  • [12] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
  • [13] M. Horta Ribeiro, S. Jhaver, S. Zannettou, J. Blackburn, G. Stringhini, E. De Cristofaro, and R. West. Do platform migrations compromise content moderation? evidence from r/the_donald and r/incels. Proceedings of the ACM on Human-Computer Interaction, (CSCW2):1–24, Oct 2021.
  • [14] A. M. Jamison, D. A. Broniatowski, M. Dredze, A. Sangraula, M. C. Smith, and S. C. Quinn. Not just conspiracy theories: Vaccine opponents and proponents add to the covid-19 ‘infodemic’on twitter. Harvard Kennedy School Misinformation Review, 2020.
  • [15] S. Jhaver, C. Boylston, D. Yang, and A. Bruckman. Evaluating the effectiveness of deplatforming as a moderation strategy on twitter. Proc. ACM Hum.-Comput. Interact, 2021.
  • [16] C. Klein, P. Clutton, and A. G. Dunn. Pathways to conspiracy: The social and linguistic precursors of involvement in reddit’s conspiracy theory forum. PloS one, 14(11):e0225098, 2019.
  • [17] C. Klein, P. Clutton, and V. Polito. Topic modeling reveals distinct interests within an online conspiracy forum. Frontiers in psychology, 9:189, 2018.
  • [18] M. Kusner, Y. Sun, N. Kolkin, and K. Weinberger. From word embeddings to document distances. In International conference on machine learning, pages 957–966. PMLR, 2015.
  • [19] G. Lemaître, F. Nogueira, and C. K. Aridas. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. The Journal of Machine Learning Research, 18(1):559–563, 2017.
  • [20] S. Linderman and R. Adams. Discovering latent network structure in point process data. In International Conference on Machine Learning, pages 1413–1421. PMLR, 2014.
  • [21] S. W. Linderman and R. P. Adams. Scalable bayesian inference for excitatory point process networks. arXiv preprint arXiv:1507.03228, 2015.
  • [22] B. W. Lindgren. Statistical theory. 1993.
  • [23] R. Mihalcea and P. Tarau. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411, 2004.
  • [24] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
  • [25] P. Ogilvie and J. Callan. Experiments using the lemur toolkit. In TREC, volume 1, pages 103–108, 2001.
  • [26] S. Phadke, M. Samory, and T. Mitra. Characterizing social imaginaries and self-disclosures of dissonance in online conspiracy discussion communities. Proceedings of the ACM on Human-Computer Interaction, 2021.
  • [27] S. Phadke, M. Samory, and T. Mitra. What makes people join conspiracy communities? role of social factors in conspiracy engagement. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3):1–30, 2021.
  • [28] T. Qin, T.-Y. Liu, J. Xu, and H. Li. Letor: A benchmark collection for research on learning to rank for information retrieval. Information Retrieval, 13(4):346–374, 2010.
  • [29] J. Ramos et al. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, volume 242, pages 29–48. Citeseer, 2003.

  • [30] M. H. Ribeiro, J. Blackburn, B. Bradlyn, E. De Cristofaro, G. Stringhini, S. Long, S. Greenberg, and S. Zannettou. The evolution of the manosphere across the web. In Proceedings of the International AAAI Conference on Web and Social Media, volume 15, pages 196–207, 2021.
  • [31] A. Ross. Procrustes analysis. Course report, Department of Computer Science and Engineering, University of South Carolina, 26, 2004.
  • [32] M. Samory and T. Mitra. Conspiracies online: User discussions in a conspiracy community following dramatic events. In Proceedings of the International AAAI Conference on Web and Social Media, volume 12, 2018.
  • [33] M. Samory and T. Mitra. ’the government spies using our webcams’ the language of conspiracy theories in online discussions. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1–24, 2018.
  • [34] F. Sharevski, R. Alsaadi, P. Jachim, and E. Pieroni. Misinformation warning labels: Twitter’s soft moderation effects on covid-19 vaccine belief echoes. arXiv preprint arXiv:2104.00779, 2021.
  • [35] K. Starbird. Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on twitter. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, 2017.
  • [36] T. R. Tangherlini, S. Shahsavari, B. Shahbazi, E. Ebrahimzadeh, and V. Roychowdhury. An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, pizzagate and storytelling on the web. PloS one, 15(6):e0233879, 2020.
  • [37] S. Zannettou. “i won the election!”: An empirical analysis of soft moderation interventions on twitter. In AAAI International Conference on Web and Social Media (ICWSM), 2021.
  • [38] S. Zannettou, M. Sirivianos, J. Blackburn, and N. Kourtellis. The web of false information: Rumors, fake news, hoaxes, clickbait, and various other shenanigans. Journal of Data and Information Quality (JDIQ), 11(3):1–37, 2019.
  • [39] F. Zollo, A. Bessi, M. Del Vicario, A. Scala, G. Caldarelli, L. Shekhtman, S. Havlin, and W. Quattrociocchi. Debunking in a world of tribes. PloS one, 12(7):e0181821, 2017.
  • [40] F. Zollo, P. K. Novak, M. Del Vicario, A. Bessi, I. Mozetič, A. Scala, G. Caldarelli, and W. Quattrociocchi. Emotional dynamics in the age of misinformation. PloS one, 10(9):e0138740, 2015.

Appendix A Case Studies

A.1 Case Study 1: Did Rep. Marjorie Taylor Greene Say ‘Jewish Lasers’ Caused California Wildfires?

Most influential communities: /r/politics, /r/news.

A.1.1 /r/politics, /r/news

We first study how the subreddits most influential in the discussion of this conspiracy theory, /r/politics and /r/news, discuss it. We find that the discussion in these subreddits mostly brings up the incident without going into its details or discussing it further. A few posts sampled from these communities follow:

  • marjorie taylor greene penned conspiracy theory that a laser beam from space started deadly 2018 california wildfire

  • marjorie taylor greene spread false theory that jews started camp fire with space laser

A.1.2 Conspiracy-oriented Communities

In the conspiracy-oriented communities, by contrast, we observe that the discussions try to examine the conspiracy in depth and from both sides, with a neutral, investigative discourse. The discussion in these communities both points out the absurdity of the claim and calls for examining it further with more introspection, however bizarre it might be. A few posts sampled from these communities follow:

  • strange: it works for me you can always just google it the gist of it is: “jewish" space laser refers to a conspiracy theory about an in orbit space laser possibly created by rothschild inc that is allegedly responsible for the california wildfires the theory was notably purported by congresswoman marjorie taylor greene in a 2018 facebook post

  • What’s weird is i tried to look for the origin of her claims she‘s jumping to conclusions and making pretty wild connections but [never explicitly says anything about jewish space lasers‚ (https://www newsweek com/marjorie taylor greene jewish space laser mockery 1565325) in her 2018 fb post. I’m not trying to defend her: she’s nucking futs: but unless i’m missing something i don’t see where the jewish space laser meme came from

  • fun fact: not once did margorie taylor greene say anything about "jewish space lasers" in her comment: she was wondering if people who claimed to see lasers starting the fire were actually seeing a mistargeted beam from a satellite made by a company called solaren …

A.1.3 Twitter

The discussion of the conspiracy on Twitter plays out differently than on the other platforms. The discourse is critical of the person who spread the conspiracy and tries to hold them accountable when the actors of the conspiracy claim (Marjorie Taylor Greene in this instance) are present on the platform. Users also invoke the conspiracy theory in other remotely related incidents and, since the platform allows them to do so, often confront the actor directly about the conspiracy they spread. A few posts sampled from the platform follow:

  • I don’t disagree. But taking advice from someone who didn’t know about the holocaust and thinks Jewish space lasers start wildfires probably isn’t great parenting.

  • Hey: remember that time you said a Jewish space laser started the CA wildfires?

  • Perhaps the woman who thinks wildfires are set by Jewish space lasers shouldn’t be lecturing an epidemiologist on what measures are effective against the spread of an epidemic. Sit down: stay in your lane.

A.2 Case Study 2: Was Cesar Sayoc a Lifelong Democrat Who “Recently” Covered His Van in Trump Stickers?

Most influential community: /r/worldnews.

A.2.1 /r/worldnews

The discourse in this community is heavily focused on reporting the news of the incident and the emerging details of the case, with the new details adding up to a common theme of blaming Trump for the incident. A few posts sampled from the community follow:

  • suspected maga bomber id’d as ’native american trump supporter’ cesar sayoc.

  • pro trump mail bomb suspect cesar sayoc held without bail after new york court hearing

  • pipe bomb suspect cesar sayoc describes trump rallies as ’new found drug’

  • maga bomber cesar sayoc was radicalized by trump and fox news before terror plot: lawyer says

A.2.2 /r/The_Donald

Analyzing how the discussion plays out in this community is particularly interesting, as this subreddit is composed primarily of Trump supporters. We find that, as expected, the discussions heavily push the idea that Trump should not be implicated in the incident, as he is not responsible for violence someone else started. In striking contrast to the other communities blaming Trump, the discussion in this community tries to defend him and criticizes how the mainstream media falsely involved Trump in the story. A few posts sampled from the community follow:

  • just like james t hodgkinson: the anti trump bernie fan who tried to massacre republican congressmen: apparent trump fan cesar sayoc is clearly mentally ill bernie wasn’t to blame for hodgkinson’s actions: and trump isn’t to blame for sayoc’s actions

  • everything you wanted to know about cesar sayoc yes: he appears to be a trump fan if he’s guilty: he did nothing violent; he’s one bad apple out of how many millions of us?

  • clare lopez on twitter: more background on sayoc just doesn’t add up lots more investigation needed! cesar sayoc: maga bomber, facebook betrays democrat trump infiltrator: anti gop posts

A.2.3 Conspiracy-oriented communities

We find that the discussion of this conspiracy in the conspiracy-oriented communities is highly critical of Trump, arguing that his speech and actions indirectly incite violence. A few posts sampled from these communities follow:

  • i don’t have cable sorry man it makes perfect sense that a bunch of paranoid: backwoods: gun loving morons who have been told that liberals and the press are the enemies of the people would do this who have been told, 2nd amendment people you know what to do: requires no leap of faith whatsoever edit: [aaaand we have a bingo](https://metro co uk/2018/10/26/suspected maga bomber identified native american trump supporter cesar sayoc 8079040/)

  • he preaches hate he incites violence he inspires attacks we knew this before friday’s arrest of cesar sayoc: ….

  • viral photo does not show pipe bomb suspect cesar sayoc with a ’democrat donor’ (but of course that does not stop crazy conspiracy theorists who seem to disregard at the same time the extensively documented pro trump activity of the magabomber)

A.2.4 Twitter

As in the previous example, we find that discussions on Twitter try to connect the incident to a larger cause or issue: they bring up the problem, criticize Trump and his supporters, and recall how incidents like this one have formed a repeating pattern of violence over the years. Connecting individual incidents related to a conspiracy theory to a larger context is a common theme in Twitter discussions. A few posts sampled from the platform follow:

  • RT : Rabid MAGA savages have been dangerous for years. Remember MAGA Bomber Cesar Sayoc: who sent pipe bombs to Trump’s percieved enemies.

  • It seems like ages ago: but we had a warning how dangerous these lunatic Trump ppl were by this guy: Cesar Sayoc. Remember this asshole? He sent pipe bombs to Clinton: Obama: Deniro and CNN. Storming the Capitol isn’t far-fetched for the cultists if you think about it.

  • RT : Violence inspired by Trump: Charlottesville, Tree of Life Synagogue, Capital Gazette, Cesar Sayoc pipe bombs, El Paso shootings.

  • Not to mention Cesar Sayoc and the Tree of Life Synagogue. Those who stuck with Trump have a lot of blood on their hands.