Lumen: A Machine Learning Framework to Expose Influence Cues in Text

07/12/2021
by   Hanyu Shi, et al.
University of Florida

Phishing and disinformation are popular social engineering attacks with attackers invariably applying influence cues in texts to make them more appealing to users. We introduce Lumen, a learning-based framework that exposes influence cues in text: (i) persuasion, (ii) framing, (iii) emotion, (iv) objectivity/subjectivity, (v) guilt/blame, and (vi) use of emphasis. Lumen was trained with a newly developed dataset of 3K texts comprised of disinformation, phishing, hyperpartisan news, and mainstream news. Evaluation of Lumen in comparison to other learning models showed that Lumen and LSTM presented the best F1-micro score, but Lumen yielded better interpretability. Our results highlight the promise of ML to expose influence cues in text, towards the goal of application in automatic labeling tools to improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.



1 Introduction

The Web has increasingly become an ecosystem for deception. Beyond social engineering attacks such as phishing, which put Internet users and even national security at great peril [35, 16], false information is greatly shaping the political, social, and economic landscapes of our society, exacerbated and brought to light in recent years by social media. Recent years have undoubtedly brought to light the dangers of selective exposure (a theory akin to confirmation bias, often used in Communication research, pertaining to the idea that individuals favor information that reinforces their prior beliefs [66]), and false content can increase individuals’ beliefs in the falsehood [54]. These deceptive and divisive misuses of online media have pushed previously tacit political lines to the forefront of our individual identities [27], raising concern about the anti-democratic effects of this polarization of our society [6].

A key invariant of deceptive content is the application of influence cues in the text. Research on deception detection [12, 56, 28, 55, 25] reveals that deceivers apply influence cues in messages to increase their appeal to the recipients. We posit several types of influence cues that are relevant and prevalent in deceptive texts: (i) the principle of persuasion applied [12, 14] (e.g., authority, scarcity), (ii) the framing of the message as either potentially causing a gain or a loss [55, 25], (iii) the positive/negative emotional salience/valence of the content [56, 28], (iv) the subjectivity or objectivity of sentences in the text, (v) attribution of blame/guilt, and (vi) the use of emphasis.

Additionally, works such as Ross et al. [54] found that the ability to think deliberately and analytically (i.e., “System 2” [25]) is generally associated with the rejection of disinformation, regardless of the participants’ political alignment—thus, the activation of this analytical thinking mode may act as an “antidote” to today’s selective exposure. We therefore advocate that interventions should mitigate deceptive content via the exposure of influence cues in texts. Similar to the government and state-affiliated media account labels on Twitter [69], bringing awareness to the influence cues present in misleading texts may, in turn, aid users by providing additional context in the message, thus helping users think analytically, and benefit future work aimed at the automatic detection of deceptive online content.

Towards this goal, we introduce Lumen (from Latin, meaning “to illuminate”), a two-layer learning framework that exposes influence cues in text using a novel combination of well-known existing methods: (i) topic modeling to extract structural features in text; (ii) sentiment analysis to extract emotional salience; (iii) LIWC [49], a transparent text analysis program that counts words in psychologically meaningful categories and is widely used to quantify psychometric characteristics of raw text data, to extract dictionary features related to influence cues; and (iv) a classification model to leverage the extracted features to predict the presence of influence cues. To evaluate Lumen’s effectiveness, we leveraged our dataset of 2,771 diverse pieces of online text, manually labeled by our research team according to the influence cues in the text using standard qualitative analysis methods. We must, however, emphasize that Lumen is not a consumer-facing end-product; rather, it is a module for application in future user tools, which we will make publicly available to be leveraged by researchers in future work (as described in Sec. 6.2).

Our newly developed dataset is comprised of nearly 3K texts, of which 1K were mainstream news articles and 2K were deceptive or misleading content in the form of: Russia’s Internet Research Agency’s (IRA) propaganda targeting Americans in the 2016 U.S. Presidential Election, phishing emails, and fake and hyperpartisan news articles. Here, we briefly define these terms, which we argue fall under the same “deceptive text umbrella.” Disinformation constitutes any purposefully deceptive content aimed at altering the opinion of or confusing an individual or group. Within disinformation, we find instances of propaganda (facts, rumors, half-truths, or lies disseminated manipulatively for the purpose of influencing public opinion [62]) and fake news (fabricated information that mimics real online news [54] and considerably overlaps with hyperpartisan news [6]). Misinformation’s subtler, political form is hyperpartisan news, which entails misleading coverage of factual events through the lens of a strong partisan bias, typically challenging mainstream narratives [54, 6]. Phishing is a social engineering attack aimed at influencing users via deceptive arguments into an action (e.g., clicking on a malicious link) that goes against the user’s best interests. Though phishing differs from disinformation in its modus operandi, we argue that it overlaps with misleading media in its main purpose—to galvanize users into clicking a link or button by triggering the victim’s emotions [6], and by leveraging influence and deception.

We conducted a quantitative analysis of the dataset, which showed that authority and commitment were the most common principles of persuasion in the dataset (71% and 52%, respectively), the latter of which was especially common in news articles. Phishing emails had the largest occurrence of scarcity (65%). Framing was a relatively rare occurrence (13% gain and 7% loss), though gain framing was predominantly prevalent in phishing emails (41%). The dataset invoked an overall positive sentiment in terms of average VADER compound score, with phishing emails containing the most positive average sentiment and fake news the most negative average sentiment. Objectivity and subjectivity occurred in over half of the dataset, with objectivity most prevalent in fake news articles (72%) and subjectivity most common in IRA ads (77%). Attribution of blame/guilt was disproportionately frequent for fake and hyperpartisan news (between 38% and 45%). The use of emphasis was much more common in informal texts (e.g., IRA social media ads, 70%), and less common in news articles (e.g., mainstream media, 17%).

We evaluated Lumen in comparison with other traditional ML and deep learning algorithms.

Lumen presented the best performance in terms of its F1-micro score (69.23%), performing similarly to LSTM (69.48%). In terms of F1-macro, LSTM (64.20%) performed better than Lumen (58.30%); however, Lumen presented better interpretability, allowing an intuitive understanding of the model, as it provides both the relative importance of each feature and the topic structure of the training dataset without additional computational costs, which cannot be obtained with LSTM as it operates as a black box. Our results highlight the promise of exposing influence cues in text via learning methods.

(Lumen and the dataset will be made available upon publication.)

This paper is organized as follows. Section 2 positions this paper’s contributions in comparison to related work in the field. Section 3 details the methodology used to generate our coded dataset. Section 4 describes Lumen’s design and implementation, as well as Lumen’s experimental evaluation. Section 5 contains a quantitative analysis of our dataset, and Lumen’s evaluation and performance. Section 6 summarizes our findings and discusses the limitations of our work, as well as recommendations for future work. Section 7 concludes the paper.

2 Related Work

This section briefly summarizes the extensive body of work on machine learning methods to automatically detect disinformation and hyperpartisan news, and initial efforts to detect the presence of influence cues in text.

2.1 Automatic Detection of Deceptive Text

2.1.1 Phishing and Spam

Most anti-phishing research has focused on automatic detection of malicious messages and URLs before they reach a user’s inbox via a combination of blocklists [15, 39] and ML [48, 10]. Despite yielding high filtering rates in practice, these approaches cannot prevent zero-day phishing (a new, not-yet-reported phishing email) from reaching users because determining maliciousness of text is an open problem and phishing constantly changes, rendering learning models and blocklists outdated in a short period of time [10]. Unless the same message has been previously reported to an email provider as malicious by a user or the provider has the embedded URL in its blocklist, determining maliciousness is extremely challenging. Furthermore, the traditional approach to automatically detect phishing takes a binary standpoint (phishing or legitimate, e.g., [7, 60, 11]), potentially overlooking distinctive nuances and the sheer diversity of malicious messages.

Given the limitations of automated detection in handling zero-day phishing, human detection has been proposed as a complementary strategy. The goal is either to warn users about issues with security indicators in web sites, which could be landing pages of malicious URLs [17, 67], or to train users to recognize malicious content online [58]. These approaches are not without their own limitations. For example, research on the effectiveness of SSL warnings shows that users either habituate or tend to ignore warnings due to false positives or a lack of understanding of the warning message [72, 2].

2.1.2 Fake & Hyperpartisan Media

The previously known “antidote” to reduce polarization and increase readers’ tolerance to selective exposure was via the use of counter-dispositional information [6]. However, countering misleading texts with mainstream or high-quality content in the age of rapid-fire social media comes with logistical and nuanced difficulties. Pennycook and Rand [50] provide a thorough review of the three main approaches employed in fighting misinformation: automatic detection, debunking by field experts (which is not scalable), and exposing the publisher of the news source.

Similar to zero-day phishing, disinformation is constantly morphing, such that “zero-day” disinformation may thwart already-established algorithms, as was the case with the COVID-19 pandemic [50]. Additionally, the final determination of a fake, true, or hyperpartisan label is fraught with subjectivity. Even fact-checkers are not immune—their agreement rates plummet for ambiguous statements [31], calling into question their efficacy for hyperpartisan news.

We posit that one facet of the solution lies in the combination of human and automated detection. Pennycook and Rand [50] conclude that a lack of careful reasoning and domain knowledge is linked to poor truth discernment, suggesting (alongside [54, 5]) that future work should aim to trigger users to think slowly and analytically [25] while assessing the accuracy of the information presented. Lumen aims to fulfill the first step of this goal, as our framework exposes influence cues in texts, which we hypothesize are disproportionately leveraged in deceptive content.

2.2 Detecting Influence in Deceptive Texts

2.2.1 Phishing

We focus on prior work that has investigated the extent to which Cialdini’s principles of persuasion (PoP) [14, 12] (described in Sec. 3) are used in phishing emails [64, 46, 18] and how users are susceptible to them [45, 30].

Lawson et al. [30] leveraged a personality inventory and an email identification task to investigate the relationship between personality and Cialdini’s PoP. The authors found that extroversion was significantly correlated with increased susceptibility to commitment, liking, and the pair (authority, commitment), the latter of which was found in 41% of our dataset. Following Cialdini’s PoP, after manually labeling 200 phishing emails, Akbar [1] found that authority was the most frequent principle in the phishing emails, followed by scarcity, corroborating our finding of a high prevalence of authority. However, in a large-scale phishing email study with more than 2,000 participants, Wright et al. [73] found that liking received the highest phishing response rate, while authority received the lowest. Oliveira et al. [45, 32] unraveled the complicated relationship between PoP, Internet user age, and susceptibility, finding that young users are most vulnerable to scarcity, while older users are most likely to fall for reciprocation, with authority highly effective for both age groups. These results are promising in highlighting the potential usability of exposing influence cues to users.

2.2.2 Fake & Hyperpartisan News

Contrary to phishing, few studies have focused on detecting influence cues or analyzing how users are susceptible to them in the context of fake or highly partisan content. Xu et al. [74] stands out, as the authors used a mixed-methods analysis, leveraging both manual analysis of the textual content of 1.2K immigration-related news articles from 17 different news outlets and computational linguistics (including, as we did, LIWC). The authors found that moral frames emphasizing authority/respect were shared/liked more, while the opposite occurred for reciprocity/fairness. Whereas we solely used trained coders, they measured the aforementioned frames by applying the moral foundations dictionary [21].

To the best of our knowledge, no prior work has investigated or attempted to automatically detect influence cues in texts in such a large dataset, containing multiple types of deceptive texts. In this work, we go beyond Cialdini’s principles to also detect gain and loss framing, emotional salience, subjectivity and objectivity, and the use of emphasis and blame. Further, no prior work has made available to the research community a dataset of deceptive texts labeled according to the influence cues applied in the text.

3 Dataset Curation & Coding Methodology

This section describes the methodology used to generate the labeled dataset of online texts used to train Lumen, including the definition of each of the influence cue labels.

3.1 Curating the Dataset

We composed a diverse dataset by gathering different types of texts from multiple sources, split into three groups: (Deceptive Texts) pieces of text containing disinformation and/or deception tactics, (Hyperpartisan News) hyperpartisan media news from politically right- and left-leaning publications, and (Mainstream News) center mainstream media news. Our dataset therefore contained nearly 3K pieces of text in total.

For the Deceptive Texts Group, we mixed Facebook ads created by the Russian Internet Research Agency (IRA), known fake news articles, and phishing emails:

Facebook IRA Ads. We leveraged a dataset of 3,517 Facebook ads created by the Russian IRA and made publicly available to the U.S. House of Representatives Permanent Select Committee on Intelligence [63] by Facebook after internal audits. These ads were a small representative sample of over 80K pieces of organic content identified by the Committee and are estimated to have been exposed to over 126M Americans between June 2015 and August 2017. After discarding ads that did not have a text entry, the dataset was reduced to 3,286 ads, which were mostly (52.8%) posted in 2016 (the U.S. election year). We randomly selected a subset of these ads for inclusion.

Fake News. We leveraged a publicly available dataset (https://ieee-dataport.org/open-access/fnid-fake-news-inference-dataset) of nearly 17K news articles labeled as fake or real, collected by Sadeghi et al. [3] from PolitiFact.com, a reputable source of fact-finding. We randomly selected fake news articles dated between 2007 and 2020.

Phishing Emails. To gather our dataset, we collected approximately 15K known phishing emails from multiple public sources [61, 44, 43, 57, 41, 70, 40, 42]. The emails were then cleaned and formatted to remove errors, noise (e.g., images, HTML elements), and any extraneous formatting so that only the raw email text remained. We randomly selected a subset of these emails to be included as part of the Deceptive Texts Group.

For the Hyperpartisan News and Mainstream News Groups, we used a public dataset (https://components.one/datasets/all-the-news-2-news-articles-dataset/) comprised of 2.7M news articles and essays from 27 American publications dated from 2013 to early 2020. We first selected articles within a target length range and then classified them as left, right, or center news according to the AllSides Bias Rating (https://www.allsides.com/media-bias/media-bias-ratings). For inclusion in the Hyperpartisan News Group, we randomly selected right-leaning and left-leaning news; the former were dated from 2016 to 2017 and came from two publication sources (Breitbart and National Review), while the latter were dated from 2016 to 2019 and came from six publications (Buzzfeed News, Mashable, New Yorker, People, VICE, and Vox). To compose the Mainstream News Group, we randomly selected center news from all seven publications (Business Insider, CNBC, NPR, Reuters, TechCrunch, The Hill, and Wired), dated from 2014 to 2019.

3.1.1 Coding Process

We then developed coding categories and a codebook based on Cialdini’s principles of influence [13], subjectivity/objectivity, and gain/loss framing [26]. These categories have been used in prior works (e.g., [46, 45]) and were adapted for the purposes of this study, with the addition of the emphasis and blame/guilt attribution categories. Next, we held an initial training session with nine undergraduate students. The training involved a thorough description of the coding categories, their definitions and operationalizations, as well as a workshop-style exercise where coders labeled a small sample of the texts to get acquainted with the coding platform, the codebook, and the texts. Coders were instructed to read each text at least twice before starting the coding to ensure they understood it. After that, coders were asked to share their experiences labeling the texts and to discuss any issues or questions about the process. After this training session, two intercoder reliability pretests were conducted, in which coders independently co-coded two samples of texts. After each of these pretests, a discussion and new training session followed to clarify any issues with the categories and codebook.

Following these additional discussion and training sessions, coders were then instructed to co-code a set of texts that served as our intercoder reliability sample. To calculate intercoder reliability, we used three indexes. Cohen’s kappa and Percent of Agreement were considered moderately satisfactory, with Percent of Agreement ranging from 66% to 99%. Due to the nature of the coding and type of texts, we also opted to use Perrault and Leigh’s index because (a) it has been used in similar studies that also use nominal data [20, 23, 34, 53]; (b) it is the most appropriate reliability measure for this type of coding (i.e., when coders mark for absence or presence of given categories), as traditional approaches do not take two zeros into consideration as agreement and thus penalize reliability even if coders disagree only a few times [51]; and (c) indexes such as Cohen’s kappa and Scott’s pi have been criticized for being overly conservative and difficult to compare to other indexes of reliability [33]. Perrault and Leigh’s index returned a range that was considered satisfactory. Finally, the remaining texts were divided equally among all coders, who coded all the texts independently using an electronic coding sheet in Qualtrics. Coders were instructed to distribute their workload evenly over the coding period to counteract possible fatigue effects. This coding process lasted three months.

3.1.2 Influence Cues Definitions

The coding categories were divided into five main concepts: principles of influence, gain/loss framing, objectivity/subjectivity, attribution of guilt, and emphasis. Coders marked for the absence or presence of each of the categories. Definitions and examples for each influence cue are detailed in Appendix A, leveraged from the coding manual we curated to train our group of coders.

Principles of Persuasion (PoP). Persuasion refers to a set of principles that influence how people concede or comply with requests. The principles of influence were based on Cialdini’s marketing research work [12, 14], and consist of the following six principles: (i) authority or expertise (e.g., people tend to comply with requests or accept arguments made by figures of authority), (ii) reciprocation, (iii) commitment and consistency, (iv) liking, (v) scarcity, and (vi) social proof. We added subcategories to the principles of commitment (i.e., indignation and call to action) and social proof (i.e., admonition) because an initial perusal of texts revealed consistent usage across texts.

Framing. Framing refers to the presentation of a message (e.g., health message, financial options, and advertisement) as implying a possible gain (i.e., possible benefits of performing the action) vs. implying a possible loss (i.e., costs of not performing a behavior) [55, 29, 25]. Framing can affect decision-making and behavior; work by Kahneman and Tversky [25] on loss aversion supports the human tendency to prefer avoiding losses over acquiring equivalent gains.

Slant. Slant refers to whether a text is written subjectively or objectively; subjective sentences generally refer to a personal opinion/judgment or emotion, whereas objective sentences refer to factual information that is based on evidence, or present such evidence. It is important to note that we did not ask our coders to fact check, instead asking them to rely on sentence structure, grammar, and semantics to determine the label of objective or subjective.

Attribution of Blame/Guilt. Blame or guilt refers to when a text references “another” person/object/idea for wrong or bad things that have happened.

Emphasis. Emphasis refers to the use of all caps text, exclamation points (either one or multiple), several question marks, bold text, italics text, or anything that is used to call attention in text.

4 Lumen Design and Implementation

This section describes the design, implementation and evaluation of Lumen, our proposed two-level learning-based framework to expose influence cues in texts.

4.1 Lumen Overview

Exposing the presence of persuasion and framing is tackled as a multi-label document classification problem, where zero, one, or more labels can be assigned to each document. Leveraging recent developments in natural language processing, Lumen extracts emotional salience via sentiment analysis and exposes it both as an input feature and as an output. Note that Lumen’s goal is not to distinguish deceptive vs. benign texts, but to expose the different influence cues applied in different types of texts.

Figure 1 illustrates Lumen’s two-level hierarchical learning-based architecture. On the first level, the following features are extracted from the raw text: (i) topical structure inferred by topic modeling, (ii) LIWC features related to influence keywords [49], and (iii) emotional salience features learned via sentiment analysis [24]. On the second level, a classification model is used to identify the influence cues present in the text.

Fig. 1: Lumen’s two-level architecture. Pre-processed text undergoes sentiment analysis for extraction of emotional salience, LIWC analysis for extraction of features related to influence keywords, and topic modeling for structural features. These features are inputs to ML analysis for prediction of influence cues applied to the message.
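To make this data flow concrete, the sketch below outlines the two-level composition; the helper objects and method names are hypothetical stand-ins, not the authors' released code.

```python
# Minimal sketch of Lumen's two-level flow (hypothetical helper interfaces).
import numpy as np

def extract_features(texts, topic_model, liwc_counter, sentiment_analyzer):
    """Level 1: build one feature vector per document."""
    rows = []
    for doc in texts:
        topic_feats = topic_model.infer(doc)               # P(topic | doc), length K
        liwc_feats = liwc_counter.normalized_counts(doc)   # selected influence categories
        senti_feats = sentiment_analyzer.scores(doc)       # e.g., positive/negative salience
        rows.append(np.concatenate([topic_feats, liwc_feats, senti_feats]))
    return np.vstack(rows)

def predict_influence_cues(features, classifier):
    """Level 2: multi-label prediction, one binary column per influence cue."""
    return classifier.predict(features)
```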

4.2 Topic Structure Features

Probabilistic topic modeling algorithms are often used to infer the topic structure of unstructured text data [65, 9], which in our case consists of deceptive texts, hyperpartisan news, and mainstream news. Generally, these algorithms assume that a collection of documents (i.e., a corpus) is created following a generative process.

Suppose that there are $D$ documents in the corpus and that each document $d$ has length $N_d$. Also suppose that there are in total $K$ different topics in the corpus and that the vocabulary includes $V$ unique words. The relations between documents and topics are determined by conditional probabilities $P(z = k \mid d)$, which specify the probability of topic $k$ given document $d$. The linkage between topics and unique words is established by conditional probabilities $P(w \mid z = k)$, which indicate the probability of word $w$ given topic $k$. According to the generative process, for each token $w_{d,n}$, which denotes the $n$-th word in document $d$, we first draw the topic of this token, $z_{d,n}$, according to $P(z \mid d)$; with the obtained $z_{d,n}$, we then draw a word according to $P(w \mid z = z_{d,n})$.

In this work, we leveraged Latent Dirichlet Allocation (LDA), one of the most widely used topic modeling algorithms, to infer topic structure in texts [8]. In LDA, both $P(z \mid d)$ and $P(w \mid z)$ are assumed to have Dirichlet prior distributions. Given our dataset, which serves as the evidence for the probabilistic model, the goal of LDA is to infer the most likely conditional distributions $P(z \mid d)$ and $P(w \mid z)$, which is usually done by either a variational Bayesian approach [8] or Gibbs sampling [22]. In Lumen, the conditional probabilities $P(z \mid d)$ represent the topic structure of the dataset.
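As one possible implementation of this step (the paper does not name a specific library), the per-document topic distributions can be inferred with scikit-learn's LDA and used directly as features:

```python
# Sketch: inferring P(topic | doc) features with scikit-learn's LDA (toy corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["your account security requires immediate verification",
        "the committee reviewed the spending bill on tuesday"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)                                # bag-of-words counts
lda = LatentDirichletAllocation(n_components=10, random_state=0)  # K = 10 topics
doc_topic = lda.fit_transform(X)                                  # rows approximate P(topic | doc)
```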

4.3 LIWC Influence Features

We use language to convey our thoughts, intentions, and emotions, with words serving as the basic building blocks of language. Thus, the way different words are used in unstructured text data provides meaningful information to streamline our understanding of the use of influence cues in text. Lumen thus leverages LIWC, a natural language processing framework that connects commonly used words with categories [68, 49], to retrieve influence features of texts to aid ML classification. LIWC includes more than 70 different categories in total, such as Perceptual Processes, Grammar, and Affect, and more than 6K common words.

However, not all the categories are related to influence. After careful inspection, we manually selected seven categories as features related to influence for Lumen. For persuasion, we selected the category time (related to scarcity); for emotion, we selected the categories anxiety, anger, and sad; and for framing, we selected the categories reward and money (gain), and risk (loss).

We denote the collection of the chosen LIWC categories as the set $C$. Given a text document $d$ with document length $N_d$ from the corpus, to build the LIWC feature for category $c \in C$, we first count the number of words in the text belonging to category $c$, denoted $n_{d,c}$, and then normalize the raw word count by the document length:

$$x_{d,c} = \frac{n_{d,c}}{N_d}. \qquad (1)$$
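Since the LIWC dictionary itself is proprietary, the sketch below uses tiny stand-in word lists only to illustrate Equation (1): count the words falling in each chosen category and normalize by document length.

```python
# Illustrative only: stand-in word lists, not the actual LIWC dictionary.
LIWC_CATEGORIES = {
    "time":   {"now", "today", "soon", "deadline", "hour"},
    "anger":  {"hate", "furious", "outrage"},
    "reward": {"win", "prize", "bonus", "benefit"},
}

def liwc_features(tokens):
    """Normalized counts x_{d,c} = n_{d,c} / N_d for each category c."""
    n_d = max(len(tokens), 1)
    return {cat: sum(tok in words for tok in tokens) / n_d
            for cat, words in LIWC_CATEGORIES.items()}

print(liwc_features("act now to win a bonus prize before the deadline".split()))
```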

4.4 Emotional Salience by Sentiment Analysis

Emotional salience refers to both valence (positive to negative) and arousal (arousing to calming) of an experience or stimulus [56, 47, 28], and research has shown that deception detection is reduced for emotional compared to neutral stimuli [47]. Similarly, persuasion messages that generate high (compared to low) arousal lead to poorer consumer decision-making [28]. Emotional salience may impair full processing of deceptive content and high arousal content may trigger System 1, the fast, shortcut-based brain processing mode [4].

In this work, we used a pre-trained rule-based model, VADER, to extract the emotional salience and valence from a document [24]. Both levels of emotion range from 0 to 1, where a small value means low emotional levels and a large number means high emotional levels. Therefore, emotional salience is both an input feature to the learning model and one of Lumen’s outputs (see Fig. 1).
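A minimal example of obtaining VADER scores through NLTK is shown below (one common way to access the model; the authors' exact invocation may differ).

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")            # one-time download of VADER's rule lexicon
analyzer = SentimentIntensityAnalyzer()

scores = analyzer.polarity_scores("Congratulations! You have WON a free prize!")
# 'pos', 'neg', and 'neu' lie in [0, 1]; 'compound' summarizes overall valence.
print(scores["pos"], scores["neg"], scores["compound"])
```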

4.5 Machine Learning to Predict Persuasion & Framing

Lumen’s second level corresponds to the application of a general-purpose ML algorithm for document classification. Although Lumen is general enough to allow application of any general-purpose algorithm, in this paper we applied Random Forest (RF) because it provides the level of importance of each input predictive feature without additional computational cost, which aids in model understanding. Another advantage of RF is its robustness to the magnitudes of input predictive features, i.e., RF does not need feature normalization.

We use a grid search approach to fine-tune the parameters of the RF model and use cross-validation to mitigate over-fitting.
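A sketch of this tuning step with scikit-learn's GridSearchCV, assuming X is the Level-1 feature matrix and Y the binary cue labels (shapes and grid values are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative shapes: e.g., 10 topic + 7 LIWC + 2 sentiment features; 12 coded cue labels.
X = np.random.rand(500, 19)
Y = np.random.randint(0, 2, (500, 12))

param_grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="f1_micro")
search.fit(X, Y)                 # RF handles the multi-label target natively
print(search.best_params_)
```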

4.5.1 Dataset Pre-Processing

As described previously, Lumen generates three types of features at its first hierarchical level (emotional salience, LIWC categories, and topic structure), which serve as input for the learning-based prediction algorithm (Random Forest, for this analysis) at Lumen’s second hierarchical level (Fig. 1); these features rely on the unstructured texts in the dataset. However, different features need distinct preprocessing procedures. In our work, we used the Natural Language Toolkit (NLTK) [38] to pre-process the dataset. For all three types of features, we first removed all the punctuation, special characters, digital numbers, and words with only one or two characters. Next, we tokenized each document into a list of lowercase words.

For topic modeling features, we removed stopwords (which provide little semantic information) and applied stemming (replacing a word’s inflected form with its stem form) to further clean up the word tokens. For LIWC features, we matched each word in each text with the pre-determined word list in each LIWC category; we also performed stemming for LIWC features. We did not need to perform pre-processing for emotional salience because we applied VADER [24], which has its own tokenization and pre-processing procedures.

Additionally, we filtered out documents with fewer than ten words, since topic modeling results for extremely short documents are not reliable [59]. We were then left with 2,771 cleaned documents, with 183,442 tokens across the corpus and 14,938 unique words in the vocabulary.
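A minimal NLTK-based sketch of this pre-processing path for the topic modeling features (the exact regular expression and stemmer choice are assumptions):

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords")
stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    # Drop punctuation, special characters, and digits; keep words longer than two characters.
    text = re.sub(r"[^A-Za-z\s]", " ", text)
    tokens = [tok.lower() for tok in text.split() if len(tok) > 2]
    # Remove stopwords and reduce each remaining word to its stem.
    return [stemmer.stem(tok) for tok in tokens if tok not in stop_words]

print(preprocess("The URGENT notice: verify your 2 accounts immediately!!"))
# Documents left with fewer than ten tokens would subsequently be filtered out.
```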

4.5.2 Training and Testing

Next, we split the 2,771 documents into a training and a testing set. In learning models, hyper-parameters are of crucial importance because they control the structure or learning process of the algorithms. Lumen applies two learning algorithms: an unsupervised topic modeling algorithm, LDA, on the first hierarchical level, and RF on the second level. Each algorithm introduces its own types of hyper-parameters; for LDA, examples include the number of topics and the concentration parameters of the Dirichlet distributions, whereas for RF they include the number of trees and the maximum depth of a tree. We used the grid search approach to find a good combination of hyper-parameters. Note that due to time and computational power constraints, it is impossible to search all hyper-parameters and all their potential values; in this work, we only performed the grid search over the number of topics (LDA) and the number of trees (RF). The results show that the optimal number of topics is 10 and the optimal number of trees in RF is 200. Note also that the optimal result is limited by the grid search space, which only contains a finite set of parameter combinations.

If we only trained and tested Lumen on a single pair of training and testing sets, there would be a high risk of overfitting. To lower this risk, we used 5-fold cross-validation, wherein the final performance of the learning algorithm is the average performance over the five training and testing pairs.
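A sketch of this 5-fold loop, averaging the metric over the five train/test splits (data here is randomly generated for illustration; the 200-tree setting is the optimum reported above):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold

X = np.random.rand(2771, 19)                 # Level-1 features for the 2,771 documents
Y = np.random.randint(0, 2, (2771, 12))      # binary influence-cue labels (illustrative)

fold_scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], Y[train_idx])
    fold_scores.append(f1_score(Y[test_idx], clf.predict(X[test_idx]), average="micro"))

print(np.mean(fold_scores))                  # final performance = average over folds
```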

4.5.3 Evaluation Metrics

To evaluate our results (Sec. 5), we compared Lumen’s performance in predicting the influence cues applied to a given document with three other document classification algorithms: (i) Labeled-LDA, (ii) LSTM, and (iii) a naïve algorithm.

Labeled-LDA is a semi-supervised variation of the original LDA algorithm [52, 71]. When training Labeled-LDA, both the raw documents and the human-coded labels for influence cues were input into the model. Compared to Lumen, Labeled-LDA only uses the word frequency information from the raw text data and makes a very rigid assumption about the relation between the word frequency information and the coded labels, which limits its flexibility and prediction ability.

Long Short-Term Memory (LSTM) takes the input data recurrently, regulates the flow of information, and determines what to pass on to the next processing step and what to forget. Since neural networks mainly deal with vector operations, we used a 50-dimensional word embedding matrix to map each word into vector space [36]. The main shortcoming of neural networks is that they work as black boxes, making it difficult to understand the underlying mechanism.
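A compact PyTorch sketch of such an LSTM baseline is shown below; the library choice and all layer sizes other than the 50-dimensional embedding are assumptions.

```python
import torch
import torch.nn as nn

class LSTMCueClassifier(nn.Module):
    """Illustrative LSTM baseline: 50-dim word embeddings, multi-label sigmoid output."""
    def __init__(self, vocab_size, num_labels, embed_dim=50, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return torch.sigmoid(self.out(h_n[-1]))   # per-cue probabilities

model = LSTMCueClassifier(vocab_size=14938, num_labels=12)
probs = model(torch.randint(0, 14938, (4, 60)))   # batch of 4 documents, 60 tokens each
```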

The naïve algorithm served as a baseline for our evaluation. We randomly generated each label for each document according to a Bernoulli distribution with equal probabilities for the two outcomes.
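A one-line sketch of this baseline, using the same label-matrix shape as above for illustration:

```python
import numpy as np

# Naïve baseline: each label is an independent fair coin flip per document.
rng = np.random.default_rng(0)
naive_pred = rng.integers(0, 2, size=(2771, 12))   # Bernoulli(0.5) for every (doc, cue) pair
```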

As shown in Table I, we used the F1-score (following the work by Ramage et al. [52] and van der Heijden et al. [71]) and the accuracy rate to quantify the performance of the algorithms. We note that the comparison of F1-scores is only meaningful under the same experimental setup; it would be uninformative to compare F1-scores from distinct experiments in the literature due to varying experimental conditions.

The F1-score can be easily calculated for single-labeling classification problems, where each document is assigned exactly one label. However, in our work we are dealing with a multi-labeling classification problem, which means that no limit is imposed on how many labels each document can include. Thus, we employed two variations of the F1-score to quantify the overall performance of the learning algorithms: the macro and micro F1-scores.
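Micro-F1 pools true/false positives over all label decisions, while macro-F1 averages the per-label F1 scores; with scikit-learn (one possible tooling choice) both are a single call:

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-label ground truth and predictions (rows = documents, columns = influence cues).
y_true = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])

print(f1_score(y_true, y_pred, average="micro"))   # pooled over all label decisions
print(f1_score(y_true, y_pred, average="macro"))   # unweighted mean of per-label F1
```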

5 Results

This section details Lumen’s evaluation. We first provide a quantitative analysis of our newly developed dataset used to train Lumen, followed by the results of Lumen’s classification in comparison to other ML algorithms.

5.1 Quantitative Analysis of the Dataset

We begin by quantifying the curated dataset of 2,771 deceptive, hyperpartisan, or mainstream texts, hand-labeled by a group of coders. When considering all influence cues, most texts used between three and six cues per text; only 3% of all texts leveraged a single influence cue, and 2% used zero cues.

When considering the most common pairs and triplets among all influence cues, slant (i.e., subjectivity or objectivity) and principles of persuasion (PoP) dominated the top 10 most common pairings and triplets. The most common pairs were (authority, objectivity) and (authority, subjectivity), occurring in 48% and 45% of all texts, respectively. The most common (PoP, PoP) pairing was between authority and commitment, co-occurring in 41% of all texts. Emphasis appeared once in the top 10 pairs and twice in the top triplets: (emphasis, subjectivity) occurring in 29% of texts, and (emphasis, authority, subjectivity) and (emphasis, commitment, subjectivity) in 20% and 19% of texts, respectively. Blame/guilt appeared only once in the top triplets as (authority, blame/guilt, objectivity), representing 19% of all texts. Gain framing appeared only as the 33rd most common pair (gain, subjectivity) and the 18th most common triplet (call to action, scarcity, gain), further emphasizing its rarity in our dataset.

5.1.1 Principles of Persuasion

We found that most texts in the dataset contained one to four principles of persuasion, with only 4% containing zero and 3% containing six or more PoP labels; 29% of texts apply two PoP and 23% leverage three PoP. Further, Fig. 2 shows that authority and commitment were the most prevalent principles appearing, respectively, in 71% and 52% of the texts; meanwhile, reciprocation and indignation were the least common PoP (5% and 9%, respectively).

Almost all types of texts contained every PoP to varying degrees; the only exception is reciprocation (the least-used PoP overall), which was not at all present in fake news texts (in the Deceptive Texts Group) and barely present (0.6%) in right-leaning hyperpartisan news. Authority was the most-used PoP for all types of texts, except phishing emails (most: call to action) and IRA ads (most: commitment), both of which are in the Deceptive Texts Group.


Fig. 2: The total count for each text type, broken down by principle of persuasion.

Deceptive Texts. Fake news was notably reliant on authority (92% of all fake news leveraged the authority label) compared to phishing emails (45%) and the IRA ads (32%); however, fake news used liking, reciprocation, and scarcity (5%, 0%, 3%, respectively) much less often than phishing emails (27%, 8%, 65%) or IRA ads (41%, 10%, 24%). Interestingly, admonition was most used by fake news (35%), though overall, admonition was only present in 14% of all texts. Phishing emails were noticeably more reliant on call to action (80%) and scarcity (65%) compared to fake news (33%, 3%) and IRA ads (40%, 24%), yet barely used indignation (0.4%) compared to fake news (13%) and IRA ads (17%). The IRA ads relied on indignation, liking, reciprocation, and social proof much more than the others; note again that reciprocation was the least occurring PoP (5% overall), but was most common in IRA ads (10%).

Hyperpartisan News. Right-leaning texts had nearly twice as much call to action and indignation as left-leaning texts (61% and 19% vs. 31% and 8%, respectively). Meanwhile, left-leaning hyperpartisan texts had noticeably more liking (30% vs. 13%), reciprocation (8% vs. 0.6%), and scarcity (27% vs. 13%) than right-leaning texts.

Mainstream News. Authority (88%) and commitment (43%) were the most frequently appearing PoP in center news, though this represents the third highest occurrence of authority and the lowest use of commitment across all six text type groups. Mainstream news also used very little indignation (3%) compared to the other text types except phishing emails (0.4%), and also demonstrated the lowest use of social proof (7%).

Takeaway: Authority and commitment were the most common PoP in the dataset, with the former most common in fake news articles. Phishing emails had the largest occurrence of scarcity.

5.1.2 Framing

There were few gain or loss labels for the overall dataset (only 13% and 7%, respectively). Very few texts (18%) were framed exclusively as either gain or loss, 81% did not include any framing at all, and only 1% of the texts used both gain and loss framing in the same message. We also found that gain was much more prevalent than loss across all types of texts, except for fake news, which showed an equal amount (1.5% for both gain and loss). Notably, phishing emails had significantly more gain and loss than any other text type (41% and 29%, respectively); mainstream center news and IRA ads showed some use of gain framing (10% and 13%, respectively) compared to the remaining text types.

Next, we investigated how persuasion and framing were used in texts by analyzing the pairs and triplets between the two influence cues. Gain framing most frequently occurred with call to action and commitment, though these represent only 9% of pairings. (Gain, call to action, scarcity) was the most common triplet between PoP and framing, occurring in 7% of all texts—this is notable as phishing emails had call to action and scarcity as their top PoP, and gain framing was also most prevalent in phishing. Also of note is that loss appeared in even fewer common pairs and triplets compared to gain (e.g., loss and call to action appeared in just 5% of texts).

Takeaway: Framing was a relatively rare occurrence in the dataset, though predominantly present in phishing emails, wherein gain was invoked more often than loss.

5.1.3 Emotional Salience

We used VADER’s compound sentiment score (ranging from -1 to 1, where positive, negative, and near-zero values denote positive, negative, and neutral sentiment, respectively) and LIWC’s positive and negative emotion word count metrics to measure sentiment. Overall, our dataset was slightly positive in terms of average compound sentiment, with an average of 4.0 positive emotion words and 1.7 negative emotion words per text.

In terms of specific text types, fake news contained the only negative average compound sentiment, and right-leaning hyperpartisan news had the only neutral average compound sentiment; all other text types had, on average, positive sentiment, with phishing emails as the most positive text type (0.635). Left-leaning hyperpartisan news had the highest average positive emotion word count, followed by phishing emails, whereas fake news had the highest average negative word count, followed by left-leaning hyperpartisan news.

We also analyzed whether emotional salience has indicative power to predict the influence cues. Most influence cues and LIWC categories had an average positive sentiment, with liking and gain framing having the highest levels of positive emotion. Anxiety and anger (both LIWC categories) showed the only neutral sentiment, whereas admonition, blame/guilt, and indignation were the only ones with negative sentiment (with the latter being the most negative of all categories). Interestingly, items such as loss framing and LIWC’s risk both had positive sentiment.

Takeaway: The dataset invoked an overall positive sentiment, with phishing emails containing the most positive average sentiment and fake news the most negative average sentiment.

5.1.4 Slant

The objective and subjective labels were present in 52% and 64% of all texts in the dataset, respectively. This high frequency for both categories was present in all text types except phishing emails and IRA ads, where subjectivity was considerably more common than objectivity. The most subjective text type was IRA ads (77%) and the most objective was fake news (72%); conversely, the least objective texts were phishing emails (27%) and the least subjective were mainstream center news (58%).

More notably, there was an overlap between the slants, wherein 29% of all texts contained both subjective and objective labels. This could reflect mixing factual (objective) statements with subjective interpretations of them. Nonetheless, objectivity and subjectivity were independent variables. The pairings (objectivity, authority) and (subjectivity, authority) were the top two most common pairs considering PoP and slant; these pairs occurred at nearly the same frequency within the dataset (48% and 45%, respectively). This pattern repeats itself for other (PoP, slant) pairings and triplets, insofar as (objectivity, subjectivity, authority) is the third most commonly occurring triplet. When comparing just (PoP, slant) triplets, slant is present in the top triplets, with (subjectivity, authority, commitment) and (objectivity, authority, commitment) as the two most common (30% and 27%, respectively).

Takeaway: Objectivity and subjectivity each occurred in over half of the dataset; the latter was much more common in phishing emails and IRA ads, while the former was most common in fake news articles.

5.1.5 Attribution of Blame & Guilt

Twenty-nine percent of all texts contained the blame/guilt label. Interestingly, nearly the same proportions of fake news (45.4%) and right-leaning hyperpartisan news (45.0%) were labeled with blame/guilt, followed by left-leaning hyperpartisan news (38%). Phishing emails, IRA ads, and mainstream center media used blame/guilt at the lowest frequencies (ranging from 15% to 25%).

Blame/guilt was somewhat present in the top 10 pairs with PoP, pairing only with authority (4th most common pairing, at 26% frequency) and commitment (6th most common, 18%). However, blame/guilt appeared more frequently among the top 10 triplets with PoP, co-occurring with authority, commitment, call to action, and admonition.

Takeaway: Blame/guilt was disproportionately frequent for fake and hyperpartisan news, commonly co-occurring with authority or commitment.

5.1.6 Emphasis

Emphasis was used in nearly 35% of all texts in the dataset. Among the text types, the news sources (fake, hyperpartisan, and mainstream) showed the smallest use of emphasis (range: 17% to 26%). This follows, as news (regardless of veracity) is likely attempting to present itself as legitimate. On the other hand, phishing emails and IRA ads were both shared in arguably more informal communication environments (email and social media), and were thus often found to use emphasis (over 54% for both categories). Additionally, similar to previous analyses for other influence cues, emphasis largely co-occurred with authority, commitment, and call to action.

Takeaway: The use of emphasis was much more common in informal text types (phishing emails and IRA social media ads), and less common in news-like sources (fake, hyperpartisan, or mainstream).

5.1.7 LIWC Features of Influence

We also explored whether LIWC features have indicative powers to predict the influence cues. Table 1 in Appendix B shows that indignation and admonition had the highest average anxiety feature, while liking and gain framing had the lowest. Indignation also scored three times above the overall average for the anger feature, as well as for sadness (alongside blame/guilt), whereas gain had the lowest average for both anger and sadness. The reward feature was seen most in liking and in gain, while risk was slightly more common in loss framing. The time category had the highest overall average and was most common in blame/guilt, while money had the second largest overall average and was most common in loss.

We also saw that left-leaning hyperpartisan news had the highest average anxiety, sadness, reward, and time counts compared to all text types, whereas right-leaning hyperpartisan news averaged slightly higher than left-leaning media only in the risk feature. Note, however, that LIWC is calculated based on word counts and is therefore possibly biased towards longer texts; it should thus be noted that while left hyperpartisan media had the highest averages for four of the seven LIWC features, it also had the second largest average text length compared to other text types.

For the Deceptive Texts Group, phishing emails had the largest risk and money averages across all text types, while averaging lowest in anxiety, anger, and sadness. Fake news was highest overall in anger, and slightly higher in anxiety, sadness, and time compared to phishing emails and IRA ads. On the other hand, the IRA ads were lowest in reward, risk, time, and money compared to the rest of their group.

Lastly, mainstream center media had no LIWC categories at either extreme—most of its average LIWC values were close to the overall averages for the entire dataset.

Takeaway: LIWC influence features varied depending on the type of text. Left hyperpartisan news had the highest averages for four features (anxiety, sadness, reward, and time). Phishing evoked risk and money, while fake news evoked anger.

5.2 Lumen’s Multi-Label Prediction

Algorithm      F1-macro   F1-micro   Overall Accuracy
Lumen          58.30%     69.23%     72.43%
Labeled-LDA    52.35%     60.55%     64.22%
LSTM           64.20%     69.48%     72.34%
Naïve          43.55%     46.80%     49.58%
TABLE I: Evaluation metric results for different learning algorithms in detecting influence cues.

This section describes our results in evaluating Lumen’s multi-label prediction using the dataset. We compared Lumen’s performance against three other ML algorithms: Labeled-LDA, LSTM, and a naïve algorithm. The former two learning algorithms and Lumen performed much better than the naïve algorithm, which shows that ML is promising for retrieval of influence cues in texts. From Table I we can see that Lumen’s performance is as good as the state-of-the-art prediction algorithm LSTM in terms of F1-micro score and overall accuracy (with less than 0.3% difference in each metric). On the other hand, LSTM outperformed Lumen in terms of F1-macro, which is an unweighted mean of the metric over the labels, potentially indicating that Lumen underperforms LSTM on some labels even though both algorithms share similar overall prediction results (accuracy). Nonetheless, Lumen presented better interpretability than LSTM (discussed below). Finally, both Lumen and LSTM performed better than Labeled-LDA in both F1-scores and accuracy, further emphasizing that additional features besides topic structure can help improve the performance of the prediction algorithm.

To show Lumen’s ability to provide better understanding to practitioners (i.e., interpretability), we trained it with our dataset and the optimal hyper-parameter values from grid search. After training, Lumen provided both the relative importance of each input feature and the topic structure of the dataset without additional computational costs, which LSTM cannot provide because it operates as a black-box.

Input Feature Importance Keywords
Topic-1 0.073 account, bank, security, time
Positive sentiment 0.071 N/A
Negative sentiment 0.070 N/A
Topic-8 0.065 report, share, billion, source, profit
Topic-2 0.062 black, people, trump, police, twitter
TABLE II: The top-five most important features for Lumen’s prediction. Topic features are related to LDA topic modeling results.

Table II shows the top five most important features in Lumen’s prediction decision-making process. Among these features, two are related to sentiment and the remaining three are topic features (related to bank account security, company profit reports, and current-events tweets), which supports the choice of these types of input features. Positive and negative sentiment had comparable levels of importance to Lumen, alongside the bank account security topic.

6 Discussion

In this paper, we posit that interventions to aid human-based detection of deceptive texts should leverage a key invariant of these attacks: the application of influence cues in the text to increase its appeal to users. The exposure of these influence cues to users can potentially improve their decision-making by triggering their analytical thinking when confronted with suspicious texts, which were not flagged as malicious via automatic detection methods. Stepping towards this goal, we introduced Lumen, a learning framework that combines topic modeling, LIWC, sentiment analysis, and ML to expose the following influence cues in deceptive texts: persuasion, gain or loss framing, emotional salience, subjectivity or objectivity, and use of emphasis or attribution of guilt. Lumen was trained and tested on a newly developed dataset of 2,771 texts, comprised of purposefully deceptive texts, and hyperpartisan and mainstream news, all labeled according to influence cues.

6.1 Key Findings

Most texts in the dataset applied between three and six influence cues; we hypothesize that this may reflect the potential appeal or popularity of texts of moderate complexity. Deceptive or misleading texts constructed without any influence cues may be too simple to convince the reader, while texts with too many influence cues might be far too long or complex, and in turn more time-consuming to write (for attackers) and to read (for receivers).

Most texts also applied authority, which is concerning as it has been shown to be one of the most impactful principles in studies of user susceptibility to phishing [45]. Meanwhile, reciprocation was the least used principle at only 5%; this may be an indication that reciprocation does not lend itself well to being applied in text, as it requires giving something to the recipient first and expecting an action in return later. Nonetheless, reciprocation was most common in IRA ads (10%); these ads were posted on Facebook, and social media might be a more natural and intuitive place to give gifts or compliments. We also found that the application of the PoP was highly imbalanced, with reciprocation, indignation, social proof, and admonition each being applied in less than 15% of the texts during the coding process.

The least used influence cues were gain and loss framing, appearing in only 13% and 7% of all texts, respectively. Though Kahneman and Tversky [25] posited that a loss is more impactful than the possibility of a gain, our dataset indicates that gain was more prevalent than loss. This is especially the case in phishing emails, where the framing frequencies increase to 41% and 29%; this difference suggests that in phishing emails, attackers might be attempting to lure users with potential financial gain. We further hypothesize that phishing emails exhibited these high rates of framing because successful phishing survives only via a direct action from the user (e.g., clicking a link), which may motivate attackers to use framing as a key influence method. Phishing emails also exhibited the most positive average sentiment (0.635) compared to other text types, possibly related to their large volume of gain labels, which were also strongly positive in sentiment (0.568).

Interestingly, texts varied among themselves in terms of influence cues even within their own groups. For example, within the Deceptive Texts Group, fake news used notably more authority, objectivity, and blame/guilt compared to phishing emails and IRA ads, and was much lower in sentiment compared to the latter two. Though phishing emails and IRA ads were more similar, phishing nonetheless differed in its higher positive sentiment, gain framing, and scarcity, and its lower blame/guilt. This was also evident within the Hyperpartisan News Group—while right-leaning news had a higher frequency of commitment, call to action, and admonition than left-leaning news, the opposite was true for liking, reciprocation, and scarcity. Even comparing among all news types (fake, hyperpartisan, and mainstream), this diversity of influence cues still prevailed, with the only resounding agreement being a relative lack of use of emphasis. This diversity across text types gives evidence of the highly imbalanced application of influence cues in real deceptive or misleading campaigns.

We envision the use of Lumen (and ML methods in general) to expose influence cues as a promising direction for application tools that aid human detection of cyber-social engineering and disinformation. Lumen presented performance comparable to LSTM in terms of F1-micro score. Lumen’s interpretability allows a better understanding of both the dataset and Lumen’s decision-making process, consequently providing invaluable insights for feature selection.

6.2 Limitations & Future Work

Dataset. One limitation of our work is that the dataset is unbalanced. For example, our coding process revealed that some influence cues (e.g., authority) were disproportionately more prevalent than others (e.g., reciprocation, framing). Even though an unbalanced dataset is not ideal for ML analyses, we see this imbalance as part of the phenomenon under study: attackers and writers might find it more difficult to convey certain concepts via text, thus favoring more effective and direct influence cues such as authority. Ultimately, our dataset is novel in that each of the nearly 3K items was coded according to 12 different variables; this was a time-intensive process, and we shall test the scalability of Lumen in future work. Nevertheless, we plan to alleviate this dataset imbalance in future work by curating a larger, high-quality labeled dataset, either by reproducing our coding methodology or by generating synthetic, balanced datasets. Though we predict that a larger dataset will still have varying proportions of certain influence cues, it will facilitate machine learning with a larger volume of data points.
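
Until such a dataset is available, one common mitigation (not described in the paper) is to weight classes inversely to their frequency during training. The sketch below illustrates this per influence cue with scikit-learn, using hypothetical label vectors.

```python
# One common mitigation for label imbalance (not described in the paper):
# per-cue "balanced" class weights, computed with scikit-learn. The label
# vectors below are hypothetical.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = {
    "authority":     np.array([1, 1, 1, 1, 1, 1, 1, 0, 1, 1]),  # very frequent cue
    "reciprocation": np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0]),  # rare cue
}

for cue, y in labels.items():
    weights = compute_class_weight(class_weight="balanced",
                                   classes=np.array([0, 1]), y=y)
    # Rare positive classes get larger weights; these can be passed to most
    # scikit-learn classifiers via their `class_weight` parameter.
    print(cue, dict(zip([0, 1], np.round(weights, 2))))
```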

Additionally, our dataset is U.S.-centric, a limitation also identified in prior work (e.g., [27, 19, 37]). All texts were verified to be in English, and all three groups of data were presumably aimed at an American audience. We therefore plan future work to test Lumen in different cultural contexts.

ML Framework. Lumen, as a learning framework, has three main limitations. First, although the two-level architecture provides a high degree of flexibility and is general enough to include other predictive features in the future, it also introduces complexity and overhead, because tuning the hyper-parameters and training the model become more computationally expensive.
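
To illustrate the source of this overhead (an illustrative sketch, not Lumen's actual implementation), consider a pipeline whose first level infers topic features and whose second level classifies on top of them; jointly tuning both levels multiplies the number of model fits.

```python
# Illustrative two-level setup (NOT Lumen's actual implementation): level 1
# infers topic features, level 2 classifies on top of them. Jointly tuning
# both levels multiplies the number of model fits.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

texts = ["free reward claim your prize now", "city council met on tuesday",
         "urgent verify your account today", "the report was released quietly"]
y = [1, 0, 1, 0]  # hypothetical binary cue labels

pipe = Pipeline([
    ("bow", CountVectorizer()),
    ("topics", LatentDirichletAllocation(random_state=0)),  # level 1: features
    ("clf", LogisticRegression(max_iter=1000)),             # level 2: classifier
])

# 2 topic settings x 2 regularization settings x 2 CV folds = 8 fits (plus a
# final refit): the search space grows multiplicatively across levels.
grid = GridSearchCV(pipe, {"topics__n_components": [2, 3],
                           "clf__C": [0.1, 1.0]}, cv=2)
grid.fit(texts, y)
print(grid.best_params_)
```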

Second, topic modeling, a key component of Lumen, generally requires a large number of documents of a certain length (usually thousands of documents with hundreds of words each, such as a collection of scientific paper abstracts) for reliable topic inference. This limits Lumen’s effectiveness on short texts or when training data is limited.
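
A simple pre-check can make this requirement explicit before fitting a topic model; the thresholds below are illustrative rules of thumb, not values prescribed by the paper.

```python
# Rough corpus-size sanity check before topic modeling; the thresholds are
# illustrative rules of thumb, not values prescribed by the paper.
def topic_modeling_feasibility(docs, min_docs=1000, min_tokens=100):
    """Report whether a corpus is large/long enough for reliable topic inference."""
    lengths = [len(d.split()) for d in docs]
    avg_len = sum(lengths) / max(len(lengths), 1)
    ok = len(docs) >= min_docs and avg_len >= min_tokens
    return {"n_docs": len(docs), "avg_tokens": round(avg_len, 1), "adequate": ok}

# A hypothetical short-text corpus (e.g., social media ads) likely falls short:
print(topic_modeling_feasibility(["free offer click now", "join us today"]))
```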

Third, some overlap might exist between the LIWC influence features and emotional salience (e.g., the sad LIWC category may correlate with negative emotional salience), which may negatively impact the prediction performance of the machine learning algorithm used in Lumen. In general, correlated input features can make it harder for machine learning algorithms to converge.
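
A lightweight way to surface such overlap before training is to inspect pairwise feature correlations; the sketch below uses hypothetical values, with `liwc_sad` and `neg_sentiment` standing in for a LIWC category and an emotional-salience score.

```python
# Surfacing overlapping features before training: pairwise correlations.
# `liwc_sad` and `neg_sentiment` are hypothetical stand-ins for a LIWC
# category and an emotional-salience score; values are made up.
import pandas as pd

features = pd.DataFrame({
    "liwc_sad":      [0.1, 0.5, 0.4, 0.0, 0.6],
    "neg_sentiment": [0.2, 0.6, 0.5, 0.1, 0.7],
    "topic_3":       [0.3, 0.1, 0.4, 0.2, 0.0],
})

# Pairs with |r| close to 1 are candidates for dropping or combining, which
# can also stabilize and speed up training.
print(features.corr().round(2))
```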

7 Conclusion

In this paper, we introduced Lumen, a learning-based framework to expose influence cues in text by combining topic modeling, LIWC, sentiment analysis, and machine learning in a two-layer hierarchical architecture. Lumen was trained and tested with a newly developed dataset of 2,771 texts manually labeled according to the influence cues applied in them. Quantitative analysis of the dataset showed that authority was the most prevalent influence cue, followed by subjectivity and commitment; gain framing was most prevalent in phishing emails, and use of emphasis commonly occurred in fake, partisan, and mainstream news articles. Lumen presented performance comparable to LSTM in terms of F1-micro score, but better interpretability, providing insights into feature importance. Our results highlight the promise of ML to expose influence cues in text, with the goal of application in tools that improve the accuracy of human detection of cyber-social engineering threats, potentially triggering users to think analytically. We advocate that the next generation of interventions to mitigate deception expose influence cues to users, complementing automatic detection to address new deceptive campaigns and improve user decision-making when confronted with potentially suspicious text.

Acknowledgments

The authors would like to thank the coders who helped with the labeling of the influence cues in our dataset. This work was supported by the University of Florida Seed Fund award P0175721 and by the National Science Foundation under Grant No. 2028734. This material is based upon work supported by (while serving at) the National Science Foundation.

References

  • [1] N. Akbar (2014) Analysing persuasion principles in phishing emails. Master’s Thesis, University of Twente. Cited by: §2.2.1.
  • [2] D. Akhawe and A. P. Felt (2013) Alice in Warningland: A Large-scale Field Study of Browser Security Warning Effectiveness. In 22nd USENIX Security Symposium, pp. 257–272. Cited by: §2.1.1.
  • [3] F. Sadeghi, A. J. Bidgoly, and H. Amirkhani (2020) FNID: fake news inference dataset. IEEE Dataport. External Links: Document, Link Cited by: §3.1.
  • [4] D. Ariely, U. Gneezy, G. Loewenstein, and N. Mazar (2009) Large Stakes and Big Mistakes. Rev. Econ. Stud. 76 (2), pp. 451–469. External Links: ISSN 0034-6527, Document Cited by: §4.4.
  • [5] B. Bago, D. G. Rand, and G. Pennycook (2020-08) Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. J. Exp. Psychol. Gen. 149 (8), pp. 1608–1613 (en). Cited by: §2.1.2.
  • [6] M. Barnidge and C. Peacock (2019-07) A third wave of selective exposure research? The challenges posed by hyperpartisan news on social media. Media and Communication 7 (3), pp. 4–7. Cited by: §1, §1, §2.1.2.
  • [7] R. Basnet, S. Mukkamala, and A. H. Sung (2008) Detection of Phishing Attacks: A Machine Learning Approach. In Soft Computing Applications in Industry, B. Prasad (Ed.), pp. 373–383. External Links: Document, ISBN 978-3-540-77465-5, Link Cited by: §2.1.1.
  • [8] D. M. Blei, A. Y. Ng, and M. I. Jordan (2003) Latent Dirichlet Allocation. Journal of Machine Learning Research 3 (Jan), pp. 993–1022. Cited by: §4.2.
  • [9] D. M. Blei and J. D. Lafferty (2009) Topic models. In Text Mining: Classification, Clustering, and Applications, A. N. Srivastava and M. Sahami (Eds.), pp. 71–89. Cited by: §4.2.
  • [10] E. Bursztein and D. Oliveira (2019-08) Deconstructing the Phishing Campaigns that Target Gmail Users. Note: BlackHat 2019 Cited by: §2.1.1.
  • [11] M. Chandrasekaran, K. Narayanan, and S. Upadhyaya (2006) Phishing email detection based on structural properties. In NYS Cyber Security Conference, Vol. 3. Cited by: §2.1.1.
  • [12] R. B. Cialdini (1993) Influence: The Psychology of Persuasion. Morrow New York. Cited by: §1, §2.2.1, §3.1.2.
  • [13] R. B. Cialdini (1993) The psychology of persuasion. New York. Cited by: §3.1.1.
  • [14] R. B. Cialdini (2001) The Science of Persuasion. Scientific American 284 (2), pp. 76–81. External Links: ISSN 00368733, 19467087, Link Cited by: §1, §2.2.1, §3.1.2.
  • [15] Z. Dong, A. Kapadia, J. Blythe, and L. J. Camp (2015-05) Beyond the Lock Icon: Real-time Detection of Phishing Websites Using Public Key Certificates. In 2015 APWG Symposium on Electronic Crime Research, pp. 1–12. External Links: ISSN 2159-1245, Document Cited by: §2.1.1.
  • [16] (2021-02) Fact check: Courts have dismissed multiple lawsuits of alleged electoral fraud presented by Trump campaign. Reuters. Cited by: §1.
  • [17] A. P. Felt, A. Ainslie, R. W. Reeder, S. Consolvo, S. Thyagaraja, A. Bettes, H. Harris, and J. Grimes (2015) Improving SSL Warnings: Comprehension and Adherence. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, New York, NY, USA, pp. 2893–2902. External Links: ISBN 9781450331456, Document Cited by: §2.1.1.
  • [18] A. Ferreira and S. Teles (2019) Persuasion: How phishing emails can influence users and bypass security measures. Vol. 125. Cited by: §2.2.1.
  • [19] R. Fletcher, A. Cornia, L. Graves, and R. K. Nielsen (2018-02) Measuring the reach of “fake news” and online disinformation in Europe. Technical report University of Oxford: Reuters Institute for the Study of Journalism. Cited by: §6.2.
  • [20] R. P. Fuller and R. E. Rice (2014) Lights, camera, conflict: newspaper framing of the 2008 screen actors guild negotiations. Journalism & Mass Communication Quarterly 91 (2), pp. 326–343. Cited by: §3.1.1.
  • [21] J. Graham, J. Haidt, and B. A. Nosek (2009-05) Liberals and conservatives rely on different sets of moral foundations. J. Pers. Soc. Psychol. 96 (5), pp. 1029–1046 (en). Cited by: §2.2.2.
  • [22] T. L. Griffiths and M. Steyvers (2004) Finding Scientific Topics. Proceedings of the National Academy of Sciences of the United States of America 101 (Suppl.), pp. 5228–5235. External Links: Document, ISSN 0027-8424, Link Cited by: §4.2.
  • [23] T. Hove, H. Paek, T. Isaacson, and R. T. Cole (2013) Newspaper portrayals of child abuse: frequency of coverage and frames of the issue. Mass Communication and Society 16 (1), pp. 89–108. Cited by: §3.1.1.
  • [24] C. J. Hutto and E. Gilbert (2014) Vader: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. In Eighth international AAAI Conference on Weblogs and Social Media, Cited by: §4.1, §4.4, §4.5.1.
  • [25] D. Kahneman and A. Tversky (1979) Prospect Theory: An Analysis of Decision under Risk. Econometrica 47 (2), pp. 263. External Links: ISSN 0012-9682, Document Cited by: §1, §1, §2.1.2, §3.1.2, §6.1.
  • [26] D. Kahneman and A. Tversky (1980) Prospect theory. Econometrica 12. Cited by: §3.1.1.
  • [27] B. Kalsnes and A. O. Larsson (2021-02) Facebook News Use During the 2017 Norwegian Elections—Assessing the Influence of Hyperpartisan News. Journalism Practice 15 (2), pp. 209–225. Cited by: §1, §6.2.
  • [28] K. Kircanski, N. Notthoff, M. DeLiema, G. R. Samanez-Larkin, D. Shadel, G. Mottola, L. L. Carstensen, and I. H. Gotlib (2018-03) Emotional Arousal May Increase Susceptibility to Fraud in Older and Younger Adults. Psychol. Aging 33 (2), pp. 325–337 (en). External Links: ISSN 0882-7974, 1939-1498, Document Cited by: §1, §4.4.
  • [29] A. Kühberger (1998) The Influence of Framing on Risky Decisions: A Meta-analysis. Organ. Behav. Hum. Decis. Process. 75 (1), pp. 23–55. External Links: ISSN 0749-5978, Document Cited by: §3.1.2.
  • [30] P. Lawson, O. Zielinska, C. Pearson, and C. B. Mayhorn (2017) Interaction of Personality and Persuasion Tactics in Email Phishing Attacks. Vol. 61. Cited by: §2.2.1, §2.2.1.
  • [31] C. Lim (2018-07) Checking how fact-checkers check. Research & Politics 5 (3), pp. 2053168018786848. Cited by: §2.1.2.
  • [32] T. Lin, D. E. Capecci, D. M. Ellis, H. A. Rocha, S. Dommaraju, D. S. Oliveira, and N. C. Ebner (2019-07) Susceptibility to Spear-Phishing Emails: Effects of Internet User Demographics and Email Content. ACM Trans. Comput.-Hum. Interact. 26 (5), pp. 32:1–32:28. External Links: Document, ISSN 1073-0516, Link Cited by: §2.2.1.
  • [33] M. Lombard, J. Snyder-Duch, and C. C. Bracken (2002) Content analysis in mass communication: assessment and reporting of intercoder reliability. Human communication research 28 (4), pp. 587–604. Cited by: §3.1.1.
  • [34] A. C. Morey and W. P. Eveland Jr (2016) Measures of political talk frequency: assessing reliability and meaning. Communication Methods and Measures 10 (1), pp. 51–68. Cited by: §3.1.1.
  • [35] R. S. Mueller (2019) Report on the investigation into Russian interference in the 2016 presidential election. US Dept. of Justice. Washington, DC. Cited by: §1.
  • [36] M. Naili, A. H. Chaibi, and H. H. B. Ghezala (2017) Comparative study of word embedding methods in topic segmentation. Procedia Computer Science 112, pp. 340–349. Note: Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 21st International Conference, KES-2017, 6–8 September 2017, Marseille, France. External Links: ISSN 1877-0509 Cited by: §4.5.3.
  • [37] N. Newman, R. Fletcher, A. Kalogeropoulos, and R. K. Nielsen (2019) Reuters Institute Digital News Report 2019. Technical report Reuters Institute. Cited by: §6.2.
  • [38] NLTK.org (2020) Natural Language Toolkit. External Links: Link Cited by: §4.5.1.
  • [39] A. Oest, Y. Safaei, A. Doupé, G. Ahn, B. Wardman, and K. Tyers (2019) PhishFarm: A Scalable Framework for Measuring the Effectiveness of Evasion Techniques against Browser Phishing Blacklists. In 2019 IEEE Symposium on Security and Privacy, Cited by: §2.1.1.
  • [40] University of Arizona (2019) Phishing alerts, UA Security. Cited by: §3.1.
  • [41] Penn State Office of Information Security (2019) Phishing. Cited by: §3.1.
  • [42] University of Michigan (2019) Phishes & scams. Cited by: §3.1.
  • [43] University of Minnesota (2019) Phishing scams targeting the UMN. Cited by: §3.1.
  • [44] University of Pittsburgh (2019) Alerts & notifications, Information Technology. Cited by: §3.1.
  • [45] D. Oliveira, H. Rocha, H. Yang, D. Ellis, S. Dommaraju, M. Muradoglu, D. Weir, A. Soliman, T. Lin, and N. Ebner (2017) Dissecting Spear Phishing Emails for Older vs Young Adults: On the Interplay of Weapons of Influence and Life Domains in Predicting Susceptibility to Phishing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, New York, NY, USA, pp. 6412–6424. External Links: Document, ISBN 978-1-4503-4655-9 Cited by: §2.2.1, §2.2.1, §3.1.1, §6.1.
  • [46] D. S. Oliveira, T. Lin, H. Rocha, D. Ellis, S. Dommaraju, H. Yang, D. Weir, S. Marin, and N. C. Ebner (2019-04) Empirical Analysis of Weapons of Influence, Life Domains, and Demographic-Targeting in Modern Spam - An Age-Comparative Perspective. Crime Science 8 (1), pp. 3 (en). External Links: ISSN 2193-7680, Document Cited by: §2.2.1, §3.1.1.
  • [47] K. A. Peace and S. M. Sinclair (2012-02) Cold-blooded Lie Catchers? An Investigation of Psychopathy, Emotional Processing, and Deception Detection: Psychopathy and Deception Detection. Legal and Criminological Psychology 17 (1), pp. 177–191. External Links: ISSN 1355-3259, Document Cited by: §4.4.
  • [48] T. Peng, I. Harris, and Y. Sawa (2018-01) Detecting Phishing Attacks Using Natural Language Processing and Machine Learning. In 2018 IEEE 12th International Conference on Semantic Computing, pp. 300–301. External Links: Document Cited by: §2.1.1.
  • [49] J. W. Pennebaker, R. L. Boyd, K. Jordan, and K. Blackburn (2015) The Development and Psychometric Properties of LIWC. Technical report Cited by: §4.1, §4.3, footnote 3.
  • [50] G. Pennycook and D. G. Rand (2021-05) The Psychology of Fake News. Trends Cogn. Sci. 25 (5), pp. 388–402 (en). Cited by: §2.1.2, §2.1.2, §2.1.2.
  • [51] W. D. Perreault Jr and L. E. Leigh (1989) Reliability of nominal data based on qualitative judgments. Journal of marketing research 26 (2), pp. 135–148. Cited by: §3.1.1.
  • [52] D. Ramage, D. Hall, R. Nallapati, and C. D. Manning (2009) Labeled LDA: A Supervised Topic Model for Credit Attribution in Multi-labeled Corpora. Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (August), pp. 248–256. External Links: Document, ISBN 978-1-932432-59-6, ISSN 1932432590 Cited by: §4.5.3, §4.5.3.
  • [53] R. E. Rice, A. Gustafson, and Z. Hoffman (2018) Frequent but accurate: a closer look at uncertainty and opinion divergence in climate change print news. Environmental Communication 12 (3), pp. 301–321. Cited by: §3.1.1.
  • [54] R. M. Ross, D. G. Rand, and G. Pennycook (2021) Beyond ”fake news”: Analytic thinking and the detection of false and hyperpartisan news headlines. Judgement and Decision Making 16 (2), pp. 484–504. Cited by: §1, §1, §1, §2.1.2.
  • [55] A. J. Rothman and P. Salovey (1997) Shaping Perceptions to Motivate Healthy Behavior: The Role of Message Framing. Psychol. Bull. 121 (1), pp. 3–19. External Links: ISSN 0033-2909, Document Cited by: §1, §3.1.2.
  • [56] J. A. Russell (1980) A Circumplex Model of Affect. Journal of Personality and Social Psychology 39 (6), pp. 1161–1178. External Links: ISSN 0022-3514, Document Cited by: §1, §4.4.
  • [57] U. I. Services (2019) Phish bowl/phishing scams. Cited by: §3.1.
  • [58] S. Sheng, B. Magnien, P. Kumaraguru, A. Acquisti, L. F. Cranor, J. Hong, and E. Nunge (2007) Anti-Phishing Phil: The Design and Evaluation of a Game That Teaches People Not to Fall for Phish. In Proceedings of the 3rd Symposium on Usable Privacy and Security, ACM, New York, NY, USA, pp. 88–99. External Links: ISBN 9781595938015, Document Cited by: §2.1.1.
  • [59] H. Shi, M. Gerlach, I. Diersen, D. Downey, and L. Amaral (2019) A New Evaluation Framework for Topic Modeling Algorithms Based on Synthetic Corpora. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS). Cited by: §4.5.1.
  • [60] C. E. Shyni, S. Sarju, and S. Swamynathan (2016) A Multi-Classifier Based Prediction Model for Phishing Emails Detection Using Topic Modeling, Named Entity Recognition and Image Processing. Circuits and Systems 07 (09), pp. 2507–2520. External Links: Document, ISSN 2153-1285 Cited by: §2.1.1.
  • [61] M. Smiles (2019) Phishing scam reports archive. Cited by: §3.1.
  • [62] B. L. Smith (2021-01) Propaganda. In Encyclopedia Britannica, Cited by: §1.
  • [63] Social Media Advertisements. Note: https://intelligence.house.gov/social-media-content/social-media-advertisements.htm. Accessed: 2020-8-9. Cited by: §3.1.
  • [64] F. Stajano and P. Wilson (2011-03) Understanding Scam Victims: Seven Principles for Systems Security. Commun. ACM 54 (3), pp. 70–75. External Links: ISSN 0001-0782, Document Cited by: §2.2.1.
  • [65] M. Steyvers and T. Griffiths (2007) Probabilistic Topic Models. Handbook of latent semantic analysis 427 (7), pp. 424–440. Cited by: §4.2.
  • [66] N. J. Stroud (2014-05) Selective Exposure Theories. The Oxford Handbook of Political Communication. Cited by: footnote 1.
  • [67] J. Sunshine, S. Egelman, H. Almuhimedi, N. Atri, and L. F. Cranor (2009) Crying Wolf: An Empirical Study of SSL Warning Effectiveness. In USENIX Security Symposium, Montreal, Canada, pp. 399–416. Cited by: §2.1.1.
  • [68] Y. R. Tausczik and J. W. Pennebaker (2010-03) The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. Journal of Language and Social Psychology 29 (1), pp. 24–54. External Links: Document, ISSN 0261-927X Cited by: §4.3.
  • [69] Twitter Help Center. Government and state-affiliated media account labels (en). Note: https://help.twitter.com/en/rules-and-policies/state-affiliated. Accessed: 2021-5-19. Cited by: §1.
  • [70] L. University (2019) Recent phishing examples, library & technology services. Cited by: §3.1.
  • [71] A. van der Heijden and L. Allodi (2019-08) Cognitive Triaging of Phishing Attacks. In 28th USENIX Security Symposium, Santa Clara, CA, pp. 1309–1326. External Links: ISBN 978-1-939133-06-9 Cited by: §4.5.3, §4.5.3.
  • [72] A. Vance, B. Kirwan, D. Bjornn, J. Jenkins, and B. B. Anderson (2017) What Do We Really Know About How Habituation to Warnings Occurs Over Time?: A Longitudinal fMRI Study of Habituation and Polymorphic Warnings. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, pp. 2215–2227. External Links: ISBN 9781450346559, Document Cited by: §2.1.1.
  • [73] R. T. Wright, M. L. Jensen, J. B. Thatcher, M. Dinger, and K. Marett (2014) Influence techniques in phishing attacks: An examination of vulnerability and resistance. Information Systems Research 25 (2), pp. 385–400. External Links: Document, ISSN 15265536 Cited by: §2.2.1.
  • [74] W. W. Xu, Y. Sang, and C. Kim (2020-05) What Drives Hyper-Partisan News Sharing: Exploring the Role of Source, Style, and Content. Digital Journalism 8 (4), pp. 486–505. Cited by: §2.2.2.

Appendix A Codebook

Persuasion

Persuasion constitutes a series of influence principles based on Robert Cialdini’s work, split into the following categories:

  1. Authority or Expertise/Source Credibility

  2. Reciprocation

  3. Commitment (sub-categories: Indignation, Call to Action)

  4. Liking

  5. Scarcity/Urgency/Opportunity

  6. Social Proof (sub-category: Admonition)

Authority or Expertise/Source Credibility. Humans tend to comply with requests made by figures of authority and/or with expertise/credibility. The text can include:

  • Literal authority (e.g., law enforcement personnel, lawyers, judges, politicians)

  • Reputable/credible entity that could exert some power over people (e.g., a bank)

  • Indirect authority (especially a fictitious company/person) that builds a setting of authority

Examples:

  • “Tupac Shakur was indeed not just one of the greatest rappers of all time but a worldly icon whose status in hip-hop culture can never be replaced. His revolutionary knowledge mixed with street experience made him powerful unstoppable force that spoke to the hearts of millions of people.”

  • “According to data from Mapping Police Violence”

  • “Autopsy says”

  • “Fox & Friends hosts declare”

Reciprocation. Humans tend to repay, in kind, what another person has provided them. Text might first give/offer something, expecting that the person/user will reciprocate. Even if the person does not reciprocate, s/he will still keep the ”gift.” Therefore, if the user thinks they received a gift, they may reciprocate the kindness (and may only find out later that the ”gift” was fake).

Example:

  • “Aww! Because you need such a cutie on your timeline!”

Commitment. Once humans have taken a stand, they will feel pressured to behave in line with their commitment. Text leverages a role assumed by the target and their commitment to that role. Examples include petitions and donation/charity appeals (gun control, animal abuse, children's issues, political issues), or engagement with political affiliations.

Examples:

  • “But let’s remember Tupac and his ability to question the social order. Changes, one of his popular songs, asks everyone to change their lifestyles for better society. He always asked people to share with each other and to learn to love each other.”

  • “Patriotism comes from your heart… follow its dictates and don’t live a false life. Join!”

  • “We will stand for our right to keep and bear arms!”

  • “Black Matters”

Indignation. Still within the definition of commitment, text employing indignation also focuses on anger or annoyance provoked by treatment perceived as unfair, unjust, unworthy, or mean.

Examples:

  • “Why should we be a target for police violence and harassment?”

  • “Why the pool party in Georgia is a silent story? Why the police was not aware of a large party? Why this story has no national outrage? Is it ok when a black teenager dies?”

  • “Obama never tried to protect blacks from police pressure”

Call to Action. Still within the definition of commitment, ads/text employing a call to action represent an exhortation or stimulus to do something in order to achieve an aim or deal with a problem: a piece of content intended to induce a viewer, reader, or listener to perform a specific act, typically taking the form of an instruction or directive (e.g., buy now or click here).

Examples:

  • “Stop racism! We all belong to ONE HUMAN RACE.”

  • “We really can change the world if we stay united”

  • “We can be heard only when we stand together”

  • “White House must reduce the unemployment rates of black population”

  • “If this is a war against police - we’re joining this war on the cop’s side!”

  • “If we want to stop it, we should fight as our ancestors did it for centuries.”

Liking. Humans tend to comply with requests from people they like or with whom they share similarities. Forms of liking:

  • Physical attractiveness: Good looks suggest other favorable traits, e.g., honesty, humor, trustworthiness

  • Similarity: We like people similar to us in terms of interests, opinions, personality, background, etc.

  • Compliments: We love to receive praises, and tend to like those who give it

  • Contact and Cooperation: We feel a sense of commonality when working with others to fulfill a common goal

  • Conditioning and Association: We like looking at models, and thus become more favorable towards the cars behind them

Liking may also come in the form of establishing familiarity or rapport with the object of liking.

Example:

  • “What a beautiful and intelligent child she is. How magnificent is her mind…”

Scarcity/Urgency/Opportunity. Opportunities seem more valuable when their availability is limited. Text can leverage this principle by tricking or luring an Internet user into clicking on a link to avoid missing out on a “once-in-a-lifetime” opportunity, creating a sense of urgency.

Examples:

  • “Is it time to call out the national guard?” (Urgency)

  • “Free Figure’s Black Power Rally at VCU” (Opportunity)

  • “CLICK TO GET LIVE UPDATES ON OUR PAGE” (Opportunity)

Social Proof. People tend to mimic what the majority of people do or seem to be doing. People let their guard and suspicion down when everyone else appears to share the same behaviors and risks. In this way, they will not be held solely responsible for their actions (i.e., herd mentality). The actions of the group drive the decision making process.

Examples:

  • “More riots are coming this summer”

  • “America is deceased. Islamic terror has penetrated our homeland and now spreads at a threw. Remember Victims Of Islamic Terror”

Admonition. Within the definition of social proof, admonition pertains to texts that may include the following:

  • Caution, advise, or counsel against something.

  • Reprove or scold, especially in a mild and good-willed manner: The teacher admonished him about excessive noise.

  • Urge to a duty or admonish them about their obligations.

Examples:

  • “More riots are coming this summer”

  • “America is deceased. Islamic terror has penetrated our homeland and now spreads at a threw Remember Victims Of Islamic Terror”

Slant

Slant encompasses subjectivity and objectivity.

Subjectivity. Subjective sentences generally refer to personal opinion, emotion, or judgment. The use of popular adverbs (e.g., very, actually), upper case, exclamation and interrogation marks, and hashtags indicates subjectivity.

Examples:

  • “I doubt that it’s true”

  • “A beautiful message was seen on the streets of the capitol,”

  • “A timely message for today.”

  • “No matter what Defense Secretary or POTUS are saying they don’t fool me with promises of gay military equality as key to the nation’s agenda.”

  • “This is something that America has a serious issue with - RACISM!”

  • “Is it time to call out the national guard?”

  • “This makes me ANGRY!”

Objectivity. Objective sentences refer to factual information, are based on evidence, or present evidence. They may or may not include statistics.

Examples:

  • “It has been discovered that”

  • “According to data from Mapping Police Violence”

  • “The McKinney Police Department, Chief Of Police Greg Conley said”

Framing

Gain/Loss Framing refers to the presentation of a message (e.g., health message, financial options, advertisements, etc.) as implying a possible gain (e.g., referring to the possible benefits of performing a behavior) vs. implying a possible loss (e.g., referring to the costs of not performing a behavior).

Gain. People are likely to act in ways that benefit them in some way; a reward will increase the probability of a behavior. Gain framing promises that a product (or something else) can provide some form of self-improvement or benefit to the user. This can come in the form of an ad, job offer, invitation to join a group, etc.

Examples:

  • “Think about the benefits of recycling.”

  • “Think about what you can gain if you join.”

Loss. People are likely to act in ways that reduce loss/harm to them; avoiding a loss will increase the probability of a behavior. Loss framing promises that a product (or something else) can help avoid some undesirable behavior/outcome.

Examples:

  • “Think about the costs of recycling.”

  • “Think about what you can lose if you don’t join.”

Attribution of Blame/Guilt

When the text blames an “other” (a who or a what) for the wrong/bad things happening. The “who” can be a person, organization, etc., and the “what” can be a cause, object, etc.

Example:

  • “…Hillary is a Satan, and her crimes and lies had proved just how evil she is.”

Emphasis

Emphasis refers to the use of all caps text, several exclamation points, several question marks, or anything used to call attention.

Example:

  • “Our women are the most powerful!!”

  • “LATIN WOMEN CAN DO THINGS TO MEN WITH THERE EYES”
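
As an illustration only (not part of the coding protocol or of Lumen), a simple heuristic for flagging this cue could look for all-caps words and repeated punctuation, for example:

```python
# Illustration only (not part of the coding protocol or of Lumen): a simple
# heuristic flagging the "use of emphasis" cue via all-caps words or repeated
# exclamation/question marks.
import re

def has_emphasis(text: str) -> bool:
    all_caps_word = re.search(r"\b[A-Z]{3,}\b", text)  # e.g., "RACISM"
    repeated_punct = re.search(r"[!?]{2,}", text)      # e.g., "!!", "?!?"
    return bool(all_caps_word or repeated_punct)

print(has_emphasis("Our women are the most powerful!!"))   # True
print(has_emphasis("The committee met on Tuesday."))       # False
```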

Appendix B LIWC Emotional Features Summary

Influence Cue or Text Type    Anxiety  Anger  Sadness  Reward  Risk  Time  Money
Authority 0.23 0.69 0.36 1.69 0.78 6.68 2.21
Commitment 0.20 0.67 0.28 1.61 0.78 5.97 1.87
Call to Action 0.18 0.60 0.26 1.59 0.94 5.75 2.19
Indignation 0.34 1.62 0.46 1.49 0.91 6.06 0.88
Liking 0.13 0.34 0.24 1.84 0.57 5.87 1.67
Reciprocation 0.19 0.32 0.36 1.53 0.59 5.00 1.14
Scarcity 0.18 0.51 0.30 1.67 1.01 5.84 2.73
Social Proof 0.26 0.69 0.37 1.61 0.79 5.81 1.64
Admonition 0.34 1.14 0.39 1.52 1.09 6.43 1.52
Emphasis 0.20 0.51 0.26 1.43 0.87 5.51 2.27
Blame/guilt 0.30 1.22 0.47 1.47 1.00 6.85 1.60
Gain framing 0.12 0.23 0.24 1.89 1.09 5.50 3.42
Loss framing 0.20 0.41 0.37 1.70 1.16 5.78 3.73
Objectivity 0.23 0.70 0.36 1.60 0.74 6.71 2.31
Subjectivity 0.22 0.61 0.32 1.60 0.78 6.05 1.96
Fake News 0.25 1.08 0.35 1.35 0.80 7.51 1.56
(News) Center 0.19 0.34 0.39 1.54 0.60 6.20 3.01
(News) Left 0.33 1.03 0.46 2.26 0.74 8.85 1.14
(News) Right 0.25 1.01 0.30 1.69 0.79 6.66 0.88
Phishing Email 0.06 0.05 0.15 1.63 1.21 4.96 4.27
IRA Ads 0.12 0.55 0.18 0.65 0.54 2.34 0.56
Average 0.20 0.59 0.32 1.56 0.75 6.02 2.11
TABLE III: Breakdown of LIWC emotional features for each influence cue and text type; the highest values for each LIWC category (columns) per influence cue and text type are highlighted in bold text.