Experiments in Detecting Persuasion Techniques in the News

by Seunghak Yu et al.
Hamad Bin Khalifa University

Many recent political events, such as the 2016 US Presidential election or the 2018 Brazilian elections, have drawn the attention of institutions and of the general public to the role of the Internet and social media in influencing the outcome of such events. We argue that a safe democracy is one in which citizens have tools to make them aware of propaganda campaigns. We propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques, as well as their type. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.








1 Introduction

Journalistic organisations, such as Media Bias/Fact Check,111http://mediabiasfactcheck.com/ provide reports on news sources, highlighting the ones that are propagandistic. Obviously, such analysis is time-consuming and possibly biased, and it cannot be applied to the enormous amount of news that floods social media and the Internet. Research on detecting propaganda has focused primarily on classifying entire articles as propagandistic vs. non-propagandistic Barrón-Cedeño et al. (2019); Barrón-Cedeno et al. (2019); Rashkin et al. (2017). Such learning systems are trained using gold labels obtained by transferring the label of the media source, as per the Media Bias/Fact Check judgment, to each of its articles. Such a distant supervision setting inevitably introduces noise in the learning process Horne et al. (2018), and the resulting systems tend to lack explainability.

We argue that in order to study propaganda in a sound and reliable way, we need to rely on high-quality trusted professional annotations and it is best to do so at the fragment level, targeting specific techniques rather than using a label for an entire document or an entire news outlet. Therefore, we propose a novel task: identifying specific instances of propaganda techniques used within an article. In particular, we design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.

Our corpus could enable research in propagandistic and non-objective news, including the development of explainable AI systems. A system that can detect instances of use of specific propagandistic techniques would be able to make it explicit to the users why a given article was predicted to be propagandistic. It could also help train the users to spot the use of such techniques in the news.

2 Corpus Annotated with Propaganda Techniques

We retrieved 451 news articles from 48 news outlets, both propagandistic and non-propagandistic according to Media Bias/Fact Check. Professional annotators222http://www.aiidatapro.com. The company performs professional annotations in the NLP domain, although its annotators were not experts in propaganda techniques before this work. labeled the articles according to eighteen persuasion techniques Miller (1939), ranging from techniques that leverage the emotions of the audience, such as loaded language, appeal to authority Goodwin (2011), and slogans Dan (2015), to logical fallacies, such as straw man Walton (1996) (misrepresenting someone's opinion), hidden ad-hominem fallacies, and red herring (Weston, 2018, p. 78) (presenting irrelevant data).333For a complete list, see http://propaganda.qcri.org/annotations/definitions.html Some of these techniques were studied in tasks such as hate speech detection and computational argumentation Habernal et al. (2018).

After the consolidation phase, the annotated technique instances cover 35.2% of the sentences in the corpus. The distribution of the techniques is also uneven: loaded language occurs far more often than rare techniques such as straw man (more statistics about the corpus can be found in Da San Martino et al. (2019)).

We define two tasks based on this corpus: (i) SLC (Sentence-Level Classification), which asks to predict whether a sentence contains at least one propaganda technique, and (ii) FLC (Fragment-Level Classification), which asks to identify both the spans and the type of propaganda technique. Note that the two tasks operate at different granularities: tokens for FLC and sentences for SLC. We split the corpus into training, development, and test partitions containing 293, 57, and 101 articles and 14,857, 2,108, and 4,265 sentences, respectively.

Our task requires specific evaluation measures that give credit for partial overlaps of fragments. Thus, in our precision and recall versions, we give partial credit to imperfect matches at the character level, as in plagiarism detection (Potthast et al., 2010).

Let $s$ and $t$ be two fragments, i.e., sequences of characters. We measure the overlap of two annotated fragments as

$C(s, t, h) = \frac{|s \cap t|}{h} \, \delta\big(l(s), l(t)\big),$

where $h$ is a normalizing factor, $l(a)$ is the labelling of fragment $a$, and $\delta(x, y) = 1$ if $x = y$, and $0$ otherwise.

We now define variants of precision and recall able to account for the imbalance in the corpus:

$P(S, T) = \frac{1}{|S|} \sum_{s \in S,\, t \in T} C(s, t, |s|), \qquad R(S, T) = \frac{1}{|T|} \sum_{s \in S,\, t \in T} C(s, t, |t|), \qquad (1)$

where $S$ and $T$ are the sets of predicted and gold fragments, respectively. In Eq. (1), we define $P(S, T)$ to be zero if $|S| = 0$, and $R(S, T)$ to be zero if $|T| = 0$. Finally, we compute the harmonic mean of the precision and recall in Eq. (1), and we obtain an F$_1$-measure. Having a separate function $C$ for comparing two annotations gives us additional flexibility compared to standard NER measures that operate at the token/character level; e.g., we can change the factor that gives credit for partial overlaps and be more forgiving when only a few characters are wrong.

3 Models

We start from BERT (Devlin et al., 2019), and we design three baselines.

BERT. We add a linear layer on top of BERT, and we fine-tune it, as suggested in Devlin et al. (2019). For the FLC task, we feed the final hidden representation for each token to a layer $L_{19}$ that makes a 19-way classification: does this token belong to one of the eighteen propaganda techniques or to none of them (cf. Figure 1-a). For the SLC task, we feed the final hidden representation for the special [CLS] token, which BERT uses to represent the full sentence, to a two-dimensional layer $L_2$ to make a binary classification.
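As a rough illustration of the two heads (the hidden size of 768 is BERT-base's; the random, untrained numpy weights stand in for the actual fine-tuned model), both reduce to linear maps over the final hidden states:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 768          # BERT-base hidden size
n_tokens = 12         # tokens in one example sentence

# Final BERT hidden states: one vector per token, with [CLS] at position 0.
H = rng.normal(size=(n_tokens, hidden))

# Per-token 19-way head (18 techniques + "none") for FLC.
W19, b19 = rng.normal(size=(hidden, 19)), np.zeros(19)
flc_logits = H @ W19 + b19            # shape (n_tokens, 19)

# Binary head on the [CLS] vector for SLC.
W2, b2 = rng.normal(size=(hidden, 2)), np.zeros(2)
slc_logits = H[0] @ W2 + b2           # shape (2,)

print(flc_logits.shape, slc_logits.shape)  # (12, 19) (2,)
```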

BERT-Joint. We use the layers for both tasks in the BERT baseline, $L_{19}$ and $L_2$, and we train for both FLC and SLC jointly (cf. Figure 1-b).

BERT-Granularity. We modify BERT-Joint to transfer information from SLC directly to FLC. Instead of using only the $L_{19}$ layer for FLC, we concatenate $L_2$ and $L_{19}$, and we add an extra 19-dimensional classification layer on top of that concatenation to perform the prediction for FLC (cf. Figure 1-c).
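As an illustrative numpy sketch (the dimensions and random weights are our assumptions, not the paper's code), the BERT-Granularity head broadcasts the sentence-level output to every token, concatenates it with each token's 19-way output, and applies a final 19-way layer:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens = 12

o2  = rng.normal(size=(2,))            # sentence-level (SLC) output
o19 = rng.normal(size=(n_tokens, 19))  # per-token (FLC) outputs

# Broadcast the sentence-level output to every token and concatenate.
concat = np.concatenate([np.tile(o2, (n_tokens, 1)), o19], axis=1)  # (12, 21)

# Extra 19-dimensional classification layer on top of the concatenation.
W, b = rng.normal(size=(21, 19)), np.zeros(19)
flc_logits = concat @ W + b            # (12, 19)
print(flc_logits.shape)
```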

Multi-Granularity Network. We propose a model that can drive the higher-granularity task (FLC) on the basis of the lower-granularity information (SLC), rather than simply using the lower-granularity information directly. Figure 1-d shows the architecture of this model.

More generally, suppose there are $k$ tasks of increasing granularity $g_1, \dots, g_k$, e.g., document-level, paragraph-level, sentence-level, word-level, subword-level, character-level. Each task has a separate classification layer $L_{g_k}$ that receives the feature representation $c_{g_k}$ of the specific level of granularity $g_k$ and outputs $o_{g_k}$. The dimension of the representation depends on the embedding layer, while the dimension of the output depends on the number of classes in the task. The output $o_{g_k}$ is used to generate a weight for the next-granularity task $g_{k+1}$ through a trainable gate $f$:

$w_{g_k} = f(o_{g_k}). \qquad (2)$

The gate $f$ consists of a projection layer to one dimension and an activation function. The resulting weight is multiplied by each element of the output of layer $L_{g_{k+1}}$ to produce the output for task $g_{k+1}$:

$o_{g_{k+1}} = w_{g_k} * o_{g_{k+1}}. \qquad (3)$
If $w_{g_k} = 0$ for a given example, the output of the next-granularity task $o_{g_{k+1}}$ would be 0 as well. In our setting, this means that, if the sentence-level classifier is confident that the sentence does not contain propaganda, i.e., $w_{g_1} = 0$, then $o_{g_2} = 0$, and no propagandistic technique would be predicted for any span within that sentence. Similarly, when back-propagating the error, if $w_{g_k} = 0$ for a given example, the final cross-entropy loss would become zero, i.e., the model would not get any information from that example. As a result, only examples strongly classified as negative in the lower-granularity task would be ignored in the higher-granularity task. Having the lower granularity as the main task means that higher-granularity information can be selectively used as additional information to improve the performance, but only if the example is not considered highly negative.
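The gating behavior above can be sketched in a few lines (the projection weights, sigmoid activation choice, and example logits are illustrative assumptions, not learned values): a sentence confidently classified as non-propagandistic drives the gate weight toward zero, which in turn suppresses every token-level output.

```python
import numpy as np

def gate(o, proj):
    """Trainable gate f: project the task output to one dimension, then sigmoid."""
    return 1.0 / (1.0 + np.exp(-(o @ proj)))   # w = f(o)

rng = np.random.default_rng(2)
proj = np.array([-1.0, 1.0])                   # illustrative projection weights

o_flc = rng.normal(size=(12, 19))              # word-level outputs before gating

# Sentence-level output strongly favoring "no propaganda" -> gate weight near 0.
w = gate(np.array([6.0, -6.0]), proj)
gated = w * o_flc                              # all token-level outputs shrink toward 0
print(float(w), bool(np.abs(gated).max() < 1e-3))
```

Conversely, a confidently propagandistic sentence yields a weight near 1 and passes the token-level outputs through essentially unchanged.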

Figure 1: The architecture of the baseline models (a-c), and of our multi-granularity network (d).

For the loss function, we use a cross-entropy loss with sigmoid activation for every layer, except for the highest-granularity layer $L_{g_k}$, which uses a cross-entropy loss with softmax activation. Unlike the softmax, which normalizes over all dimensions, the sigmoid allows each output component of a layer to be independent from the rest. Thus, the output of the sigmoid for the positive class increases the degree of freedom by not affecting the negative class, and vice versa. As we have two tasks, we use sigmoid activation for $L_{g_1}$ and softmax activation for $L_{g_2}$. Moreover, we use a weighted sum of the losses with a hyper-parameter $\delta_{g_1}$:

$\mathcal{L}_J = \mathcal{L}_{g_1} \cdot \delta_{g_1} + \mathcal{L}_{g_2} \cdot (1 - \delta_{g_1}). \qquad (4)$
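The independence argument can be checked numerically (a small self-contained example, not the paper's code): raising a single logit changes every softmax component, because they are normalized together, but leaves the other sigmoid components untouched.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z1 = np.array([1.0, 0.5, -0.2])
z2 = z1.copy()
z2[0] += 2.0                  # raise only the first logit

# Softmax couples the components: component 1 changes although its logit did not.
print(softmax(z1)[1], softmax(z2)[1])   # different values

# Sigmoid treats components independently: component 1 is unchanged.
print(sigmoid(z1)[1], sigmoid(z2)[1])   # identical values
```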


Again, we use BERT Devlin et al. (2019) for the contextualized embedding layer and we place the multi-granularity network on top of it.

4 Experiments and Evaluation

We used the PyTorch444http://pytorch.org framework and the pretrained BERT model,555http://github.com/huggingface/pytorch-pretrained-BERT which we fine-tuned for our tasks.666Our source code together with the dataset are available on GitHub: http://anonymous.for.review. To deal with class imbalance, we weight the binary cross-entropy according to the proportion of positive samples. For the $\delta_{g_1}$ in the joint loss function, we use 0.9 for sentence classification and 0.1 for word-level classification. In order to reduce the effect of random fluctuations for BERT, all the reported numbers are the average of three experimental runs with different random seeds. As is standard, we tune our models on the dev partition and report results on the test partition.

The right side of Table 1 shows the performance of the three baselines and of our multi-granularity network on the FLC task. For the latter, we vary the degree to which the gate function is applied: using ReLU is more aggressive than using the sigmoid, as the ReLU outputs zero for any negative input. Table 1 (right) shows that using additional information from the sentence level for the token-level classification (BERT-Granularity) yields small improvements. The multi-granularity models outperform all baselines thanks to their higher precision. This shows the effect of the model excluding sentences that it determined to be non-propagandistic from consideration for token-level classification.

Model           |   SLC: P / R / F1       |   FLC: P / R / F1
All-Propaganda  |  23.92 / 100.0 / 38.61  |    -   /   -   /   -
BERT            |  63.20 / 53.16 / 57.74  |  21.48 / 21.39 / 21.39
  Joint         |  62.84 / 55.46 / 58.91  |  20.11 / 19.74 / 19.92
  Granu         |  62.80 / 55.24 / 58.76  |  23.85 / 20.14 / 21.80
  ReLU          |  60.41 / 61.58 / 60.98  |  23.98 / 20.33 / 21.82
  Sigmoid       |  62.27 / 59.56 / 60.71  |  24.42 / 21.05 / 22.58
Table 1: Sentence-level (SLC, left) and fragment-level (FLC, right) experimental results: precision, recall, and F1. All-Propaganda is a baseline that always outputs the propaganda class.

The left side of Table 1 shows the results for the SLC task. We apply our multi-granularity network to the sentence-level classification task in order to see its effect on the lower-granularity task when the model is trained jointly with a higher-granularity task. Interestingly, it yields sizable improvements on sentence-level classification: compared to the BERT baseline, it increases the recall by 8.42 percentage points, resulting in a 3.24-point increase in the F1 score. In this case, the result of the token-level classification is used as additional information for the sentence-level task, and it helps to find more positive samples. This is the opposite of the effect our model has on the FLC task.

5 Conclusions

We have argued for a new way to study propaganda in news media: by focusing on identifying instances of use of specific propaganda techniques. Working at this fine-grained level can yield more reliable systems, and it also makes it possible to explain to the user why an article was judged propagandistic by an automatic system.

We experimented with a number of BERT-based models and devised a novel architecture that outperforms standard BERT-based baselines. Our fine-grained task can complement document-level judgments, both to arrive at an aggregated decision and to explain why a document, or an entire news outlet, has been flagged as potentially propagandistic by an automatic system.

In future work, we plan to include more media sources, especially from non-English-speaking media and regions. We further want to extend the tool to support other propaganda techniques.

6 Acknowledgements

This research is part of the Propaganda Analysis Project,777http://propaganda.qcri.org which is framed within the Tanbih project.888http://tanbih.qcri.org

The Tanbih project aims to limit the effect of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. The project is developed in collaboration between the Qatar Computing Research Institute (QCRI), HBKU and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).


  • A. Barrón-Cedeño, G. Da San Martino, I. Jaradat, and P. Nakov (2019) Proppy: a system to unmask propaganda in online news. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI ’19, Honolulu, HI, USA, pp. 9847–9848. Cited by: §1.
  • A. Barrón-Cedeno, I. Jaradat, G. Da San Martino, and P. Nakov (2019) Proppy: organizing the news based on their propagandistic content. Information Processing & Management 56 (5), pp. 1849–1864. Cited by: §1.
  • G. Da San Martino, S. Yu, A. Barrón-Cedeño, R. Petrov, and P. Nakov (2019) Fine-grained analysis of propaganda in news articles. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, pp. 5640–5650. Cited by: §2.
  • L. Dan (2015) Techniques for the Translation of Advertising Slogans. In Proceedings of the International Conference Literature, Discourse and Multicultural Dialogue, LDMD ’15, Mures, Romania, pp. 13–23. Cited by: §2.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’19, Minneapolis, MN, USA, pp. 4171–4186. Cited by: §3, §3, §3.
  • J. Goodwin (2011) Accounting for the force of the appeal to authority. In Proceedings of the 9th International Conference of the Ontario Society for the Study of Argumentation, OSSA ’11, Ontario, Canada, pp. 1–9. Cited by: §2.
  • I. Habernal, H. Wachsmuth, I. Gurevych, and B. Stein (2018) Before name-calling: dynamics and triggers of ad hominem fallacies in web argumentation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’18, New Orleans, LA, USA, pp. 386–396. Cited by: §2.
  • B. D. Horne, S. Khedr, and S. Adali (2018) Sampling the news producers: a large news and feature data set for the study of the complex media landscape. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media, ICWSM ’18, Stanford, CA, USA. Cited by: §1.
  • C. R. Miller (1939) The Techniques of Propaganda. From “How to Detect and Analyze Propaganda,” an address given at Town Hall. The Center for Learning. Cited by: §2.
  • M. Potthast, B. Stein, A. Barrón-Cedeño, and P. Rosso (2010) An evaluation framework for plagiarism detection. In Proceedings of the 23rd international conference on computational linguistics: Posters, COLING ’10, Beijing, China, pp. 997–1005. Cited by: §2.
  • H. Rashkin, E. Choi, J. Y. Jang, S. Volkova, and Y. Choi (2017) Truth of varying shades: analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP ’17, Copenhagen, Denmark, pp. 2931–2937. Cited by: §1.
  • D. Walton (1996) The straw man fallacy. Royal Netherlands Academy of Arts and Sciences. Cited by: §2.
  • A. Weston (2018) A rulebook for arguments. Hackett Publishing. Cited by: §2.