In Search of Credible News

11/19/2019 · Momchil Hardalov et al. · Qatar Foundation, Sofia University

We study the problem of finding fake online news. This is an important problem, as news of questionable credibility has recently been proliferating in social media at an alarming rate. As the problem is understudied, especially for languages other than English, we first collect and release to the research community three new balanced credible vs. fake news datasets derived from four online sources. We then propose a language-independent approach for automatically distinguishing credible from fake news, based on a rich feature set. In particular, we use linguistic (n-gram), credibility-related (capitalization, punctuation, pronoun use, sentiment polarity), and semantic (embeddings and DBPedia data) features. Our experiments on three different test sets show that our model can distinguish credible from fake news with very high accuracy.


1 Introduction

The Internet and the proliferation of smart mobile devices have changed the way information spreads: social media, blogs, and micro-blogging services such as Twitter, Facebook, and Google+ have become some of the main sources of information for millions of users on a daily basis. On the positive side, this has democratized and accelerated content creation and sharing. On the negative side, it has made people vulnerable to manipulation, as information in social media is typically not monitored or moderated in any way. Thus, it has become increasingly hard to distinguish real news from misinformation, rumors, and unverified, manipulative, or even fake content. Not only are online blogs nowadays flooded with biased comments and fake content, but online news media in turn are filled with unreliable and unverified content, e.g., due to the willingness of journalists to be the first to write about a hot topic, often bypassing the verification of their information sources; there are also online information sources created with the sole purpose of spreading manipulative and biased information. Finally, the problem extends beyond cyberspace, as in some cases fake news from online sources has crept into mainstream media.

Journalists, regular online users, and researchers are well aware of the issue, and topics such as information credibility, veracity, and fact checking are becoming increasingly important research directions [3, 5, 20]. For example, there was a recent 2016 special issue of the ACM Transactions on Information Systems journal on Trust and Veracity of Information in Social Media [15], and there is an upcoming SemEval-2017 shared task on rumor detection.

As English is the primary language of the Web, most research on information credibility and veracity has focused on English, while other languages have been largely neglected. To bridge this gap, below we present experiments on distinguishing real from fake news in Bulgarian; our approach, however, is in principle language-independent. In particular, we distinguish real news from fake news that in some cases is designed to sound funny (while still resembling real news); thus, our task can also be seen as humor detection [11, 16].

As there was no publicly available dataset that we could use, we had to create one ourselves. We collected two types of news: credible, coming from trusted online sources, and fake news, written with the intention to amuse, or sometimes confuse, the reader who is not knowledgeable enough about the subject. We then built a model to distinguish between the two, which achieved very high accuracy.

The remainder of this paper is organized as follows: Section 2 presents related work. Section 3 introduces our method for distinguishing credible from fake news. Section 4 presents our data, feature selection, experiments, and results. Finally, Section 5 concludes and suggests directions for future work.

2 Related Work

Information credibility in social media is studied by Castillo et al. [3], who formulate it as a problem of finding false information about a newsworthy event. They focus on tweets, using a variety of features including user reputation, author writing style, and various time-based features.

Zubiaga et al. [19] studied how people handle rumors in social media. They found that users with higher reputation are more trusted, and can thus spread rumors among other users without raising suspicions about the credibility of the news or of its source.

Online personal blogs are another popular way to spread information by presenting personal opinions, even though researchers disagree about how much people trust such blogs. Johnson et al. [6] studied how blog users act at the time of a newsworthy event, such as the crisis in Iraq, and how biased users try to influence other people.

It is not only social media that can spread information of questionable quality. The credibility of the information published on online news portals has also been questioned by a number of researchers [1, 8, 4]. As timing is a crucial factor when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about online news media that journalists have [2].

The interested reader can see [17] for a review of various methods for detecting fake news, where different approaches are compared based on linguistic analysis, discourse, linked data, and social network features.

Finally, we should also mention work on humor detection. Yang et al. [16] identify semantic structures behind humor and design sets of features for each structure; they further develop anchors that enable humor in a sentence. However, they mix different genres, such as news, community question answers, and proverbs, as well as the One-Liner dataset [11]. In contrast, we focus on news for both the positive and the negative examples, and we do not assume that the reason for a news article not being credible is the humor it contains.

3 Method

We propose a language-independent approach for automatically distinguishing credible from fake news, based on a rich feature set. In particular, we use linguistic (n-gram), credibility (capitalization, punctuation, pronoun use, sentiment polarity), and semantic (embeddings and DBPedia data) features.

3.1 Features

3.1.1 Linguistic (n-gram) Features

Before generating these features, we first perform initial pre-processing: tokenization and stop word removal. We define stop words as the most common, functional words in a language (e.g., conjunctions, prepositions, and interjections); while they fit well for problems such as author profiling, they turn out not to be particularly useful for distinguishing credible from fake news. We experimented with the following linguistic features:

  • n-grams: presence of individual uni-grams and bi-grams. The rationale is that some n-grams are more typical of credible vs. fake news, and vice versa;

  • tf-idf: the same n-grams, but weighted using tf-idf;

  • vocabulary richness: the number of unique word types used in the article, possibly normalized by the number of word tokens.
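
The three linguistic feature types above can be sketched with scikit-learn; the toy documents and variable names below are our illustration, not part of the paper's actual pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Toy documents; the real data are Bulgarian news articles.
docs = [
    "the president signed the new budget law",
    "aliens endorse the president in a secret ceremony",
]

# n-grams: binary presence of individual uni-grams and bi-grams.
ngram_vec = CountVectorizer(ngram_range=(1, 2), binary=True)
X_ngram = ngram_vec.fit_transform(docs)

# tf-idf: the same n-grams, weighted by tf-idf.
tfidf_vec = TfidfVectorizer(ngram_range=(1, 2))
X_tfidf = tfidf_vec.fit_transform(docs)

# Vocabulary richness: unique word types over total word tokens.
def vocabulary_richness(text):
    tokens = text.split()
    return len(set(tokens)) / len(tokens)
```

Both vectorizers produce one column per observed n-gram; the binary variant only records presence, while the tf-idf variant down-weights n-grams that occur in many documents.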

3.1.2 Credibility Features

We also used the following credibility features, which were previously proposed in the literature [3]:

  1. Length of the article (number of tokens);

  2. Fraction of words that only contain uppercase letters;

  3. Fraction of words that start with an uppercase letter;

  4. Fraction of words that contain at least one uppercase letter;

  5. Fraction of words that only contain lowercase letters;

  6. Fraction of plural pronouns;

  7. Fraction of singular pronouns;

  8. Fraction of first person pronouns;

  9. Fraction of second person pronouns;

  10. Fraction of third person pronouns;

  11. Number of URLs;

  12. Number of occurrences of an exclamation mark;

  13. Number of occurrences of a question mark;

  14. Number of occurrences of a hashtag;

  15. Number of occurrences of a single quote;

  16. Number of occurrences of a double quote.
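
A minimal sketch of how such surface features can be computed; the whitespace tokenization and the URL regular expression here are simplifying assumptions of ours, not the paper's exact implementation:

```python
import re

def credibility_features(text):
    """Surface credibility cues in the spirit of Castillo et al. [3]."""
    words = text.split()  # simplistic whitespace tokenization
    n = max(len(words), 1)
    return {
        "tokensCount": len(words),
        "allUpperCaseCount": sum(w.isupper() for w in words) / n,
        "firstUpperCase": sum(w[:1].isupper() for w in words) / n,
        "anyUpperCase": sum(any(c.isupper() for c in w) for w in words) / n,
        "allLowerCase": sum(w.islower() for w in words) / n,
        "urls": len(re.findall(r"https?://\S+", text)),
        "exclMarks": text.count("!"),
        "questionMarks": text.count("?"),
        "hashtags": text.count("#"),
        "singleQuotes": text.count("'"),
        "doubleQuotes": text.count('"'),
    }

f = credibility_features('SHOCKING news! Visit http://example.com "now"?')
```

The pronoun-based fractions would additionally require a language-specific pronoun list, which we omit here.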

We further added sentiment-polarity features based on lexicons generated from Bulgarian movie reviews [7] (5,016 positive and 2,415 negative words), which we further expanded with additional words. Based on these lexicons, we calculated the following features:

  1. Proportion of positive words;

  2. Proportion of negative words;

  3. Sum of the sentiment scores for the positive words;

  4. Sum of the sentiment scores for the negative words.
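
For illustration, with a hypothetical miniature English lexicon standing in for the real Bulgarian one from [7], the four features could be computed as:

```python
# Hypothetical miniature lexicons with sentiment scores; the real ones
# are Bulgarian (5,016 positive and 2,415 negative words).
POSITIVE = {"great": 0.9, "good": 0.6, "wonderful": 1.0}
NEGATIVE = {"bad": -0.7, "awful": -1.0, "terrible": -0.9}

def sentiment_features(tokens):
    n = max(len(tokens), 1)
    pos = [POSITIVE[t] for t in tokens if t in POSITIVE]
    neg = [NEGATIVE[t] for t in tokens if t in NEGATIVE]
    return {
        "positiveWords": len(pos) / n,       # proportion of positive words
        "negativeWords": len(neg) / n,       # proportion of negative words
        "positiveWordsScore": sum(pos),      # summed positive scores
        "negativeWordsScore": sum(neg),      # summed negative scores
    }

s = sentiment_features("a good movie with an awful ending".split())
```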

Note that we eventually ended up using only a subset of the above features, as we performed feature selection as described in Section 4.2 below.

3.1.3 Semantic (Embedding and DBPedia) Features

Finally, we use embedding vectors to model the semantics of the documents. We wanted to implicitly model some general world knowledge, and thus we trained word2vec vectors on the text of the long abstracts from the Bulgarian DBPedia (http://wiki.dbpedia.org/). Then, we built a vector for each document as the average of the word2vec vectors of the non-stop-word tokens it is composed of.

3.2 Classification

As we have a rich set of partially overlapping features, we used logistic regression for classification, with the L-BFGS optimizer [10] and elastic net regularization [18], which combines L1 and L2 regularization. This classification setup converges very fast, scales to huge feature spaces, is robust to over-fitting, and handles overlapping features well. We fine-tuned the hyper-parameters of our classifier (maximum number of iterations, elastic net parameters, and regularization parameters) on the training dataset. We further applied feature selection, as described below.
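
A close analogue of this setup in scikit-learn, on synthetic data of our own; note that scikit-learn's elastic net logistic regression requires the "saga" solver rather than L-BFGS, so this is a sketch of the technique, not the exact configuration used in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for the feature matrix: 200 documents, 40 features,
# including a pair of overlapping (correlated) columns, as in our setting.
X = rng.normal(size=(200, 40))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # overlapping feature
y = (X[:, 0] + X[:, 2] > 0).astype(int)

# Elastic net combines L1 and L2 penalties; l1_ratio is the mixing
# parameter and C the inverse regularization strength.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(X, y)
acc = clf.score(X, y)
```

The L1 component drives redundant coefficients toward zero, while the L2 component keeps correlated features from destabilizing each other.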

4 Experiments and Evaluation

4.1 Data

As there was no pre-existing suitable dataset for Bulgarian, we had to create one of our own. For this purpose, we collected a diverse dataset with enough samples in each category. We further wanted to make sure that our dataset would be good for modeling credible vs. fake news, i.e., that it would not degenerate into related tasks such as topic detection (which might happen if the credible and the fake news are about different topics), authorship attribution (which could be the case if the fake news are written by just one or two authors), or source prediction (which can occur if all credible/fake news come from just one source). Thus, we used four Bulgarian news sources, from which we generated one training and three separate balanced testing datasets:

  1. We retrieved most of our credible news from Dnevnik (http://www.dnevnik.bg/), a respected Bulgarian newspaper; we focused mostly on politics. This dataset was previously used in research on finding opinion manipulation trolls [12, 13, 14], but its news content fits our task well too (5,896 credible news articles);

  2. As our main online source of fake news, we used a website with funny news called Ne!Novinite (http://www.nenovinite.com/). We crawled topics such as politics, sports, culture, world news, horoscopes, interviews, and user-written articles (6,382 fake news articles);

  3. As an additional source of fake news, we used articles from the Bazikileaks blog (https://neverojatno.wordpress.com/). These documents are written in the form of blog posts, and their content may be classified as “fictitious”, which is another subcategory of fake news. The domain is politics (656 fake news articles);

  4. And finally, we retrieved news from the bTV Lifestyle section (http://www.btv.bg/lifestyle/all/), which contains both credible (in the bTV subsection) and fake news (in the bTV Duplex subsection). In both subsections, the articles are about popular people and events (69 credible and 68 fake news articles).

We used the documents from Dnevnik and Ne!Novinite for training and testing: 70% for training and 30% for testing. We further had two additional test sets: one of bTV vs. bTV Duplex, and one on Dnevnik vs. Bazikileaks. All test datasets are near-perfectly balanced.

Finally, as we have already mentioned above, we used the long abstracts from the Bulgarian DBPedia (171,444 abstracts) to train word2vec vectors, which we then used to build document vectors that serve as features for classification.

4.2 Feature Selection

We performed feature selection on the credibility features. For this purpose, we first used Learning Vector Quantization (LVQ) [9] to obtain a ranking of the features from Section 3.1 by their importance on the training dataset; the results are shown in Table 1. See also Figure 1 for a comparison of the distributions of some of the credibility features in credible vs. funny news.

Feature               #   Importance
doubleQuotes          16  0.7911
upperCaseCount        4   0.7748
lowerUpperCase        5   0.7717
firstUpperCase        3   0.7708
pluralPronouns        6   0.6558
firstPersonPronouns   8   0.6346
allUpperCaseCount     2   0.6282
negativeWords         18  0.5944
positiveWords         17  0.5834
tokensCount           1   0.5779
singularPronouns      7   0.5286
thirdPersonPronouns   10  0.5273
negativeWordsScore    20  0.5206
hashtags              14  0.4998
urls                  11  0.4987
positiveWordsScore    19  0.4910
singleQuotes          15  0.4884
secondPersonPronouns  9   0.4408
questionMarks         13  0.4407
exclMarks             12  0.3160
Table 1: Features ranked by the LVQ importance metric; “#” is the feature’s index in the lists in Section 3.1.2.

Then, we experimented with various combinations of the top-ranked features, and we selected the combination that worked best in cross-validation on the training dataset (compare to Table 1):

  • Fraction of negative words in the text (negativeWords);

  • Fraction of words that contain uppercase letters only (allUpperCaseCount);

  • Fraction of words that start with an uppercase letter (firstUpperCase);

  • Fraction of words that only contain lowercase letters (lowerUpperCase);

  • Fraction of plural pronouns in the text (pluralPronouns);

  • Number of occurrences of exclamation marks (exclMarks);

  • Number of occurrences of double quotes (doubleQuotes).

Figure 1: Boxplots presenting the distributions of some credibility features in credible vs. funny news.
Feature Groups                       Dnevnik vs.   bTV vs.      Dnevnik vs.
                                     Ne!Novinite   bTV Duplex   Bazikileaks
Credibility + Linguistic + Semantic  99.36         62.04        85.53
Credibility + Semantic               92.67         75.91        82.99
Linguistic + Credibility             96.02         59.12        61.94
Semantic                             98.95         61.31        71.01
Linguistic                           95.71         56.93        73.25
Credibility                          83.25         62.04        79.85
Baseline (majority class)            52.60         50.36        50.86
Table 2: Accuracy for different feature group combinations.

4.3 Results

Table 2 shows the results when using all feature groups and when turning off some of them. We can see that the best results are achieved when experimenting with “Credibility + Semantic” and “Credibility + Linguistic + Semantic” feature combinations, and the results are worse when only using credibility and linguistic features.

Analyzing the results on the Dnevnik vs. Ne!Novinite test set (first column), we can see that the linguistic features are more important than the credibility ones, and the semantic features are more important still. When we combine all feature groups, we achieve 99.36% accuracy, but this is only marginally better than using the semantic features alone. Note, however, that using semantic features alone does not perform as well on the other two test datasets, especially on the last one.

The linguistic features work relatively well on two of the test datasets, but not on bTV, where the combination of “Credibility + Semantic” is the best-performing one.

Naturally, the best results are on Dnevnik vs. Ne!Novinite, where the classifier achieves near-perfect accuracy (despite the different class distribution in training vs. testing). The hardest test dataset is bTV, where both the positive and the negative class come from sources different from those used in the training dataset; yet, we achieve up to 75.91% accuracy, well above the majority-class baseline of 50.36%. The Dnevnik vs. Bazikileaks dataset falls somewhere in between, with up to 85.53% accuracy; this is to be expected, as the positive examples come from the same source as the training dataset (even though the negative class is different).

Overall, on all three datasets, we achieved accuracy of 75-99%, which is well above the majority class baseline. The strong relative performance on the three different test datasets that come from different sources suggests that our model really learns to distinguish credible vs. fake news rather than learning to classify topics, sources, or author style.

5 Conclusion and Future Work

We have presented a feature-rich, language-independent approach for distinguishing credible from fake news. In particular, we used linguistic (n-gram), credibility-related (capitalization, punctuation, pronoun use, sentiment polarity, etc., with feature selection), and semantic (embeddings and DBPedia data) features. Our experiments on three different test sets, derived from four different sources, have shown that our model can distinguish credible from fake news with very high accuracy, well above a majority-class baseline.

In future work, we plan to experiment with more features, e.g., based on linked data [17] or on discourse analysis [17]. Looking at features used for related tasks such as humor detection [16] and rumor detection [19] is another promising direction. We also want to apply deep learning, which could eliminate the need for feature engineering altogether.

Last but not least, we would like to note that we have made our source code and datasets publicly available for research purposes at the following URL:

Acknowledgments.

This research was performed by Momchil Hardalov, a Computer Science student at Sofia University “St. Kliment Ohridski”, as part of his MSc thesis. It is also part of the Interactive sYstems for Answer Search (Iyas) project, developed by the Arabic Language Technologies (ALT) group at the Qatar Computing Research Institute (QCRI), HBKU, part of Qatar Foundation, in collaboration with MIT-CSAIL.

References

  • [1] A. M. Brill (2001) Online journalists embrace new marketing function. Newspaper Research Journal 22 (2), pp. 28.
  • [2] W. P. Cassidy (2007) Online news credibility: an examination of the perceptions of newspaper journalists. Journal of Computer-Mediated Communication 12 (2), pp. 478–498.
  • [3] C. Castillo, M. Mendoza, and B. Poblete (2013) Predicting information credibility in time-sensitive social media. Internet Research 23 (5), pp. 560–588.
  • [4] H. Finberg, M. L. Stone, and D. Lynch (2002) Digital journalism credibility study. Online News Association. Retrieved November 3, 2003.
  • [5] L. Graves (2013) Deciding what’s true: fact-checking journalism and the new ecology of news. Ph.D. Thesis, Columbia University.
  • [6] T. J. Johnson, B. K. Kaye, S. L. Bichard, and W. J. Wong (2007) Every blog has its day: politically-interested internet users’ perceptions of blog credibility. Journal of Computer-Mediated Communication 13 (1), pp. 100–122.
  • [7] B. Kapukaranov and P. Nakov (2015) Fine-grained sentiment analysis for movie reviews in Bulgarian. In Proceedings of Recent Advances in Natural Language Processing, RANLP ’15, Hissar, Bulgaria, pp. 266–274.
  • [8] S. Ketterer (1998) Teaching students how to evaluate and use online resources. Journalism & Mass Communication Educator 52 (4), pp. 4.
  • [9] T. Kohonen (1990) Improved versions of learning vector quantization. In IJCNN International Joint Conference on Neural Networks, pp. 545–550.
  • [10] D. C. Liu and J. Nocedal (1989) On the limited memory BFGS method for large scale optimization. Mathematical Programming 45 (1–3), pp. 503–528.
  • [11] R. Mihalcea and C. Strapparava (2005) Making computers laugh: investigations in automatic humor recognition. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, HLT-EMNLP ’05, Vancouver, British Columbia, Canada, pp. 531–538.
  • [12] T. Mihaylov, G. Georgiev, and P. Nakov (2015) Finding opinion manipulation trolls in news community forums. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, CoNLL ’15, Beijing, China, pp. 310–314.
  • [13] T. Mihaylov, I. Koychev, G. Georgiev, and P. Nakov (2015) Exposing paid opinion manipulation trolls. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP ’15, Hissar, Bulgaria, pp. 443–450.
  • [14] T. Mihaylov and P. Nakov (2016) Hunting for troll comments in news community forums. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL ’16, Berlin, Germany.
  • [15] S. Papadopoulos, K. Bontcheva, E. Jaho, M. Lupu, and C. Castillo (2016) Overview of the special issue on trust and veracity of information in social media. ACM Transactions on Information Systems 34 (3), pp. 14:1–14:5.
  • [16] D. Yang, A. Lavie, C. Dyer, and E. Hovy (2015) Humor recognition and humor anchor extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP ’15, Lisbon, Portugal, pp. 2367–2376.
  • [17] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica (2010) Spark: cluster computing with working sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, HotCloud ’10, Boston, MA, pp. 10–10.
  • [18] H. Zou and T. Hastie (2005) Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2), pp. 301–320.
  • [19] A. Zubiaga, G. W. S. Hoi, M. Liakata, R. Procter, and P. Tolmie (2015) Analysing how people orient to and spread rumours in social media by looking at conversational threads. arXiv preprint arXiv:1511.07487.
  • [20] A. Zubiaga and H. Ji (2014) Tweet, but verify: epistemic study of information verification on Twitter. Social Network Analysis and Mining 4 (1), pp. 1–12.