SAFE: Similarity-Aware Multi-Modal Fake News Detection

02/19/2020 ∙ by Xinyi Zhou, et al. ∙ Syracuse University

Effective detection of fake news has recently attracted significant attention. Current studies have made important contributions to predicting fake news, with less focus on exploiting the relationship (similarity) between the textual and visual information in news articles. Attaching importance to such similarity helps identify fake news stories that, for example, attempt to use irrelevant images to attract readers' attention. In this work, we propose a Similarity-Aware FakE news detection method (SAFE) which investigates multi-modal (textual and visual) information of news articles. First, neural networks are adopted to separately extract textual and visual features for news representation. We further investigate the relationship between the extracted features across modalities. Such representations of news textual and visual information, along with their relationship, are jointly learned and used to predict fake news. The proposed method facilitates recognizing the falsity of news articles based on their text, images, or their "mismatches." We conduct extensive experiments on large-scale real-world data, which demonstrate the effectiveness of the proposed method.


1 Introduction

Following the 2016 U.S. presidential election, the impact of “fake news” has become a major concern. Based on a broad investigation of 126,000 verified true and fake news stories on Twitter from 2006 to 2017, Vosoughi and colleagues revealed that fake news stories spread more frequently and faster than true news stories [20]. As indicated by the fundamental theories on fake news in psychology and social sciences (see a comprehensive survey in Ref. [27]), the more a fake news article spreads, the higher the possibility of social media users spreading and trusting it due to repeated exposure and/or peer pressure. Such trust and belief can easily be amplified and reinforced within social media due to its echo chamber effect [3]. Hence, extensive research has been conducted on effectively detecting fake news to block its dissemination on social media.

Fake news detection methods can be generally grouped into (1) content-based and (2) social-context-based methods. The main difference between the two types of methods is whether or not they rely on social-context information: the information on how the news has propagated on social media, where abundant auxiliary information about the social media users involved and their connections/networks can be utilized. Many innovative and significant solutions (e.g., [1, 15, 13]) have been proposed to exploit social-context information. With more social-context information available, one can often better detect fake news; however, detection becomes more challenging at early stages of dissemination. It is difficult to detect fake news using social-context-based methods when the news has just been published and has not yet propagated (i.e., no social-context information exists), which motivates us to further explore the role that news content can play in fake news detection.

As “a news article that is intentionally and verifiably false” [25], fake news often contains both textual and visual information. Existing content-based fake news detection methods either solely consider textual information [26], or combine both types of data while ignoring the relationship (similarity) between them [23, 24, 5, 4]. The value of understanding such a relationship (similarity) for predicting fake news is two-fold. First, to attract public attention, some fake news stories (or news stories with low credibility) prefer dramatic, humorous (facetious), and tempting images whose content is far from the actual content of the news text. Second, when a fake news article tells a story with fictional scenarios or statements, it is difficult to find both pertinent and non-manipulated images to match these fictions; hence, a “gap” exists between the textual and visual information of fake news when creators use non-manipulated images to support non-factual scenarios or statements (see examples at https://www.snopes.com/fact-check/rating/miscaptioned/).

With such considerations, we propose a Similarity-Aware FakE news detection method (SAFE). The method consists of three modules, performing (1) multi-modal (textual and visual) feature extraction, (2) within-modal (i.e., modal-independent) fake news prediction, and (3) cross-modal similarity extraction, respectively. For each news article, we first adopt neural networks to automatically obtain the latent representations of both its textual and visual information, based on which a similarity measure is defined between them. Then, such representations of news textual and visual information, along with their similarity, are jointly learned and used to predict fake news. The proposed method aims to recognize the falsity of a news article based on either its text or images, or the “mismatch” between the text and images.

The main contributions of our work are summarized below.

  1. To the best of our knowledge, we present the first approach that investigates the role of the relationship (similarity) between news textual and visual information in predicting fake news;

  2. We propose a new method to jointly exploit multi-modal (textual and visual) and relational information to learn the representation of news articles and predict fake news; and

  3. We conduct extensive experiments on large-scale real-world data to demonstrate the effectiveness of the proposed method.

Next, we will first review the related work in Sec. 2. The proposed method will be detailed in Sec. 3, along with its iterative learning process in Sec. 4. We will detail the experiments and the results in Sec. 5. We will conclude in Sec. 6.

2 Related Work

There has been extensive research on fake news detection. Fake news detection methods can be generally grouped into (I) content-based and (II) social-context-based methods.

2.0.1 I. Content-based Fake News Detection

Content-based methods detect fake news by utilizing news content, i.e., the textual information and/or visual information within news content.

Most content-based methods have comprehensively investigated news textual information. Within a traditional statistical natural language processing framework, such investigation has crossed multiple levels of language. By assuming that fake news differs from true news in linguistic/writing style, various hand-crafted features have been extracted from news content for representation and used for classification by, e.g., SVM and random forest. For example, Pérez-Rosas et al. employed lexical features by using bag-of-words with $n$-gram models, semantic features relying on LIWC [10], syntactic features such as context-free grammars, and news readability [11]. Instead of extracting features based on experience, Zhou et al. [26] validated the role of fundamental theories in psychology and social science in guiding fake news feature engineering. Rhetorical structures among sentences or phrases within news content have also been investigated, with either a vector space model [14] or Bi-LSTM [6]. Researchers have also explored the political bias [12] and homogeneity [2] of news publishers by mining the news content they have published, and have demonstrated how such information can help detect fake news.

In addition to textual information, greater – while still limited – attention has recently been paid to visual information within news content. Jin et al. analyzed the differences between images in true news and fake news in terms of, e.g., their clarity [5]. Along with the recent advances in deep learning, various RNNs and CNNs have been developed for multi-modal fake news detection and related tasks [4, 23, 7, 18, 21, 24]. To learn the multi-modal (textual and visual) representation of news content, Jin et al. combined VGG-19 and LSTM with an attention mechanism [4], and Khattar et al. designed an encoder-decoder mechanism [7]. Yang et al. proposed TI-CNN, which detects fake news by extracting both explicit and latent multi-modal features within news content [24]. Wang et al. proposed the Event Adversarial Neural Network (EANN) to learn event-invariant features representative of news content across various topics and domains [23]. While current techniques have advanced multi-modal fake news detection, the relationship across modalities has been barely explored and exploited. Our work bridges this gap by directly capturing the relationship (similarity) between the textual and visual information within news content, and is the first to learn the representation of news articles by mining both their multi-modal information and the relationship across modalities.

2.0.2 II. Social-context-based Fake News Detection

Social-context-based methods detect fake news by investigating social-context information related to news articles, i.e., how news articles spread on social media. Significant contributions have been made toward identifying the differences in propagation patterns between fake news and the truth [20]. Such contributions have also focused on how user profiles [1] and opinions [13, 15] can help news verification, using feature engineering [1] and neural networks [13, 15]. Nevertheless, verifying a news article that has been published online, e.g., on a news outlet such as BuzzFeed (buzzfeed.com), but has not yet been disseminated on social media demands content-based methods, as social-context information does not exist at this stage. For this purpose, we focus on mining news content in this work; the proposed method is detailed next.

Figure 1: Overview of the framework

3 Methodology

In this section, the proposed method (SAFE) is detailed in terms of its three modules, performing: (I) multi-modal feature extraction (Sec. 3.1), (II) modal-independent fake news prediction (Sec. 3.2), and (III) cross-modal similarity extraction (Sec. 3.3). Then, we detail in Sec. 3.4 how the modules work collectively to predict fake news. An overview of the framework is presented in Fig. 1. Before further specification, we formally define the problem and introduce some key notation as follows.

3.0.1 Problem Definition and Key Notation.

Given a news article $a = \{T, V\}$ consisting of textual information $T$ and visual information $V$, we denote $\mathbf{t}$ and $\mathbf{v}$ as the corresponding representations, where $\mathbf{t} = \mathcal{M}_t(T, \theta_t)$ and $\mathbf{v} = \mathcal{M}_v(V, \theta_v)$. Let $s = \mathcal{M}_s(\mathbf{t}, \mathbf{v})$ denote the similarity between $\mathbf{t}$ and $\mathbf{v}$, where $s \in [0, 1]$. Our goal is to predict whether $a$ is a fake news article ($\hat{y} = 1$) or a true one ($\hat{y} = 0$) by investigating its textual information, visual information, and their relationship, i.e., to determine $\hat{y} = \mathcal{M}(T, V; \theta)$, where $\theta = \{\theta_t, \theta_v, \theta_p\}$ are parameters to be learned.

3.1 Multi-modal Feature Extraction

The multi-modal feature extraction module of SAFE aims to represent the (I) textual information and (II) visual information of a given news article in a $d$-dimensional space, respectively.

3.1.1 Text

We extend Text-CNN [8] by introducing an additional fully connected layer to automatically extract textual features for each news article. The architecture of Text-CNN, which contains a convolutional layer and max pooling, is provided in Fig. 2. Given a piece of content with $n$ words, each word is first embedded as $\mathbf{t}_i \in \mathbb{R}^k$ [9]. The convolutional layer is used to produce a feature map, denoted as $\mathbf{c} = [c_1, c_2, \cdots, c_{n-h+1}]$, from a sequence of local inputs $\{\mathbf{t}_{1:h}, \mathbf{t}_{2:(h+1)}, \cdots, \mathbf{t}_{(n-h+1):n}\}$, via a filter $\mathbf{w}_f \in \mathbb{R}^{hk}$. As shown in Fig. 2, each local input is a group of $h$ continuous words. Mathematically,

$c_i = \sigma(\mathbf{w}_f \cdot \mathbf{t}_{i:(i+h-1)} + b_f)$  (1)

$\mathbf{t}_{i:(i+h-1)} = \mathbf{t}_i \oplus \mathbf{t}_{i+1} \oplus \cdots \oplus \mathbf{t}_{i+h-1}$  (2)

where $\mathbf{t}_{i:(i+h-1)} \in \mathbb{R}^{hk}$, $b_f \in \mathbb{R}$ is a bias, $\oplus$ is the concatenation operator, and $\sigma$ is the ReLU function. Note that $\mathbf{w}_f$ and $b_f$ are all parameters within Text-CNN to be learned. Then, a max-over-time pooling operation is applied on the obtained feature map for dimension reduction, i.e., $\hat{c} = \max\{c_1, c_2, \cdots, c_{n-h+1}\}$. Finally, the representation of the news text can be obtained by $\mathbf{t} = \sigma(\mathbf{W}_t [\hat{c}_1, \hat{c}_2, \cdots, \hat{c}_g] + \mathbf{b}_t)$, where $\mathbf{t} \in \mathbb{R}^d$, $g$ is the number of different window sizes chosen, and $\mathbf{W}_t$ and $\mathbf{b}_t$ are parameters to be learned.
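For concreteness, below is a minimal PyTorch sketch of this text branch: the convolution of Eqs. (1)-(2), max-over-time pooling, and the added fully connected layer. The hyper-parameters (embedding size, filter count, window sizes, output dimension) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Text-CNN with an extra fully connected layer (a sketch; sizes are assumed)."""
    def __init__(self, embed_dim=300, num_filters=128, window_sizes=(3, 4), out_dim=200):
        super().__init__()
        # One Conv1d per window size h: filter w_f and bias b_f of Eqs. (1)-(2).
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=h) for h in window_sizes]
        )
        # Fully connected layer mapping the g pooled feature vectors into R^d.
        self.fc = nn.Linear(num_filters * len(window_sizes), out_dim)

    def forward(self, x):
        # x: (batch, n_words, embed_dim) pre-trained word embeddings, e.g., word2vec [9].
        x = x.transpose(1, 2)               # (batch, embed_dim, n_words)
        feats = []
        for conv in self.convs:
            c = torch.relu(conv(x))         # c_i = ReLU(w_f . t_{i:(i+h-1)} + b_f)
            feats.append(c.max(dim=2).values)   # max-over-time pooling
        pooled = torch.cat(feats, dim=1)    # concatenation over the g window sizes
        return torch.relu(self.fc(pooled))  # t in R^d
```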

Figure 2: Text-CNN Architecture

3.1.2 Image

For representing news images, we also use Text-CNN with an additional fully connected layer, while we first process the visual information within news content using a pre-trained image2sentence model (https://github.com/nikhilmaram/Show_and_Tell) [19]. Compared to existing multi-modal fake news detection studies that often directly apply a pre-trained CNN (e.g., VGG) model to obtain the representation of news images [23, 4], we adopt this processing strategy for consistency and to increase insight when computing the similarity across modalities. As we will demonstrate in our experiments, it also leads to performance improvements. Let $\mathbf{c}'$ denote the feature map obtained by the neural network with filter $\mathbf{w}'_f$ and bias $b'_f$ from the generated sentence. Similarly, the final representation of news visual information is then computed by $\mathbf{v} = \sigma(\mathbf{W}_v [\hat{c}'_1, \hat{c}'_2, \cdots, \hat{c}'_g] + \mathbf{b}_v)$, where $\mathbf{v} \in \mathbb{R}^d$, and $\mathbf{W}_v$ and $\mathbf{b}_v$ are parameters to be learned.
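A hedged sketch of this visual branch, reusing the TextCNN module from the sketch above; `caption_of` and `embed_words` are hypothetical helpers standing in for the pre-trained image2sentence model and the word-embedding lookup, not part of any specific library.

```python
# The image is first converted into a sentence by the pre-trained image2sentence
# (Show and Tell) model, and the sentence is then encoded by the same Text-CNN
# architecture as the text branch, so t and v live in the same space.

def visual_features(image, caption_of, embed_words, image_cnn):
    caption = caption_of(image)       # e.g., "a man holding a sign ..."
    tokens = embed_words(caption)     # (1, n_words, embed_dim) word embeddings
    return image_cnn(tokens)          # v in R^d, directly comparable with t
```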

3.2 Modal-independent Fake News Prediction

To properly represent news textual and visual information in predicting fake news, we aim to correctly map the extracted textual and visual features of news content to their possibilities of being fake, and further to their actual labels. Mathematically, such possibilities can be computed by

$\hat{\mathbf{y}} = \mathrm{softmax}(\mathbf{W}_p (\mathbf{t} \oplus \mathbf{v}) + \mathbf{b}_p)$  (3)

where $\hat{\mathbf{y}} = (\hat{y}_0, \hat{y}_1)$ denotes the predicted probabilities of the news article being true and fake, respectively, $\oplus$ is the concatenation operator, and $\mathbf{W}_p \in \mathbb{R}^{2 \times 2d}$ and $\mathbf{b}_p \in \mathbb{R}^2$ are parameters. To let the computed possibilities of news articles being fake approach their actual labels, a cross-entropy-based loss function is defined:

$\mathcal{L}_p(\theta_t, \theta_v, \theta_p) = -\mathbb{E}_{(a, y) \sim (\mathcal{A}, \mathcal{Y})} \big[ y \log \hat{y}_1 + (1 - y) \log \hat{y}_0 \big]$  (4)

where $\theta_p = \{\mathbf{W}_p, \mathbf{b}_p\}$, $\mathcal{A}$ is the set of news articles, $\mathcal{Y}$ is the set of labels, and

$(\hat{\theta}_t, \hat{\theta}_v, \hat{\theta}_p) = \arg\min_{\theta_t, \theta_v, \theta_p} \mathcal{L}_p(\theta_t, \theta_v, \theta_p)$  (5)
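As an illustration, Eq. (3) and the loss of Eq. (4) could be realized as follows: a sketch continuing the PyTorch assumptions above, where the two-way softmax and the dimension $d = 200$ are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeNewsClassifier(nn.Module):
    """Modal-independent predictor of Eq. (3); the dimension d is an assumption."""
    def __init__(self, d=200):
        super().__init__()
        self.fc = nn.Linear(2 * d, 2)   # W_p in R^{2 x 2d}, b_p in R^2

    def forward(self, t, v):
        logits = self.fc(torch.cat([t, v], dim=1))   # t (+) v, then affine map
        return F.softmax(logits, dim=1)              # y_hat = (y_hat_0, y_hat_1)

def loss_p(y_hat, y, eps=1e-8):
    # Cross-entropy of Eq. (4); y is a float tensor in {0, 1} (1 = fake).
    return -(y * torch.log(y_hat[:, 1] + eps) +
             (1.0 - y) * torch.log(y_hat[:, 0] + eps)).mean()
```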

3.3 Cross-modal Similarity Extraction

When attempting to correctly map the multi-modal features of news articles to their labels, the features belonging to the two modalities are considered separately; they are concatenated with no relation between them explored (see Sec. 3.2). However, the falsity of a news article can also be detected by assessing how (ir)relevant its textual information is to its visual information: fake news creators sometimes actively use irrelevant images for false statements to attract readers’ attention, or passively use them due to the difficulty of finding a supportive non-manipulated image (see the case studies in Sec. 5 for examples). Compared to news articles delivering relevant textual and visual information, those with disparate statements and images are more likely to be fake. We define the relevance between news textual and visual information as follows by slightly modifying cosine similarity:

$\mathcal{M}_s(\mathbf{t}, \mathbf{v}) = \dfrac{\mathbf{t} \cdot \mathbf{v} + \lVert \mathbf{t} \rVert \lVert \mathbf{v} \rVert}{2 \lVert \mathbf{t} \rVert \lVert \mathbf{v} \rVert}$  (6)

In such a way, it is guaranteed that $\mathcal{M}_s(\mathbf{t}, \mathbf{v})$ is positive and $\mathcal{M}_s(\mathbf{t}, \mathbf{v}) \in [0, 1]$ (to be utilized in Eq. (7)); 0 indicates that $\mathbf{t}$ and $\mathbf{v}$ are far from being similar, while 1 indicates that $\mathbf{t}$ and $\mathbf{v}$ are exactly the same.

Then, we can define a cross-entropy-based loss function as below, which assumes that news articles formed with mismatched textual and visual information are more likely to be fake than those with matching textual statements and images, when analyzed from a pure similarity perspective:

$\mathcal{L}_s(\theta_t, \theta_v) = -\mathbb{E}_{(a, y) \sim (\mathcal{A}, \mathcal{Y})} \big[ y \log (1 - \mathcal{M}_s(\mathbf{t}, \mathbf{v})) + (1 - y) \log \mathcal{M}_s(\mathbf{t}, \mathbf{v}) \big]$  (7)

$(\hat{\theta}_t, \hat{\theta}_v) = \arg\min_{\theta_t, \theta_v} \mathcal{L}_s(\theta_t, \theta_v)$  (8)
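The shifted cosine similarity of Eq. (6) and the loss of Eqs. (7)-(8) translate directly into code; the following sketch assumes the same PyTorch setting as the earlier snippets.

```python
import torch
import torch.nn.functional as F

def similarity(t, v, eps=1e-8):
    # Modified cosine similarity of Eq. (6):
    # (t.v + |t||v|) / (2|t||v|) = (cos(t, v) + 1) / 2, mapping [-1, 1] into [0, 1].
    return (F.cosine_similarity(t, v, dim=1, eps=eps) + 1.0) / 2.0

def loss_s(t, v, y, eps=1e-8):
    # Eq. (7): fake articles (y = 1) are pushed toward low cross-modal similarity,
    # true articles (y = 0) toward high similarity.
    s = similarity(t, v)
    return -(y * torch.log(1.0 - s + eps) + (1.0 - y) * torch.log(s + eps)).mean()
```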

3.4 Model Integration and Joint Learning

When detecting fake news, we aim to correctly recognize fake news stories whose falsity lies in (1) their textual and/or visual information, or (2) the relationship between the two, as specified in Sec. 3.2 and Sec. 3.3, respectively. To cover both cases, we specify our final loss function as

$\mathcal{L}(\theta_t, \theta_v, \theta_p) = \alpha \mathcal{L}_p(\theta_t, \theta_v, \theta_p) + \beta \mathcal{L}_s(\theta_t, \theta_v)$  (9)

where $\alpha$ and $\beta$ weight the two terms, and the parameters can be jointly learned by

$(\hat{\theta}_t, \hat{\theta}_v, \hat{\theta}_p) = \arg\min_{\theta_t, \theta_v, \theta_p} \mathcal{L}(\theta_t, \theta_v, \theta_p)$  (10)
Input: $\mathcal{A}$ (news articles), $\mathcal{Y}$ (labels), $\alpha$, $\beta$
Output: $\hat{\theta}_t$, $\hat{\theta}_v$, $\hat{\theta}_p$
1  Randomly initialize $\theta_t$, $\theta_v$, $\theta_p$;
2  while not convergence do
3         foreach news article $a \in \mathcal{A}$ do
4                Update $\theta_p$: Eq. (12);
5                foreach filter (window size) do
6                       Update $\theta_t$: Eqs. (14)-(18);
7                       Update $\theta_v$: similar to updating $\theta_t$;
8                end foreach
9         end foreach
10 end while
return $\hat{\theta}_t$, $\hat{\theta}_v$, $\hat{\theta}_p$
Algorithm 1: Joint learning of the SAFE parameters

4 Optimization

We outline the optimization process to learn the model parameters, i.e., iteratively solving Eq. (10). The process is summarized in Algorithm 1. The updating rule for each parameter is as follows:

Update $\theta_p$.

Let $\eta$ be the learning rate; the partial derivative of $\mathcal{L}$ w.r.t. $\theta_p$ is:

$\dfrac{\partial \mathcal{L}}{\partial \theta_p} = \alpha \dfrac{\partial \mathcal{L}_p}{\partial \theta_p}$  (11)

As $\theta_p = \{\mathbf{W}_p, \mathbf{b}_p\}$, updating $\theta_p$ is equivalent to updating both $\mathbf{W}_p$ and $\mathbf{b}_p$ in each iteration, which respectively follow the rules:

$\mathbf{W}_p \leftarrow \mathbf{W}_p - \eta \alpha \dfrac{\partial \mathcal{L}_p}{\partial \mathbf{W}_p}, \qquad \mathbf{b}_p \leftarrow \mathbf{b}_p - \eta \alpha \dfrac{\partial \mathcal{L}_p}{\partial \mathbf{b}_p}$  (12)

where $\dfrac{\partial \mathcal{L}_p}{\partial \mathbf{W}_p} = \mathbb{E}\big[(\hat{\mathbf{y}} - \mathbf{y})(\mathbf{t} \oplus \mathbf{v})^\top\big]$, with $\mathbf{y} = (1 - y, y)$ the one-hot label (the standard softmax cross-entropy gradient).

Update $\theta_t$.

The partial derivative of $\mathcal{L}$ w.r.t. $\theta_t$ is generally computed by

$\dfrac{\partial \mathcal{L}}{\partial \theta_t} = \alpha \dfrac{\partial \mathcal{L}_p}{\partial \theta_t} + \beta \dfrac{\partial \mathcal{L}_s}{\partial \theta_t}$  (13)

Let $\mathbf{W}^t_p$ denote the first $d$ columns of $\mathbf{W}_p$, i.e., the columns acting on $\mathbf{t}$ in Eq. (3); by the chain rule, we have

$\dfrac{\partial \mathcal{L}_p}{\partial \theta_t} = \dfrac{\partial \mathcal{L}_p}{\partial \mathbf{t}} \cdot \dfrac{\partial \mathbf{t}}{\partial \theta_t} = \big((\mathbf{W}^t_p)^\top (\hat{\mathbf{y}} - \mathbf{y})\big)^\top \dfrac{\partial \mathbf{t}}{\partial \theta_t}$  (14)

$\dfrac{\partial \mathcal{L}_s}{\partial \theta_t} = \dfrac{\partial \mathcal{L}_s}{\partial \mathcal{M}_s(\mathbf{t}, \mathbf{v})} \cdot \dfrac{\partial \mathcal{M}_s(\mathbf{t}, \mathbf{v})}{\partial \mathbf{t}} \cdot \dfrac{\partial \mathbf{t}}{\partial \theta_t}$  (15)

based on which the parameters in $\theta_t = \{\mathbf{W}_t, \mathbf{b}_t\}$ are respectively updated as follows:

$\mathbf{W}_t \leftarrow \mathbf{W}_t - \eta \Big( \alpha \dfrac{\partial \mathcal{L}_p}{\partial \mathbf{W}_t} + \beta \dfrac{\partial \mathcal{L}_s}{\partial \mathbf{W}_t} \Big)$  (16)

$\mathbf{b}_t \leftarrow \mathbf{b}_t - \eta \Big( \alpha \dfrac{\partial \mathcal{L}_p}{\partial \mathbf{b}_t} + \beta \dfrac{\partial \mathcal{L}_s}{\partial \mathbf{b}_t} \Big)$  (17)

where, from $\mathbf{t} = \sigma(\mathbf{W}_t [\hat{c}_1, \cdots, \hat{c}_g] + \mathbf{b}_t)$, $\mathbf{D}$ is a diagonal matrix with entry values $\mathbb{1}[\cdot > 0]$ (the derivative of ReLU), and

$\dfrac{\partial \mathbf{t}}{\partial \mathbf{W}_t} = \mathbf{D} \, [\hat{c}_1, \cdots, \hat{c}_g]^\top, \qquad \dfrac{\partial \mathbf{t}}{\partial \mathbf{b}_t} = \mathbf{D}$  (18)
Update $\theta_v$.

It is similar to updating $\theta_t$; we omit the details due to space constraints.
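In a modern autodiff framework, the update rules of Eqs. (11)-(18) need not be coded by hand; a single backward pass computes them. Below is a hedged sketch of one joint training step for Eqs. (9)-(10), reusing the assumed components from the earlier sketches (TextCNN, FakeNewsClassifier, loss_p, loss_s); the SGD optimizer, learning rate, and loss weights are illustrative choices.

```python
import torch

# Assumed components from the earlier sketches.
text_net, image_net = TextCNN(), TextCNN()
clf = FakeNewsClassifier(d=200)
params = (list(text_net.parameters()) + list(image_net.parameters())
          + list(clf.parameters()))
opt = torch.optim.SGD(params, lr=1e-3)   # learning rate eta is illustrative
alpha, beta = 0.5, 0.5                   # loss weights of Eq. (9)

def train_step(text_emb, caption_emb, y):
    t, v = text_net(text_emb), image_net(caption_emb)
    loss = alpha * loss_p(clf(t, v), y) + beta * loss_s(t, v, y)   # Eq. (9)
    opt.zero_grad()
    loss.backward()   # autograd realizes the gradients of Eqs. (11)-(18)
    opt.step()        # one step toward the argmin of Eq. (10)
    return loss.item()
```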

5 Experiments

We detail the experimental setup in Sec. 5.1, followed by an evaluation of SAFE in Sec. 5.2.

5.1 Experimental Setup

We detail (I) the data used in our experiments, (II) the baselines SAFE is compared to, and (III) implementation details such as how the data was pre-processed and how hyper-parameters were set.

5.1.1 Datasets

Our experiments are conducted on two well-established public benchmark datasets for fake news detection (https://github.com/KaiDMML/FakeNewsNet) [16]. News articles in the datasets are respectively collected from PolitiFact and GossipCop. PolitiFact (politifact.com) is a well-known non-profit fact-checking website for political statements and reports in the U.S. [22]. GossipCop (gossipcop.com) is a website that fact-checks celebrity reports and entertainment stories published in magazines and newspapers. News articles in the PolitiFact dataset were published from May 2002 to July 2018, and those in the GossipCop dataset were published from July 2000 to December 2018. Ground-truth labels (fake or true) of news articles in both datasets were provided by domain experts, which guarantees the quality of the labels. Statistics of the two datasets are provided in Tab. 1.

                                  PolitiFact                  GossipCop
                           Fake    True   Overall      Fake     True    Overall
# News articles             432     624     1,056     5,323   16,817     22,140
  – with textual info.      420     528       948     4,947   16,694     21,641
  – with visual info.       336     447       783     1,650   16,767     18,417

Table 1: Data Statistics

5.1.2 Baselines

We compare SAFE to the following baselines, which detect fake news using (i) textual information (LIWC [10]), (ii) visual information (VGG-19 [17]), or (iii) multi-modal information (att-RNN [4]):

  • LIWC [10]: LIWC is a widely accepted psycho-linguistic lexicon. Given a news story, LIWC counts the words in the text falling into one or more of over 80 linguistic, psychological, and topical categories. These counts act as hand-crafted features used by, e.g., random forest to predict fake news;

  • VGG-19 (https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models) [17]: VGG-19 is a widely used CNN with 19 layers for image classification. We use a fine-tuned VGG-19 as one of the baselines; and

  • att-RNN [4]: att-RNN is a deep neural network model applicable to multi-modal fake news detection. It employs LSTM and VGG-19 with an attention mechanism to fuse the textual, visual, and social-context features of news articles. We set its hyper-parameters the same as those in [4] and exclude the social-context features for a fair comparison.

We also include the following variants of the proposed method:

  • SAFE\T: the proposed method without using textual information;

  • SAFE\V: the proposed method without using visual information;

  • SAFE\S: the proposed method without capturing the relationship (similarity) between news textual and visual information. In this case, the extracted multi-modal features of each news article are fused by concatenating them; and

  • SAFE\W: the proposed method when only the relationship between textual and visual information is assessed. In this case, the classifier is directly connected to the output of the cross-modal similarity extraction module, i.e., $\hat{\mathbf{y}} = \mathrm{softmax}(\mathbf{w}_s \mathcal{M}_s(\mathbf{t}, \mathbf{v}) + \mathbf{b}_s)$, where $\mathbf{w}_s$ and $\mathbf{b}_s$ are parameters.

5.1.3 Implementation Details

In our experiments, each dataset was separated into 80% for training and 20% for testing based on the publication dates of the news articles, where the most recently published articles were treated as test data. Five-fold cross-validation was used for model training. We set the number of iterations to 100; the learning rate and the strides (window sizes $h$) of the convolutional filters were fixed as hyper-parameters.
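A minimal sketch of this evaluation protocol, assuming each article is a dict with a "date" field (an assumption about the dataset schema): hold out the newest 20% of articles for testing, then run five-fold cross-validation on the remainder.

```python
from sklearn.model_selection import KFold

def date_based_split(articles, test_ratio=0.2):
    """Hold out the newest `test_ratio` of articles as the test set."""
    articles = sorted(articles, key=lambda a: a["date"])  # "date" is an assumed field
    cut = int(len(articles) * (1 - test_ratio))
    return articles[:cut], articles[cut:]                 # older -> train, newer -> test

# Toy usage with an assumed list of article dicts:
articles = [{"date": "2017-05-0%d" % d, "label": d % 2} for d in range(1, 10)]
train_set, test_set = date_based_split(articles)
for fold, (tr_idx, va_idx) in enumerate(KFold(n_splits=5).split(train_set)):
    pass  # train on the tr_idx portion, validate on the va_idx portion
```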

5.2 Performance Analysis

We evaluate the general performance of SAFE by comparing it with (I) state-of-the-art fake news detection methods and (II) its own variants. Next, (III) the parameters within SAFE are analyzed, and (IV) case studies are presented to validate its effectiveness. We use accuracy, precision, recall, and $F_1$ score to evaluate how well the representation and prediction perform.

                   LIWC†   VGG-19‡  att-RNN★  SAFE\T‡  SAFE\V†  SAFE\S★  SAFE\W★  SAFE★
PolitiFact
  Acc.             0.822   0.649    0.769     0.674    0.721    0.796    0.738    0.874
  Pre.             0.785   0.668    0.735     0.680    0.740    0.826    0.752    0.889
  Rec.             0.846   0.787    0.942     0.873    0.831    0.801    0.844    0.903
  F1               0.815   0.720    0.826     0.761    0.782    0.813    0.795    0.896
GossipCop
  Acc.             0.836   0.775    0.743     0.721    0.802    0.814    0.812    0.838
  Pre.             0.878   0.775    0.788     0.734    0.853    0.875    0.853    0.857
  Rec.             0.317   0.970    0.913     0.974    0.883    0.872    0.901    0.937
  F1               0.466   0.862    0.846     0.837    0.868    0.874    0.876    0.895

  † Text-based methods; ‡ image-based methods; ★ multi-modal methods.

Table 2: Performance of Methods in Detecting Fake News

5.2.1 General Performance Analysis

The general performance of SAFE and the baselines is provided in Tab. 2. The results indicate that, when predicting fake news, SAFE outperforms all baselines in terms of accuracy and $F_1$ score on both datasets. On PolitiFact data, SAFE is followed by LIWC, att-RNN, and VGG-19 in accuracy; on GossipCop data, SAFE is followed by VGG-19, att-RNN, and LIWC in $F_1$ score. Note that multiple supervised learners (such as SVM, decision tree, logistic regression, and $k$-NN) were paired with LIWC features in our experiments; we present the best performance (obtained with random forest) in Tab. 2.

Figure 3: Module Analysis ((a) PolitiFact, (b) GossipCop)
Figure 4: Parameter Analysis ((a) PolitiFact, (b) GossipCop)

5.2.2 Module Analysis

The performance of SAFE and its variants is presented in Tab. 2 and Fig. 3. The results indicate that, when predicting fake news, (1) integrating news textual information, visual information, and their relationship (SAFE) performs best among all variants; (2) using multi-modal information (SAFE\S or SAFE\W) performs better than using single-modal information (SAFE\T or SAFE\V); (3) independently using multi-modal information (SAFE\S) and mining the cross-modal relationship (SAFE\W) perform comparably; and (4) textual information (SAFE\V) is more important than visual information (SAFE\T).

5.2.3 Parameter Analysis

In Eq. (9), $\alpha$ and $\beta$ are used to allocate the relative importance between the extracted multi-modal features ($\mathcal{L}_p$) and the similarity across modalities ($\mathcal{L}_s$). To assess their influence on method performance, we changed the values of $\alpha$ and $\beta$ respectively from 0 to 1 with a step size of 0.2. The results in Fig. 4 show that the various parameter values lead to an accuracy of SAFE ranging from 0.75 to 0.85 (and an $F_1$ score from 0.8 to 0.9) on both datasets. The proposed method performs best when both $\alpha$ and $\beta$ are nonzero on each dataset, which again validates the importance of both multi-modal information and the cross-modal relationship in predicting fake news.
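A small sketch of this sensitivity analysis, assuming a hypothetical `evaluate` helper that trains SAFE with the given loss weights and returns its test accuracy (not a function provided by the paper or any library):

```python
import numpy as np

def grid_search(evaluate, step=0.2):
    """Vary the loss weights of Eq. (9) over {0, 0.2, ..., 1.0} and keep the best.

    `evaluate(alpha=..., beta=...)` is an assumed callable returning accuracy.
    """
    grid = np.round(np.arange(0.0, 1.0 + step, step), 1)
    scores = {(a, b): evaluate(alpha=a, beta=b) for a in grid for b in grid}
    best = max(scores, key=scores.get)   # (alpha, beta) with highest accuracy
    return best, scores
```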

5.2.4 Case Study

In our case studies, we aim to answer the following questions: are there real-world fake news stories whose textual and visual information are not closely related to each other? If so, can SAFE correctly recognize such irrelevance and further recognize their falsity? For this purpose, we went through the news articles in the two datasets and compared their ground-truth labels with the similarity scores computed by SAFE. Several examples are presented in Figs. 5-6. It can be observed that (I) a gap between textual and visual information exists for some fictitious stories for (but not limited to) two reasons. First, such stories are difficult to support with non-manipulated images. An example is in Fig. 5(a), where no voting- or bill-related image is actually available. Compared to couples in a real intimate relationship (see Fig. 6(c)), the fake ones often have rare group photos or use collages (see Fig. 5(c)). Second, using “attractive” though not closely relevant images can help increase news traffic. For example, the fake news in Fig. 5(b) includes an image of a smiling individual that conflicts with the death story. (II) SAFE helps correctly assess the relationship (similarity) between news textual and visual information. For the fake news stories in Fig. 5, the corresponding similarity scores are all low, and SAFE correctly labels them as fake news. Similarly, SAFE assigns all true news stories in Fig. 6 a high similarity score and predicts them as true news.

Figure 5: Fake News examples ((a)–(c))
Figure 6: True News examples ((a)–(c))

6 Conclusion

In this work, a similarity-aware multi-modal method, named SAFE, is proposed to predict fake news. The method extracts both textual and visual features of news content and investigates their relationship. Experimental results indicate that multi-modal features and the cross-modal relationship (similarity) are valuable, with comparable importance, in fake news detection. The case studies conducted further validate the effectiveness of the proposed method in assessing such similarity and predicting fake news. Nevertheless, we should point out that the proposed method investigates textual and visual information without considering, e.g., network and video information. Additionally, relationships within modalities are valuable as well, such as the textual (or visual) similarity among or between pairwise news articles. Both directions will be part of our future work.

References

  • [1] C. Castillo, M. Mendoza, and B. Poblete (2011) Information credibility on twitter. In The World Wide Web Conference, pp. 675–684. Cited by: §1, §2.0.2.
  • [2] B. D. Horne, J. Nørregaard, and S. Adalı (2019) Different Spirals of Sameness: A Study of Content Sharing in Mainstream and Alternative Media. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 13, pp. 257–266. Cited by: §2.0.1.
  • [3] K. H. Jamieson and J. N. Cappella (2008) Echo chamber: Rush Limbaugh and the conservative media establishment. Oxford University Press. Cited by: §1.
  • [4] Z. Jin, J. Cao, H. Guo, Y. Zhang, and J. Luo (2017) Multimodal fusion with recurrent neural networks for rumor detection on microblogs. In Proceedings of the 2017 ACM on Multimedia Conference, pp. 795–816. Cited by: §1, §2.0.1, §3.1.2, 3rd item, §5.1.2.
  • [5] Z. Jin, J. Cao, Y. Zhang, J. Zhou, and Q. Tian (2017) Novel visual and statistical image features for microblogs news verification. IEEE Transactions on Multimedia 19 (3), pp. 598–608. Cited by: §1, §2.0.1.
  • [6] H. Karimi and J. Tang (2019) Learning hierarchical discourse-level structure for fake news detection. arXiv preprint arXiv:1903.07389. Cited by: §2.0.1.
  • [7] D. Khattar, J. S. Goud, M. Gupta, and V. Varma (2019) MVAE: Multimodal Variational Autoencoder for Fake News Detection. In WWW, pp. 2915–2921. Cited by: §2.0.1.
  • [8] Y. Kim (2014) Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Cited by: §3.1.1.
  • [9] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Cited by: §3.1.1.
  • [10] J. W. Pennebaker, R. L. Boyd, K. Jordan, and K. Blackburn (2015) The development and psychometric properties of LIWC2015. Technical report. Cited by: §2.0.1, 1st item, §5.1.2.
  • [11] V. Pérez-Rosas, B. Kleinberg, A. Lefevre, and R. Mihalcea (2017) Automatic detection of fake news. arXiv preprint arXiv:1708.07104. Cited by: §2.0.1.
  • [12] M. Potthast, J. Kiesel, K. Reinartz, J. Bevendorff, and B. Stein (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638. Cited by: §2.0.1.
  • [13] F. Qian, C. Gong, K. Sharma, and Y. Liu (2018) Neural User Response Generator: Fake News Detection with Collective User Intelligence.. In IJCAI, pp. 3834–3840. Cited by: §1, §2.0.2.
  • [14] V. L. Rubin and T. Lukoianova (2015) Truth and deception at the rhetorical structure level. Journal of the Association for Information Science and Technology 66 (5), pp. 905–917. Cited by: §2.0.1.
  • [15] N. Ruchansky, S. Seo, and Y. Liu (2017) CSI: a hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 797–806. Cited by: §1, §2.0.2.
  • [16] K. Shu, D. Mahudeswaran, S. Wang, D. Lee, and H. Liu (2018) FakeNewsNet: a data repository with news content, social context and dynamic information for studying fake news on social media. arXiv preprint arXiv:1809.01286. Cited by: §5.1.1.
  • [17] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: 2nd item, §5.1.2.
  • [18] Q. Truong and H. Lauw (2019) Multimodal review generation for recommender systems. In The World Wide Web Conference, pp. 1864–1874. Cited by: §2.0.1.
  • [19] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan (2016) Show and tell: lessons learned from the 2015 mscoco image captioning challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (4), pp. 652–663. Cited by: §3.1.2.
  • [20] S. Vosoughi, D. Roy, and S. Aral (2018) The spread of true and false news online. Science 359 (6380), pp. 1146–1151. Cited by: §1, §2.0.2.
  • [21] H. Wang, D. Sahoo, C. Liu, E. Lim, and S. C. Hoi (2019) Learning cross-modal embeddings with adversarial networks for cooking recipes and food images. In CVPR, pp. 11572–11581. Cited by: §2.0.1.
  • [22] W. Y. Wang (2017) “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648. Cited by: §5.1.1.
  • [23] Y. Wang, F. Ma, Z. Jin, Y. Yuan, G. Xun, K. Jha, L. Su, and J. Gao (2018) EANN: Event Adversarial Neural Networks for Multi-Modal Fake News Detection. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 849–857. Cited by: §1, §2.0.1, §3.1.2.
  • [24] Y. Yang, L. Zheng, J. Zhang, Q. Cui, Z. Li, and P. S. Yu (2018) TI-cnn: convolutional neural networks for fake news detection. arXiv preprint arXiv:1806.00749. Cited by: §1, §2.0.1.
  • [25] R. Zafarani, X. Zhou, K. Shu, and H. Liu (2019) Fake News Research: Theories, Detection Strategies, and Open Problems. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 3207–3208. Cited by: §1.
  • [26] X. Zhou, A. Jain, V. V. Phoha, and R. Zafarani (2019) Fake News Early Detection: A Theory-driven Model. arXiv preprint arXiv:1904.11679. Cited by: §1, §2.0.1.
  • [27] X. Zhou and R. Zafarani (2018) Fake news: a survey of research, detection methods, and opportunities. arXiv preprint arXiv:1812.00315. Cited by: §1.