Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks for Early Rumor Detection

04/20/2017 ∙ by Tong Chen, et al. ∙ UNSW, Deakin University, The University of Queensland

The proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors in their early stage of diffusion is known as early rumor detection, which deals with sequential posts regarding disputed factual claims that exhibit contextual variations and high textual duplication over time. Identifying trending rumors thus demands an efficient yet flexible model that can capture long-range dependencies among postings and produce distinct representations for accurate early detection. However, applying conventional classification algorithms to early rumor detection is challenging, since they rely on hand-crafted features that require intensive manual effort when the number of posts is large. This paper presents a deep attention model based on recurrent neural networks (RNNs) to selectively learn temporal hidden representations of sequential posts for identifying rumors. The proposed model embeds soft attention into the recurrence to simultaneously pool distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep attention based RNN model outperforms state-of-the-art methods that rely on hand-crafted features; (2) the soft attention mechanism can effectively distill the parts of posts relevant to rumors in advance; (3) the proposed method detects rumors more quickly and accurately than its competitors.




1. Introduction

The explosive use of contemporary social media in communication has witnessed the widespread dissemination of rumors, which can pose a threat to cyber security and social stability. For instance, on April 23rd 2013, a fake news story claiming that two explosions had happened in the White House and Barack Obama had been injured was posted by a hacked Twitter account named Associated Press. Although the White House and Associated Press assured the public minutes later that the report was not true, the fast diffusion to millions of users had already caused severe social panic, resulting in a loss of $136.5 billion in the stock market. This incident of a false rumor showcases the vulnerability of social media to rumors, and highlights the practical value of automatically predicting the veracity of information.

(a) Streaming posts in regards to an event
(b) Statistics on textual phrases
Figure 1. Posts from users on social media platforms exhibit duplication to a great extent. For a specific event, e.g., “Trump being Disqualified from U.S. Election”, the phrases “Donald Trump”, “Obama” and “Disqualified” appear very frequently in disputed postings.

Debunking rumors at their formative stage is particularly crucial to minimizing their catastrophic effects. Most existing rumor detection models employ learning algorithms that incorporate a wide variety of features and formulate rumor detection as a binary classification task. They commonly craft features manually from the content, sentiment (Zimbra et al., 2016), user profiles (Zafarani and Liu, 2015; Wang et al., 2017, 2015), and diffusion patterns of the posts (Wu et al., 2015; Ma et al., 2015; Liu et al., 2015; Wang et al., 2013, 2017; Kwon et al., 2013). Embedding social graphs into a classification model also helps distinguish malicious user comments from normal ones (Rayana and Akoglu, 2016, 2015). These approaches aim at extracting distinctive features to describe rumors faithfully. However, feature engineering is extremely time-consuming, biased, and labor-intensive. Moreover, hand-crafted features are data-dependent, making them incapable of capturing contextual variations across different posts.

Figure 2.

Schematic overview of our framework. For each event, posts are collected in sequence and transformed into their tf-idf vectors. Then, deep recurrent neural networks augmented with a soft-attention mechanism are deployed to derive temporal latent representations by capturing long-term dependencies among post series and selectively focusing on important relevance. An additional layer on top of the learned representations determines whether the event is a rumor or not.

Closer examination of rumors reveals that social posts related to an event under discussion arrive in the form of time series, wherein users forward or comment on the event continuously over time. As shown in Fig. 1(a), posts regarding a US presidency event arrive continuously along the event’s timeline. Thus, to handle time series of posts, descriptive features should be extracted from their contexts. However, as shown in Fig. 1(b), users’ posts exhibit high duplication in their textual phrases due to repeated forwarding, reviews, and/or inquiry behavior (Zhao et al., 2015). This poses the challenge of efficiently distilling distinct information from duplication while remaining flexible enough to capture contextual variations as the rumor diffuses over time.

The propagation of information on social media has temporal characteristics, yet most existing rumor detection methodologies either ignore this crucial property or are unable to capture the temporal dimension of the data. One exception is (Ma et al., 2016), where Ma et al. use an RNN to capture the dynamic temporal signals of rumor diffusion and learn textual representations under supervision. However, as rumor diffusion evolves over time, users tend to comment differently at various stages, such as shifting from expressing surprise to questioning, or from believing to debunking. As a consequence, textual features may change their importance with time, and we need to determine which of them matter more for the detection task. On the other hand, the existence of duplication in textual phrases impedes the efficiency of training a deep network. Although some studies on duplication detection are available and effective in different tasks (Song et al., 2011; Zhou et al., 2017; Liu et al., 2013), these approaches are not applicable in our case, where the duplication cannot be determined beforehand but rather varies across post series over time. In this sense, the two aspects of long-term temporal characteristics and dynamic duplication should be addressed simultaneously in an early rumor detection model.

1.1. Challenges and Our Approach

In summary, there are three challenges in early rumor detection to be addressed: (1) automatically learning representations for rumors instead of using labor-intensive hand-crafted features; (2) the difficulty of maintaining long-range dependencies among variable-length post series to build their internal representations; (3) the issue of high duplication compounded with varied contextual focus. To combat these challenges, we propose a novel deep attention based recurrent neural network (RNN) for early detection of rumors, named CallAtRumors (Call Attention to Rumors). The overview of our framework is illustrated in Fig. 2. Our model processes streaming textual sequences constructed by encoding contextual information from posts related to one event into a series of feature matrices. Then, the RNN with attention mechanism automatically learns latent representations by feed-forwarding each input weighted by an attention probability distribution while adapting to contextual variations. Finally, an additional hidden layer over the learned latent representations predicts whether the event is a rumor or not.

Our framework is premised on RNNs, which have proved effective in recent machine learning tasks on sequential data (Graves, 2013; Ba et al., 2015). This offers us the opportunity to automatically explore deep feature representations from the original inputs for efficient rumor detection, thus avoiding the complexity of feature engineering. With the attention mechanism, the proposed approach is able to selectively associate more importance with relevant features. Hence, we are able to tackle the problem of high textual duplication while ensuring efficient feature learning for early detection.

1.2. Contributions

The main contributions of our work are summarized as follows:

  • We propose a deep attention based model that learns to perform rumor detection automatically at an early stage. The model is based on RNNs, and is capable of learning continuous hidden representations by capturing long-range dependencies and contextual variations of posting series.

  • The deterministic soft-attention mechanism is embedded into recurrence to enable distinct feature extraction from high duplication and advanced importance focus that varies over time.

  • We quantitatively validate the effectiveness of attention in terms of detection accuracy and earliness by comparing with state-of-the-art methods on two real social media datasets: Twitter and Weibo.

The rest of the paper is organized as follows. Section 2 and Section 3 present the relationship to existing work and preliminaries on RNNs. We introduce the main intuition and formulate the problem in Section 4. Section 5 discusses the experiments and the results on effectiveness and earliness. We conclude this paper in Section 6 and point out future directions.

2. Related Work

Our work is closely connected with early rumor detection and attention mechanism. We will briefly introduce the two aspects in this section.

2.1. Early Rumor Detection

The problem of rumor detection (Castillo et al., 2011) can be cast as a binary classification task, where the extraction and selection of discriminative features significantly affects the performance of the classifier. Hu et al. first conducted a study to analyze the sentiment differences between spammers and normal users and then presented an optimization formulation that incorporates sentiment information into a novel social spammer detection framework (Hu et al., 2014). The propagation patterns of rumors were also exploited by Wu et al., who used a message propagation tree in which each node represents a text message to classify whether the root of the tree is a rumor or not (Wu et al., 2015). In (Ma et al., 2015), a dynamic time series structure was proposed to capture temporal features based on the time series context information generated over every rumor’s life-cycle. However, these approaches require daunting manual effort in feature engineering and are restricted by the data structure.

Early rumor detection aims to detect viral rumors in their formative stages so that early action can be taken (Sampson et al., 2016). In (Zhao et al., 2015), some very rare but informative enquiry phrases play an important role in feature engineering when combined with clustering and a classifier on the clusters, as they shorten the time for spotting rumors. Manually defined features have also shown their importance in the research on real-time rumor debunking by Liu et al. (Liu et al., 2015). By contrast, Wu et al. proposed a sparse learning method to automatically select discriminative features as well as train the classifier for emerging rumors (Wu et al., 2016). As those methods neglect the temporal trait of social media data, a time-series based feature structure (Ma et al., 2015) was introduced to seize context variation over time. Recently, recurrent neural networks were first introduced to rumor detection by Ma et al. (Ma et al., 2016), utilizing sequential data to spontaneously capture the temporal textual characteristics of rumor diffusion, which helps detect rumors earlier and more accurately. However, without abundant data with differentiable contents in the early stage of a rumor, the performance of these methods drops significantly because they fail to distinguish important patterns.

2.2. Attention Mechanism

Attention is a rising technique in natural language processing (NLP) (Rocktäschel et al., 2015; Yang et al., 2016; Sutskever et al., 2014). Bahdanau et al. extended the basic encoder-decoder architecture of neural machine translation with an attention mechanism that allows the model to automatically search for parts of a source sentence that are relevant to predicting a target word (Bahdanau et al., 2014), achieving comparable performance on the English-to-French translation task. Vinyals et al. improved the attention model of (Bahdanau et al., 2014) so that it computes an attention vector reflecting how much attention should be put over the input words, boosting performance on large-scale translation (Vinyals et al., 2015). In addition, Sharma et al. applied a location softmax function (Sharma et al., 2015) to the hidden states of the LSTM (Long Short-Term Memory) layer, thus recognizing more valuable elements in sequential inputs for action recognition. Motivated by these successful applications of the attention mechanism, we find that attention-based techniques can help better detect rumors in terms of both effectiveness and earliness, because they are sensitive to distinctive textual features.

3. Recurrent Neural Networks

Recurrent neural networks, or RNNs (Rumelhart et al., 1988), are a family of neural networks for processing sequential data, such as a sequence of values x_1, ..., x_T. RNNs process an input sequence one element at a time, updating a hidden unit h_t, a “state vector” that implicitly contains information about the history of all past elements of the sequence, and generating an output vector o_t (LeCun et al., 2015). Forward propagation begins with a specification of the initial state h_0; then, for each time step t from 1 to T, the following update equations are applied (Goodfellow et al., 2016):

h_t = tanh(b + W h_{t-1} + U x_t),
o_t = c + V h_t,

where the parameters U, V and W are the weight matrices for the input-to-hidden, hidden-to-output and hidden-to-hidden connections, respectively; b and c are the bias vectors; and tanh(·) is the hyperbolic tangent non-linearity.
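To make the update concrete, the vanilla RNN forward pass can be sketched in a few lines of NumPy (a minimal illustration with toy dimensions; all variable names here are ours, not from the paper):

```python
import numpy as np

def rnn_forward(x_seq, U, V, W, b, c):
    """Vanilla RNN: h_t = tanh(b + W h_{t-1} + U x_t), o_t = c + V h_t."""
    h = np.zeros(W.shape[0])                # initial state h_0
    outputs = []
    for x_t in x_seq:
        h = np.tanh(b + W @ h + U @ x_t)    # hidden state update
        outputs.append(c + V @ h)           # output at time t
    return np.array(outputs), h

# toy example: 3 time steps, 4-dim inputs, 5 hidden units, 2 outputs
rng = np.random.default_rng(0)
x_seq = rng.normal(size=(3, 4))
U, W = rng.normal(size=(5, 4)), rng.normal(size=(5, 5))
V = rng.normal(size=(2, 5))
b, c = np.zeros(5), np.zeros(2)
outs, h_last = rnn_forward(x_seq, U, V, W, b, c)
```

The tanh keeps each hidden coordinate bounded in (-1, 1), which is also why gradients can vanish over long sequences, motivating the LSTM discussed next.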

The gradient computation of RNNs involves performing back-propagation through time (BPTT) (Rumelhart et al., 1988). In practice, a standard RNN is difficult to train due to the well-known vanishing or exploding gradients, caused by the incapability of RNNs to capture long-distance temporal dependencies under gradient based optimization (Bengio et al., 1994; Wu et al., 2017). An effective solution to this training difficulty is to include “memory” cells that store information over time, known as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997; Graves, 2013). In this work, we employ the LSTM as the basic unit to capture long-term temporal dependencies among streaming variable-length post series.

4. CallatRumors: Early Rumor Detection with Deep Attention based RNN

(a) An LSTM unit (b) The soft attention mechanism (c) The proposed deep attention based recurrent model
Figure 3. (a) An LSTM cell. Each cell learns how to weigh its input components (input gate) and how to modulate the contribution of that input to the memory (input modulator). It also learns weights which erase the memory cell (forget gate), and weights which control how this memory should be emitted (output gate). (b) The attention module computes the current input as an average of the tf-idf features weighted according to the attention softmax. (c) At each time stamp, the proposed model takes the feature slice as input, propagates it through stacked layers of LSTM, and predicts the next location probability and class label.

In this section, we present the details of our framework with deep attention for classifying social textual events into rumors and non-rumors. First, we introduce a strategy that converts the incoming streams of social posts into continuous variable-length time series. Then, we describe the soft attention mechanism, which can be embedded into recurrent neural networks to selectively focus on textual cues and learn distinct representations for rumor/non-rumor binary classification.

4.1. Problem Statement

Individual social posts contain very limited content due to their shortness in context. On the other hand, a claim is generally associated with a number of posts that are relevant to it, and these relevant posts can be easily collected to describe the central content more faithfully. Hence, we are interested in detecting rumors at an aggregate level instead of identifying each single post (Ma et al., 2016). In other words, we focus on detecting rumors at the event level, wherein sequential posts related to the same topic are batched together to constitute an event, and our model determines whether the event is a rumor or not.

Let E = {E_i} denote the set of given events, where each event E_i comprises all of its relevant posts across time stamps; the task is to classify each event as a rumor or not.

4.2. Constructing Variable-Length Post Series

For each event, we collect a set of relevant post series to be the input of our model for learning latent representations. Within every event, posts are divided into time intervals, each of which is regarded as a batch, because it is not practical to deal with each post individually at such a large scale. To ensure a similar word density for each time step within one event, we group posts into batches according to a fixed post amount rather than slicing the event time span evenly.

Algorithm 1 describes the construction of variable-length post series. Specifically, for every event, post series are constructed with variable lengths due to the different amounts of posts relevant to different events. We set a minimum series length L to maintain the sequential property for all events, and a fixed post amount N per interval. For events containing no less than N × L posts, we iteratively take the first N remaining posts and feed them into a time interval; the remaining posts are treated as the last time interval. For events containing less than N × L posts, we put an equal share of posts into each of the first L − 1 intervals and assign the rest to the last interval.

To model different words, we calculate the tf-idf (Term Frequency-Inverse Document Frequency) for the K most frequent vocabularies within all posts. Proved to be an effective and lightweight textual feature, tf-idf is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus (Leskovec et al., 2014). In this case, for each post we have a K-word dictionary reflecting the importance of every word in the post, where the value is 0 if the word never appears in the post. Finally, every post is encoded by the corresponding K-word tf-idf vector, and within a specific interval a matrix of N posts by K features can be constructed as the input of our model. If there are fewer than N posts within an interval, we expand it to the same scale by padding with 0s. Hence, each set of post series consists of at least L feature matrices of the same size, N (number of posts) × K (vocabulary feature dimension).
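As an illustration of this encoding, the following is a minimal tf-idf computation over a toy corpus (our own simplified weighting over a tiny fixed vocabulary; the paper's exact scheme, K, and preprocessing are not reproduced here):

```python
import math
from collections import Counter

def tfidf_matrix(posts, vocab):
    """Encode each post as a K-dim tf-idf vector over a fixed vocabulary."""
    n = len(posts)
    tokenized = [p.lower().split() for p in posts]
    # document frequency of each vocabulary word across the posts
    df = {w: sum(1 for toks in tokenized if w in toks) for w in vocab}
    matrix = []
    for toks in tokenized:
        tf = Counter(toks)
        # tf (normalized count) times idf; 0 when the word never occurs
        row = [tf[w] / len(toks) * math.log(n / df[w]) if df[w] else 0.0
               for w in vocab]
        matrix.append(row)
    return matrix  # shape: (number of posts) x K

posts = ["trump disqualified from election",
         "obama says trump disqualified",
         "is this real news"]
vocab = ["trump", "disqualified", "obama", "news"]
M = tfidf_matrix(posts, vocab)
```

Stacking such rows for the N posts of one interval yields the N × K feature matrix described above.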

Input : Event-related posts P, post amount N per interval, minimum series length L
Output : Post series S
1  /* Initialization */
2  S ← ∅; i ← 1;
3  if |P| ≥ N × L then
4        while more than N posts remain in P do
5              s_i ← the first N remaining posts of P;
6              remove s_i from P;
7              S ← S ∪ {s_i};
8              i ← i + 1;
9        end while
10       s_i ← the remaining posts; S ← S ∪ {s_i};
11 else
12       while i < L do
13             s_i ← the next ⌊|P|/L⌋ posts of P;
14             S ← S ∪ {s_i};
15             i ← i + 1;
16       end while
17       s_L ← the remaining posts; S ← S ∪ {s_L};
18 end if
return S;
Algorithm 1 Constructing Variable-Length Post Series
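A minimal Python rendering of this batching procedure (variable names and boundary handling reflect our reading of Algorithm 1, not the authors' code):

```python
def build_post_series(posts, n=50, min_len=5):
    """Group an event's posts into time intervals of n posts each.

    Events with at least n * min_len posts are cut into chunks of n
    (any remainder joins the final interval); smaller events are split
    into exactly min_len roughly equal intervals.
    """
    if len(posts) >= n * min_len:
        series = [posts[i:i + n] for i in range(0, len(posts), n)]
        if len(series) > 1 and len(series[-1]) < n:
            series[-2].extend(series.pop())  # fold remainder into last interval
    else:
        size = max(1, len(posts) // min_len)
        series = [posts[i * size:(i + 1) * size] for i in range(min_len - 1)]
        series.append(posts[(min_len - 1) * size:])  # rest goes to last interval
    return series

big = build_post_series(list(range(520)), n=50, min_len=5)
small = build_post_series(list(range(23)), n=50, min_len=5)
```

Both branches preserve every post and guarantee at least min_len intervals for small events, matching the variable-length property the text describes.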

4.3. Long Short-Term Memory (LSTM) with Deterministic Soft Attention Mechanism

To capture long-distance temporal dependencies among continuous-time post series, we employ the Long Short-Term Memory (LSTM) unit (Graves, 2013; Zaremba et al., 2014; Xiang et al., 2017) to learn high-level discriminative representations for rumors. The structure of the LSTM is formulated as

i_t = σ(W_i x_t + U_i h_{t-1} + b_i),
f_t = σ(W_f x_t + U_f h_{t-1} + b_f),
o_t = σ(W_o x_t + U_o h_{t-1} + b_o),
g_t = tanh(W_g x_t + U_g h_{t-1} + b_g),
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t,
h_t = o_t ⊙ tanh(c_t),

where σ(·) is the logistic sigmoid function, and i_t, f_t, o_t, g_t are the input gate, forget gate, output gate and cell input activation vector, respectively. Each of them has corresponding input-to-hidden and hidden-to-hidden weight matrices W and U, and a bias vector b. The LSTM architecture is essentially a memory cell which can maintain its state over time, with non-linear gating units regulating the information flow into and out of the cell (Greff et al., 2016). An LSTM unit is shown graphically in Fig. 3(a).
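For reference, a single LSTM step following the equations above can be written in NumPy as (a generic textbook cell, not the authors' exact implementation; the stacked-weight layout is our choice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM update: gates i, f, o, candidate g, new cell c and hidden h."""
    W, U, b = params["W"], params["U"], params["b"]  # stacked rows for 4 gates
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])          # cell input activation vector
    c = f * c_prev + i * g        # gated memory update
    h = o * np.tanh(c)            # emitted hidden state
    return h, c

rng = np.random.default_rng(1)
H, D = 3, 4
params = {"W": rng.normal(size=(4 * H, D)),
          "U": rng.normal(size=(4 * H, H)),
          "b": np.zeros(4 * H)}
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), params)
```

Because the forget gate can hold c near its previous value, gradients along the cell state avoid the repeated squashing that causes vanishing gradients in the plain RNN.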

In Eq.(2), the context vector x_t is a dynamic representation of the relevant part of the social post input at time t. To calculate x_t, we introduce an attention weight l_{t,i} corresponding to the feature extracted at each element position i of a tf-idf matrix X_t. Specifically, at each time stamp t, our model predicts l_{t+1}, a softmax over the K positions, and y_{t+1}, a softmax over the binary classes of rumors and non-rumors, with an additional hidden layer (see Fig. 3(c)). The location softmax (Sharma et al., 2015) is applied over the hidden states of the last LSTM layer to calculate l_{t+1}, the attention weight for the next input matrix:

l_{t,i} = p(L_t = i | h_{t-1}) = exp(w_i⊤ h_{t-1}) / Σ_{j=1}^{K} exp(w_j⊤ h_{t-1}),   i = 1, ..., K,   (3)

where l_{t,i} is the attention probability for the i-th element (word index) at time step t, w_i is the weight allocated to the i-th element, and L_t is a random variable which represents the word index and takes 1-of-K values. The attention vector l_t is a probability distribution representing the importance attached to each word in the input matrix X_t. Our model is optimized to assign higher focus to words that are believed to be distinct in learning rumor/non-rumor representations. After calculating these probabilities, the soft deterministic attention mechanism (Bahdanau et al., 2014) computes the expected value of the input at the next time step by taking an expectation over the word matrix at different positions:

x_{t+1} = E[X_{t+1}] = Σ_{i=1}^{K} l_{t+1,i} X_{t+1,i},   (4)

where X_{t+1} is the input matrix at time step t+1 and X_{t+1,i} is the feature vector of the i-th position in X_{t+1}. Thus, Eq.(4) formulates a deterministic attention model by computing a soft attention weighted word vector. This corresponds to feeding a soft-l-weighted context into the system; the whole model is smooth and differentiable under this deterministic attention, so end-to-end learning with standard back-propagation is straightforward.
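Eq.(3) and Eq.(4) together reduce to a softmax over positions followed by a weighted average, which can be sketched as (dimensions and names here are ours, chosen for illustration):

```python
import numpy as np

def soft_attention(h_prev, X_next, W_att):
    """Location softmax over K positions, then the expected input vector.

    h_prev : (H,)   last hidden state of the top LSTM layer
    X_next : (K, D) next tf-idf input matrix (K word positions, D features)
    W_att  : (K, H) one weight row per word position
    """
    scores = W_att @ h_prev               # one score per position
    l = np.exp(scores - scores.max())     # numerically stable softmax
    l = l / l.sum()                       # attention distribution l_{t,i}
    x_next = l @ X_next                   # expectation over positions, Eq.(4)
    return l, x_next

rng = np.random.default_rng(2)
K, H, D = 6, 3, 4
l, x = soft_attention(rng.normal(size=H), rng.normal(size=(K, D)),
                      rng.normal(size=(K, H)))
```

Because the expectation is a smooth function of the scores, gradients flow through l to W_att, which is precisely what makes the soft variant trainable by back-propagation.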

We remark that attention models can be classified into soft and hard attention models. Soft attention models are deterministic and can be trained by back-propagation, whereas hard attention models are stochastic, and training them requires the REINFORCE algorithm (Mnih et al., 2014), maximizing a variational lower bound, or importance sampling (Ba et al., 2015; Xu et al., 2015). If we used a hard attention model, we would sample from the softmax distribution of Eq.(3), and the input would then be the feature at the sampled location instead of an expectation over all elements of the input matrix. Since hard attention is not differentiable and has to resort to sampling, we deploy soft attention in our model.

4.4. Loss Function and Model Training

In model training, we employ a cross-entropy loss coupled with the doubly stochastic regularization (Xu et al., 2015) that encourages the model to pay attention to every element of the input word matrix. This imposes an additional constraint over the location softmax, so that Σ_t l_{t,i} ≈ 1. The loss function is defined as follows:

L = −Σ_{t=1}^{T} Σ_{c=1}^{C} y_{t,c} log ŷ_{t,c} + λ Σ_{i=1}^{K} (1 − Σ_{t=1}^{T} l_{t,i})² + γ Σ_θ θ²,   (5)

where y_t is the one-hot label vector, ŷ_t is the vector of binary class probabilities at time stamp t, T is the total number of time stamps, C is the number of output classes (rumor or non-rumor), λ is the attention penalty coefficient, γ is the weight decay coefficient, and θ represents all the model parameters.
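The combined objective can be sketched numerically as follows (illustrative NumPy; the λ and γ values are placeholders, not the tuned settings):

```python
import numpy as np

def total_loss(y_true, y_prob, attn, params, lam=1.5, gamma=1e-4):
    """Cross-entropy over time, plus the doubly stochastic attention
    penalty and L2 weight decay.

    y_true : (T, C) one-hot labels per time stamp
    y_prob : (T, C) predicted class probabilities
    attn   : (T, K) attention weights l_{t,i}
    params : list of parameter arrays theta
    """
    ce = -np.sum(y_true * np.log(y_prob + 1e-12))             # cross-entropy
    attn_pen = lam * np.sum((1.0 - attn.sum(axis=0)) ** 2)    # attend everywhere
    decay = gamma * sum(np.sum(p ** 2) for p in params)       # weight decay
    return ce + attn_pen + decay

T, C, K = 4, 2, 6
y_true = np.eye(C)[[0, 0, 1, 1]]
y_prob = np.full((T, C), 0.5)          # maximally uncertain predictions
attn = np.full((T, K), 1.0 / K)        # uniform attention
loss = total_loss(y_true, y_prob, attn, [np.ones((2, 2))])
```

The penalty term drives each column sum of the attention matrix toward 1, i.e., every word position should receive attention at some point over the series.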

The cell state and the hidden state of the LSTM are initialized using the input tf-idf matrices for faster convergence:

c_0 = f_init,c( (1/T) Σ_{t=1}^{T} (1/K) Σ_{i=1}^{K} X_{t,i} ),
h_0 = f_init,h( (1/T) Σ_{t=1}^{T} (1/K) Σ_{i=1}^{K} X_{t,i} ),

where f_init,c and f_init,h are two multi-layer perceptrons and T is the number of time stamps in the model. These values are used to compute the first location softmax l_1, which determines the initial input x_1.

5. Experiments

This section reports how we evaluate the performance of our proposed methodology using real-world data collected from two different social media platforms. We first describe the construction of the datasets, then perform a self-evaluation to determine optimal parameters. Finally, we assess the effectiveness and efficiency of our model, CallAtRumors, by comparing with state-of-the-art methods.

5.1. Datasets

Figure 4. Data collection and dataset structure. For each event, its authenticity is verified through official news verification services. Then we manually extract suitable keywords for each event to ensure a precise search result of relevant posts. After that, we crawl posts with query search and store our collected data using the data storage structure shown in the table. Rumor events are labelled as 1 and normal events are labelled as 0.

We use two public datasets published by (Ma et al., 2016), collected from Twitter and Sina Weibo respectively. Both datasets are organised at the event level, in which posts related to the same event are aggregated, and each event is labeled as 1 for rumor and 0 for non-rumor. In the following, we describe how the two datasets were originally constructed and how we expand them:

  • In the Twitter dataset, 498 rumors were collected using keywords extracted from verified fake news published on Snopes, a real-time rumor debunking website. It also contains 494 normal events from Snopes and two public datasets (Castillo et al., 2011; Kwon et al., 2013). For each event, the keywords were extracted and manually refined until the composed queries yielded precise Twitter search results (Ma et al., 2016). All labelled events and related tweet IDs were published by the authors; however, some tweets were no longer available when we crawled them, causing a 10% shrink in data scale compared with the original Twitter dataset.

  • The Weibo dataset contains 2,313 rumors and 2,351 non-rumors. The polarity of all events is verified on the Sina Community Management Center. The keywords are then manually summarized and modified for comprehensive post search during data collection using the Weibo API.

In addition, to balance the ratio of rumors and non-rumors, we follow the criteria from (Ma et al., 2016) to manually gather 4 non-rumors from Twitter and 38 rumors from Weibo, achieving a 1:1 ratio of rumors to non-rumors. The data collection procedure and our dataset structure are shown in Figure 4.

Statistic Twitter Weibo
Involved Users 466,577 2,755,491
Total Posts 1,046,886 3,814,329
Total Events 996 4,702
Total Rumors 498 2,351
Total Non-Rumors 498 2,351
Average Posts per Event 1,051 811
Minimum Posts per Event 8 10
Maximum Posts per Event 44,316 59,318
Table 1. Statistical details of datasets

Table 1 gives statistical details of the two datasets. We observe that more than 80% of users tend to repost the original news with very short comments to reflect their attitudes towards the news. As a consequence, the contents of the posts related to one event are mostly duplicated, causing scarcity of distinctive textual patterns within overlapping contexts. However, by implementing the textual attention mechanism, CallAtRumors is able to lay more emphasis on discriminative words and can maintain high performance in such cases.

5.2. Self Evaluations

Figure 5. Results w.r.t. varied numbers of LSTM layers. The best result is achieved with a three-layer LSTM model with 1,024, 512 and 64 hidden states, respectively.

The model is implemented using Theano. All parameters are set using cross-validation. To generate the input variable-length post series, we set the number of posts for each time step to 50 and the minimum post series length to 5. We selected the K = 10,000 top words for constructing the tf-idf matrices. Apart from lowercasing, we do not apply any other special preprocessing such as stemming (Bahdanau et al., 2014); the recurrent neural network with attention mechanism automatically learns to ignore unimportant or irrelevant expressions during training.

For a hold-out dataset occupying 15% of the events in each dataset, a self-evaluation is performed to optimize the number of LSTM layers by varying it from 2 to 6. Results are shown in Figure 5. Accordingly, we apply a three-layer LSTM model with descending numbers of hidden states of 1,024, 512 and 64, respectively. The learning rate is set to 0.45 and we apply a dropout (Srivastava et al., 2014) of 0.3 at all non-recurrent connections. The attention penalty coefficient λ is set to 1.5, and the weight decay coefficient γ is likewise chosen by cross-validation. Our model is trained by computing the derivative of the loss through back-propagation (Collobert et al., 2011), using the Adam optimization algorithm (Kingma and Ba, 2014). We iterate the whole training procedure until the loss value converges.

Figure 6. Visualization of varied attention on a detected rumor. Different color intensities reflect the attention paid to each word in a post. In the rumor “School Principal Eujin Jaela Kim banned the Pledge of Allegiance, Santa and Thanksgiving”, most of the vocabulary closely connected with the event itself is given less attention weight than words expressing users’ doubting, enquiring and anger caused by the rumor. Our model learns to focus on expressions more useful for rumor detection while ignoring unrelated words.

5.3. Settings and Baselines

We evaluate the effectiveness and efficiency of CallAtRumors by comparing with the following state-of-the-art approaches in terms of precision, recall and F-measure.

  • DT-Rank (Zhao et al., 2015)

    : This is a decision-tree based ranking model, and is able to identify trending rumors by recasting the problem as finding entire clusters of posts whose topic is a disputed factual claim. We implement their enquiry phrases and features to make it comparable to our method.

  • SVM-TS (Ma et al., 2015)

    : This is an SVM (support vector machine) model that uses time-series structures to capture the variation of social context features. SVM-TS captures the temporal characteristics of these features based on the time series of a rumor's life-cycle, with time series modelling applied to incorporate various social context information. We use the features they provide from contents, users and propagation patterns.

  • LK-RBF (Sampson et al., 2016): To tackle the problem of implicit data without explicit links and jointed conversations, Sampson et al.

    proposed two methods based on hashtags and web links to aggregate individual tweets with similar keywords from different threads into a conversation. We choose the link-based approach and combine it with the RBF (Radial Basis Function) kernel as a supervised classifier because it achieved the best performance in their experiments.

  • ML-GRU (Ma et al., 2016)

    : This method utilizes recurrent neural networks to automatically discover deep data representations for efficient rumor detection. It also allows for early rumor detection with efficiency. Following the settings in their work, we choose the multi-layer GRU (gated recurrent unit) as baseline which shows the best result in the effectiveness and earliness test.

  • CERT (Wu et al., 2016): This is a cross-topic emerging rumor detection model which can jointly cluster data, select features and train classifiers, using abundant labeled data from prior rumors to facilitate the detection of an emerging rumor. CERT is capable of extracting useful patterns in the case of data scarcity. Since CERT requires tweet instances instead of event-level data, we use the tf-idf feature vectors of all the tweets in one event to construct the feature matrix as required.

We hold out 15% of the events in each dataset for cross-validation and split the remaining data 3:2 for training and test respectively. In particular, we keep a 1:1 ratio between rumor events and normal events in both the training and test sets. In the test of the effectiveness of CallAtRumors, all posts within each event are used during training and evaluation. In the study of earliness, we take different ratios of posts, starting from the first post within each event and ranging from 10% to 80%, to test how early CallAtRumors can detect rumors successfully.
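The split above can be sketched as follows. The event IDs and labels are placeholder data and the seed is arbitrary; the paper does not publish its split code, so this only illustrates the stated proportions (15% held out, remainder split 3:2, classes kept 1:1):

```python
import random

def split_events(events, labels, holdout=0.15, train_frac=0.6, seed=42):
    """Hold out 15% of events for validation, then split the rest 3:2
    into train/test while keeping rumors and non-rumors balanced 1:1."""
    rng = random.Random(seed)
    rumors = [e for e, y in zip(events, labels) if y == 1]
    normals = [e for e, y in zip(events, labels) if y == 0]
    splits = {"val": [], "train": [], "test": []}
    for group in (rumors, normals):       # split each class separately
        rng.shuffle(group)
        n_val = int(len(group) * holdout)
        rest = group[n_val:]
        n_train = int(len(rest) * train_frac)
        splits["val"] += group[:n_val]
        splits["train"] += rest[:n_train]
        splits["test"] += rest[n_train:]
    return splits

events = list(range(200))
labels = [1] * 100 + [0] * 100   # balanced rumor / non-rumor events
s = split_events(events, labels)
print(len(s["val"]), len(s["train"]), len(s["test"]))  # 30 102 68
```

Splitting each class separately, as above, is what preserves the 1:1 rumor-to-normal ratio inside both the training and test sets.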

Figure 7. Results of early rumor detection: (a) precision on partial Twitter data; (b) recall on partial Twitter data; (c) precision on partial Weibo data; (d) recall on partial Weibo data.

5.4. Effectiveness Validation

Method Precision Recall F-measure
DT-Rank 71.50% 63.41% 0.6721
LK-RBF 78.54% 60.52% 0.6836
SVM-TS 76.33% 77.92% 0.7712
CERT 81.12% 79.66% 0.8038
ML-GRU 80.87% 82.97% 0.8191
CallAtRumors 88.63% 85.71% 0.8715
Table 2. Performance on the Twitter dataset
Method Precision Recall F-measure
DT-Rank 67.24% 61.33% 0.6415
LK-RBF 75.49% 61.08% 0.6752
SVM-TS 80.69% 78.26% 0.7946
CERT 79.70% 76.32% 0.7797
ML-GRU 82.44% 81.58% 0.8301
CallAtRumors 87.10% 86.34% 0.8672
Table 3. Performance on the Weibo dataset

Table 2 and Table 3 show the performance of all approaches on the Twitter and Weibo datasets respectively. DT-Rank cannot effectively distinguish rumors from normal events when facing datasets with duplicated content and scarce textual features. LK-RBF and SVM-TS achieve better results, indicating that feature engineering helps classifiers detect rumors. However, both LK-RBF and SVM-TS lack adequate recall, which reflects how sensitive the models are towards rumors. Since CERT can jointly select discriminative features and train a topic-independent classifier with the selected features (Wu et al., 2016), it achieves better results than the former three approaches on our datasets. ML-GRU is competitive in both precision and recall due to its capability of processing sequential data and learning hidden states from raw inputs.
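The F-measures in Tables 2 and 3 are the standard harmonic mean of precision and recall, which can be verified directly against the reported precision/recall pairs:

```python
def f_measure(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproducing two rows of Table 2 (Twitter dataset)
print(round(f_measure(0.7150, 0.6341), 4))  # DT-Rank: 0.6721
print(round(f_measure(0.8863, 0.8571), 4))  # CallAtRumors: 0.8715
```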

On the Twitter dataset, CallAtRumors outperforms its competitors, achieving precision, recall and F-measure of 88.63%, 85.71% and 0.8715 respectively. The same holds on the Weibo dataset, where CallAtRumors achieves precision, recall and F-measure of 87.10%, 86.34% and 0.8672 respectively. Figure 6 illustrates the intermediate attention results on different words within a detected rumor event. The effectiveness validation demonstrates the effect of the attention mechanism in making LSTM units sensitive to distinctive words and tokens by assigning more importance to certain locations in every feature matrix than to others.
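The soft-attention pooling just described can be sketched in plain Python. The dimensions and relevance scores below are toy values, and the paper's exact parameterization (how scores are computed from hidden states) may differ; the sketch only shows how softmax-normalized weights concentrate the pooled representation on distinctive words:

```python
import math

def soft_attention(features, scores):
    """Weight per-word feature vectors by softmax-normalized scores,
    so distinctive words contribute more to the pooled representation."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, features))
              for d in range(dim)]
    return weights, pooled

# Three word vectors; the second word has a much higher relevance score
feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, pooled = soft_attention(feats, [0.1, 2.0, 0.1])
```

Because the weights sum to one, the pooled vector stays on the same scale as the inputs while being dominated by the highest-scoring word.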

5.5. More Comparison with the State-of-the-art: CERT (Wu et al., 2016)

To demonstrate how the characteristics of datasets affect the performance of rumor detection, we compare CallAtRumors with CERT on different datasets. To reproduce the same experimental conditions as (Wu et al., 2016), we organized a sample dataset using the criteria described in that work. We used queries generated from 220 rumors reported on Snopes, together with regular expressions belonging to the same topics, to crawl 7,580 Tweets, and manually labeled each Tweet by reading its content and referring to the corresponding Snopes article. This yields a sample dataset containing 1,193 rumor Tweets and 6,387 non-rumor Tweets, with a similar ratio of rumors to non-rumors as the dataset in (Wu et al., 2016). Table 4 shows the results when CallAtRumors and CERT are applied to this sample dataset and to our Twitter dataset. The results further demonstrate our model's capability of capturing valuable patterns within our large-scale duplicated datasets by attending to more representative words.

Method Dataset Precision Recall F-measure
CallAtRumors Sample 91.82% 89.43% 0.9061
CERT Sample 90.35% 85.78% 0.8801
CallAtRumors Twitter 88.63% 85.71% 0.8715
CERT Twitter 81.12% 79.66% 0.8038
Table 4. More comparison with CERT (Wu et al., 2016) on the Sample and Twitter datasets

5.6. Earliness Analysis

In this experiment, we study the earliness of our approach. For a fair comparison, we allow existing rumor detection methods to be trained on the rumors used for evaluation. By incrementally adding training data in chronological order, we can estimate the time at which our method detects emerging rumors. The results on earliness are shown in Figure 7. At the early stage, with 10% to 60% of the training data, CallAtRumors outperforms the four comparative methods by a noticeable margin. In particular, compared with the most relevant method, ML-GRU, as the data proportion ranges from 10% to 20%, CallAtRumors outperforms ML-GRU by 5% on precision and 4% on recall on both the Twitter and Weibo datasets. This shows that the attention mechanism is more effective in early-stage detection by focusing on the most distinct features in advance. As more data is used for testing, all methods approach their best performance. On the Twitter and Weibo datasets, with on average 80% duplicated content in each event, our method starts at 74.02% and 71.73% in precision and 68.75% and 70.34% in recall respectively, which corresponds to an average time lag of 20.47 hours after the emergence of an event. This result is promising because the average report time over the rumors given by Snopes and the Sina Community Management Center is 54 hours and 72 hours respectively (Ma et al., 2016), so our deep attention based early rumor detection technique can save considerable manual effort.
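The earliness protocol, which truncates each event to its earliest k% of posts before evaluation, can be sketched as follows. The event structure and timestamps are hypothetical placeholders:

```python
def truncate_event(posts, ratio):
    """Keep only the earliest fraction of an event's posts,
    in chronological order, for the earliness test."""
    posts = sorted(posts, key=lambda p: p["time"])
    k = max(1, int(len(posts) * ratio))  # always keep at least one post
    return posts[:k]

# Hypothetical event with ten timestamped posts
event = [{"time": t, "text": f"post {t}"} for t in range(10)]
for ratio in (0.1, 0.2, 0.4, 0.8):
    partial = truncate_event(event, ratio)
    print(ratio, len(partial))  # 0.1→1, 0.2→2, 0.4→4, 0.8→8
```

Each truncated event is then fed to the detector in place of the full post sequence, so accuracy can be plotted against the fraction of posts seen, as in Figure 7.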

6. Conclusion

Rumor detection on social media is time-sensitive because it is hard to eliminate the vicious impact of a rumor late in its diffusion, as rumors can spread quickly and broadly. In this paper, we introduce CallAtRumors, a novel recurrent neural network model based on a soft attention mechanism that automatically carries out early rumor detection by learning latent representations from sequential social posts. We conducted experiments against five state-of-the-art rumor detection methods to show that CallAtRumors is sensitive to distinguishable words, thus outperforming the competitors even when textual features are sparse at the beginning stage of a rumor. In addition, we demonstrated the capability of our model to handle duplicated data in a further comparison. In future work, it would be appealing to investigate combining more complex features with our deep attention model. For example, we could model the propagation pattern of rumors as sequential inputs to the RNN to improve detection accuracy. Future work may also investigate the efficiency issue using hashing techniques (Wang et al., 2015a; Wu and Wang, 2017) over multi-level feature spaces (Wu et al., 2016; Wang et al., 2015b; Wu and Cao, 2010; Wang et al., 2014; Wu et al., 2013).


  • Ba et al. (2015) Jimmy Ba, Roger Grosse, Ruslan Salakhutdinov, and Brendan Frey. 2015. Learning wake-sleep recurrent attention models. In NIPS.
  • Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).
  • Bengio et al. (1994) Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5, 2 (1994), 157–166.
  • Castillo et al. (2011) Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th international conference on World wide web. ACM, 675–684.
  • Collobert et al. (2011) Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12, Aug (2011), 2493–2537.
  • Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.
  • Graves (2013) Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 (2013).
  • Greff et al. (2016) Klaus Greff, Rupesh K Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber. 2016. LSTM: A search space odyssey. IEEE transactions on neural networks and learning systems (2016).
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (1997), 1735–1780.
  • Hu et al. (2014) Xia Hu, Jiliang Tang, Huiji Gao, and Huan Liu. 2014. Social spammer detection with sentiment information. In Data Mining (ICDM), 2014 IEEE International Conference on. IEEE, 180–189.
  • Kingma and Ba (2014) Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
  • Kwon et al. (2013) Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. 2013. Prominent features of rumor propagation in online social media. In Data Mining (ICDM), 2013 IEEE 13th International Conference on. IEEE, 1103–1108.
  • LeCun et al. (2015) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444.
  • Leskovec et al. (2014) Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. 2014. Mining of massive datasets. Cambridge University Press.
  • Liu et al. (2013) Jiajun Liu, Zi Huang, Hong Cheng, Yueguo Chen, Heng Tao Shen, and Yanchun Zhang. 2013. Presenting diverse location views with real-time near-duplicate photo elimination. In ICDE.
  • Liu et al. (2015) Xiaomo Liu, Armineh Nourbakhsh, Quanzhi Li, Rui Fang, and Sameena Shah. 2015. Real-time rumor debunking on twitter. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, 1867–1870.
  • Ma et al. (2016) Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of IJCAI.
  • Ma et al. (2015) Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect rumors using time series of social context information on microblogging websites. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, 1751–1754.
  • Mnih et al. (2014) Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. 2014. Recurrent models of visual attention. In NIPS.
  • Rayana and Akoglu (2015) Shebuti Rayana and Leman Akoglu. 2015. Collective opinion spam detection: Bridging review networks and metadata. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 985–994.
  • Rayana and Akoglu (2016) Shebuti Rayana and Leman Akoglu. 2016. Collective opinion spam detection using active inference. In Proceedings of the 2016 SIAM International Conference on Data Mining. SIAM, 630–638.
  • Rocktäschel et al. (2015) Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiskỳ, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664 (2015).
  • Rumelhart et al. (1988) David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back-propagating errors. Cognitive modeling 5, 3 (1988), 1.
  • Sampson et al. (2016) Justin Sampson, Fred Morstatter, Liang Wu, and Huan Liu. 2016. Leveraging the Implicit Structure within Social Media for Emergent Rumor Detection. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. ACM, 2377–2382.
  • Sharma et al. (2015) Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. 2015. Action recognition using visual attention. arXiv preprint arXiv:1511.04119 (2015).
  • Song et al. (2011) Jingkuan Song, Yi Yang, Zi Huang, Heng Tao Shen, and Richang Hong. 2011. Multiple feature hashing for real-time large scale near-duplicate video retrieval. In ACM Multimedia.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 1 (2014), 1929–1958.
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. 3104–3112.
  • Vinyals et al. (2015) Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems. 2773–2781.
  • Wang et al. (2015) Yang Wang, Xuemin Lin, Lin Wu, and Wenjie Zhang. 2015. Effective Multi-Query Expansions: Robust Landmark Retrieval. In ACM Multimedia.
  • Wang et al. (2017) Yang Wang, Xuemin Lin, Lin Wu, and Wenjie Zhang. 2017. Effective Multi-Query Expansions: Collaborative Deep Networks for Robust Landmark Retrieval. IEEE Trans. Image Processing 26, 3 (2017), 1393–1404.
  • Wang et al. (2015a) Yang Wang, Xuemin Lin, Lin Wu, Wenjie Zhang, and Qing Zhang. 2015a. LBMCH: Learning Bridging Mapping for Cross-modal Hashing. In ACM SIGIR.
  • Wang et al. (2015b) Yang Wang, Xuemin Lin, Lin Wu, Wenjie Zhang, Qing Zhang, and Xiaodi Huang. 2015b. Robust Subspace Clustering for Multi-View Data by Exploiting Correlation Consensus. IEEE Trans. Image Processing 24, 11 (2015), 3939–3949.
  • Wang et al. (2013) Yang Wang, Xuemin Lin, and Qing Zhang. 2013. Towards metric fusion on multi-view data: a cross-view based graph random walk approach. In ACM CIKM.
  • Wang et al. (2014) Yang Wang, Xuemin Lin, Qing Zhang, and Lin Wu. 2014. Shifting Hypergraphs by Probabilistic Voting. In PAKDD.
  • Wang et al. (2017) Yang Wang, Wenjie Zhang, Lin Wu, Xuemin Lin, and Xiang Zhao. 2017. Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion. IEEE Trans. Neural Netw. Learning Syst 28, 1 (2017), 57–70.
  • Wu et al. (2015) Ke Wu, Song Yang, and Kenny Q Zhu. 2015. False rumors detection on sina weibo by propagation structures. In Data Engineering (ICDE), 2015 IEEE 31st International Conference on. IEEE, 651–662.
  • Wu and Cao (2010) Lin Wu and Xiaochun Cao. 2010. Geo-location estimation from two shadow trajectories. In CVPR.
  • Wu et al. (2016) Liang Wu, Jundong Li, Xia Hu, and Huan Liu. 2016. Gleaning Wisdom from the Past: Early Detection of Emerging Rumors in Social Media. In SDM.
  • Wu et al. (2017) Lin Wu, Chunhua Shen, and Anton van den Hengel. 2017. Deep linear discriminant analysis on fisher networks: A hybrid architecture for person re-identification. Pattern Recognition 65 (2017), 238–250.
  • Wu and Wang (2017) Lin Wu and Yang Wang. 2017. Robust hashing for multi-view data: Jointly learning low-rank kernelized similarity consensus and hash functions. Image Vision Comput 57 (2017), 58–66.
  • Wu et al. (2016) Lin Wu, Yang Wang, and Shirui Pan. 2016. Exploiting Attribute Correlations: A Novel Trace Lasso based Weakly Supervised Dictionary Learning Method. IEEE Trans. Cybernetics (2016).
  • Wu et al. (2013) Lin Wu, Yang Wang, and John Shepherd. 2013. Efficient image and tag co-ranking: a bregman divergence optimization method. In ACM Multimedia.
  • Xiang et al. (2017) Yang Xiang, Qingcai Chen, Xiaolong Wang, and Yang Qin. 2017. Answer Selection in Community Question Answering via Attentive Neural Networks. IEEE Signal Processing Letters 24, 4 (2017), 505–509.
  • Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML.
  • Yang et al. (2016) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL-HLT. 1480–1489.
  • Zafarani and Liu (2015) Reza Zafarani and Huan Liu. 2015. 10 bits of surprise: Detecting malicious users with minimum information. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, 423–431.
  • Zaremba et al. (2014) Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 (2014).
  • Zhao et al. (2015) Zhe Zhao, Paul Resnick, and Qiaozhu Mei. 2015. Enquiring minds: Early detection of rumors in social media from enquiry posts. In Proceedings of the 24th International Conference on World Wide Web. ACM, 1395–1405.
  • Zhou et al. (2017) Zhili Zhou, Yunlong Wang, Q. M. Jonathan Wu, Ching-Nung Yang, and Xingming Sun. 2017. Effective and Efficient Global Context Verification for Image Copy Detection. IEEE Transactions on Information Forensics and Security 12, 1 (2017), 48–63.
  • Zimbra et al. (2016) David Zimbra, M Ghiassi, and Sean Lee. 2016. Brand-Related Twitter Sentiment Analysis Using Feature Engineering and the Dynamic Architecture for Artificial Neural Networks. In System Sciences (HICSS), 2016 49th Hawaii International Conference on. IEEE, 1930–1938.