Sentiment Uncertainty and Spam in Twitter Streams and Its Implications for General Purpose Realtime Sentiment Analysis

by Nils Haldenwang et al.
Universität Osnabrück

State-of-the-art benchmarks for Twitter Sentiment Analysis do not consider the fact that a distinct sentiment cannot be chosen for more than half of the tweets from the public stream. This paper provides a new perspective on Twitter Sentiment Analysis by highlighting the necessity of explicitly incorporating uncertainty. Moreover, a high-quality dataset for evaluating solutions to this new problem is introduced and made publicly available.






1 Introduction

As a field of research, Twitter Sentiment Analysis has gained much attention recently. It has been shown to generate practical value for a multitude of applications such as sales prediction [Asur and Huberman2010], stock market prediction [Bollen et al.2011] and political debate analysis [Diakopoulos and Shamma2010]. Twitter Sentiment Analysis denotes the task of assigning a given tweet a sentiment label of either positive or negative and is an integral part of many practical applications. Few methods consider neutral as a third class. However, defining a neutral class is a hard task. Pak and Paroubek [Pak and Paroubek2010], for example, label tweets of popular news sites as neutral. This assumption does not always hold: the headline “Multiple children were killed in the attack.” would be labeled as negative by most human labelers. Thus, we propose an alternative approach to this problem. Its basic idea is the explicit incorporation of sentiment uncertainty.

2 The State of the Art and Its Shortcomings

SemEval-2014 Task 9 [Rosenthal et al.2014] provides a widely used state-of-the-art benchmark for Twitter Sentiment Analysis and compares the performance of many current approaches. From a dataset collected between January 2012 and January 2013, popular topics were extracted by identifying frequently mentioned named entities. Only tweets scoring above a certain polarity threshold, determined by a sentiment lexicon, were considered, to ensure the inclusion of a sentiment. The dataset contains the labels positive, negative and neutral, determined by a majority vote of five labelers who were told to vote for the sentiment they perceived as strongest when in doubt. As a consequence, tweets which do not carry a distinct sentiment are nevertheless assigned to the classes positive and negative. Methods performing well on this dataset are shown to be able to distinguish between positive and negative sentiment under the assumption that every tweet can be assigned one of these labels. Moreover, all test tweets include popular named entities of the time; as the authors themselves noted, the dataset is biased. The majority vote, along with this treatment of ambiguity, adds noise to the dataset. While the dataset is of high quality for its intended purpose, it does not represent the general composition of the public Twitter stream. Hence, the related research addresses only part of the problems arising in practical analysis of the live stream.

3 A General Purpose Dataset

When analysing the Twitter stream we are interested in the “Electronic Word of Mouth” [Jansen et al.2009], i.e. the personal opinions of private Twitter users. While labeling tweets, we noticed that a relatively high percentage of tweets are spam, advertising or marketing messages which we are not interested in. Those tweets shall be labeled spam. Moreover, it became obvious that only a small fraction of the remaining tweets can be distinctly labeled as positive or negative. The rest may still carry polarity and often cannot be labeled neutral, while being neither clearly positive nor negative. Hence, we propose the new category uncertain. Tweets labeled as neutral can be assigned to the class uncertain as well, since they provide no additional information for sentiment analysis and can be treated in the same way as tweets of uncertain sentiment. This approach reduces the noise in the sentiment-bearing classes, which is a desirable feature if political or business decisions are to be supported by the analysis results.

To acquire a representative view on the label composition of the public Twitter stream, we randomly sampled our dataset from a collection of about 43 million tweets with creation dates ranging from June 2012 to August 2013, to minimize topical bias. Each tweet was labeled by two human labelers who had to assign it one of the labels positive, negative, uncertain or spam. In total, 14,506 tweets were labeled by 27 labelers. The labelers consisted of master’s students from the University of Osnabrück, Germany, and researchers from our group.

The distribution of labels is shown in Figure 1. There is a total of 9,356 tweets (64.5% of all labeled tweets) to which both human labelers assigned the same label. Of these tweets, 15% are spam and 55% are labeled uncertain. A definite sentiment label could only be assigned to 30% of the tweets, with 13% being positive and 17% being negative.

Figure 1: Distribution of labels for tweets which both labelers agreed upon.

These results provide evidence for our claim that one has to deal with uncertainty in sentiment analysis when working with the public Twitter stream.
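The filtering step described above — keeping only the tweets on which both labelers agree and tallying their labels — can be sketched as follows. The label pairs in the example are hypothetical, not the paper's data:

```python
from collections import Counter

def agreed_distribution(annotations):
    """Keep only tweets whose two labelers agreed and count the labels.

    annotations: list of (label_a, label_b) pairs, one pair per tweet.
    Returns the number of agreed tweets and a Counter of their labels.
    """
    agreed = [a for a, b in annotations if a == b]
    return len(agreed), Counter(agreed)

# toy example with hypothetical labels
pairs = [
    ("positive", "positive"),
    ("uncertain", "uncertain"),
    ("spam", "uncertain"),   # disagreement: dropped
]
n_agreed, dist = agreed_distribution(pairs)
# n_agreed == 2; dist == Counter({'positive': 1, 'uncertain': 1})
```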

To assess the inter-annotator agreement we computed Fleiss’ Kappa [Fleiss1971], resulting in a value that can be interpreted as moderate agreement [Landis and Koch1977]. At first sight this value may seem rather low, but the disagreement matrix shown in Table 1 further strengthens the claim that uncertainty must be dealt with.
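Fleiss’ Kappa can be computed directly from a subjects × categories matrix of rating counts; a minimal sketch (not the authors’ implementation):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories matrix of rating counts.

    counts[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n (here n = 2).
    """
    N = len(counts)                       # number of subjects (tweets)
    n = sum(counts[0])                    # raters per subject
    # mean observed agreement over subjects
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # chance agreement from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(len(counts[0]))]
    P_e = sum(x * x for x in p)
    return (P_bar - P_e) / (1 - P_e)

# two raters, two tweets, perfect agreement on different categories -> kappa = 1
print(fleiss_kappa([[2, 0], [0, 2]]))  # 1.0
```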

           positive   negative   uncert.   spam
positive       1176        106      1666    143
negative                  1620      2263     58
uncert.                             5138    914
spam                                       1422

Table 1: Disagreement matrix showing the absolute number of label combinations.
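A disagreement matrix like Table 1 can be tallied directly from the raw annotation pairs; a minimal sketch with hypothetical data:

```python
def disagreement_matrix(annotations, labels):
    """Count unordered label combinations assigned by the two labelers.

    Returns a dict mapping (label_i, label_j), ordered by position in
    `labels`, to the number of tweets with that combination; the
    diagonal entries (a, a) are the agreed tweets.
    """
    matrix = {}
    for a, b in annotations:
        key = tuple(sorted((a, b), key=labels.index))
        matrix[key] = matrix.get(key, 0) + 1
    return matrix

LABELS = ["positive", "negative", "uncertain", "spam"]
pairs = [("positive", "uncertain"), ("uncertain", "positive"), ("spam", "spam")]
m = disagreement_matrix(pairs, LABELS)
# m == {("positive", "uncertain"): 2, ("spam", "spam"): 1}
```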

Labelers seem to have a very good understanding of what distinguishes the classes positive and negative: only 106 tweets were assigned both of these labels. The disagreement for positive/spam and negative/spam is of similar or even smaller magnitude. Looking at these tweets, we noticed that the disagreement mainly stems from misunderstandings of the labeling instructions or, presumably, from accidentally clicking the wrong label. Hence, these tweets should be omitted from the test set when evaluating methods for reliable Twitter Sentiment Analysis.

However, the disagreement between positive/negative on the one hand and uncertain on the other is relatively large: these combinations make up about 76% of the tweets to which the two labelers assigned different labels. This indicates that in many cases not even two humans can agree on whether a tweet contains a distinct sentiment or should be labeled uncertain. Systems aiming to perform reliable sentiment analysis of the public Twitter stream should be able to deal with such tweets. While not strictly belonging to the category uncertain, they should still be labeled as such, or at least not be considered for sentiment analysis. Another possible approach is to interpret them as rather positive or rather negative, depending on the degree of reliability the respective application requires.

Moderate disagreement (914 tweets) can be noted for the classes uncertain and spam. Since these tweets may still contain useful information in the sense of answering the question “What do people talk about?” they probably should not be considered spam. However, they also should not be assigned a sentiment. A system labeling these as uncertain will still produce reliable results with regard to sentiment analysis.

As a first approach, one can use just the tweets with two identical labels to assess methods for reliable sentiment analysis of the public Twitter stream. However, it should be kept in mind that in practice the tweets upon which the labelers disagreed also appear in the stream and have to be handled to provide reliable sentiment results. To enable researchers to develop systems which meet all the aforementioned requirements, the complete dataset, including the tweets disagreed upon, is publicly available.

4 Conclusion and Outlook

When performing analysis on the public live stream of Twitter with regard to sentiment, it needs to be considered that more than half of the tweets cannot be assigned a distinct sentiment. These tweets have to be filtered or explicitly dealt with before sentiment analysis takes place. Moreover, one has to deal with spam tweets. Spam adds unwanted noise by polluting topics with artificially injected tweets. Most of the work on spam detection on Twitter focuses on catching the users generating the spam by looking at an account’s behaviour over time [Grier et al.2010, Lin and Huang2013]. When performing realtime analysis, a given tweet has to be classified as spam or not based on its content and metadata only, as there is no time to examine the author’s account in detail. New methods have to be developed which are able to deal with sentiment uncertainty and spam if reliable representations of the public opinion are to be acquired from the Twitter stream. The dataset presented in this paper can be used to develop and evaluate methods for reliable Twitter Sentiment Analysis.
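To illustrate the constraint that realtime spam filtering can only consult a tweet's own content and metadata, here is a deliberately simple rule-based sketch. The feature names and thresholds are made up for illustration; a real system would use a trained classifier:

```python
def looks_like_spam(text, metadata):
    """Toy content/metadata heuristic for realtime spam filtering.

    Only the tweet itself is inspected; the author's account history is
    deliberately ignored, since a realtime system has no time to fetch it.
    Thresholds are illustrative, not tuned.
    """
    many_links = text.count("http") >= 2
    many_hashtags = text.count("#") >= 5
    # treat missing metadata as an old account rather than flagging it
    brand_new_account = metadata.get("account_age_days", 9999) < 1
    return many_links or many_hashtags or brand_new_account

print(looks_like_spam("Buy now http://a http://b", {"account_age_days": 300}))  # True
```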


  • [Asur and Huberman2010] Sitaram Asur and Bernardo A Huberman. 2010. Predicting the future with social media. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), volume 1, pages 492–499. IEEE.
  • [Bollen et al.2011] Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1–8.
  • [Diakopoulos and Shamma2010] Nicholas A Diakopoulos and David A Shamma. 2010. Characterizing debate performance via aggregated twitter sentiment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1195–1198. ACM.
  • [Fleiss1971] Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.
  • [Grier et al.2010] Chris Grier, Kurt Thomas, Vern Paxson, and Michael Zhang. 2010. @spam: the underground on 140 characters or less. In Proceedings of the 17th ACM Conference on Computer and Communications Security, pages 27–37. ACM.
  • [Jansen et al.2009] Bernard J Jansen, Mimi Zhang, Kate Sobel, and Abdur Chowdury. 2009. Twitter power: Tweets as electronic word of mouth. Journal of the American society for information science and technology, 60(11):2169–2188.
  • [Landis and Koch1977] J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, pages 159–174.
  • [Lin and Huang2013] Po-Ching Lin and Po-Min Huang. 2013. A study of effective features for detecting long-surviving twitter spam accounts. In Advanced Communication Technology (ICACT), 2013 15th International Conference on, pages 841–846. IEEE.
  • [Pak and Paroubek2010] Alexander Pak and Patrick Paroubek. 2010. Twitter as a corpus for sentiment analysis and opinion mining. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC), volume 10, pages 1320–1326.
  • [Rosenthal et al.2014] Sara Rosenthal, Alan Ritter, Preslav Nakov, and Veselin Stoyanov. 2014. Semeval-2014 task 9: Sentiment analysis in twitter. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 73–80, Dublin, Ireland, August. Association for Computational Linguistics and Dublin City University.