How to evaluate sentiment classifiers for Twitter time-ordered data?

03/14/2018
by Igor Mozetic, et al.

Social media are becoming an increasingly important source of information about the public mood on issues such as elections, Brexit, and the stock market. In this paper we focus on sentiment classification of Twitter data. Constructing sentiment classifiers is a standard text mining task, but here we address the question of how to properly evaluate them, as there is no settled way to do so. Sentiment classes are ordered and unbalanced, and Twitter produces a stream of time-ordered data. The problem we address concerns the procedures used to obtain reliable estimates of performance measures, and whether the temporal ordering of the training and test data matters. We collected a large set of 1.5 million tweets in 13 European languages. We created 138 sentiment models and out-of-sample datasets, which are used as a gold standard for evaluation. The corresponding 138 in-sample datasets are used to empirically compare six estimation procedures: three variants of cross-validation and three variants of sequential validation (where the test set always follows the training set). We find no significant difference between the best cross-validation and the best sequential validation. However, we observe that all cross-validation variants tend to overestimate performance, while the sequential methods tend to underestimate it. Standard cross-validation with random selection of examples is significantly worse than blocked cross-validation and should not be used to evaluate classifiers on time-ordered data.
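To make the compared estimation procedures concrete, the sketch below contrasts the three families of splits on a toy time-ordered dataset. It is a minimal illustration, not the authors' exact protocol: it assumes scikit-learn, uses synthetic data in place of the annotated tweets, and uses KFold with and without shuffling and TimeSeriesSplit as stand-ins for standard (random) cross-validation, blocked cross-validation, and sequential validation, respectively.

import numpy as np
from sklearn.model_selection import KFold, TimeSeriesSplit

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 20))      # toy features; the row index plays the role of time order
y = rng.integers(0, 3, size=n)    # ordinal sentiment labels: 0=negative, 1=neutral, 2=positive

splitters = {
    # Standard CV: examples are shuffled, so training folds mix tweets from
    # before and after the test tweets.
    "random CV":  KFold(n_splits=10, shuffle=True, random_state=0),
    # Blocked CV: folds are contiguous blocks of the stream; training data may
    # still come from after the test block, but examples are not shuffled
    # across time.
    "blocked CV": KFold(n_splits=10, shuffle=False),
    # Sequential validation: the test set always follows the training set.
    "sequential": TimeSeriesSplit(n_splits=10),
}

for name, splitter in splitters.items():
    train_idx, test_idx = next(iter(splitter.split(X, y)))
    # Fraction of training examples that come after the earliest test example;
    # a non-zero value means the model is partly trained on "future" data.
    leak = float((train_idx > test_idx.min()).mean())
    print(f"{name:11s} train={len(train_idx):4d} test={len(test_idx):4d} "
          f"future-train fraction={leak:.2f}")

With TimeSeriesSplit the future-train fraction is always zero, which is the defining property of sequential validation; both KFold variants train on examples that follow some of the test examples, differing only in whether folds are contiguous blocks or randomly shuffled samples.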


