How Does Tweet Difficulty Affect Labeling Performance of Annotators?

08/01/2018 · by Stefan Räbiger, et al. · Sabancı University

Crowdsourcing is a popular means to obtain labeled data at moderate cost, for example for tweets, which can then be used in text mining tasks. To alleviate the problem of low-quality labels in this context, multiple human factors have been analyzed to identify and deal with workers who provide such labels. However, one aspect that has rarely been considered is the inherent difficulty of the tweets to be labeled and how it affects the reliability of the labels that annotators assign to such tweets. Therefore, in this preliminary study we investigate this connection using a hierarchical sentiment labeling task on Twitter. We find that there is indeed a relationship between both factors, assuming that annotators have labeled some tweets before: labels assigned to easy tweets are more reliable than those assigned to difficult tweets. Therefore, training predictors on easy tweets enhances their performance by up to 6%.


1 Introduction

Studies in crowdsourcing have found that labels assigned by workers (we use "worker" and "annotator" interchangeably in this work; the former term is more suitable for a crowdsourcing environment, while the latter is preferred in more general contexts) to documents become more reliable towards the end of a worker's labeling session [6, 13, 17]. Similarly, the time needed to assign labels to documents drops rapidly in a worker's early phase until it converges to a roughly constant level in the late phase. Since annotation times are typically associated with labeling costs, shorter annotation times are preferred. Thus, when experimenters want to recruit workers on a crowdsourcing platform who are likely to assign high-quality labels, suitable workers should (a) have completed similar tasks before and (b) have reached the state where labeling costs are approximately constant, keeping the time needed for task completion short.

In practice, however, we suspect that this strategy could be affected by the inherent difficulty of the documents to be labeled since some documents are more difficult to label than others. Therefore, we expect that labels assigned to difficult documents will be less reliable. Using these difficult documents for training could affect the performance of the resulting predictors adversely. In contrast, if the reliability of the labels in the training set is high, resulting predictors could improve their performance. Thus, we assume that label reliability can be inferred from measuring the performance of predictors: given the performances of two predictors, we assume that the one achieving better performance was trained on documents with more reliable labels.

If this idea holds, we envision building a difficulty predictor that estimates the difficulty level of documents in a preprocessing step to separate difficult from easy documents. For example, in crowdsourcing such a predictor could be trained on a small seed set and then estimate the difficulty of the remaining documents. Only easy documents would then be retained. This could help avoid wasting human labor and budget on difficult documents which should not be annotated at all. Similarly, such a difficulty predictor would also be helpful as a preprocessing step in active (machine) learning [20]. Whenever a label for a document is requested in active learning, it is expected that the human oracle (annotator) provides a reliable label. If a document to be labeled is easy, the label assigned by the annotator is likely to be reliable, but if the document is difficult, there might be no suitable label available at all. Hence, applying the difficulty predictor in advance would allow invoking active learning strategies only for easy documents.

The concept of an "early" and a "late" annotation phase is inspired by the observation that annotators need some time to learn how to annotate [25, 21, 17]. This time, translated here into the number of documents one sees, depends on the annotator. We roughly split the annotation process into an early phase encompassing an annotator's first 25 documents and a late phase comprising the next 25 documents. (In our experiments, some annotators labeled more than 50 documents, but we ignore the additional documents to avoid the effects of fatigue.) We define "document difficulty" informally as the set of factors that determine to what extent workers are hesitant in choosing among the available labels for a document. These factors may be features of the document, e.g. the words it contains, but may also lie in the eye of the beholder, e.g. be affected by the workers' perception of and attitude towards the subject matter. Since we cannot fix the factors making a document difficult as solely inherent to the document, we rather rely on three difficulty indicators: labeling cost, worker disagreement [19], and predictor certainty [1]. We then propose predictors of annotator performance and study how annotation phase and tweet difficulty influence the reliability of a document's label.

Since modeling the difficulty of tweets has rarely been the subject of investigation, we use the dataset from [17]. Another advantage of this dataset is that sentiment analysis is known to be subjective and therefore sufficiently difficult. This difficulty is also perceived by crowd workers [7], which allows us to study the interplay between tweet difficulty and label reliability in annotators' early and late annotation phases. To the best of our knowledge, this problem has not been analyzed before. Specifically, we address the following research questions in this report:

  • RQ1. How does document difficulty in the training set affect the performance of resulting predictors in the early phase and in the late phase?

  • RQ2. Are these effects from RQ1 meaningful?

Our analysis should be regarded as a preliminary study because the dataset is relatively small. However, if there is a connection between label reliability and document difficulty, in the next step real crowdsourcing experiments can be performed. This is a common approach in crowdsourcing, e.g. [23, 2, 18], for multiple reasons. For one, budget may be saved if proposed methods turn out not to work. Another reason is that one might want to run an experiment first in a controlled environment, as done in [17], to avoid external influence factors which cannot be ensured in crowdsourcing.

2 Related Work

The most relevant literature for our work addresses how document difficulty, and in particular tweet difficulty, is modeled in crowdsourcing and similar environments.

Martinez et al. utilize a predictor's certainty to approximate the difficulty of a document [14]. The underlying assumption is that a predictor is less certain about predicting labels for difficult documents. We employ the same idea in this work to derive tweet difficulty heuristically. Label difficulty has also been acknowledged and researched in the context of active learning [5] and crowdsourcing [8]. However, Gan et al. [8] focus on modeling the difficulty of labeling tasks in crowdsourcing instead of single documents. Paukkeri et al. [15] propose a method to estimate a document's subjective difficulty for each user separately by comparing a document's terms with the known vocabulary of an individual. Sameki et al. model tweet difficulty in the context of crowdsourcing [19], devising a system that minimizes the labeling costs of micro-tasks by allocating more budget to difficult tweets and less to easy ones. The authors argue that more sentiment makes a tweet more difficult to understand. Hence, they formulate the problem of estimating tweet difficulty as a task of distinguishing sarcastic from non-sarcastic tweets. One of the factors that they utilize is annotator disagreement: if more individuals agree on a label, the tweet is considered easier. An approach that is related to this idea in spirit exists for estimating the difficulty of queries [4]: topic difficulty is approximated by analyzing the performances of existing systems, where lower performance indicates more difficult topics. In our work, we also harness annotator disagreement to approximate tweet difficulty, associating lower annotator disagreement with easier tweets. While our work bears similarities with [19], the objectives differ: we are explicitly interested in analyzing how tweet difficulty affects the reliability of the labels that annotators assign, while Sameki et al. employ tweet difficulty as a feature to predict the number of annotators who should label a tweet. Furthermore, we combine worker disagreement with two more factors to model tweet difficulty. Another related approach is described in [22], where the authors propose a probabilistic method that takes image difficulty and crowd worker expertise into account to derive a ground truth; the authors show that this idea is more accurate than majority voting. However, they do not consider that workers learn during a labeling task. In addition, we focus on analyzing how the performance of predictors is affected by tweet difficulty.

Although we are investigating tweet difficulty in crowdsourcing, we do not analyze any online crowdsourcing activity [3, 24] on tweets, because we first need to understand how annotators behave in a fully controlled experiment before introducing the uncertainty associated with worker diversity, background knowledge, and engagement or disinterest. Similarly, in this work we do not discuss human factors such as how worker expertise affects label reliability [10], because we performed our experiment in a controlled environment with volunteers whom we consider trustworthy. Likewise, the annotators share a similar background in that they are computer science students.

Although tweets are text documents, we do not use any of the proposed methods for modeling text difficulty, e.g. [9]. This is because tweets are too short to extract meaningful grammatical features, and sometimes they do not even contain any well-formed sentences at all. Therefore, we model tweet difficulty using the abovementioned heuristics from the crowdsourcing context, which correlate intuitively with tweet difficulty.

3 Our Approach

We first describe briefly the dataset we use for performing our experiment. This is followed by addressing the different steps involved in designing our experiment for the analysis of the research questions.

3.1 Description of Dataset

In our analysis we use the dataset from [17], which contains 500 tweets labeled hierarchically in terms of sentiment in two geographically different regions, Magdeburg (MD) and Sabancı (SU). Conducting our experiment for both regions separately reduces the chance that our results are coincidental or biased by location-specific factors. The collected tweets address the first US presidential debate between Hillary Clinton and Donald Trump in 2016. One sample tweet is shown below:

Did trump just say there needs to be
law and order immediately after
saying that he feels justified not
paying his workers??  #Debates

The hierarchical labeling scheme for this dataset is depicted in Fig. 1 and comprises three levels. On the first level, a tweet is either Relevant or Irrelevant with respect to the topic, and on the second level either Factual (= neutral) or Non-factual. If a tweet is considered Non-factual, it is either Positive or Negative on the third level. For Irrelevant tweets, any additional labels (e.g. Factual) and their corresponding metadata (annotation times) are ignored, as we are only interested in the sentiment of Relevant tweets. Note that this labeling scheme ensures that annotators assign no sentiment to neutral tweets.

Figure 1: Hierarchical labeling scheme. Labels with dashed lines were removed from the dataset. Each hierarchy level corresponds to a label set: the first set is Relevant/Irrelevant, the second one is Factual/Non-factual, and the third one is Positive/Negative.

A similar number of students participated in the annotation process in each region: 19 in MD and 25 in SU, as depicted in Table 1.

Group    S    M    L    Total
MD      10    8    1       19
SU      13    9    3       25
Table 1: Annotator distribution and total number of labeled tweets per group. S is 50 tweets, M is 150 tweets, and L is 500 tweets.

3.2 Measuring Tweet Similarity

Since we employ a kNN predictor in our experiment (we opted for kNN because it considers neighborhoods, and we believe that the type of difficulty we investigate is a local phenomenon, "Are similar tweets difficult or easy to label?", as opposed to SVMs and similar predictors, which learn globally optimal models, "Is the tweet easy or difficult to label?"), we must compute the similarity between any two tweets t_i and t_j. Since tweets have different lengths, we normalize this similarity by the longer tweet to avoid any influence of the text length on the similarity. This normalized similarity, which we refer to as sim(t_i, t_j), therefore yields values between zero (tweet texts are disjoint) and one (identical tweets) and is computed as:

sim(t_i, t_j) = o(t_i, t_j) / max(|W_i|, |W_j|)    (1)

where W_i and W_j represent the words in tweets t_i and t_j, and o(t_i, t_j) computes the number of shared words between t_i and t_j according to a similarity metric. In this preliminary study, we utilize as o the same three metrics that were used in [17], namely longest common subsequence, longest common substring, and edit distance.

These three metrics are typically defined on character level, i.e. they compute the similarity between two single words by comparing them character by character. Since we deal with tweets containing multiple words, we apply the metrics on word level. For example, the edit distance between two strings usually counts how many characters in one string need to be changed to transform it into the other. Instead, we count how many words in t_i must be replaced such that it results in t_j. Longest common subsequence usually counts how many characters of both words appear in the same relative, but not necessarily contiguous, order. Extended to tweets, we count the words of t_i and t_j that appear in the same relative, but not necessarily contiguous, order. Similarly, longest common substring counts how many contiguous characters both words share, so in our case we count the number of words that t_i and t_j share contiguously.

For sim(t_i, t_j) to yield values between zero and one, the term o(t_i, t_j) needs to be inverted when using edit distance, because large values indicate that t_i and t_j are different as opposed to similar. Thus, when using edit distance, we use max(|W_i|, |W_j|) - o(t_i, t_j) instead of o(t_i, t_j) in the numerator of Equation 1.
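As an illustration, the word-level metrics and the normalization described above can be sketched in Python as follows (a minimal sketch; the function names and the exact form of Equation 1, an overlap count normalized by the length of the longer tweet, are our reconstruction):

```python
def lcs_subsequence(a, b):
    """Length of the longest common subsequence of two word lists."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def edit_distance(a, b):
    """Word-level edit distance (insert/delete/replace whole words)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(m):
        for j in range(n):
            cost = 0 if a[i] == b[j] else 1
            dp[i + 1][j + 1] = min(dp[i][j + 1] + 1,
                                   dp[i + 1][j] + 1,
                                   dp[i][j] + cost)
    return dp[m][n]

def sim(ti, tj, metric=lcs_subsequence):
    """Normalized word-level similarity between two tweets (Eq. 1)."""
    wi, wj = ti.lower().split(), tj.lower().split()
    longest = max(len(wi), len(wj))
    if metric is edit_distance:
        # invert: a large distance means dissimilar tweets
        overlap = longest - metric(wi, wj)
    else:
        overlap = metric(wi, wj)
    return overlap / longest
```

For example, sim("a b c d", "a x c") yields 0.5 under the longest common subsequence, since the tweets share two of at most four words.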

3.3 Modeling Annotation Difficulty

Since no ground truth for tweet difficulty is available, we approximate the difficulty of a tweet t_i by computing its difficulty score d_i. It combines three heuristics, namely worker agreement (a_i) [19], predictor certainty (c_i) [14], and labeling cost (l_i):

d_i = (a_i + c_i + l_i) / 3    (2)

where d_i ∈ [0, 1]. We define higher difficulty scores in this equation to correspond to easier tweets.

The labeling agreement a_i measures the extent to which annotators agree on a label, where higher values indicate easier tweets. To compute a_i for t_i, we devise a scoring function yielding values between 0 (no agreement) and 1 (perfect agreement). Furthermore, the worker agreement of each hierarchy level must contribute to a_i. Specifically, we use majority voting to assign a label to each hierarchy level. A level should contribute more to a_i if more workers agreed on its label. Since lower hierarchy levels might have been labeled by fewer workers than the first level (namely if workers deemed a tweet Irrelevant or Factual), higher levels tend to contribute more to a_i. This reasoning is reflected in the following equation:

a_i = Σ_{h ∈ H} (m_h / n_h) · (m_h / M)    (3)

where m_h is the number of annotators who assigned the majority label on hierarchy level h, n_h is the number of annotators who labeled t_i on level h, M is the total number of majority-label assignments across all hierarchy levels, and H is the set of hierarchy levels in the labeling scheme, in our case H = {1, 2, 3}. The first term in the product describes the fraction of annotators who agreed on the majority label on level h, while the second term accounts for the contribution of level h to the overall agreement. Whenever there is a tie on a level regarding the majority label, M is incremented by one. This lowers the contribution of levels without ties to the overall labeling agreement, which generally leads to lower agreement scores for tweets with ties.

The following two examples illustrate how Equation 3 approximates annotator agreement. First, suppose that four annotators labeled tweet t_i and assigned the following labels:

  • First hierarchy level: Relevant, Relevant, Relevant, Relevant

  • Second hierarchy level: Factual, Non-factual, Non-factual, Non-factual

  • Third hierarchy level: -, Negative, Negative, Positive

Therefore, the majority labels for t_i are Relevant, Non-factual, and Negative, leading to m_1 = 4, m_2 = 3, and m_3 = 2. In total, nine workers assigned the majority labels (four on the first level, three on the second level, two on the third level), so M = 9. In the second example, suppose there was a tie on the second level of t_i, i.e.

  • First hierarchy level: Relevant, Relevant, Relevant, Relevant

  • Second hierarchy level: Factual, Non-factual, Non-factual, Factual

  • Third hierarchy level: -, Negative, Negative, -

This time there are two possibilities for the majority labels: either Relevant and Factual, or Relevant, Non-factual, and Negative. In this case the majority labels would be chosen randomly. Regardless of that outcome, the resulting worker agreement score is now computed with M = 9 instead of M = 8, because exactly one tie occurred on the second hierarchy level, leading to a lower agreement score than in the first example.
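A small sketch of this agreement computation, consistent with the prose description of Equation 3 (the function and its exact weighting, including the tie handling via M, are our reconstruction, not the authors' code):

```python
from collections import Counter

def agreement(level_labels):
    """Approximate labeling agreement a_i (sketch of Eq. 3).

    level_labels: one list of assigned labels per hierarchy level, e.g.
    [["R", "R", "R", "R"], ["F", "NF", "NF", "NF"], ["N", "N", "P"]].
    """
    per_level = []  # (m_h, n_h) per hierarchy level
    big_m = 0       # total majority-label assignments M
    for labels in level_labels:
        counts = Counter(labels)
        m_h = max(counts.values())
        tie = sum(1 for c in counts.values() if c == m_h) > 1
        per_level.append((m_h, len(labels)))
        big_m += m_h + (1 if tie else 0)  # a tie increments M by one
    return sum((m / n) * (m / big_m) for m, n in per_level)
```

For the first example above (m_1 = 4, m_2 = 3, m_3 = 2, M = 9) this yields roughly 0.84, while the tied second example scores lower.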

A higher predictor certainty c_i for a tweet t_i indicates an easier tweet. To compute it, we build a kNN predictor for each annotator separately, since sentiment is subjective. The predictor is trained on 40% of an annotator's labeled tweets, and the longest common substring (we obtained similar results when choosing edit distance or longest common subsequence) is used to compute the similarity between any pair of tweets. Since kNN does not naturally provide a certainty for the predicted label ℓ of tweet t_i, we approximate it as follows:

P(ℓ) = (k_ℓ + β) / (k + β · C)    (4)

where k_ℓ is the number of the k nearest neighbors that share label ℓ, β is a smoothing factor to avoid zero probabilities, and C is the number of possible classes that exist on the given hierarchy level. In our experiment we set β to a fixed small constant. We store for each tweet of a worker's test set (60% of the labeled tweets) the certainty of the predicted labels. Repeating this process for all workers yields a list of predictions per tweet on each hierarchy level. To obtain a single certainty per tweet, we first average the certainties (of the different workers who labeled the tweet) per label and level, and from these averaged certainties we pick the maximum per level, i.e. this process yields three values. Each of these three certainties corresponds to the predicted majority label on the respective hierarchy level. Averaging these three values yields c_i. This procedure is reflected in the following equation:

c_i = (1/|H|) · Σ_{h ∈ H} max_{ℓ ∈ L_h} (1/|A_h|) · Σ_{a ∈ A_h} P_a(ℓ)    (5)

where L_h is the set of predicted labels for t_i on hierarchy level h, A_h is the set of annotators who have t_i in their test sets, and H is the set of hierarchy levels in the labeling scheme, in our case H = {1, 2, 3}. Note that in this procedure we are not accessing the sentiment labels which kNN predicts for a tweet. Instead, we only use the predictor certainties of the sentiment labels that kNN assigned to the tweets. Therefore, we are not leaking any information about the actual sentiment labels to the sentiment predictors that are built in the experiment. Table 2 illustrates how c_i is obtained for a tweet t_i. In this case two annotators have t_i in their test sets, hence we have four predictor certainties (two predicted labels per worker) per level. For example, kNN is 80% certain, according to Equation 4, that worker 1 (first row, first column) would assign Relevant to t_i on the first hierarchy level. In contrast, kNN is only 20% certain that she would assign Irrelevant. The certainties are averaged per label and per level (row 3), e.g. the average certainty of kNN to assign Relevant on the first hierarchy level is (0.8 + 0.7)/2 = 0.75, while it is 0.25 for Irrelevant. Averaging the three per-level maximum certainties results in c_i = (0.75 + 0.7 + 0.6)/3 ≈ 0.68.

                    First level          Second level         Third level
Annotator 1         (R, .8), (I, .2)     (F, .4), (NF, .6)    (P, .3), (N, .7)
Annotator 2         (R, .7), (I, .3)     (F, .2), (NF, .8)    (P, .3), (N, .7)
Avg. certainty      (R, .75), (I, .25)   (F, .5), (NF, .5)    (P, .4), (N, .6)
Maximum certainty   .75                  .7                   .6
Table 2: Example of how Equation 5 aggregates the predicted certainties for tweet t_i. The columns represent the hierarchy levels in the labeling task. We use the following acronyms for the predicted sentiment labels: R: Relevant, I: Irrelevant, F: Factual, NF: Non-factual, P: Positive, N: Negative. Suppose two annotators labeled t_i in their test sets and kNN predicted for each worker a tuple of (sentiment label, certainty) according to Equation 4 per hierarchy level. "Avg. certainty" averages the predicted certainties per label per hierarchy level. "Maximum certainty" shows which certainty is kept according to Equation 5; averaging these maxima yields the final result, thus c_i ≈ 0.68 in this case.
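The aggregation of Equation 5 can be sketched as follows (our reconstruction; the input values below mirror the per-annotator rows of Table 2):

```python
def aggregate_certainty(per_level_predictions):
    """Sketch of Eq. 5: average the per-label certainties across
    annotators, take the maximum per hierarchy level, then average
    these maxima over all levels.

    per_level_predictions: one dict per hierarchy level mapping each
    annotator to the {label: certainty} dict predicted by kNN (Eq. 4).
    """
    level_maxima = []
    for level in per_level_predictions:
        totals, counts = {}, {}
        for prediction in level.values():
            for label, certainty in prediction.items():
                totals[label] = totals.get(label, 0.0) + certainty
                counts[label] = counts.get(label, 0) + 1
        averaged = {lab: totals[lab] / counts[lab] for lab in totals}
        level_maxima.append(max(averaged.values()))  # best label per level
    return sum(level_maxima) / len(level_maxima)
```

The returned value is the per-tweet certainty c_i that enters the difficulty score of Equation 2.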

The labeling cost l_i of tweet t_i is derived from t_i's median annotation time: the higher it is, the more difficult t_i is to label. However, since high values of l_i should be associated with easy tweets, the cost must be inverted. We choose the median annotation time across all annotators who labeled t_i because the median is more robust toward outliers than the average, and some annotators had a few random spikes in their annotation times. After normalizing the labeling cost, we obtain the following equation:

l_i = (m_max - m_i) / (m_max - m_min)    (6)

where m_i is the median labeling cost of tweet t_i, and m_min (m_max) is the lowest (highest) median labeling cost across all tweets.
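The min-max normalization and inversion of Equation 6 amount to a one-liner; a sketch (the function name is ours):

```python
def labeling_cost_scores(median_times):
    """Sketch of Eq. 6: min-max normalize the median annotation times
    and invert them so that high scores correspond to easy tweets."""
    m_min, m_max = min(median_times), max(median_times)
    return [(m_max - m) / (m_max - m_min) for m in median_times]
```

The fastest tweet receives a score of 1 and the slowest a score of 0.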

After computing d_i for each tweet, we apply k-means with k = 2 to cluster the difficulty scores. Each tweet is then assigned a difficulty label, easy or difficult, according to its cluster membership.
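Clustering the one-dimensional difficulty scores into two groups can be sketched without any library, e.g. with a simple Lloyd-style 2-means (our illustration; any standard k-means implementation would serve the same purpose):

```python
def two_means(scores, iters=50):
    """Cluster 1-D difficulty scores into two groups (k-means, k = 2).
    Since higher d_i means easier, the cluster with the higher centroid
    is labeled 'easy'."""
    c_lo, c_hi = min(scores), max(scores)  # initialize the two centroids
    for _ in range(iters):
        # assign each score to its nearest centroid
        easy = [s for s in scores if abs(s - c_hi) <= abs(s - c_lo)]
        hard = [s for s in scores if abs(s - c_hi) > abs(s - c_lo)]
        # recompute centroids as cluster means
        if easy:
            c_hi = sum(easy) / len(easy)
        if hard:
            c_lo = sum(hard) / len(hard)
    return ["easy" if abs(s - c_hi) <= abs(s - c_lo) else "difficult"
            for s in scores]
```

Tweets whose scores fall near the higher centroid receive the label easy; the rest are labeled difficult.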

3.4 Design of the Simulation Experiment

By training predictors we want to answer RQ1, i.e. whether difficult tweets affect label reliability differently in the early phase and in the late phase. The goal is to predict the hierarchical sentiment labels (Relevant, Irrelevant, Factual, Non-factual, Positive, Negative). We measure predictor performance in terms of the hierarchical F1-score, which Kiritchenko et al. recommend for hierarchical labeling tasks [11]. Specifically, we analyze the effect of the following independent variables on predictor performance:

  • difficulty: difficult or easy tweets

  • phase: early phase or late phase (cf. Section 3.5)

  • training set size: number of tweets in the training set

  • neighbors: number of nearest neighbors in kNN

  • institution: either MD or SU

We expect meaningful patterns observed in this simulation to hold while varying the abovementioned variables. Otherwise the patterns might be due to chance. For example, if one predictor outperforms another one, this result should hold even if the size of the training set changes.

The core assumption in this simulation experiment is that the reliability of labels can be inferred from measuring the performance of trained predictors: if predictors achieve higher F1-scores, the sentiment labels in their training sets are considered more reliable. In other words, we use the F1-score as a proxy for label reliability. Therefore, we train two predictors per crowd worker: PredictorE, trained only on easy tweets, and PredictorD, trained solely on difficult tweets. We fix all of the abovementioned variables so that only the difficulty of the training set differs between both predictors. This allows us to draw conclusions about the effect of tweet difficulty on label reliability.

3.5 Early & Late Phase in Annotator Behavior

For the experiment, our dependent variable, predictor performance, is affected by two parameters: the number of tweets used in the training set and tweet difficulty. That means we plot a curve of the predictor performances once for difficult and once for easy tweets while varying the number of tweets in the training set. However, annotators undergo an early phase [13, 25, 17], i.e. a drop in annotation times occurs at the beginning of an annotation session. Thus, the phase, either early or late, is also an independent variable that we need to control for in our experiment. Therefore, we perform the experiment once for the early phase and once for the late phase, because within these phases the annotation times can be considered similar.

Originally, annotators labeled either S, M, or L tweets in the used dataset according to their annotator group, and it was found that the length of the early phase differs across the annotator groups [17]. To avoid having to control for this variable as well, i.e. repeating the experiment with the two phases once for each annotator group, we fix the length of the early phase across all three annotator groups. When aggregating all annotation times per institution, either MD or SU, we obtain an early-phase length of approximately 25 tweets, i.e. the first 25 labeled tweets of each annotator are used for their early phase and their next 25 labeled tweets for their late phase, yielding a balanced experimental setup. Therefore, we use in total the first 50 labeled tweets of each annotator in both institutions; any other labeled tweets are discarded. Another reason for not using more tweets for the late phase is to avoid uncontrollable side effects such as fatigue, because there are possible indicators of fatigued workers in the dataset [17].

3.6 Building Predictors

One sentiment predictor (kNN) is trained per crowd worker in MD and SU because sentiment analysis is subjective. The exact training procedure of PredictorE and PredictorD for a single crowd worker is illustrated schematically in Figure 2. The training set (containing only difficult or only easy tweets) is derived once from tweets 1-25 in the early phase and once from tweets 26-50 in the late phase. This effectively leads to four datasets per worker, to which we refer in the remainder as strata, namely:

  1. EARLY_EASY: easy tweets that were labeled in a worker’s early phase

  2. EARLY_DIFFICULT: difficult tweets that were labeled in a worker’s early phase

  3. LATE_EASY: easy tweets that were labeled in a worker’s late phase

  4. LATE_DIFFICULT: difficult tweets that were labeled in a worker’s late phase

Hierarchical learning is performed by training in total six predictors (two predictors are trained per hierarchy level). Note that we introduced an extra label besides the sentiment labels to indicate that no label exists on a certain hierarchy level. This is necessary as Irrelevant tweets have only a label on the top-most hierarchy level. To assess the performance of the trained predictors in terms of hierarchical F1-scores (micro-averaged over all workers in a stratum), the labels of the remaining tweets in a worker’s stratum are estimated per hierarchy level. For example, if PredictorE is trained on five tweets that an annotator labeled in EARLY_EASY, it will be evaluated on her remaining 20 labeled tweets.
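The evaluation metric can be sketched as the micro-averaged hierarchical F1-score of Kiritchenko et al. [11], where each tweet contributes the labels along its full path through the hierarchy (a sketch under our reading of the metric):

```python
def hierarchical_f1(true_paths, pred_paths):
    """Micro-averaged hierarchical F1: each example contributes the set
    of labels along its path, e.g. {"Relevant", "Non-factual", "Negative"}."""
    inter = sum(len(set(t) & set(p)) for t, p in zip(true_paths, pred_paths))
    n_pred = sum(len(set(p)) for p in pred_paths)
    n_true = sum(len(set(t)) for t in true_paths)
    precision = inter / n_pred if n_pred else 0.0
    recall = inter / n_true if n_true else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A prediction that is correct on the first two levels but flips the sentiment on the third still receives partial credit, since two of the three path labels match.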

Figure 2: Overview of how PredictorE and PredictorD are built for a single crowd worker.

3.7 Testing the Meaningfulness of Observed Patterns

Since we vary many parameters in our simulation, it is infeasible to depict all plotted configurations. Instead, our main goal is to identify patterns that hold over different configurations, as these are more likely to be meaningful. We therefore report all our results in an encoded form to make finding patterns more straightforward. Instead of showing how the F1-scores of the predictors develop when varying the size of the training set, we simply state whether one of the two resulting F1-curves dominates the other. There are three possible outcomes: either curve dominates the other one, or there is a tie. The details of the encoding are explained in Section 4.1. Reporting these encoded results permits us to test whether there are significant differences in the proportions of the three outcomes using the two-tailed Fisher's exact test. Fisher's exact test (instead of a chi-square test) is suitable since some of the outcomes occur rarely.
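For a 2x2 comparison of outcome counts (e.g. how often E versus not-E occurs in the early versus the late phase), the two-tailed Fisher's exact test can be computed directly from the hypergeometric distribution; a sketch (the grouping of outcomes into a 2x2 table is our illustration):

```python
from math import comb

def fisher_exact_two_tailed(table):
    """Two-tailed Fisher's exact test for a 2x2 contingency table:
    sum the probabilities of all tables (with the same margins) that
    are at most as likely as the observed one."""
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def hyper(x):  # P(top-left cell == x) under the null hypothesis
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs + 1e-12)
```

For instance, fisher_exact_two_tailed([[8, 2], [1, 5]]) is about 0.035, so such an imbalance in proportions would be unlikely under independence.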

4 Results

First, we show some sample F1-curves of the trained predictors because afterwards we encode them into a compressed form to be able to report all of our results. This allows to identify certain trends whose statistical significance we examine thereafter.

4.1 Observed Patterns in the Simulation Experiment

This section addresses RQ1. In our dataset, easy and difficult tweets are roughly equally distributed, with easy tweets (according to Eq. 2) accounting for 50% to 57% of the tweets depending on the stratum as illustrated in Table 3. That means the classes are sufficiently balanced, thus there is no need to take any special countermeasures in the classification task.

Early stage:
            MD           SU
Easy        68 (50.4%)   93 (57.4%)
Difficult   67 (49.6%)   69 (42.6%)

Late stage:
            MD           SU
Easy        78 (55.3%)   86 (54.3%)
Difficult   63 (44.7%)   72 (45.7%)
Table 3: Absolute numbers and percentages of easy/difficult tweets per stratum for both groups, MD and SU.


We show the F1-curves of the kNN predictors trained on eight tweets per worker for the four strata while varying k, the number of neighbors in kNN. The predictors utilize edit distance as the similarity metric. In Figure 3, the F1-curves of PredictorE trained on EARLY_EASY and PredictorD trained on EARLY_DIFFICULT are shown for MD and SU. In this case both predictors perform equally well. This observation holds in both groups and will be encoded as (T)ie in the compressed form. We note that the differences between the F1-curves in the early phase are generally small. The corresponding F1-scores for the late phase of MD and SU are depicted in Figure 4 using the same setup as described before. This means that now the performances of PredictorE trained on LATE_EASY and PredictorD trained on LATE_DIFFICULT are evaluated. This time, PredictorE outperforms PredictorD. This behavior is consistent in MD and SU and will be encoded as (E)asy in the compressed representation. In this specific case, the F1-scores of PredictorE in SU are between 1.5% and 4.5% higher than those of PredictorD. In MD, PredictorE achieves between 2% and 6% better F1-scores than PredictorD. We note that the differences between the F1-curves tend to be larger when PredictorE outperforms PredictorD; if PredictorD wins, both F1-curves are close to each other. In both figures it seems that considering more neighbors for predictions mainly improves the F1-scores of PredictorD but not those of PredictorE. This could indicate that fewer workers are necessary to label easy tweets as opposed to difficult ones.

(a) MD
(b) SU
Figure 3: F1-scores of kNN with varying k. For each annotator the training set comprises eight (easy/difficult) tweets of the early phase.
(a) MD
(b) SU
Figure 4: F1-scores of kNN with varying k. For each annotator the training set comprises eight (easy/difficult) tweets of the late phase.

We report the outcomes of the remaining F1-curves of the predictors for the four strata with training sets containing between two and ten tweets as follows. In each stratum we compare the F1-scores of PredictorE and PredictorD while varying the size of the training set. We encode each outcome as follows (abbreviation in parentheses):

  • (T)ie (both predictors exhibit the same F1-scores),

  • (E)asy (PredictorE outperforms PredictorD),

  • (D)ifficult (PredictorD outperforms PredictorE).

Each table contains the encoded outcomes over training sets comprising between two and ten tweets using different distance metrics. More specifically, Table 4 depicts the outcomes for the edit distance, Table 5 shows the outcomes for the longest common subsequence, and Table 6 gives the results for the longest common substring. One tendency in these tables is that the likelihood of seeing T drops as the number of tweets used for learning increases. We suspect that this occurs because a small number of training tweets leads to poor predictor performance regardless of whether these tweets were easy or difficult. As the number of training tweets increases, the difference becomes apparent, and it becomes more likely that PredictorE is the better one.
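The T/E/D encoding used in Tables 4-6 can be sketched as a small comparison function. The tolerance threshold for declaring a tie is our assumption for illustration; in the study the curves were compared directly.

```python
def encode_outcome(f1_easy, f1_difficult, tol=0.01):
    """Encode a comparison of the two F1-curves (one value per k) as
    'T' (tie), 'E' (PredictorE wins) or 'D' (PredictorD wins).
    Comparing mean F1 with tolerance `tol` is an illustrative choice."""
    mean_e = sum(f1_easy) / len(f1_easy)
    mean_d = sum(f1_difficult) / len(f1_difficult)
    if abs(mean_e - mean_d) <= tol:
        return "T"
    return "E" if mean_e > mean_d else "D"
```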

We compared the winning predictors between the two groups MD and SU, once for the early phase and once for the late phase. The numbers are too small to deliver robust results, but we observe a general tendency: PredictorE wins more often in the late phase for SU than for MD. This could be seen as an indication that SU learned faster, but the phenomenon can also be explained by the difference in size between the two groups: MD is smaller and thus more vulnerable to variations in the performance of individual annotators. Another related pattern across all groups is that T occurs frequently in the early phase, while E tends to appear more often in the late phase.

MD     2 3 4 5 6 7 8 9 10
Early  T T T D D D E E E
Late   T T T E E E E E E

SU     2 3 4 5 6 7 8 9 10
Early  T T T T T D D T E
Late   T E E E E E E E E

Table 4: Outcomes for the different strata using kNN with edit distance and a varying number of tweets (2-10) in the training set of each annotator.
MD     2 3 4 5 6 7 8 9 10
Early  T T T D T T E E E
Late   T D T T T E E E E

SU     2 3 4 5 6 7 8 9 10
Early  T T T T T D D T E
Late   T E E D E D E E E

Table 5: Outcomes for the different strata using kNN with longest common subsequence and a varying number (2-10) of tweets in the training set of each annotator.
MD     2 3 4 5 6 7 8 9 10
Early  T T T T T E E E E
Late   T D T E E T E E E

SU     2 3 4 5 6 7 8 9 10
Early  T T T T T D D T E
Late   T E E E E D E E E

Table 6: Outcomes for the different strata using kNN with longest common substring and a varying number of tweets (2-10) in the training set of each annotator.

4.2 Significance of Observed Patterns

To analyze the meaningfulness of these patterns according to RQ2, we run the two-tailed Fisher's exact test to determine whether the differences in the proportions of the outcomes are significant, as described in Section 3.7. For each pairwise comparison of proportions, the null hypothesis to be tested is: there is no difference between the early phase and the late phase in the proportion of E and D (respectively T and E, and T and D). The proportions are displayed in Table 7 and were obtained by summing the outcomes from Tables 4-6. Using our chosen significance level, we obtain the following results.

E vs. T  Early  Late
T        31     12
E        13     36

E vs. D  Early  Late
E        13     36
D        10     6

T vs. D  Early  Late
T        31     12
D        10     6

Table 7: Occurrences of the encoded outcomes in an annotator's early and late phase.

The proportions of E and T are significantly different between the early and late phase (). This suggests that ties between predictors occur more frequently in the early phase, while PredictorE outperforms PredictorD significantly more often in the late phase. Likewise, the proportions of E and D differ significantly () across both phases: neither PredictorE nor PredictorD wins significantly more frequently in the early phase, while in the late phase PredictorE outperforms PredictorD significantly more often. For the proportions of T and D, no significant difference exists (). Thus, the significance tests confirm our intuition about the patterns in the results, namely that T occurs mainly in the early phase, E in the late phase, and D appears rarely in both phases.

5 Discussion

The results of our preliminary study suggest that there is indeed a connection between the difficulty of tweets and the reliability of the labels that annotators assign to them. More specifically, the label reliability of easy tweets seems higher, because predictors trained on them achieve higher F1-scores. However, this holds only for an annotator's late phase, i.e. after annotators have already labeled 25 other tweets. In the early phase, i.e. for the first 25 tweets, our results show no evidence for such a relationship. One possible explanation is that the labels workers assign in their early phase [21, 25, 6, 13, 17] are generally of lower quality [13, 17]. This higher level of noisy, low-quality labels could mask the effect of tweet difficulty on label reliability in an annotator's early phase.

It would be interesting to examine this hypothesis in a new study with a slightly different experimental setup: first, workers complete a labeling task in their first annotation session (same setup as in [17]), and after a short break they repeat the task with new tweets in a second session. If the noisy, low-quality labels of the early phase masked the relationship between tweet difficulty and label reliability in the early phase of the first session, we would expect the second session to show a pattern similar to the one we reported for the late phase in this study, because workers should not have to go through another early phase, assuming the break between the two sessions is not too long. Moreover, given that crowd workers tend to complete many micro-tasks, they will quickly reach their late phase, meaning that labeling easier tweets will increase the reliability of assigned labels in practice.

This motivates the idea of devising a tweet difficulty predictor that estimates the difficulty of unseen tweets, for which a host of applications exist [16]. We plan to apply this predictor as a filter before an actual crowdsourcing task: given a large dataset, one could crowdsource a small seed first to train the difficulty predictor, let it estimate the level of difficulty in the unlabeled dataset, and crowdsource only the tweets that are estimated to be easy. This could also complement the approach proposed by Whitehill et al. [22] for aggregating crowdsourced labels more accurately, because the prior for document difficulty in their probabilistic method could be tweaked such that easy documents are more likely to occur in the dataset. Building such a difficulty predictor on a small seed set would also benefit active learning techniques, as the predictor could be applied before an active learning algorithm is invoked, so that reliable labels are obtained from experts only for easy tweets. Furthermore, incorporating tweet difficulty into the cost models of active learning, which estimate the costs of acquiring labels for unlabeled tweets, could enhance those models' accuracy.
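The seed-then-filter workflow described above can be sketched as a small pipeline function. All argument names below are hypothetical placeholders standing in for a trained difficulty predictor and a crowdsourcing backend, not an existing API.

```python
def crowdsource_easy_only(seed, unlabeled, train_difficulty_model, crowdsource):
    """Sketch of the proposed pipeline: train a difficulty predictor on a
    small crowdsourced seed, then send only tweets predicted 'easy' to the
    crowd. All callables here are hypothetical stand-ins."""
    predict = train_difficulty_model(seed)                  # seed: [(tweet, 'easy'|'difficult')]
    easy = [t for t in unlabeled if predict(t) == "easy"]   # filter out difficult tweets
    return crowdsource(easy)                                # crowd labels for easy tweets only
```

In practice the crowdsourced seed doubles as training data for the predictor, so the only extra cost is labeling the seed itself.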

Reducing the dataset size by filtering out difficult tweets could also increase the retention rate of the crowdsourcing task, as workers might become less frustrated when micro-tasks can be completed with more ease. Furthermore, crowdsourcing a smaller dataset saves the budget that would otherwise be spent on difficult tweets. Even more budget could be saved if fewer crowd workers were allocated to easy tweets, similar to [19]. Another way of using such a tweet difficulty predictor would be to assign easy tweets to inexperienced workers and difficult ones to experts [12]; the associated monetary compensation could also vary with the level of expertise of the crowd workers. This is related to the problem of optimal task routing in crowdsourcing, where suitable workers must be identified for micro-tasks. For example, in [9] workers' cognitive abilities are used to match them to suitable tasks. This works for language fluency and visual tasks, but has not been tested for other types of tasks, such as sentiment analysis. If tweets are involved, a tweet difficulty predictor could complement this approach.

We note several limitations of our preliminary study. First, our dataset was relatively small. Nevertheless, the tweets we used were diverse, and we performed our experiment independently in two different locations. Second, we investigated a single labeling task, which could bias the results; for example, in other tasks easy tweets might not be diverse enough to train good predictors. However, if sufficiently diverse tweets exist for a labeling task, we believe that our results will hold. Third, we evaluated only one predictor, kNN. Replicating this experiment on a larger scale with more diverse predictors would thus help establish our findings. Our dataset (https://www.researchgate.net/publication/325180810_Infsci2017_dataset) and source code (https://github.com/fensta/PrelimStudy) are publicly available.

6 Conclusion

In this preliminary study we examined how tweet difficulty affects the reliability of labels that annotators assign. The experiment we designed to investigate this hypothesis was performed independently in two locations, and we obtained consistent empirical results. They suggest that the labels assigned to easy tweets are more reliable, but only if the annotators are familiar with the labeling task, i.e. they had labeled a certain number of tweets before. This observation implies that the performance of predictors could theoretically be enhanced by devising a predictor that estimates the difficulty of tweets in advance. Due to its benefits for crowdsourcing and active learning, we plan to develop a method that employs such a tweet difficulty predictor at its core in the future [16]. Another subject for future investigation is the question of diversity in easy tweets: do the easy tweets in a labeling task always suffice to train meaningful predictors?

References

  • [1] Héctor Martínez Alonso, Anders Johannsen, Oier Lopez de Lacalle, and Eneko Agirre. Predicting word sense annotation agreement. In Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem), page 89, 2015.
  • [2] Omar Alonso and Stefano Mizzaro. Can we get rid of trec assessors? using mechanical turk for relevance assessment. In Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation, volume 15, page 16, 2009.
  • [3] Kalina Bontcheva, Leon Derczynski, and Ian Roberts. Crowdsourcing named entity recognition and entity linking corpora. In Handbook of Linguistic Annotation, pages 875–892. Springer, 2017.
  • [4] David Carmel, Elad Yom-Tov, Adam Darlow, and Dan Pelleg. What makes a query difficult? In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 390–397. ACM, 2006.
  • [5] Aron Culotta and Andrew McCallum. Reducing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746–751, 2005.
  • [6] Ujwal Gadiraju, Besnik Fetahu, and Ricardo Kawase. Training workers for improving performance in crowdsourcing microtasks. In Design for Teaching and Learning in a Networked World, pages 100–114. Springer, 2015.
  • [7] Ujwal Gadiraju, Ricardo Kawase, and Stefan Dietze. A taxonomy of microtasks on the web. In Proceedings of the 25th ACM conference on Hypertext and social media, pages 218–223. ACM, 2014.
  • [8] Xiaoying Gan, Xiong Wang, Wenhao Niu, Gai Hang, Xiaohua Tian, Xinbing Wang, and Jun Xu. Incentivize multi-class crowd labeling under budget constraint. IEEE Journal on Selected Areas in Communications, 35(4):893–905, 2017.
  • [9] Jorge Goncalves, Michael Feldman, Subingqian Hu, Vassilis Kostakos, and Abraham Bernstein. Task routing and assignment in crowdsourcing based on cognitive abilities. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 1023–1031. International World Wide Web Conferences Steering Committee, 2017.
  • [10] Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. An analysis of human factors and label accuracy in crowdsourcing relevance judgments. Information retrieval, 16(2):138–178, 2013.
  • [11] Svetlana Kiritchenko, Stan Matwin, Richard Nock, and A Fazel Famili. Learning and evaluation in the presence of class hierarchies: Application to text categorization. In Canadian Conference on AI, volume 2006, pages 395–406. Springer, 2006.
  • [12] Andrey Kolobov, Daniel S Weld, et al. Joint crowdsourcing of multiple tasks. In First AAAI Conference on Human Computation and Crowdsourcing, pages 36–37, 2013.
  • [13] Eddy Maddalena, Marco Basaldella, Dario De Nart, Dante Degl’Innocenti, Stefano Mizzaro, and Gianluca Demartini. Crowdsourcing relevance assessments: The unexpected benefits of limiting the time to judge. In Fourth AAAI Conference on Human Computation and Crowdsourcing, 2016.
  • [14] Miguel Martinez-Alvarez, Alejandro Bellogin, and Thomas Roelleke. Document difficulty framework for semi-automatic text classification. In International Conference on Data Warehousing and Knowledge Discovery, pages 110–121. Springer, 2013.
  • [15] Mari-Sanna Paukkeri, Marja Ollikainen, and Timo Honkela. Assessing user-specific difficulty of documents. Information Processing & Management, 49(1):198–212, 2013.
  • [16] Stefan Räbiger, Gizem Gezici, Myra Spiliopoulou, and Yücel Saygın. Predicting worker disagreement for more effective crowd labeling. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 2018.
  • [17] Stefan Räbiger, Myra Spiliopoulou, and Yücel Saygın. How do annotators label short texts? Toward understanding the temporal dynamics of tweet labeling. Information Sciences, 457-458:29–47, 2018.
  • [18] Paola Salomoni, Catia Prandi, Marco Roccetti, Valentina Nisi, and N Jardim Nunes. Crowdsourcing urban accessibility: Some preliminary experiences with results. In Proceedings of the 11th Biannual Conference on Italian SIGCHI Chapter, pages 130–133. ACM, 2015.
  • [19] Mehrnoosh Sameki, Mattia Gentil, Kate K Mays, Lei Guo, and Margrit Betke. Dynamic allocation of crowd contributions for sentiment analysis during the 2016 us presidential election. arXiv preprint arXiv:1608.08953, 2016.
  • [20] Burr Settles. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(1):1–114, 2012.
  • [21] Burr Settles, Mark Craven, and Lewis Friedland. Active learning with real annotation costs. In Proceedings of the NIPS workshop on cost-sensitive learning, pages 1–10, 2008.
  • [22] Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R. Movellan, and Paul L. Ruvolo. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 2035–2043. Curran Associates, Inc., 2009.
  • [23] Sungwon Yang, Pralav Dessai, Mansi Verma, and Mario Gerla. Freeloc: Calibration-free crowdsourced indoor localization. In INFOCOM, 2013 Proceedings IEEE, pages 2481–2489. IEEE, 2013.
  • [24] Xiaoyan Yang, Shanshan Ying, Wenzhe Yu, Rong Zhang, and Zhenjie Zhang. Enhancing topic modeling on short texts with crowdsourcing. In Asian Conference on Machine Learning, pages 33–48, 2016.
  • [25] Dongqing Zhu and Ben Carterette. An analysis of assessor behavior in crowdsourced preference judgments. In SIGIR 2010 workshop on crowdsourcing for search evaluation, pages 17–20, 2010.