Autocompletion interfaces make crowd workers slower, but their use promotes response diversity

07/21/2017 · by Xipei Liu, et al.

Creative tasks such as ideation or question proposal are powerful applications of crowdsourcing, yet the quantity of workers available for addressing practical problems is often insufficient. To enable scalable crowdsourcing thus requires gaining all possible efficiency and information from available workers. One option for text-focused tasks is to allow assistive technology, such as an autocompletion user interface (AUI), to help workers input text responses. But support for the efficacy of AUIs is mixed. Here we designed and conducted a randomized experiment where workers were asked to provide short text responses to given questions. Our experimental goal was to determine if an AUI helps workers respond more quickly and with improved consistency by mitigating typos and misspellings. Surprisingly, we found that neither occurred: workers assigned to the AUI treatment were slower than those assigned to the non-AUI control and their responses were more diverse, not less, than those of the control. Both the lexical and semantic diversities of responses were higher, with the latter measured using word2vec. A crowdsourcer interested in worker speed may want to avoid using an AUI, but using an AUI to boost response diversity may be valuable to crowdsourcers interested in receiving as much novel information from workers as possible.


1 Introduction

Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling (Welinder & Perona, 2010) all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation (Little et al., 2010). Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible.

Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order (Tran-Thanh et al., 2013), by filtering out poor tasks or workers, or by better incentivizing workers (Allahbakhsh et al., 2013). At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time.

One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) (Sevenster et al., 2012), such as the one used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed, and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what the best approaches or general strategies are for achieving efficiency gains on creative crowdsourcing tasks.

In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing was completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower, and their responses were more diverse than those of workers in the non-AUI control group.

2 Related Work

An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest (Karger et al., 2014; Li et al., 2016; McAndrew & Bagrow, 2016), while less work has been put towards the latter.

One approach to helping workers be faster at individual tasks is the application of usability studies. Kittur et al. (2008) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces. More recent usability studies on the efficiency and accuracy of workers include: Cheng et al. (2015), who considered the task completion times of macrotasks and microtasks and found that workers given smaller microtasks were slower but achieved higher quality than those given larger macrotasks; Lasecki et al. (2015), who studied how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and Demartini (2016), who studied completion times for relevance judgment tasks and found that imposed time limits can improve relevance quality, but did not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here.

The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google’s main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed.

It is generally assumed that AUIs make users faster by saving keystrokes (Bast & Weber, 2006). However, there is considerable debate about whether such gains are countered by the increased cognitive load of processing the suggested autocompletions (Koester & Levine, 1994). Anson et al. (2006) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. Sevenster et al. (2012) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on the response times of the AUI suggestions, so it is unclear whether the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings (Magnuson & Hunnicutt, 2002), but it is not clear how general such learning may be, nor whether it is relevant to short-duration crowdsourcing tasks.

3 Experimental design

Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis.

Task description and question data

For this work, we focused on a conceptualization or "IsA" task. Each task consisted of a question of the form "FOO is a type of:" followed by a short one-line text field for the worker to respond in. The particular term "FOO" then defines each question. Before this question was a brief description of the task followed by two examples: "chair is a type of furniture" and "Microsoft is a corporation". See Fig. 1.

A Control form

B Autocompletion User Interface (AUI) form
Figure 1: Screenshots of our conceptualization task interface. The presence of the AUI is the only difference between the task interfaces.

The question terms ("chair" and "Microsoft" in the above examples) were chosen from the Microsoft Concept Graph (MCG) dataset (Wu et al., 2012; Wang et al., 2015). These data provide a bipartite knowledge graph linking entities to concepts; for example, "city" is a concept related to the entity "Berlin". We chose these data for our conceptualization task so that we have a comparative baseline, as the MCG captures the same relationships we measure in our task.

We chose 10 entities randomly from the MCG to act as question terms. The MCG data are somewhat noisy, heavily skewed to rare terms (often medical terms), and contain many abstract entity–concept relations, so we first performed a filtering step to focus on commonplace and easy-to-understand question terms. We also required that 5 of the chosen terms be one-word entities longer than two letters and 5 be multi-word phrases, both without numbers. See Table 1 for our final chosen question terms.

ID   Question term   ID    Question term
Q1   hail            Q6    occupational therapist
Q2   millet          Q7    standard deviation
Q3   steam           Q8    motor vehicle
Q4   finland         Q9    dengue fever
Q5   spider          Q10   citric acid

Table 1: Question terms used in our conceptualization task. Workers were shown these questions in random order.

Crowdsourcing and treatment

We recruited workers on Amazon Mechanical Turk (AMT) to perform our task. Recruited workers were required to have an approval rating of 80% or better, be located in the USA, and be able to view adult content. Each human intelligence task (HIT) was one conceptualization task, i.e., one of the ten questions. Workers could perform anywhere from one to ten HITs. Questions were shown to each worker in random order. Each worker response generates a question-response text pair, which may or may not be unique, as other workers may give the same response to the same question. Workers were compensated $0.05 per HIT.

Workers were blindly assigned to one of two conditions with equal probability (simple random assignment) when they accepted their first HIT. This assignment was then carried over for any subsequent HITs performed by that worker. The control condition consisted of a HIT interface (web form) with a text entry field without an autocompletion user interface (AUI). We refer to this as the Control form and to the workers assigned to it as the Control group. The treatment consisted of a text entry field with an associated AUI; correspondingly, we refer to this form as the AUI form and to the workers assigned to it as the AUI group. Screenshots comparing the Control and AUI forms are shown in Fig. 1.
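The persistent per-worker assignment described above can be sketched as follows; this is a minimal illustration, not the actual experiment code, and the function and store names (`assign_condition`, `assignments`) are hypothetical:

```python
import random

def assign_condition(worker_id, assignments, rng=random):
    """Draw 'control' or 'aui' with equal probability on a worker's
    first HIT, then reuse that assignment for all subsequent HITs."""
    if worker_id not in assignments:
        assignments[worker_id] = rng.choice(["control", "aui"])
    return assignments[worker_id]

assignments = {}
first = assign_condition("W123", assignments)
# The same worker always stays in the same group on later HITs.
assert assign_condition("W123", assignments) == first
```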

In all other respects the HIT interfaces were identical. In particular, for both forms, JavaScript was used on the field to prevent workers from inputting punctuation or responses exceeding four words. Copying and pasting were prevented on the page; workers could only fill in the text entry by typing or, when available, by selecting from the AUI. The HIT was not submittable until the response field was filled.

Autocompletion user interface

The AUI we used was implemented with jQuery-UI's (ver. 1.12.1) autocomplete widget with autofocus enabled (autofocus makes it easy for the worker to quickly select the top AUI suggestion). Whenever two or more characters are present in the response field, a search based on the current contents of the field is triggered against a database containing all MCG concepts with at least 5 associated entities (705,710 concept terms). Concept terms are indexed for speed, the search term is matched from both sides using MySQL's "LIKE" operator, and the first six matches are dynamically displayed in the AUI (Fig. 1B), with up to another six available by scrolling. The search repeats whenever the current response changes; the AUI disappears if there are fewer than two characters in the response field. Workers were not required to select a response from the AUI. Searching the MCG concepts helps provide meaningful autocompletions for our conceptualization task.
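The lookup logic can be sketched in a few lines; the real implementation queried MySQL with LIKE, so `aui_suggestions` and the toy concept list below are illustrative assumptions only:

```python
def aui_suggestions(typed, concepts, limit=6):
    """Mimic the AUI lookup: once two or more characters are present,
    match the input as a substring from both sides (as with SQL
    LIKE '%typed%') and return the first `limit` hits."""
    if len(typed) < 2:
        return []  # the AUI stays hidden below two characters
    needle = typed.lower()
    return [c for c in concepts if needle in c.lower()][:limit]

concepts = ["fruit", "dried fruit", "furniture", "profession", "grain"]
print(aui_suggestions("fru", concepts))  # ['fruit', 'dried fruit']
print(aui_suggestions("f", concepts))    # [] -- too short to trigger
```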

Instrumentation

Our experimental goal was to determine how workers would use an AUI and how an AUI may affect their responses. Would they be faster at answering such short questions by saving on typing time? Or would the cognitive load of reading the AUI as it appeared and updated slow down the worker, even enough to offset any savings from faster text entry? Would the AUI lead to more consistent responses across workers by mitigating typos, or less consistent responses, by acting as a cognitive primer?

To study the effects of the AUI, each HIT form was instrumented with JavaScript to record the times when workers first entered text into the response field, when they last entered text into the response field, and when the form was submitted. Note that while we also recorded the time when the HIT was accepted, we did not use these data because it is unclear when a worker accepts a HIT as opposed to when a worker actually begins work on that HIT (AMT workers sometimes open a series of HITs into separate browser tabs, and then later process those HITs). Due to this, our future experiments will also record when the browser window containing the HIT is active.

This instrumentation allows us to measure two important features of worker activity:

  1. Typing duration—Total elapsed time between the first and last keypress made by the worker into the text area.

  2. Submission delay—Total elapsed time between the final keypress into the text area and the submission of the form.
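From the three recorded timestamps, the two measures reduce to simple differences. A sketch with hypothetical names (timestamps in seconds):

```python
def timing_features(first_keypress, last_keypress, submitted):
    """Typing duration: first keypress to last keypress.
    Submission delay: last keypress to form submission."""
    typing_duration = last_keypress - first_keypress
    submission_delay = submitted - last_keypress
    return typing_duration, submission_delay

# A worker who types from t=10.0s to t=12.5s and submits at t=16.9s:
typ, delay = timing_features(10.0, 12.5, 16.9)
```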

Response processing and quality ratings

Worker responses were post-processed by removing casing and transforming any whitespace to a single space character. Additional processing was unnecessary because of the in-browser processing done by the form (see above).
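This post-processing amounts to a one-line normalization; a minimal sketch (the helper name is hypothetical):

```python
import re

def normalize_response(text):
    """Remove casing and collapse any whitespace run to a single space
    (punctuation was already blocked by the in-browser form)."""
    return re.sub(r"\s+", " ", text.strip()).lower()

print(normalize_response("  Motor\tVehicle "))  # 'motor vehicle'
```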

A second, non-experimental set of HITs was used to measure the perceived quality of each unique question-response pair. Instead of using additional workers to rate responses, the quality of responses for our conceptualization task could in principle be assessed computationally using, for example, ontology datasets. However, combining free-text responses from workers with a fixed-vocabulary dataset is a challenging natural language processing task beyond the scope of this work, so here we simply relied on ratings by independent workers. Workers were shown statements of the form "FOO is a type of: BAR", where BAR is a worker response to question term FOO, and were asked to rate their agreement with this statement on a 1–5 rating scale (1 = least agree; 5 = strongest agree). Each worker was shown ten such statements per HIT and compensated at a rate of $0.25 per HIT. Workers who belonged to either the Control or AUI groups were excluded from these tasks.

4 Results

4.1 Data collection

We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the Control and 505 in the AUI. All responses were gathered within a single 24-hour period in April 2017.

After the Control and AUI workers had finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question-response pair. Each unique question-response pair was rated 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings.

4.2 Differences in response time

We found that workers were slower overall with the AUI than without it. In Fig. 2 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97 s for Control compared with median 2.69 s for AUI; responses from the AUI group were also slightly longer than those from the Control group, with a median length of 11 characters vs. 9 characters). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27 s vs. 4.44 s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay might be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than Control (median 7.64 s for Control vs. 12.14 s for AUI). We conclude that the AUI makes workers significantly slower.

Figure 2: Distributions of time delays. Workers in the AUI treatment were significantly slower than in the control, and this was primarily due to the submission delay between when they finished entering text and when they submitted their response.

We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced.

To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. 3). Indeed, learning did occur: the submission delay (but not the typing duration) decreased as workers responded to more questions. However, this did not translate into closing the performance gap between Control and AUI workers, as learning occurred in both groups: among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02 s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178 s. This difference between Control and AUI submission delays was significant (Mann-Whitney test). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97 s (7.00 s). This difference was also significant (Mann-Whitney test). We conclude that experience with the AUI will not eventually lead to responses as fast as those of the control.

Figure 3: Workers became faster as they gained experience by answering more questions, but this improvement occurred in both Control and AUI groups.

4.3 Differences in response diversity

We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task.

To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. 4A), implying higher diversity for AUI than for Control.

Figure 4: AUI workers had more lexically (A, B) and semantically (C) diverse responses than Control workers.

Second, we compared the diversity of individual responses between Control and AUI for each question. To measure the diversity of responses to a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique, while a set in which every response is identical has a response density equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed-rank test paired on questions) (Fig. 4B).
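Response density as defined above reduces to a one-liner; a minimal sketch:

```python
def response_density(responses):
    """Number of responses divided by number of unique responses:
    1.0 when every response differs, len(responses) when all agree."""
    return len(responses) / len(set(responses))

assert response_density(["insect", "bug", "arachnid"]) == 1.0  # all unique
assert response_density(["insect"] * 5) == 5.0                 # all identical
```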

Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporates the semantic meanings of words and phrases by learning vector representations embedded in a high-dimensional vector space (Mikolov, Chen et al., 2013; Mikolov, Sutskever et al., 2013). Vector operations within this space, such as addition and subtraction, are capable of representing meaning and interrelationships between words (Mikolov, Sutskever et al., 2013); for example, the vector for "king" − "man" + "woman" is very close to the vector for "queen", indicating that these vectors capture analogy relations. Here we used 300-dimension word vectors trained on a 100B-word corpus taken from Google News (word2vec; https://code.google.com/archive/p/word2vec). For each question we computed the average similarity between words in the responses to that question; a lower similarity implies more semantically diverse answers. Specifically, for a given question q, we concatenated all responses to that question into a single document D_q and averaged the vector similarities of all pairs of words w_i, w_j in D_q, where v_w is the word vector corresponding to word w:

    ⟨sim⟩_q = ( Σ_{i<j} δ_ij · sim(v_{w_i}, v_{w_j}) ) / ( Σ_{i<j} δ_ij ),    (1)

where δ_ij = 1 if w_i ≠ w_j and zero otherwise. We also excluded from Eq. (1) any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For similarity we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word-vector similarity (and thus collectively more semantically diverse responses) when the document D_q was built from AUI responses than when it came from the Control workers (Fig. 4C). The difference was significant (Wilcoxon signed-rank test paired on questions).
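The pairwise-similarity average can be sketched in plain Python; the toy 2-d vectors below stand in for the pre-trained 300-d word2vec embeddings and are purely illustrative:

```python
import math

def cosine(u, v):
    """Standard cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def avg_pairwise_similarity(words, vectors):
    """Average cosine similarity over all pairs of distinct,
    in-vocabulary words in a document; lower values indicate more
    semantically diverse responses. `vectors` maps word -> vector."""
    total, count = 0.0, 0
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            wi, wj = words[i], words[j]
            if wi == wj or wi not in vectors or wj not in vectors:
                continue  # delta_ij = 0, or an out-of-vocabulary pair
            total += cosine(vectors[wi], vectors[wj])
            count += 1
    return total / count if count else float("nan")

# Hypothetical toy embeddings for three response words:
vecs = {"weather": [1.0, 0.0], "storm": [1.0, 0.1], "acid": [0.0, 1.0]}
sim = avg_pairwise_similarity(["weather", "storm", "acid"], vecs)
```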

Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave.

4.4 No difference in response quality

Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. 5. While there was variation in overall quality across different questions (Fig. 5A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. 5B). We conclude that the AUI neither increased nor decreased response quality.

Figure 5: Quality of responses. All question-response pairs were rated independently by workers on a 1–5 scale of perceived quality (1 = lowest quality, 5 = highest quality).

5 Discussion

We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than when the AUI was not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers.

A challenge with text-focused crowdsourcing is aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this in the sense that the sets of responses became more diverse, not less. Yet, this may in fact be desirable—if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them.

One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop down menu (Fig. 1B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area.

We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our JavaScript instrumentation logged worker keystrokes as they typed, so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text input by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not even have noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. 6A) and, of those responses, most were found in the first few (reconstructed) positions near the top of the AUI (Fig. 6B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six. With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI, which is most quickly accessible to the worker.
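The reconstruction step can be sketched as follows; the function name, toy concept list, and the flat list of twelve candidate slots (six visible plus six scrollable) are illustrative assumptions, not the actual analysis code:

```python
def reconstruct_position(typed_prefix, final_response, concepts, shown=12):
    """Approximate the AUI slot of a worker's final response: re-run the
    substring lookup on the last logged prefix and return the 1-based
    index of the response among the first `shown` matches, or None if
    the response never appeared in the AUI."""
    matches = [c for c in concepts if typed_prefix.lower() in c.lower()][:shown]
    try:
        return matches.index(final_response) + 1
    except ValueError:
        return None  # response was typed out, not selected from the AUI

concepts = ["disease", "infectious disease", "tropical disease", "symptom"]
print(reconstruct_position("dis", "infectious disease", concepts))  # 2
```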

Figure 6: Inferred positions of AUI selections based on the last text workers in the AUI group typed before choosing from the AUI. (A) Most submitted AUI responses appeared in the AUI. (B) Among the responses appearing in the AUI, the reconstructed positions of those responses tended to be at the top of the AUI, in the most prominent, accessible area.

Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question if an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing which far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces.

Acknowledgments

We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634.

References

  • Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., & Dustdar, S. (2013). Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing, 17(2), 76–81.
  • Anson, D., Moist, P., Przywara, M., Wells, H., Saylor, H., & Maxime, H. (2006). The effects of word completion and word prediction on typing rates using on-screen keyboards. Assistive Technology, 18(2), 146–154.
  • Bast, H., & Weber, I. (2006). Type less, find more: Fast autocompletion search with a succinct index. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 364–371).
  • Cheng, J., Teevan, J., Iqbal, S. T., & Bernstein, M. S. (2015). Break it down: A comparison of macro- and microtasks. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 4061–4064). ACM. doi:10.1145/2702123.2702146
  • Demartini, G. (2016). Crowdsourcing relevance assessments: The unexpected benefits of limiting the time to judge. In Proceedings of the Conference on Human Computation and Crowdsourcing (HCOMP 2016).
  • Karger, D. R., Oh, S., & Shah, D. (2014). Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 62(1), 1–24. doi:10.1287/opre.2013.1235
  • Kittur, A., Chi, E. H., & Suh, B. (2008). Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 453–456).
  • Koester, H. H., & Levine, S. P. (1994). Modeling the speed of text entry with a word prediction interface. IEEE Transactions on Rehabilitation Engineering, 2(3), 177–187. doi:10.1109/86.331567
  • Lasecki, W. S., Rzeszotarski, J. M., Marcus, A., & Bigham, J. P. (2015). The effects of sequence and delay on crowd work. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1375–1378). ACM. doi:10.1145/2702123.2702594
  • Li, Q., Ma, F., Gao, J., Su, L., & Quinn, C. J. (2016). Crowdsourcing high quality labels with a tight budget. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining (pp. 237–246).
  • Little, G., Chilton, L. B., Goldman, M., & Miller, R. C. (2010). Exploring iterative and parallel human computation processes. In Proceedings of the ACM SIGKDD Workshop on Human Computation (pp. 68–76).
  • Magnuson, T., & Hunnicutt, S. (2002). Measuring the effectiveness of word prediction: The advantage of long-term use. TMH-QPSR, 43(1), 57–67.
  • McAndrew, T. C., & Bagrow, J. P. (2016). Reply & Supply: Efficient crowdsourcing when workers do more than answer questions. arXiv:1611.00954.
  • Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781.
  • Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (pp. 3111–3119). Curran Associates, Inc.
  • Sevenster, M., van Ommering, R., & Qian, Y. (2012). Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary. Journal of Biomedical Informatics, 45(1), 107–119.
  • Tran-Thanh, L., Venanzi, M., Rogers, A., & Jennings, N. R. (2013). Efficient budget allocation with accuracy guarantees for crowdsourcing classification tasks. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems (pp. 901–908).
  • Wang, Z., Wang, H., Wen, J. R., & Xiao, Y. (2015). An inference approach to basic level of categorization. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management (pp. 653–662).
  • Welinder, P., & Perona, P. (2010). Online crowdsourcing: Rating annotators and obtaining cost-effective labels. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 25–32).
  • Wu, W., Li, H., Wang, H., & Zhu, K. Q. (2012). Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (pp. 481–492).