Modeling and Mitigating Human Annotation Errors to Design Efficient Stream Processing Systems with Human-in-the-loop Machine Learning

07/07/2020
by Rahul Pandey, et al.

High-quality human annotations are necessary for creating effective machine-learning-driven stream processing systems. We study hybrid stream processing systems based on a Human-In-The-Loop Machine Learning (HITL-ML) paradigm, in which one or more human annotators and an automatic classifier (trained at least partially on those human annotations) label an incoming stream of instances. This setting is typical of many near-real-time social media analytics and web applications, including the annotation of social media posts during emergencies by digital volunteer groups. From a practical perspective, low-quality human annotations provide wrong labels for retraining the automated classifiers and thus indirectly make those classifiers inaccurate. Treating human annotation as a psychological process allows us to address these limitations. We show that human annotation quality depends on the ordering of instances shown to annotators and can be improved by local changes to that ordering, yielding more accurate annotation of the stream. We design a theoretically motivated human error framework for the annotation task to study the effect of instance ordering (i.e., an "annotation schedule"). Further, we propose an error-avoidance approach to the active learning (HITL-ML) paradigm for stream processing applications that is robust to these likely human errors when deciding a human annotation schedule. We validate the human error framework through crowdsourcing experiments and evaluate the proposed algorithm against standard active learning baselines via extensive experiments on classifying the relevance of social media posts during natural disasters.
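To make the HITL-ML setup concrete, the following is a minimal sketch of a stream-processing loop in which batches of incoming posts are reordered into an "annotation schedule" before being shown to a human annotator, and the classifier is retrained incrementally on the resulting labels. It assumes scikit-learn's incremental SGDClassifier; the scheduling heuristic shown (sorting a batch by classifier score so similar instances are annotated consecutively) is purely illustrative and is not the paper's error-avoidance scheduler, and stream_of_post_batches and ask_human are hypothetical placeholders.

```python
# Minimal sketch of a HITL-ML stream-processing loop with an annotation schedule.
# The scheduling heuristic is illustrative only; it does not reproduce the
# paper's error-avoidance algorithm.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
classifier = SGDClassifier()          # supports partial_fit for streaming updates
classes = np.array([0, 1])            # e.g., 0 = not relevant, 1 = relevant
fitted = False

def schedule_batch(texts, model, is_fitted):
    """Reorder a batch before showing it to the human annotator."""
    if not is_fitted:
        return list(texts)
    X = vectorizer.transform(texts)
    scores = model.decision_function(X)
    # Illustrative heuristic: sort by classifier score so similar instances
    # appear consecutively in the annotation schedule.
    order = np.argsort(scores)
    return [texts[i] for i in order]

def ask_human(text):
    # Placeholder for the human annotator (e.g., a crowdsourcing task).
    return int(input(f"Label (0/1) for: {text[:80]} > "))

for batch in stream_of_post_batches():   # hypothetical generator of text batches
    ordered = schedule_batch(batch, classifier, fitted)
    labels = [ask_human(t) for t in ordered]
    X = vectorizer.transform(ordered)
    classifier.partial_fit(X, labels, classes=classes)
    fitted = True
```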


