Efficient Online Scalar Annotation with Bounded Support

06/04/2018
by Keisuke Sakaguchi, et al.

We describe a novel method for efficiently eliciting scalar annotations for dataset construction and system quality estimation from human judgments. We contrast direct assessment (annotators assign scores to items directly), online pairwise ranking aggregation (scores are derived from annotators' comparisons of items), and a hybrid approach proposed here (EASL: Efficient Annotation of Scalar Labels). Our proposal yields increased correlation with ground truth at far greater annotator efficiency, suggesting this strategy as an improved mechanism for dataset creation and manual system evaluation.
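The core idea behind such a hybrid is to maintain, for each item, an online estimate of its score on a bounded range that is refined as annotator judgments arrive. The sketch below is illustrative only: the Beta-posterior update, the class name, and the weighting scheme are assumptions chosen for exposition, not the paper's actual EASL algorithm.

```python
import random


class BoundedScalarItem:
    """Running estimate of an item's quality on [0, 1], kept as a Beta(alpha, beta).

    Illustrative sketch of online scalar aggregation with bounded support;
    not the exact update rule proposed in the paper.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(1, 1) is uniform on [0, 1]: no prior preference.
        self.alpha = alpha
        self.beta = beta

    @property
    def mean(self) -> float:
        # Posterior mean of a Beta distribution.
        return self.alpha / (self.alpha + self.beta)

    def observe(self, score: float, weight: float = 1.0) -> None:
        """Fold in one annotator judgment, a score in [0, 1].

        A score near 1 adds pseudo-counts to alpha, a score near 0 to beta,
        so the posterior mean moves toward the observed score.
        """
        if not 0.0 <= score <= 1.0:
            raise ValueError("scores must lie in [0, 1]")
        self.alpha += weight * score
        self.beta += weight * (1.0 - score)


if __name__ == "__main__":
    # Simulate noisy annotators rating an item whose true quality is 0.7.
    random.seed(0)
    item = BoundedScalarItem()
    for _ in range(20):
        noisy = min(1.0, max(0.0, random.gauss(0.7, 0.1)))
        item.observe(noisy)
    print(f"estimated quality: {item.mean:.3f}")  # converges toward ~0.7
```

A Beta posterior keeps the estimate inside [0, 1] by construction, which is one natural reading of "bounded support"; consult the paper itself for the model the authors actually propose.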


Related research

06/14/2022
A Truthful Owner-Assisted Scoring Mechanism
Alice (owner) has knowledge of the underlying quality of her items measu...

10/27/2021
You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism
I consider the setting where reviewers offer very noisy scores for a num...

07/05/2023
Evaluating AI systems under uncertain ground truth: a case study in dermatology
For safety, AI systems in health undergo thorough evaluations before dep...

09/18/2023
Drawing the Same Bounding Box Twice? Coping Noisy Annotations in Object Detection with Repeated Labels
The reliability of supervised machine learning systems depends on the ac...

05/03/2021
Scalar Adjective Identification and Multilingual Ranking
The intensity relationship that holds between scalar adjectives (e.g., n...

09/21/2023
A Vision-Centric Approach for Static Map Element Annotation
The recent development of online static map element (a.k.a. HD Map) cons...

10/04/2017
Analysis of NIST SP800-22 focusing on randomness of each sequence
NIST SP800-22 is a randomness test set applied for a set of sequences. A...
