Analyzing Dataset Annotation Quality Management in the Wild

07/16/2023
by Jan-Christoph Klie, et al.

Data quality is crucial for training accurate, unbiased, and trustworthy machine learning models and for their correct evaluation. Recent works, however, have shown that even popular datasets used to train and evaluate state-of-the-art models contain a non-negligible amount of erroneous annotations, biases, or annotation artifacts. While best practices and guidelines for annotation projects exist, to the best of our knowledge no large-scale analysis has yet been performed on how quality management is actually conducted when creating natural language datasets and whether these recommendations are followed. Therefore, we first survey and summarize recommended quality management practices for dataset creation as described in the literature and provide suggestions on how to apply them. Then, we compile a corpus of 591 scientific publications introducing text datasets and annotate it for quality-related aspects, such as annotator management, agreement, adjudication, or data validation. Using these annotations, we then analyze how quality management is conducted in practice. We find that a majority of the annotated publications apply good or very good quality management. However, we deem the effort of 30% of the works as only subpar. Our analysis also shows common errors, especially when using inter-annotator agreement and computing annotation error rates.
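
To make the two quantities mentioned above concrete, here is a minimal, self-contained Python sketch of how inter-annotator agreement (Cohen's kappa for two annotators) and a simple annotation error rate against adjudicated gold labels can be computed. The label values and toy data are hypothetical illustrations, not taken from the paper.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance, derived
    from each annotator's marginal label distribution.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical toy data: two annotators labeling five sentences.
ann_a = ["pos", "neg", "pos", "neu", "pos"]
ann_b = ["pos", "neg", "neu", "neu", "pos"]
print(f"kappa = {cohen_kappa(ann_a, ann_b):.2f}")  # kappa = 0.69

# Annotation error rate of annotator B against adjudicated gold labels.
gold = ["pos", "neg", "pos", "neu", "pos"]
error_rate = sum(b != g for b, g in zip(ann_b, gold)) / len(gold)
print(f"error rate = {error_rate:.0%}")  # error rate = 20%
```

Tested library implementations exist as well, e.g. sklearn.metrics.cohen_kappa_score, or NLTK's agreement module for multi-annotator coefficients such as Krippendorff's alpha.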

Related research:

06/05/2022 · Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future
Annotated data is an essential ingredient in natural language processing...

06/25/2021 · Semantic annotation for computational pathology: Multidisciplinary experience and best practice recommendations
Recent advances in whole slide imaging (WSI) technology have led to the ...

09/24/2020 · Best Practices for Managing Data Annotation Projects
Annotation is the labeling of data by human effort. Annotation is critic...

06/26/2023 · Transcending Traditional Boundaries: Leveraging Inter-Annotator Agreement (IAA) for Enhancing Data Management Operations (DMOps)
This paper presents a novel approach of leveraging Inter-Annotator Agree...

12/07/2021 · Towards a Shared Rubric for Dataset Annotation
When arranging for third-party data annotation, it can be hard to compar...

01/07/2021 · Dataset Definition Standard (DDS)
This document gives a set of recommendations to build and manipulate the...

10/13/2022 · ezCoref: Towards Unifying Annotation Guidelines for Coreference Resolution
Large-scale, high-quality corpora are critical for advancing research in...
