An Empirical Study into Annotator Agreement, Ground Truth Estimation, and Algorithm Evaluation

07/01/2013
by Thomas A. Lampert, et al.

Although agreement between annotators has been studied in the past from a statistical viewpoint, little work has attempted to quantify the extent to which this phenomenon affects the evaluation of computer vision (CV) object detection algorithms. Many researchers utilise ground truth (GT) in experiments, and more often than not this GT is derived from one annotator's opinion. How does this difference in opinion affect an algorithm's evaluation? Four examples of typical CV problems are chosen, and a methodology is applied to each to quantify the inter-annotator variance and to offer insight into the mechanisms behind agreement and the use of GT. It is found that when detecting linear objects, annotator agreement is very low. The agreement in object position, linear or otherwise, can be partially explained through basic image properties. Automatic object detectors are compared to annotator agreement, and a clear relationship is found to exist between the two. Several methods for calculating GTs from a number of annotations are applied, and the resulting differences in the performance of the object detectors are quantified. It is found that the rank of a detector is highly dependent upon the method used to form the GT. It is also found that although the STAPLE and LSML GT estimation methods appear to represent the mean of the performance measured using the individual annotations, these estimates tend to degrade when there are few annotations or when the annotations vary widely. Furthermore, one of the most commonly adopted annotation combination methods, consensus voting, accentuates more obvious features, which results in an overestimation of an algorithm's performance. Finally, it is concluded that for some datasets it may not be possible to state with any confidence that one algorithm outperforms another when evaluating on a single GT; a method for calculating confidence bounds is therefore discussed.
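As an illustration of the consensus-voting combination mentioned above, the following minimal sketch combines several binary annotation masks into a single GT mask by per-pixel majority vote. It is not the paper's implementation; the function name, the threshold parameter, and the toy masks are assumptions made purely for illustration.

```python
import numpy as np

def consensus_vote(annotations, threshold=0.5):
    """Combine binary annotation masks into one GT mask by consensus voting.

    A pixel is labelled as object if at least `threshold` of the annotators
    marked it as object (threshold=0.5 corresponds to simple majority voting).

    annotations : list of 2-D 0/1 arrays, all of the same shape.
    """
    stack = np.stack([a.astype(float) for a in annotations], axis=0)
    agreement = stack.mean(axis=0)  # per-pixel fraction of annotators voting "object"
    return (agreement >= threshold).astype(np.uint8)

# Toy usage: three (hypothetical) annotators labelling the same small image.
a1 = np.array([[1, 1, 0], [0, 1, 0]])
a2 = np.array([[1, 0, 0], [0, 1, 1]])
a3 = np.array([[1, 1, 0], [0, 0, 1]])
gt = consensus_vote([a1, a2, a3])
print(gt)  # only pixels marked by a majority of annotators remain
```

Raising the threshold towards 1.0 keeps only pixels on which all annotators agree; this is one way to see how consensus voting favours the more obvious features and can inflate measured detector performance.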


