Sparse Probability of Agreement

08/12/2022
by Jeppe Nørregaard, et al.

Measuring inter-annotator agreement is important for annotation tasks, but many metrics require a fully annotated dataset (or subset) in which all annotators annotate all samples. We define Sparse Probability of Agreement (SPA), which estimates the probability of agreement when not all annotator-item pairs are available. We show that, under some assumptions, SPA is an unbiased estimator, and we provide several weighting schemes for handling samples with different numbers of annotations, evaluated over a range of datasets.
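As a rough illustration of the idea, the sketch below estimates agreement from sparse annotations: for each item that received at least two labels, it averages pairwise agreement among the annotators who labeled it, then combines items under a chosen weighting scheme. The function name, the `uniform` and `pairs` weighting options, and the toy data are illustrative assumptions, not the exact estimator or weighting schemes defined in the paper.

```python
import itertools

def sparse_probability_of_agreement(annotations, weighting="pairs"):
    """Estimate agreement when not every annotator labels every item.

    `annotations` maps item_id -> list of labels from the annotators who
    happened to label that item. Items with fewer than two labels carry
    no agreement information and are skipped.

    `weighting` controls how items with different numbers of annotations
    are combined (illustrative choices, not the paper's definitions):
      - "uniform": every item with >= 2 labels counts equally.
      - "pairs":   items are weighted by their number of annotator pairs,
                   so heavily annotated items contribute more.
    """
    weighted_sum, total_weight = 0.0, 0.0
    for labels in annotations.values():
        if len(labels) < 2:
            continue  # no annotator pair to compare on this item
        pairs = list(itertools.combinations(labels, 2))
        item_agreement = sum(a == b for a, b in pairs) / len(pairs)
        weight = 1.0 if weighting == "uniform" else float(len(pairs))
        weighted_sum += weight * item_agreement
        total_weight += weight
    return weighted_sum / total_weight if total_weight else float("nan")


# Example: three items annotated by different subsets of annotators.
annotations = {
    "item_1": ["cat", "cat", "dog"],  # 3 annotations -> 3 pairs, agreement 1/3
    "item_2": ["cat", "cat"],         # 2 annotations -> 1 pair, agreement 1
    "item_3": ["dog"],                # 1 annotation, skipped
}
print(sparse_probability_of_agreement(annotations, weighting="uniform"))  # 0.667
print(sparse_probability_of_agreement(annotations, weighting="pairs"))    # 0.5
```

The `pairs` weighting gives heavily annotated items more influence, which is one plausible way to handle samples with different numbers of annotations; the paper evaluates several such schemes.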


Related research

06/15/2018
Measuring intergroup agreement and disagreement
This work is motivated by the need to assess the degree of agreement bet...

09/17/2022
DiPietro-Hazari Kappa: A Novel Metric for Assessing Labeling Quality via Annotation
Data is a key component of modern machine learning, but statistics for a...

09/08/2021
AgreementLearning: An End-to-End Framework for Learning with Multiple Annotators without Groundtruth
The annotation of domain experts is important for some medical applicati...

07/24/2019
Investigating Correlations of Inter-coder Agreement and Machine Annotation Performance for Historical Video Data
Video indexing approaches such as visual concept classification and pers...

03/07/2018
Sklar's Omega: A Gaussian Copula-Based Framework for Assessing Agreement
The statistical measurement of agreement is important in a number of fie...

06/07/2020
Overall Agreement for Multiple Raters with Replicated Measurements
Multiple raters are often needed to be used interchangeably in practice ...

12/15/2022
Measuring Annotator Agreement Generally across Complex Structured, Multi-object, and Free-text Annotation Tasks
When annotators label data, a key metric for quality assurance is inter-...
