Explainable Agreement through Simulation for Tasks with Subjective Labels

06/13/2018
by John Foley, et al.

The field of information retrieval often works with limited and noisy data in an attempt to classify documents into subjective categories, e.g., relevance, sentiment, and controversy. We typically quantify a notion of agreement to understand the difficulty of the labeling task, but when we present final results, we do so using measures that are unaware of agreement or the inherent subjectivity of the task. We propose using user simulation to understand the effect size of this noisy agreement data. By simulating truth and predictions, we can estimate the maximum scores a dataset can support: if a classifier appears to do better than a reasonable model of a human annotator, we cannot conclude that it is actually better; it may instead be learning noise present in the dataset. We present a brief case study on controversy detection which concludes that a commonly-used dataset has been exhausted: to advance the state of the art, more data must be gathered at the current level of label agreement in order to distinguish between techniques with confidence.
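
The simulation idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual method: it assumes a binary labeling task where observed labels match the hidden truth at a single, fixed agreement rate, and estimates the score ceiling a "perfect" predictor could reach against such noisy labels. The function name and parameters are illustrative.

```python
import random

def simulate_agreement_ceiling(n_items=10000, agreement=0.8, trials=50, seed=0):
    """Estimate the best score a dataset with the given inter-annotator
    agreement can support. A predictor that recovers the hidden truth
    perfectly is still scored against noisy observed labels, so its
    accuracy is capped near the agreement rate."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        # Hidden truth for each item (e.g., controversial or not).
        truth = [rng.random() < 0.5 for _ in range(n_items)]
        # Observed labels agree with the truth at the annotator agreement rate.
        observed = [t if rng.random() < agreement else not t for t in truth]
        # Score the "perfect" predictor (predictions == truth) against
        # the observed, noisy labels.
        correct = sum(p == o for p, o in zip(truth, observed))
        scores.append(correct / n_items)
    return sum(scores) / trials
```

With agreement of 0.8, the ceiling comes out near 0.8: a classifier scoring above this on the noisy labels is likely fitting label noise rather than the underlying task, which is the paper's argument for gathering more data rather than chasing higher scores.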

research · 10/07/2020
What Can We Learn from Collective Human Opinions on Natural Language Inference Data?
Despite the subjective nature of many NLP tasks, most NLU evaluations ha...

research · 03/29/2022
Agreement or Disagreement in Noise-tolerant Mutual Learning?
Deep learning has made many remarkable achievements in many fields but s...

research · 04/27/2020
Difficulty Translation in Histopathology Images
The unique nature of histopathology images opens the door to domain-spec...

research · 06/02/2021
Survey Equivalence: A Procedure for Measuring Classifier Accuracy Against Human Labels
In many classification tasks, the ground truth is either noisy or subjec...

research · 05/19/2021
When Deep Classifiers Agree: Analyzing Correlations between Learning Order and Image Statistics
Although a plethora of architectural variants for deep classification ha...

research · 04/01/2023
Evaluating the impact of an explainable machine learning system on the interobserver agreement in chest radiograph interpretation
We conducted a prospective study to measure the clinical impact of an ex...

research · 06/17/2016
Improving Agreement and Disagreement Identification in Online Discussions with A Socially-Tuned Sentiment Lexicon
We study the problem of agreement and disagreement detection in online d...
