What Can We Learn from Collective Human Opinions on Natural Language Inference Data?

10/07/2020
by Yixin Nie, et al.

Despite the subjective nature of many NLP tasks, most NLU evaluations have focused on using the majority label, presumably obtained with high agreement, as the ground truth. Far less attention has been paid to the distribution of human opinions. We collect ChaosNLI, a dataset with a total of 464,500 annotations, to study Collective HumAn OpinionS in oft-used NLI evaluation sets. The dataset is created by collecting 100 annotations per example for 3,113 examples in SNLI and MNLI and 1,532 examples in Abductive-NLI. Analysis reveals that: (1) high human disagreement exists in a noticeable number of examples in these datasets; (2) state-of-the-art models lack the ability to recover the distribution over human labels; and (3) models achieve near-perfect accuracy on the subset of data with a high level of human agreement, yet barely beat a random guess on the data with low levels of human agreement, which accounts for most of the common errors made by state-of-the-art models on these evaluation sets. This calls into question the validity of improving model performance on old metrics for the low-agreement portion of evaluation datasets. Hence, we argue for a detailed examination of human agreement in future data collection efforts, and for evaluating model outputs against the distribution over collective human opinions. The ChaosNLI dataset and experimental scripts are available at https://github.com/easonnie/ChaosNLI
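Evaluating a model against the distribution over human labels, rather than a single majority label, can be done by comparing the model's softmax output to the empirical annotation distribution with a divergence measure such as Jensen-Shannon divergence. A minimal sketch is below; the annotation counts and model probabilities are hypothetical, and this is an illustration of the general idea rather than the paper's exact evaluation script.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats; epsilon smoothing avoids log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by ln(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Hypothetical example: 100 annotations for one NLI item over the labels
# (entailment, neutral, contradiction), versus a model's softmax output.
human_counts = [62, 28, 10]
human_dist = [c / sum(human_counts) for c in human_counts]
model_dist = [0.91, 0.07, 0.02]  # illustrative model probabilities

score = js_divergence(human_dist, model_dist)
print(f"JS divergence to human label distribution: {score:.4f}")
```

A lower divergence means the model's uncertainty better matches the spread of human opinions, which is exactly the signal a majority-label accuracy metric discards.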
