Understanding and Predicting Human Label Variation in Natural Language Inference through Explanation

04/24/2023
by Nan-Jiang Jiang, et al.

Human label variation (Plank 2022), or annotation disagreement, exists in many natural language processing (NLP) tasks. To be robust and trusted, NLP models need to identify such variation and be able to explain it. To this end, we created the first ecologically valid explanation dataset with diverse reasoning, LiveNLI. LiveNLI contains annotators' highlights and free-text explanations for the label(s) of their choice for 122 English Natural Language Inference items, each with at least 10 annotations. We used its explanations for chain-of-thought prompting, and found there is still room for improvement in GPT-3's ability to predict label distributions with in-context learning.

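As a rough illustration of the prompting setup described above, the following sketch builds a chain-of-thought prompt from LiveNLI-style demonstrations (premise, hypothesis, explanation, label) and estimates a label distribution by sampling several completions. The function names (build_cot_prompt, predict_label_distribution) and the complete_fn wrapper around a GPT-3-style completion call are hypothetical, and the prompt format, sampling strategy, and label parsing are assumptions rather than the paper's exact method.

from collections import Counter
from typing import Callable, Dict, List, Tuple

LABELS = ("entailment", "neutral", "contradiction")


def build_cot_prompt(
    demos: List[Tuple[str, str, str, str]],  # (premise, hypothesis, explanation, label)
    premise: str,
    hypothesis: str,
) -> str:
    """Format in-context demonstrations whose free-text explanations precede the
    label, then append the test item for the model to complete."""
    parts = []
    for p, h, explanation, label in demos:
        parts.append(
            f"Premise: {p}\nHypothesis: {h}\n"
            f"Explanation: {explanation}\nLabel: {label}\n"
        )
    parts.append(f"Premise: {premise}\nHypothesis: {hypothesis}\nExplanation:")
    return "\n".join(parts)


def predict_label_distribution(
    complete_fn: Callable[[str], str],  # hypothetical wrapper around a GPT-3-style sampling call
    demos: List[Tuple[str, str, str, str]],
    premise: str,
    hypothesis: str,
    num_samples: int = 10,
) -> Dict[str, float]:
    """Sample several chain-of-thought completions and normalize the predicted
    labels into a distribution over the three NLI labels."""
    prompt = build_cot_prompt(demos, premise, hypothesis)
    counts: Counter = Counter()
    for _ in range(num_samples):
        completion = complete_fn(prompt)  # expected to end with "Label: <label>"
        tail = completion.lower().rsplit("label:", 1)[-1]
        for label in LABELS:
            if label in tail:
                counts[label] += 1
                break
    total = sum(counts.values()) or 1
    return {label: counts[label] / total for label in LABELS}

In practice, complete_fn would wrap an actual large-language-model completion endpoint sampled at a nonzero temperature, and the estimated distribution could then be compared against the ten or more human annotations available for each LiveNLI item.
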
research
05/25/2020

NILE : Natural Language Inference with Faithful Natural Language Explanations

The recent growth in the popularity and success of deep learning models ...
research
01/25/2023

Consistency is Key: Disentangling Label Variation in Natural Language Processing with Intra-Annotator Agreement

We commonly use agreement measures to assess the utility of judgements m...
research
09/02/2022

INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations

XAI with natural language processing aims to produce human-readable expl...
research
04/30/2020

WT5?! Training Text-to-Text Models to Explain their Predictions

Neural networks have recently achieved human-level performance on variou...
research
06/16/2023

No Strong Feelings One Way or Another: Re-operationalizing Neutrality in Natural Language Inference

Natural Language Inference (NLI) has been a cornerstone task in evaluati...
research
06/20/2023

The Ecological Fallacy in Annotation: Modelling Human Label Variation goes beyond Sociodemographics

Many NLP tasks exhibit human label variation, where different annotators...
research
10/24/2020

Measuring Association Between Labels and Free-Text Rationales

Interpretable NLP has taken increasing interest in ensuring that explan...
