Capturing Label Distribution: A Case Study in NLI

02/13/2021
by Shujian Zhang, et al.

We study estimating inherent human disagreement (the annotation label distribution) in the natural language inference task. Post-hoc smoothing of the predicted label distribution to match the expected label entropy is very effective: this simple manipulation can reduce KL divergence by almost half, yet it does not improve majority-label prediction accuracy or actually learn label distributions. To this end, we introduce a small number of examples with multiple references into training. We depart from the standard practice of collecting a single reference per training example, and find that collecting multiple references achieves better accuracy under a fixed annotation budget. Lastly, we provide rich analyses comparing these two methods for improving label distribution estimation.
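The post-hoc smoothing described above can be sketched as temperature scaling chosen so that the smoothed distribution's entropy matches a target. This is a minimal illustration, not the authors' exact procedure: the function name, the binary-search calibration, and the target entropy value of 0.8 nats are all assumptions for demonstration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def smooth_to_entropy(probs, target_entropy, lo=0.05, hi=20.0, iters=60):
    """Temperature-scale a predicted distribution so its entropy matches
    a target (e.g. the expected entropy of human label distributions).
    Entropy increases monotonically with temperature, so binary search works."""
    logits = np.log(np.clip(probs, 1e-12, 1.0))
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        scaled = np.exp(logits / mid)
        scaled /= scaled.sum()
        if entropy(scaled) < target_entropy:
            lo = mid  # too peaked: raise temperature to flatten
        else:
            hi = mid  # too flat: lower temperature
    scaled = np.exp(logits / ((lo + hi) / 2.0))
    return scaled / scaled.sum()

# Hypothetical overconfident 3-way NLI prediction (entail / neutral / contradict)
pred = np.array([0.90, 0.07, 0.03])
smoothed = smooth_to_entropy(pred, target_entropy=0.8)
```

Note that temperature scaling preserves the ranking of labels, which is consistent with the abstract's observation: the smoothed distribution moves closer to human label entropy (lower KL divergence to annotator distributions) while the majority-label prediction is unchanged.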


