Learning with Different Amounts of Annotation: From Zero to Many Labels

09/09/2021
by   Shujian Zhang, et al.

Training NLP systems typically assumes access to annotated data with a single human label per example. Given imperfect labeling by annotators and the inherent ambiguity of language, we hypothesize that a single label is not sufficient to learn the full spectrum of language interpretation. We explore new annotation distribution schemes that assign multiple labels per example to a small subset of the training examples. Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on a natural language inference task and an entity typing task, even when we simply train first on single-label data and then fine-tune on the multi-label examples. Extending the MixUp data augmentation framework, we propose a learning algorithm that can learn from training examples with different amounts of annotation (zero, one, or multiple labels). This algorithm efficiently combines signals from uneven training data and brings additional gains in low-annotation-budget and cross-domain settings. Together, our methods achieve consistent gains on both tasks, suggesting that distributing labels unevenly among training examples can be beneficial for many NLP tasks.
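The core idea of MixUp, which the abstract extends, is to interpolate both inputs and label distributions of two training examples. A minimal sketch of this interpolation step follows; the function and variable names are illustrative assumptions, not the paper's actual implementation, and the paper's handling of zero-label (unlabeled) examples is not shown here.

```python
import numpy as np

def mixup_example(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Standard MixUp step: convexly combine two inputs and their
    label distributions with a Beta-sampled mixing coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing weight in (0, 1)
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2
    return x_mix, y_mix

# A single-label example is a one-hot distribution; a multi-label example
# can be represented as the empirical distribution over annotator votes,
# so both kinds of examples mix under the same formula.
y_single = np.array([1.0, 0.0, 0.0])   # one annotator's label
y_multi = np.array([0.6, 0.4, 0.0])    # e.g. a 3-vs-2 annotator split
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 8))       # stand-ins for encoder features
x_mix, y_mix = mixup_example(x1, y_single, x2, y_multi, rng=rng)
```

Because both operands are valid probability distributions, the mixed label `y_mix` still sums to one, so the same cross-entropy training objective applies unchanged.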


Related research

01/02/2021
Substructure Substitution: Structured Data Augmentation for NLP
We study a family of data augmentation methods, substructure substitutio...

02/13/2021
Capturing Label Distribution: A Case Study in NLI
We study estimating inherent human disagreement (annotation label distri...

06/12/2022
Mining Multi-Label Samples from Single Positive Labels
Conditional generative adversarial networks (cGANs) have shown superior ...

04/17/2020
Incorporating Multiple Cluster Centers for Multi-Label Learning
Multi-label learning deals with the problem that each instance is associ...

07/24/2022
From Multi-label Learning to Cross-Domain Transfer: A Model-Agnostic Approach
In multi-label learning, a particular case of multi-task learning where ...

06/27/2023
"Is a picture of a bird a bird": Policy recommendations for dealing with ambiguity in machine vision models
Many questions that we ask about the world do not have a single clear an...

05/10/2023
Auditing Cross-Cultural Consistency of Human-Annotated Labels for Recommendation Systems
Recommendation systems increasingly depend on massive human-labeled data...
